From nwagner at iam.uni-stuttgart.de Tue Aug 1 02:24:37 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 01 Aug 2006 08:24:37 +0200 Subject: [Numpy-discussion] svn install failure on amd64 In-Reply-To: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <44CEF3A5.1010200@iam.uni-stuttgart.de> Angus McMorland wrote: > Hi people who know what's going on, > > I'm getting an install failure with the latest numpy from svn (revision > 2940). This is on an amd64 machine running python 2.4.4c0. > > The build halts at: > > compile options: '-Ibuild/src.linux-x86_64-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' > gcc: numpy/core/src/multiarraymodule.c > In file included from numpy/core/src/arrayobject.c:508, > from numpy/core/src/multiarraymodule.c:64: > numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': > numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' > undeclared (first use in this function) > numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared > identifier is reported only once > numpy/core/src/arraytypes.inc.src:2139: error: for each function it > appears in.) > In file included from numpy/core/src/arrayobject.c:508, > from numpy/core/src/multiarraymodule.c:64: > numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': > numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' > undeclared (first use in this function) > numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared > identifier is reported only once > numpy/core/src/arraytypes.inc.src:2139: error: for each function it > appears in.) 
> error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall > -Wstrict-prototypes -fPIC -Ibuild/src.linux-x86_64-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c > numpy/core/src/multiarraymodule.c -o > build/temp.linux-x86_64-2.4/numpy/core/src/multiarraymodule.o" failed > with exit status 1 > > Am I missing something or might this be a bug? > > Cheers, > > Angus. > I can build numpy on a 32-bit machine but it fails on a 64-bit machine. Travis, please can you have a look at this issue. In file included from numpy/core/src/arrayobject.c:508, from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' undeclared (first use in this function) numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared identifier is reported only once numpy/core/src/arraytypes.inc.src:2139: error: for each function it appears in.) In file included from numpy/core/src/arrayobject.c:508, from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' undeclared (first use in this function) numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared identifier is reported only once numpy/core/src/arraytypes.inc.src:2139: error: for each function it appears in.) 
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -Ibuild/src.linux-x86_64-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-x86_64-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 Nils From oliphant.travis at ieee.org Tue Aug 1 02:54:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 01 Aug 2006 00:54:54 -0600 Subject: [Numpy-discussion] svn install failure on amd64 In-Reply-To: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <44CEFABE.6060804@ieee.org> Angus McMorland wrote: > Hi people who know what's going on, > > I'm getting an install failure with the latest numpy from svn (revision > 2940). This is on an amd64 machine running python 2.4.4c0. > This was my fault. Revision 2931 mistakenly deleted a line from arrayobject.h; the change affected only 64-bit builds. This problem is corrected in revision 2941. -Travis From lcordier at point45.com Tue Aug 1 04:05:46 2006 From: lcordier at point45.com (Louis Cordier) Date: Tue, 1 Aug 2006 10:05:46 +0200 (SAST) Subject: [Numpy-discussion] numpy vs numarray In-Reply-To: <44CE3EF5.9030508@ieee.org> References: <44CE3EF5.9030508@ieee.org> Message-ID: > I listened to this and it looks like Sergio Ray is giving an intro class > on scientific computing with Python and has some concepts confused. We > should take this as a sign that we need to keep doing a good job of > educating people. I'm on UTC+02:00 so only just saw there have been a few posts. Basically my issue was with the claim that numarray is going to replace NumPy, and that the recording was only a few months old, sitting on the web where newcomers to Python will undoubtedly find it. 
I thought the proper thing to do would be to ask the 411 site to just append a footnote explaining that some of the info is out-dated. I just didn't want to do it without getting the group's opinion first. Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From klemm at phys.ethz.ch Tue Aug 1 07:25:02 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 01 Aug 2006 13:25:02 +0200 Subject: [Numpy-discussion] unexpected behaviour of numpy.var Message-ID: Hello, numpy.var exhibits a rather dangerous behaviour, as I have just noticed. In some cases, numpy.var calculates the variance, and in some cases the standard deviation (=square root of variance). Is this intended? I have to admit that I use numpy 0.9.6 at the moment. Has this been changed in more recent versions? Below is a sample session Python 2.4.3 (#1, May 8 2006, 18:35:42) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> a = [1,2,3,4,5] >>> numpy.var(a) 2.5 >>> numpy.std(a) 1.5811388300841898 >>> numpy.sqrt(2.5) 1.5811388300841898 >>> a1 = numpy.array([[1],[2],[3],[4],[5]]) >>> a1 array([[1], [2], [3], [4], [5]]) >>> numpy.var(a1) array([ 1.58113883]) >>> numpy.std(a1) array([ 1.58113883]) >>> a =numpy.array([1,2,3,4,5]) >>> numpy.std(a) 1.5811388300841898 >>> numpy.var(a) 1.5811388300841898 >>> numpy.__version__ '0.9.6' Hanno -- Hanno Klemm klemm at phys.ethz.ch From David.L.Goldsmith at noaa.gov Tue Aug 1 11:59:16 2006 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Tue, 01 Aug 2006 08:59:16 -0700 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: <44CF7A54.5050609@noaa.gov> Hi, Hanno. 
I ran your sample session in numpy 0.9.8 (on a Mac, just so you know; I don't yet have numpy installed on my Windows platform, and I don't have immediate access to a *nix box) and could not reproduce the problem, i.e., it does appear to have been fixed in 0.9.8. DG Hanno Klemm wrote: > Hello, > > numpy.var exhibits a rather dangereous behviour, as I have just > noticed. In some cases, numpy.var calculates the variance, and in some > cases the standard deviation (=square root of variance). Is this > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > this been changed in more recent versions? > > Below a sample session > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy >>>> a = [1,2,3,4,5] >>>> numpy.var(a) >>>> > 2.5 > >>>> numpy.std(a) >>>> > 1.5811388300841898 > >>>> numpy.sqrt(2.5) >>>> > 1.5811388300841898 > >>>> a1 = numpy.array([[1],[2],[3],[4],[5]]) >>>> a1 >>>> > array([[1], > [2], > [3], > [4], > [5]]) > >>>> numpy.var(a1) >>>> > array([ 1.58113883]) > >>>> numpy.std(a1) >>>> > array([ 1.58113883]) > >>>> a =numpy.array([1,2,3,4,5]) >>>> numpy.std(a) >>>> > 1.5811388300841898 > >>>> numpy.var(a) >>>> > 1.5811388300841898 > >>>> numpy.__version__ >>>> > '0.9.6' > > > > Hanno > > -- HMRD/ORR/NOS/NOAA From ivilata at carabos.com Tue Aug 1 12:02:01 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue, 01 Aug 2006 18:02:01 +0200 Subject: [Numpy-discussion] Int64 and string support for numexpr Message-ID: <44CF7AF9.2070200@carabos.com> Hi all, I'm attaching some patches that enable the current version of numexpr (r2142) to: 1. Handle int64 integers in addition to int32 (constants, variables and arrays). Python int objects are considered int32 if they fit in 32 bits. Python long objects and int objects that don't fit in 32 bits (for 64-bit platforms) are considered int64. 2. 
Handle string constants, variables and arrays (not Unicode), with support for comparison operators (==, !=, <, <=, >=, >). (This brings the old ``memsizes`` variable back.) String temporaries (necessary for other kinds of operations) are not supported. The patches also include test cases and some minor corrections (e.g. removing odd carriage returns in some lines in compile.py). There are three patches to ease their individual review: * numexpr-int64.diff only contains the changes for int64 support. * numexpr-str.diff only contains the changes for string support. * numexpr-int64str.diff contains all changes. The task has been somewhat difficult, but I think the result fits quite well in numexpr. So, what's your opinion about the patches? Are they worth integrating into the main branch? Thanks! :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: numexpr-int64str.tar.gz Type: application/x-gzip Size: 24891 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ndarray at mac.com Tue Aug 1 12:07:33 2006 From: ndarray at mac.com (Sasha) Date: Tue, 1 Aug 2006 12:07:33 -0400 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: I cannot reproduce your results, but I wonder if the following is right: >>> a = array([1,2,3,4,5]) >>> var(a[newaxis,:]) array([ 0., 0., 0., 0., 0.]) >>> a[newaxis,:].var() 2.0 >>> a[newaxis,:].var(axis=0) array([ 0., 0., 0., 0., 0.]) Are method and function supposed to have different defaults? It looks like the method defaults to variance over all axes while the function defaults to axis=0. 
>>> __version__ '1.0b2.dev2192' On 8/1/06, Hanno Klemm wrote: > > Hello, > > numpy.var exhibits a rather dangereous behviour, as I have just > noticed. In some cases, numpy.var calculates the variance, and in some > cases the standard deviation (=square root of variance). Is this > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > this been changed in more recent versions? > > Below a sample session > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> a = [1,2,3,4,5] > >>> numpy.var(a) > 2.5 > >>> numpy.std(a) > 1.5811388300841898 > >>> numpy.sqrt(2.5) > 1.5811388300841898 > >>> a1 = numpy.array([[1],[2],[3],[4],[5]]) > >>> a1 > array([[1], > [2], > [3], > [4], > [5]]) > >>> numpy.var(a1) > array([ 1.58113883]) > >>> numpy.std(a1) > array([ 1.58113883]) > >>> a =numpy.array([1,2,3,4,5]) > >>> numpy.std(a) > 1.5811388300841898 > >>> numpy.var(a) > 1.5811388300841898 > >>> numpy.__version__ > '0.9.6' > > > > Hanno > > -- > Hanno Klemm > klemm at phys.ethz.ch > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From davidgrant at gmail.com Tue Aug 1 12:56:15 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 09:56:15 -0700 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: <44CF7A54.5050609@noaa.gov> References: <44CF7A54.5050609@noaa.gov> Message-ID: I also couldn't reproduce it on my 0.9.8 on Linux. DG On 8/1/06, David L Goldsmith wrote: > > Hi, Hanno. I ran your sample session in numpy 0.9.8 (on a Mac, just so > you know; I don't yet have numpy installed on my Windows platform, and I > don't have immediate access to a *nix box) and could not reproduce the > problem, i.e., it does appear to have been fixed in 0.9.8. > > DG > > Hanno Klemm wrote: > > Hello, > > > > numpy.var exhibits a rather dangereous behviour, as I have just > > noticed. In some cases, numpy.var calculates the variance, and in some > > cases the standard deviation (=square root of variance). Is this > > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > > this been changed in more recent versions? > > > > Below a sample session > > > > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. 
> > > >>>> import numpy > >>>> a = [1,2,3,4,5] > >>>> numpy.var(a) > >>>> > > 2.5 > > > >>>> numpy.std(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.sqrt(2.5) > >>>> > > 1.5811388300841898 > > > >>>> a1 = numpy.array([[1],[2],[3],[4],[5]]) > >>>> a1 > >>>> > > array([[1], > > [2], > > [3], > > [4], > > [5]]) > > > >>>> numpy.var(a1) > >>>> > > array([ 1.58113883]) > > > >>>> numpy.std(a1) > >>>> > > array([ 1.58113883]) > > > >>>> a =numpy.array([1,2,3,4,5]) > >>>> numpy.std(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.var(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.__version__ > >>>> > > '0.9.6' > > > > > > > > Hanno > > > > > > > -- > HMRD/ORR/NOS/NOAA > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Aug 1 13:40:35 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 10:40:35 -0700 Subject: [Numpy-discussion] Graph class Message-ID: I have written my own graph class, it doesn't really do much, just has a few methods, it might do more later. Up until now it has just had one piece of data, an adjacency matrix, so it looks something like this: class Graph: def __init__(self, Adj): self.Adj = Adj I had the idea of changing Graph to inherit numpy.ndarray instead, so then I can just access itself directly rather than having to type self.Adj. Is this the right way to go about it? 
To inherit from numpy.ndarray? The reason I'm using a numpy array to store the graph by the way is the following: -Memory is not a concern (yet) so I don't need to use a sparse structure like a sparse array or a dictionary -I run a lot of sums on it, argmin, blanking out of certain rows and columns using fancy indexing, grabbing subgraphs using vector indexing -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Tue Aug 1 14:41:07 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 2 Aug 2006 03:41:07 +0900 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Hi David, For a graph, the fact that it's stored as a matrix, or stored as linked nodes, or dicts, etc, is an implementation detail. So from a classical OO point of view, inheritance is not what you want. Inheritance says "this is a kind of that". But a graph is not a kind of matrix. A matrix is merely one possible way to represent a graph. Many matrix operations don't even make sense on a graph (although a lot of them do...). Also you say "memory is not a concern (yet)", but maybe it will be later, and then you'll want to change the underlying representation. Ideally you will be able to do this in such a way that all your graph-using code works completely without modification. This will be harder to do if you derive from ndarray. Because to prevent existing code from breaking you have to duplicate ndarray's interface exactly, because you don't know which ndarray methods are being used by all existing Graph-using code. On the other hand, in the short term it's probably easier to derive from ndarray directly if all you need is something quick and dirty. But maybe then you don't even need to make a graph class. All you need is Graph = ndarray I've seen plenty of Matlab code that just uses raw matrices to represent graphs without introducing any new type or class. 
It may be that's good enough for what you want to do. Python is not really a "Classical OO" language, in the sense that there's no real data hiding, etc. Python's philosophy seems to be more like "whatever makes your life the easiest". So do what you think will make your life easiest based on the totality of your circumstances (including need for future maintenance). If memory is your only concern, then if/when it becomes an issue, a switch to a scipy.sparse matrix shouldn't be too bad if you want to just use the ndarray interface. --bill On 8/2/06, David Grant wrote: > I have written my own graph class, it doesn't really do much, just has a few > methods, it might do more later. Up until now it has just had one piece of > data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then I > can just access itself directly rather than having to type self.Adj. Is this > the right way to go about it? To inherit from numpy.ndarray? > > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and columns > using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From charlesr.harris at gmail.com Tue Aug 1 15:49:00 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 1 Aug 2006 13:49:00 -0600 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Hi David, I often have several thousand nodes in a graph, sometimes clustered into connected components. I suspect that using an adjacency matrix is an inefficient representation for graphs of that size while for smaller graphs the overhead of more complicated structures wouldn't be noticeable. Have you looked at the boost graph library? I don't like all their stuff but it is a good start with lots of code and a suitable license. Chuck On 8/1/06, David Grant wrote: > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? 
> > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Tue Aug 1 15:54:46 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Aug 2006 13:54:46 -0600 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: <44CFB186.8020802@ee.byu.edu> Sasha wrote: >I cannot reproduce your results, but I wonder if the following is right: > > > >>>>a = array([1,2,3,4,5]) >>>>var(a[newaxis,:]) >>>> >>>> >array([ 0., 0., 0., 0., 0.]) > > >>>>a[newaxis,:].var() >>>> >>>> >2.0 > > >>>>a[newaxis,:].var(axis=0) >>>> >>>> >array([ 0., 0., 0., 0., 0.]) > >Are method and function supposed to have different defaults? It looks >like the method defaults to variance over all axes while the function >defaults to axis=0. > > > They are supposed to have different defaults because the functional forms are largely for backward compatibility where axis=0 was the default. 
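[A note for later readers, not part of the original message: the 0.9.6 values in Hanno's session appear to correspond to the sample statistics (divisor N-1), while in current NumPy releases both the var/std functions and the matching methods default to the population statistics (ddof=0) computed over the flattened array. A quick sketch against a present-day NumPy:]

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
# Function and method now share the same defaults: axis=None, ddof=0.
assert np.var(a) == 2.0
assert a.var() == 2.0
# Hanno's 0.9.6 values match the sample statistics (ddof=1):
assert np.var(a, ddof=1) == 2.5
assert np.std(a, ddof=1) == np.sqrt(2.5)
# A row vector reduced along axis=0 has zero variance per column,
# matching Sasha's var(a[newaxis, :]) output:
assert np.var(a[np.newaxis, :], axis=0).tolist() == [0.0] * 5
```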
-Travis From davidgrant at gmail.com Tue Aug 1 16:31:35 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 13:31:35 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Thanks Bill, I think you are right, I think what I have is what I want (i.e. not extending ndarray). I guess, to go along with the "whatever makes your life the easiest" mantra, all I am really missing right now is the ability to access my Graph object like this g[blah] with square brackets and to do vector indexing and all that. What is the name of the double-underscored method that I should implement (and then call the underlying data structure's corresponding method)? I see __getitem__ and __getslice__... hmm, this could get messy. Maybe the way I have it is ok. Maybe I can live with G.Adj. Dave On 8/1/06, Bill Baxter wrote: > > Hi David, > > For a graph, the fact that it's stored as a matrix, or stored as > linked nodes, or dicts, etc, is an implementation detail. So from a > classical OO point of view, inheritance is not what you want. > Inheritance says "this is a kind of that". But a graph is not a kind > of matrix. A matrix is merely one possible way to represent a graph. > Many matrix operations don't even make sense on a graph (although a > lot of them do...). Also you say "memory is not a concern (yet)", but > maybe it will be later, and then you'll want to change the underlying > representation. Ideally you will be able to do this in such a way > that all your graph-using code works completely without modification. > This will be harder to do if you derive from ndarray. Because to > prevent existing code from breaking you have to duplicate ndarray's > interface exactly, because you don't know which ndarray methods are > being used by all existing Graph-using code. > > On the other hand, in the short term it's probably easier to derive > from ndarray directly if all you need is something quick and dirty. 
> But maybe then you don't even need to make a graph class. All you > need is > > Graph = ndarray > > I've seen plenty of Matlab code that just uses raw matrices to > represent graphs without introducing any new type or class. It may be > that's good enough for what you want to do. > > Python is not really a "Classical OO" language, in the sense that > there's.no real data hiding, etc. Python's philosophy seems to be > more like "whatever makes your life the easiest". So do what you > think will make your life easiest based on the totality of your > circumstances (including need for future maintenance). > > If memory is your only concern, then if/when it becomes and issue, a > switch to scipy.sparse matrix shouldn't be too bad if you want to just > use the ndarray interface. > > --bill > > > On 8/2/06, David Grant wrote: > > I have written my own graph class, it doesn't really do much, just has a > few > > methods, it might do more later. Up until now it has just had one piece > of > > data, an adjacency matrix, so it looks something like this: > > > > class Graph: > > def __init__(self, Adj): > > self.Adj = Adj > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so > then I > > can just access itself directly rather than having to type self.Adj. Is > this > > the right way to go about it? To inherit from numpy.ndarray? > > > > The reason I'm using a numpy array to store the graph by the way is the > > following: > > -Memory is not a concern (yet) so I don't need to use a sparse structure > > like a sparse array or a dictionary > > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns > > using fancy indexing, grabbing subgraphs using vector indexing > > > > -- > > David Grant > > http://www.davidgrant.ca > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. 
Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Aug 1 16:36:16 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 13:36:16 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: I actually just looked into the boost graph library and hit a wall. I basically had trouble running bjam on it. It complained about a missing build file or something like that. Anyways, for now I can live with a non-sparse implementation. This is mostly prototyping code for integration into a largely Java system (with some things written in C). So this will be ported to Java or C eventually. Whether or not I will need to prototype something that scales to thousands of nodes remains to be seen. Dave On 8/1/06, Charles R Harris wrote: > > Hi David, > > I often have several thousand nodes in a graph, sometimes clustered into > connected components. I suspect that using an adjacency matrix is an > inefficient representation for graphs of that size while for smaller graphs > the overhead of more complicated structures wouldn't be noticeable. Have you > looked at the boost graph library? I don't like all their stuff but it is a > good start with lots of code and a suitable license. > > Chuck > > On 8/1/06, David Grant wrote: > > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. 
Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? > > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Tue Aug 1 16:46:57 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 01 Aug 2006 13:46:57 -0700 Subject: [Numpy-discussion] a few problems and fixes Message-ID: <44CFBDC1.1000602@jpl.nasa.gov> Here are a few problems I had with numpy and scipy 1) Compiling scipy on Solaris requires running ld -G instead of gcc -shared. Apparently, gcc was not passing the correct args to my non-GNU ld. I could not figure out how to alter setup.py to link using ld instead of gcc so I had to link by hand. 
2) memmap has to be modified to remove "flush" on Windows. If calls to flush are allowed, Python (ActiveState) crashes at program exit. 3) savemat in scipy.io.mio had to be modified to remove the type check since I am using the class memmap which derives from ndarray. In savemat a check is made that the object being saved is an Array. Mathew From pau.gargallo at gmail.com Tue Aug 1 17:44:59 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue, 1 Aug 2006 23:44:59 +0200 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> you may be interested in this python graph library https://networkx.lanl.gov/ pau On 8/1/06, David Grant wrote: > I actually just looked into the boost graph library and hit a wall. I > basically had trouble running bjam on it. It complained about a missing > build file or something like that. > > Anyways, for now I can live with non-sparse implementation. This is mostly > prototyping code for integeration in to a largely Java system (with some > things written in C). So this will be ported to Java or C eventually. > Whether or not I will need to protoype something that scales to thousands of > nodes remains to be seen. > > Dave > > > On 8/1/06, Charles R Harris wrote: > > > > Hi David, > > > > I often have several thousand nodes in a graph, sometimes clustered into > connected components. I suspect that using an adjacency matrix is an > inefficient representation for graphs of that size while for smaller graphs > the overhead of more complicated structures wouldn't be noticeable. Have you > looked at the boost graph library? I don't like all their stuff but it is a > good start with lots of code and a suitable license. > > > > Chuck > > > > > > > > On 8/1/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > > > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. 
Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > > > class Graph: > > def __init__(self, Adj): > > self.Adj = Adj > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? > > > > The reason I'm using a numpy array to store the graph by the way is the > following: > > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > > > > > -- > > David Grant > > http://www.davidgrant.ca > > > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > -- > David Grant > http://www.davidgrant.ca > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From davidgrant at gmail.com Tue Aug 1 18:20:00 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 15:20:00 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> References: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> Message-ID: I saw that one as well. Looks neat! Too bad they rarely mention the word "graph" so they never come up on my google searches. I found them through del.icio.us by searching for python and graph. Dave On 8/1/06, Pau Gargallo wrote: > > you may be interested in this python graph library > https://networkx.lanl.gov/ > > pau > > On 8/1/06, David Grant wrote: > > I actually just looked into the boost graph library and hit a wall. I > > basically had trouble running bjam on it. It complained about a missing > > build file or something like that. > > > > Anyways, for now I can live with non-sparse implementation. This is > mostly > > prototyping code for integeration in to a largely Java system (with some > > things written in C). So this will be ported to Java or C eventually. > > Whether or not I will need to protoype something that scales to > thousands of > > nodes remains to be seen. > > > > Dave > > > > > > On 8/1/06, Charles R Harris wrote: > > > > > > Hi David, > > > > > > I often have several thousand nodes in a graph, sometimes clustered > into > > connected components. 
I suspect that using an adjacency matrix is an > > inefficient representation for graphs of that size while for smaller > graphs > > the overhead of more complicated structures wouldn't be noticeable. Have > you > > looked at the boost graph library? I don't like all their stuff but it > is a > > good start with lots of code and a suitable license. > > > > > > Chuck > > > > > > > > > > > > On 8/1/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > > > > > > > > > I have written my own graph class, it doesn't really do much, just has > a > > few methods, it might do more later. Up until now it has just had one > piece > > of data, an adjacency matrix, so it looks something like this: > > > > > > class Graph: > > > def __init__(self, Adj): > > > self.Adj = Adj > > > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so > then > > I can just access itself directly rather than having to type self.Adj. > Is > > this the right way to go about it? To inherit from numpy.ndarray? > > > > > > The reason I'm using a numpy array to store the graph by the way is > the > > following: > > > -Memory is not a concern (yet) so I don't need to use a sparse > structure > > like a sparse array or a dictionary > > > -I run a lot of sums on it, argmin, blanking out of certain rows and > > columns using fancy indexing, grabbing subgraphs using vector indexing > > > > > > > > > -- > > > David Grant > > > http://www.davidgrant.ca > > > > > > > > > ------------------------------------------------------------------------- > > > Take Surveys. Earn Cash. 
Influence the Future of IT > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > your > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > > > > > > > > > -- > > David Grant > > http://www.davidgrant.ca > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From torgil.svensson at gmail.com Tue Aug 1 18:45:38 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Wed, 2 Aug 2006 00:45:38 +0200 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: <44CFB186.8020802@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> Message-ID: > They are supposed to have different defaults because the functional > forms are largely for backward compatibility where axis=0 was the default. > > -Travis Isn't backwards compatibility what "oldnumeric" is for? +1 for consistent defaults. 
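The axis-default question in this thread is easiest to see with a concrete example. The sketch below runs against a modern NumPy, where the defaults were eventually unified to axis=None for both the method and the functional form; in the 1.0b-era functional forms discussed above, axis=0 was the default for backward compatibility:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# axis=None reduces over the flattened array (the method default):
print(a.var())        # variance of [1, 2, 3, 4] -> 1.25

# axis=0 reduces down each column (the old Numeric-era functional default):
print(a.var(axis=0))  # -> [1. 1.]
```

The same input yields a scalar under one default and an array under the other, which is why inconsistent defaults between the function and the method were considered a trap.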
From oliphant at ee.byu.edu Tue Aug 1 20:21:49 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Aug 2006 18:21:49 -0600 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: References: <44CFB186.8020802@ee.byu.edu> Message-ID: <44CFF01D.4030800@ee.byu.edu> Torgil Svensson wrote: >>They are supposed to have different defaults because the functional >>forms are largely for backward compatibility where axis=0 was the default. >> >>-Travis >> >> > >Isn't backwards compatibility what "oldnumeric" is for? > > > As this discussion indicates there has been a switch from numpy 0.9.8 to numpy 1.0b of how to handle backward compatibility. Instead of importing old names a new sub-package numpy.oldnumeric was created. This mechanism is incomplete in the sense that there are still some backward-compatible items in numpy such as defaults on the axis keyword for functions versus methods and you still have to make the changes that convertcode.py makes to the code to get it to work. I'm wondering about whether or not some additional effort should be placed in numpy.oldnumeric so that replacing Numeric with numpy.oldnumeric actually gives no compatibility issues (i.e. the only thing you have to change is replace imports with new names). In other words a simple array sub-class could be created that mimics the old Numeric array and the old functions could be created as well with the same arguments. The very same thing could be done with numarray. This would make conversion almost trivial. Then, the convertcode script could be improved to make all the changes that would take a oldnumeric-based module to a more modern numpy-based module. A similar numarray script could be developed as well. What do people think? Is it worth it? This could be a coding-sprint effort at SciPy. 
-Travis From stefan at sun.ac.za Wed Aug 2 07:35:38 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 2 Aug 2006 13:35:38 +0200 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: <20060802113538.GB21448@mentat.za.net> On Tue, Aug 01, 2006 at 06:21:49PM -0600, Travis Oliphant wrote: > I'm wondering about whether or not some additional effort should be > placed in numpy.oldnumeric so that replacing Numeric with > numpy.oldnumeric actually gives no compatibility issues (i.e. the only > thing you have to change is replace imports with new names). In > other words a simple array sub-class could be created that mimics the > old Numeric array and the old functions could be created as well with > the same arguments. > > The very same thing could be done with numarray. This would make > conversion almost trivial. > > Then, the convertcode script could be improved to make all the changes > that would take a oldnumeric-based module to a more modern numpy-based > module. A similar numarray script could be developed as well. > > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. This sounds like a very good idea to me. I hope that those of us who cannot attend SciPy 2006 can still take part in the coding sprints, be it via IRC or some other communications media. Cheers St?fan From bhendrix at enthought.com Wed Aug 2 13:46:12 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 02 Aug 2006 12:46:12 -0500 Subject: [Numpy-discussion] ANN: Python Enthought Edition 1.0.0 Released Message-ID: <44D0E4E4.4020304@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 
About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy SciPy IPython Enthought Tool Suite wxPython PIL mingw MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com 1.0.0 Release Notes ------------------------- A lot of work has gone into testing this release, and it is our most stable release to date, but there are a couple of caveats: * The generated documentation index entries are missing. The full-text search does work and the table of contents is complete, so this feature will be pushed to version 1.1.0. * IPython may cause problems when starting the first time if a previous version of IPython was run. If you see "WARNING: could not import user config", follow the directions which follow the warning. * Some users are reporting that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list (http://sourceforge.net/mail/?group_id=80706) for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute or report a bug, you can do so at https://svn.enthought.com/enthought. From oliphant.travis at ieee.org Wed Aug 2 14:06:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 12:06:45 -0600 Subject: [Numpy-discussion] Release Notes for 1.0 posted Message-ID: <44D0E9B5.3080001@ieee.org> http://www.scipy.org/ReleaseNotes/NumPy_1.0 Please correct problems and add to them as needed.
-Travis From torgil.svensson at gmail.com Wed Aug 2 14:31:15 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Wed, 2 Aug 2006 20:31:15 +0200 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. > > > -Travis Sounds like a good idea. This should make old code work while not imposing unnecessary restrictions on numpy due to backward compatibility. //Torgil From nvf at MIT.EDU Wed Aug 2 15:18:12 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Wed, 2 Aug 2006 15:18:12 -0400 Subject: [Numpy-discussion] Release Notes for 1.0 posted In-Reply-To: References: Message-ID: > Message: 2 > Date: Wed, 02 Aug 2006 12:06:45 -0600 > From: Travis Oliphant > Subject: [Numpy-discussion] Release Notes for 1.0 posted > To: numpy-discussion > Message-ID: <44D0E9B5.3080001 at ieee.org> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > http://www.scipy.org/ReleaseNotes/NumPy_1.0 > > Please correct problems and add to them as needed. > > -Travis > What's not clear to me upon reading this page is what diff set this is describing. Are these the changes between 0.9.8 and 1.0b1? Especially if this page is to be updated with each release, we should be explicit about what changed when. This is a helpful document. Thanks.
Take care, Nick From loredo at astro.cornell.edu Wed Aug 2 15:46:57 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Wed, 2 Aug 2006 15:46:57 -0400 Subject: [Numpy-discussion] Release Notes for 1.0 posted In-Reply-To: References: Message-ID: <1154548017.44d101312dabd@astrosun2.astro.cornell.edu> > http://www.scipy.org/ReleaseNotes/NumPy_1.0 > > Please correct problems and add to them as needed. This is incredibly helpful---quite a few things I wasn't aware of. Many, many thanks! -Tom Loredo ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From st at sigmasquared.net Wed Aug 2 15:52:05 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed, 02 Aug 2006 21:52:05 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users Message-ID: <44D10265.5010103@sigmasquared.net> Hi A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" to ask for help on his installation problems. How can I a) contact him in order to ask him to post his questions on the mailing lists, and b) most easily revert changes to wiki-pages? Any hint would be appreciated. Regards, Stephan From davidlinke at tiscali.de Wed Aug 2 16:13:28 2006 From: davidlinke at tiscali.de (David) Date: Wed, 02 Aug 2006 22:13:28 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10265.5010103@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> Message-ID: <44D10768.8030005@tiscali.de> Stephan Tolksdorf wrote: > Hi > > A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" > to ask for help on his installation problems. How can I > a) contact him in order to ask him to post his questions on the mailing > lists, and You cannot find out his email address as a normal wiki-user. Alternatively, you may add a note at the top of the wiki-page. > b) most easily revert changes to wiki-pages? 
"Normally", you will have a revert link at each version (if you have 'admin'-permission) at the page-info: http://new.scipy.org/Wiki/Installing_SciPy/Windows?action=info I assume that the people listed on http://new.scipy.org/Wiki/Installing_SciPy/EditorsGroup have this 'admin' permission. Maybe you can be added. Regards, David > Any hint would be appreciated. > > Regards, > Stephan From robert.kern at gmail.com Wed Aug 2 16:14:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 15:14:34 -0500 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10265.5010103@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> Message-ID: Stephan Tolksdorf wrote: > Hi > > A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" > to ask for help on his installation problems. How can I > a) contact him in order to ask him to post his questions on the mailing > lists, and Not sure. > b) most easily revert changes to wiki-pages? Click the "info" button on the page. There will be a list of revisions. Old revisions will have a "revert" link in the right-hand column. I believe (although I recommend checking the MoinMoin documentation before trying this) that clicking that link will revert the text back to whatever it was at that revision. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mark at mitre.org Wed Aug 2 16:51:07 2006 From: mark at mitre.org (Mark Heslep) Date: Wed, 02 Aug 2006 16:51:07 -0400 Subject: [Numpy-discussion] Fastest binary threshold? 
Message-ID: <44D1103B.9000808@mitre.org> I need a binary threshold and numpy.where() seems very slow on numpy 0.9.9.2800: python -m timeit -n 10 -s "import numpy as n;a=n.ones((512,512), n.uint8)*129" "a_bin=n.where( a>128, 128,0)" 10 loops, best of 3: 37.9 msec per loop I'm thinking the conversion of the min, max constants from python ints to n.uint8 might be slowing it down? Is there a better way? Scipy is also an option. I've searched the list quickly and nothing jumps out. For comparison I've got some ctypes-wrapped OpenCV code (that I'd like to avoid) doing the same thing in < 1 msec: CV images here are unsigned 8-bit as above: python -m timeit -n 50 -s "import cv;sz=cv.cvSize(512,512);a=cv.cvCreateImage(sz, 8, 1); a_bin=cv.cvCreateImage(sz,8,1)" "cv.cvThreshold(a, a_bin, float(128), float(255), cv.CV_THRESH_BINARY )" 50 loops, best of 3: 348 usec per loop And with the Intel IPP optimizations turned on, < 0.1 msec: python -m timeit -n 50 -s "import cv; sz=cv.cvSize(512,512); a=cv.cvCreateImage(sz, 8, 1); a_bin=cv.cvCreateImage(sz,8,1)" "cv.cvThreshold(a, a_bin, float(128), float(255), cv.CV_THRESH_BINARY )" 50 loops, best of 3: 59.5 usec per loop Regards, Mark From strawman at astraw.com Wed Aug 2 16:53:48 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 02 Aug 2006 13:53:48 -0700 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10768.8030005@tiscali.de> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> Message-ID: <44D110DC.7070901@astraw.com> David wrote: >Stephan Tolksdorf wrote: > > >>Hi >> >>A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" >>to ask for help on his installation problems. How can I >>a) contact him in order to ask him to post his questions on the mailing >>lists, and >> >> > >You cannot find out his email address as a normal wiki-user. >Alternatively, you may add a note at the top of the wiki-page.
> > > >>b) most easily revert changes to wiki-pages? >> >> > >"Normally", you will have a revert link at each version (if you have >'admin'-permission) at the page-info: >http://new.scipy.org/Wiki/Installing_SciPy/Windows?action=info > >I assume that the people listed on >http://new.scipy.org/Wiki/Installing_SciPy/EditorsGroup >have this 'admin' permission. Maybe you can be added. > > Stephan, I just added you to http://scipy.org/Wiki/EditorsGroup , so you should now have "revert" among your options in the "get info" page. The changes by jlc46, I agree, don't look like what we want up there in the long term. However, they do look like valid issues (s)he had while trying to follow the instructions on that page. Not being much of a Windows user myself, I have no idea what the issues involved are, but perhaps before simply reverting them you could get to the bottom of the issue? From st at sigmasquared.net Wed Aug 2 17:42:52 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed, 02 Aug 2006 23:42:52 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D110DC.7070901@astraw.com> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> Message-ID: <44D11C5C.8090700@sigmasquared.net> > The changes by jlc46, I agree, don't look like what we want up there in > the long term. However, they do look like valid issues (s)he had while > trying to follow the instructions on that page. Not being much of a > Windows user myself, I have no idea what the issues involved are, but > perhaps before simply reverting them you could get to the bottom of the > issue? I think these questions should be posted on the mailing list so that everybody gets a chance to answer them, not only the people subscribing to the particular Wiki page. Regarding the installation problems on Windows: A while ago I put some effort into writing a patch to correct a few build issues on windows. 
Due to unfortunate reasons nobody tried to apply the patch until part of it was obsoleted by changes of David M. Cooke to system_info.py. As I didn't keep track of David's changes to the build system I asked him for advice regarding the integration of my patch, but I never got a reply. Seems like I will have to bite the bullet and replicate some of my earlier efforts... Regards, Stephan From tim.hochberg at ieee.org Wed Aug 2 18:09:52 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 02 Aug 2006 15:09:52 -0700 Subject: [Numpy-discussion] Int64 and string support for numexpr In-Reply-To: <44CF7AF9.2070200@carabos.com> References: <44CF7AF9.2070200@carabos.com> Message-ID: <44D122B0.9030909@ieee.org> Ivan Vilata i Balaguer wrote: > Hi all, > > I'm attaching some patches that enable the current version of numexpr > (r2142) to: > > 1. Handle int64 integers in addition to int32 (constants, variables and > arrays). Python int objects are considered int32 if they fit in 32 > bits. Python long objects and int objects that don't fit in 32 bits > (for 64-bit platforms) are considered int64. > > 2. Handle string constants, variables and arrays (not Unicode), with > support for comparison operators (==, !=, <, <=, >=, >). (This > brings the old ``memsizes`` variable back.) String temporaries > (necessary for other kinds of operations) are not supported. > > The patches also include test cases and some minor corrections (e.g. > removing odd carriage returns in some lines in compile.py). There are > three patches to ease their individual review: > > * numexpr-int64.diff only contains the changes for int64 support. > * numexpr-str.diff only contains the changes for string support. > * numexpr-int64str.diff contains all changes. > > The task has been somehow difficult, but I think the result fits quite > well in numexpr. So, what's your opinion about the patches? Are they > worth integrating into the main branch? Thanks! 
> Unfortunately, I'm in the process of moving everything over to a new box, so my build environment is all broken and I can't try them out right now. However, just so you don't think everyone is ignoring you, I figured I'd reply. What use cases do you have in mind for the string comparison stuff? Strings are one of those features of numpy that I've personally never seen a use for, so I'm not that enthusiastic about them in numexpr, particularly since it sounds like support is likely to only be partial. However, feel free to convince me otherwise. Or just convince David Cooke ;-) -tim > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From cookedm at physics.mcmaster.ca Wed Aug 2 18:33:24 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Wed, 2 Aug 2006 18:33:24 -0400 Subject: [Numpy-discussion] Graph class In-Reply-To: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> References: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> Message-ID: <20060802183324.3e9b0e29@arbutus.physics.mcmaster.ca> On Tue, 1 Aug 2006 23:44:59 +0200 "Pau Gargallo" wrote: > you may be interested in this python graph library > https://networkx.lanl.gov/ There's also http://wiki.python.org/moin/PythonGraphApi, which lists a bunch. It's the result of a discussion on c.l.py a few years ago about trying to come up with a standard API for graphs. I don't believe they came up with anything, but that page contains ideas to consider. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed Aug 2 18:36:37 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 2 Aug 2006 18:36:37 -0400 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D11C5C.8090700@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> Message-ID: <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> On Wed, 02 Aug 2006 23:42:52 +0200 Stephan Tolksdorf wrote: > > The changes by jlc46, I agree, don't look like what we want up there in > > the long term. However, they do look like valid issues (s)he had while > > trying to follow the instructions on that page. Not being much of a > > Windows user myself, I have no idea what the issues involved are, but > > perhaps before simply reverting them you could get to the bottom of the > > issue? > > I think these questions should be posted on the mailing list so that > everybody gets a chance to answer them, not only the people subscribing > to the particular Wiki page. 
> > Regarding the installation problems on Windows: A while ago I put some > effort into writing a patch to correct a few build issues on windows. > Due to unfortunate reasons nobody tried to apply the patch until part of > it was obsoleted by changes of David M. Cooke to system_info.py. As I > didn't keep track of David's changes to the build system I asked him for > advice regarding the integration of my patch, but I never got a reply. > Seems like I will have to bite the bullet and replicate some of my > earlier efforts... I updated that patch to work (it's in ticket #114, btw, for those following along), and integrated it last week. Please give the current svn a try to see how it works. I had it done mid-July, but I guess you didn't get the Trac email? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From st at sigmasquared.net Wed Aug 2 19:00:06 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu, 03 Aug 2006 01:00:06 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> Message-ID: <44D12E76.7080801@sigmasquared.net> Hi David, > I updated that patch to work (it's in ticket #114, btw, for those following > along), and integrated it last week. Please give the current svn a try to see > how it works. > I'm really sorry I overlooked your changes. Thanks a lot for your efforts. I will try the various windows builds in the next days and address the remaining issues. > I had it done mid-July, but I guess you didn't get the Trac email? I haven't received any email notfication from Trac. Is there something I can do about the missing notifications? 
Stephan From cookedm at physics.mcmaster.ca Wed Aug 2 19:22:25 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 2 Aug 2006 19:22:25 -0400 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D12E76.7080801@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> <44D12E76.7080801@sigmasquared.net> Message-ID: <20060802192225.5b7efb42@arbutus.physics.mcmaster.ca> On Thu, 03 Aug 2006 01:00:06 +0200 Stephan Tolksdorf wrote: > I haven't received any email notfication from Trac. Is there something I > can do about the missing notifications? When logged in, check "Settings" (upper-right corner, besides Logout). Make sure your email address is in there. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Wed Aug 2 19:24:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 18:24:57 -0500 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D12E76.7080801@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> <44D12E76.7080801@sigmasquared.net> Message-ID: Stephan Tolksdorf wrote: > Hi David, > >> I updated that patch to work (it's in ticket #114, btw, for those following >> along), and integrated it last week. Please give the current svn a try to see >> how it works. > > I'm really sorry I overlooked your changes. Thanks a lot for your > efforts. I will try the various windows builds in the next days and > address the remaining issues. 
> > > I had it done mid-July, but I guess you didn't get the Trac email? > > I haven't received any email notification from Trac. Is there something I > can do about the missing notifications? You can sign up for the numpy-tickets mailing list. http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Wed Aug 2 20:45:22 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 3 Aug 2006 02:45:22 +0200 Subject: [Numpy-discussion] Fastest binary threshold? In-Reply-To: <44D1103B.9000808@mitre.org> References: <44D1103B.9000808@mitre.org> Message-ID: <20060803004522.GC6682@mentat.za.net> On Wed, Aug 02, 2006 at 04:51:07PM -0400, Mark Heslep wrote: > I need a binary threshold and numpy.where() seems very slow on numpy > 0.9.9.2800: > > python -m timeit -n 10 -s "import numpy as n;a=n.ones((512,512), > n.uint8)*129" > "a_bin=n.where( a>128, 128,0)" > 10 loops, best of 3: 37.9 msec per loop Using numpy indexing brings the time down by a factor of 10 or so: In [46]: timeit b = N.where(a>128,128,0) 10 loops, best of 3: 27.1 ms per loop In [47]: timeit b = (a > 128).astype(N.uint8) * 128 100 loops, best of 3: 3.45 ms per loop Binary thresholding can be added to ndimage easily, if further speed improvement is needed. Regards Stéfan From haase at msg.ucsf.edu Thu Aug 3 00:31:38 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 21:31:38 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' Message-ID: <44D17C2A.2050601@msg.ucsf.edu> Hi! I just finished maybe a total of 5 hours tracking down a nasty bug. So I thought I would share this: I'm keeping a version of (old) SciPy's 'plt' module around. (I know about matplotlib - anyway - ...) I changed the code some time ago from Numeric to numarray - no problem.
Now I switched to numpy ... and suddenly the zooming does not work anymore: it always zooms to "full view". Finally I traced the problem down to a utility function: "is_number" - it is simply implemented as

def is_number(val):
    return (type(val) in [type(0.0),type(0)])

As I said - now I finally saw that I always got False since the type of my number (0.025) is <type 'float64scalar'> and that's neither <type 'float'> nor <type 'int'>. OK - how should this have been done right ? Anyway, I'm excited about the new numpy and am looking forward to its progress. Thanks, Sebastian Haase From robert.kern at gmail.com Thu Aug 3 00:43:44 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 23:43:44 -0500 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi! > I just finished maybe a total of 5 hours tracking down a nasty bug. > So I thought I would share this: > I'm keeping a version of (old) SciPy's 'plt' module around. > (I know about matplotlib - anyway - ...) > I changed the code some time ago from Numeric to numarray - no problem. > Now I switched to numpy ... and suddenly the zooming does not work > anymore: it always zooms to "full view". > > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > > As I said - now I finally saw that I always got > False since the type of my number (0.025) is > <type 'float64scalar'> > and that's neither <type 'float'> nor <type 'int'> > > OK - how should this have been done right ? It depends on how is_number() is actually used. Probably the best thing to do would be to take a step back and reorganize whatever is calling it to not require specific types. Quick-and-dirty: use isinstance() instead since float64scalar inherits from float. However, float32scalar does not, so this is not a real solution, just a hack to get you on your merry way.
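The isinstance() caveat above (float64scalar inherits from float, but float32scalar does not) was later addressed outside this thread: Python 2.6 introduced the abstract base classes in the `numbers` module, and recent NumPy releases register their scalar types with them. A minimal sketch using those later tools — not code from this thread, and the NumPy registration claim applies to modern versions:

```python
import numbers

def is_number(val):
    # numbers.Number is the root of Python's numeric tower:
    # plain ints, floats and complex values all pass, strings do not.
    # Recent NumPy releases register their scalar types
    # (float32, float64, ...) with these ABCs, so they pass too.
    return isinstance(val, numbers.Number)

print(is_number(0.025))   # True
print(is_number(12))      # True
print(is_number('12'))    # False
```

Like the float()-based duck-typing suggested later in the thread, this avoids enumerating concrete types by hand.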
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Thu Aug 3 00:55:34 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 21:55:34 -0700 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: <44D181C6.5060605@msg.ucsf.edu> Travis Oliphant wrote: > Torgil Svensson wrote: > >>> They are supposed to have different defaults because the functional >>> forms are largely for backward compatibility where axis=0 was the default. >>> >>> -Travis >>> >>> >> Isn't backwards compatibility what "oldnumeric" is for? >> >> >> > > As this discussion indicates there has been a switch from numpy 0.9.8 to > numpy 1.0b of how to handle backward compatibility. Instead of > importing old names a new sub-package numpy.oldnumeric was created. > This mechanism is incomplete in the sense that there are still some > backward-compatible items in numpy such as defaults on the axis keyword > for functions versus methods and you still have to make the changes that > convertcode.py makes to the code to get it to work. > > I'm wondering about whether or not some additional effort should be > placed in numpy.oldnumeric so that replacing Numeric with > numpy.oldnumeric actually gives no compatibility issues (i.e. the only > thing you have to change is replace imports with new names). In > other words a simple array sub-class could be created that mimics the > old Numeric array and the old functions could be created as well with > the same arguments. > > The very same thing could be done with numarray. This would make > conversion almost trivial. 
> > Then, the convertcode script could be improved to make all the changes > that would take a oldnumeric-based module to a more modern numpy-based > module. A similar numarray script could be developed as well. > > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. > > > -Travis Hi, Just as thought of cautiousness: If people actually get "too much" encouraged to just always say " from numpy.oldnumeric import * " or as suggested maybe soon also something like " from numpy.oldnumarray import * " - could this not soon lead to a great state of confusion when later people on this mailing list ask questions and nobody really knows which of the submodules they are referring to !? Recently someone (Torgil Svensson) here suggested to unify the default argument between a method and a function - I think the discussion was about numpy.var and it's "axis" argument. I would be a clear +1 on unifying these and have a clean design of numpy. Consequently the old way of different defaults should be absorbed by the oldnumeric sub module. All I'm saying then is that this could cause confusion later on - and therefore the whole idea of "easy backwards compatibility" should be qualified by encouraging people to adopt the most problematic changes (like new default values) rather sooner than later. I'm hoping that numpy will find soon an increasingly broader acceptance in the whole Python community (and the entire scientific community for that matter ;-) ). Thanks for all your work, Sebastian Haase From oliphant.travis at ieee.org Thu Aug 3 01:02:39 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 23:02:39 -0600 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: <44D1836F.6070809@ieee.org> Sebastian Haase wrote: > Hi! > I just finished maybe a total of 5 hours tracking down a nasty bug. 
> > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > > As I said - now I finally saw that I always got > False since the type of my number (0.025) is > <type 'float64scalar'> > and that's neither <type 'float'> nor <type 'int'> > > OK - how should this have been done right ? > > Code that depends on specific types like this is going to be hard to maintain in Python because many types could reasonably act like a number. I do see code like this pop up from time to time and it will bite you more with NumPy (which has a whole slew of scalar types). The scalar-types are in a hierarchy and so you could replace the code with

def is_number(val):
    return isinstance(val, (int, float, numpy.number))

But, this will break with other "scalar-types" that it really should work with. It's best to look at what is calling is_number and think about what it wants to do with the object and just try it and catch the exception. -Travis From haase at msg.ucsf.edu Thu Aug 3 01:16:59 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 22:16:59 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D1836F.6070809@ieee.org> References: <44D17C2A.2050601@msg.ucsf.edu> <44D1836F.6070809@ieee.org> Message-ID: <44D186CB.6000307@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi! >> I just finished maybe a total of 5 hours tracking down a nasty bug. >> >> Finally I traced the problem down to a utility function: >> "is_number" - it is simply implemented as >> def is_number(val): >> return (type(val) in [type(0.0),type(0)]) >> >> As I said - now I finally saw that I always got >> False since the type of my number (0.025) is >> <type 'float64scalar'> >> and that's neither <type 'float'> nor <type 'int'> >> >> OK - how should this have been done right ?
>> >> > > Code that depends on specific types like this is going to be hard to > maintain in Python because many types could reasonably act like a > number. I do see code like this pop up from time to time and it will > bite you more with NumPy (which has a whole slew of scalar types). > > The scalar-types are in a hierarchy and so you could replace the code with > > def is_number(val): > return isinstance(val, (int, float, numpy.number)) > > But, this will break with other "scalar-types" that it really should > work with. It's best to look at what is calling is_number and think > about what it wants to do with the object and just try it and catch the > exception. > > -Travis > Thanks, I just found numpy.isscalar() and numpy.issctype() ? These sound like they would do what I need - what is the difference between the two ? (I found that issctype worked OK while isscalar gave some exception in some cases !? ) - Sebastian From aisaac at american.edu Thu Aug 3 01:42:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 3 Aug 2006 01:42:05 -0400 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44D181C6.5060605@msg.ucsf.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu><44D181C6.5060605@msg.ucsf.edu> Message-ID: On Wed, 02 Aug 2006, Sebastian Haase apparently wrote: > Recently someone (Torgil Svensson) here suggested to unify > the default argument between a method and a function > - I think the discussion was about numpy.var and it's > "axis" argument. I would be a clear +1 on unifying these > and have a clean design of numpy. Consequently the old way > of different defaults should be absorbed by the oldnumeric > sub module. +1 I think this consistency is *really* important for the easy acceptance of numpy by new users. (For a user's perspective, I also think is is just good design.) I expect many new users to be "burned" by this inconsistency. 
However, as an intermediate run (say 1 year) transition measure to the consistent use, I would be comfortable with the numpy functions requiring an axis argument. One user's view, Alan Isaac From pruggera at gmail.com Thu Aug 3 01:41:13 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Wed, 2 Aug 2006 22:41:13 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: Message-ID: A variation of the proposed convolve routine is very fast:

regular python took: 1.150214 sec.
numpy mean slice took: 2.427513 sec.
numpy convolve took: 0.546854 sec.
numpy convolve noloop took: 0.058611 sec.

Code:

# mean of n values within an array
import numpy, time

def nmean(list,n):
    a = []
    for i in range(1,len(list)+1):
        start = i-n
        divisor = n
        if start < 0:
            start = 0
            divisor = i
        a.append(sum(list[start:i])/divisor)
    return a

t = [1.0*i for i in range(1400)]
start = time.clock()
for x in range(100):
    reg = nmean(t,50)
print "regular python took: %f sec."%(time.clock() - start)

def numpy_nmean(list,n):
    a = numpy.empty(len(list),dtype=float)
    for i in range(1,len(list)+1):
        start = i-n
        if start < 0:
            start = 0
        a[i-1] = list[start:i].mean(0)
    return a

t = numpy.arange(0,1400,dtype=float)
start = time.clock()
for x in range(100):
    npm = numpy_nmean(t,50)
print "numpy mean slice took: %f sec."%(time.clock() - start)

def numpy_nmean_conv(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    for i in range(0,len(list)):
        if i < n :
            a[i] /= i + 1
        else :
            a[i] /= n
    return a[:len(list)]

t = numpy.arange(0,1400,dtype=float)
start = time.clock()
for x in range(100):
    npc = numpy_nmean_conv(t,50)
print "numpy convolve took: %f sec."%(time.clock() - start)

def numpy_nmean_conv_nl(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    for i in range(n):
        a[i] /= i + 1
    a[n:] /= n
    return a[:len(list)]

t = numpy.arange(0,1400,dtype=float)
start = time.clock()
for x in range(100):
    npn = numpy_nmean_conv_nl(t,50)
print "numpy convolve noloop took: %f sec."%(time.clock() - start)

numpy.testing.assert_equal(reg,npm)
numpy.testing.assert_equal(reg,npc)
numpy.testing.assert_equal(reg,npn)

On 7/29/06, David Grant wrote:
>
> On 7/29/06, Charles R Harris wrote:
> >
> > Hmmm,
> >
> > I rewrote the subroutine a bit.
> >
> > def numpy_nmean(list,n):
> >     a = numpy.empty(len(list),dtype=float)
> >     b = numpy.cumsum(list)
> >     for i in range(0,len(list)):
> >         if i < n :
> >             a[i] = b[i]/(i+1)
> >         else :
> >             a[i] = (b[i] - b[i-n])/(i+1)
> >     return a
> >
> > and got
> >
> > regular python took: 0.750000 sec.
> > numpy took: 0.380000 sec.
>
> I got rid of the for loop entirely. Usually this is the thing to do, at
> least this will always give speedups in Matlab and also in my limited
> experience with Numpy/Numeric:
>
> def numpy_nmean2(list,n):
>     a = numpy.empty(len(list),dtype=float)
>     b = numpy.cumsum(list)
>     c = concatenate((b[n:],b[:n]))
>     a[:n] = b[:n]/(i+1)
>     a[n:] = (b[n:] - c[n:])/(i+1)
>     return a
>
> I got no noticeable speedup from doing this which I thought was pretty
> amazing. I even profiled all the functions, the original, the one written by
> Charles, and mine, using hotspot just to make sure nothing funny was going
> on. I guess plain old Python can be better than you'd expect in certain
> situations.
>
> --
> David Grant

From oliphant.travis at ieee.org Thu Aug 3 01:43:48 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 23:43:48 -0600 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D186CB.6000307@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> <44D1836F.6070809@ieee.org> <44D186CB.6000307@msg.ucsf.edu> Message-ID: <44D18D14.8030609@ieee.org> Sebastian Haase wrote: > Thanks, > I just found > numpy.isscalar() and numpy.issctype() ? > These sound like they would do what I need - what is the difference > between the two ? > Oh, yeah.
numpy.issctype works with type objects numpy.isscalar works with instances Neither of them distinguishes between scalars and "numbers." If you get errors with isscalar it would be nice to know what they are. -Travis From rvandermerwe at ska.ac.za Thu Aug 3 05:02:11 2006 From: rvandermerwe at ska.ac.za (Rudolph van der Merwe) Date: Thu, 3 Aug 2006 11:02:11 +0200 Subject: [Numpy-discussion] Confusion re. version numbers Message-ID: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> Is the current 1.0b1 version of Numpy a maintenance release of the stable 1.0 release, or is it a BETA release for the upcoming 1.0 release of Numpy? -- Rudolph van der Merwe From cookedm at physics.mcmaster.ca Thu Aug 3 05:26:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 3 Aug 2006 05:26:32 -0400 Subject: [Numpy-discussion] Confusion re. version numbers In-Reply-To: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> References: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> Message-ID: <20060803092632.GA10364@arbutus.physics.mcmaster.ca> On Thu, Aug 03, 2006 at 11:02:11AM +0200, Rudolph van der Merwe wrote: > Is the current 1.0b1 version of Numpy a maintenance release of the > stable 1.0 release, or is it a BETA release for the upcoming 1.0 > release of Numpy? Beta. Maintenance releases will have version numbers like 1.0.1. -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From mfajer at gmail.com Thu Aug 3 10:49:43 2006 From: mfajer at gmail.com (Mikolai Fajer) Date: Thu, 3 Aug 2006 10:49:43 -0400 Subject: [Numpy-discussion] Histogram versus histogram2d Message-ID: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> Hello, I have noticed a difference between the 1d and 2d histogram functions. The histogram function bins everything between the elements of edges, and then includes everything greater than the last edge element in the last bin. The histogram2d function only bins in the range specified by edges. Is there a reason these two functions do not operate in the same way? -- -Mikolai Fajer- From mark at mitre.org Thu Aug 3 11:25:26 2006 From: mark at mitre.org (Mark Heslep) Date: Thu, 03 Aug 2006 11:25:26 -0400 Subject: [Numpy-discussion] Fastest binary threshold? In-Reply-To: <20060803004522.GC6682@mentat.za.net> References: <44D1103B.9000808@mitre.org> <20060803004522.GC6682@mentat.za.net> Message-ID: <44D21566.9060708@mitre.org> Stefan van der Walt wrote: > Binary thresholding can be added to ndimage easily, if further speed > improvement is needed. > > Regards > Stéfan Yes, I'd like to become involved in that effort. What's the status of ndimage now? Has it all been brought over from numarray and placed, where?
Is there a template of some kind for adding new code? Regards, Mark From charlesr.harris at gmail.com Thu Aug 3 11:38:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 3 Aug 2006 09:38:25 -0600 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: Message-ID: Heh, This is fun. Two more variations with 1000 reps instead of 100 for better timing:

def numpy_nmean_conv_nl_tweak1(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    a[:n] /= numpy.arange(1, n + 1)
    a[n:] /= n
    return a[:len(list)]

def numpy_nmean_conv_nl_tweak2(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    a[:n] /= numpy.arange(1, n + 1)
    a[n:] *= 1.0/n
    return a[:len(list)]

Which gives

numpy convolve took: 2.630000 sec.
numpy convolve noloop took: 0.320000 sec.
numpy convolve noloop tweak1 took: 0.250000 sec.
numpy convolve noloop tweak2 took: 0.240000 sec.

Chuck On 8/2/06, Phil Ruggera wrote: > > A variation of the proposed convolve routine is very fast: > > regular python took: 1.150214 sec. > numpy mean slice took: 2.427513 sec. > numpy convolve took: 0.546854 sec. > numpy convolve noloop took: 0.058611 sec.
> > Code: > > # mean of n values within an array > import numpy, time > def nmean(list,n): > a = [] > for i in range(1,len(list)+1): > start = i-n > divisor = n > if start < 0: > start = 0 > divisor = i > a.append(sum(list[start:i])/divisor) > return a > > t = [1.0*i for i in range(1400)] > start = time.clock() > for x in range(100): > reg = nmean(t,50) > print "regular python took: %f sec."%(time.clock() - start) > > def numpy_nmean(list,n): > a = numpy.empty(len(list),dtype=float) > for i in range(1,len(list)+1): > start = i-n > if start < 0: > start = 0 > a[i-1] = list[start:i].mean(0) > return a > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npm = numpy_nmean(t,50) > print "numpy mean slice took: %f sec."%(time.clock() - start) > > def numpy_nmean_conv(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(0,len(list)): > if i < n : > a[i] /= i + 1 > else : > a[i] /= n > return a[:len(list)] > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npc = numpy_nmean_conv(t,50) > print "numpy convolve took: %f sec."%(time.clock() - start) > > def numpy_nmean_conv_nl(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(n): > a[i] /= i + 1 > a[n:] /= n > return a[:len(list)] > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npn = numpy_nmean_conv_nl(t,50) > print "numpy convolve noloop took: %f sec."%(time.clock() - start) > > numpy.testing.assert_equal(reg,npm) > numpy.testing.assert_equal(reg,npc) > numpy.testing.assert_equal(reg,npn) > > On 7/29/06, David Grant wrote: > > > > > > > > On 7/29/06, Charles R Harris wrote: > > > > > > Hmmm, > > > > > > I rewrote the subroutine a bit. 
> > > > > > > > > def numpy_nmean(list,n): > > > a = numpy.empty(len(list),dtype=float) > > > > > > b = numpy.cumsum(list) > > > for i in range(0,len(list)): > > > if i < n : > > > a[i] = b[i]/(i+1) > > > else : > > > a[i] = (b[i] - b[i-n])/(i+1) > > > return a > > > > > > and got > > > > > > regular python took: 0.750000 sec. > > > numpy took: 0.380000 sec. > > > > > > I got rid of the for loop entirely. Usually this is the thing to do, at > > least this will always give speedups in Matlab and also in my limited > > experience with Numpy/Numeric: > > > > def numpy_nmean2(list,n): > > > > a = numpy.empty(len(list),dtype=float) > > b = numpy.cumsum(list) > > c = concatenate((b[n:],b[:n])) > > a[:n] = b[:n]/(i+1) > > a[n:] = (b[n:] - c[n:])/(i+1) > > return a > > > > I got no noticeable speedup from doing this which I thought was pretty > > amazing. I even profiled all the functions, the original, the one > written by > > Charles, and mine, using hotspot just to make sure nothing funny was > going > > on. I guess plain old Python can be better than you'd expect in certain > > situtations. > > > > -- > > David Grant > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haase at msg.ucsf.edu Thu Aug 3 12:32:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 3 Aug 2006 09:32:30 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D18D14.8030609@ieee.org> References: <44D17C2A.2050601@msg.ucsf.edu> <44D186CB.6000307@msg.ucsf.edu> <44D18D14.8030609@ieee.org> Message-ID: <200608030932.31118.haase@msg.ucsf.edu> On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: > Sebastian Haase wrote: > > Thanks, > > I just found > > numpy.isscalar() and numpy.issctype() ? > > These sound like they would do what I need - what is the difference > > between the two ? > > Oh, yeah. > > numpy.issctype works with type objects > numpy.isscalar works with instances > > Neither of them distinguish between scalars and "numbers." > > If you get errors with isscalar it would be nice to know what they are. I'm still trying to reproduce the exception, but here is a first comparison that - honestly - does not make much sense to me: (type vs. instance seems to get mostly the same results and why is there a difference with a string ('12') )

>>> N.isscalar(12)
True
>>> N.issctype(12)
True
>>> N.isscalar('12')
True
>>> N.issctype('12')
False
>>> N.isscalar(N.array([1]))
False
>>> N.issctype(N.array([1]))
True
>>> N.isscalar(N.array([1]).dtype)
False
>>> N.issctype(N.array([1]).dtype)
False
# apparently new 'scalars' have a dtype attribute !
>>> N.isscalar(N.array([1])[0].dtype)
False
>>> N.issctype(N.array([1])[0].dtype)
False
>>> N.isscalar(N.array([1])[0])
True
>>> N.issctype(N.array([1])[0])
True

-Sebastian From Chris.Barker at noaa.gov Thu Aug 3 13:33:54 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 03 Aug 2006 10:33:54 -0700 Subject: [Numpy-discussion] help!
type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: <44D23382.50606@noaa.gov> Sebastian Haase wrote: > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > OK - how should this have been done right ? Well, as others have said, Python uses "duck typing", so you really shouldn't be checking for specific types anyway -- if whatever is passed in acts like it should, that's all you need to know. However, sometimes it does make sense to catch the error sooner, rather than later, so that it can be obvious, or handled properly, or give a better error message, or whatever. In this case, I still use a "duck typing" approach: I don't need to know exactly what type it is, I just need to know that I can use it in the way I want, and an easy way to do that is to turn it into a known type:

def is_number(val):
    try:
        float(val)
        return True
    except (ValueError, TypeError):
        return False

Though more often, I'd just call float on it, and pass that along, rather than explicitly checking. This works at least with numpy float64scalar and float32scalar, and it should work with all numpy scalar types, except perhaps the long types that don't fit into a Python float. It'll also turn strings into floats if it can, which may or may not be what you want. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From stefan at sun.ac.za Thu Aug 3 14:35:34 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 3 Aug 2006 20:35:34 +0200 Subject: [Numpy-discussion] Fastest binary threshold?
In-Reply-To: <44D21566.9060708@mitre.org> References: <44D1103B.9000808@mitre.org> <20060803004522.GC6682@mentat.za.net> <44D21566.9060708@mitre.org> Message-ID: <20060803183534.GF6682@mentat.za.net> Hi Mark On Thu, Aug 03, 2006 at 11:25:26AM -0400, Mark Heslep wrote: > Stefan van der Walt wrote: > > Binary thresholding can be added to ndimage easily, if further speed > > improvement is needed. > > > > Regards > > Stéfan > Yes, I'd like to become involved in that effort. What's the status of > ndimage now? Has it all been brought over from numarray and placed, > where? Is there a template of some kind for adding new code? You can find 'ndimage' in scipy. Travis also recently added the STSCI image processing tools to the sandbox. Stéfan From oliphant at ee.byu.edu Thu Aug 3 16:37:33 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 3 Aug 2006 13:37:33 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' Message-ID: <200608031337.33596.oliphant@ee.byu.edu> Sebastian Haase wrote: >On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: >>Sebastian Haase wrote: >>>Thanks, >>>I just found >>>numpy.isscalar() and numpy.issctype() ? >>>These sound like they would do what I need - what is the difference >>>between the two ? >> >>Oh, yeah. >> >>numpy.issctype works with type objects >>numpy.isscalar works with instances >> >>Neither of them distinguish between scalars and "numbers." >> >>If you get errors with isscalar it would be nice to know what they are. > >I'm still trying to reproduce the exception, but here is a first comparison >that - honestly - does not make much sense to me: >(type vs. instance seems to get mostly the same results and why is there a >difference with a string ('12') ) These routines are a little buggy. I've cleaned them up in SVN to reflect what they should do. When the dtype object came into existence a lot of what the scalar types were being used for was no longer needed.
Some of these functions weren't updated to deal with the dtype objects correctly either. This is what you get now:

>>> import numpy as N
>>> N.isscalar(12)
True
>>> N.issctype(12)
False
>>> N.isscalar('12')
True
>>> N.issctype('12')
False
>>> N.isscalar(N.array([1]))
False
>>> N.issctype(N.array([1]))
False
>>> N.isscalar(N.array([1]).dtype)
False
>>> N.issctype(N.array([1]).dtype)
True
>>> N.isscalar(N.array([1])[0].dtype)
False
>>> N.issctype(N.array([1])[0].dtype)
True
>>> N.isscalar(N.array([1])[0])
True
>>> N.issctype(N.array([1])[0])
False

-Travis >>>>N.isscalar(12) > >True > >>>>N.issctype(12) > >True > >>>>N.isscalar('12') > >True > >>>>N.issctype('12') > >False > >>>>N.isscalar(N.array([1])) > >False > >>>>N.issctype(N.array([1])) > >True > >>>>N.isscalar(N.array([1]).dtype) > >False > >>>>N.issctype(N.array([1]).dtype) > >False > > # apparently new 'scalars' have a dtype attribute ! > >>>>N.isscalar(N.array([1])[0].dtype) > >False > >>>>N.issctype(N.array([1])[0].dtype) > >False > >>>>N.isscalar(N.array([1])[0]) > >True > >>>>N.issctype(N.array([1])[0]) > >True > >-Sebastian From haase at msg.ucsf.edu Thu Aug 3 16:42:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 3 Aug 2006 13:42:30 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D25A59.4010907@ee.byu.edu> References: <44D17C2A.2050601@msg.ucsf.edu> <200608030932.31118.haase@msg.ucsf.edu> <44D25A59.4010907@ee.byu.edu> Message-ID: <200608031342.30977.haase@msg.ucsf.edu> On Thursday 03 August 2006 13:19, Travis Oliphant wrote: > Sebastian Haase wrote: > >On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: > >>Sebastian Haase wrote: > >>>Thanks, > >>>I just found > >>>numpy.isscalar() and numpy.issctype() ? > >>>These sound like they would do what I need - what is the difference > >>>between the two ? > >> > >>Oh, yeah.
> >> > >>numpy.issctype works with type objects > >>numpy.isscalar works with instances > >> > >>Neither of them distinguish between scalars and "numbers." > >> > >>If you get errors with isscalar it would be nice to know what they are. > > > >I'm still trying to reproduce the exception, but here is a first > > comparison that - honestly - does not make much sense to me: > >(type vs. instance seems to get mostly the same results and why is there > > a difference with a string ('12') ) > > These routines are a little buggy. I've cleaned them up in SVN to > reflect what they should do. When the dtype object came into > existence a lot of what the scalar types where being used for was no > longer needed. Some of these functions weren't updated to deal with > the dtype objects correctly either. > > This is what you get now: > >>> import numpy as N > >>> N.isscalar(12) > > True > > >>> N.issctype(12) > > False > > >>> N.isscalar('12') > > True > > >>> N.issctype('12') > > False > > >>> N.isscalar(N.array([1])) > > False > > >>> N.issctype(N.array([1])) > > False > > >>> N.isscalar(N.array([1]).dtype) > > False > > >>> N.issctype(N.array([1]).dtype) > > True > > >>> N.isscalar(N.array([1])[0].dtype) > > False > > >>> N.issctype(N.array([1])[0].dtype) > > True > > >>> N.isscalar(N.array([1])[0]) > > True > > >>> N.issctype(N.array([1])[0]) > > False > > > -Travis Great! Just wanted to point out that '12' is a scalar - I suppose that's what it is. (To determine if something is a number it seems best to implement a try: ... except: ... something like float(x) - as Chris has suggested ) -S. From myeates at jpl.nasa.gov Thu Aug 3 19:30:33 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 03 Aug 2006 16:30:33 -0700 Subject: [Numpy-discussion] help! 
type 'float64scalar' is not type 'float' In-Reply-To: <200608031337.33596.oliphant@ee.byu.edu> References: <200608031337.33596.oliphant@ee.byu.edu> Message-ID: <44D28719.7020703@jpl.nasa.gov> Here is a similar problem I wish could be fixed. In scipy.io.mio is savemat with the line if type(var) != ArrayType which, I believe should be changed to if not isinstance(var,ArrayType): so I can use savemat with memory mapped arrays. Mathew Travis Oliphant wrote: > Sebastian Haase wrote: > >> On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: >> >>> Sebastian Haase wrote: >>> >>>> Thanks, >>>> I just found >>>> numpy.isscalar() and numpy.issctype() ? >>>> These sound like they would do what I need - what is the difference >>>> between the two ? >>>> >>> Oh, yeah. >>> >>> numpy.issctype works with type objects >>> numpy.isscalar works with instances >>> >>> Neither of them distinguish between scalars and "numbers." >>> >>> If you get errors with isscalar it would be nice to know what they are. >>> >> I'm still trying to reproduce the exception, but here is a first comparison >> that - honestly - does not make much sense to me: >> (type vs. instance seems to get mostly the same results and why is there a >> difference with a string ('12') ) >> > > These routines are a little buggy. I've cleaned them up in SVN to > reflect what they should do. When the dtype object came into > existence a lot of what the scalar types where being used for was no > longer needed. Some of these functions weren't updated to deal with > the dtype objects correctly either. 
> > This is what you get now: > >>> import numpy as N > >>> N.isscalar(12) > > True > > >>> N.issctype(12) > > False > > >>> N.isscalar('12') > > True > > >>> N.issctype('12') > > False > > >>> N.isscalar(N.array([1])) > > False > > >>> N.issctype(N.array([1])) > > False > > >>> N.isscalar(N.array([1]).dtype) > > False > > >>> N.issctype(N.array([1]).dtype) > > True > > >>> N.isscalar(N.array([1])[0].dtype) > > False > > >>> N.issctype(N.array([1])[0].dtype) > > True > > >>> N.isscalar(N.array([1])[0]) > > True > > >>> N.issctype(N.array([1])[0]) > > False > > > -Travis > > >>>>> N.isscalar(12) >>>>> >> True >> >> >>>>> N.issctype(12) >>>>> >> True >> >> >>>>> N.isscalar('12') >>>>> >> True >> >> >>>>> N.issctype('12') >>>>> >> False >> >> >>>>> N.isscalar(N.array([1])) >>>>> >> False >> >> >>>>> N.issctype(N.array([1])) >>>>> >> True >> >> >>>>> N.isscalar(N.array([1]).dtype) >>>>> >> False >> >> >>>>> N.issctype(N.array([1]).dtype) >>>>> >> False >> >> # apparently new 'scalars' have a dtype attribute ! >> >> >>>>> N.isscalar(N.array([1])[0].dtype) >>>>> >> False >> >> >>>>> N.issctype(N.array([1])[0].dtype) >>>>> >> False >> >> >>>>> N.isscalar(N.array([1])[0]) >>>>> >> True >> >> >>>>> N.issctype(N.array([1])[0]) >>>>> >> True >> >> -Sebastian >> > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
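Mathew's `isinstance` fix matters because `numpy.memmap` is an ndarray *subclass*: an exact `type(var) != ArrayType` comparison rejects it even though it behaves like an array. A minimal sketch of the difference, using a hypothetical `TaggedArray` subclass in place of a memory-mapped array so no file I/O is needed:

```python
import numpy as np

# TaggedArray is a stand-in for numpy.memmap (also an ndarray subclass):
# an exact type() comparison rejects it, while isinstance() accepts it --
# the behavior Mathew asks savemat to adopt.
class TaggedArray(np.ndarray):
    pass

a = np.arange(3).view(TaggedArray)

print(type(a) == np.ndarray)      # False: exact type check fails for subclasses
print(isinstance(a, np.ndarray))  # True: isinstance covers subclasses
```

The same reasoning applies to any other ndarray subclass a caller might pass in, which is why `isinstance` is the idiomatic check.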
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > >
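Travis's cleaned-up distinction — `issctype` works with type objects, `isscalar` with instances — can still be exercised on the instance side today (`issctype` itself was later deprecated in NumPy, so this sketch omits it and uses `isinstance(x, np.generic)` as the numpy-scalar test instead):

```python
import numpy as np

# Instance-side checks from Travis's table: plain Python numbers and
# strings count as scalars, arrays do not, and elements pulled out of an
# array (numpy scalar types such as int64) do.
print(np.isscalar(12))               # True
print(np.isscalar('12'))             # True  -- a string is a "scalar" here
print(np.isscalar(np.array([1])))    # False -- arrays are not scalars
elem = np.array([1])[0]              # a numpy integer scalar instance
print(np.isscalar(elem))             # True
print(isinstance(elem, np.generic))  # True  -- identifies numpy scalar types
```

As the thread notes, `isscalar` answers "is this a scalar?", not "is this a number?" — for the latter, the `float(x)` inside try/except that Chris suggested remains the robust test.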
From oliphant.travis at ieee.org Thu Aug 3 23:48:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 03 Aug 2006 21:48:42 -0600 Subject: [Numpy-discussion] Numpy 1.0b2 for this weekend Message-ID: <44D2C39A.1070400@ieee.org> I'd like to release NumPy beta 2.0 on Saturday to get ready for the SciPy 2006 conference. Please post any bugs and commit any fixes by then. I suspect there will be 4 or 5 beta releases and then a couple of release candidates before the final release comes out at the first of October. -Travis From haase at msg.ucsf.edu Fri Aug 4 00:00:38 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 03 Aug 2006 21:00:38 -0700 Subject: [Numpy-discussion] link to numpy ticket tracker ontp the wiki Message-ID: <44D2C666.2080503@msg.ucsf.edu> Hi! I would like to suggest to put a link to the bug/wishlist tracker web site on the scipy.org wiki site. http://projects.scipy.org/scipy/numpy/ticket I did not do it myself because I could not decide what the best place for it would - I think it should be rather exposed ... The only link I could find was somewhere inside an FAQ for the SciPy package and it was only for the scipy-bug tracker. 
Thanks, Sebastian Haase From haase at msg.ucsf.edu Fri Aug 4 00:20:07 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 03 Aug 2006 21:20:07 -0700 Subject: [Numpy-discussion] bug tracker to cc email address by default Message-ID: <44D2CAF7.6090900@msg.ucsf.edu> Hi, Is it possible to have 'cc'-ing the poster of a bug ticket be the default !? Or is/can this be set in a per user preference somehow ? Thanks, Sebastian Haase From robert.kern at gmail.com Fri Aug 4 00:27:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 03 Aug 2006 23:27:00 -0500 Subject: [Numpy-discussion] bug tracker to cc email address by default In-Reply-To: <44D2CAF7.6090900@msg.ucsf.edu> References: <44D2CAF7.6090900@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > Is it possible to have > 'cc'-ing the poster of a bug ticket be the default !? > Or is/can this be set in a per user preference somehow ? IIRC, if you supply your email address in your "Settings", you will get notification emails. http://projects.scipy.org/scipy/numpy/settings Otherwise, subscribe to the numpy-tickets email list, and you will get notifications of all tickets. http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pruggera at gmail.com Fri Aug 4 00:46:04 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Thu, 3 Aug 2006 21:46:04 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: Tweek2 is slightly faster, but does not produce the same result as the regular python baseline: regular python took: 11.997997 sec. numpy convolve took: 0.611996 sec. numpy convolve tweek 1 took: 0.442029 sec. numpy convolve tweek 2 took: 0.418857 sec. Traceback (most recent call last): File "G:\Python\Dev\mean.py", line 57, in ? 
numpy.testing.assert_equal(reg, np3) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 130, in assert_equal return assert_array_equal(actual, desired, err_msg) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 217, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 17.1428571429%): Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000... Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000... Code: # mean of n values within an array import numpy, time def nmean(list,n): a = [] for i in range(1,len(list)+1): start = i-n divisor = n if start < 0: start = 0 divisor = i a.append(sum(list[start:i])/divisor) return a def testNP(code, text): start = time.clock() for x in range(1000): np = code(t,50) print text, "took: %f sec."%(time.clock() - start) return np t = [1.3*i for i in range(1400)] reg = testNP(nmean, 'regular python') t = numpy.array(t,dtype=float) def numpy_nmean_conv(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") for i in range(n): a[i] /= i + 1 a[n:] /= n return a[:len(list)] np1 = testNP(numpy_nmean_conv, 'numpy convolve') def numpy_nmean_conv_nl_tweak1(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n+1) a[n:] /= n return a[:len(list)] np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve tweek 1') def numpy_nmean_conv_nl_tweak2(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n + 1) a[n:] *= 1.0/n return a[:len(list)] np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve tweek 2') numpy.testing.assert_equal(reg, np1) numpy.testing.assert_equal(reg, np2) numpy.testing.assert_equal(reg, np3) On 8/3/06, Charles R Harris wrote: > Hi Scott, > > > On 8/3/06, Scott Ransom wrote: > > You should be able to modify the kernel so 
that you can avoid > > many of the divides at the end. Something like: > > > > def numpy_nmean_conv_nl2(list,n): > > b = numpy.ones(n,dtype=float) / n > > a = numpy.convolve (c,b,mode="full") > > # Note: something magic in here to fix the first 'n' values > > return a[:len(list)] > > > Yep, I tried that but it wasn't any faster. It might help for really *big* > arrays. The first n-1 values still need to be fixed after. > > Chuck > > > I played with it a bit, but don't have time to figure out exactly > > how convolve is mangling the first n return values... > > > > Scott > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > Heh, > > > > > > This is fun. Two more variations with 1000 reps instead of 100 for > better > > > timing: > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > b = numpy.ones(n,dtype=float) > > > a = numpy.convolve(list,b,mode="full") > > > a[:n] /= numpy.arange(1, n + 1) > > > a[n:] /= n > > > return a[:len(list)] > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > b = numpy.ones(n,dtype=float) > > > a = numpy.convolve(list,b,mode="full") > > > a[:n] /= numpy.arange(1, n + 1) > > > a[n:] *= 1.0/n > > > return a[:len(list)] > > > > > > Which gives > > > > > > numpy convolve took: 2.630000 sec. > > > numpy convolve noloop took: 0.320000 sec. > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > Chuck > > > > > > On 8/2/06, Phil Ruggera wrote: > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > >regular python took: 1.150214 sec. > > > >numpy mean slice took: 2.427513 sec. > > > >numpy convolve took: 0.546854 sec. > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > >Code: > > > > > > > ># mean of n values within an array > > > >import numpy, time > > > >def nmean(list,n): > > > > a = [] > > > > for i in range(1,len(list)+1): > > > > start = i-n > > > > divisor = n > > > > if start < 0: > > > > start = 0 > > > > divisor = i > > > > a.append(sum(list[start:i])/divisor) > > > > return a > > > > > > > >t = [1.0*i for i in range(1400)] > > > >start = time.clock() > > > >for x in range(100): > > > > reg = nmean(t,50) > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean(list,n): > > > > a = numpy.empty(len(list),dtype=float) > > > > for i in range(1,len(list)+1): > > > > start = i-n > > > > if start < 0: > > > > start = 0 > > > > a[i-1] = list[start:i].mean(0) > > > > return a > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > >start = time.clock() > > > >for x in range(100): > > > > npm = numpy_nmean(t,50) > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean_conv(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > for i in range(0,len(list)): > > > > if i < n : > > > > a[i] /= i + 1 > > > > else : > > > > a[i] /= n > > > > return a[:len(list)] > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > >start = time.clock () > > > >for x in range(100): > > > > npc = numpy_nmean_conv(t,50) > > > >print "numpy convolve took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > for i in range(n): > > > > a[i] /= i + 1 > > > > a[n:] /= n > > > > return a[:len(list)] > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > >start = time.clock() > > > >for x in range(100): > > > > npn = numpy_nmean_conv_nl(t,50) > > > >print "numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > 
>numpy.testing.assert_equal(reg,npc) > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > >> > > > >> > > > >> > > > >> On 7/29/06, Charles R Harris wrote: > > > >> > > > > >> > Hmmm, > > > >> > > > > >> > I rewrote the subroutine a bit. > > > >> > > > > >> > > > > >> > def numpy_nmean(list,n): > > > >> > a = numpy.empty(len(list),dtype=float) > > > >> > > > > >> > b = numpy.cumsum(list) > > > >> > for i in range(0,len(list)): > > > >> > if i < n : > > > >> > a[i] = b[i]/(i+1) > > > >> > else : > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > >> > return a > > > >> > > > > >> > and got > > > >> > > > > >> > regular python took: 0.750000 sec. > > > >> > numpy took: 0.380000 sec. > > > >> > > > >> > > > >> I got rid of the for loop entirely. Usually this is the thing to do, > at > > > >> least this will always give speedups in Matlab and also in my limited > > > >> experience with Numpy/Numeric: > > > >> > > > >> def numpy_nmean2(list,n): > > > >> > > > >> a = numpy.empty(len(list),dtype=float) > > > >> b = numpy.cumsum(list) > > > >> c = concatenate((b[n:],b[:n])) > > > >> a[:n] = b[:n]/(i+1) > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > >> return a > > > >> > > > >> I got no noticeable speedup from doing this which I thought was > pretty > > > >> amazing. I even profiled all the functions, the original, the one > > > >written by > > > >> Charles, and mine, using hotspot just to make sure nothing funny was > > > >going > > > >> on. I guess plain old Python can be better than you'd expect in > certain > > > >> situtations. > > > >> > > > >> -- > > > >> David Grant > > > > > > > > >------------------------------------------------------------------------- > > > >Take Surveys. Earn Cash. 
Influence the Future of IT > > > >Join SourceForge.net's Techsay panel and you'll get the chance to share > > > >your > > > >opinions on IT & business topics through brief surveys -- and earn cash > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > >_______________________________________________ > > > >Numpy-discussion mailing list > > > > Numpy-discussion at lists.sourceforge.net > > > > >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > ------------------------------------------------------------------------- > > > Take Surveys. Earn Cash. Influence the Future of IT > > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > > opinions on IT & business topics through brief surveys -- and earn cash > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > -- > > -- > > Scott M. Ransom Address: NRAO > > Phone: (434) 296-0320 520 Edgemont Rd. > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From charlesr.harris at gmail.com Fri Aug 4 02:40:13 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Aug 2006 00:40:13 -0600 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: Hi Phil, Curious. It works fine here in the original form. I even expected a tiny difference because of floating point voodoo but there was none at all. Now if I copy your program and run it there *is* a small difference over the slice [1:] (to avoid division by zero). 
index of max fractional difference: 234 max fractional difference: 2.077e-16 reg at max fractional difference: 1.098e+03 Which is just about roundoff error (1.11e-16) for double precision, so it lost a bit of precision. Still, I am not clear why the results should differ at all between the original and your new code. Cue spooky music. Chuck On 8/3/06, Phil Ruggera wrote: > > Tweek2 is slightly faster, but does not produce the same result as the > regular python baseline: > > regular python took: 11.997997 sec. > numpy convolve took: 0.611996 sec. > numpy convolve tweek 1 took: 0.442029 sec. > numpy convolve tweek 2 took: 0.418857 sec. > Traceback (most recent call last): > File "G:\Python\Dev\mean.py", line 57, in ? > numpy.testing.assert_equal(reg, np3) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line > 130, in assert_equal > return assert_array_equal(actual, desired, err_msg) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line > 217, in assert_array_equal > assert cond,\ > AssertionError: > Arrays are not equal (mismatch 17.1428571429%): > Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > ..., 1.7842500000000002e+03 1.785550000... > Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > ..., 1.7842500000000002e+03 1.785550000... 
Code: > > # mean of n values within an array > import numpy, time > def nmean(list,n): > a = [] > for i in range(1,len(list)+1): > start = i-n > divisor = n > if start < 0: > start = 0 > divisor = i > a.append(sum(list[start:i])/divisor) > return a > > def testNP(code, text): > start = time.clock() > for x in range(1000): > np = code(t,50) > print text, "took: %f sec."%(time.clock() - start) > return np > > t = [1.3*i for i in range(1400)] > reg = testNP(nmean, 'regular python') > > t = numpy.array(t,dtype=float) > > def numpy_nmean_conv(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(n): > a[i] /= i + 1 > a[n:] /= n > return a[:len(list)] > > np1 = testNP(numpy_nmean_conv, 'numpy convolve') > > def numpy_nmean_conv_nl_tweak1(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > a[:n] /= numpy.arange(1, n+1) > a[n:] /= n > return a[:len(list)] > > np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve tweek 1') > > def numpy_nmean_conv_nl_tweak2(list,n): > > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > a[:n] /= numpy.arange(1, n + 1) > a[n:] *= 1.0/n > return a[:len(list)] > > np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve tweek 2') > > numpy.testing.assert_equal(reg, np1) > numpy.testing.assert_equal(reg, np2) > numpy.testing.assert_equal(reg, np3) > > On 8/3/06, Charles R Harris wrote: > > Hi Scott, > > > > > > On 8/3/06, Scott Ransom wrote: > > > You should be able to modify the kernel so that you can avoid > > > many of the divides at the end. Something like: > > > > > > def numpy_nmean_conv_nl2(list,n): > > > b = numpy.ones(n,dtype=float) / n > > > a = numpy.convolve (c,b,mode="full") > > > # Note: something magic in here to fix the first 'n' values > > > return a[:len(list)] > > > > > > Yep, I tried that but it wasn't any faster. It might help for really > *big* > > arrays. The first n-1 values still need to be fixed after. 
> > > > Chuck > > > > > I played with it a bit, but don't have time to figure out exactly > > > how convolve is mangling the first n return values... > > > > > > Scott > > > > > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > > Heh, > > > > > > > > This is fun. Two more variations with 1000 reps instead of 100 for > > better > > > > timing: > > > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > a[:n] /= numpy.arange(1, n + 1) > > > > a[n:] /= n > > > > return a[:len(list)] > > > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > a[:n] /= numpy.arange(1, n + 1) > > > > a[n:] *= 1.0/n > > > > return a[:len(list)] > > > > > > > > Which gives > > > > > > > > numpy convolve took: 2.630000 sec. > > > > numpy convolve noloop took: 0.320000 sec. > > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > > > Chuck > > > > > > > > On 8/2/06, Phil Ruggera wrote: > > > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > > > >regular python took: 1.150214 sec. > > > > >numpy mean slice took: 2.427513 sec. > > > > >numpy convolve took: 0.546854 sec. > > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > > > >Code: > > > > > > > > > ># mean of n values within an array > > > > >import numpy, time > > > > >def nmean(list,n): > > > > > a = [] > > > > > for i in range(1,len(list)+1): > > > > > start = i-n > > > > > divisor = n > > > > > if start < 0: > > > > > start = 0 > > > > > divisor = i > > > > > a.append(sum(list[start:i])/divisor) > > > > > return a > > > > > > > > > >t = [1.0*i for i in range(1400)] > > > > >start = time.clock() > > > > >for x in range(100): > > > > > reg = nmean(t,50) > > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean(list,n): > > > > > a = numpy.empty(len(list),dtype=float) > > > > > for i in range(1,len(list)+1): > > > > > start = i-n > > > > > if start < 0: > > > > > start = 0 > > > > > a[i-1] = list[start:i].mean(0) > > > > > return a > > > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > > >start = time.clock() > > > > >for x in range(100): > > > > > npm = numpy_nmean(t,50) > > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean_conv(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > for i in range(0,len(list)): > > > > > if i < n : > > > > > a[i] /= i + 1 > > > > > else : > > > > > a[i] /= n > > > > > return a[:len(list)] > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > >start = time.clock () > > > > >for x in range(100): > > > > > npc = numpy_nmean_conv(t,50) > > > > >print "numpy convolve took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > for i in range(n): > > > > > a[i] /= i + 1 > > > > > a[n:] /= n > > > > > return a[:len(list)] > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > >start = time.clock() > > > > >for x in range(100): > > > > > npn = numpy_nmean_conv_nl(t,50) > > > > >print 
"numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > > >numpy.testing.assert_equal(reg,npc) > > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > > >> > > > > >> > > > > >> > > > > >> On 7/29/06, Charles R Harris wrote: > > > > >> > > > > > >> > Hmmm, > > > > >> > > > > > >> > I rewrote the subroutine a bit. > > > > >> > > > > > >> > > > > > >> > def numpy_nmean(list,n): > > > > >> > a = numpy.empty(len(list),dtype=float) > > > > >> > > > > > >> > b = numpy.cumsum(list) > > > > >> > for i in range(0,len(list)): > > > > >> > if i < n : > > > > >> > a[i] = b[i]/(i+1) > > > > >> > else : > > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > > >> > return a > > > > >> > > > > > >> > and got > > > > >> > > > > > >> > regular python took: 0.750000 sec. > > > > >> > numpy took: 0.380000 sec. > > > > >> > > > > >> > > > > >> I got rid of the for loop entirely. Usually this is the thing to > do, > > at > > > > >> least this will always give speedups in Matlab and also in my > limited > > > > >> experience with Numpy/Numeric: > > > > >> > > > > >> def numpy_nmean2(list,n): > > > > >> > > > > >> a = numpy.empty(len(list),dtype=float) > > > > >> b = numpy.cumsum(list) > > > > >> c = concatenate((b[n:],b[:n])) > > > > >> a[:n] = b[:n]/(i+1) > > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > > >> return a > > > > >> > > > > >> I got no noticeable speedup from doing this which I thought was > > pretty > > > > >> amazing. I even profiled all the functions, the original, the one > > > > >written by > > > > >> Charles, and mine, using hotspot just to make sure nothing funny > was > > > > >going > > > > >> on. I guess plain old Python can be better than you'd expect in > > certain > > > > >> situtations. 
> > > > >> > > > > >> -- > > > > >> David Grant > > > > > > > > > > > > >------------------------------------------------------------------------- > > > > >Take Surveys. Earn Cash. Influence the Future of IT > > > > >Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > > >your > > > > >opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > >_______________________________________________ > > > > >Numpy-discussion mailing list > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > Take Surveys. Earn Cash. Influence the Future of IT > > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > your > > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > > > Numpy-discussion mailing list > > > > Numpy-discussion at lists.sourceforge.net > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > -- > > > -- > > > Scott M. Ransom Address: NRAO > > > Phone: (434) 296-0320 520 Edgemont Rd. > > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pruggera at gmail.com Fri Aug 4 11:12:51 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Fri, 4 Aug 2006 08:12:51 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: The spook is in t = [1.3*i for i in range(1400)]. It used to be t = [1.0*i for i in range(1400)] but I changed it to shake out algorithms that produce differences. But a max difference of 2.077e-16 is immaterial for my application. I should use a less strict compare. On 8/3/06, Charles R Harris wrote: > Hi Phil, > > Curious. It works fine here in the original form. I even expected a tiny > difference because of floating point voodoo but there was none at all. Now > if I copy your program and run it there *is* a small difference over the > slice [1:] (to avoid division by zero). > > index of max fractional difference: 234 > max fractional difference: 2.077e-16 > reg at max fractional difference: 1.098e+03 > > Which is just about roundoff error (1.11e-16) for double precision, so it > lost a bit of precision. > > Still, I am not clear why the results should differ at all between the > original and your new code. Cue spooky music. > > Chuck > > On 8/3/06, Phil Ruggera wrote: > > Tweek2 is slightly faster, but does not produce the same result as the > > regular python baseline: > > > > regular python took: 11.997997 sec. > > numpy convolve took: 0.611996 sec. 
> > numpy convolve tweek 1 took: 0.442029 sec. > > numpy convolve tweek 2 took: 0.418857 sec. > > Traceback (most recent call last): > > File "G:\Python\Dev\mean.py", line 57, in ? > > numpy.testing.assert_equal(reg, np3) > > File > "C:\Python24\Lib\site-packages\numpy\testing\utils.py", > line > > 130, in assert_equal > > return assert_array_equal(actual, desired, err_msg) > > File > "C:\Python24\Lib\site-packages\numpy\testing\utils.py", > line > > 217, in assert_array_equal > > assert cond,\ > > AssertionError: > > Arrays are not equal (mismatch 17.1428571429%): > > Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > > ..., 1.7842500000000002e+03 1.785550000... > > Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > > ..., 1.7842500000000002e+03 1.785550000... > > > > > Code: > > > > # mean of n values within an array > > import numpy, time > > def nmean(list,n): > > a = [] > > for i in range(1,len(list)+1): > > start = i-n > > divisor = n > > if start < 0: > > start = 0 > > divisor = i > > a.append(sum(list[start:i])/divisor) > > return a > > > > def testNP(code, text): > > start = time.clock() > > for x in range(1000): > > np = code(t,50) > > print text, "took: %f sec."%( time.clock() - start) > > return np > > > > t = [1.3*i for i in range(1400)] > > reg = testNP(nmean, 'regular python') > > > > t = numpy.array(t,dtype=float) > > > > def numpy_nmean_conv(list,n): > > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > for i in range(n): > > a[i] /= i + 1 > > a[n:] /= n > > return a[:len(list)] > > > > np1 = testNP(numpy_nmean_conv, 'numpy convolve') > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > a[:n] /= numpy.arange(1, n+1) > > a[n:] /= n > > return a[:len(list)] > > > > np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve > tweek 1') > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > 
> > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > a[:n] /= numpy.arange(1, n + 1) > > a[n:] *= 1.0/n > > return a[:len(list)] > > > > np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve > tweek 2') > > > > numpy.testing.assert_equal(reg, np1) > > numpy.testing.assert_equal(reg, np2) > > numpy.testing.assert_equal(reg, np3) > > > > On 8/3/06, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > Hi Scott, > > > > > > > > > On 8/3/06, Scott Ransom wrote: > > > > You should be able to modify the kernel so that you can avoid > > > > many of the divides at the end. Something like: > > > > > > > > def numpy_nmean_conv_nl2(list,n): > > > > b = numpy.ones (n,dtype=float) / n > > > > a = numpy.convolve (c,b,mode="full") > > > > # Note: something magic in here to fix the first 'n' values > > > > return a[:len(list)] > > > > > > > > > Yep, I tried that but it wasn't any faster. It might help for really > *big* > > > arrays. The first n-1 values still need to be fixed after. > > > > > > Chuck > > > > > > > I played with it a bit, but don't have time to figure out exactly > > > > how convolve is mangling the first n return values... > > > > > > > > Scott > > > > > > > > > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > > > Heh, > > > > > > > > > > This is fun. 
Two more variations with 1000 reps instead of 100 for > > > better > > > > > timing: > > > > > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > a[:n] /= numpy.arange(1, n + 1) > > > > > a[n:] /= n > > > > > return a[:len(list)] > > > > > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > a[:n] /= numpy.arange(1, n + 1) > > > > > a[n:] *= 1.0/n > > > > > return a[:len(list)] > > > > > > > > > > Which gives > > > > > > > > > > numpy convolve took: 2.630000 sec. > > > > > numpy convolve noloop took: 0.320000 sec. > > > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > > > > > Chuck > > > > > > > > > > On 8/2/06, Phil Ruggera < pruggera at gmail.com> wrote: > > > > > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > > > > > >regular python took: 1.150214 sec. > > > > > >numpy mean slice took: 2.427513 sec. > > > > > >numpy convolve took: 0.546854 sec. > > > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > > > > > >Code: > > > > > > > > > > > ># mean of n values within an array > > > > > >import numpy, time > > > > > >def nmean(list,n): > > > > > > a = [] > > > > > > for i in range(1,len(list)+1): > > > > > > start = i-n > > > > > > divisor = n > > > > > > if start < 0: > > > > > > start = 0 > > > > > > divisor = i > > > > > > a.append(sum(list[start:i])/divisor) > > > > > > return a > > > > > > > > > > > >t = [1.0*i for i in range(1400)] > > > > > >start = time.clock () > > > > > >for x in range(100): > > > > > > reg = nmean(t,50) > > > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > > > > > >def numpy_nmean(list,n): > > > > > > a = numpy.empty(len(list),dtype=float) > > > > > > for i in range(1,len(list)+1): > > > > > > start = i-n > > > > > > if start < 0: > > > > > > start = 0 > > > > > > a[i-1] = list[start:i].mean(0) > > > > > > return a > > > > > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > > > >start = time.clock() > > > > > >for x in range(100): > > > > > > npm = numpy_nmean(t,50) > > > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > > > > > >def numpy_nmean_conv(list,n): > > > > > > b = numpy.ones(n,dtype=float) > > > > > > a = numpy.convolve(list,b,mode="full") > > > > > > for i in range(0,len(list)): > > > > > > if i < n : > > > > > > a[i] /= i + 1 > > > > > > else : > > > > > > a[i] /= n > > > > > > return a[:len(list)] > > > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > > >start = time.clock () > > > > > >for x in range(100): > > > > > > npc = numpy_nmean_conv(t,50) > > > > > >print "numpy convolve took: %f sec."%( time.clock() - start) > > > > > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > > > b = numpy.ones(n,dtype=float) > > > > > > a = numpy.convolve(list,b,mode="full") > > > > > > for i in range(n): > > > > > > a[i] /= i + 1 > > > > > > a[n:] /= n > > > > > > return a[:len(list)] > > > > > > > > > > > >t = 
numpy.arange(0,1400,dtype=float) > > > > > >start = time.clock() > > > > > >for x in range(100): > > > > > > npn = numpy_nmean_conv_nl(t,50) > > > > > >print "numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > > > >numpy.testing.assert_equal(reg,npc) > > > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > > > >> > > > > > >> > > > > > >> > > > > > >> On 7/29/06, Charles R Harris wrote: > > > > > >> > > > > > > >> > Hmmm, > > > > > >> > > > > > > >> > I rewrote the subroutine a bit. > > > > > >> > > > > > > >> > > > > > > >> > def numpy_nmean(list,n): > > > > > >> > a = numpy.empty(len(list),dtype=float) > > > > > >> > > > > > > >> > b = numpy.cumsum(list) > > > > > >> > for i in range(0,len(list)): > > > > > >> > if i < n : > > > > > >> > a[i] = b[i]/(i+1) > > > > > >> > else : > > > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > > > >> > return a > > > > > >> > > > > > > >> > and got > > > > > >> > > > > > > >> > regular python took: 0.750000 sec. > > > > > >> > numpy took: 0.380000 sec. > > > > > >> > > > > > >> > > > > > >> I got rid of the for loop entirely. Usually this is the thing to > do, > > > at > > > > > >> least this will always give speedups in Matlab and also in my > limited > > > > > >> experience with Numpy/Numeric: > > > > > >> > > > > > >> def numpy_nmean2(list,n): > > > > > >> > > > > > >> a = numpy.empty(len(list),dtype=float) > > > > > >> b = numpy.cumsum(list) > > > > > >> c = concatenate((b[n:],b[:n])) > > > > > >> a[:n] = b[:n]/(i+1) > > > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > > > >> return a > > > > > >> > > > > > >> I got no noticeable speedup from doing this which I thought was > > > pretty > > > > > >> amazing. 
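For reference, the cumsum variants quoted in this thread don't quite work as posted — one divides the full windows by i+1 instead of n, and the other references an undefined i — which may be why no speedup was observed. A corrected sketch (assuming n <= len(x); this is illustrative code, not code from the thread):

```python
import numpy

def running_mean_cumsum(x, n):
    """Mean of the trailing n values at each position, via cumsum."""
    x = numpy.asarray(x, dtype=float)
    # Pad the cumulative sum with a leading zero so every window sum
    # is c[i + 1] - c[i + 1 - n] with no special cases inside a loop.
    c = numpy.concatenate(([0.0], numpy.cumsum(x)))
    out = numpy.empty(len(x), dtype=float)
    # Partial windows at the start divide by their actual length ...
    out[:n] = c[1:n + 1] / numpy.arange(1, n + 1)
    # ... and full windows divide by n (not i + 1).
    out[n:] = (c[n + 1:] - c[1:len(x) - n + 1]) / n
    return out
```

Unlike convolve, whose cost grows with the window length, this touches each element a constant number of times; the price is that differencing two large cumulative sums can lose a little precision relative to direct summation.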
I even profiled all the functions, the original, the one > > > > > >written by > > > > > >> Charles, and mine, using hotspot just to make sure nothing funny > was > > > > > >going > > > > > >> on. I guess plain old Python can be better than you'd expect in > > > certain > > > > > >> situtations. > > > > > >> > > > > > >> -- > > > > > >> David Grant > > > > > > > > > > > > > > > >------------------------------------------------------------------------- > > > > > >Take Surveys. Earn Cash. Influence the Future of IT > > > > > >Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > > > >your > > > > > >opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > >_______________________________________________ > > > > > >Numpy-discussion mailing list > > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > Take Surveys. Earn Cash. Influence the Future of IT > > > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > your > > > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > _______________________________________________ > > > > > Numpy-discussion mailing list > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > -- > > > > -- > > > > Scott M. Ransom Address: NRAO > > > > Phone: (434) 296-0320 520 Edgemont Rd. 
> > > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From oliphant.travis at ieee.org Fri Aug 4 15:07:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 04 Aug 2006 13:07:21 -0600 Subject: [Numpy-discussion] Backward compatibility plans Message-ID: <44D39AE9.6060106@ieee.org> For backward-compatibility with Numeric and Numarray I'm leaning to the following plan: * Do not create compatibility array objects. I initially thought we could sub-class in order to create objects that had the expected attributes and methods of Numeric arrays or Numarray arrays. After some experimentation, I'm ditching this plan. I think this would create too many array-like objects floating around and make unification even harder as these objects interact in difficult-to-predict ways. Instead, I'm planning to: 1) Create compatibility functions in oldnumeric and numarray sub-packages that create NumPy arrays but do it with the same function syntax as the old packages. 2) Create 4 scripts for assisting in conversion (2 for Numeric and 2 for Numarray). a) An initial script that just alters imports (to the compatibility layer) and fixes method and attribute access. 
b) A secondary script that alters the imports from the compatibility layer and fixes as much as possible the things that need to change in order to make the switch away from the compatibility layer to work correctly. While it is not foolproof, I think this will cover most of the issues and make conversion relatively easy. This will also let us develop NumPy without undue concern for compatibility with older packages. This must all be in place before 1.0 release candidate 1 comes out. Comments and criticisms welcome. -Travis From haase at msg.ucsf.edu Fri Aug 4 18:35:51 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 4 Aug 2006 15:35:51 -0700 Subject: [Numpy-discussion] a**2 60 times slower than a*a - ONLY for int32 Message-ID: <200608041535.51321.haase@msg.ucsf.edu> Hi, >>> a=N.random.poisson(N.arange(1e6)+1) >>> U.timeIt('a**2') 0.59 >>> U.timeIt('a*a') 0.01 >>> a.dtype int32 my U.timeIt function just returns the difference of time in seconds before and after evaluation of the string. For >>> c=N.random.normal(1000, 100, 1e6) >>> c.dtype float64 i get .014 seconds for either c*c or c**2 (I averaged over 100 runs). After converting this to float32 I get 0.008 secs for both. Can the int32 case be speed up the same way !? Thanks, Sebastian Haase From charlesr.harris at gmail.com Fri Aug 4 19:34:09 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Aug 2006 17:34:09 -0600 Subject: [Numpy-discussion] Backward compatibility plans In-Reply-To: <44D39AE9.6060106@ieee.org> References: <44D39AE9.6060106@ieee.org> Message-ID: Hi Travis, I wonder if it is possible to adapt these modules so they can flag all the incompatibilities, maybe with a note on the fix. This would be a useful tool for those having to port code. That might not be the easiest route to go but at least there is a partial list of the functions involved. 
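A minimal version of the flagging pass suggested here might look like the following sketch — the rename table, hints, and function name are invented for illustration and are not the actual conversion scripts:

```python
import re

# Hypothetical table mapping old-package spellings to hints about the
# NumPy replacement; real conversion scripts would carry a longer list.
RENAMES = {
    r"\bNumeric\b": "use the numpy.oldnumeric compatibility layer",
    r"\btypecode\s*\(\s*\)": "NumPy spells this .dtype.char",
}

def flag_incompatibilities(source):
    """Return (line_number, matched_text, hint) for each suspect spelling."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, hint in RENAMES.items():
            for match in re.finditer(pattern, line):
                hits.append((lineno, match.group(0), hint))
    return hits
```

A pass like this only reports; a second-stage script of the kind described would additionally rewrite the matched text in place.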
Chuck On 8/4/06, Travis Oliphant wrote: > > > For backward-compatibility with Numeric and Numarray I'm leaning to the > following plan: > > * Do not create compatibility array objects. I initially thought we > could sub-class in order to > create objects that had the expected attributes and methods of Numeric > arrays or Numarray arrays. After some experimentation, I'm ditching > this plan. I think this would create too many array-like objects > floating around and make unification even harder as these objects > interact in difficult-to-predict ways. > > Instead, I'm planning to: > > 1) Create compatibility functions in oldnumeric and numarray > sub-packages that create NumPy arrays but do it with the same function > syntax as the old packages. > > 2) Create 4 scripts for assisting in conversion (2 for Numeric and 2 for > Numarray). > > a) An initial script that just alters imports (to the compatibility > layer) > and fixes method and attribute access. > > b) A secondary script that alters the imports from the compatibility > layer > and fixes as much as possible the things that need to change in > order to > make the switch away from the compatibility layer to work > correctly. > > > While it is not foolproof, I think this will cover most of the issues > and make conversion relatively easy. This will also let us develop > NumPy without undue concern for compatibility with older packages. > > This must all be in place before 1.0 release candidate 1 comes out. > > Comments and criticisms welcome. > > -Travis > > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Wellary at yahoo.com Fri Aug 4 22:45:08 2006 From: Wellary at yahoo.com (Larry Welenc) Date: Fri, 4 Aug 2006 19:45:08 -0700 Subject: [Numpy-discussion] ImportError: cannot import name oldnumeric Message-ID: I receive an error message when trying to import scipy: import scipy File "C:\Python24\Lib\site-packages\scipy\__init__.py", line 32, in -toplevel- from numpy import oldnumeric ImportError: cannot import name oldnumeric Numpy is installed. How to I correct this problem? Larry W -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Sat Aug 5 03:59:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 05 Aug 2006 01:59:33 -0600 Subject: [Numpy-discussion] SciPy SVN and NumPy SVN should work together now Message-ID: <44D44FE5.7030808@ieee.org> I've finished the updates to backward compatibility to Numeric. SciPy passes all tests. Please report any outstanding issues you may encounter. It would be nice to remove dependency on oldnumeric from SciPy entirely. -Travis
From fullung at gmail.com Sat Aug 5 18:11:23 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 00:11:23 +0200 Subject: [Numpy-discussion] NumPy documentation Message-ID: Hello all With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation a bit before the final release. Some things we might want to think about: 1. Documentation Sprint This page: http://www.scipy.org/SciPy2006/CodingSprints mentions a possible Documentation Sprint at SciPy 2006. Does anybody know if this is going to happen? 2. Tickets for missing functions missing docstrings Would it be helpful to create tickets for functions that currently don't have docstrings? If not, is there a better way we can keep track of the state of the documentation? 3. Examples in documentation Do we want to include examples in the docstrings? Some functions already do, and I think think this can be quite useful when one is exploring the library.
Maybe the example list: http://www.scipy.org/Numpy_Example_List should be incorporated into the docstrings? Then we can also set up doctests to make sure that all the examples really work. 4. Documentation format If someone wants to submit documentation to be included, say as patches attached to tickets, what kind of format do we want? There's already various PEPs dealing with this topic: Docstring Processing System Framework http://www.python.org/dev/peps/pep-0256/ Docstring Conventions http://www.python.org/dev/peps/pep-0257/ Docutils Design Specification http://www.python.org/dev/peps/pep-0258/ reStructuredText Docstring Format http://www.python.org/dev/peps/pep-0287/ 5. Documentation tools A quick search turned up docutils: http://docutils.sourceforge.net/ and epydoc: http://epydoc.sourceforge.net/ Both of these support restructured text, so that looks like the way to go. I think epydoc can handle LaTeX equations and some LaTeX support has also been added to docutils recently. This might be useful for describing some functions. Something else to consider is pydoc compatibility. NumPy currently breaks pydoc: http://projects.scipy.org/scipy/numpy/ticket/232 It also breaks epydoc 3.0a2 (maybe an epydoc bug): http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 455&atid=405618 Anything else? How should we proceed to improve NumPy's documentation? Regards, Albert From gruben at bigpond.net.au Sat Aug 5 22:28:19 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sun, 06 Aug 2006 12:28:19 +1000 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: <44D553C3.4010107@bigpond.net.au> All excellent suggestions Albert. What about creating a numpy version of either the main Numeric or numarray document? I would like to see examples included in numpy of all functions. 
However, I think a better way to do this would be to place all examples in a separate module and create a function such as example() which would then allow something like example(arange) to spit out the example code. This would make it easier to include multiple examples for each command and to actually execute the example code, which I think is a necessary ability to make the examples testable. Examples could go in like doctests with some sort of delimiting so that they can have numbers generated and be referred to, so that you could execute, say, the 3rd example for the arange() function. Perhaps a runexample() function should be created for this or perhaps provide arguments for the example() function like example(name, number, run) The Maxima CAS package has something like this and also has an apropos() command which lists commands with similar sounding names to the argument. We could implement something similar but better by searching the examples module for similar commands, but also listing "See Also" cross references like those in the Numpy_Example_List, Gary R. Albert Strasheim wrote: > Hello all > > With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation > a bit before the final release. Some things we might want to think about: > > 1. Documentation Sprint > > This page: > > http://www.scipy.org/SciPy2006/CodingSprints > > mentions a possible Documentation Sprint at SciPy 2006. Does anybody know if > this is going to happen? > > 2. Tickets for missing functions missing docstrings > > Would it be helpful to create tickets for functions that currently don't > have docstrings? If not, is there a better way we can keep track of the > state of the documentation? > > 3. Examples in documentation > > Do we want to include examples in the docstrings? Some functions already do, > and I think think this can be quite useful when one is exploring the > library. 
> > Maybe the example list: > > http://www.scipy.org/Numpy_Example_List > > should be incorporated into the docstrings? Then we can also set up doctests > to make sure that all the examples really work. > > 4. Documentation format > > If someone wants to submit documentation to be included, say as patches > attached to tickets, what kind of format do we want? > > There's already various PEPs dealing with this topic: > > Docstring Processing System Framework > http://www.python.org/dev/peps/pep-0256/ > > Docstring Conventions > http://www.python.org/dev/peps/pep-0257/ > > Docutils Design Specification > http://www.python.org/dev/peps/pep-0258/ > > reStructuredText Docstring Format > http://www.python.org/dev/peps/pep-0287/ > > 5. Documentation tools > > A quick search turned up docutils: > > http://docutils.sourceforge.net/ > > and epydoc: > > http://epydoc.sourceforge.net/ > > Both of these support restructured text, so that looks like the way to go. I > think epydoc can handle LaTeX equations and some LaTeX support has also been > added to docutils recently. This might be useful for describing some > functions. > > Something else to consider is pydoc compatibility. NumPy currently breaks > pydoc: > > http://projects.scipy.org/scipy/numpy/ticket/232 > > It also breaks epydoc 3.0a2 (maybe an epydoc bug): > > http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 > 455&atid=405618 > > Anything else? How should we proceed to improve NumPy's documentation? > > Regards, > > Albert From davidgrant at gmail.com Sat Aug 5 23:45:49 2006 From: davidgrant at gmail.com (David Grant) Date: Sat, 5 Aug 2006 20:45:49 -0700 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: What about the documentation that already exists here: http://www.tramy.us/ I think the more people that buy it the better since that money goes to support Travis does it not? 
Dave On 8/5/06, Albert Strasheim wrote: > > Hello all > > With NumPy 1.0 mere weeks away, I'm hoping we can improve the > documentation > a bit before the final release. Some things we might want to think about: > > 1. Documentation Sprint > > This page: > > http://www.scipy.org/SciPy2006/CodingSprints > > mentions a possible Documentation Sprint at SciPy 2006. Does anybody know > if > this is going to happen? > > 2. Tickets for missing functions missing docstrings > > Would it be helpful to create tickets for functions that currently don't > have docstrings? If not, is there a better way we can keep track of the > state of the documentation? > > 3. Examples in documentation > > Do we want to include examples in the docstrings? Some functions already > do, > and I think think this can be quite useful when one is exploring the > library. > > Maybe the example list: > > http://www.scipy.org/Numpy_Example_List > > should be incorporated into the docstrings? Then we can also set up > doctests > to make sure that all the examples really work. > > 4. Documentation format > > If someone wants to submit documentation to be included, say as patches > attached to tickets, what kind of format do we want? > > There's already various PEPs dealing with this topic: > > Docstring Processing System Framework > http://www.python.org/dev/peps/pep-0256/ > > Docstring Conventions > http://www.python.org/dev/peps/pep-0257/ > > Docutils Design Specification > http://www.python.org/dev/peps/pep-0258/ > > reStructuredText Docstring Format > http://www.python.org/dev/peps/pep-0287/ > > 5. Documentation tools > > A quick search turned up docutils: > > http://docutils.sourceforge.net/ > > and epydoc: > > http://epydoc.sourceforge.net/ > > Both of these support restructured text, so that looks like the way to go. > I > think epydoc can handle LaTeX equations and some LaTeX support has also > been > added to docutils recently. This might be useful for describing some > functions. 
> > Something else to consider is pydoc compatibility. NumPy currently breaks > pydoc: > > http://projects.scipy.org/scipy/numpy/ticket/232 > > It also breaks epydoc 3.0a2 (maybe an epydoc bug): > > > http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 > 455&atid=405618 > > Anything else? How should we proceed to improve NumPy's documentation? > > Regards, > > Albert > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Aug 6 03:51:54 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 06 Aug 2006 02:51:54 -0500 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: David Grant wrote: > What about the documentation that already exists here: http://www.tramy.us/ Essentially every function and class needs a docstring whether or not there is a manual available. Neither one invalidates the need for the other. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From davidgrant at gmail.com Sun Aug 6 04:14:58 2006 From: davidgrant at gmail.com (David Grant) Date: Sun, 6 Aug 2006 01:14:58 -0700 Subject: [Numpy-discussion] divmod issue Message-ID: The following lines of code: from numpy import floor div, mod = divmod(floor(1.5), 12) generate an exception: ValueError: need more than 0 values to unpack in numpy-0.9.8. Does anyone else see this? It might be due to the fact that floor returns a float64scalar. Should I be forced to cast that to an int before calling divmod with it? -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Aug 6 04:18:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 06 Aug 2006 03:18:23 -0500 Subject: [Numpy-discussion] divmod issue In-Reply-To: References: Message-ID: David Grant wrote: > The following lines of code: > > from numpy import floor > div, mod = divmod(floor(1.5), 12) > > generate an exception: > > ValueError: need more than 0 values to unpack > > in numpy-0.9.8. Does anyone else see this? It might be due to the fact > that floor returns a float64scalar. Should I be forced to cast that to > an int before calling divmod with it? I don't see an exception with a more recent numpy (r2881, to be precise). Please try a later version. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From svetosch at gmx.net Sun Aug 6 15:03:32 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sun, 06 Aug 2006 21:03:32 +0200 Subject: [Numpy-discussion] fixing diag() for matrices In-Reply-To: References: <44C64AA2.7070906@gmx.net> <44C652C0.9040806@gmx.net> <44CA0716.2000707@gmx.net> <44CA3EC0.9020404@noaa.gov> <44CA9283.5030108@gmx.net> Message-ID: <44D63D04.9060600@gmx.net> Charles R Harris schrieb: > Hi Sven, > > On 7/28/06, *Sven Schreiber* > wrote: > > Here's my attempt at summarizing the diag-discussion. > > > > > 2) Deprecate the use of diag which is overloaded with making diagonal > matrices as well as getting diagonals. Instead, use the existing > .diagonal() for getting a diagonal, and introduce a new make_diag() > function which could easily work for numpy-arrays and numpy-matrices > alike. > > > This would be my preference, but with functions {get,put}diag. We could > also add a method or function asdiag, which would always return a > diagonal matrix made from *all* the elements of the matrix taken in > order. For (1,n) or (n,1) this would do what you want. For other > matrices the result would be something new and probably useless, but at > least it wouldn't hurt. > This seems to have been implemented now by the new diagflat() function. So, matrix users can now use m.diagonal() for the matrix->vector direction of diag(), and diagflat(v) for the vector->matrix side of diag(), and always get numpy-matrix output for numpy-matrix input. Thanks a lot for making this possible! One (really minor) comment: "diagflat" as a name is not optimal imho. Are other suggestions welcome, or is there a compelling reason for this name? Thanks, sven From wbaxter at gmail.com Mon Aug 7 01:02:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 7 Aug 2006 14:02:05 +0900 Subject: [Numpy-discussion] comments on r_ and c_ ? 
In-Reply-To: <44CE3EE5.1000904@ieee.org> References: <44CE3EE5.1000904@ieee.org> Message-ID: On 8/1/06, Travis Oliphant wrote: > Bill Baxter wrote: > > When you have a chance, could the powers that be make some comment on > > the r_ and c_ situation? > r_ and c_ were in SciPy and have been there for several years. > > For NumPy, c_ has been deprecated (but not removed because it is used in > SciPy). > > The functionality of c_ is in r_ so it doesn't add anything. I don't see how r_ offers the ability to stack columns like this:

>>> c_[ [[0],[1],[2]], [[4],[5],[6]] ]
array([[0, 4],
       [1, 5],
       [2, 6]])

> There is going to be overlap with long-name functions because of > this. I have not had time to review Bill's suggestions yet --- were > they filed as a ticket? A ticket is the best way to keep track of > issues at this point. I just filed it as #235. But then I noticed I had already filed it previously as #201. Sorry about that. Anyway it's definitely in there now. Regards, --Bill

From klemm at phys.ethz.ch Mon Aug 7 04:52:58 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Mon, 07 Aug 2006 10:52:58 +0200 Subject: [Numpy-discussion] numpy compilation question Message-ID: Hello, I am trying to compile numpy-1.0b1 with blas and lapack support. I have compiled blas and lapack according to the instructions in http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 . I copied the libraries to /scratch/python2.4/lib and set the environment variables accordingly. python setup.py config in the numpy directory then finds the libraries. If I then do python setup.py build, the compilation dies with the error message: ..
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x28ae): In function `dotblas_vdot': numpy/core/blasdot/_dotblas.c:971: undefined reference to `PyArg_ParseTuple'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b45):numpy/core/blasdot/_dotblas.c:1002: undefined reference to `PyTuple_New'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b59):numpy/core/blasdot/_dotblas.c:83: undefined reference to `PyArg_ParseTuple'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b6d):numpy/core/blasdot/_dotblas.c:107: undefined reference to `_Py_NoneStruct'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cba):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyExc_ValueError'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cc9):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyErr_SetString'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d1c):numpy/core/blasdot/_dotblas.c:1029: undefined reference to `PyEval_SaveThread'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d3f):numpy/core/blasdot/_dotblas.c:1049: undefined reference to `PyEval_RestoreThread'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d63):numpy/core/blasdot/_dotblas.c:1045: undefined reference to `cblas_cdotc_sub'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d84):numpy/core/blasdot/_dotblas.c:1041: undefined reference to `cblas_zdotc_sub'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2da1):numpy/core/blasdot/_dotblas.c:1037: undefined reference to `cblas_sdot'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2dc6):numpy/core/blasdot/_dotblas.c:1033: undefined reference to `cblas_ddot'
/usr/lib/gcc-lib/x86_64-redhat-linux/3.2.3/libfrtbegin.a(frtbegin.o)(.text+0x22): In function `main': : undefined reference to `MAIN__'
collect2: ld returned 1 exit status
error:
Command "/usr/bin/g77 -L/scratch/apps/lib build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/scratch/python2.4/lib -lfblas -lg2c -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1

I am trying this on a dual processor Xeon machine with gcc 3.2.3 under an old redhat distribution. Therefore, using the libraries delivered with the distro doesn't work, as they are broken. At first I tried to compile numpy with atlas support but I got similar problems. I have attached the full output of the failed build. I would be very grateful if somebody with a little more experience with compilers could have a look at it and maybe point me in the right direction. Many thanks in advance, Hanno -- Hanno Klemm klemm at phys.ethz.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log.gz Type: application/x-gzip Size: 4478 bytes Desc: not available URL:

From hjn253 at tom.com Thu Aug 10 05:14:17 2006 From: hjn253 at tom.com (=?GB2312?B?IjjUwjE5LTIwyNUvsbG+qSI=?=) Date: Thu, 10 Aug 2006 17:14:17 +0800 Subject: [Numpy-discussion] =?GB2312?B?cmU61MvTw0VYQ0VMus1QUFS4xL34udzA7brNvq3Tqr72st8=?= Message-ID: An HTML attachment was scrubbed... URL:

From david.huard at gmail.com Mon Aug 7 08:48:52 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 7 Aug 2006 08:48:52 -0400 Subject: [Numpy-discussion] Histogram versus histogram2d In-Reply-To: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> References: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> Message-ID: <91cf711d0608070548j2ebda5bat1a92a1932a04388b@mail.gmail.com> > I have noticed that the 1d histogram and 2d histogram functions differ. The > histogram function bins everything between the elements of edges, and > then includes everything greater than the last edge element in the > last bin. The histogram2d function only bins in the range specified > by edges. Is there a reason these two functions do not operate in the > same way?
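(As a present-day aside: the two functions did eventually converge. In modern NumPy, histogram with an explicit edges array clips to that range just as histogram2d does, so samples beyond the last edge are dropped rather than lumped into the last bin. A quick check against current NumPy, not the 0.9.x behavior described in the question:)

```python
import numpy as np

data = np.array([0.5, 1.5, 2.5, 3.5])
counts, edges = np.histogram(data, bins=[0, 1, 2, 3])

# 3.5 lies beyond the last edge and is simply dropped, leaving
# one sample in each of the three bins defined by the edges.
```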
>

Hi Mikolai,

The reason is that I didn't like the way histogram handled outliers, so I wrote histogram1d, histogram2d, and histogramdd to handle 1d, 2d and nd data series. I submitted those functions and only histogram2d got included in numpy, hence the clash. Travis suggested that histogram1d and histogramdd could go into scipy, but with the new compatibility paradigm, I suggest that the old histogram is moved into the compatibility module and histogram1d is renamed to histogram and put into the main namespace. histogramdd could indeed go into scipy.stats. I'll submit a new patch if there is some interest. The new function takes an axis argument so you can make a histogram out of an nd array rowwise or columnwise. Outliers are not counted, and the bin array has length (nbin + 1) (+1 for the right hand side edge). The new function will break some code relying on the old behavior, so its inclusion presupposes the agreement of the users. You can find the code at ticket 189. David -------------- next part -------------- An HTML attachment was scrubbed... URL:

From meesters at uni-mainz.de Mon Aug 7 13:29:44 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 7 Aug 2006 19:29:44 +0200 Subject: [Numpy-discussion] numpy and unittests Message-ID: <200608071929.44796.meesters@uni-mainz.de> Hi, I used to work with some unittest scripts for a bigger project of mine. Now that I started the project again the tests don't work anymore, using numpy version '0.9.5.2100'.
The errors I get look like this:

ERROR: _normalize() should return dataset scaled between 0 and 1
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testingSAXS.py", line 265, in testNormalization
    self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5)
  File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual
    if not first == second:
  File "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", line 174, in __eq__
    if self.intensity == other.intensity:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

The 'self.intensity' objects are 1D-arrays containing integers <= 1E6. The unittest script looks like:

if __name__=='__main__':
    from Data import *
    from Utils import *
    import unittest

    def test__eq__(self):
        """__eq__ should return True with identical array data"""
        self.assert_(self.test1 == self.test2)

    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(Test_SAXS_Sanity))
    unittest.TextTestRunner(verbosity=1).run(suite)

Any ideas what I have to change? (Possibly trivial, but I have no clue.) TIA Cheers Christian

From robert.kern at gmail.com Mon Aug 7 14:04:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 07 Aug 2006 13:04:24 -0500 Subject: [Numpy-discussion] numpy and unittests In-Reply-To: <200608071929.44796.meesters@uni-mainz.de> References: <200608071929.44796.meesters@uni-mainz.de> Message-ID: Christian Meesters wrote: > Hi, > > I used to work with some unittest scripts for a bigger project of mine. Now > that I started the project again the tests don't work anymore, using numpy > version '0.9.5.2100' .
> > The errors I get look are like this: > > ERROR: _normalize() should return dataset scaled between 0 and 1 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "testingSAXS.py", line 265, in testNormalization > > self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5) > File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual > if not first == second: > File > "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", > line 174, in __eq__ > if self.intensity == other.intensity: > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > The 'self.intensity' objects are 1D-arrays containing integers <= 1E6. > > The unittest script looks like: > > if __name__=='__main__': > from Data import * > from Utils import * > import unittest > > > def test__eq__(self): > """__eq__ should return True with identical array data""" > self.assert_(self.test1 == self.test2) > > suite = unittest.TestSuite() > suite.addTest(unittest.makeSuite(Test_SAXS_Sanity)) > > unittest.TextTestRunner(verbosity=1).run(suite) > > Any ideas what I have to change? (Possibly trivial, but I have no clue.) self.assert_((self.test1 == self.test2).all()) I'm afraid that your test was always broken. Numeric used the convention that if *any* value in a boolean array was True, then the array would evaluate to True when used as a truth value in an if: clause. However, you almost certainly wanted to test that *all* of the values were True. This is why we now raise an exception; lots of people got tripped up over that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
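Robert's point can be condensed into a few lines; a minimal sketch of both the failure mode and the suggested fix:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 2, 3])

eq = a == b            # elementwise comparison -> array of three booleans
try:
    bool(eq)           # what unittest's assertEqual effectively attempts
    ambiguous = False
except ValueError:     # "truth value ... is ambiguous. Use a.any() or a.all()"
    ambiguous = True

# The fix: reduce the boolean array explicitly before asserting.
all_equal = bool(eq.all())
```

In a test this becomes self.assert_((a == b).all()), as suggested above.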
-- Umberto Eco From wbaxter at gmail.com Mon Aug 7 23:18:17 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 8 Aug 2006 12:18:17 +0900 Subject: [Numpy-discussion] Examples of basic C API usage? Message-ID: I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an example of just a simple straightforward usage of the C-API. For instance make a few arrays by hand in C and then call numpy.multiply() on them. So far my attempts to call PyArray_SimpleNewFromData all result in segfaults. Anyone have such an example? --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From klemm at phys.ethz.ch Tue Aug 8 05:22:46 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 08 Aug 2006 11:22:46 +0200 Subject: [Numpy-discussion] numpy import problem Message-ID: Hello, finally after sorting out some homemade problems I managed to compile numpy-1.0b1. If I then start it from the directory where I compiled it, it works fine. However after I installed numpy with python setup.py install --prefix=/scratch/python2.4 I get the error message: Python 2.4.3 (#7, Aug 2 2006, 18:55:46) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/scratch/python2.4/lib/python2.4/site-packages/numpy/__init__.py", line 39, in ? import linalg File "/scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? from linalg import * File "/scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 24, in ? from numpy.linalg import lapack_lite ImportError: /scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: atl_f77wrap_zgemv__ >>> I suppose I have to set a path somewhere to the directory where atlas is installed. How do I do this? 
Hanno

-- Hanno Klemm klemm at phys.ethz.ch

From mikeyan at yahoo.co.jp Tue Aug 8 06:58:18 2006 From: mikeyan at yahoo.co.jp (=?iso-2022-jp?B?bWlrZQ==?=) Date: Tue, 08 Aug 2006 10:58:18 -0000 Subject: [Numpy-discussion] (no subject) Message-ID:

From karol.langner at kn.pl Tue Aug 8 09:45:49 2006 From: karol.langner at kn.pl (Karol Langner) Date: Tue, 8 Aug 2006 15:45:49 +0200 Subject: [Numpy-discussion] Examples of basic C API usage? In-Reply-To: References: Message-ID: <200608081545.50274.karol.langner@kn.pl> On Tuesday 08 of August 2006 05:18, Bill Baxter wrote: > I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an > example of just a simple straightforward usage of the C-API. > For instance make a few arrays by hand in C and then call numpy.multiply() > on them. So far my attempts to call PyArray_SimpleNewFromData all result > in segfaults. > Anyone have such an example? > > --Bill Have you looked here? http://numeric.scipy.org/numpydoc/numpy-13.html#pgfId-36640 Karol -- written by Karol Langner Tue Aug 8 15:45:16 CEST 2006

From ggumas at gmail.com Tue Aug 8 17:02:32 2006 From: ggumas at gmail.com (George Gumas) Date: Tue, 8 Aug 2006 17:02:32 -0400 Subject: [Numpy-discussion] numpy and matplotlib Message-ID: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> I downloaded numpy 10000 and matplotlib and when running numpy I get the error message below:

from matplotlib._ns_cntr import *
RuntimeError: module compiled against version 90709 of C-API but this version of numpy is 1000000

How do I go about changing the version of either numpy or matplotlib? Thanks George -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dd55 at cornell.edu Tue Aug 8 17:11:49 2006 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 8 Aug 2006 17:11:49 -0400 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: <200608081711.49315.dd55@cornell.edu> On Tuesday 08 August 2006 17:02, George Gumas wrote: > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib This question is more appropriate for the mpl list, and it was discussed there late last week. The next matplotlib release will support numpy beta 1 and 2. Darren From wbaxter at gmail.com Tue Aug 8 17:11:57 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 06:11:57 +0900 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: Matplotlib needs to be recompiled against the latest Numpy. They should release a new version compatible with Numpy 1.0 beta soon. --bb On 8/9/06, George Gumas wrote: > > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib > > Thanks > George > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fullung at gmail.com Tue Aug 8 17:24:08 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 8 Aug 2006 23:24:08 +0200 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes Message-ID: Hello all With the nice ctypes integration in NumPy, and with Python 2.5 which will include ctypes around the corner, a remote possibility exists that within the next year or two, I might not be the only person that wants to use NumPy with ctypes. This is probably going to mean that this someone is going to want to build a shared library for use with ctypes. This is all well and good if you're using a build tool that knows about shared libraries, but in case this person is stuck with distutils, here is what we might want to do. Following this thread from SciPy-dev: http://projects.scipy.org/pipermail/scipy-dev/2006-April/005708.html I came up with the following plan. As it happens, pretending your shared library is a Python extension mostly works. In your setup.py you can do something like this: config = Configuration(package_name,parent_package,top_path) config.add_extension('libsvm_', define_macros=[('LIBSVM_EXPORTS', None), ('LIBSVM_DLL', None)], sources=[join('libsvm-2.82', 'svm.cpp')], depends=[join('libsvm-2.82', 'svm.h')]) First caveat: on Windows, distutils forces the linker to look for an exported symbol called init. 
In your code you'll have to add an empty function like this:

void initlibsvm_() {}

This gets us a compiled Python extension, which also happens to be a shared library on every platform I know of, which is Linux and Windows. Counter-examples, anyone? Next caveat: on Windows, shared libraries, aka DLLs, typically have a .dll extension. However, Python extensions have a .pyd extension. We have a utility function in NumPy called ctypes_load_library which handles finding and loading of shared libraries with ctypes. Currently, shared library extensions (.dll, .so, .dylib) are hardcoded in this function. I propose we modify this function to look something like this:

def ctypes_load_library(libname, loader_path, distutils_hack=False):
    ...

If distutils_hack is True, instead of the default mechanism (which is currently hardcoded extensions), ctypes_load_library should do:

import distutils.sysconfig
so_ext = distutils.sysconfig.get_config_var('SO')

to figure out the extension it should use to load shared libraries. This should make it reasonably easy for people to build shared libraries with distutils and use them with NumPy and ctypes. Comments appreciated. Someone checking something along these lines into SVN appreciated more. A solution that doesn't make me want to cry appreciated most. Thanks for reading. Regards, Albert

P.S. As it happens, the OOF2 guys have already created a SharedLibrary builder for distutils, but integrating this into numpy.distutils is probably non-trivial. http://www.ctcms.nist.gov/oof/oof2.html

From wbaxter at gmail.com Tue Aug 8 17:25:10 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 06:25:10 +0900 Subject: [Numpy-discussion] Examples of basic C API usage? In-Reply-To: <200608081545.50274.karol.langner@kn.pl> References: <200608081545.50274.karol.langner@kn.pl> Message-ID: Ah, great. That is helpful, though it does seem to be a bit outdated.
--bb

On 8/8/06, Karol Langner wrote: > > On Tuesday 08 of August 2006 05:18, Bill Baxter wrote: > > I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be > an > > example of just a simple straightforward usage of the C-API. > > For instance make a few arrays by hand in C and then call numpy.multiply > () > > on them. So far my attempts to call PyArray_SimpleNewFromData all > result > > in segfaults. > > Anyone have such an example? > > > > --Bill > > Have you looked here? > > http://numeric.scipy.org/numpydoc/numpy-13.html#pgfId-36640 > > Karol > > -- > written by Karol Langner > wto sie 8 15:45:16 CEST 2006

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From wbaxter at gmail.com Tue Aug 8 20:22:37 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 09:22:37 +0900 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: On 8/9/06, Albert Strasheim wrote: > Next caveat: on Windows, shared libraries aka DLLs, typically have a .dll > extension. However, Python extensions have a .pyd extension. > > We have a utility function in NumPy called ctypes_load_library which > handles > finding and loading of shared libraries with ctypes. Currently, shared > library extensions (.dll, .so, .dylib) are hardcoded in this function.
> > I propose we modify this function to look something like this: > > def ctypes_load_library(libname, loader_path, distutils_hack=False): > ... > > If distutils_hack is True, instead of the default mechanism (which is > currently hardcoded extensions), ctypes_load_library should do: > > import distutils.config > so_ext = distutils.sysconfig.get_config_var('SO') > > to figure out the extension it should use to load shared libraries. This > should make it reasonably easy for people to build shared libraries with > distutils and use them with NumPy and ctypes. Wouldn't it make more sense to just rename the .pyd generated by distutils to .dll or .so? Especially since the .pyd generated by distutils won't actually be a python extension module. This renaming could be automated by a simple python script that wraps distutils. The addition of the init{modulename} function could also be done by that script. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From cwmoad at gmail.com Tue Aug 8 20:52:06 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Tue, 8 Aug 2006 20:52:06 -0400 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: <6382066a0608081752q51300fder958703d566881f4f@mail.gmail.com> We're waiting on some possible changes in the numpy c-api before scipy. Hopefully we will have a working release in the next week. On 8/8/06, Bill Baxter wrote: > Matplotlib needs to be recompiled against the latest Numpy. > They should release a new version compatible with Numpy 1.0 beta soon. 
> --bb > > > On 8/9/06, George Gumas wrote: > > > > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib > > Thanks > George

From robert.kern at gmail.com Tue Aug 8 21:47:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 08 Aug 2006 20:47:03 -0500 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: Bill Baxter wrote: > Wouldn't it make more sense to just rename the .pyd generated by > distutils to .dll or .so?
Especially since the .pyd generated by > distutils won't actually be a python extension module. This renaming > could be automated by a simple python script that wraps distutils. The > addition of the init{modulename} function could also be done by that > script. The strategy of "post-processing" after the setup() is not really robust. I've encountered a number of packages that try to do things like that, and I've never had one work right. And no, it won't solve the init{modulename} problem, either. It's a problem that occurs at build-time, not import-time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."

-- Umberto Eco

From strawman at astraw.com Tue Aug 8 21:52:12 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 08 Aug 2006 18:52:12 -0700 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: <44D93FCC.3070405@astraw.com> Dear Albert, I have started to use numpy and ctypes together and I've been quite pleased. Thanks for your efforts and writings on the wiki. On the topic of ctypes but not directly following from your email: I noticed immediately that the .ctypes attribute of an array is going to be a de-facto array interface, and wondered whether it would actually be better to write some code that takes the __array_struct__ interface and exposes that as an object with ctypes-providing attributes. This way, it could be used by all software exposing the __array_struct__ interface. Still, even with today's implementation, this could be achieved with numpy.asarray( my_array_struct_object ).ctypes. Back to your email: I don't understand why you're trying to build a shared library with distutils. What's wrong with a plain old c-compiler and linker (and mt.exe if you're using MS VC 8)?
You can build shared libraries this way with Makefiles, scons, Visual Studio, and about a billion other solutions that have evolved since early C days. I can understand the desire of getting "python setup.py install" to work, but I suspect spawning an appropriate subprocess to do the compilation would be easier and more robust than attempting to get distutils to do something it's not designed for. (Then again, to see what numpy distutils can do, well, let's just say I'm amazed.) Along these lines, I noticed that ctypes-itself seems to have put some hooks into setup.py to perform at least part of the configure/make dance on linux, although I haven't investigated any further yet. Perhaps that's a better way to go than bending distutils to your will? Finally, the ctypes_load_library() function was broken for me and so I just ended up using the appropriate ctypes calls directly. (I should report this bug, I know, and I haven't yet... Bad Andrew.) But the bigger issue for me is that this is a ctypes-level convenience function, and I can't see why it should be in numpy. Is there any reason it should go in numpy and not into ctypes itself where it would surely receive more review and widespread use if it's useful? Albert Strasheim wrote: >Hello all > >With the nice ctypes integration in NumPy, and with Python 2.5 which will >include ctypes around the corner, a remote possibility exists that within >the next year or two, I might not be the only person that wants to use NumPy >with ctypes. > >This is probably going to mean that this someone is going to want to build a >shared library for use with ctypes. This is all well and good if you're >using a build tool that knows about shared libraries, but in case this >person is stuck with distutils, here is what we might want to do. > >Following this thread from SciPy-dev: > >http://projects.scipy.org/pipermail/scipy-dev/2006-April/005708.html > >I came up with the following plan. 
>
>As it happens, pretending your shared library is a Python extension mostly
>works. In your setup.py you can do something like this:
>
>config = Configuration(package_name,parent_package,top_path)
>config.add_extension('libsvm_',
> define_macros=[('LIBSVM_EXPORTS', None),
> ('LIBSVM_DLL', None)],
> sources=[join('libsvm-2.82', 'svm.cpp')],
> depends=[join('libsvm-2.82', 'svm.h')])
>
>First caveat: on Windows, distutils forces the linker to look for an
>exported symbol called init{modulename}. In your code you'll have to
>add an empty function like this:
>
>void initlibsvm_() {}
>
>This gets us a compiled Python extension, which also happens to be a shared
>library on every platform I know of, which is Linux and Windows.
>Counter-examples, anyone?
>
>Next caveat: on Windows, shared libraries, aka DLLs, typically have a .dll
>extension. However, Python extensions have a .pyd extension.
>
>We have a utility function in NumPy called ctypes_load_library which handles
>finding and loading of shared libraries with ctypes. Currently, shared
>library extensions (.dll, .so, .dylib) are hardcoded in this function.
>
>I propose we modify this function to look something like this:
>
>def ctypes_load_library(libname, loader_path, distutils_hack=False):
> ...
>
>If distutils_hack is True, instead of the default mechanism (which is
>currently hardcoded extensions), ctypes_load_library should do:
>
>import distutils.sysconfig
>so_ext = distutils.sysconfig.get_config_var('SO')
>
>to figure out the extension it should use to load shared libraries. This
>should make it reasonably easy for people to build shared libraries with
>distutils and use them with NumPy and ctypes.
>
>Comments appreciated. Someone checking something along these lines into SVN
>appreciated more. A solution that doesn't make me want to cry appreciated
>most.
>
>Thanks for reading.
>
>Regards,
>
>Albert
>
>P.S.
As it happens, the OOF2 guys have already created a SharedLibrary
>builder for distutils, but integrating this into numpy.distutils is probably
>non-trivial.
>
>http://www.ctcms.nist.gov/oof/oof2.html
>
>

From matthew.brett at gmail.com Tue Aug 8 21:52:11 2006
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 9 Aug 2006 02:52:11 +0100
Subject: [Numpy-discussion] astype char conversion
Message-ID: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com>

Hi,

Sorry if this is a silly question, but should this work to convert from
int8 to character type?

a = array([104, 105], dtype=N.int8)
a.astype('|S1')

I was a bit surprised by the output:

array([1, 1], dtype='|S1')

Thanks a lot,

Matthew

From robert.kern at gmail.com Tue Aug 8 22:02:11 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 08 Aug 2006 21:02:11 -0500
Subject: [Numpy-discussion] NumPy, shared libraries and ctypes
In-Reply-To: <44D93FCC.3070405@astraw.com>
References: <44D93FCC.3070405@astraw.com>
Message-ID:

Andrew Straw wrote:
> Back to your email: I don't understand why you're trying to build a
> shared library with distutils. What's wrong with a plain old c-compiler
> and linker (and mt.exe if you're using MS VC 8)? You can build shared
> libraries this way with Makefiles, scons, Visual Studio, and about a
> billion other solutions that have evolved since early C days.
I can
> understand the desire of getting "python setup.py install" to work, but
> I suspect spawning an appropriate subprocess to do the compilation would
> be easier and more robust than attempting to get distutils to do
> something it's not designed for. (Then again, to see what numpy
> distutils can do, well, let's just say I'm amazed.) Along these lines, I
> noticed that ctypes-itself seems to have put some hooks into setup.py to
> perform at least part of the configure/make dance on linux, although I
> haven't investigated any further yet. Perhaps that's a better way to go
> than bending distutils to your will?

Well, the wrapper he's writing is destined for scipy, so "python setup.py
build" must work.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com Tue Aug 8 22:12:23 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 08 Aug 2006 21:12:23 -0500
Subject: [Numpy-discussion] NumPy, shared libraries and ctypes
In-Reply-To:
References:
Message-ID:

Albert Strasheim wrote:
> Comments appreciated. Someone checking something along these lines into SVN
> appreciated more. A solution that doesn't make me want to cry appreciated
> most.

> P.S. As it happens, the OOF2 guys have already created a SharedLibrary
> builder for distutils, but integrating this into numpy.distutils is probably
> non-trivial.
>
> http://www.ctcms.nist.gov/oof/oof2.html

I recommend using OOF2's stuff, not the .pyd hack. The latter makes *me*
want to cry. If you come up with a patch, post it to the numpy Trac, and
I'll check it in.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Tue Aug 8 23:01:56 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 09 Aug 2006 12:01:56 +0900
Subject: [Numpy-discussion] numpy and matplotlib
In-Reply-To: <200608081711.49315.dd55@cornell.edu>
References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com>
 <200608081711.49315.dd55@cornell.edu>
Message-ID: <44D95024.1090905@ar.media.kyoto-u.ac.jp>

Darren Dale wrote:
> On Tuesday 08 August 2006 17:02, George Gumas wrote:
>
>> I downloaded numpy 10000 and matplotlib and when running numpy i get the
>> error message below
>> from matplotlib._ns_cntr import *
>> RuntimeError: module compiled against version 90709 of C-API but this
>> version of numpy is 1000000
>>
This error may happen if you forgot to rebuild all of matplotlib against
the new numpy. Did you try recompiling everything by removing the build
directory of matplotlib ?

David

From benjamin at decideur.info Wed Aug 9 04:25:10 2006
From: benjamin at decideur.info (Benjamin Thyreau)
Date: Wed, 9 Aug 2006 10:25:10 +0200
Subject: [Numpy-discussion] Examples of basic C API usage?
In-Reply-To:
References:
Message-ID: <200608091025.10363.benjamin@decideur.info>

On Tuesday 8 August 2006 05:18, Bill Baxter wrote:
> I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an
> example of just a simple straightforward usage of the C-API.
> For instance make a few arrays by hand in C and then call numpy.multiply()
> on them. So far my attempts to call PyArray_SimpleNewFromData all result
> in segfaults.
> Anyone have such an example?
>
> --Bill

For our neuroimaging library, I had to write some simple, straightforward
wrappers for the GSL C library, which you might be interested in taking a
quick look at.

Trac entry:
http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/bindings/lightwrappers.h
http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/bindings/lightwrappers.c

and a half-commented example usage:
http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/pythonTests/fffctests/lightmoduleExample.c

--
Benjamin Thyreau
CEA Orsay

From david.huard at gmail.com Wed Aug 9 10:35:43 2006
From: david.huard at gmail.com (David Huard)
Date: Wed, 9 Aug 2006 10:35:43 -0400
Subject: [Numpy-discussion] Moving docstrings from C to Python
In-Reply-To:
References: <20060728145400.GN6338@mentat.za.net>
Message-ID: <91cf711d0608090735h40ec64f7sbaa0d34ceb6e4978@mail.gmail.com>

I started to do the same with array methods, but before I spend too much
time on it, I'd like to be sure I'm doing the right thing.

1. In add_newdocs.py, add

from numpy.core import ndarray

2. then add an entry for each method, e.g.

add_docstring(ndarray.var,
"""a.var(axis=None, dtype=None)

Return the variance, a measure of the spread of a distribution. The
variance is the average of the squared deviations from the mean, i.e.
var = mean((x - x.mean())**2).

See also: std

""")

3. in arraymethods.c, delete

static char doc_var[] = ...

and replace doc_var with NULL in

{"var", (PyCFunction)array_variance, METH_VARARGS|METH_KEYWORDS, doc_var},

David

2006/7/28, Sasha :
>
> On 7/28/06, Stefan van der Walt wrote:
>
> > Would anyone mind if the change was made? If not, where should they
> > go? (numpy/add_newdocs.py or numpy/core/something)
>
> Another +1 for numpy/add_newdocs.py and a suggestion: check for
> Py_OptimizeFlag > 1 in add_newdoc so that docstrings are not loaded if
> python is invoked with -OO option. This will improve import numpy
> time and reduce the memory footprint. I'll make the change if no one
> objects.
>
> -------------------------------------------------------------------------
> Take Surveys. Earn Cash.
Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share
> your
> opinions on IT & business topics through brief surveys -- and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From haase at msg.ucsf.edu Wed Aug 9 00:53:36 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Tue, 08 Aug 2006 21:53:36 -0700
Subject: [Numpy-discussion] how to reference Numerical Python in a
 scientific publication
Message-ID: <44D96A50.7080002@msg.ucsf.edu>

Hi,
we are using numerical python as an integral part of a microscope
development project over the last few years.

So far we have been using exclusively numarray but now I decided it's
time to slowly but surely migrate to numpy.

What is the proper way to reference these packages ?

Thanks to everyone involved,
Sebastian Haase
UCSF

From haase at msg.ucsf.edu Wed Aug 9 17:02:14 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Wed, 9 Aug 2006 14:02:14 -0700
Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept
 zero length tuple
Message-ID: <200608091402.14810.haase@msg.ucsf.edu>

Hi!
I have a problem with the record array type descriptor.
With numarray this used to work.
My records are made of n integers and m floats. So I used to be able to
specify formats="%di4,%df4"%(self.numInts,self.numFloats) in numarray,
which would translate to

byteorder = self.isByteSwapped and '>' or '<'
type_descr = [("int", "%s%di4" %(byteorder,self.numInts)),
 ("float", "%s%df4" %(byteorder,self.numFloats))]

The problem occurs when numInts or numFloats is zero !?
Could numpy be changed to silently accept this case?
Here is the complete traceback + some debug info:

'>0i4'Traceback (most recent call last):
 File "<stdin>", line 1, in ?
 File "/home/haase/PrLinN/Priithon/Mrc.py", line 56, in bindFile
 a = Mrc(fn, mode)
 File "/home/haase/PrLinN/Priithon/Mrc.py", line 204, in __init__
 self.doExtHdrMap()
 File "/home/haase/PrLinN/Priithon/Mrc.py", line 271, in doExtHdrMap
 self.extHdrArray.dtype = type_descr
 File "/home/haase/qqq/lib/python/numpy/core/records.py", line 194, in __setattr__
 return object.__setattr__(self, attr, val)
TypeError: invalid data-type for array
>>> U.debug()
> /home/haase/qqq/lib/python/numpy/core/records.py(196)__setattr__()
-> pass
(Pdb) l
191
192 def __setattr__(self, attr, val):
193 try:
194 return object.__setattr__(self, attr, val)
195 except AttributeError: # Must be a fieldname
196 -> pass
197 fielddict = sb.ndarray.__getattribute__(self,'dtype').fields
198 try:
199 res = fielddict[attr][:2]
200 except (TypeError,KeyError):
201 raise AttributeError, "record array has no attribute %s" % attr
(Pdb) p val
[('int', '>0i4'), ('float', '>2f4')]
(Pdb) p attr
'dtype'

Thanks,
Sebastian Haase

From oliphant.travis at ieee.org Wed Aug 9 18:11:49 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 09 Aug 2006 16:11:49 -0600
Subject: [Numpy-discussion] astype char conversion
In-Reply-To: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com>
References: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com>
Message-ID: <44DA5DA5.1010700@ieee.org>

Matthew Brett wrote:
> Hi,
>
> Sorry if this is a silly question, but should this work to convert from
> int8 to character type?
>
> a = array([104, 105], dtype=N.int8)
> a.astype('|S1')
>
I'm not sure what you are trying to do here, but the standard coercion to
strings will generate ['104', '105']. However, you are only allowing
1-character strings, so you get the first character.
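A short sketch of that coercion, assuming a recent numpy:

```python
import numpy as np

a = np.array([104, 105], dtype=np.int8)

# Each element is coerced through its decimal string form, then truncated
# to the target itemsize; with 1-byte strings only the leading digit survives.
print(a.astype('|S1'))  # [b'1' b'1']

# With a 3-byte string type the full text of each number comes through.
print(a.astype('|S3'))  # [b'104' b'105']
```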
If you want to get characters with ASCII codes 104 and 105, you can do
that without coercion by simply viewing the memory as a different
data-type:

a.view('S1')

array([h, i], dtype='|S1')

-Travis

From oliphant.travis at ieee.org Wed Aug 9 18:18:10 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 09 Aug 2006 16:18:10 -0600
Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept
 zero length tuple
In-Reply-To: <200608091402.14810.haase@msg.ucsf.edu>
References: <200608091402.14810.haase@msg.ucsf.edu>
Message-ID: <44DA5F22.7080404@ieee.org>

Sebastian Haase wrote:
> Hi!
> I have a problem with the record array type descriptor.
> With numarray this used to work.
> My records are made of n integers and m floats. So I used to be able to
> specify formats="%di4,%df4"%(self.numInts,self.numFloats) in numarray,
> which would translate to
> byteorder = self.isByteSwapped and '>' or '<'
> type_descr = [("int", "%s%di4" %(byteorder,self.numInts)),
> ("float", "%s%df4" %(byteorder,self.numFloats))]
>
> The problem occurs when numInts or numFloats is zero !?
> Could numpy be changed to silently accept this case?
> Here is the complete traceback + some debug info:
>
If numarray supported it, then we should get NumPy to support it as well
unless there is a compelling reason not to. I can't think of any except
that it might be hard to make it work. What is '0i4' supposed to mean
exactly? Do you get a zero-sized field or is the field not included?
I think the former will be much easier than the latter. Would that be
O.K.?

-Travis

From haase at msg.ucsf.edu Wed Aug 9 18:41:00 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Wed, 9 Aug 2006 15:41:00 -0700
Subject: [Numpy-discussion] bug !?
dtype type_descriptor does not accept
 zero length tuple
In-Reply-To: <44DA5F22.7080404@ieee.org>
References: <200608091402.14810.haase@msg.ucsf.edu>
 <44DA5F22.7080404@ieee.org>
Message-ID: <200608091541.00208.haase@msg.ucsf.edu>

On Wednesday 09 August 2006 15:18, Travis Oliphant wrote:
> Sebastian Haase wrote:
> > Hi!
> > I have a problem with the record array type descriptor.
> > With numarray this used to work.
> > My records are made of n integers and m floats. So I used to be able
> > to specify formats="%di4,%df4"%(self.numInts,self.numFloats) in
> > numarray, which would translate to
> > byteorder = self.isByteSwapped and '>' or '<'
> > type_descr = [("int", "%s%di4" %(byteorder,self.numInts)),
> > ("float", "%s%df4" %(byteorder,self.numFloats))]
> >
> > The problem occurs when numInts or numFloats is zero !?
> > Could numpy be changed to silently accept this case?
> > Here is the complete traceback + some debug info:
>
> If numarray supported it, then we should get NumPy to support it as well
> unless there is a compelling reason not to. I can't think of any except
> that it might be hard to make it work. What is '0i4' supposed to mean
> exactly? Do you get a zero-sized field or is the field not included?
> I think the former will be much easier than the latter. Would that be
> O.K.?

That's exactly what numarray did. The rest of my code is assuming that
all fields exist (even if they are empty). Otherwise I get a name
error which is worse than getting an empty array.
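A sketch of the zero-sized-field behaviour being discussed, written with the shape-tuple spelling of a sub-array field that numpy accepts (the field names follow the thread; this is not the actual Mrc.py code):

```python
import numpy as np

# numInts == 0: the 'int' field still exists by name, it just holds
# zero elements per record; only the two floats occupy space.
type_descr = np.dtype([('int', '<i4', (0,)),
                       ('float', '<f4', (2,))])

ext_hdr = np.zeros(5, dtype=type_descr)
print(ext_hdr['int'].shape)    # (5, 0) -- empty, but no name error
print(ext_hdr['float'].shape)  # (5, 2)
print(type_descr.itemsize)     # 8
```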
Thanks, Sebastian Haase From jek-cygwin1 at kleckner.net Wed Aug 9 20:18:46 2006 From: jek-cygwin1 at kleckner.net (Jim Kleckner) Date: Wed, 09 Aug 2006 17:18:46 -0700 Subject: [Numpy-discussion] Infinite loop in Numeric-24.2 for eigenvalues Message-ID: <44DA7B66.9030802@kleckner.net> It seems that this old problem of compiling Numeric is a problem again (even on my Linux box, not just cygwin): http://sourceforge.net/tracker/index.php?func=detail&aid=732520&group_id=1369&atid=301369 (The issue was the dlamch.f code) The patch recommended to run: python setup.py config in order to work around the problem. Note that this no longer runs and gives the error message: unable to execute _configtest.exe: No such file or directory The setup.py and customize.py code interact in complex ways with Python's build tools. Anyone out there familiar with these and what is going on? BTW, it looks as though the default Makefile in python2.4/config dir now has -O3 turned on which is stimulating this problem. Jim From jek-cygwin1 at kleckner.net Wed Aug 9 20:47:30 2006 From: jek-cygwin1 at kleckner.net (Jim Kleckner) Date: Wed, 09 Aug 2006 17:47:30 -0700 Subject: [Numpy-discussion] Infinite loop in Numeric-24.2 for eigenvalues In-Reply-To: <44DA7B66.9030802@kleckner.net> References: <44DA7B66.9030802@kleckner.net> Message-ID: <44DA8222.5090908@kleckner.net> Jim Kleckner wrote: > It seems that this old problem of compiling Numeric is a problem again > (even on my Linux box, not just cygwin): > http://sourceforge.net/tracker/index.php?func=detail&aid=732520&group_id=1369&atid=301369 > > > (The issue was the dlamch.f code) > > The patch recommended to run: > python setup.py config > in order to work around the problem. > > Note that this no longer runs and gives the error message: > unable to execute _configtest.exe: No such file or directory > > > The setup.py and customize.py code interact in complex ways with > Python's build tools. 
> > Anyone out there familiar with these and what is going on?
> >
> > BTW, it looks as though the default Makefile in python2.4/config dir now
> > has -O3 turned on which is stimulating this problem.
> >
> > Jim
>
A workaround for this problem in setup.py is to run this simple script
to create the config.h file that is failing (probably due to the
compile flags):

gcc -fno-strict-aliasing -DNDEBUG -g -Wall -Wstrict-prototypes -IInclude -IPackages/FFT/Include -IPackages/RNG/Include -I/usr/include/python2.4 Src/config.c -o mkconfigh
./mkconfigh
mv config.h Src

From haase at msg.ucsf.edu Thu Aug 10 00:35:30 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Wed, 09 Aug 2006 21:35:30 -0700
Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept
 zero length tuple
In-Reply-To: <44DA7C9A.7010507@ieee.org>
References: <200608091402.14810.haase@msg.ucsf.edu>
 <200608091541.00208.haase@msg.ucsf.edu> <44DA658C.9050205@ieee.org>
 <200608091600.09607.haase@msg.ucsf.edu> <44DA7C9A.7010507@ieee.org>
Message-ID: <44DAB792.2010503@msg.ucsf.edu>

Travis Oliphant wrote:
> Sebastian Haase wrote:
>> On Wednesday 09 August 2006 15:45, you wrote:
>>
>>> Sebastian Haase wrote:
>>>
>>>> On Wednesday 09 August 2006 15:18, Travis Oliphant wrote:
>>>>
>>>>> If numarray supported it, then we should get NumPy to support it as
>>>>> well
>>>>> unless there is a compelling reason not to. I can't think of any
>>>>> except
>>>>> that it might be hard to make it work. What is '0i4' supposed to mean
>>>>> exactly? Do you get a zero-sized field or is the field not included?
>>>>> I think the former will be much easier than the latter. Would
>>>>> that be
>>>>> O.K.?
>>>>>
>>>> That's exactly what numarray did. The rest of my code is assuming that
>>>> all fields exist (even if they are empty). Otherwise I get a name
>>>> error which is worse than getting an empty array.
>>>>
>>> Do you have a simple code snippet that I could use as a test?
>>>
>>> -Travis
>>>
>>
>> I think this should do it:
>>
>> a = N.arange(10, dtype=N.float32)
>> a.shape = 5,2
>> type_descr = [("int", "<0i4"),("float", "<2f4")]
>> a.dtype = type_descr
>>
>
> I'm not sure what a.shape = (5,2) is supposed to do. I left it out of
> the unit-test because assigning to the data-type you just defined
> already results in
>
> a['float'].shape being (5,2)
>
> If it is left in, then an extra dimension is pushed in and
>
> a['float'].shape is (5,1,2)
>
>
> This is due to the default behavior of assigning data-types when the new
> data-type has a larger but compatible itemsize than the old itemsize.

I have to admit that I don't understand that statement. I thought - just
"visually" - that a.shape = 5,2 would make a "table" with 2 columns. Then
I could go on and give those columns names... Or is the problem that the
type "2f4" refers to (some sort of) a "single column" with 2 floats
grouped together !?

Thanks for implementing it so quickly,
Sebastian Haase

From haase at msg.ucsf.edu Thu Aug 10 00:36:49 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Wed, 09 Aug 2006 21:36:49 -0700
Subject: [Numpy-discussion] how to reference Numerical Python in a
 scientific publication
Message-ID: <44DAB7E1.8090108@msg.ucsf.edu>

Hi,
we are using numerical python as an integral part of a microscope
development project over the last few years.

So far we have been using exclusively numarray but now I decided it's
time to slowly but surely migrate to numpy.

What is the proper way to reference these packages ?

Thanks to everyone involved,
Sebastian Haase
UCSF

From pfdubois at gmail.com Thu Aug 10 00:55:37 2006
From: pfdubois at gmail.com (Paul Dubois)
Date: Wed, 9 Aug 2006 21:55:37 -0700
Subject: [Numpy-discussion] how to reference Numerical Python in a
 scientific publication
In-Reply-To: <44DAB7E1.8090108@msg.ucsf.edu>
References: <44DAB7E1.8090108@msg.ucsf.edu>
Message-ID:
Hugunin, "Numerical Python", Computers in Physics, v. 10, #3, May/June 1996. is one reference people have used. Others simply refer to the website. The new book might be the best for numpy itself, dunno. Related papers are: David Ascher, P. F. Dubois, Konrad Hinsen, James Hugunin, and Travis Oliphant, "Numerical Python", UCRL-MA-128569, 93 pp., Lawrence Livermore National Laboratory, Livermore, CA; 1999. -- this is the 'official' Numerical Python documentation as first released. P. F. Dubois, "Extending Python with Fortran", Computing in Science and Engineering, v. 1 #5, Sept./Oct. 1999., p.66-73. David Scherer, Paul Dubois, and Bruce Sherwood, "VPython: 3D Interactive Scientific Graphics for Students", Computing in Science and Engineering, v. 2 #5, Sep./Oct. 2000, p. 56-62. On 09 Aug 2006 21:37:39 -0700, Sebastian Haase wrote: > Hi, > we are using numerical python as an integral part of a microscope > development project over last few years. > > So far we have been using exclusively numarray but now I decided it's > time to slowly but sure migrate to numpy. > > What is the proper way to reference these packages ? > > Thanks to everyone involved, > Sebastian Haase > UCSF > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From drife at ucar.edu Thu Aug 10 01:45:48 2006 From: drife at ucar.edu (Daran L. 
Rife)
Date: Wed, 9 Aug 2006 23:45:48 -0600 (MDT)
Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS
 X 10.4 Tiger (8.7.0)
Message-ID: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>

Hello,

I recently switched from a Debian Linux box to a Mac G5
PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the
Python Numeric package extensively, and have come to
depend upon it. In my view, this piece of software is
truly first rate, and it has greatly improved my
productivity in the area of scientific analysis.

Unfortunately, I am experiencing a problem that I cannot sort
out. I am running Python 2.4.3 on a Debian box (V3.1), using
gcc version 4.0.1, and the Apple vecLib.framework which has
an optimized BLAS and LAPACK. When building Numeric 24.0,
24.1, or 24.2 everything seems to go AOK. But when I run
code which makes use of the Numeric package (masked arrays,
dot product, LinearAlgebra, etc.) my code crashes hard and
unpredictably. When it crashes I simply get a "Segmentation
Fault". I'm sorry that I can't be more specific about what
seems to happen just before the crash...I've tried to trace
it but to no avail.

Interestingly, I can get Numeric version 23.8 to build and
run just fine, but it appears that the dotblas (BLAS
optimized matrixmultiply/dot/innerproduct) does not properly
get built in. Thus, all my matrix operations are -very- slow.

Has anyone seen this problem, or know where I might look
to solve it? Perhaps I have overlooked a crucial step in
the build/install of Numeric 24.x on the Mac.

I searched round the Net with google, and have sifted through
the numpy/scipy/numeric Web pages, various mailing lists, user
groups, etc., and can't seem to find any relevant info.

Alternatively, can someone explain how to get Numeric 23.8
to compile on OS X 10.4 Tiger, with the dotblas module?
Thanks very much for your help,

Daran

From pbdr at cmp.uea.ac.uk Thu Aug 10 07:38:37 2006
From: pbdr at cmp.uea.ac.uk (Pierre Barbier de Reuille)
Date: Thu, 10 Aug 2006 12:38:37 +0100
Subject: [Numpy-discussion] Change of signature for copyswap function ?
Message-ID: <44DB1ABD.6010703@cmp.uea.ac.uk>

Hi,

in my documentation, the copyswap function in the PyArray_ArrFuncs
structure is supposed to have this signature:

copyswap (void) (void* dest, void* src, int swap, int itemsize)

However, in the latest version of NumPy, the signature is:

copyswap (void) (void*, void*, int, void*)

My question is: what does the last void* correspond to?

Thanks,

Pierre

From oliphant.travis at ieee.org Thu Aug 10 08:55:29 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 10 Aug 2006 06:55:29 -0600
Subject: [Numpy-discussion] Change of signature for copyswap function ?
In-Reply-To: <44DB1ABD.6010703@cmp.uea.ac.uk>
References: <44DB1ABD.6010703@cmp.uea.ac.uk>
Message-ID: <44DB2CC1.7090806@ieee.org>

Pierre Barbier de Reuille wrote:
> Hi,
>
> in my documentation, the copyswap function in the PyArray_ArrFuncs
> structure is supposed to have this signature:
>
> copyswap (void) (void* dest, void* src, int swap, int itemsize)
>
> However, in the latest version of NumPy, the signature is:
>
> copyswap (void) (void*, void*, int, void*)
>
> My question is: what does the last void* correspond to?
>
It's only needed for FLEXIBLE arrays (STRING, UNICODE, VOID); then you
pass in an array whose ->descr member has the right itemsize. Look in
core/src/arraytypes for the definitions of the copyswap functions, which
can be helpful to see if arguments are actually needed.

-Travis

From drife at ucar.edu Thu Aug 10 09:33:44 2006
From: drife at ucar.edu (Daran L. Rife)
Date: Thu, 10 Aug 2006 07:33:44 -0600 (MDT)
Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on
 Mac OS X 10.4 Tiger (8.7.0)
Message-ID: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu>

Hello,

I am a veteran user of Numeric and am trying out the latest version of
numpy (numpy 1.01b) on Mac OS X 10.4 Tiger (8.7.0). When trying to invert
a matrix with numpy.linalg.inv I get the following error:

---->
Traceback (most recent call last):
 File "./bias_correction.py", line 381, in ?
 if __name__ == "__main__": main()
 File "./bias_correction.py", line 373, in main
 (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, obs,
 infile_obs, grids, infile_grids)
 File "./bias_correction.py", line 297, in calc_bias_and_innov
 K = make_kalman_gain(R, P_local, H)
 File "./bias_correction.py", line 157, in make_kalman_gain
 K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P,
 MA.transpose(H))) + R ) )
 File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py",
 line 149, in inv
 return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
TypeError: __array_wrap__() takes exactly 3 arguments (2 given)
<----

Is this a known problem, and if so, what is the fix?

Thanks very much,

Daran

From drife at ucar.edu Thu Aug 10 10:02:23 2006
From: drife at ucar.edu (Daran L. Rife)
Date: Thu, 10 Aug 2006 08:02:23 -0600 (MDT)
Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS
 X 10.4 Tiger (8.7.0)
In-Reply-To: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>
References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>
Message-ID: <34498.64.17.89.52.1155218543.squirrel@imap.rap.ucar.edu>

Hi group,

Sorry, but there was an error in my previous message, 2nd paragraph,
2nd sentence. It should read:

Unfortunately, I am experiencing a problem that I cannot sort
out. I am running Python 2.4.3 on a Mac G5 running OS X 10.4
Tiger (8.7.0), using gcc version 4.0.1, and the Apple
vecLib.framework which has an optimized BLAS and LAPACK.
When building Numeric 24.0, 24.1, or 24.2 everything seems
to go AOK. But when I run code which makes use of the Numeric
package (masked arrays, dot product, LinearAlgebra, etc.) my
code crashes hard and unpredictably. When it crashes I simply
get a "Segmentation Fault". I'm sorry that I can't be more
specific about what seems to happen just before the crash...
I've tried to trace it but to no avail.

Thanks again for your help.
Daran -- > I recently switched from a Debian Linux box to a Mac G5 > PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the > Python Numeric package extensively, and have come to > depend upon it. In my view, this piece of software is > truly first rate, and it has greatly improved my > productivity in the area of scientific analysis. > > Unfortunately, I am experiencing a problem that I cannot sort > out. I am running Python 2.4.3 on a Debian box (V3.1), using > gcc version 4.0.1, and the Apple vecLib.framework which has > an optimized BLAS and LAPACK. When building Numeric 24.0, > 24.1, or 24.2 everything seems to go AOK. But when I run > code which makes use of the Numeric package (maksed arrays, > dot product, LinearAlgebra, etc.) my code crashes hard and > unpredictably. When it crashes I simply get a "Segmentation > Fault". I'm sorry that I can't be more specific about what > seems to happen just before the crash...I've tried to trace > it but to no avail. > > Interestingly, I can get Numeric version 23.8 to build and > run just fine, but it appears that the dotblas (BLAS > optimized matrixmultiply/dot/innerproduct) does not properly > get built in. Thus, all my matrix operations are -very- slow. > > Has anyone seen this problem, or know where I might look > to solve it? Perhaps I have overlooked a crucial step in > the build/install of Numeric 24.x on the Mac. > > I searched round the Net with google, and have sifted through > the numpy/scipy/numeric Web pages, various mailing lists, user > groups, etc., and can't seem to find any relevant info. > > Alternatively, can someone explain how to get Numeric 23.8 > to compile on OS X 10.4 Tiger, with the dotblas module? > > > Thanks very much for your help, > > > Daran > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From klemm at phys.ethz.ch Thu Aug 10 10:12:50 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Thu, 10 Aug 2006 16:12:50 +0200 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34498.64.17.89.52.1155218543.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Message-ID: Daran, I had a similar behaviour when I tried to use a module compiled with an older f2py with a newer version of numpy. So is it maybe possible that some *.so files are used from an earlier build? Hanno "Daran L. Rife" said: > Hi group, > > Sorry, but there was an error in my previous message, > 2nd paragraph, 2nd sentence. It should read: > > Unfortunately, I am experiencing a problem that I cannot sort > out. I am running Python 2.4.3 on a Mac G5 running OS X 10.4 > Tiger (8.7.0), using gcc version 4.0.1, and the Apple > vecLib.framework which has an optimized BLAS and LAPACK. > When building Numeric 24.0, 24.1, or 24.2 everything seems > to go AOK. But when I run code which makes use of the Numeric > package (masked arrays, dot product, LinearAlgebra, etc.) my > code crashes hard and unpredictably. When it crashes I simply > get a "Segmentation Fault". I'm sorry that I can't be more > specific about what seems to happen just before the crash... > I've tried to trace it but to no avail. > > Thanks again for your help.
> > > Daran > > -- > > > I recently switched from a Debian Linux box to a Mac G5 > > PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the > > Python Numeric package extensively, and have come to > > depend upon it. In my view, this piece of software is > > truly first rate, and it has greatly improved my > > productivity in the area of scientific analysis. > > > > Unfortunately, I am experiencing a problem that I cannot sort > > out. I am running Python 2.4.3 on a Debian box (V3.1), using > > gcc version 4.0.1, and the Apple vecLib.framework which has > > an optimized BLAS and LAPACK. When building Numeric 24.0, > > 24.1, or 24.2 everything seems to go AOK. But when I run > > code which makes use of the Numeric package (maksed arrays, > > dot product, LinearAlgebra, etc.) my code crashes hard and > > unpredictably. When it crashes I simply get a "Segmentation > > Fault". I'm sorry that I can't be more specific about what > > seems to happen just before the crash...I've tried to trace > > it but to no avail. > > > > Interestingly, I can get Numeric version 23.8 to build and > > run just fine, but it appears that the dotblas (BLAS > > optimized matrixmultiply/dot/innerproduct) does not properly > > get built in. Thus, all my matrix operations are -very- slow. > > > > Has anyone seen this problem, or know where I might look > > to solve it? Perhaps I have overlooked a crucial step in > > the build/install of Numeric 24.x on the Mac. > > > > I searched round the Net with google, and have sifted through > > the numpy/scipy/numeric Web pages, various mailing lists, user > > groups, etc., and can't seem to find any relevant info. > > > > Alternatively, can someone explain how to get Numeric 23.8 > > to compile on OS X 10.4 Tiger, with the dotblas module? > > > > > > Thanks very much for your help, > > > > > > Daran > > > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? 
Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- Hanno Klemm klemm at phys.ethz.ch From drife at ucar.edu Thu Aug 10 10:58:47 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 08:58:47 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Message-ID: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> Hi Hanno, > I had a similar behaviour when I tried to use a module compiled with an > older f2py with a newer version of numpy. So is it maybe possible that > some *.so files are used from an earlier build? Many thanks for the reply. This was my first attempt to build and use numpy; I have no previous version. May I ask how you specifically solved the problem on your machine?
Thanks, Daran -- From Chris.Barker at noaa.gov Thu Aug 10 12:13:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 10 Aug 2006 09:13:51 -0700 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> Message-ID: <44DB5B3F.9080203@noaa.gov> Daran L. Rife wrote: > Many thanks for the reply. This was my first attempt > to build and use numpy; "numpy" used to be a generic name for the Numerical extensions to Python. Now there are three versions: "Numeric": The original, now at version 24.2. This is probably the last version that will be produced. "numarray": This was designed to be the "next generation" array package. It has some nice additional features that Numeric does not have, but is missing some as well. It is at version 1.5.1. It may see some bug fix releases in the future, but probably won't see any more major development. "numpy": this is the "grand unification" array package. It is based on the Numeric code base, and is designed to have the best features of Numeric and numarray, plus some extra good stuff. It is now at version 1.0beta, with an expected release date for 1.0final sometime this fall. It is under active development, the API is pretty stable now, and it appears to have the consensus of the numerical python community as the "way of the future". I wrote all that out so that you can be clear which package you are having trouble with -- you've used both the term "Numeric" and "numpy" in your posts, and there is some confusion. If you are working on a project that does not need to be released for a few months (i.e. after numpy has reached 1.0 final), I'd use numpy, rather than Numeric or numarray.
Also: on OS-X, there are far too many ways to build Python. When you report a problem, you need to define exactly which python build you are using, and this goes beyond python version -- fink? darwinports? built-it-from-source? Framework? Universal, etc... The MacPython community is doing its best to standardize on the Universal Build of 2.4.3 that you can find here: http://www.pythonmac.org/packages/py24-fat/ There you will also find pre-built packages for Numeric24.2, numarray1.5.1, and numpy0.9.8. Have you tried any of those? They should be built against Apple's vecLib. There isn't a package for numpy 1.0beta there yet. I may add one soon. > Interestingly, I can get Numeric version 23.8 to build and > run just fine, but it appears that the dotblas (BLAS > optimized matrixmultiply/dot/innerproduct) does not properly > get built in. Thus, all my matrix operations are -very- slow. I'm not sure of the dates, but that is probably a version that didn't have the check for Apple's vecLib in the setup.py, so it built with the built-in lapack-lite instead. You can compare the setup.py files from that and newer versions to see how to make it build against vecLib, but I suspect if you do that, you'll see the same problems. Also, please send a small test script that crashes for you, so others can test it. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From drife at ucar.edu Thu Aug 10 12:36:15 2006 From: drife at ucar.edu (Daran L.
Rife) Date: Thu, 10 Aug 2006 10:36:15 -0600 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB5B3F.9080203@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> Message-ID: <44DB607F.9060903@ucar.edu> Hi Chris, Thanks very much for your reply. My apologies for the confusion. To be clear, I am a veteran user of Numeric, not numpy. I tried installing four versions of Numeric: 23.8, 24.0, 24.1, and 24.2. My Python distro is built from source, using the GCC 4.0.1 suite of compilers. I am running all of this on a Mac G5 PowerPC with Mac OS X 10.4 Tiger (8.7.0). All branches of Numeric 24.x cause a "Segmentation Fault". The scripts I was running this against are a bit complex, so it is not so easy for me to sort out when/where the failure occurs. I'll keep doing some testing and see if I can get a better idea for what seems to be the issue. I'd very much like to move to numpy, but I have code that needs to be working -now-, so at this point I am more interested in Numeric; I am an adept user of Numeric, and I know it works well on Debian Linux boxes. I will try your suggestion of installing and running the pre-built packages at . Thanks again for your patience and for your help. Daran -- > Daran L. Rife wrote: >> Many thanks for the reply. This was my first attempt >> to build and use numpy; > > "numpy" used to be a generic name for the Numerical extensions to > Python. Now there are three versions: > > "Numeric": The original, now at version 24.2 This is probably the last > version that will be produced. > > "numarray": This was designed to be the "next generation" array package. > It has some nice additional features that Numeric does not have, but is > missing some as well. It is at version 1.5.1.
it may see some bug fix > releases in the future, but probably won't see any more major development. > > "numpy": this is the "grand unification" array package. It is based on > the Numeric code base, and is designed to have the best features of > Numeric and numarray, plus some extra good stuff. It is now at version > 1.0beta, with an expected release date for 1.0final sometime this fall. > It is under active development, the API is pretty stable now, and it > appears to have the consensus of the numerical python community as the > "way of the future" > > I wrote all that out so that you can be clear which package you are > having trouble with -- you've used both the term "Numeric" and "numpy" > in your posts, and there is some confusion. > > If you are working on a project that does not need to be released for a > few months (i.e. after numpy has reached 1.0 final), I'd use numpy, > rather than Numeric or numarray. > > Also: on OS-X, there are far to many ways to build Python. When you > report a problem, you need to define exactly which python build you are > using, and this goes beyond python version -- fink? darwinports? > built-it-from-source? Framework? Universal, etc... > > The MacPython community is doing it's best to standardize on the > Universal Build of 2.4.3 that you can find here: > > http://www.pythonmac.org/packages/py24-fat/ > > There you will also find pre-built packages for Numeric24.2, > numarray1.5.1, and numpy0.9.8 > > Have you tried any of those? They should be built against Apple's > vectLib. There isn't a package for numpy 1.0beta there yet. I may add > one soon. > >> Interestingly, I can get Numeric version 23.8 to build and >> run just fine, but it appears that the dotblas (BLAS >> optimized matrixmultiply/dot/innerproduct) does not properly >> get built in. Thus, all my matrix operations are -very- slow. 
> > I'm not sure of the dates, but that is probably a version that didn't > have the check for Apple's vecLib in the setup.py, so it built with the > built-in lapack-lite instead. You can compare the setup.py files from > that and newer versions to see how to make it build against vecLib, but > I suspect if you do that, you'll see the same problems. > > Also, please send a small test script that crashes for you, so others > can test it. > > -Chris From klemm at phys.ethz.ch Thu Aug 10 12:50:38 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Thu, 10 Aug 2006 18:50:38 +0200 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> , Message-ID: Hi Daran, I fortunately never had the need to run different versions in parallel, so I basically removed the earlier versions of numpy. However, as you possibly know, you can build wrapper functions for fortran code with f2py (which is now shipped with numpy). And that is where I got the segfault behaviour: I had a module compiled for numpy 0.9.6 and then tried to use it with numpy 1.0b. Therefore I thought if you have similar stuff running on your machine that might be a reason. The obvious solution is to recompile the fortran code with the newer version of f2py. But from what you write, your problem seems to be different. Regards, Hanno "Daran L. Rife" said: > Hi Hanno, > > > I had a similar behaviour when I tried to use a module compiled with an > > older f2py with a newer version of numpy. So is it maybe possible that > > some *.so files are used from an earlier build? > > > Many thanks for the reply. This was my first attempt > to build and use numpy; I have no previous version. > May I ask how you specifically solved the problem > on your machine?
> > Thanks, > > Daran > > -- > > -- Hanno Klemm klemm at phys.ethz.ch From bhendrix at enthought.com Thu Aug 10 12:53:54 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 10 Aug 2006 11:53:54 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available Message-ID: <44DB64A2.60203@enthought.com> For those not able to make SciPy 2006 next week, or who would like to download the ISO a few days early, it's available at http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. We squashed a lot onto the CD, so I also had to trim > 100 MB of packages that ship with the standard Ubuntu CD. Here's what I was able to add: * SciPy built from svn (Wed, 12:00 CST) * NumPy built from svn (Wed, 12:00 CST) * Matplotlib built from svn (Wed, 12:00 CST) * IPython built from svn (Wed, 12:00 CST) * Enthought built from svn (Wed, 16:00 CST) * ctypes 1.0.0 * hdf5 1.6.5 * networkx 0.31 * Pyrex 0.9.4.1 * pytables 1.3.2 All of the svn checkouts are zipped in /src; if you'd like to build from a svn version newer than what was shipped, simply copy the compressed package to your home dir, uncompress it, run "svn update", and build it. Please note: This ISO was built rather hastily, uses un-official code, and received very little testing. Please don't even consider using this in a production environment. Bryce From cookedm at physics.mcmaster.ca Thu Aug 10 14:22:36 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 10 Aug 2006 14:22:36 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: <20060810142236.6770032a@arbutus.physics.mcmaster.ca> On Thu, 10 Aug 2006 07:33:44 -0600 (MDT) "Daran L.
Rife" wrote: > Hello, > > I am a veteran user of Numeric and am trying > out the latest version of numpy (numpy 1.01b) > on Mac OS X 10.4 Tiger (8.7.0). > > When trying to invert a matrix with > numpy.linalg.inv I get the following error: > > ----> > > Traceback (most recent call last): > File "./bias_correction.py", line 381, in ? > if __name__ == "__main__": main() > File "./bias_correction.py", line 373, in main > (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, > obs, infile_obs, grids, infile_grids) > File "./bias_correction.py", line 297, in calc_bias_and_innov > K = make_kalman_gain(R, P_local, H) > File "./bias_correction.py", line 157, in make_kalman_gain > K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P, > MA.transpose(H))) + R ) ) > File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 149, in inv > return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) > TypeError: __array_wrap__() takes exactly 3 arguments (2 given) > > <---- > > Is this a known problem, and if so, what is the fix? It looks like the problem is that numpy.core.ma.MaskedArray.__array_wrap__ expects a "context" argument, but none gets passed. I'm not familiar with that, so I don't know what the fix is ... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ndarray at mac.com Thu Aug 10 14:41:35 2006 From: ndarray at mac.com (Sasha) Date: Thu, 10 Aug 2006 14:41:35 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: Inverting a matrix with masked values does not make much sense. Call "filled" method with an appropriate fill value before passing the matrix to "inv". On 8/10/06, Daran L.
Rife wrote: > Hello, > > I am a veteran user of Numeric and am trying > out the latest version of numpy (numpy 1.01b) > on Mac OS X 10.4 Tiger (8.7.0). > > When trying to invert a matrix with > numpy.linalg.inv I get the following error: > > ----> > > Traceback (most recent call last): > File "./bias_correction.py", line 381, in ? > if __name__ == "__main__": main() > File "./bias_correction.py", line 373, in main > (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, > obs, infile_obs, grids, infile_grids) > File "./bias_correction.py", line 297, in calc_bias_and_innov > K = make_kalman_gain(R, P_local, H) > File "./bias_correction.py", line 157, in make_kalman_gain > K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P, > MA.transpose(H))) + R ) ) > File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 149, in inv > return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) > TypeError: __array_wrap__() takes exactly 3 arguments (2 given) > > <---- > > Is this a known problem, and if so, what is the fix? > > > Thanks very much, > > > Daran > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 10 15:10:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 13:10:47 -0600 Subject: [Numpy-discussion] Numarray compatibility module available Message-ID: <44DB84B7.4060409@ieee.org> I've just finished a first version of the numarray compatibility module. It does not include all the names from the numarray name-space but it does include the most important ones, I believe. It also includes a slightly modified form of the numarray type-objects so that NumPy can recognize them as dtypes. I do not have a lot of code to test the compatibility layer with so any help will be appreciated. The compatibility layer still requires changes to certain methods and attributes on arrays. This is performed by the alter_code1.py module which I will be finishing over the next few hours. Once that is ready (and I've updated NumPy to work with the latest version of Python 2.5 in SVN) I want to make a 1.0b2 release (no later than Friday). I would appreciate it if several people could test the current SVN version of NumPy. In order to support several of the features of NumArray that I had missed, I engaged in a marathon coding sprint last night from about 6:00pm to 6:00am during which time I added output arguments to many of the functions in NumPy, and a clipmode argument to several others. I also added the C-API functions PyArray_OutputConverter and PyArray_ClipmodeConverter to make it easy to get these arguments from Python to C. 
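[Editorial note: for readers unfamiliar with the output-argument style Travis describes, the pattern lets a function write its result into a preallocated array rather than allocating a new one. A minimal sketch in current NumPy; the array values here are arbitrary, chosen only for illustration:]

```python
import numpy as np

a = np.arange(5)            # [0, 1, 2, 3, 4]
out = np.empty_like(a)      # preallocated destination array

# clip() writes its result into `out` instead of allocating a new array
np.clip(a, 1, 3, out=out)
print(out)
```

The same `out=` keyword works for ufunc-based reductions such as `sum` and `product`, which is what makes the output-argument support useful for avoiding temporaries in tight loops.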
This caused a change in the C-API that will require re-compilation for 1.0b2. I'm sorry about that. I'm really pushing for stability on the C-API. Now that the numarray compatibility module is complete, I'm more confident that we won't need any more changes to the C-API for version 1.0. Of course, only when numpy 1.0final comes out will that be a guarantee. While I'm relatively confident about the changes to NumPy, the changes were extensive enough that more testing is warranted, including another round of Valgrind tests. Unit-tests written to take advantage of the new output arguments on several of the functions (take, put, compress, clip, conjugate, argmax, argmin, and any function based on a ufunc method -- like sum, product, any, all, etc.) are particularly needed. If serious problems are discovered, then the 1.0b2 might be delayed again, but I'm really pushing to get 1.0b2 out the door soon. The numarray compatibility module and the oldnumeric compatibility module should hopefully help people adapt their code more quickly to NumPy. It's not fool-proof, though, so the best strategy is still to write to NumPy :-) as soon as you can. -Travis From drife at ucar.edu Thu Aug 10 15:33:52 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 13:33:52 -0600 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: <44DB8A20.3070605@ucar.edu> Hi Sasha, > Inverting a matrix with masked values does not make much sense. Call > "filled" method with an appropriate fill value before passing the > matrix to "inv". In principle you are right, but even though I use masked arrays in this operation, when the operation itself is done no masked values remain. Thus, my code works very well with the "old" Numeric--and has worked well for some time.
That said, I will try your suggestion of doing a "filled" on the matrix before sending it off to the inverse module. Thanks, Daran From ndarray at mac.com Thu Aug 10 16:07:17 2006 From: ndarray at mac.com (Sasha) Date: Thu, 10 Aug 2006 16:07:17 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB8A20.3070605@ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> <44DB8A20.3070605@ucar.edu> Message-ID: I see that Travis just fixed that by making context optional . I am not sure it is a good idea to allow use of ufuncs for which domain is not defined in ma. This may lead to hard to find bugs coming from ma arrays with nans in the data. I would rather see linalg passing the (func,args) context to wrap. That would not fix the reported problem, but will make diagnostic clearer. On 8/10/06, Daran L. Rife wrote: > Hi Sasha, > > > Inverting a matrix with masked values does not make much sense. Call > > "filled" method with an appropriate fill value before passing the > > matrix to "inv". > > In principle you are right, but even though I use masked arrays > in this operation, when the operation itself is done no masked > values remain. Thus, my code works very well with the "old" > Numeric--and has worked well for some time. That said, I will > try your suggestion of doing a "filled" on the matrix before > sending it off to the inverse module. > > > Thanks, > > > Daran > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 10 16:22:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 14:22:21 -0600 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> <44DB8A20.3070605@ucar.edu> Message-ID: <44DB957D.5020008@ieee.org> Sasha wrote: > I see that Travis just fixed that by making context optional > . I am not sure > it is a good idea to allow use of ufuncs for which domain is not > defined in ma. This may lead to hard to find bugs coming from ma > arrays with nans in the data. I would rather see linalg passing the > (func,args) context to wrap. That would not fix the reported problem, > but will make diagnostic clearer. > > This can be done as well. The problem is that __array_wrap__ is used in quite a few places (without context) and ma needs to have a default behavior when context is not supplied. -Travis From haase at msg.ucsf.edu Thu Aug 10 19:43:09 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 16:43:09 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' Message-ID: <200608101643.10100.haase@msg.ucsf.edu> Hi, trying to convert my memmap - records - numarray code for reading a image file format (Mrc). There are 10 fields of strings (each 80 chars long) in the header: in numarray I used the format string '10a80' This results in a single value in numpy. Same after changing it to '10S80'. 
Am I doing something wrong !? Thanks, Sebastian Haase From haase at msg.ucsf.edu Thu Aug 10 20:23:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 17:23:12 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' In-Reply-To: <44DBC7E4.1010904@ieee.org> References: <200608101643.10100.haase@msg.ucsf.edu> <44DBC7E4.1010904@ieee.org> Message-ID: <200608101723.12514.haase@msg.ucsf.edu> On Thursday 10 August 2006 16:57, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > trying to convert my memmap - records - numarray code for reading a > > image file format (Mrc). > > There are 10 fields of strings (each 80 chars long) in the header: > > in numarray I used the format string '10a80' > > This results in a single value in numpy. > > Same after changing it to '10S80'. > > > > Am I doing something wrong !? > > Not that I can see. But, it's possible that there is a > misunderstanding of what '10a80' represents. > > What is giving you the value? > > For example, I can create a file with 10, 80-character strings and open it > using memmap and a data-type of > > dt = numpy.dtype('10a80') > > and it seems to work fine. > > -Travis This is what I get: It claims that the 'title' field (the last one) is 10 times 'S80' but trying to read that array from the first (and only) record (a.Mrc._hdrArray.title[0]) I just get None... >>> a=Mrc.bindFile('Heather2/GFPtublive-Vecta43') TODO: byteorder >>> repr(a.Mrc._hdrArray.dtype) 'dtype([('Num', '>> a.Mrc._hdrArray.NumTitles [3] >>> a.Mrc._hdrArray.NumTitles[0] 3 >>> type(a.Mrc._hdrArray.title[0]) >>> type(a.Mrc._hdrArray.title[1]) Traceback (most recent call last): File "", line 1, in ? File "/home/haase/qqq/lib/python/numpy/core/defchararray.py", line 45, in __getitem__ val = ndarray.__getitem__(self, obj) IndexError: index out of bounds I get the same on byteswapped data and non-byteswapped data. 
-Sebastian From haase at msg.ucsf.edu Thu Aug 10 20:42:51 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 17:42:51 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? Message-ID: <200608101742.51914.haase@msg.ucsf.edu> Hi, Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is non-native byteorder ? If not, what functions does ? - Sebastian Haase From oliphant.travis at ieee.org Thu Aug 10 21:45:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 19:45:10 -0600 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <200608101742.51914.haase@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> Message-ID: <44DBE126.7030001@ieee.org> Sebastian Haase wrote: > Hi, > Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is non-native > byteorder ? > > If not, what functions does ? > It can if you pass in a data-type with the right byteorder (or use a native built-in data-type). In NumPy, it's the data-type that carries the "byte-order" information. So, there are lots of ways to "fix" the byte-order. Of course there is still the difference between "fixing" the byte-order and simply "viewing" the memory in the correct byte-order. The former physically flips bytes around, the latter just flips them on calculation and presentation.
-Travis From haase at msg.ucsf.edu Fri Aug 11 00:25:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 21:25:22 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' In-Reply-To: <44DBE6B9.7000007@ieee.org> References: <200608101643.10100.haase@msg.ucsf.edu> <44DBC7E4.1010904@ieee.org> <200608101723.12514.haase@msg.ucsf.edu> <44DBE6B9.7000007@ieee.org> Message-ID: <44DC06B2.5000306@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> This is what I get: It claims that the 'title' field (the last one) >> is 10 times 'S80' but trying to read that array from the first (and >> only) record (a.Mrc._hdrArray.title[0]) I just get None... >> > Hopefully that problem is resolved now. I should discuss a little bit > about how the 10-element sub-array field is handled by NumPy. > > Any sub-array present causes the shape of the returned array for a given > field to grow by the sub-array size. > > So, in your case you have a (10,)-shape subarray in the title field. > > Thus if g is a record-array of shape gshape g.title will be a chararray > of shape gshape + (10,) > > In this case of a 1-d array with 1-element we have gshape = (1,). > Therefore, g.title will be a (1,10) chararray and g[0].title will be a > (10,)-shaped chararray. > > -Travis > Thanks for fixing everything so quickly - I'll test it tomorrow. BTW: are you intentionally sending the last few messages ONLY to me and NOT to the mailing list !? I actually think the mailing list should be configured so that a "normal reply" automatically defaults to go only (!) to the list. (I'm on some other lists that know how to do that). Who would be able to change that for the numpy and the scipy list !? Thanks again, Sebastian 
From haase at msg.ucsf.edu Fri Aug 11 00:32:28 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 21:32:28 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <44DBE126.7030001@ieee.org> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> Message-ID: <44DC085C.7010009@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi, >> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is >> non-native byteorder ? >> >> If not, what functions does ? >> > > It can if you pass in a data-type with the right byteorder (or use a > native built-in data-type). > > In NumPy, it's the data-type that carries the "byte-order" > information. So, there are lot's of ways to "fix" the byte-order. > So then the question is: what is the easiest way to say: give me the equivalent type of dtype, but with byteorder '<' (or '=') !? It would be cumbersome (and ugly ;-) ) if one would have to "manually assemble" such a construct every time ... > Of course there is still the difference between "fixing" the byte-order > and simply "viewing" the memory in the correct byte-order. The former > physically flips bytes around, the latter just flips them on calculation > and presentation. I understand. 
I need something that I can feed into my C routines that are too dumb to handle non-contiguous or byte-swapped data. - Sebastian From drife at ucar.edu Fri Aug 11 01:50:27 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 23:50:27 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB5B3F.9080203@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> Message-ID: <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> Hi Chris, I tried your suggestion of installing and running the pre-built packages at . I am sorry to report that the pre-built MacPython and Numeric 24.2 package did not work. I get the same "Segmentation Fault" that I got when I built Python 2.4.3 and Numeric 24.2 from source. I tried running my code with debug prints in various places to try and pin down where the problem arises. Thus, I ran my code a number of times. Strangely, it never crashes in the same place twice. I'm not sure what to do next, but I will keep at it. As a last resort, I may build ATLAS and LAPACK from source, then build Numeric 23.8 against these, and try installing this into MacPython. I hate having to try this, but I cannot do any development without a functioning Python and Numeric. Thanks again, Daran -- > Daran L. Rife wrote: >> Many thanks for the reply. This was my first attempt >> to build and use numpy; > > "numpy" used to be a generic name for the Numerical extensions to Python. Now there are three versions: > > "Numeric": The original, now at version 24.2 This is probably the last version that will be produced. > > "numarray": This was designed to be the "next generation" array package. It has some nice additional features that Numeric does not have, but is missing some as well. It is at version 1.5.1. 
it may see some bug fix releases in the future, but probably won't see any more major development. > > "numpy": this is the "grand unification" array package. It is based on the Numeric code base, and is designed to have the best features of Numeric and numarray, plus some extra good stuff. It is now at version 1.0beta, with an expected release date for 1.0final sometime this fall. It is under active development, the API is pretty stable now, and it appears to have the consensus of the numerical python community as the "way of the future" > > I wrote all that out so that you can be clear which package you are having trouble with -- you've used both the term "Numeric" and "numpy" in your posts, and there is some confusion. > > If you are working on a project that does not need to be released for a few months (i.e. after numpy has reached 1.0 final), I'd use numpy, rather than Numeric or numarray. > > Also: on OS-X, there are far too many ways to build Python. When you report a problem, you need to define exactly which python build you are using, and this goes beyond python version -- fink? darwinports? built-it-from-source? Framework? Universal, etc... > > The MacPython community is doing its best to standardize on the Universal Build of 2.4.3 that you can find here: > > http://www.pythonmac.org/packages/py24-fat/ > > There you will also find pre-built packages for Numeric24.2, > numarray1.5.1, and numpy0.9.8 > > Have you tried any of those? They should be built against Apple's vecLib. There isn't a package for numpy 1.0beta there yet. I may add one soon. > > > Interestingly, I can get Numeric version 23.8 to build and > > run just fine, but it appears that the dotblas (BLAS > > optimized matrixmultiply/dot/innerproduct) does not properly > > get built in. Thus, all my matrix operations are -very- slow. 
> > I'm not sure of the dates, but that is probably a version that didn't have the check for Apple's vecLib in the setup.py, so it built with the built-in lapack-lite instead. You can compare the setup.py files from that and newer versions to see how to make it build against vectLib, but I suspect if you do that, you'll see the same problems. > > Also, please send a small test script that crashes for you, so others can test it. > > -Chris > > > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > From ainulinde at gmail.com Fri Aug 11 08:41:54 2006 From: ainulinde at gmail.com (ainulinde) Date: Fri, 11 Aug 2006 20:41:54 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: <44DB64A2.60203@enthought.com> References: <44DB64A2.60203@enthought.com> Message-ID: can't get any seeds for this torrent and any other download methods? thanks On 8/11/06, Bryce Hendrix wrote: > For those not able to make SciPy 2006 next week, or who would like to > download the ISO a few days early, its available at > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > packages that ship with the standard Ubuntu CD. Here's what I was able > to add: > > * SciPy build from svn (Wed, 12:00 CST) > * NumPy built from svn (Wed, 12:00 CST) > * Matplotlib built from svn (Wed, 12:00 CST) > * IPython built from svn (Wed, 12:00 CST) > * Enthought built from svn (Wed, 16:00 CST) > * ctypes 1.0.0 > * hdf5 1.6.5 > * networkx 0.31 > * Pyrex 0.9.4.1 > * pytables 1.3.2 > > All of the svn checkouts are zipped in /src, if you'd like to build from > a svn version newer than what was shipped, simple copy the compressed > package to your home dir, uncompress it, run "svn upate", and built it. 
> > Please note: This ISO was built rather hastily, uses un-official code, > and received very little testing. Please don't even consider using this > in a production environment. > > Bryce > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From bhendrix at enthought.com Fri Aug 11 11:56:38 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 11 Aug 2006 10:56:38 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: <44DCA8B6.6010807@enthought.com> For those behind firewalls or have other problems connecting via bittorrent, the ISO can also be found here: http://code.enthought.com/downloads/scipy2006-i386.iso Bryce ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > >> For those not able to make SciPy 2006 next week, or who would like to >> download the ISO a few days early, its available at >> http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. >> >> We squashed a lot onto the CD, so I also had to trim > 100 MB of >> packages that ship with the standard Ubuntu CD. 
Here's what I was able >> to add: >> >> * SciPy build from svn (Wed, 12:00 CST) >> * NumPy built from svn (Wed, 12:00 CST) >> * Matplotlib built from svn (Wed, 12:00 CST) >> * IPython built from svn (Wed, 12:00 CST) >> * Enthought built from svn (Wed, 16:00 CST) >> * ctypes 1.0.0 >> * hdf5 1.6.5 >> * networkx 0.31 >> * Pyrex 0.9.4.1 >> * pytables 1.3.2 >> >> All of the svn checkouts are zipped in /src, if you'd like to build from >> a svn version newer than what was shipped, simple copy the compressed >> package to your home dir, uncompress it, run "svn upate", and built it. >> >> Please note: This ISO was built rather hastily, uses un-official code, >> and received very little testing. Please don't even consider using this >> in a production environment. >> >> Bryce >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ainulinde at gmail.com Fri Aug 11 12:49:27 2006 From: ainulinde at gmail.com (ainulinde) Date: Sat, 12 Aug 2006 00:49:27 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: <44DCA8B6.6010807@enthought.com> References: <44DB64A2.60203@enthought.com> <44DCA8B6.6010807@enthought.com> Message-ID: Bryce, thanks. this http works for me, the download speed is about 30k/s and the bt can't download anything, just one ip in the userlist(can't download anything from him/her).don't know why. maybe there is sth wrong with my network. On 8/11/06, Bryce Hendrix wrote: > > For those behind firewalls or have other problems connecting via > bittorrent, the ISO can also be found here: > > > http://code.enthought.com/downloads/scipy2006-i386.iso > > Bryce > > > ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > > > For those not able to make SciPy 2006 next week, or who would like to > download the ISO a few days early, its available at > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > packages that ship with the standard Ubuntu CD. 
Here's what I was able > to add: > > * SciPy build from svn (Wed, 12:00 CST) > * NumPy built from svn (Wed, 12:00 CST) > * Matplotlib built from svn (Wed, 12:00 CST) > * IPython built from svn (Wed, 12:00 CST) > * Enthought built from svn (Wed, 16:00 CST) > * ctypes 1.0.0 > * hdf5 1.6.5 > * networkx 0.31 > * Pyrex 0.9.4.1 > * pytables 1.3.2 > > All of the svn checkouts are zipped in /src, if you'd like to build from > a svn version newer than what was shipped, simple copy the compressed > package to your home dir, uncompress it, run "svn upate", and built it. > > Please note: This ISO was built rather hastily, uses un-official code, > and received very little testing. Please don't even consider using this > in a production environment. > > Bryce > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > From haase at msg.ucsf.edu Fri Aug 11 15:22:01 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 12:22:01 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <44DC085C.7010009@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu> Message-ID: <200608111222.01938.haase@msg.ucsf.edu> On Thursday 10 August 2006 21:32, Sebastian Haase wrote: > Travis Oliphant wrote: > > Sebastian Haase wrote: > >> Hi, > >> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is > >> non-native byteorder ? > >> > >> If not, what functions does ? > > > > It can if you pass in a data-type with the right byteorder (or use a > > native built-in data-type). > > > > In NumPy, it's the data-type that carries the "byte-order" > > information. So, there are lot's of ways to "fix" the byte-order. > > So then the question is: what is the easiest way to say: > give me the equivalent type of dtype, but with byteorder '<' (or '=') !? > I would be cumbersome (and ugly ;-) ) if one would have to "manually > assemble" such a construct every time ... I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py

def normalize_descr(descr):
    "Normalize a description adding the platform byteorder."
    out = []
    for item in descr:
        dtype = item[1]
        if isinstance(dtype, str):
            if dtype[0] not in ['|','<','>']:
                onebyte = dtype[1:] == "1"
                if onebyte or dtype[0] in ['S', 'V', 'b']:
                    dtype = "|" + dtype
                else:
                    dtype = byteorder + dtype
            if len(item) > 2 and item[2] > 1:
                nitem = (item[0], dtype, item[2])
            else:
                nitem = (item[0], dtype)
            out.append(nitem)
        elif isinstance(item[1], list):
            l = []
            for j in normalize_descr(item[1]):
                l.append(j)
            out.append((item[0], l))
        else:
            raise ValueError("Expected a str or list and got %s" % \
                (type(item)))
    return out

Is that what I was talking about !? It's quite a big animal. Would this be needed "everytime" I want to get a "systembyte-ordered version" of a given type !? - Sebastian > > > Of course there is still the difference between "fixing" the byte-order > > and simply "viewing" the memory in the correct byte-order. The former > > physically flips bytes around, the latter just flips them on calculation > > and presentation. > > I understand. I need something that I can feed into my C routines that > are to dumb to handle non-contiguous or byte-swapped data . > > - Sebastian From oliphant.travis at ieee.org Fri Aug 11 16:02:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 14:02:28 -0600 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <200608111222.01938.haase@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu> <200608111222.01938.haase@msg.ucsf.edu> Message-ID: <44DCE254.9020107@ieee.org> Sebastian Haase wrote:
> I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py
>
>
> def normalize_descr(descr):
> "Normalize a description adding the platform byteorder."
>
> return out
>
>
> Is that what I was talking about !? It's quite a big animal.
> Would this be needed "everytime" I want to get a "systembyte-ordered version"
> of a given type !?
> No, I'm not even sure why exactly that was written but it's just in the testing code. I think the email I sent yesterday got lost because I sent it CC: numpy-discussion with no To: address. Here's what I said (more or less) in that email: You can use the .newbyteorder(endian='s') method of the dtype object to get a new data-type with a different byteorder. The possibilities for endian are 'swap', 'big' ('>'), 'little' ('<'), or 'native' ('='). This will descend down a complicated data-type and change all the byte-orders appropriately. Then you can use .astype(newtype) to convert to the new byteorder. The .isnative attribute of the data-type will tell you if the data-type (or all of it's fields in recent SVN) are in native byte-order. -Travis From faltet at carabos.com Fri Aug 11 16:30:28 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 11 Aug 2006 22:30:28 +0200 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy Message-ID: <200608112230.28727.faltet@carabos.com> Hi, I was tracking down a memory leak in PyTables and it boiled down to a problem in the array protocol. The issue is easily exposed by: for i in range(1000000): numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) and looking at the memory consumption of the process. The same happens with: for i in range(1000000): numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3)) However, the numpy<--numarray sense seems to work well. for i in range(1000000): numpy.array(numarray.zeros(type="Float64", shape=3)) Using numarray 1.5.1 and numpy 1.0b1 I think this is a relatively important problem, because it somewhat prevents a smooth transition from numarray to NumPy. Thanks, -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
??Enjoy Data "-" From jmiller at stsci.edu Fri Aug 11 17:13:33 2006 From: jmiller at stsci.edu (Todd Miller) Date: Fri, 11 Aug 2006 17:13:33 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <200608112230.28727.faltet@carabos.com> References: <200608112230.28727.faltet@carabos.com> Message-ID: <44DCF2FD.3000602@stsci.edu> Francesc Altet wrote: > Hi, > > I was tracking down a memory leak in PyTables and it boiled down to a problem > in the array protocol. The issue is easily exposed by: > > for i in range(1000000): > numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) > > and looking at the memory consumption of the process. The same happens with: > > for i in range(1000000): > numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3)) > > However, the numpy<--numarray sense seems to work well. > > for i in range(1000000): > numpy.array(numarray.zeros(type="Float64", shape=3)) > > Using numarray 1.5.1 and numpy 1.0b1 > > I think this is a relatively important problem, because it somewhat prevents a > smooth transition from numarray to NumPy. > > Thanks, > > I looked at this a little with a debug python and figure it's a bug in numpy.zeros(): >>> numpy.zeros(dtype=numpy.float64, shape=3) array([ 0., 0., 0.]) [147752 refs] >>> numpy.zeros(dtype=numpy.float64, shape=3) array([ 0., 0., 0.]) [147753 refs] >>> numpy.zeros(dtype=numpy.float64, shape=3) array([ 0., 0., 0.]) [147754 refs] >>> numarray.array([1,2,3,4]) array([1, 2, 3, 4]) [147772 refs] >>> numarray.array([1,2,3,4]) array([1, 2, 3, 4]) [147772 refs] >>> numarray.array([1,2,3,4]) array([1, 2, 3, 4]) [147772 refs] Regards, Todd From haase at msg.ucsf.edu Fri Aug 11 17:44:16 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 14:44:16 -0700 Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range Message-ID: <200608111444.16236.haase@msg.ucsf.edu> Hi! 
b is a non-native byteorder array of type int16 but see further down: same after converting to native ... >>> repr(b.dtype) 'dtype('>i2')' >>> b.dtype.isnative False >>> b.shape (38, 512, 512) >>> b.max() 1279 >>> b.min() 0 >>> b.mean() -65.279878014 >>> U.mmms(b) # my "useful" function for min,max,mean,stddev (0, 1279, 365.878016723, 123.112379036) >>> c = b.copy() >>> c.max() 1279 >>> c.min() 0 >>> c.mean() -65.279878014 >>> d = N.asarray(b, b.dtype.newbyteorder('=')) >>> d.dtype.isnative True >>> >>> >>> d.max() 1279 >>> d.min() 0 >>> d.mean() -65.279878014 >>> N.__version__ '1.0b2.dev2996' >>> Sorry that I don't have a simple example - what could be wrong !? - Sebastian Haase From faltet at carabos.com Fri Aug 11 16:55:06 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 11 Aug 2006 22:55:06 +0200 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <44DCE254.9020107@ieee.org> References: <200608101742.51914.haase@msg.ucsf.edu> <200608111222.01938.haase@msg.ucsf.edu> <44DCE254.9020107@ieee.org> Message-ID: <200608112255.08049.faltet@carabos.com> A Divendres 11 Agost 2006 22:02, Travis Oliphant va escriure: > Sebastian Haase wrote: > > I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py > > > > > > def normalize_descr(descr): > > "Normalize a description adding the platform byteorder." > > > > return out > > > > > > > > Is that what I was talking about !? It's quite a big animal. > > Would this be needed "everytime" I want to get a "systembyte-ordered > > version" of a given type !? > > No, I'm not even sure why exactly that was written but it's just in the > testing code. I think this is my fault. Some months ago I contributed some testing code for checking numerical types, and ended with this 'animal'. Sorry about that ;-) Cheers! -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
Enjoy Data "-" From oliphant.travis at ieee.org Fri Aug 11 18:06:12 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 16:06:12 -0600 Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range In-Reply-To: <200608111444.16236.haase@msg.ucsf.edu> References: <200608111444.16236.haase@msg.ucsf.edu> Message-ID: <44DCFF54.7070701@ieee.org> Sebastian Haase wrote: > Hi! > b is a non-native byteorder array of type int16 > but see further down: same after converting to native ... > >>>> repr(b.dtype) >>>> > 'dtype('>i2')' > The problem is no-doubt related to "wrapping" for integers. Your total is getting too large to fit into the reducing data-type. What does d.sum() give you? You can add d.mean(dtype='d') to force reduction over doubles. -Travis From oliphant.travis at ieee.org Fri Aug 11 18:11:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 16:11:03 -0600 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <44DCF2FD.3000602@stsci.edu> References: <200608112230.28727.faltet@carabos.com> <44DCF2FD.3000602@stsci.edu> Message-ID: <44DD0077.4030403@ieee.org> Todd Miller wrote: >> >> > I looked at this a little with a debug python and figure it's a bug in > numpy.zeros(): > > Hmmm. I thought of that, but could not get any memory leak by just creating zeros in a for loop. In other words:

for i in xrange(10000000):
    numpy.zeros(dtype=numpy.float64, shape=3)

does not leak. So, it seems to be related to the array protocol. I have not been able to spot what is going on though. There does not seem to be any reference-counting problem that I can see. -Travis From svetosch at gmx.net Fri Aug 11 18:23:01 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 12 Aug 2006 00:23:01 +0200 Subject: [Numpy-discussion] why is default axis always different? 
Message-ID: <44DD0345.9000102@gmx.net> Hi, notice the (confusing, imho) different defaults for the axis of the following related functions:

nansum(a, axis=-1) Sum the array over the given axis, treating NaNs as 0.

sum(x, axis=None, dtype=None) Sum the array over the given axis. The optional dtype argument is the data type for intermediate calculations.

average(a, axis=0, weights=None, returned=False) Average the array over the given axis. If the axis is None, average over all dimensions of the array. Equivalent to a.mean(axis), but with a default axis of 0 instead of None.

>>> numpy.__version__ '1.0b2.dev2973'

Shouldn't those kinds of functions have the same default behavior? So is this a bug or am I missing something? Thanks for enlightenment, Sven From oliphant.travis at ieee.org Fri Aug 11 18:30:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 16:30:51 -0600 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <200608112230.28727.faltet@carabos.com> References: <200608112230.28727.faltet@carabos.com> Message-ID: <44DD051B.1000603@ieee.org> Francesc Altet wrote: > Hi, > > I was tracking down a memory leak in PyTables and it boiled down to a problem > in the array protocol. The issue is easily exposed by: > > for i in range(1000000): > numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) > > More data: The following code does not leak:

import numpy
import sys
for i in xrange(10000000):
    a = numpy.zeros(dtype=numpy.float64,shape=3)
    b = a.__array_struct__

as verified by watching the memory growth. As far as numpy knows this is all it's supposed to do. This seems to indicate that something is going on inside numarray.array(a) because once you add that line to the loop, memory consumption shows up. In fact, you can just add the line

a = _numarray._array_from_array_struct(a)

to see the memory growth problem. 
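The watch-the-process-RSS approach used in this thread (and Todd's debug-Python refcount check) can be approximated on a normal interpreter with the standard-library tracemalloc module. This is only a sketch of the general technique, not what the posters actually ran; the helper name and thresholds are mine:

```python
import tracemalloc

def net_growth(fn, n=5000):
    """Crude leak probe: run fn() n times between two heap snapshots
    and return the net number of bytes still allocated afterwards.
    A steadily leaking fn grows roughly in proportion to n; a
    well-behaved one stays near zero (modulo allocator noise)."""
    for _ in range(n):          # warm up caches and freelists first
        fn()
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(n):
        fn()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return sum(s.size_diff for s in after.compare_to(before, 'filename'))
```

A callable that keeps a reference each call shows growth of roughly n times the object size; applied to the numarray.array(numpy.zeros(...)) loop from the report, this kind of probe would show the same steady climb the posters saw in the process's memory usage.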
-Travis From oliphant.travis at ieee.org Fri Aug 11 18:52:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 16:52:15 -0600 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <200608112230.28727.faltet@carabos.com> References: <200608112230.28727.faltet@carabos.com> Message-ID: <44DD0A1F.4010509@ieee.org> Francesc Altet wrote: > Hi, > > I was tracking down a memory leak in PyTables and it boiled down to a problem > in the array protocol. The issue is easily exposed by: > > for i in range(1000000): > numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) > > and looking at the memory consumption of the process. The same happens with: > > for i in range(1000000): > numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3)) > > However, the numpy<--numarray sense seems to work well. > > for i in range(1000000): > numpy.array(numarray.zeros(type="Float64", shape=3)) > > Using numarray 1.5.1 and numpy 1.0b1 > > I think this is a relatively important problem, because it somewhat prevents a > smooth transition from numarray to NumPy. > > I tracked the leak to the numarray function NA_FromDimsStridesDescrAndData. This function calls NA_NewAllFromBuffer with a brand-new buffer object when data is passed in (like in the case with the array protocol). That function then takes a reference to the buffer object but then the calling function never releases the reference it already holds. This creates the leak. I added the line

if (data) {Py_DECREF(buf);}

right after the call to NA_NewAllFromBuffer and the leak disappeared. For what it's worth, I also think the base object for the new numarray object should be the object passed in and not the C-object that is created from it. 
In other words in the NA_FromArrayStruct function a->base = cobj should be replaced with Py_INCREF(obj) a->base = obj Py_DECREF(cobj) Best, -Travis From haase at msg.ucsf.edu Fri Aug 11 23:40:27 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 20:40:27 -0700 Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range In-Reply-To: <44DCFF54.7070701@ieee.org> References: <200608111444.16236.haase@msg.ucsf.edu> <44DCFF54.7070701@ieee.org> Message-ID: <44DD4DAB.5040509@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi! >> b is a non-native byteorder array of type int16 >> but see further down: same after converting to native ... >> >>>>> repr(b.dtype) >>>>> >> 'dtype('>i2')' >> > > The problem is no-doubt related to "wrapping" for integers. Your total is > getting too large to fit into the reducing data-type. > > What does > > d.sum() give you? I can't check that particular array until Monday... > > You can add d.mean(dtype='d') to force reduction over doubles. This almost sound like what I reported is something like a feature !? Is there a sensible / generic way to avoid those "accident" ? Maybe it must be the default to reduce int8, uint8, int16, uint16 into doubles !? - Sebastian From charlesr.harris at gmail.com Sat Aug 12 00:04:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 11 Aug 2006 22:04:44 -0600 Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() ..
arr.max() range In-Reply-To: <44DD4DAB.5040509@msg.ucsf.edu> References: <200608111444.16236.haase@msg.ucsf.edu> <44DCFF54.7070701@ieee.org> <44DD4DAB.5040509@msg.ucsf.edu> Message-ID: On 8/11/06, Sebastian Haase wrote: > > Travis Oliphant wrote: > > Sebastian Haase wrote: > >> Hi! > >> b is a non-native byteorder array of type int16 > >> but see further down: same after converting to native ... > >> > >>>>> repr(b.dtype) > >>>>> > >> 'dtype('>i2')' > >> > > > > The problem is no-doubt related to "wrapping" for integers. Your total > is > > getting too large to fit into the reducing data-type. > > > > What does > > > > d.sum() give you? > I can't check that particular array until Monday... > > > > > You can add d.mean(dtype='d') to force reduction over doubles. > This almost sound like what I reported is something like a feature !? > Is there a sensible / generic way to avoid those "accident" ? Maybe it > must be the default to reduce int8, uint8, int16, uint16 into doubles !? Hard to say. I always bear the precision in mind when accumulating numbers but even so it is possible to get unexpected results. Even doubles can give problems if there are a few large numbers mixed with many small numbers. That said, folks probably expect means to be accurate and don't want modular arithmetic, so doubles would probably be a better default. It would be slower though. I think there was a discussion of this problem previously in regard to the reduce methods. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Sat Aug 12 00:10:45 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:10:45 -0700 Subject: [Numpy-discussion] is cygwin patch from from ticket #114 still working !? Message-ID: <44DD54C5.9000100@msg.ucsf.edu> This is what I get ? haase at doe:~/myCVS/numpy: patch.exe -b -p0 < ~/winbuilding3.diff patching file numpy/distutils/misc_util.py Reversed (or previously applied) patch detected! 
Assume -R? [n] Thanks, Sebastian From haase at msg.ucsf.edu Sat Aug 12 00:18:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:18:36 -0700 Subject: [Numpy-discussion] is cygwin patch from from ticket #114 still working !? In-Reply-To: <44DD54C5.9000100@msg.ucsf.edu> References: <44DD54C5.9000100@msg.ucsf.edu> Message-ID: <44DD569C.6050105@msg.ucsf.edu> Sebastian Haase wrote: > This is what I get ? > > haase at doe:~/myCVS/numpy: patch.exe -b -p0 < ~/winbuilding3.diff > patching file numpy/distutils/misc_util.py > Reversed (or previously applied) patch detected! Assume -R? [n] > > Thanks, > Sebastian OK - I think I can answer myself. No, but it's not needed anymore ! It looks like it compiled fine without applying it - Sebastian From haase at msg.ucsf.edu Sat Aug 12 00:31:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:31:20 -0700 Subject: [Numpy-discussion] Does a C-API mismatch require a fatal(!) program termination !? (crash on import !) Message-ID: <44DD5998.7000301@msg.ucsf.edu> Hi, I was just wondering if it might be possible to raise an ImportError instead of terminating python; look what I get: haase at doe:~: python Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> import sys >>> sys.path.append("PrCyg") >>> from Priithon import seb RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. haase at doe:~: Assume that you are running an interactive session, analysing some important[;-)] data. Then you think: "Oh, I should try this one (maybe little old) module on this" ... so you try to import ... and ... 
suddenly the entire python application crashes. When your shell application runs without a terminal you don't even get to read the error message ! - Sebastian Haase From jmiller at stsci.edu Sat Aug 12 07:05:51 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sat, 12 Aug 2006 07:05:51 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <44DD051B.1000603@ieee.org> References: <200608112230.28727.faltet@carabos.com> <44DD051B.1000603@ieee.org> Message-ID: <44DDB60F.9050009@stsci.edu> Travis Oliphant wrote: > As far as numpy knows this is all it's supposed to do. This seems to > indicate that something is going on inside numarray.array(a) > > because once you had that line to the loop, memory consumption shows up. > > In fact, you can just add the line > > a = _numarray._array_from_array_struct(a) > This does demonstrate a huge leak I'll look into. Thanks. Regards, Todd From jmiller at stsci.edu Sat Aug 12 08:37:39 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sat, 12 Aug 2006 08:37:39 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <44DD0A1F.4010509@ieee.org> References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> Message-ID: <44DDCB93.5080103@stsci.edu> I agree with all of Travis' comments below and committed the suggested changes to numarray CVS. I found one other numarray change needed for Francesc's examples to run (apparently) leak-free: Py_INCREF(obj) Py_XDECREF(a->base) a->base = obj Py_DECREF(cobj) Thanks Travis! Regards, Todd Travis Oliphant wrote: > Francesc Altet wrote: > >> Hi, >> >> I was tracking down a memory leak in PyTables and it boiled down to a problem >> in the array protocol. The issue is easily exposed by: >> >> for i in range(1000000): >> numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) >> >> and looking at the memory consumption of the process. 
The same happens with: >> >> for i in range(1000000): >> numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3)) >> >> However, the numpy<--numarray sense seems to work well. >> >> for i in range(1000000): >> numpy.array(numarray.zeros(type="Float64", shape=3)) >> >> Using numarray 1.5.1 and numpy 1.0b1 >> >> I think this is a relatively important problem, because it somewhat prevents a >> smooth transition from numarray to NumPy. >> >> >> > > I tracked the leak to the numarray function > > NA_FromDimsStridesDescrAndData > > This function calls NA_NewAllFromBuffer with a brand-new buffer object > when data is passed in (like in the case with the array protocol). That > function then takes a reference to the buffer object but then the > calling function never releases the reference it already holds. This > creates the leak. > > I added the line > > if (data) {Py_DECREF(buf);} > > right after the call to NA_NewAllFromBuffer and the leak disappeared. > > For what it's worth, I also think the base object for the new numarray > object should be the object passed in and not the C-object that is > created from it. > > In other words in the NA_FromArrayStruct function > > a->base = cobj > > should be replaced with > > Py_INCREF(obj) > a->base = obj > Py_DECREF(cobj) > > > Best, > > > -Travis > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From faltet at carabos.com Sat Aug 12 11:53:31 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 12 Aug 2006 17:53:31 +0200 Subject: [Numpy-discussion] Memory leak in array protocol numarray <--numpy In-Reply-To: <44DDCB93.5080103@stsci.edu> References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> <44DDCB93.5080103@stsci.edu> Message-ID: <200608121753.33150.faltet@carabos.com> On Saturday 12 August 2006 14:37, Todd Miller wrote: > I agree with all of Travis' comments below and committed the suggested > changes to numarray CVS. I found one other numarray change needed > for Francesc's examples to run (apparently) leak-free: > > Py_INCREF(obj) > Py_XDECREF(a->base) > a->base = obj > Py_DECREF(cobj) > > Thanks Travis! Hey! I checked this morning Travis' patch and seems to work well for me. I'll add yours as well later on and see... BTW, where exactly I've to add the above lines? Many thanks Travis and Todd. You are great! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
??Enjoy Data "-" From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 12:27:07 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sat, 12 Aug 2006 11:27:07 -0500 Subject: [Numpy-discussion] build bug Message-ID: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu> Just tried to build svn 2999 on OSX 10.3 w/ python2.3 and encountered a bug in numpy/core/setup.py on line 102 if sys.version[:3] < '2.4': #kws_args['headers'].append('stdlib.h') if check_func('strtod'): moredefs.append(('PyOS_ascii_strtod', 'strtod')) I've commented out the kws_args because it is not defined in this function. Appeared to build fine w/o it. JDH From cookedm at physics.mcmaster.ca Sat Aug 12 14:27:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 12 Aug 2006 14:27:26 -0400 Subject: [Numpy-discussion] build bug In-Reply-To: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu> References: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <20060812182726.GA930@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 11:27:07AM -0500, John Hunter wrote: > > Just tried to build svn 2999 on OSX 10.3 w/ python2.3 and encountered > a bug in numpy/core/setup.py on line 102 > > if sys.version[:3] < '2.4': > #kws_args['headers'].append('stdlib.h') > if check_func('strtod'): > moredefs.append(('PyOS_ascii_strtod', 'strtod')) > > I've commented out the kws_args because it is not defined in this > function. Appeared to build fine w/o it. Whoops, missed that one. Fixed in svn. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Fri Aug 11 03:12:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 11 Aug 2006 01:12:45 -0600 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? 
In-Reply-To: <44DC085C.7010009@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu> Message-ID: <44DC2DED.7010102@ieee.org> Sebastian Haase wrote: > Travis Oliphant wrote: >> Sebastian Haase wrote: >>> Hi, >>> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is >>> non-native byteorder ? >>> >>> If not, what functions does ? >>> >> >> It can if you pass in a data-type with the right byteorder (or use a >> native built-in data-type). >> >> In NumPy, it's the data-type that carries the "byte-order" >> information. So, there are lot's of ways to "fix" the byte-order. >> > So then the question is: what is the easiest way to say: > give me the equivalent type of dtype, but with byteorder '<' (or '=') !? > I would be cumbersome (and ugly ;-) ) if one would have to "manually > assemble" such a construct every time ... Two things. Every dtype object has the method self.newbyteorder(endian) which can be used to either swap the byte order or apply a new one to every sub-field. endian can be '<', '>', '=', 'swap', 'little', 'big' If you want to swap bytes based on whether or not the data-type is machine native you can do something like the following if not a.dtype.isnative: a = a.astype(a.dtype.newbyteorder()) You can make sure the array has the correct data-type using .astype(newtype) or array(a, newtype). You can also set the data-type of the array a.dtype = newtype but this won't change anything just how they are viewed. You can always byteswap the data explicitly a.byteswap(True) will do it in-place. 
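[Editor's note: the distinction Travis draws here -- swapping the bytes versus relabeling their interpretation -- can be illustrated with the stdlib struct module alone (pure Python, not the numpy API itself):]

```python
import struct

raw = b'\x01\x00'  # two bytes viewed as a 16-bit signed integer

little = struct.unpack('<h', raw)[0]  # little-endian reading: 1
big = struct.unpack('>h', raw)[0]     # big-endian reading: 256

# Relabeling the byte order (like a.dtype = a.dtype.newbyteorder())
# changes only the interpretation; physically reversing the bytes
# (like a.byteswap(True)) restores the old value under the new label.
swapped = raw[::-1]
print(little, big, struct.unpack('>h', swapped)[0])  # 1 256 1
```

Doing exactly one of the two operations changes what the array means; doing both leaves the values intact, which is why the two numpy lines below go together.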
So, you can change both the data-type and the way it's stored using a.byteswap(True) # Changes the data but not the data-type a.dtype = a.dtype.newbyteorder() # changes the data-type but not the data -Travis From svetosch at gmx.net Sat Aug 12 17:35:36 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 12 Aug 2006 23:35:36 +0200 Subject: [Numpy-discussion] why is default axis always different? In-Reply-To: <44DD0345.9000102@gmx.net> References: <44DD0345.9000102@gmx.net> Message-ID: <44DE49A8.4010606@gmx.net> Sven Schreiber wrote: > Hi, > notice the (confusing, imho) different defaults for the axis of the > following related functions: > > nansum(a, axis=-1) > Sum the array over the given axis, treating NaNs as 0. > > sum(x, axis=None, dtype=None) > Sum the array over the given axis. The optional dtype argument > is the data type for intermediate calculations. > > > average(a, axis=0, weights=None, returned=False) > > Average the array over the given axis. If the axis is None, average > over all dimensions of the array. Equivalent to a.mean(axis), but > with a default axis of 0 instead of None. > >>>> numpy.__version__ > '1.0b2.dev2973' > > Shouldn't those kinds of functions have the same default behavior? So is > this a bug or am I missing something? > > Thanks for enlightenment, > Sven > Perhaps this is useful for others, so I'll share my self-enlightenment (please correct me if I got it wrong): - sum's axis=None default actually conforms to what's in the numpy 1.0 release notes (functions that match methods should also get their default, which for such methods is axis=None) - nansum's axis=-1 default is normal for functions which don't match equivalent methods - However, I still don't understand why average() doesn't have axis=-1 as its default like other functions...? Apparently the axis=0 default of average() is its main feature, explaining its existence vis-à-vis .mean.
But that seems inconsistent to me, as it breaks all the rules: It doesn't conform to the standard axis=-1 default for functions, and if it's viewed as equivalent to the .mean method (which it is), it doesn't conform to the rule that it should share the latter's default axis=None. So imho it seems like there's no real use for average() other than creating confusion. (Well that sounds a bit too strong, but anyway...) I therefore suggest officially deprecating it and moving it to some compatibility module. I'm going to file a corresponding ticket tomorrow unless somebody tells me not to. Cheers, Sven From jmiller at stsci.edu Sun Aug 13 08:58:52 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sun, 13 Aug 2006 08:58:52 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray <--numpy In-Reply-To: <200608121753.33150.faltet@carabos.com> References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> <44DDCB93.5080103@stsci.edu> <200608121753.33150.faltet@carabos.com> Message-ID: <44DF220C.2020600@stsci.edu> Francesc Altet wrote: > On Saturday 12 August 2006 14:37, Todd Miller wrote: >> I agree with all of Travis' comments below and committed the suggested >> changes to numarray CVS. I found one other numarray change needed >> for Francesc's examples to run (apparently) leak-free: >> >> Py_INCREF(obj) >> Py_XDECREF(a->base) >> a->base = obj >> Py_DECREF(cobj) >> >> Thanks Travis! >> > > Hey! I checked this morning Travis' patch and seems to work well for me. I'll > add yours as well later on and see... BTW, where exactly I've to add the > above lines?
> The lines above are a modification to Travis' patch, so basically the same place: ******* a = NA_FromDimsStridesTypeAndData(arrayif->nd, shape, strides, t, arrayif->data); if (!a) goto _fail; ! a->base = cobj; return a; ------- a = NA_FromDimsStridesTypeAndData(arrayif->nd, shape, strides, t, arrayif->data); if (!a) goto _fail; ! Py_INCREF(obj); ! Py_XDECREF(a->base); ! a->base = obj; ! Py_DECREF(cobj); return a; Todd From jdhunter at ace.bsd.uchicago.edu Sun Aug 13 16:02:13 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sun, 13 Aug 2006 15:02:13 -0500 Subject: [Numpy-discussion] numarray cov function Message-ID: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> I was surprised to see that numarray.mlab.cov is returning a rank-0 complex number when given two 1D arrays as inputs rather than the standard 2x2 covariance array I am used to seeing. Is this a feature or a bug? In [2]: import numarray.mlab as nam In [3]: x = nam.rand(10) In [4]: y = nam.rand(10) In [5]: nam.cov(x, y) Out[5]: array((0.014697855954587828+0j)) In [6]: import numpy.oldnumeric.mlab as npm In [7]: x = npm.rand(10) In [8]: y = npm.rand(10) In [9]: npm.cov(x, y) Out[9]: array([[ 0.13243082, 0.0520454 ], [ 0.0520454 , 0.07435816]]) In [10]: import numarray In [11]: numarray.__version__ Out[11]: '1.3.3' In [12]: import numpy In [13]: numpy.__version__ Out[13]: '1.0b2.dev2999' From oliphant.travis at ieee.org Sun Aug 13 17:33:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 15:33:28 -0600 Subject: [Numpy-discussion] numarray cov function In-Reply-To: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> References: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44DF9AA8.8080802@ieee.org> John Hunter wrote: > I was surprised to see that numarray.mlab.cov is returning a rank-0 > complex number when given two 1D arrays as inputs rather than the > standard 2x2 covariance array I am used to seeing. Is this a feature > or a bug? 
> > > In [2]: import numarray.mlab as nam > > In [3]: x = nam.rand(10) > > In [4]: y = nam.rand(10) > > In [5]: nam.cov(x, y) > Out[5]: array((0.014697855954587828+0j)) > > In [6]: import numpy.oldnumeric.mlab as npm > > In [7]: x = npm.rand(10) > > In [8]: y = npm.rand(10) > > In [9]: npm.cov(x, y) > Out[9]: > array([[ 0.13243082, 0.0520454 ], > [ 0.0520454 , 0.07435816]]) > > In [10]: import numarray > > In [11]: numarray.__version__ > Out[11]: '1.3.3' > > In [12]: import numpy > > In [13]: numpy.__version__ > Out[13]: '1.0b2.dev2999' > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Sun Aug 13 17:35:00 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 15:35:00 -0600 Subject: [Numpy-discussion] numarray cov function In-Reply-To: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> References: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44DF9B04.3010401@ieee.org> John Hunter wrote: > I was surprised to see that numarray.mlab.cov is returning a rank-0 > complex number when given two 1D arrays as inputs rather than the > standard 2x2 covariance array I am used to seeing. Is this a feature > or a bug? > This was the old behavior of the Numeric cov function which numarray borrowed. We changed the behavior of cov in NumPy because it makes more sense to return the full covariance matrix in this case. 
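[Editor's note: neither package is needed to see the point of the change. A pure-Python sketch of the full 2x2 sample covariance matrix for two 1-D inputs follows; cov2 is just an illustrative name, and the unbiased n-1 normalization is an assumption matching numpy.cov's default:]

```python
def cov2(x, y):
    """Full 2x2 sample covariance matrix of two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / (n - 1)        # var(x)
    syy = sum((b - my) ** 2 for b in y) / (n - 1)        # var(y)
    sxy = sum((a - mx) * (b - my)
              for a, b in zip(x, y)) / (n - 1)           # cov(x, y)
    return [[sxx, sxy], [sxy, syy]]

# With y = 2*x: var(x) = 1, var(y) = 4, cov(x, y) = 2.
m = cov2([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(m)  # [[1.0, 2.0], [2.0, 4.0]]
```

The old Numeric/numarray behavior collapsed this to the single off-diagonal entry; returning the whole symmetric matrix keeps the variances alongside the cross term.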
-Travis From haase at msg.ucsf.edu Sun Aug 13 18:48:41 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 13 Aug 2006 15:48:41 -0700 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy Message-ID: <44DFAC49.1060903@msg.ucsf.edu> Hi, I just wanted to point out that the default of the copy argument changed from numpy to numarray. Don't forget about that in the conversion script ... Cheers, Sebastian Haase From oliphant.travis at ieee.org Sun Aug 13 18:57:13 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 16:57:13 -0600 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy In-Reply-To: <44DFAC49.1060903@msg.ucsf.edu> References: <44DFAC49.1060903@msg.ucsf.edu> Message-ID: <44DFAE49.4080601@ieee.org> Sebastian Haase wrote: > Hi, > I just wanted to point out that the default of the copy argument changed > from numpy to numarray. > Don't forget about that in the conversion script ... > Hmm.. I don't see what you are talking about. The default for the copy argument in the array function is still copy=True. If there is something else then it is a bug. -Travis From haase at msg.ucsf.edu Sun Aug 13 20:28:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 13 Aug 2006 17:28:36 -0700 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy In-Reply-To: <44DFAE49.4080601@ieee.org> References: <44DFAC49.1060903@msg.ucsf.edu> <44DFAE49.4080601@ieee.org> Message-ID: <44DFC3B4.2030901@msg.ucsf.edu> SORRY FOR THE CONFUSION !! I must have been on drugs ! Maybe I did not get enough sleep. asarray() is the function that does not create a copy - both in numpy and in numarray. Sorry, Sebastian Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi, >> I just wanted to point out that the default of the copy argument changed >> from numpy to numarray. >> Don't forget about that in the conversion script ... >> > > Hmm.. I don't see what you are talking about. The default for the copy > argument in the array function is still copy=True. If there is > something else then it is a bug. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From davidgrant at gmail.com Mon Aug 14 02:33:54 2006 From: davidgrant at gmail.com (David Grant) Date: Sun, 13 Aug 2006 23:33:54 -0700 Subject: [Numpy-discussion] Profiling line-by-line In-Reply-To: References: Message-ID: Could this http://oubiwann.blogspot.com/2006/08/python-and-kcachegrind.html lead to line-by-line profiling with numpy functions?
Dave On 7/26/06, David Grant wrote: > > Does anyone know if this issue related to profiling with numpy is a python > problem or a numpy problem? > > Dave > > > On 7/20/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > On 7/20/06, Arnd Baecker wrote: > > > > > > > > > More importantly note that profiling in connection > > > with ufuncs seems problematic: > > > > > > Yes, that seems to be my problem... I read the threads you provided > > links to. Do you know why this is the case? > > > > I have tried hotshot2calltree by the way, and I didn't find out anything > > new. > > > > -- > > David Grant > > > > > > -- > David Grant > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From ainulinde at gmail.com Mon Aug 14 13:21:39 2006 From: ainulinde at gmail.com (ainulinde) Date: Tue, 15 Aug 2006 01:21:39 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: FYI, I changed my bt client from bitcomet to uTorrent, and it works now. and I have downloaded the iso by http://blabla. in the vmware virtual machine, the livecd boots and I can use ipython/import numpy... is there any more feature or special scipy conference stuff on the cd? On 8/11/06, ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > > For those not able to make SciPy 2006 next week, or who would like to > > download the ISO a few days early, its available at > > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > > packages that ship with the standard Ubuntu CD.
Here's what I was able > > to add: > > > > * SciPy build from svn (Wed, 12:00 CST) > > * NumPy built from svn (Wed, 12:00 CST) > > * Matplotlib built from svn (Wed, 12:00 CST) > > * IPython built from svn (Wed, 12:00 CST) > > * Enthought built from svn (Wed, 16:00 CST) > > * ctypes 1.0.0 > > * hdf5 1.6.5 > > * networkx 0.31 > > * Pyrex 0.9.4.1 > > * pytables 1.3.2 > > > > All of the svn checkouts are zipped in /src, if you'd like to build from > > a svn version newer than what was shipped, simple copy the compressed > > package to your home dir, uncompress it, run "svn upate", and built it. > > > > Please note: This ISO was built rather hastily, uses un-official code, > > and received very little testing. Please don't even consider using this > > in a production environment. > > > > Bryce > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From matthew.brett at gmail.com Mon Aug 14 13:23:07 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Aug 2006 18:23:07 +0100 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays Message-ID: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> Hi, I am sorry if this is obvious, but: I am working on the scipy loadmat module, and would like to use numpy to reformat the fortran order arrays that matlab saves. I was not sure how to do this, and would like to ask for advice. Let us say that I have some raw binary data as a string. 
The data contains 4 integers, for a 2x2 array, stored in fortran order. For example, here is 0,1,2,3 as int32 str = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' What is the best way of me putting this into a 2x2 array object so that the array recognizes the data is in fortran order. Sort of: a = somefunction(str, shape=(2,2), dtype=int32, order='F') such that a.shape = (2,2) and a[1,0] == 1, rather than 2. Sorry if that's obvious, but I couldn't see it immediately.... Thanks a lot, Matthew From bhendrix at enthought.com Mon Aug 14 13:36:35 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 14 Aug 2006 12:36:35 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: <44E0B4A3.3000307@enthought.com> The Live CD is meant to be paired with the tutorial sessions, but contains just the latest builds + svn checkouts. Once the tutorials are available, we should add them to the same wiki page for downloading. I built the CD's in a VMWare virtual machine, if anyone is interested in the VMWare image, I can make it available via bittorrent too, maybe even with instructions on how to update the files and re-master the ISO :) Bryce ainulinde wrote: > FYI, I chang my bt client from bitcomet to uTorrent, and it works now. > and I have downloaded the iso by http://blabla. > in the vmware vitural machine, the livecd boot and i can use > ipython/import numpy... > is there any more feature or special scipy conference stuff on the cd? 
> > From oliphant.travis at ieee.org Mon Aug 14 13:55:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 11:55:40 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> Message-ID: <44E0B91C.8070807@ieee.org> Matthew Brett wrote: > Hi, > > I am sorry if this is obvious, but: > It's O.K. I don't think many people are used to the fortran-order stuff. So, I doubt it's obvious. > For example, here is 0,1,2,3 as int32 > > str = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > What is the best way of me putting this into a 2x2 array object so > that the array recognizes the data is in fortran order. Sort of: > > a = somefunction(str, shape=(2,2), dtype=int32, order='F') > There isn't really a function like this because the fromstring function only creates 1-d arrays that must be reshaped later (it also copies the data from the string). However, you can use the ndarray creation function itself to do what you want: a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') This will use the memory of the string as the new array memory. -Travis From oliphant.travis at ieee.org Mon Aug 14 14:01:48 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 12:01:48 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <44E0B91C.8070807@ieee.org> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> Message-ID: <44E0BA8C.2070801@ieee.org> Travis Oliphant wrote: > However, you can use the ndarray creation function itself to do what you > want: > > a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') > > This will use the memory of the string as the new array memory. > Incidentally, the new array will be read-only. 
But, you can fix this in two ways: 1) a.flags.writeable = True --- This is a cheat that avoids the extra copy on pickle-load and lets you use strings as writeable buffers. Don't abuse it. It will disappear once Python 3k has a proper bytes type. 2) a = a.copy() -Travis From haase at msg.ucsf.edu Mon Aug 14 14:02:53 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 11:02:53 -0700 Subject: [Numpy-discussion] trivial question: how to compare dtype - but ignoring byteorder ? In-Reply-To: <44C52158.3050600@ieee.org> References: <44C450CA.3010609@msg.ucsf.edu> <44C52158.3050600@ieee.org> Message-ID: <200608141102.53945.haase@msg.ucsf.edu> On Monday 24 July 2006 12:36, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > if I have a numpy array 'a' > > and say: > > a.dtype == numpy.float32 > > > > Is the result independent of a's byteorder ? > > (That's what I would expect ! Just checking !) > > I think I misread the question and saw "==" as "=" > > But, the answer I gave should still help: the byteorder is a property > of the data-type. There is no such thing as "a's" byteorder. Thus, > numpy.float32 (which is actually an array-scalar and not a true > data-type) is interpreted as a machine-byte-order IEEE floating-point > data-type with 32 bits. Thus, the result will depend on whether or not > a.dtype is machine-order or not. > > -Travis Hi, I just realized that this question did actually not get sorted out. Now I'm just about to convert my code to compare arr.dtype.type to the (default scalar!) dtype numpy.uint8 like this: if self.img.dtype.type == N.uint8: self.hist_min, self.hist_max = 0, 1<<8 elif self.img.dtype.type == N.uint16: self.hist_min, self.hist_max = 0, 1<<16 ... This seems to work independent of byteorder - (but looks ugly(er)) ... Is this the best way of doing this ?
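Sebastian's pattern can be condensed into a short, runnable check (a sketch against a modern NumPy; `import numpy as np` replaces the thread's `import numpy as N`, and the `hist_max` lines only illustrate the dispatch):

```python
import numpy as np

# Same element type, opposite byte orders.
a_native = np.zeros(4, dtype='<u2')   # little-endian uint16
a_swapped = np.zeros(4, dtype='>u2')  # big-endian uint16

# The dtype objects compare unequal, because byte order is part of the dtype...
assert a_native.dtype != a_swapped.dtype

# ...but the scalar type is identical, so comparing .dtype.type ignores byte order:
assert a_native.dtype.type == np.uint16
assert a_swapped.dtype.type == np.uint16

# The histogram-range dispatch from the thread, byteorder-independent:
if a_swapped.dtype.type == np.uint16:
    hist_min, hist_max = 0, 1 << 16
print(hist_min, hist_max)  # 0 65536
```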
- Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 15:32:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 13:32:05 -0600 Subject: [Numpy-discussion] trivial question: how to compare dtype - but ignoring byteorder ? In-Reply-To: <200608141102.53945.haase@msg.ucsf.edu> References: <44C450CA.3010609@msg.ucsf.edu> <44C52158.3050600@ieee.org> <200608141102.53945.haase@msg.ucsf.edu> Message-ID: <44E0CFB5.9060801@ieee.org> > Hi, > I just realized that this question did actually not get sorted out. > Now I'm just about to convert my code to compare > arr.dtype.type to the (default scalar!) dtype numpy.uint8 > like this: > if self.img.dtype.type == N.uint8: > self.hist_min, self.hist_max = 0, 1<<8 > elif self.img.dtype.type == N.uint16: > self.hist_min, self.hist_max = 0, 1<<16 > ... > > Yes, you can do this and it should work independent of byteorder. The dtype comparison will take into account the byte-order but comparing the type objects directly won't. So, if that is your intent, then great. -Travis From satyaupadhya at yahoo.co.in Mon Aug 14 15:44:05 2006 From: satyaupadhya at yahoo.co.in (Satya Upadhya) Date: Mon, 14 Aug 2006 20:44:05 +0100 (BST) Subject: [Numpy-discussion] Regarding Matrices Message-ID: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> Dear All, Just a few queries regarding matrices. On my python shell i typed: >>> from Numeric import * >>> from LinearAlgebra import * >>> A = [1,2,3,4,5,6,7,8,9] >>> B = reshape(A,(3,3)) >>> B array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> X = identity(3) >>> X array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) >>> D = power(B,0) >>> D array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) the power function is giving a resultant matrix in which each element of matrix B is raised to the power of 0 so as to make it 1. But, taken as a whole i.e. matrix B to the power of 0 should have given the identity matrix. 
Also, what is the procedure for taking the log of an entire matrix (log(A) where A is a matrix takes the log of every individual element in A, but that's not the same as taking the log of the entire matrix) Thanking you, Satya From svetosch at gmx.net Mon Aug 14 15:58:50 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Mon, 14 Aug 2006 21:58:50 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> Message-ID: <44E0D5FA.9040505@gmx.net> Hi, Satya Upadhya schrieb: >>>> from Numeric import * Well this list is about the numpy package, but anyway... > the power function is giving a resultant matrix in which each element of > matrix B is raised to the power of 0 so as to make it 1. But, taken as a > whole i.e. matrix B to the power of 0 should have given the identity > matrix. afaik, in numpy terms if you are dealing with a numpy array, such functions are elementwise by design. In contrast, if you have a numpy matrix (a special subclass of the array class) --constructed e.g. as mat(eye(3))-- then power is redefined to be the matrix power; at least that's the rule for the ** operator, not 100% sure if for the explicit power() function as well, but I suppose so. > > Also, what is the procedure for taking the log of an entire matrix > (log(A) where A is a matrix takes the log of every individual element in > A, but that's not the same as taking the log of the entire matrix) I don't understand what you want; how do you take the log of a matrix mathematically?
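On the matrix-log question: the matrix logarithm is the functional inverse of the matrix exponential, not the elementwise log (Nils points to `scipy.linalg.logm` further down the thread). A NumPy-only sketch of the distinction; `matrix_log` here is a hypothetical helper, not a NumPy function, and it assumes a diagonalizable matrix with positive real eigenvalues:

```python
import numpy as np

def matrix_log(A):
    # Matrix logarithm via eigendecomposition: log(A) = V diag(log w) V^-1.
    # Sketch only: assumes A is diagonalizable with positive real eigenvalues.
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.log(w)) @ np.linalg.inv(V)).real

# For a diagonal matrix the matrix log is just the log of the diagonal:
A = np.diag([np.e, np.e ** 2])
assert np.allclose(matrix_log(A), np.diag([1.0, 2.0]))

# In general the matrix log and the elementwise log differ:
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert not np.allclose(matrix_log(B), np.log(B))
```

`scipy.linalg.logm` handles the general (non-diagonalizable, complex-spectrum) cases this sketch does not.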
-Sven From nwagner at iam.uni-stuttgart.de Mon Aug 14 16:24:20 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Aug 2006 22:24:20 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E0D5FA.9040505@gmx.net> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: On Mon, 14 Aug 2006 21:58:50 +0200 Sven Schreiber wrote: > Hi, > > Satya Upadhya schrieb: > >>>>> from Numeric import * > > Well this list is about the numpy package, but anyway... > >> the power function is giving a resultant matrix in which >>each element of >> matrix B is raised to the power of 0 so as to make it 1. >>But, taken as a >> whole i.e. matrix B to the power of 0 should have given >>the identity >> matrix. > > afaik, in numpy terms if you are dealing with a numpy >array, such > functions are elementwise by design. > In contrast, if you have a numpy matrix (a special >subclass of the array > class) --constructed e.g. as mat(eye(3))-- then power is >redefined to be > the matrix power; at least that's the rule for the ** >operator, not 100% > sure if for the explicit power() function as well, but I >suppose so. > >> >> Also, what is the procedure for taking the log of an >>entire matrix >> (log(A) where A is a matrix takes the log of every >>individual element in >> A, but thats not the same as taking the log of the >>entire matrix) > > I don't understand what you want, how do you take the >log of a matrix > mathematically? > > -Sven > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web >services, security? 
> Get stuff done quickly with pre-integrated technology to >make your job easier > Download IBM WebSphere Application Server v.1.0.1 based >on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion Help on function logm in module scipy.linalg.matfuncs: logm(A, disp=1) Matrix logarithm, inverse of expm. Nils From torgil.svensson at gmail.com Mon Aug 14 17:03:04 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Mon, 14 Aug 2006 23:03:04 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: >>> import numpy >>> numpy.__version__ '1.0b1' >>> from numpy import * >>> A = [1,2,3,4,5,6,7,8,9] >>> B = asmatrix(reshape(A,(3,3))) >>> B matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> B**0 matrix([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> power(B,0) matrix([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) Shouldn't power() and the ** operator return the same result for matrixes? 
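The difference Torgil observes can also be reproduced without the matrix class at all, which separates the two meanings cleanly (a sketch against a modern NumPy, where `np.linalg.matrix_power` provides the matrix-sense power):

```python
import numpy as np

B = np.arange(1, 10).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]

# power() is the elementwise ufunc: every entry to the 0th power is 1.
assert (np.power(B, 0) == np.ones((3, 3), dtype=int)).all()

# The matrix-sense power: B to the 0th power is the identity.
assert (np.linalg.matrix_power(B, 0) == np.eye(3, dtype=int)).all()
```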
//Torgil From oliphant.travis at ieee.org Mon Aug 14 17:13:50 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 15:13:50 -0600 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E0E78E.5060600@ieee.org> Torgil Svensson wrote: >>>> import numpy >>>> numpy.__version__ >>>> > '1.0b1' > >>>> from numpy import * >>>> A = [1,2,3,4,5,6,7,8,9] >>>> B = asmatrix(reshape(A,(3,3))) >>>> B >>>> > matrix([[1, 2, 3], > [4, 5, 6], > [7, 8, 9]]) > >>>> B**0 >>>> > matrix([[ 1., 0., 0.], > [ 0., 1., 0.], > [ 0., 0., 1.]]) > >>>> power(B,0) >>>> > matrix([[1, 1, 1], > [1, 1, 1], > [1, 1, 1]]) > > Shouldn't power() and the ** operator return the same result for matrixes? > No. power is always the ufunc which does element-by-element raising to a power. This is actually a feature in that you can use the function call to do raising to a power without caring what kind of array subclass is used. In the same manner, multiply is *always* the ufunc. -Travis From fullung at gmail.com Mon Aug 14 17:16:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon, 14 Aug 2006 23:16:06 +0200 Subject: [Numpy-discussion] ctypes and ndpointer Message-ID: Hello all Just a quick note on the ndpointer function that Travis recently added to NumPy (thanks Travis!). When wrapping functions with ctypes, one can specify the argument types of the function. ctypes then checks that the parameters are valid before invoking the C function. 
This is described here in detail: http://docs.python.org/dev/lib/ctypes-specifying-required-argument-types.html The argtypes list is optional, and I think previously Travis suggested not specifying the argtypes because it would require one to write something like this: bar.argtypes = [POINTER(c_double)] x = N.array([...]) bar(x.data_as(POINTER(c_double))) instead of simply: bar(x) What ndpointer allows one to do is to build classes with a from_param method that knows about the details of ndarrays and how to convert them to something that ctypes can send to a C function. For example, suppose you have the following function: void bar(int* data, double x); You know that bar expects a 20x30 array of big-endian integers in Fortran order. You can make sure it gets only this kind of array by doing: _foolib = N.ctypes_load_library('foolib_', '.') bar = _foolib.bar bar.restype = None p = N.ndpointer(dtype='>i4', ndim=2, shape=(20,30), flags='FORTRAN') bar.argtypes = [p, ctypes.c_double] x = N.zeros((20,30),dtype='>i4',order='F') bar(x, 123.0) If you want your function to accept any kind of ndarray, you can do: bar.argtypes = [N.ndpointer(),...] In this case it will probably still make sense to wrap the C function in a Python function that also passes the .ctypes.strides and .ctypes.shape of the array. Cheers, Albert P.S.
Sidebar: do we want these ctypes functions in the top-level namespace? > Maybe not. Also, I'm starting to wonder whether ctypes_load_library deserves > to exist or whether we should hear from the ctypes guys if there is a better > way to accomplish what it does (which is to make it easy to load a shared > library/DLL/dylib relative to some file in your module on any platform). > I'm happy to move them from the top-level name-space to something else prior to 1.0 final. It's probably a good idea. -Travis From Chris.Barker at noaa.gov Mon Aug 14 19:37:31 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 14 Aug 2006 16:37:31 -0700 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E1093B.6040405@noaa.gov> Torgil Svensson wrote: > Shouldn't power() and the ** operator return the same result for matrixes? no, but the built-in pow() should -- does it? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Mon Aug 14 19:40:31 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 14 Aug 2006 16:40:31 -0700 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> Message-ID: <44E109EF.9040700@noaa.gov> Daran L. Rife wrote: > I tried your suggestion of installing and running the pre-built > packages at . 
I am > sorry to report that the pre-built MacPython and Numeric 24.2 > package did not work. I get the same "Segmentation Fault" that > I got when I built Python 2.4.3 and Numeric 24.2 from source. Darn. My few simple tests all work. If you can figure out which functions are failing, and make a small sample that fails, post it here and to the python-mac list. There are some smart folks there that might be able to help. > As a last resort, I may build ATLAS and LAPACK from source, > then build Numeric 23.8 against these, and try installing > this into MacPython. I hate having to try this, but I cannot > do any development without a functioning Python and Numeric. However, it might be easier to port to numpy that do all that. And you'll definitely get more help solving any problems you have with numpy. good luck. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From drife at ucar.edu Mon Aug 14 19:56:52 2006 From: drife at ucar.edu (Daran Rife) Date: Mon, 14 Aug 2006 17:56:52 -0600 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44E109EF.9040700@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> <44E109EF.9040700@noaa.gov> Message-ID: <44E10DC4.2040609@ucar.edu> Hi Chris, > Darn. My few simple tests all work. If you can figure out which > functions are failing, and make a small sample that fails, post it here > and to the python-mac list. There are some smart folks there that might > be able to help. I will try to do so, but like you, I think my time is better spent transitioning to Numpy. 
Incidentally, I am now using the MacPython distro--thanks for pointing me toward that. I also got Numeric 23.8 to work well with MacPython, including the optimized vecLib framework. I got the harebrained idea to try compiling and installing Numeric 23.8 using the setup.py and customize.py files from Numeric 24.x, since they seem to get the Apple veclib stuff compiled in properly, especially the optimized matrix math libs. The one tweak I had to make was in setup.py, where I pointed it to the new vecLib in: /System/Library/Frameworks/Accelerate.framework > However, it might be easier to port to numpy that do all that. And > you'll definitely get more help solving any problems you have with numpy. Agreed. I am looking forward to the first official release of numpy. In the meantime, I will experiment with the Beta version. Thanks again, Daran From haase at msg.ucsf.edu Mon Aug 14 20:26:33 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 17:26:33 -0700 Subject: [Numpy-discussion] How to share memory when bArr is smaller-sized than aArr Message-ID: <200608141726.33317.haase@msg.ucsf.edu> Hi, in numarray I could do this >>> import numarray as na >>> a = na.arange(10) >>> b = na.array(a._data, type=na.int32, shape=8) b would use the beginning part of a. This is actually important for inplace FFT (where in real-to-complex-fft the input has 2 "columns" more memory than the output) I found that in numpy there is no shape argument in array() at all anymore ! How can this be done with numpy ? 
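One way to get the shared-memory view Sebastian asks about is the `ndarray` constructor itself, passing the source array's buffer (a sketch against a modern NumPy; the 10-element int32 source and 8-element view mirror the numarray example):

```python
import numpy as np

a = np.arange(10, dtype=np.int32)

# A new 8-element array over the beginning of a's memory -- no copy is made:
b = np.ndarray(shape=(8,), dtype=np.int32, buffer=a.data)

b[0] = 42
assert a[0] == 42           # writes through b are visible in a
assert b.base is not None   # b does not own its memory
```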
Thanks, Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 20:38:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 18:38:02 -0600 Subject: [Numpy-discussion] How to share memory when bArr is smaller-sized than aArr In-Reply-To: <200608141726.33317.haase@msg.ucsf.edu> References: <200608141726.33317.haase@msg.ucsf.edu> Message-ID: <44E1176A.4020005@ieee.org> Sebastian Haase wrote: > Hi, > in numarray I could do this > >>>> import numarray as na >>>> a = na.arange(10) >>>> b = na.array(a._data, type=na.int32, shape=8) >>>> > > b would use the beginning part of a. > > This is actually important for inplace FFT (where in real-to-complex-fft the > input has 2 "columns" more memory than the output) > > I found that in numpy there is no shape argument in array() at all anymore ! > > No, there is no shape argument anymore. But, the ndarray() constructor does have the shape argument and can be used in this way. so import numpy as na b = na.ndarray(buffer=a, dtype=na.int32, shape=9) should work. -Travis From haase at msg.ucsf.edu Mon Aug 14 21:02:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 18:02:21 -0700 Subject: [Numpy-discussion] please comment on scalar types Message-ID: <200608141802.21883.haase@msg.ucsf.edu> Hi! I have a record array with a field 'mode' Mode is a small integer that I use to choose a "PixelType" So I did: >>> print PixelTypes[ mode ] TypeError: tuple indices must be integers >>> pdb.pm() > /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() -> return PixelTypes[ mode ] (Pdb) p mode 1 (Pdb) p type(mode) (Pdb) p isinstance(mode, int) False Since numpy introduced special scalar types a simple statement like this doesn't work anymore ! Would it work if int32scalar was derived from int ? I actually thought it was ... Comments ? 
- Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 21:18:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 19:18:04 -0600 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <200608141802.21883.haase@msg.ucsf.edu> References: <200608141802.21883.haase@msg.ucsf.edu> Message-ID: <44E120CC.8050400@ieee.org> Sebastian Haase wrote: > Hi! > I have a record array with a field 'mode' > Mode is a small integer that I use to choose a "PixelType" > So I did: > >>>> print PixelTypes[ mode ] >>>> > TypeError: tuple indices must be integers > >>>> pdb.pm() >>>> >> /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() >> > -> return PixelTypes[ mode ] > (Pdb) p mode > 1 > (Pdb) p type(mode) > > (Pdb) p isinstance(mode, int) > False > > Since numpy introduced special scalar types a simple statement like this > doesn't work anymore ! Would it work if int32scalar was derived from int ? I > actually thought it was ... > It does sub-class from int unless you are on a system where a c-long is 64-bit then int64scalar inherits from int. On my 32-bit system: isinstance(array([1,2,3])[0],int) is true. -Travis From haase at msg.ucsf.edu Mon Aug 14 22:40:49 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 19:40:49 -0700 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <44E120CC.8050400@ieee.org> References: <200608141802.21883.haase@msg.ucsf.edu> <44E120CC.8050400@ieee.org> Message-ID: <44E13431.2040205@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi! 
>> I have a record array with a field 'mode' >> Mode is a small integer that I use to choose a "PixelType" >> So I did: >> >>>>> print PixelTypes[ mode ] >>>>> >> TypeError: tuple indices must be integers >> >>>>> pdb.pm() >>>>> >>> /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() >>> >> -> return PixelTypes[ mode ] >> (Pdb) p mode >> 1 >> (Pdb) p type(mode) >> >> (Pdb) p isinstance(mode, int) >> False >> >> Since numpy introduced special scalar types a simple statement like this >> doesn't work anymore ! Would it work if int32scalar was derived from int ? I >> actually thought it was ... >> > It does sub-class from int unless you are on a system where a c-long is > 64-bit then int64scalar inherits from int. > > On my 32-bit system: > > isinstance(array([1,2,3])[0],int) is true. > > > > -Travis I see - yes I forgot - that test was indeed run on 64bit Linux. And that automatically implies that a 32bit-int cannot be used in place of a "normal python integer" !? I could see wanting to use int16 or even uint8 as a tuple index. Logically a small type would be safe to use in place of a bigger one ... - Sebastian From oliphant at ee.byu.edu Mon Aug 14 23:13:37 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Aug 2006 21:13:37 -0600 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <44E13431.2040205@msg.ucsf.edu> References: <200608141802.21883.haase@msg.ucsf.edu> <44E120CC.8050400@ieee.org> <44E13431.2040205@msg.ucsf.edu> Message-ID: <44E13BE1.7080607@ee.byu.edu> Sebastian Haase wrote: >Travis Oliphant wrote: > > >And that automatically implies that a 32bit-int cannot be used in >place of a "normal python integer" !? >I could see wanting to use int16 or even uint8 as a tuple index. >Logically a small type would be safe to use in place of a bigger one ... > > That is the purpose behind the __index__ attribute I added to Python 2.5 (see PEP 357).
This allows all the scalar integers to be used in place of integers inside of Python. -Travis From fperez.net at gmail.com Tue Aug 15 00:06:29 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Aug 2006 22:06:29 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <44E0BA8C.2070801@ieee.org> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> <44E0BA8C.2070801@ieee.org> Message-ID: On 8/14/06, Travis Oliphant wrote: > Travis Oliphant wrote: > > However, you can use the ndarray creation function itself to do what you > > want: > > > > a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') > > > > This will use the memory of the string as the new array memory. > > > Incidentally, the new array will be read-only. But, you can fix this in > two ways: > > 1) a.flags.writeable = True Sweet! We now finally have mutable strings for Python: In [2]: astr = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' In [4]: a = N.ndarray(shape=(2,2), dtype=N.int32, buffer=astr, order='F') In [5]: astr Out[5]: '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' In [6]: a.flags.writeable = True In [7]: a Out[7]: array([[0, 2], [1, 3]]) In [8]: a[0] = 1 In [9]: astr Out[9]: '\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00' Guido's going to kill you on Thursday, you know ;) f From strawman at astraw.com Tue Aug 15 01:37:22 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 14 Aug 2006 22:37:22 -0700 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E0D5FA.9040505@gmx.net> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E15D92.1060805@astraw.com> Sven Schreiber wrote: > Hi, > > Satya Upadhya schrieb: > > >>>>> from Numeric import * >>>>> > > Well this list is about the numpy package, but anyway... > This list is for numpy, numarray, and Numeric. 
There's just a lot more numpy talk going on these days, but "numpy-discussion" comes from the bad old days where no one realized that allowing your software package to be called multiple things (Numeric, Numeric Python, numpy) might result in confusion years later. Cheers! Andrew From oliphant.travis at ieee.org Tue Aug 15 02:01:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 00:01:51 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> <44E0BA8C.2070801@ieee.org> Message-ID: <44E1634F.3050201@ieee.org> Fernando Perez wrote: > Sweet! We now finally have mutable strings for Python: > > In [2]: astr = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > In [4]: a = N.ndarray(shape=(2,2), dtype=N.int32, buffer=astr, order='F') > > In [5]: astr > Out[5]: '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > In [6]: a.flags.writeable = True > > In [7]: a > Out[7]: > array([[0, 2], > [1, 3]]) > > In [8]: a[0] = 1 > > In [9]: astr > Out[9]: '\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00' > > > Guido's going to kill you on Thursday, you know ;) > Don't tell him ;-) But, if he had provided a suitable bytes type already (that was pickleable) we wouldn't need to do this :-) Notice it's not writeable by default, so at least you have to "know what you are doing" to shoot yourself in the foot. -Travis From pauli.virtanen at iki.fi Tue Aug 15 02:07:57 2006 From: pauli.virtanen at iki.fi (Pauli Virtanen) Date: Tue, 15 Aug 2006 09:07:57 +0300 Subject: [Numpy-discussion] Numpy 1.0b2 crash Message-ID: <200608150907.57881.pauli.virtanen@iki.fi> Hi all, The following code causes a segmentation fault in Numpy 1.0b2 and 1.0b1. 
import numpy as N v = N.array([1,2,3,4,5,6,7,8,9,10]) N.lexsort(v) Stack trace =========== $ gdb --args python crash.py GNU gdb 6.4-debian Copyright 2005 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i486-linux-gnu"...Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1". (gdb) run Starting program: /usr/bin/python crash.py [Thread debugging using libthread_db enabled] [New Thread -1209857824 (LWP 22827)] Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1209857824 (LWP 22827)] 0xb7d48f8d in PyArray_LexSort (sort_keys=0x81ed7e0, axis=) at arrayobject.c:8483 8483 arrayobject.c: No such file or directory. in arrayobject.c (gdb) bt #0 0xb7d48f8d in PyArray_LexSort (sort_keys=0x81ed7e0, axis=) at arrayobject.c:8483 #1 0xb7d49da5 in array_lexsort (ignored=0x0, args=0x822cb18, kwds=0x822cb18) at numpy/core/src/multiarraymodule.c:6271 #2 0x080b62c7 in PyEval_EvalFrame (f=0x8185c24) at ../Python/ceval.c:3563 #3 0x080b771f in PyEval_EvalCodeEx (co=0xb7e27ce0, globals=0xb7e08824, locals=0xb7e08824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2736 #4 0x080b7965 in PyEval_EvalCode (co=0x822cb18, globals=0x822cb18, locals=0x822cb18) at ../Python/ceval.c:484 #5 0x080d94cc in PyRun_FileExFlags (fp=0x813e008, filename=0xbfcc98f3 "crash.py", start=136497944, globals=0x822cb18, locals=0x822cb18, closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:1265 #6 0x080d976c in PyRun_SimpleFileExFlags (fp=, filename=0xbfcc98f3 "crash.py", closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:860 #7 0x08055b33 in Py_Main (argc=1, argv=0xbfcc9274) at ../Modules/main.c:493 #8 0xb7e45ea2 in __libc_start_main () from 
/lib/tls/i686/cmov/libc.so.6 #9 0x08054fa1 in _start () at ../sysdeps/i386/elf/start.S:119 From drswalton at gmail.com Tue Aug 15 04:04:58 2006 From: drswalton at gmail.com (Stephen Walton) Date: Tue, 15 Aug 2006 01:04:58 -0700 Subject: [Numpy-discussion] site.cfg problems Message-ID: <693733870608150104q5fe24d5ag27eacdbd24780830@mail.gmail.com> Does site.cfg actually work? I ask because I want to test numpy (and soon scipy) against ATLAS 3.7.13. For simplicity I used the "make install" with that distribution, which puts the files in /usr/local/atlas/lib, /usr/local/atlas/include, and so on.
No problem, so I created a site.cfg in the numpy root directory with [atlas] library_dirs = /usr/local/atlas/lib atlas_libs = lapack, blas, cblas, atlas include_dirs = /usr/local/atlas/include/ The numpy build did not find atlas; the output of "python setup.py build" shows no sign of even having checked the listed directory above for the libraries. Did I do something wrong? Should site.cfg be in numpy/numpy/distutils instead? -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Aug 15 10:56:54 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 15 Aug 2006 10:56:54 -0400 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E1093B.6040405@noaa.gov> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com><44E0D5FA.9040505@gmx.net> <44E1093B.6040405@noaa.gov> Message-ID: > Torgil Svensson wrote: >> Shouldn't power() and the ** operator return the same result for matrixes? On Mon, 14 Aug 2006, Christopher Barker apparently wrote: > no, but the built-in pow() should -- does it? The "try it and see" approach says that it does. Cheers, Alan Isaac From elcorto at gmx.net Tue Aug 15 12:02:00 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 15 Aug 2006 18:02:00 +0200 Subject: [Numpy-discussion] test fails Message-ID: <44E1EFF8.9050100@gmx.net> The test in In [2]: numpy.__version__ Out[2]: '1.0b2.dev3007' fails: [...] check_1D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array2 (numpy.lib.tests.test_shape_base.test_vstack) ... 
ok ====================================================================== ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", line 120, in check_ascii a = fromstring('1 , 2 , 3 , 4',sep=',') ValueError: don't know how to read character strings for given array type ---------------------------------------------------------------------- Ran 476 tests in 1.291s FAILED (errors=1) -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From nwagner at iam.uni-stuttgart.de Tue Aug 15 12:06:10 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Aug 2006 18:06:10 +0200 Subject: [Numpy-discussion] test fails In-Reply-To: <44E1EFF8.9050100@gmx.net> References: <44E1EFF8.9050100@gmx.net> Message-ID: <44E1F0F2.3050704@iam.uni-stuttgart.de> Steve Schmerler wrote: > The test in > > In [2]: numpy.__version__ > Out[2]: '1.0b2.dev3007' > > fails: > > > [...] > check_1D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok > check_2D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok > check_2D_array2 (numpy.lib.tests.test_shape_base.test_vstack) ... 
ok > > ====================================================================== > ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", > line 120, in check_ascii > a = fromstring('1 , 2 , 3 , 4',sep=',') > ValueError: don't know how to read character strings for given array type > > ---------------------------------------------------------------------- > Ran 476 tests in 1.291s > > FAILED (errors=1) > > > I cannot reproduce it here Numpy version 1.0b3.dev3025 python /usr/lib64/python2.4/site-packages/numpy/core/tests/test_multiarray.py Found 153 tests for numpy.core.multiarray Found 0 tests for __main__ ......................................................................................................................................................... ---------------------------------------------------------------------- Ran 153 tests in 0.047s OK Nils From etc2103 at columbia.edu Tue Aug 15 13:27:06 2006 From: etc2103 at columbia.edu (Ethan T Coon) Date: Tue, 15 Aug 2006 13:27:06 -0400 (EDT) Subject: [Numpy-discussion] f2py --include_paths from command line Message-ID: Hi all, The following line: f2py -c -m _test --include_paths ./include test.f (where test.f contains the line " include 'test_inc.h' " and 'test_inc.h' exists in the directory './include' ) results in the errors: ------------------------------------------------------------------ running build running config_fc running build_src building extension "_test" sources f2py options: [] f2py:> /tmp/tmpJqhFcQ/src.linux-i686-2.4/_testmodule.c creating /tmp/tmpJqhFcQ creating /tmp/tmpJqhFcQ/src.linux-i686-2.4 Reading fortran codes... Reading file 'test.f' (format:fix,strict) Line #6 in test.f:" INCLUDE 'test_inc.h'" readfortrancode: could not find include file 'test_inc.h'. Ignoring. 
Post-processing... Block: _test Block: test In: :_test:test.f:test getarrlen:variable "n" undefined Post-processing (stage 2)... Building modules... Building module "_test"... Constructing wrapper function "test"... a = test() Wrote C/API module "_test" to file "/tmp/tmpJqhFcQ/src.linux-i686-2.4/_testmodule.c" adding '/tmp/tmpJqhFcQ/src.linux-i686-2.4/fortranobject.c' to sources. adding '/tmp/tmpJqhFcQ/src.linux-i686-2.4' to include_dirs. copying /packages/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpJqhFcQ/src.linux-i686-2.4 copying /packages/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpJqhFcQ/src.linux-i686-2.4 running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building '_test' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' error: unknown file type '' (from '--include_paths') --------------------------------------------------------------- Similar versions ( --include_paths=./include , --include_paths "./include" , --include_paths `pwd`/include ) fail similarly. Everything works fine from a distutils extension/setup call, but not from the command line. Thanks, Ethan ------------------------------------------- Ethan Coon DOE CSGF - Graduate Student Dept. Applied Physics & Applied Mathematics Columbia University 212-854-0415 http://www.ldeo.columbia.edu/~ecoon/ ------------------------------------------- From davidgrant at gmail.com Tue Aug 15 13:34:17 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 15 Aug 2006 10:34:17 -0700 Subject: [Numpy-discussion] scipy_distutils Message-ID: Where can I find the Extension module now? In the f2py documentation, the following import is used: from scipy_distutils.core import Extension but that doesn't work, and I read that this was moved into numpy along with f2py. 
I can't seem to find it anywhere. What's the current way of doing this? -- David Grant http://www.davidgrant.ca From robert.kern at gmail.com Tue Aug 15 14:01:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Aug 2006 11:01:27 -0700 Subject: [Numpy-discussion] scipy_distutils In-Reply-To: References: Message-ID: David Grant wrote: > Where can I find the Extension module now? In the f2py documentation, > the following import is used: > > from scipy_distutils.core import Extension > > but that doesn't work, and I read that this was moved into numpy along > with f2py. I can't seem to find it anywhere. What's the current way of > doing this? That documentation is no longer up-to-date wrt building. I don't think that Pearu has done a comprehensive update of that section. The best place to look for documentation is numpy/doc/DISTUTILS.txt. numpy itself and scipy provide excellent examples of use, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Tue Aug 15 14:50:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 15 Aug 2006 11:50:36 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() Message-ID: <200608151150.36554.haase@msg.ucsf.edu> Hi! numpy renamed the *function* abs to absolute. Most functions like mean, min, max, average, ... have an equivalent array *method*. Why is absolute left out? I think it should be added. Furthermore, looking at some lines of code that have multiple calls to absolute [ like f(absolute(a), absolute(b), absolute(c)) ] I think "some people" might prefer less typing and less reading, like f( a.abs(), b.abs(), c.abs() ). One could even consider not requiring the "function call" parentheses '()' at all - but I don't know what further implications that might have.
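As an aside, the built-in abs() already dispatches to a type's __abs__ method, so abs(a) works anywhere a.abs() would, at the cost of one method lookup per call. A quick sketch with a toy container class (illustrative only; Box is a made-up stand-in, not numpy's implementation):

```python
# Built-in abs(x) calls x.__abs__(), so any type can supply an
# element-wise absolute value.  'Box' is a toy stand-in, not numpy.
class Box:
    def __init__(self, values):
        self.values = values

    def __abs__(self):
        # return a new container holding element-wise absolute values
        return Box([abs(v) for v in self.values])

b = Box([-1, 2, -3])
print(abs(b).values)   # [1, 2, 3]
```

So f(abs(a), abs(b), abs(c)) is already about as compact as the proposed method spelling, assuming the array type implements __abs__.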
Thanks, Sebastian Haase PS: is there any performance hit in using the built-in abs function? From drswalton at gmail.com Tue Aug 15 19:08:55 2006 From: drswalton at gmail.com (Stephen Walton) Date: Tue, 15 Aug 2006 16:08:55 -0700 Subject: [Numpy-discussion] f2py --include_paths from command line In-Reply-To: References: Message-ID: <693733870608151608w4cb133a1hc1156f8479ba8e4f@mail.gmail.com> On 8/15/06, Ethan T Coon wrote: > > Hi all, > > The following line: > > f2py -c -m _test --include_paths ./include test.f Typing f2py alone seems to indicate the syntax should be f2py -I./include [other args] test.f I tried this and it seems to work here. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmt at cs.ubc.ca Tue Aug 15 21:34:06 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 18:34:06 -0700 Subject: [Numpy-discussion] numpy 1.0b2 problems Message-ID: Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ?
File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From davidgrant at gmail.com Tue Aug 15 22:06:51 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 15 Aug 2006 19:06:51 -0700 Subject: [Numpy-discussion] some work on arpack Message-ID: Building an arpack extension turned out to be surprisingly simple. For example for dsaupd: f2py -c dsaupd.f -m dsaupd -L/usr/lib/blas/atlas:/usr/lib/lapack/atlas -llapack -lblas -larpack It took me a long time to get the command down to something that simple. Took me a while even to figure out I could just use the arpack library on my computer rather than re-linking all of arpack! I was able to import the dsaupd.so python module just fine and I was also able to call it just fine. I'll have to tweak the pyf file in order to get some proper output. But this gives me confidence that arpack is easy to hook into which is what others have said in the past, but without any experience with f2py I had no idea myself. 
f2py is awesome, for anyone who doesn't know. Matlab has interfaces for the arpack functions like dsaupd, dseupd, dnaupd, znaupd, zneupd (the mex file documentation claims those are the only ones, but they have more). Matlab has a C interface to these functions in arpackc.mex* and the script eigs.m does the grunt work, providing a very high-level interface as well as doing some linear algebra (the same type of stuff that is done in arpack's examples directory I gather) and various other things. My idea is (if I have time) to write an eigs-like function in python that will only perform a subset of what Matlab's eigs does. It will, for example, compute a certain number of eigenvalues and eigenvectors for a real, sparse, symmetric matrix (the case I'm interested in)... I hope that this subset-of-matlab's-eigs function will not be too hard to write. Then more functionality can be added on to eigs.py later... Does this make sense? Has anyone else started work on arpack integration at all? -- David Grant http://www.davidgrant.ca From mmt at cs.ubc.ca Tue Aug 15 22:12:49 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 19:12:49 -0700 Subject: [Numpy-discussion] Numpy 1.0b2 install issues Message-ID: <84E05761-5BA0-4098-A408-7D3D42C8D91C@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler This pretty much borks everything. I have to remove it before I can try to install other packages and stuff. Any thoughts? Thanks Matt From mmt at cs.ubc.ca Tue Aug 15 22:51:27 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 19:51:27 -0700 Subject: [Numpy-discussion] numpy 1.0b2 problems Message-ID: <560C253F-DA6F-4BAB-8F13-28AD0800F4FC@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? 
File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Once installed, it messes up trying to install anything else, so I have to move it out of the way in the short term. Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From mmt at cs.ubc.ca Tue Aug 15 23:19:47 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 20:19:47 -0700 Subject: [Numpy-discussion] Fwd: numpy 1.0b2 problems References: <560C253F-DA6F-4BAB-8F13-28AD0800F4FC@cs.ubc.ca> Message-ID: <86D6645D-0257-401E-98B7-3AC77623398B@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. 
Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Once installed, it messes up trying to install anything else, so I have to move it out of the way in the short term. Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From oliphant.travis at ieee.org Wed Aug 16 00:18:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 22:18:04 -0600 Subject: [Numpy-discussion] numpy 1.0b2 problems In-Reply-To: References: Message-ID: <44E29C7C.2050509@ieee.org> Matthew Trentacoste wrote: > Hey. I'm trying to get numpy up and running on SuSE 10.1 and not > having much luck. 
> > I've been working with 1.0b2 and can get it to install without any > errors, but can't do anything with it. I run a local install of > python 2.4.3 just to keep out of whatever weirdness gets installed on > my machine by our sysadmins. Pretty standard fare, untar the ball, > and './setup.py install --prefix=$HOME/local' > Do you need to specify --prefix if you've already got Python installed somewhere? Are you missing it. From oliphant.travis at ieee.org Wed Aug 16 00:19:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 22:19:29 -0600 Subject: [Numpy-discussion] numpy 1.0b2 problems In-Reply-To: References: Message-ID: <44E29CD1.4090509@ieee.org> Matthew Trentacoste wrote: > Hey. I'm trying to get numpy up and running on SuSE 10.1 and not > having much luck. > > I've been working with 1.0b2 and can get it to install without any > errors, but can't do anything with it. I run a local install of > python 2.4.3 just to keep out of whatever weirdness gets installed on > my machine by our sysadmins. Pretty standard fare, untar the ball, > and './setup.py install --prefix=$HOME/local' > > It will complete that without issue, but when I try to run the test, > I get: > > Python 2.4.3 (#1, Aug 15 2006, 18:09:56) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> numpy.test(1) > Traceback (most recent call last): > File "", line 1, in ? > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > __init__.py", line 77, in test > return NumpyTest().test(level, verbosity) > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ > numpytest.py", line 285, in __init__ > from numpy.distutils.misc_util import get_frame > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/__init__.py", line 5, in ? > import ccompiler > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/ccompiler.py", line 6, in ? 
> from distutils.ccompiler import * > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/__init__.py", line 5, in ? > import ccompiler > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/ccompiler.py", line 7, in ? > from distutils import ccompiler > ImportError: cannot import name ccompiler > > This seems to be a path issue. Can you give us import sys print sys.path -Travis From mmt at cs.ubc.ca Wed Aug 16 02:58:53 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 23:58:53 -0700 Subject: [Numpy-discussion] Numpy-discussion Digest, Vol 3, Issue 42 In-Reply-To: References: Message-ID: For starters, wow. I'm sorry. I didn't mean to spam my problem 5 times. My mail server decided to fritz out today and I thought it was Sourceforge rejecting my emails since they didn't originate from the address I'm registered as. My apologies. > Do you need to specify --prefix if you've already got Python installed > somewhere? > > Are you missing it. I tried it again without setting it. No more luck. > This seems to be a path issue. Can you give us > > import sys > print sys.path [ '', '/home/m/mmt/local/lib/python2.4/site-packages', '/home/m/mmt/local/lib/python2.4/site-packages/PIL', '/home/m/mmt/local/lib/python2.4/site-packages/numpy', '/grads/mmt/local/lib/python24.zip', '/grads/mmt/local/lib/python2.4', '/grads/mmt/local/lib/python2.4/plat-linux2', '/grads/mmt/local/lib/python2.4/lib-tk', '/grads/mmt/local/lib/python2.4/lib-dynload', '/grads/mmt/local/lib/python2.4/site-packages', '/grads/mmt/local/lib/python2.4/site-packages/PIL' ] The top 3 are added to my python path by myself, the rest are included by default. FYI: /grads/mmt and /home/m/mmt map to the same directory. Sorry again about the repeat emails.
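The sys.path listing above likely explains the failure: .../site-packages/numpy itself is on the path, so numpy's internal distutils subpackage becomes importable as a top-level distutils and shadows the standard library's, which would produce exactly this circular "cannot import name ccompiler". A small sketch of the shadowing mechanism using a throwaway package (json is chosen arbitrarily as the stdlib name to shadow; all paths are temporary and illustrative):

```python
# Shadowing demo: an (empty) package named after a stdlib module, placed
# at the front of sys.path, hijacks the import -- the same failure mode
# as having .../site-packages/numpy itself on the path.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "json"))
open(os.path.join(tmp, "json", "__init__.py"), "w").close()

sys.path.insert(0, tmp)            # like adding .../site-packages/numpy
sys.modules.pop("json", None)      # force a fresh import lookup
sys.modules.pop("json.decoder", None)

import json
shadowed = json.__file__.startswith(tmp)   # True: the empty package wins
try:
    from json import decoder               # shadow has no 'decoder' submodule
    found = True
except ImportError:
    found = False                          # same shape as the ccompiler error
```

If this is the cause, dropping the .../site-packages/numpy entry (keeping only .../site-packages) from PYTHONPATH should fix the import.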
Matt From kwgoodman at gmail.com Wed Aug 16 09:45:14 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 16 Aug 2006 06:45:14 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/15/06, David Grant wrote: > My idea is (if I have time) to write an eigs-like function in python > that will only perform a subset of what Matlab's eigs does for. It > will, for example, compute a certain number of eigenvalues and > eigenvectors for a real, sparse, symmetric matrix (the case I'm > interested in) Will it also work for a real, dense, symmetric matrix? That's the case I'm interested in. But even if it doesn't, your work is great news for numpy. From nwagner at iam.uni-stuttgart.de Wed Aug 16 10:14:30 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 16:14:30 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: <44E32846.5000508@iam.uni-stuttgart.de> Keith Goodman wrote: > On 8/15/06, David Grant wrote: > > >> My idea is (if I have time) to write an eigs-like function in python >> that will only perform a subset of what Matlab's eigs does for. It >> will, for example, compute a certain number of eigenvalues and >> eigenvectors for a real, sparse, symmetric matrix (the case I'm >> interested in) >> > > AFAIK, pysparse (in the sandbox) includes a module that implements a Jacobi-Davidson eigenvalue solver for the symmetric, generalised matrix eigenvalue problem (JDSYM). Did someone test pysparse ? Nils > Will it also work for a real, dense, symmetric matrix? That's the case > I'm interested in. But even if it doesn't, your work is great news for > numpy. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From david.huard at gmail.com Wed Aug 16 10:16:59 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 16 Aug 2006 10:16:59 -0400 Subject: [Numpy-discussion] array equivalent to string.split(sep) Message-ID: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> Hi, I have a time series that I want to split into contiguous groups differentiated by a condition. I didn't find a vectorized way to do that, so I ended up doing a for loop... I know there are split functions that split arrays into equal-length subarrays, but is there a swell trick to return a sequence of arrays separated by a condition? For instance, I would like to do something like: >>> a = array([1,1,1,1,1,5,1,1,1,1,1,1,6,2,1,1]) >>> a.argsplit(a>1) [[0,1,2,3,4], [6,7,8,9,10,11], [14,15]] Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Aug 16 10:28:35 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 16:28:35 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: <44E32846.5000508@iam.uni-stuttgart.de> References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: <44E32B93.7020609@iam.uni-stuttgart.de> Nils Wagner wrote: > Keith Goodman wrote: > >> On 8/15/06, David Grant wrote: >> >> >> >>> My idea is (if I have time) to write an eigs-like function in python >>> that will only perform a subset of what Matlab's eigs does for.
It >>> will, for example, compute a certain number of eigenvalues and >>> eigenvectors for a real, sparse, symmetric matrix (the case I'm >>> interested in) >>> >>> >> >> > AFAIK, pysparse (in the sandbox) includes a module that implements a > Jacobi-Davidson > eigenvalue solver for the symmetric, generalised matrix eigenvalue > problem (JDSYM). > Did someone test pysparse? > > Nils > > >> Will it also work for a real, dense, symmetric matrix? That's the case >> I'm interested in. But even if it doesn't, your work is great news for >> numpy. >> Ok, it's not ready... gcc: Lib/sandbox/pysparse/src/spmatrixmodule.c In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:17: Lib/sandbox/pysparse/src/ll_mat.c: In function 'LLMat_matvec_transp': Lib/sandbox/pysparse/src/ll_mat.c:760: error: 'CONTIGUOUS'
undeclared (first use in this function) Lib/sandbox/pysparse/src/ll_mat.c:760: error: (Each undeclared identifier is reported only once Lib/sandbox/pysparse/src/ll_mat.c:760: error: for each function it appears in.) Lib/sandbox/pysparse/src/ll_mat.c: In function 'LLMat_matvec': Lib/sandbox/pysparse/src/ll_mat.c:797: error: 'CONTIGUOUS' undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:18: Lib/sandbox/pysparse/src/csr_mat.c: In function 'CSRMat_matvec_transp': Lib/sandbox/pysparse/src/csr_mat.c:119: error: 'CONTIGUOUS' undeclared (first use in this function) Lib/sandbox/pysparse/src/csr_mat.c: In function 'CSRMat_matvec': Lib/sandbox/pysparse/src/csr_mat.c:146: error: 'CONTIGUOUS' undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:19: Lib/sandbox/pysparse/src/sss_mat.c: In function 'SSSMat_matvec': Lib/sandbox/pysparse/src/sss_mat.c:83: error: 'CONTIGUOUS' undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:17: Lib/sandbox/pysparse/src/ll_mat.c: In function 'LLMat_matvec_transp': Lib/sandbox/pysparse/src/ll_mat.c:760: error: 'CONTIGUOUS' undeclared (first use in this function) Lib/sandbox/pysparse/src/ll_mat.c:760: error: (Each undeclared identifier is reported only once Lib/sandbox/pysparse/src/ll_mat.c:760: error: for each function it appears in.) Lib/sandbox/pysparse/src/ll_mat.c: In function 'LLMat_matvec': Lib/sandbox/pysparse/src/ll_mat.c:797: error: 'CONTIGUOUS' undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:18: Lib/sandbox/pysparse/src/csr_mat.c: In function 'CSRMat_matvec_transp': Lib/sandbox/pysparse/src/csr_mat.c:119: error: 'CONTIGUOUS' undeclared (first use in this function) Lib/sandbox/pysparse/src/csr_mat.c: In function 'CSRMat_matvec': Lib/sandbox/pysparse/src/csr_mat.c:146: error: 'CONTIGUOUS'
undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:19: Lib/sandbox/pysparse/src/sss_mat.c: In function ?SSSMat_matvec?: Lib/sandbox/pysparse/src/sss_mat.c:83: error: ?CONTIGUOUS? undeclared (first use in this function) error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -ILib/sandbox/pysparse/include/ -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/sandbox/pysparse/src/spmatrixmodule.c -o build/temp.linux-x86_64-2.4/Lib/sandbox/pysparse/src/spmatrixmodule.o" failed with exit status 1 Nils From davidgrant at gmail.com Wed Aug 16 11:10:35 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 08:10:35 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/16/06, Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > My idea is (if I have time) to write an eigs-like function in python > > that will only perform a subset of what Matlab's eigs does for. It > > will, for example, compute a certain number of eigenvalues and > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > interested in) > > Will it also work for a real, dense, symmetric matrix? That's the case > I'm interested in. But even if it doesn't, your work is great news for > numpy. > Real, dense, symmetric, well doesn't scipy already have something for this? I'm honestly not sure on the arpack side of things, I thought arpack was only useful (over other tools) for sparse matrices, I could be wrong. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fullung at gmail.com Wed Aug 16 11:23:05 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 16 Aug 2006 17:23:05 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: Message-ID: Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of David Grant > Sent: 16 August 2006 17:11 > To: Discussion of Numerical Python > Subject: Re: [Numpy-discussion] some work on arpack > > > > On 8/16/06, Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > My idea is (if I have time) to write an eigs-like function in > python > > that will only perform a subset of what Matlab's eigs does for. It > > will, for example, compute a certain number of eigenvalues and > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > interested in) > > Will it also work for a real, dense, symmetric matrix? That's the > case > I'm interested in. But even if it doesn't, your work is great news > for > numpy. > > Real, dense, symmetric, well doesn't scipy already have something for > this? I'm honestly not sure on the arpack side of things, I thought arpack > was only useful (over other tools) for sparse matrices, I could be wrong. Maybe SciPy can also do this, but what makes ARPACK useful is that it can get you a few eigenvalues and eigenvectors of a massive matrix without having to have the whole thing in memory. Instead, you provide ARPACK with a function that does A*x on your matrix. ARPACK passes a few x's to your function and a few eigenvalues and eigenvectors fall out. I recently used MATLAB's eigs to do exactly this. I had a dense matrix A with dimensions m x n, where m >> n. I wanted the eigenvalues of A'A (which has dimensions m x m, which is too large to keep in memory). But I could keep A and A' in memory I could quickly calculate A'A*x, which is what ARPACK needs. 
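The matrix-free trick described above — handing ARPACK nothing but a function that applies the matrix — can be sketched with today's SciPy, which wraps ARPACK as scipy.sparse.linalg.eigsh and accepts a LinearOperator. This API postdates the thread, and the matrix below is made up purely for illustration: a tall dense A whose m x m product AA' is treated as if it were too large to form.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
m, n = 500, 40
A = rng.standard_normal((m, n))        # tall dense matrix, m >> n

# AA' is m x m and is never formed; ARPACK only needs the product
# (AA') x, which we supply as A @ (A' @ x).
def matvec(x):
    return A @ (A.T @ x)

op = LinearOperator((m, m), matvec=matvec, dtype=A.dtype)

# a few largest-magnitude eigenvalues of AA'
vals = eigsh(op, k=5, which='LM', return_eigenvectors=False)
```

Since AA' shares its nonzero eigenvalues with the small n x n matrix A'A, the result can be checked against a dense eigensolve of the Gram matrix.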
Cheers, Albert From fullung at gmail.com Wed Aug 16 11:29:51 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 16 Aug 2006 17:29:51 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: Message-ID: Argh... > I recently used MATLAB's eigs to do exactly this. I had a dense matrix A > with dimensions m x n, where m >> n. I wanted the eigenvalues of A'A > (which > has dimensions m x m, which is too large to keep in memory). But I could Make that AA'. Cheers, Albert From aisaac at american.edu Wed Aug 16 11:13:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Aug 2006 11:13:03 -0400 Subject: [Numpy-discussion] array equivalent to string.split(sep) In-Reply-To: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> References: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> Message-ID: On Wed, 16 Aug 2006, David Huard apparently wrote: > I have a time series that I want to split into contiguous > groups differentiated by a condition. Perhaps itertools.groupby()? fwiw, Alan Isaac From davidgrant at gmail.com Wed Aug 16 12:26:07 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 09:26:07 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/16/06, Albert Strasheim wrote: > > Hello all > > > -----Original Message----- > > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > > discussion-bounces at lists.sourceforge.net] On Behalf Of David Grant > > Sent: 16 August 2006 17:11 > > To: Discussion of Numerical Python > > Subject: Re: [Numpy-discussion] some work on arpack > > > > > > > > On 8/16/06, Keith Goodman wrote: > > > > On 8/15/06, David Grant wrote: > > > > > My idea is (if I have time) to write an eigs-like function in > > python > > > that will only perform a subset of what Matlab's eigs does for. 
> It > > > will, for example, compute a certain number of eigenvalues and > > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > > interested in) > > > > Will it also work for a real, dense, symmetric matrix? That's the > > case > > I'm interested in. But even if it doesn't, your work is great news > > for > > numpy. > > > > Real, dense, symmetric, well doesn't scipy already have something for > > this? I'm honestly not sure on the arpack side of things, I thought > arpack > > was only useful (over other tools) for sparse matrices, I could be > wrong. > > Maybe SciPy can also do this, but what makes ARPACK useful is that it can > get you a few eigenvalues and eigenvectors of a massive matrix without > having to have the whole thing in memory. Instead, you provide ARPACK with > a > function that does A*x on your matrix. ARPACK passes a few x's to your > function and a few eigenvalues and eigenvectors fall out. Cool, thanks for the info. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Wed Aug 16 11:08:16 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 08:08:16 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: <44E32846.5000508@iam.uni-stuttgart.de> References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: On 8/16/06, Nils Wagner wrote: > > Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > > > >> My idea is (if I have time) to write an eigs-like function in python > >> that will only perform a subset of what Matlab's eigs does for. It > >> will, for example, compute a certain number of eigenvalues and > >> eigenvectors for a real, sparse, symmetric matrix (the case I'm > >> interested in) > >> > > > > > AFAIK, pysparse (in the sandbox) includes a module that implements a > Jacobi-Davidson > eigenvalue solver for the symmetric, generalised matrix eigenvalue > problem (JDSYM). 
> Did someone test pysparse ? > > I did try pysparse a few years ago (I think right before sparse stuff came into scipy). I think there is probably an old post asking the list about sparse stuff and I think Travis had just written it and told me about it... can't remember. Can JDSYM just return the k lowest eigenvalues/eigenvectors? -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Aug 16 12:50:05 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 18:50:05 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: On Wed, 16 Aug 2006 08:08:16 -0700 "David Grant" wrote: > On 8/16/06, Nils Wagner >wrote: >> >> Keith Goodman wrote: >> > On 8/15/06, David Grant wrote: >> > >> > >> >> My idea is (if I have time) to write an eigs-like >>function in python >> >> that will only perform a subset of what Matlab's eigs >>does for. It >> >> will, for example, compute a certain number of >>eigenvalues and >> >> eigenvectors for a real, sparse, symmetric matrix >>(the case I'm >> >> interested in) >> >> >> > >> > >> AFAIK, pysparse (in the sandbox) includes a module that >>implements a >> Jacobi-Davidson >> eigenvalue solver for the symmetric, generalised matrix >>eigenvalue >> problem (JDSYM). >> Did someone test pysparse ? >> >> I did try pysparse a few years ago (I think right before >>sparse stuff came > into scipy). I think there is probably an old post >asking the list about > sparse stuff and I think Travis had just written it and >told me about it... > can't remember. Can JDSYM just return the k lowest >eigenvalues/eigenvectors? > > -- > David Grant > http://www.davidgrant.ca Yes. See http://people.web.psi.ch/geus/pyfemax/pysparse_examples.html for details. 
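The "k lowest eigenpairs of a sparse symmetric matrix" use case that JDSYM covers is also what modern SciPy's ARPACK wrapper handles via shift-invert. The sketch below uses scipy.sparse.linalg.eigsh, which postdates this thread, on a hypothetical test matrix (a 1-D discrete Laplacian); it is an illustration of the idea, not the pysparse API.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 200
# real, sparse, symmetric test matrix: the 1-D discrete Laplacian
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

# k smallest eigenpairs, found as the eigenvalues nearest sigma=0
# (shift-invert mode converges much faster than which='SA' here)
vals, vecs = eigsh(L, k=3, sigma=0, which='LM')
```

For a matrix this small the answer is easy to cross-check against a dense eigensolve.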
Nils From davidgrant at gmail.com Wed Aug 16 14:45:27 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 11:45:27 -0700 Subject: [Numpy-discussion] log can't handle big ints Message-ID: I am using numpy-0.9.8 and it seems that numpy's log2 function can't handle large integers? In [19]: a=11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 In [20]: math.log(a,2) Out[20]: 292.48167544353294 In [21]: numpy.log2(a) --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/david/ /usr/lib/python2.4/site-packages/numpy/lib/ufunclike.py in log2(x, y) 52 x = asarray(x) 53 if y is None: ---> 54 y = umath.log(x) 55 else: 56 umath.log(x, y) AttributeError: 'long' object has no attribute 'log' Does anyone else get this in numpy? if not, what version are you using? -- David Grant http://www.davidgrant.ca From elijah.gregory at gmail.com Wed Aug 16 15:15:51 2006 From: elijah.gregory at gmail.com (Elijah Gregory) Date: Wed, 16 Aug 2006 12:15:51 -0700 Subject: [Numpy-discussion] Installation and Uninstallation Message-ID: Dear NumPy Users, I am attempting to install numpy-0.9.8 as a user on unix system. When I install numpy by typing "python setup.py install" as per the (only) instructions in the README.txt file everything proceeds smoothly until some point where the script attempts to write a file to the root-level /usr/lib64. How can I configure the setup.py script to use my user-level directories which I do have access to? Also, given that the install exited with an error, how do I clean up the aborted installation? Thank you for your help, regards, Elijah Gregory -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bhendrix at enthought.com Wed Aug 16 15:18:22 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 16 Aug 2006 14:18:22 -0500 Subject: [Numpy-discussion] Installation and Uninstallation In-Reply-To: References: Message-ID: <44E36F7E.1080307@enthought.com> python setup.py install --prefix=your_path You shouldn't have to clean up the previous install, if it got to the point where it was copy files, the first one would have failed. Next time you run setup.py with the --prefix option, it will pick up where the previous install left off. Bryce Elijah Gregory wrote: > Dear NumPy Users, > > I am attempting to install numpy-0.9.8 as a user on unix system. > When I install numpy by typing "python setup.py install" as per the > (only) instructions in the README.txt file everything proceeds > smoothly until some point where the script attempts to write a file to > the root-level /usr/lib64. How can I configure the setup.py script to > use my user-level directories which I do have access to? Also, given > that the install exited with an error, how do I clean up the aborted > installation? Thank you for your help, > > regards, > > Elijah Gregory > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Wed Aug 16 15:20:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 16 Aug 2006 12:20:43 -0700 Subject: [Numpy-discussion] log can't handle big ints In-Reply-To: References: Message-ID: <44E3700B.1060802@ieee.org> David Grant wrote: > I am using numpy-0.9.8 and it seems that numpy's log2 function can't > handle large integers? > > In [19]: a=11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 > > In [20]: math.log(a,2) > Out[20]: 292.48167544353294 > > In [21]: numpy.log2(a) > Ufuncs on objects (like the long object) work by looking for the corresponding method. It's not found for long objects. Convert the long object to a float first. I'm not sure of any other way to "fix" it. I suppose if no method is found an attempt to convert them to floats could be performed under the covers on all object array inputs. -Travis From kortmann at ideaworks.com Wed Aug 16 16:54:35 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Wed, 16 Aug 2006 13:54:35 -0700 (PDT) Subject: [Numpy-discussion] numpy.linalg.linalg.LinAlgError: Singular matrix Message-ID: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> all of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and sxt3 are all floats. A = array([[N, st, st2, st3],[st, st2, st3, st4], [st2, st3, st4, st5], [st3, st4, st5, st6]]) B = array ([sx, sxt, sxt2, sxt3]) lina = linalg.solve(A, B) is there something wrong with this code? it is returning File "C:\PYTHON23\Lib\site-packages\numpy\linalg\linalg.py", line 138, in solve raise LinAlgError, 'Singular matrix' numpy.linalg.linalg.LinAlgError: Singular matrix Does anyone know what I am doing wrong? 
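Kenny's A above is the 4x4 moment matrix of the normal equations for a cubic least-squares fit, and such matrices go singular (or numerically nearly so) whenever the underlying data are degenerate. One way to sidestep linalg.solve entirely is to fit against the Vandermonde matrix with numpy.linalg.lstsq, which also reports the rank that diagnoses the problem. The data below are made up for illustration — samples of a known polynomial, not Kenny's actual sums.

```python
import numpy as np

# Hypothetical data standing in for Kenny's accumulated sums:
# samples of x(t) = 1 + 2 t + 0.5 t^2.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = 1.0 + 2.0 * t + 0.5 * t ** 2

# Rather than forming the 4x4 moment matrix A and calling linalg.solve
# (which raises LinAlgError the moment A is singular), fit the cubic by
# least squares on the Vandermonde matrix directly:
V = np.vander(t, 4, increasing=True)   # columns: 1, t, t^2, t^3
coeffs, residuals, rank, sv = np.linalg.lstsq(V, x, rcond=None)
```

A returned rank below 4 would flag exactly the degeneracy (repeated t values, too few samples) that makes the normal equations singular.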
-Kenny From oliphant.travis at ieee.org Wed Aug 16 17:10:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 16 Aug 2006 14:10:17 -0700 Subject: [Numpy-discussion] Installation and Uninstallation In-Reply-To: References: Message-ID: <44E389B9.1030905@ieee.org> Elijah Gregory wrote: > Dear NumPy Users, > > I am attempting to install numpy-0.9.8 as a user on unix system. > When I install numpy by typing "python setup.py install" as per the > (only) instructions in the README.txt file everything proceeds > smoothly until some point where the script attempts to write a file to > the root-level /usr/lib64. How can I configure the setup.py script to > use my user-level directories which I do have access to? Also, given > that the install exited with an error, how do I clean up the aborted > installation? Is there a particular reason you are installing numpy-0.9.8? Please use the latest version as 0.9.8 is a pre-beta release. -Travis From yatimameiji at gmail.com Wed Aug 16 19:29:11 2006 From: yatimameiji at gmail.com (Yatima Meiji) Date: Wed, 16 Aug 2006 18:29:11 -0500 Subject: [Numpy-discussion] Atempt to build numpy-1.0b2 fail on distutils.ccompiler Message-ID: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> I'm currently running a fresh install of Suse 10.1. I ran the numpy setup script using "python setup.py install" and it fails with this error: Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 89, in ? setup_package() File "setup.py", line 59, in setup_package from numpy.distutils.core import setup File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ? import ccompiler File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * ImportError: No module named distutils.ccompiler I checked ccompiler.py to see what was wrong. I'm not much of a programmer, but it seems strange to have ccompiler.py reference itself. 
I'm guessing others have compiled numpy just fine, so what's wrong with me? Thanks in advance. -- "Physics is like sex: sure, it may give some practical results, but that's not why we do it." -- Richard P. Feynman -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Aug 16 19:33:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Aug 2006 16:33:43 -0700 Subject: [Numpy-discussion] Atempt to build numpy-1.0b2 fail on distutils.ccompiler In-Reply-To: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> References: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> Message-ID: Yatima Meiji wrote: > I'm currently running a fresh install of Suse 10.1. I ran the numpy > setup script using "python setup.py install" and it fails with this error: > > Running from numpy source directory. > Traceback (most recent call last): > File "setup.py", line 89, in ? > setup_package() > File "setup.py", line 59, in setup_package > from numpy.distutils.core import setup > File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ? > import ccompiler > File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ? > from distutils.ccompiler import * > ImportError: No module named distutils.ccompiler > > I checked ccompiler.py to see what was wrong. I'm not much of a > programmer, but it seems strange to have ccompiler.py reference itself. It's not; it's trying to import from the standard library's distutils.ccompiler module. Suse, like several other Linux distributions, separates distutils from the rest of the standard library in a separate package which you will need to install. It will be called something like python-dev or python-devel. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From drswalton at gmail.com Wed Aug 16 19:51:24 2006 From: drswalton at gmail.com (Stephen Walton) Date: Wed, 16 Aug 2006 16:51:24 -0700 Subject: [Numpy-discussion] numpy.linalg.linalg.LinAlgError: Singular matrix In-Reply-To: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> References: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> Message-ID: <693733870608161651j77732739w6a90e449bf6670b2@mail.gmail.com> On 8/16/06, kortmann at ideaworks.com wrote: > > all of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and > sxt3 are all floats. > > > A = array([[N, st, st2, st3],[st, st2, st3, st4], [st2, st3, st4, st5], > [st3, st4, st5, st6]]) > B = array ([sx, sxt, sxt2, sxt3]) > lina = linalg.solve(A, B) Is your matrix A in fact singular? Without numerical values of A, st, etc., it is hard to know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Thu Aug 17 08:23:03 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 17 Aug 2006 08:23:03 -0400 Subject: [Numpy-discussion] numpy.bool8 Message-ID: <44E45FA7.7080209@stsci.edu> What happened to numpy.bool8? I realize that bool_ is just as good. I was just wondering what motivated the change? Chris From aisaac at american.edu Thu Aug 17 12:37:47 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 17 Aug 2006 12:37:47 -0400 Subject: [Numpy-discussion] how to reference Numerical Python in a scientific publication In-Reply-To: References: <44DAB7E1.8090108@msg.ucsf.edu> Message-ID: In BibTeX format. fwiw, Alan Isaac @MANUAL{Oliphant:2006, author = {Oliphant, Travis E.}, year = 2006, title = {Guide to NumPy}, month = mar, address = {Provo, UT}, institution = {Brigham Young University} } @ARTICLE{Dubois+etal:1996, author = {Dubois, Paul F. 
and Konrad Hinsen and James Hugunin}, year = {1996}, title = {Numerical Python}, journal = {Computers in Physics}, volume = 10, number = 3, month = {May/June} } @ARTICLE{Dubois:1999, author = {Dubois, Paul F.}, year = 1999, title = {Extending Python with Fortran}, journal = {Computing Science and Engineering}, volume = 1, number = 5, month = {Sep/Oct}, pages = {66--73} } @ARTICLE{Scherer+etal:2000, author = {Scherer, David and Paul Dubois and Bruce Sherwood}, year = 2000, title = {VPython: 3D Interactive Scientific Graphics for Students}, journal = {Computing in Science and Engineering}, volume = 2, number = 5, month = {Sep/Oct}, pages = {56--62} } @MANUAL{Ascher+etal:1999, author = {Ascher, David and Paul F. Dubois and Konrad Hinsen and James Hugunin and Travis Oliphant}, year = 1999, title = {Numerical Python}, edition = {UCRL-MA-128569}, address = {Livermore, CA}, organization = {Lawrence Livermore National Laboratory} } From christopher.e.kees at erdc.usace.army.mil Thu Aug 17 13:01:13 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Thu, 17 Aug 2006 12:01:13 -0500 Subject: [Numpy-discussion] convertcode.py Message-ID: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> Hi, I just ran convertcode.py on my code (from the latest svn source of numpy) and it looks like it just changed the import statements to import numpy.oldnumeric as Numeric So it doesn't look like it's really helping me move over to the new usage. Is there a script that will converts code to use the new numpy as it's intended to be used? Thanks, Chris From wes25 at tom.com Sun Aug 20 14:30:57 2006 From: wes25 at tom.com (=?GB2312?B?IjjUwjI2LTI3yNUvy9XW3SI=?=) Date: Mon, 21 Aug 2006 02:30:57 +0800 Subject: [Numpy-discussion] =?GB2312?B?cmU6yfqy+tK7z9/W97ncvLzE3Mzhyf0=?= Message-ID: An HTML attachment was scrubbed... 
URL:
-------------- next part --------------
-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 -------------- next part -------------- _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From davidgrant at gmail.com Thu Aug 17 15:48:35 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 12:48:35 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 Message-ID: I'm contemplating upgrading to 1.0b2. The main reason is that I am experiencing a major memory leak and before I report a bug I think the developers would appeciate if I was using the most recent version. Am I correct in that the only major change that might actually break my code is that the following functions: take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, amin, prod, cumprod, mean, std, var now have axis=None as argument? BTW, how come alter_code2.py ( http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) says in the docstring that it "converts functions that don't give axis= keyword that have changed" but I don't see it actually doing that anywhere in the code? Thanks, David -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Thu Aug 17 19:25:12 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 16:25:12 -0700 Subject: [Numpy-discussion] Interesting memory leak Message-ID: Hello all, I had a massive memory leak in some of my code. It would basically end up using up all 1GB of my RAM or more if I don't kill the application. 
I managed to finally figure out which portion of the code was causing the leak (with great difficulty) and have a little example which exposes the leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is already fixed in 1.0b2. Run this through valgrind with appropriate options (I used the recommended valgrind_py.sh that I found on scipy's site somewhere) and this will leak 100kB. Increase the xrange on the big loop and you can watch the memory increase over time in top. The interesting thing is that the only difference between the leaky and non-leaky code is:

if not adjacencyMatrix[anInt2,anInt1] == 0: (leaky)

vs.

if not adjacencyMatrix[anInt2][anInt1] == 0: (non-leaky)

however another way to make the leaky code non-leaky is to change anArrayOfInts to just be [1]

Here's the code:

from numpy import array

def leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix):
    ys = set()
    for aList in aListOfArrays:
        for anInt1 in anArrayOfInts:
            for anInt2 in aList:
                if not adjacencyMatrix[anInt2,anInt1] == 0:
                    ys.add(anInt1)
    return ys

def nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix):
    ys = set()
    for aList in aListOfArrays:
        for anInt1 in anArrayOfInts:
            for anInt2 in aList:
                if not adjacencyMatrix[anInt2][anInt1] == 0:
                    ys.add(anInt1)
    return ys

if __name__ == "__main__":
    for i in xrange(10000):
        aListOfArrays = [[0, 1]]
        anArrayOfInts = array([1])
        adjacencyMatrix = array([[0,1],[1,0]])
        #COMMENT OUT ONE OF THE 2 LINES BELOW
        #bar = nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix)
        bar = leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix)

--
David Grant
http://www.davidgrant.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Thu Aug 17 19:30:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Aug 2006 16:30:23 -0700 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID: David Grant wrote: > Hello all, > > I had a massive memory leak in some of my code. It would basically end > up using up all 1GB of my RAM or more if I don't kill the application. I > managed to finally figure out which portion of the code was causing the > leak (with great difficulty) and have a little example which exposes the > leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is > already fixed in 1.0b2. Run this through valgrind with appropriate > options (I used the recommended valgrind_py.sh that I found on scipy's > site somewhere) and this will leak 100kB. Increase the xrange on the big > loop and you can watch the memory increase over time in top. I don't see a leak in 1.0b2.dev3002. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From davidgrant at gmail.com Thu Aug 17 20:08:28 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 17:08:28 -0700 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID: On 8/17/06, Robert Kern wrote: > > David Grant wrote: > > Hello all, > > > > I had a massive memory leak in some of my code. It would basically end > > up using up all 1GB of my RAM or more if I don't kill the application.
I > > managed to finally figure out which portion of the code was causing the > > leak (with great difficulty) and have a little example which exposes the > > leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is > > already fixed in 1.0b2. Run this through valgrind with appropriate > > options (I used the recommended valgrind_py.sh that I found on scipy's > > site somewhere) and this will leak 100kB. Increase the xrange on the big > > loop and you can watch the memory increase over time in top. > > I don't see a leak in 1.0b2.dev3002. Thanks Robert. I decided to upgrade to 1.0b2 just to see what I get and now I get 7kB of "possibly lost" memory, coming from PyObject_Malloc (in /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it isn't getting any larger if I increase the loop iterations. Looks good then. I don't really know the meaning of this "possibly lost" memory. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri Aug 18 00:13:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 18 Aug 2006 13:13:06 +0900 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output Message-ID: If you do this: >>> numpy.linalg.eig(numpy.random.rand(3,3)) You'll (almost always) get a wrong answer back from numpy. Something like: (array([ 1.72167898, -0.07251007, -0.07251007]), array([[ 0.47908847, 0.72095163, 0.72095163], [ 0.56659142, -0.46403504, -0.46403504], [ 0.67040914, 0.01361572, 0.01361572]])) The return value should be complex (unless rand() just happens to return something symmetric). It really needs to either throw an exception, or preferably for this function, just go ahead and return something complex, like the numpy.dftfunctions do. 
On the other hand, it would be nice to stick with plain doubles if the output isn't complex, but I'd rather get the right output all the time than get the minimal type that will handle the output. This is with beta 1. Incidentally, I tried logging into the Trac here: http://projects.scipy.org/scipy/scipy to file a bug, but it wouldn't let me in under the account I've been using for a while now. Is the login system broken? Were passwords reset or something? --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Aug 18 00:54:44 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 13:54:44 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? Message-ID: <44E54814.7030803@ar.media.kyoto-u.ac.jp> Hi, I am investigating the use of ctypes to write C extensions for numpy/scipy. First, thank you for the wiki, it makes it easy to implement in a few minutes a wrapper for a C function taking arrays as arguments. I am running recent SVN version of numpy and scipy, and I couldn't make load_library work as I expected: Let's say I have a libhello.so library on linux, which contains the C function int sum(const int* in, size_t n). To wrap it, I use:

import numpy as N
from ctypes import cdll, POINTER, c_int, c_uint

_hello = cdll.LoadLibrary('libhello.so')
_hello.sum.restype = c_int
_hello.sum.artype = [POINTER(c_int), c_uint]

def sum(data):
    return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data))

n = 10
data = N.arange(n)
print data
print "sum(data) is " + str(sum(data))

That works OK, but to avoid the platform dependency, I would like to use load_library from numpy: I just replace the cdll.LoadLibrary by :

_hello = N.ctypeslib.load_library('hello', '.')

which does not work.
The python interpreter returns a strange error message, because it says hello.so.so is not found, and it is looking for the library in the directory usr/$(PWD), which does not make sense to me. Is it a bug, or am I just not understanding how to use the load_library function ? David From joris at ster.kuleuven.be Fri Aug 18 02:21:39 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 18 Aug 2006 08:21:39 +0200 Subject: [Numpy-discussion] numpy installation problem Message-ID: <200608180821.39074.joris@ster.kuleuven.be> Hi, In the README.txt of the numpy installation it says that one could use a site.cfg file to specify non-standard locations of ATLAS and LAPACK libraries, but it doesn't explain how. I have a directory software/atlas3.6.0/lib/Linux_PPROSSE2/ which contains

libcombinedlapack.a
libatlas.a
libcblas.a
libf77blas.a
liblapack.a
libtstatlas.a

where liblapack.a contains the few LAPACK routines provided by ATLAS, and libcombinedlapack.a (> 5 MB) contains the full LAPACK library including the few optimized routines of ATLAS. From the example in numpy/distutils/system_info.py I figured that my site.cfg file should look like

--- site.cfg ---
[atlas]
library_dirs = /software/atlas3.6.0/lib/Linux_PPROSSE2/
atlas_libs = combinedlapack, f77blas, cblas, atlas
---------------

However, during numpy installation, it says:

FOUND:
libraries = ['combinedlapack', 'f77blas', 'cblas', 'atlas']
library_dirs = ['/software/atlas3.6.0/lib/Linux_PPROSSE2/']

which is good, but afterwards it also says:

Lapack library (from ATLAS) is probably incomplete: size of /software/atlas3.6.0/lib/Linux_PPROSSE2/liblapack.a is 305k (expected >4000k)

which is a library it shouldn't use at all. Strangely enough, renaming libcombinedlapack.a to liblapack.a and adapting the site.cfg file accordingly still gives the same message. Any pointers?
Joris From nwagner at iam.uni-stuttgart.de Fri Aug 18 03:21:38 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 18 Aug 2006 09:21:38 +0200 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output In-Reply-To: References: Message-ID: <44E56A82.2030604@iam.uni-stuttgart.de> Bill Baxter wrote: > If you do this: > >>> numpy.linalg.eig(numpy.random.rand(3,3)) > > You'll (almost always) get a wrong answer back from numpy. Something > like: > > (array([ 1.72167898, -0.07251007, -0.07251007]), > array([[ 0.47908847, 0.72095163, 0.72095163], > [ 0.56659142, -0.46403504, -0.46403504], > [ 0.67040914, 0.01361572, 0.01361572]])) > > The return value should be complex (unless rand() just happens to > return something symmetric). > > It really needs to either throw an exception, or preferably for this > function, just go ahead and return something complex, like the > numpy.dft functions do. > On the other hand it, would be nice to stick with plain doubles if the > output isn't complex, but I'd rather get the right output all the time > than get the minimal type that will handle the output. > > This is with beta 1. > > Incidentally, I tried logging into the Trac here: > http://projects.scipy.org/scipy/scipy > to file a bug, but it wouldn't let me in under the account I've been > using for a while now. Is the login system broken? Were passwords > reset or something? > > > --bb > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > AFAIK this problem is fixed. http://projects.scipy.org/scipy/numpy/ticket/215 I have no problem wrt the Trac system. Nils From wbaxter at gmail.com Fri Aug 18 04:06:41 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 18 Aug 2006 17:06:41 +0900 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output In-Reply-To: <44E56A82.2030604@iam.uni-stuttgart.de> References: <44E56A82.2030604@iam.uni-stuttgart.de> Message-ID: Thanks for the info Nils. Sounds like it was fixed post-1.0b1. Good news. And Trac seems to be letting me in again. Not sure what was wrong there. --bb On 8/18/06, Nils Wagner wrote: > > Bill Baxter wrote: > > If you do this: > > >>> numpy.linalg.eig(numpy.random.rand(3,3)) > > > > You'll (almost always) get a wrong answer back from numpy. Something > > like: > > > > (array([ 1.72167898, -0.07251007, -0.07251007]), > > array([[ 0.47908847, 0.72095163, 0.72095163], > > [ 0.56659142, -0.46403504, -0.46403504], > > [ 0.67040914, 0.01361572, 0.01361572]])) > > > > The return value should be complex (unless rand() just happens to > > return something symmetric). > > > > It really needs to either throw an exception, or preferably for this > > function, just go ahead and return something complex, like the > > numpy.dft functions do. > > On the other hand it, would be nice to stick with plain doubles if the > > output isn't complex, but I'd rather get the right output all the time > > than get the minimal type that will handle the output. 
> > > > This is with beta 1. > > > > Incidentally, I tried logging into the Trac here: > > http://projects.scipy.org/scipy/scipy > > to file a bug, but it wouldn't let me in under the account I've been > > using for a while now. Is the login system broken? Were passwords > > reset or something? > > > > > > --bb > > > > - > > AFAIK this problem is fixed. > > http://projects.scipy.org/scipy/numpy/ticket/215 > > I have no problem wrt the Trac system. > > Nils > > -discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Fri Aug 18 05:16:46 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Aug 2006 11:16:46 +0200 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <44E54814.7030803@ar.media.kyoto-u.ac.jp> References: <44E54814.7030803@ar.media.kyoto-u.ac.jp> Message-ID: <20060818091646.GR10593@mentat.za.net> On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote: > import numpy as N > from ctypes import cdll, POINTER, c_int, c_uint > > _hello = cdll.LoadLibrary('libhello.so') > > _hello.sum.restype = c_int > _hello.sum.artype = [POINTER(c_int), c_uint] > > def sum(data): > return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data)) > > n = 10 > data = N.arange(n) > > print data > print "sum(data) is " + str(sum(data)) > > > That works OK, but to avoid the platform dependency, I would like to use > load_library from numpy: I just replace the cdll.LoadLibrary by : > > _hello = N.ctypeslib.load_library('hello', '.') Shouldn't that be 'libhello'? 
Try _hello = N.ctypes_load_library('libhello','__file__') Cheers Stéfan From fullung at gmail.com Fri Aug 18 06:31:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 18 Aug 2006 12:31:06 +0200 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: Message-ID: Hello all > > I decided to upgrade to 1.0b2 just to see what I get and now I get 7kB of > "possibly lost" memory, coming from PyObject_Malloc (in > /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it > isn't getting any larger if I increase the loop iterations. Looks good > then. I don't really know the meaning of this "possibly lost" memory. http://projects.scipy.org/scipy/numpy/ticket/195 This leak is caused by add_docstring, but it's supposed to leak. I wonder if there's a way to register some kind of on-exit handler in Python so that this can also be cleaned up? Cheers, Albert From fullung at gmail.com Fri Aug 18 06:40:05 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 18 Aug 2006 12:40:05 +0200 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <44E54814.7030803@ar.media.kyoto-u.ac.jp> Message-ID: Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of David Cournapeau > Sent: 18 August 2006 06:55 > To: Discussion of Numerical Python > Subject: [Numpy-discussion] ctypes: how does load_library work ? > > > That works OK, but to avoid the platform dependency, I would like to use > load_library from numpy: I just replace the cdll.LoadLibrary by : > > _hello = N.ctypeslib.load_library('hello', '.') > > which does not work. The python interpreter returns a strange error > message, because it says hello.so.so is not found, and it is looking for > the library in the directory usr/$(PWD), which does not make sense to > me. Is it a bug, or am I just not understanding how to use the > load_library function ?
load_library currently assumes that library names don't have a prefix. We might want to rethink this assumption on Linux and other Unixes. load_library's second argument is a filename or a directory name. If it's a directory, load_library looks for hello. in that directory. If it's a filename, load_library calls os.path.dirname to get a directory. The idea with this is that in a module you'll probably have one file that loads the library and sets up argtypes and restypes and here you'll do (in mylib.py): _mylib = numpy.ctypeslib.load_library('mylib_', __file__) and then the library will be installed in the same directory as mylib.py. Better suggestions for doing all this appreciated. ;-) Cheers, Albert From david at ar.media.kyoto-u.ac.jp Fri Aug 18 07:36:21 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 20:36:21 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <20060818091646.GR10593@mentat.za.net> References: <44E54814.7030803@ar.media.kyoto-u.ac.jp> <20060818091646.GR10593@mentat.za.net> Message-ID: <44E5A635.3090403@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote: > >> import numpy as N >> from ctypes import cdll, POINTER, c_int, c_uint >> >> _hello = cdll.LoadLibrary('libhello.so') >> >> _hello.sum.restype = c_int >> _hello.sum.artype = [POINTER(c_int), c_uint] >> >> def sum(data): >> return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data)) >> >> n = 10 >> data = N.arange(n) >> >> print data >> print "sum(data) is " + str(sum(data)) >> >> >> That works OK, but to avoid the platform dependency, I would like to use >> load_library from numpy: I just replace the cdll.LoadLibrary by : >> >> _hello = N.ctypeslib.load_library('hello', '.') >> > > Shouldn't that be 'libhello'? 
Try > > _hello = N.ctypes_load_library('libhello','__file__') > Well, the library name convention under unix, as far as I know, is 'lib'+ name + '.so' + 'version'. And if I put lib in front of hello, it then does not work under windows. David From david at ar.media.kyoto-u.ac.jp Fri Aug 18 07:42:22 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 20:42:22 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: References: Message-ID: <44E5A79E.5090402@ar.media.kyoto-u.ac.jp> Albert Strasheim wrote: > Hello all > > >> -----Original Message----- >> From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- >> discussion-bounces at lists.sourceforge.net] On Behalf Of David Cournapeau >> Sent: 18 August 2006 06:55 >> To: Discussion of Numerical Python >> Subject: [Numpy-discussion] ctypes: how does load_library work ? >> >> >> That works OK, but to avoid the platform dependency, I would like to use >> load_library from numpy: I just replace the cdll.LoadLibrary by : >> >> _hello = N.ctypeslib.load_library('hello', '.') >> >> which does not work. The python interpreter returns a strange error >> message, because it says hello.so.so is not found, and it is looking for >> the library in the directory usr/$(PWD), which does not make sense to >> me. Is it a bug, or am I just not understanding how to use the >> load_library function ? >> > > load_library currently assumes that library names don't have a prefix. We > might want to rethink this assumption on Linux and other Unixes. > I think it needs to be modified for linux and Solaris at least, where the prefix lib is put in the library name. When linking, you use -lm, and not -llibm. In dlopen, you use the full name (libm.so). After a quick look at the ctypes reference doc, it looks like there are some functions to search for a library, maybe this can be used ? Anyway, this is kind of nitpicking, as ctypes is really a breeze to use.
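The convention David describes can be sketched in a few lines; ctypes also ships a search helper, ctypes.util.find_library, which is presumably the function he spotted in the reference doc (a sketch under those assumptions, not code from the thread):

```python
import sys
import ctypes.util

def shared_library_name(name):
    # The convention discussed above: 'lib' + name + a platform
    # extension on unix-like systems, a bare name + '.dll' on windows.
    if sys.platform.startswith('win'):
        return name + '.dll'
    if sys.platform == 'darwin':
        return 'lib' + name + '.dylib'
    return 'lib' + name + '.so'   # linux, solaris, ...

# On linux this gives 'libhello.so', which is what dlopen wants.
assert shared_library_name('hello') in ('hello.dll', 'libhello.dylib',
                                        'libhello.so')

# find_library hides the convention entirely: asked for the C math
# library it returns e.g. 'libm.so.6' on linux, or None when the
# library cannot be located.
libm_name = ctypes.util.find_library('m')
```

A helper like this is one way load_library could map a bare name to a platform file name without callers spelling out the prefix themselves.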
To be able to do the whole wrapping in pure python is great, thanks! David From faltet at carabos.com Fri Aug 18 07:59:03 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 13:59:03 +0200 Subject: [Numpy-discussion] First impressions on migrating to NumPy Message-ID: <200608181359.03643.faltet@carabos.com> Hi, I'm starting to (slowly) replace numarray by NumPy at the core of PyTables, especially at those places where the speed of NumPy is *much* better, that is, in the creation of arrays (there are places in PyTables where this is critical, most especially in indexing) and in copying arrays. In both cases, NumPy performs between 8x and 40x faster than numarray and this is, well..., excellent :-) Also, the big unification between numerical homogeneous arrays, string homogeneous arrays (with unicode support added) and heterogeneous arrays (recarrays, with nested records support there also!) is simplifying very much the code in PyTables, where there are many places where one has to distinguish between those different objects in numarray. Fortunately, this distinction is not necessary anymore in many of these places. Furthermore, I'm seeing that most of the corner cases where numarray does well (this was the main reason I was conservative about migrating anyway) are also very well resolved in NumPy (in some cases better; for one, NumPy has chosen NULL-terminated strings for internal representation, instead of the space padding in numarray that gave me lots of headaches). Of course, there are some glitches that I'll report appropriately, but overall, NumPy is behaving better than expected (and I already had *great* expectations). Well, I just wanted to report these experiences just in case other people are pondering about migrating as well to NumPy. But I also wanted to acknowledge (once more) the excellent work of the NumPy crew, and especially Travis for his first-class work. Thanks! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From Norbert.Nemec.list at gmx.de Fri Aug 18 09:36:47 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Fri, 18 Aug 2006 15:36:47 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup Message-ID: <44E5C26F.6020609@gmx.de> Hi there, in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch fixes the issue. Greetings, Norbert PS: I would have preferred to submit this patch via the sourceforge bug-tracker, but that seems rather confusing: there are tabs "Numarray Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" seems to be used for Numeric. Why isn't NumPy handled via the Sourceforge page? -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: threading-without-smp-setup-bugfix.diff URL: From faltet at carabos.com Fri Aug 18 10:34:18 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 16:34:18 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <200608181634.19694.faltet@carabos.com> On Friday 18 August 2006 15:36, Norbert Nemec wrote: > PS: I would have preferred to submit this patch via the sourceforge > bug-tracker, but that seems rather confusing: there are tabs "Numarray > Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" > seems to be used for Numeric. Why isn't NumPy handled via the > Sourceforge page? Because it has its own development site at: http://projects.scipy.org/scipy/numpy/ Log your bug reports there. Sourceforge is mainly used to distribute tarballs and binary packages of public releases, that's all. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From christopher.e.kees at erdc.usace.army.mil Fri Aug 18 10:44:12 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Fri, 18 Aug 2006 09:44:12 -0500 Subject: [Numpy-discussion] First impressions on migrating to NumPy In-Reply-To: <200608181359.03643.faltet@carabos.com> References: <200608181359.03643.faltet@carabos.com> Message-ID: <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> Can you provide some details about your approach to migrating to NumPy? Are you following some documentation on migration or do you have your own plan of attack? Chris On Aug 18, 2006, at 6:59 AM, Francesc Altet wrote: > Hi, > > I'm starting to (slowly) replace numarray by NumPy at the core of > PyTables, > specially at those places where the speed of NumPy is *much* > better, that is, > in the creation of arrays (there are places in PyTables where this is > critical, most specially in indexation) and in copying arrays. In > both cases, > NumPy performs between 8x to 40x than numarray and this is, well..., > excellent :-) > > Also, the big unification between numerical homogeneous arrays, > string > homogeneous arrays (with unicode support added) and heterogeneous > arrays > (recarrays, with nested records support there also!) is simplyfying > very much > the code in PyTables where there are many places where one have to > distinguish between those different objects in numarray. > Fortunately, this > distinction is not necessary anymore in many of this places. > > Furthermore, I'm seeing that most of the corner cases where > numarray do well > (this was the main reason I was conservative about migrating > anyway), are > also very well resolved in NumPy (in some cases better, as for one, > NumPy has > chosen NULL terminated strings for internal representation, instead > of space > padding in numarray that gave me lots of headaches).
Of course, > there are > some glitches that I'll report appropriately, but overall, NumPy is > behaving > better than expected (and I already had *great* expectations). > > Well, I just wanted to report these experiences just in case other > people is > pondering about migrating as well to NumPy. But also wanted to > thanks (once > more), the excellent work of the NumPy crew, and specially Travis > for their > first-class work. > > Thanks! > > -- >> 0,0< Francesc Altet http://www.carabos.com/ > V V Cárabos Coop. V. Enjoy Data > "-" From stefan at sun.ac.za Fri Aug 18 10:45:03 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Aug 2006 16:45:03 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <20060818144503.GW10593@mentat.za.net> Hi Norbert On Fri, Aug 18, 2006 at 03:36:47PM +0200, Norbert Nemec wrote: > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch > fixes the issue. Please file a ticket so that we don't lose track of this.
Stéfan From faltet at carabos.com Fri Aug 18 11:07:51 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 17:07:51 +0200 Subject: [Numpy-discussion] First impressions on migrating to NumPy In-Reply-To: <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> References: <200608181359.03643.faltet@carabos.com> <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> Message-ID: <200608181707.52563.faltet@carabos.com> On Friday 18 August 2006 16:44, Chris Kees wrote: > Can you provide some details about your approach to migrating to > NumPy? Are you following some documentation on migration or do you > have your own plan of attack? Well, to tell the truth, neither ;-). The truth is that I was trying to accelerate some parts of my software and realized that numarray was an important bottleneck. NumPy was already in an advanced beta stage and some small benchmarks convinced me that it would be the solution. So, I started porting one single C extension (PyTables has several), the simplest one, and checked that the results were correct (and confirmed that the new code was much faster!). After that, the second extension was converted and I'm in the process of checking everything. Now, there remain 3 more extensions to migrate, but the important ones for me are done. So, no plans other than having a good motivation (and the need for speed was a very good one). However, I think that having a complete test suite checking every detail of your software was key. Also, having access to the excellent book by Travis was extremely helpful. Finally, having IPython open to check everything, look at online docstrings and be able to do fast timings added the "cerise sur le gâteau". Luck! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From oliphant.travis at ieee.org Fri Aug 18 14:18:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:18:14 -0700 Subject: [Numpy-discussion] numpy.bool8 In-Reply-To: <44E45FA7.7080209@stsci.edu> References: <44E45FA7.7080209@stsci.edu> Message-ID: <44E60466.2060504@ieee.org> Christopher Hanley wrote: > What happened to numpy.bool8? I realize that bool_ is just as good. I > was just wondering what motivated the change? > > I think it was accidental... The numpy scalar tp_names were recently changed to be more consistent with Python and the bool8 construct probably disappeared because it was automatically generated. Thanks for the check. -Travis From oliphant.travis at ieee.org Fri Aug 18 14:21:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:21:05 -0700 Subject: [Numpy-discussion] convertcode.py In-Reply-To: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> References: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> Message-ID: <44E60511.40507@ieee.org> Chris Kees wrote: > Hi, > > I just ran convertcode.py on my code (from the latest svn source > of numpy) and it looks like it just changed the import statements to > > import numpy.oldnumeric as Numeric > > So it doesn't look like it's really helping me move over to the > new usage.
Is there a script that will convert code to use the > new numpy as it's intended to be used? > Not yet. The transition approach is to use the compatibility layer first by running oldnumeric.alter_code1.py and then running alter_code2.py which will take you from the compatibility layer to NumPy (but alter_code2 is not completed yet). The description of what these codes do is in the latest version of the second chapter of my book (which is part of the preview chapters that are available on the web). -Travis From oliphant.travis at ieee.org Fri Aug 18 14:23:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:23:45 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 In-Reply-To: References: Message-ID: <44E605B1.2060705@ieee.org> David Grant wrote: > I'm contemplating upgrading to 1.0b2. The main reason is that I am > experiencing a major memory leak and before I report a bug I think the > developers would appreciate if I was using the most recent version. Am > I correct in that the only major change that might actually break my > code is that the following functions: > > take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, > amin, prod, cumprod, mean, std, var > > now have axis=None as argument? Also the default return type is "float" instead of "int". I've highlighted the changes I think might break 0.9.8 code with the NOTE annotation on the page of release notes. > > BTW, how come alter_code2.py ( > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) > says in the docstring that it "converts functions that don't give > axis= keyword that have changed" but I don't see it actually doing > that anywhere in the code? Because it isn't done. The comments are a "this is what it should do". If you notice there is a warning on import (probably should be an error).
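The two behaviour changes Travis flags here (the axis=None default on reductions and the float default type) can be illustrated against a released NumPy (a sketch, not code from the thread):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

# Reductions such as sum() now default to axis=None, i.e. they reduce
# over the flattened array; pass axis=0 to get the old Numeric default.
assert a.sum() == 15
assert a.sum(axis=0).tolist() == [3, 5, 7]

# And constructors like zeros() now default to a float dtype
# rather than int.
assert np.zeros(3).dtype == np.float64
```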
-Travis From haase at msg.ucsf.edu Fri Aug 18 14:26:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 11:26:12 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize Message-ID: <200608181126.12599.haase@msg.ucsf.edu> Hi, array dtype descriptors have an attribute itemsize that gives the total number of bytes required for an item of that dtype. Scalar types, like numpy.int32, also have that attribute, but it returns "something else" - don't know what:

>>> a.dtype.itemsize
4
>>> a.dtype.name
'float32'
>>> N.int32.itemsize

Furthermore there are *lots* more attributes on a scalar type, e.g.

>>> N.int32.data
>>> N.int32.argmax()
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: descriptor 'argmax' of 'genericscalar' object needs an argument

Are those useful? Thanks, Sebastian Haase From oliphant.travis at ieee.org Fri Aug 18 14:34:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:34:27 -0700 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <44E60833.2060100@ieee.org> Norbert Nemec wrote: > Hi there, > > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch > fixes the issue. > > Greetings, > Norbert > > PS: I would have preferred to submit this patch via the sourceforge > bug-tracker, but that seems rather confusing: there are tabs "Numarray > Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" > seems to be used for Numeric. Why isn't NumPy handled via the > Sourceforge page? > NumPy development happens on the SVN servers at scipy.org and bug-tracking is handled through the Trac system at http://projects.scipy.org/scipy/numpy We only use sourceforge for distribution. I need more description on why the logic is not right.
-Travis From oliphant.travis at ieee.org Fri Aug 18 14:38:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:38:17 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181126.12599.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> Message-ID: <44E60919.1000606@ieee.org> Sebastian Haase wrote: > Hi, > array dtype descriptors have an attribute itemsize that gives the total > number of bytes required for an item of that dtype. > > Scalar types, like numy.int32, also have that attribute, > but it returns "something else" - don't know what: > > > Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. > The scalar types are actual Python types (classes) whereas the dtype objects are instances. The attributes you are seeing of the typeobject are very useful when you have an instance of that type. With numpy.int32.itemsize you are doing the equivalent of numpy.dtype.itemsize -Travis From kortmann at ideaworks.com Fri Aug 18 15:32:17 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Fri, 18 Aug 2006 12:32:17 -0700 (PDT) Subject: [Numpy-discussion] 1.02b Message-ID: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> I realize it was just released, but is there going to be a windows release for 1.02b? 
From haase at msg.ucsf.edu Fri Aug 18 16:16:15 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 13:16:15 -0700 Subject: [Numpy-discussion] =?iso-8859-1?q?attributes_of_scalar_types_-_e?= =?iso-8859-1?q?=2Eg=2E=09numpy=2Eint32=2Eitemsize?= In-Reply-To: <44E60919.1000606@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <44E60919.1000606@ieee.org> Message-ID: <200608181316.15166.haase@msg.ucsf.edu> On Friday 18 August 2006 11:38, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > array dtype descriptors have an attribute itemsize that gives the total > > number of bytes required for an item of that dtype. > > > > Scalar types, like numy.int32, also have that attribute, > > but it returns "something else" - don't know what: > > > > > > Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. > > The scalar types are actual Python types (classes) whereas the dtype > objects are instances. > > The attributes you are seeing of the typeobject are very useful when you > have an instance of that type. > > With numpy.int32.itemsize you are doing the equivalent of > numpy.dtype.itemsize but why then do I not get the result 4 ? -Sebastian From charlesr.harris at gmail.com Fri Aug 18 17:03:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 Aug 2006 15:03:35 -0600 Subject: [Numpy-discussion] convertcode.py In-Reply-To: <44E60511.40507@ieee.org> References: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> <44E60511.40507@ieee.org> Message-ID: Hi Travis, > The description of what these codes do is in the latest version of the > second chapter of my book (which is part of the preview chapters that > are available on the web). Speaking of which, is it possible for us early buyers to get updated copies? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Fri Aug 18 17:09:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 14:09:07 -0700 Subject: [Numpy-discussion] 1.02b In-Reply-To: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> References: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> Message-ID: <44E62C73.6070304@ieee.org> kortmann at ideaworks.com wrote: > I realize it was just released, but is there going to be a windows release > for 1.02b? > > There will either be one of 1.0b3 or one of 1.0b2 released for windows by Monday. -Travis From davidgrant at gmail.com Fri Aug 18 17:40:37 2006 From: davidgrant at gmail.com (David Grant) Date: Fri, 18 Aug 2006 14:40:37 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 In-Reply-To: <44E605B1.2060705@ieee.org> References: <44E605B1.2060705@ieee.org> Message-ID: On 8/18/06, Travis Oliphant wrote: > David Grant wrote: > > I'm contemplating upgrading to 1.0b2. The main reason is that I am > > experiencing a major memory leak and before I report a bug I think the > > developers would appeciate if I was using the most recent version. Am > > I correct in that the only major change that might actually break my > > code is that the following functions: > > > > take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, > > amin, prod, cumprod, mean, std, var > > > > now have axis=None as argument? > Also the default return type is "float" instead of "int". I've > highlighted the changes I think might break 0.9.8 code with the NOTE > annotation on the page of release notes. > > > > BTW, how come alter_code2.py ( > > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) > > says in the docstring that it "converts functions that don't give > > axis= keyword that have changed" but I don't see it actually doing > > that anywhere in the code? > Because it isn't done. The comments are a "this is what it should do".
> If you notice there is a warning on import (probably should be an error).

Oh ok, so maybe a FIXME then... oh well, it's all a question of personal style, as long as you know what they mean. :-) I see the warning now...good idea. I see the "Important changes are denoted with a NOTE:" in the release notes now. Finally realizing that I had a scipy wiki account, I added some more emphasis here for others. Thanks, David

-- David Grant http://www.davidgrant.ca

From Fernando.Perez at colorado.edu Fri Aug 18 17:54:13 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 18 Aug 2006 15:54:13 -0600 Subject: [Numpy-discussion] [Fwd: Re: Signal handling] Message-ID: <44E63705.3020804@colorado.edu> Hi all, here is the SAGE signal handling code, graciously donated by William Stein. I'd suggest putting (with any modifications to adapt it to numpy conventions) this into the actual numpy headers, so that not only all of our auto-generation tools (f2py, weave) can use it, but so that it also becomes trivial for end-users to use the same macros in their own code without doing anything additional. Regards, f

-------- Original Message -------- Subject: Re: Signal handling Date: Fri, 18 Aug 2006 21:15:38 +0000 From: William Stein To: Fernando Perez References: <44E586D3.7010209 at colorado.edu> Here you are (see attached). Let me know if you have any trouble with gmail mangling the attachment. On 8/18/06, Fernando Perez wrote:
> Hi William,
>
> could you please send me 'officially' an email with the interrupt.{c,h} files
> and a notice of them being BSD licensed ? With that, I can then forward them
> to the numpy list and work on their inclusion tomorrow.

-- William Stein Associate Professor of Mathematics University of Washington
-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: interrupt.c URL:
-------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: interrupt.h URL: From fperez.net at gmail.com Fri Aug 18 17:58:34 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 18 Aug 2006 15:58:34 -0600 Subject: [Numpy-discussion] [Fwd: Re: Signal handling] In-Reply-To: <44E63705.3020804@colorado.edu> References: <44E63705.3020804@colorado.edu> Message-ID: On 8/18/06, Fernando Perez wrote: > here is the SAGE signal handling code, graciously donated by William Stein. Hit send too soon... I forgot to thank William for this code :) hopefully one of many things we'll be sharing between numpy/scipy and SAGE. Cheers, f From oliphant.travis at ieee.org Fri Aug 18 18:25:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 15:25:47 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181316.15166.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> <44E60919.1000606@ieee.org> <200608181316.15166.haase@msg.ucsf.edu> Message-ID: <44E63E6B.2090503@ieee.org> Sebastian Haase wrote: > On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> Hi, >>> array dtype descriptors have an attribute itemsize that gives the total >>> number of bytes required for an item of that dtype. >>> >>> Scalar types, like numy.int32, also have that attribute, >>> but it returns "something else" - don't know what: >>> >>> >>> Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. >>> >> The scalar types are actual Python types (classes) whereas the dtype >> objects are instances. >> >> The attributes you are seeing of the typeobject are very useful when you >> have an instance of that type. >> >> With numpy.int32.itemsize you are doing the equivalent of >> numpy.dtype.itemsize >> > > but why then do I not get the result 4 ? > Because it's not a "class" attribute, it's an instance attribute. What does numpy.dtype.itemsize give you? 
-Travis From haase at msg.ucsf.edu Fri Aug 18 18:57:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 15:57:22 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <44E63E6B.2090503@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181316.15166.haase@msg.ucsf.edu> <44E63E6B.2090503@ieee.org> Message-ID: <200608181557.22912.haase@msg.ucsf.edu> On Friday 18 August 2006 15:25, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >> Sebastian Haase wrote: > >>> Hi, > >>> array dtype descriptors have an attribute itemsize that gives the > >>> total number of bytes required for an item of that dtype. > >>> > >>> Scalar types, like numy.int32, also have that attribute, > >>> but it returns "something else" - don't know what: > >>> > >>> > >>> Furthermore there are *lot's* of more attributes to a scalar dtype, > >>> e.g. > >> > >> The scalar types are actual Python types (classes) whereas the dtype > >> objects are instances. > >> > >> The attributes you are seeing of the typeobject are very useful when you > >> have an instance of that type. > >> > >> With numpy.int32.itemsize you are doing the equivalent of > >> numpy.dtype.itemsize > > > > but why then do I not get the result 4 ? > > Because it's not a "class" attribute, it's an instance attribute. > > What does numpy.dtype.itemsize give you? > I'm really sorry for being so dumb - but HOW can I get then the number of bytes needed by a given scalar type ? -S. From joris at ster.kuleuven.ac.be Fri Aug 18 18:07:17 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat, 19 Aug 2006 00:07:17 +0200 Subject: [Numpy-discussion] numpy installation Message-ID: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> Hi, I am correctly assuming that numpy needs the full lapack distribution, and not just the few lapack routines given by atlas? 
After installing numpy I still get the warning ImportError: /software/python-2.4.1/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: s_wsfe which seems to indicate that numpy is trying to use its lapack_lite version instead of the full lapack distribution. Defining [lapack] library_dirs = /software/lapack3.0/ lapack_libs = combinedlapack in my site.cfg does not help. It also always gives a warning that my lapack lib in my atlas directory is incomplete despite the fact that I specified the full lapack library. The complaint of incompleteness disappears when I overwrite the liblapack.a of atlas with the one of the full lapack distribution, but then I still have the ImportError when I try to import numpy in my python shell. Any pointers? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From luszczek at cs.utk.edu Fri Aug 18 19:48:43 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Fri, 18 Aug 2006 19:48:43 -0400 Subject: [Numpy-discussion] numpy installation In-Reply-To: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> Message-ID: <200608181948.43282.luszczek@cs.utk.edu> s_wsfe is not LAPACK's routine it's a routine from the g2c library. You have to link it in in addition to lapack_lite. Piotr On Friday 18 August 2006 18:07, joris at ster.kuleuven.ac.be wrote: > Hi, > > I am correctly assuming that numpy needs the full lapack > distribution, and not just the few lapack routines given by atlas? > After installing numpy I still get the warning > > ImportError: > /software/python-2.4.1/lib/python2.4/site-packages/numpy/linalg/lapac >k_lite.so: undefined symbol: s_wsfe > > which seems to indicate that numpy is trying to use its lapack_lite > version instead of the full lapack distribution. Defining > > [lapack] > library_dirs = /software/lapack3.0/ > lapack_libs = combinedlapack > > in my site.cfg does not help. 
It also always gives a warning that my > lapack lib in my atlas directory is incomplete despite the fact that > I specified the full lapack library. The complaint of incompleteness > disappears when I overwrite the liblapack.a of atlas with the one of > the full lapack distribution, but then I still have the ImportError > when I try to import numpy in my python shell. > > Any pointers? > > Cheers, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > --------------------------------------------------------------------- >---- Using Tomcat but need to do more? Need to support web services, > security? Get stuff done quickly with pre-integrated technology to > make your job easier Download IBM WebSphere Application Server > v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121 >642 _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Fri Aug 18 19:51:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 16:51:35 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181557.22912.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181316.15166.haase@msg.ucsf.edu> <44E63E6B.2090503@ieee.org> <200608181557.22912.haase@msg.ucsf.edu> Message-ID: <44E65287.4020508@ieee.org> Sebastian Haase wrote: > On Friday 18 August 2006 15:25, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> On Friday 18 August 2006 11:38, Travis Oliphant wrote: >>> >>>> Sebastian Haase wrote: >>>> >>>>> Hi, >>>>> array dtype descriptors have an attribute itemsize that gives the >>>>> total number of bytes required for an item of that dtype. 
>>>>> >>>>> Scalar types, like numy.int32, also have that attribute, >>>>> but it returns "something else" - don't know what: >>>>> >>>>> >>>>> Furthermore there are *lot's* of more attributes to a scalar dtype, >>>>> e.g. >>>>> >>>> The scalar types are actual Python types (classes) whereas the dtype >>>> objects are instances. >>>> >>>> The attributes you are seeing of the typeobject are very useful when you >>>> have an instance of that type. >>>> >>>> With numpy.int32.itemsize you are doing the equivalent of >>>> numpy.dtype.itemsize >>>> >>> but why then do I not get the result 4 ? >>> >> Because it's not a "class" attribute, it's an instance attribute. >> >> What does numpy.dtype.itemsize give you? >> >> > I'm really sorry for being so dumb - but HOW can I get then the number of > bytes needed by a given scalar type ? > > Ah, the real question. Sorry for not catching it earlier. I've been in "make sure this isn't a bug mode" for a long time. If you have a scalar type you could create one and then check the itemsize: int32(0).itemsize Or you could look at the name and parse out how big it is. There is also a stored dictionary-like object that returns the number of bytes for any data-type recognized: numpy.nbytes[int32] -Travis From fperez.net at gmail.com Fri Aug 18 19:52:57 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 18 Aug 2006 17:52:57 -0600 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID: > This leak is caused by add_docstring, but it's supposed to leak. I wonder if > there's a way to register some kind of on-exit handler in Python so that > this can also be cleaned up? import atexit atexit.register(your_cleanup_function) whose api is no args on input: def your_cleanup_function(): do_whatever... 
You could use here a little extension function which goes in and does the necessary free() calls on a pre-stored list of allocated pointers, if there's more than one (I don't really know what's going on here). Cheers, f

From stefan at sun.ac.za Fri Aug 18 20:00:57 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 19 Aug 2006 02:00:57 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <20060818144503.GW10593@mentat.za.net> References: <44E5C26F.6020609@gmx.de> <20060818144503.GW10593@mentat.za.net> Message-ID: <20060819000057.GZ10593@mentat.za.net> On Fri, Aug 18, 2006 at 04:45:03PM +0200, Stefan van der Walt wrote:
> Hi Norbert
>
> On Fri, Aug 18, 2006 at 03:36:47PM +0200, Norbert Nemec wrote:
> > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch
> > fixes the issue.
>
> Please file a ticket so that we don't lose track of this.

Urgh, please excuse me. It seems that I have lost the ability to read more than one paragraph.

Stéfan

From haase at msg.ucsf.edu Fri Aug 18 20:05:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 17:05:21 -0700 Subject: [Numpy-discussion] =?iso-8859-1?q?attributes_of_scalar_types_-_e?= =?iso-8859-1?q?=2Eg=2E=09numpy=2Eint32=2Eitemsize?= In-Reply-To: <44E65287.4020508@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181557.22912.haase@msg.ucsf.edu> <44E65287.4020508@ieee.org> Message-ID: <200608181705.21240.haase@msg.ucsf.edu> On Friday 18 August 2006 16:51, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 18 August 2006 15:25, Travis Oliphant wrote: > >> Sebastian Haase wrote: > >>> On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >>>> Sebastian Haase wrote: > >>>>> Hi, > >>>>> array dtype descriptors have an attribute itemsize that gives the > >>>>> total number of bytes required for an item of that dtype.
> >>>>> > >>>>> Scalar types, like numy.int32, also have that attribute, > >>>>> but it returns "something else" - don't know what: > >>>>> > >>>>> > >>>>> Furthermore there are *lot's* of more attributes to a scalar dtype, > >>>>> e.g. > >>>> > >>>> The scalar types are actual Python types (classes) whereas the dtype > >>>> objects are instances. > >>>> > >>>> The attributes you are seeing of the typeobject are very useful when > >>>> you have an instance of that type. > >>>> > >>>> With numpy.int32.itemsize you are doing the equivalent of > >>>> numpy.dtype.itemsize > >>> > >>> but why then do I not get the result 4 ? > >> > >> Because it's not a "class" attribute, it's an instance attribute. > >> > >> What does numpy.dtype.itemsize give you? > > > > I'm really sorry for being so dumb - but HOW can I get then the number of > > bytes needed by a given scalar type ? > > Ah, the real question. Sorry for not catching it earlier. I've been in > "make sure this isn't a bug mode" for a long time. > > If you have a scalar type you could create one and then check the itemsize: > > int32(0).itemsize > > Or you could look at the name and parse out how big it is. > > There is also a stored dictionary-like object that returns the number of > bytes for any data-type recognized: > > numpy.nbytes[int32] Thanks, that seems to be a handy "dictionary-like object" Just for the record - in the meantime I found this: >>> N.dtype(N.int32).itemsize 4 Cheers, Sebastian From joris at ster.kuleuven.be Fri Aug 18 20:16:52 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Sat, 19 Aug 2006 02:16:52 +0200 Subject: [Numpy-discussion] numpy installation In-Reply-To: <200608181948.43282.luszczek@cs.utk.edu> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> <200608181948.43282.luszczek@cs.utk.edu> Message-ID: <200608190216.52391.joris@ster.kuleuven.be> Hi, [PL]: s_wsfe is not LAPACK's routine it's a routine from the g2c library. 
[PL]: You have to link it in in addition to lapack_lite.

Thanks for the pointer. Sorry about my ignorance about these things. But is lapack_lite linked to numpy even if you specify the full lapack library? After some googling I learned that g2c is a lib which takes care that you can link fortran and C libraries (again my ignorance...). It's still not obvious for me, though, where/how I can make the install program do this linking. I have a /usr/lib/libg2c.a, so I am surprised it doesn't find it right away... Anybody experienced something similar, or other pointers?

Ciao, Joris

From charlesr.harris at gmail.com Fri Aug 18 21:35:43 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 Aug 2006 19:35:43 -0600 Subject: [Numpy-discussion] Whitespace Message-ID: Hi All, I've noticed a lot of trailing whitespace while browsing through the numpy subversion repository. So here is a perl script I pinched from the linux-kernel mailing list that does a good job of removing it. Chuck
-------------- next part -------------- A non-text attachment was scrubbed... Name: cleanfile Type: application/octet-stream Size: 1122 bytes Desc: not available URL:

From joris at ster.kuleuven.be Sat Aug 19 18:55:32 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Sun, 20 Aug 2006 00:55:32 +0200 Subject: [Numpy-discussion] speed regression Message-ID: <200608200055.32320.joris@ster.kuleuven.be> Hi,

Some of my code is heavily using large complex arrays, and I noticed a speed regression in NumPy 1.0b2 with respect to Numarray. The following code snippet is an example that on my computer runs 10% faster in Numarray than in NumPy.

>>> A = zeros(1000000, complex)
>>> for k in range(1000):
...     A *= zeros(1000000, complex)

(I replaced 'complex' with 'Complex' in Numarray). Can anyone confirm this?
Ciao, Joris

From charlesr.harris at gmail.com Sat Aug 19 20:00:22 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 19 Aug 2006 18:00:22 -0600 Subject: [Numpy-discussion] speed regression In-Reply-To: <200608200055.32320.joris@ster.kuleuven.be> References: <200608200055.32320.joris@ster.kuleuven.be> Message-ID: Yes,

On 8/19/06, Joris De Ridder wrote:
> Hi,
>
> Some of my code is heavily using large complex arrays, and I noticed a speed
> regression in NumPy 1.0b2 with respect to Numarray. The following code snippet
> is an example that on my computer runs 10% faster in Numarray than in NumPy.
>
> >>> A = zeros(1000000, complex)
> >>> for k in range(1000):
> ...     A *= zeros(1000000, complex)
>
> (I replaced 'complex' with 'Complex' in Numarray). Can anyone confirm this?

I see this too.

In [242]: t1 = timeit.Timer('a *= nx.zeros(1000000,"D")','import numarray as nx; a = nx.zeros(1000000,"D")')
In [243]: t2 = timeit.Timer('a *= nx.zeros(1000000,"D")','import numpy as nx; a = nx.zeros(1000000,"D")')
In [244]: t1.repeat(3,100)
Out[244]: [5.184194803237915, 5.1135070323944092, 5.1053409576416016]
In [245]: t2.repeat(3,100)
Out[245]: [5.5170519351959229, 5.4989008903503418, 5.479154109954834]

Chuck
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From tgrav at mac.com Sat Aug 19 20:54:12 2006 From: tgrav at mac.com (Tommy Grav) Date: Sat, 19 Aug 2006 20:54:12 -0400 Subject: [Numpy-discussion] 1.02b problems In-Reply-To: <44E62C73.6070304@ieee.org> References: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> <44E62C73.6070304@ieee.org> Message-ID: <58D304A5-7274-479C-AE89-E975B59F4B50@mac.com> I am trying to install numpy on my Apple Powerbook G4 running OS X Tiger (10.4.7). I am running ActivePython 2.4.3. Installing the numPy package seems to work fine but when I try to import it I get the following: /Users/tgrav --> python ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:18) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/__init__.py", line 35, in ? import core File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/core/__init__.py", line 10, in ? from numeric import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/core/numeric.py", line 33, in ? CLIP = multiarray.CLIP AttributeError: 'module' object has no attribute 'CLIP' >>> How can I remedy this problem? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genious -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simon at arrowtheory.com Sun Aug 20 14:32:24 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun, 20 Aug 2006 19:32:24 +0100 Subject: [Numpy-discussion] Patch against Image.py in the PIL In-Reply-To: <44B57AEF.3080300@ieee.org> References: <44B57AEF.3080300@ieee.org> Message-ID: <20060820193224.303481aa.simon@arrowtheory.com> On Wed, 12 Jul 2006 16:42:55 -0600 Travis Oliphant wrote: > > Attached is a patch that makes PIL Image objects both export and consume > the array interface. Cool ! I found that upon converting to/from a numpy array the image is upside-down. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From Norbert.Nemec.list at gmx.de Sun Aug 20 06:51:52 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Sun, 20 Aug 2006 12:51:52 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E60833.2060100@ieee.org> References: <44E5C26F.6020609@gmx.de> <44E60833.2060100@ieee.org> Message-ID: <44E83EC8.3020501@gmx.de> Travis Oliphant wrote: > Norbert Nemec wrote: > >> Hi there, >> >> in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch >> fixes the issue. >> >> Greetings, >> Norbert >> >> PS: I would have preferred to submit this patch via the sourceforge >> bug-tracker, but that seems rather confusing: there are tabs "Numarray >> Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" >> seems to be used for Numeric. Why isn't NumPy handled via the >> Sourceforge page? >> >> > NumPy development happens on the SVN servers at scipy.org and > bug-tracking is handled through the Trac system at > > http://projects.scipy.org/scipy/numpy > > We only use sourceforge for distribution. > OK, sorry. I found this myself in the meantime. I even remember that I stumbled over this some time ago already. 
Problem is: I'm submitting bug-reports, fixes and small patches to so many different projects, that I start mixing up the details of the individual procedures. Furthermore: the TRAC tickets do not seem to allow attachment of patches. Did I miss something there?

> I need more description on why the logic is not right.

The original code reads:
-----------------------
[...snip...]
    if nosmp:
        moredefs = [('NPY_ALLOW_THREADS', '0')]
    else:
        moredefs = []
[...snip...]
    if moredefs:
        target_f = open(target,'a')
        for d in moredefs:
            if isinstance(d,str):
                target_f.write('#define %s\n' % (d))
            else:
                target_f.write('#define %s %s\n' % (d[0],d[1]))
        if not nosmp:
            # default is to use WITH_THREAD
            target_f.write('#ifdef WITH_THREAD\n#define NPY_ALLOW_THREADS 1\n#else\n#define NPY_ALLOW_THREADS 0\n#endif\n')
        target_f.close()
[...snip...]
----------------
That is: if not nosmp, then moredefs may be empty, in which case NPY_ALLOW_THREADS is not defined at all. My patch ensures that NPY_ALLOW_THREADS is defined in any case, either by putting it in moredefs, or by adding the special conditional define. The conditional "if moredefs" is not needed at all: the file needs to be opened in any case, to define NPY_ALLOW_THREADS one way or other.

Greetings, Norbert

From fullung at gmail.com Sun Aug 20 08:53:52 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 20 Aug 2006 14:53:52 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E83EC8.3020501@gmx.de> Message-ID: Hello all

> > Furthermore: the TRAC tickets do not seem to allow attachment of
> > patches. Did I miss something there?

After submitting the initial report, you can attach files to the ticket.
Regards, Albert

From drswalton at gmail.com Sun Aug 20 18:32:29 2006 From: drswalton at gmail.com (Stephen Walton) Date: Sun, 20 Aug 2006 15:32:29 -0700 Subject: [Numpy-discussion] numpy installation In-Reply-To: <200608190216.52391.joris@ster.kuleuven.be> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> <200608181948.43282.luszczek@cs.utk.edu> <200608190216.52391.joris@ster.kuleuven.be> Message-ID: <693733870608201532n49f840c4jdf7fa7ca3efd2623@mail.gmail.com> On 8/18/06, Joris De Ridder wrote:
> Sorry about my ignorance about these things. But is lapack_lite linked
> to numpy even if you specify the full lapack library?

As I understand it, lapack_lite is built and used by numpy as a shared library with a subset of the LAPACK routines.

> After some googling I learned that g2c is a lib which takes care that you
> can link fortran and C libraries (again my ignorance...).

Which platform are you on? If you do

    python setup.py build >& spool
    grep lapack spool

what output do you get?
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From wbaxter at gmail.com Mon Aug 21 03:45:59 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 21 Aug 2006 16:45:59 +0900 Subject: [Numpy-discussion] linspace upper bound not met? Message-ID: I was porting some code over from matlab in which I relied on the upper bound of linspace to be met exactly. It turns out that it isn't always exactly met in numpy.

In [390]: filter(lambda x: x[1]!=0.0, [ (i,1.0-numpy.linspace(0,1,i)[-1]) for i in range(2,200) ])
Out[390]: [(50, 1.1102230246251565e-016), (99, 1.1102230246251565e-016), (104, 1.1102230246251565e-016), (108, 1.1102230246251565e-016), (162, 1.1102230246251565e-016), (188, 1.1102230246251565e-016), (197, 1.1102230246251565e-016), (198, 1.1102230246251565e-016)]

I know it's not a good idea to count on floating point equality in general, but still it doesn't seem too much to expect that the first and last values returned by linspace are exactly the values asked for if they both have exact floating point representations. --bb
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From aisaac at american.edu Mon Aug 21 11:31:14 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Aug 2006 11:31:14 -0400 Subject: [Numpy-discussion] linspace upper bound not met?
In-Reply-To: References: Message-ID: The definition of linspace is:

    def linspace(start, stop, num=50, endpoint=True, retstep=False):
        """Return evenly spaced numbers.

        Return 'num' evenly spaced samples from 'start' to 'stop'. If
        'endpoint' is True, the last sample is 'stop'. If 'retstep' is
        True then return the step value used.
        """
        num = int(num)
        if num <= 0:
            return array([], float)
        if endpoint:
            if num == 1:
                return array([float(start)])
            step = (stop-start)/float((num-1))
        else:
            step = (stop-start)/float(num)
        y = _nx.arange(0, num) * step + start
        if retstep:
            return y, step
        else:
            return y

The simplest way to achieve this goal is to add right after the assignment to y two new lines:

    if endpoint:
        y[-1] = float(stop)

Cheers, Alan Isaac

PS I'll take this opportunity to state again my opinion that in the degenerate case num=1 that if endpoint=True then linspace should return stop rather than start. (Otherwise endpoint is ignored. But I do not expect anyone to agree.)

From wbaxter at gmail.com Mon Aug 21 12:27:20 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 Aug 2006 01:27:20 +0900 Subject: [Numpy-discussion] linspace upper bound not met? In-Reply-To: References: Message-ID: Out of curiosity I checked on what matlab does. It does explicitly set the last value to 'stop' to avoid the roundoff issue. In numpy terms, it does something like

    y = r_[start+r_[0:num-1]*(stop-start)/(num-1.0), stop]

But for numpy it's probably more efficient to just do the 'y[-1] = stop' like you say. --bb

On 8/22/06, Alan G Isaac wrote:
>
> The definition of linspace is:
> def linspace(start, stop, num=50, endpoint=True, retstep=False):
>     """Return evenly spaced numbers.
>
>     Return 'num' evenly spaced samples from 'start' to 'stop'. If
>     'endpoint' is True, the last sample is 'stop'. If 'retstep' is
>     True then return the step value used.
> """ > num = int(num) > if num <= 0: > return array([], float) > if endpoint: > if num == 1: > return array([float(start)]) > step = (stop-start)/float((num-1)) > else: > step = (stop-start)/float(num) > y = _nx.arange(0, num) * step + start > if retstep: > return y, step > else: > return y > > The simplest way to achieve this goal is to add right after > the assignment to y two new lines: > if endpoint: > y[-1] = float(stop) > > Cheers, > Alan Isaac > > PS I'll take this opportunity to state again my opinion that > in the denerate case num=1 that if endpoint=True then > linspace should return stop rather than start. (Otherwise > endpoint is ignored. But I do not expect anyone to agree.) > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Mon Aug 21 14:55:15 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 21 Aug 2006 11:55:15 -0700 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple Message-ID: I was a bit surprised today to find that numpy.random.rand doesn't take in a tuple as input for the dimensions of the desired array. I am very used to using a tuple for zeros, ones. Also, wouldn't this mean that it would not be possible to add other non-keyword arguments to rand later? 
-- David Grant http://www.davidgrant.ca From oliphant at ee.byu.edu Mon Aug 21 15:02:05 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Aug 2006 13:02:05 -0600 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: <44EA032D.4040309@ee.byu.edu> David Grant wrote: >I was a bit surprised today to find that numpy.random.rand doesn't >take in a tuple as input for the dimensions of the desired array. I am >very used to using a tuple for zeros, ones. Also, wouldn't this mean >that it would not be possible to add other non-keyword arguments to >rand later? > > > numpy.random.rand?? Return an array of the given dimensions which is initialized to random numbers from a uniform distribution in the range [0,1). rand(d0, d1, ..., dn) -> random values Note: This is a convenience function. If you want an interface that takes a tuple as the first argument use numpy.random.random_sample(shape_tuple). From aisaac at american.edu Mon Aug 21 15:14:20 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Aug 2006 15:14:20 -0400 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: On Mon, 21 Aug 2006, David Grant apparently wrote: > I was a bit surprised today to find that numpy.random.rand > doesn't take in a tuple as input for the dimensions of the > desired array. I am very used to using a tuple for zeros, > ones. Also, wouldn't this mean that it would not be > possible to add other non-keyword arguments to rand later? You will find a long discussion of this in the archives. Cheers, Alan Isaac PS Thank you for improving the average predictive accuracy of economists. (You'll understand when you read the thread.) 
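[Editor's note: Travis's point above (rand takes its dimensions as separate positional arguments, while random_sample takes a single shape tuple like zeros and ones) can be illustrated with a short sketch. Both names are real numpy.random functions; nothing is assumed here beyond an installed NumPy.]

```python
import numpy as np

# rand() takes the dimensions as separate positional arguments...
a = np.random.rand(2, 3)

# ...while random_sample() takes one shape tuple, matching the
# calling convention of zeros() and ones().
b = np.random.random_sample((2, 3))

assert a.shape == b.shape == (2, 3)
# Both draw from the uniform distribution on [0, 1).
assert ((a >= 0.0) & (a < 1.0)).all()
```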
From robert.kern at gmail.com Mon Aug 21 15:07:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Aug 2006 14:07:55 -0500 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: David Grant wrote: > I was a bit surprised today to find that numpy.random.rand doesn't > take in a tuple as input for the dimensions of the desired array. I am > very used to using a tuple for zeros, ones. Also, wouldn't this mean > that it would not be possible to add other non-keyword arguments to > rand later? Don't use rand(), then. Use random(). rand()'s sole purpose in life is to *not* take a tuple. If you like, you can read the archives on the several (long) discussions on this and why things are the way they are now. We finally achieved something resembling consensus, so please let's not resurrect this argument. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mithrandir42 at web.de Mon Aug 21 16:07:12 2006 From: mithrandir42 at web.de (N. Volbers) Date: Mon, 21 Aug 2006 22:07:12 +0200 Subject: [Numpy-discussion] error message when using insufficient dtype dict Message-ID: <44EA1270.6060805@web.de> Hello everyone, I had quite some trouble figuring out the _correct_ way to create heterogeneous arrays. What I wanted to do was something like the following: >>> numpy.array( [(0,0,0)], dtype={'names':['a','b','c'], 'formats':['f4','f4','f4']}) This works fine. Now, let's do something wrong, e.g. leave out the 'formats' specifier: >>> numpy.array( [(0,0,0)], dtype={'names':['a','b','c']}) Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.4/site-packages/numpy/core/_internal.py", line 53, in _usefields names, formats, offsets, titles = _makenames_list(adict) File "/usr/lib/python2.4/site-packages/numpy/core/_internal.py", line 21, in _makenames_list raise ValueError, "entry not a 2- or 3- tuple" ValueError: entry not a 2- or 3- tuple This error message was totally unclear to me. After reading a little on the scipy wiki I finally realized that (maybe) numpy internally converts the dict with the names and the formats to a list of 2-tuples of the form (name, format). Since no formats were given, these 2-tuples were invalid. I would suggest a check for the required dict keys and some meaningful error message like: "The dtype dictionary must at least contain the 'names' and the 'formats' items." Keep up the great work, Niklas. From davidgrant at gmail.com Mon Aug 21 19:26:10 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 21 Aug 2006 16:26:10 -0700 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: On 8/21/06, Robert Kern wrote: > > David Grant wrote: > > I was a bit surprised today to find that numpy.random.rand doesn't > > take in a tuple as input for the dimensions of the desired array. I am > > very used to using a tuple for zeros, ones. Also, wouldn't this mean > > that it would not be possible to add other non-keyword arguments to > > rand later? > > Don't use rand(), then. Use random(). rand()'s sole purpose in life is to > *not* > take a tuple. If you like, you can read the archives on the several (long) > discussions on this and why things are the way they are now. We finally > achieved > something resembling consensus, so please let's not resurrect this > argument. Thanks everyone. My only question now is why there is random_sample and random. My guess is that one is there for compatibility with older releases and so I'm not bothered by it. 
-- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Aug 21 19:38:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Aug 2006 18:38:05 -0500 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: David Grant wrote: > Thanks everyone. My only question now is why there is random_sample and > random. My guess is that one is there for compatibility with older > releases and so I'm not bothered by it. Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Mon Aug 21 19:48:09 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 Aug 2006 08:48:09 +0900 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: If you like, here's a rand function that takes either a sequence or a tuple. I use this for interactive sessions. def rand(*shape): """ Return an array of the given dimensions which is initialized to random numbers from a uniform distribution in the range [0,1). rand(d0, d1, ..., dn) -> random values or rand((d0, d1, ..., dn)) -> random values """ if len(shape) == 0 or not hasattr(shape[0],'__getitem__'): return numpy.random.rand(*shape) else: if len(shape) != 1: raise TypeError('Argument should either be a tuple or an argument list') else: return numpy.random.rand(*shape[0]) On 8/22/06, David Grant wrote: > > > > On 8/21/06, Robert Kern wrote: > > > > David Grant wrote: > > > I was a bit surprised today to find that numpy.random.rand doesn't > > > take in a tuple as input for the dimensions of the desired array. I am > > > very used to using a tuple for zeros, ones. 
Also, wouldn't this mean > > > that it would not be possible to add other non-keyword arguments to > > > rand later? > > > > Don't use rand(), then. Use random(). rand()'s sole purpose in life is > > to *not* > > take a tuple. If you like, you can read the archives on the several > > (long) > > discussions on this and why things are the way they are now. We finally > > achieved > > something resembling consensus, so please let's not resurrect this > > argument. > > > > Thanks everyone. My only question now is why there is random_sample and > random. My guess is that one is there for compatibility with older releases > and so I'm not bothered by it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Mon Aug 21 21:09:43 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 21 Aug 2006 18:09:43 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array Message-ID: <200608211809.43642.haase@msg.ucsf.edu> Hi, We just spend some time debugging some numpy image analysis code where we finally noticed that our file was byte-swapped ;-). Even though we got much crazier numbers, the test below already shows one bug in the a.real.max() line. My numpy.__version__ is '1.0b3.dev3015' and this is run on pentium (little endian) Linux (both 64bit and 32bit version give same results): >>> a = N.arange(4, dtype='>c8') >>> a [ 0. +0.00000000e+00j 0. +1.00000000e+00j 0. +2.00000000e+00j 0. +3.00000000e+00j] >>> a.max() (3+0j) >>> a.real.max() 0.0 >>> a.imag.max() 4.60060298822e-41 >>> >>> a = N.arange(4, dtype='<c8') >>> a.max() (3+0j) >>> a.real.max() 3.0 >>> a.imag.max() 0.0 >>> Can someone test this on a newer SVN version ?
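For later readers, the failure mode above can be reproduced with an explicitly non-native dtype; this sketch assumes a little-endian machine and a numpy new enough to contain the eventual fix, so both views decode correctly:

```python
import numpy as np

# A complex array whose dtype is explicitly big-endian -- non-native on
# the little-endian machines discussed in the thread.
a = np.arange(4, dtype='>c8')

# On a fixed numpy, .real and .imag respect the parent dtype's byte order.
assert a.real.max() == 3.0
assert a.imag.max() == 0.0

# Converting to native byte order sidesteps the issue entirely;
# astype() converts the values correctly while changing the dtype.
native = a.astype('<c8')
assert native.real.max() == 3.0
```

When reading byte-swapped files, converting to the native byte order once at load time avoids depending on every downstream view getting the swap right.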
Thanks, Sebastian Haase From lists.steve at arachnedesign.net Mon Aug 21 22:08:29 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Mon, 21 Aug 2006 22:08:29 -0400 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> References: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: <8A9B3015-5136-430C-A48A-0BAC1EE254F8@arachnedesign.net> Hi Sebastian, > We just spend some time debugging some numpy image analysis code > where we finally noticed that our file was byte-swapped ;-). > Even though we got much crazier numbers, > the test below already shows one bug in the a.real.max() line. > My numpy.__version__ is '1.0b3.dev3015' and this is run on > pentium (little > endian) Linux (both 64bit and 32bit version give same results): I'm getting the same results you are. I just recompiled numpy to the latest svn (1.0b4.dev3050) and am running your example on intel (32 bit) Mac OS X.4.7. -steve From fullung at gmail.com Tue Aug 22 04:46:22 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 22 Aug 2006 10:46:22 +0200 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: Hello all > > > >>> a = N.arange(4, dtype='>c8') > >>> a.imag.max() > 4.60060298822e-41 Confirmed on Windows 32-bit with 1.0b4.dev3050. I created a ticket here: http://projects.scipy.org/scipy/numpy/ticket/265 Regards, Albert
From oliphant.travis at ieee.org Tue Aug 22 12:36:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 10:36:14 -0600 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> References: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: <44EB327E.1040302@ieee.org> Sebastian Haase wrote: > Hi, > We just spend some time debugging some numpy image analysis code > where we finally noticed that our file was byte-swapped ;-). > Even though we got much crazier numbers, > the test below already shows one bug in the a.real.max() line. > My numpy.__version__ is '1.0b3.dev3015' and this is run on pentium (little > endian) Linux (both 64bit and 32bit version give same results): > > I just fixed two bugs with respect to this issue which were introduced at various stages of development 1) The real and imag attribute getting functions were not respecting the byte-order of the data-type object of the array on creation of the "floating-point" equivalent data-type --- this one was introduced on the change to have byteorder part of the data-type object itself. 2) The copyswapn function for complex arrays was not performing two sets of swaps. It was performing one large swap (which had the effect of moving the real part to the imaginary part and vice-versa). These bug-fixes will be in 1.0b4 -Travis From haase at msg.ucsf.edu Tue Aug 22 12:33:53 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 09:33:53 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: References: Message-ID: <200608220933.54066.haase@msg.ucsf.edu> Hi, probably related to this is that arr[2].real is read-only ... I noticed that you cannot assign to arr[2].real : >>> a[2].real =6 Traceback (most recent call last): File "<stdin>", line 1, in ?
TypeError: attribute 'real' of 'genericscalar' objects is not writable >>> a.real[2] =6 >>> >>> a[2].real.flags CONTIGUOUS : True FORTRAN : True OWNDATA : True WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False >>> a.real[2].flags WRITEABLE : False >>> >>> a.real.flags CONTIGUOUS : False FORTRAN : False OWNDATA : False WRITEABLE : True >>> a[2].flags CONTIGUOUS : True FORTRAN : True OWNDATA : True WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False Is the "not writable" restriction necessary ? Thanks, Sebastian Haase On Tuesday 22 August 2006 01:46, Albert Strasheim wrote: > Hello all > > > > > > > >>> a = N.arange(4, dtype='>c8') > > >>> a.imag.max() > > > > 4.60060298822e-41 > > Confirmed on Windows 32-bit with 1.0b4.dev3050. > > I created a ticket here: > > http://projects.scipy.org/scipy/numpy/ticket/265 > > Regards, > > Albert > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Tue Aug 22 15:15:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:15:36 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608220933.54066.haase@msg.ucsf.edu> References: <200608220933.54066.haase@msg.ucsf.edu> Message-ID: <44EB57D8.5000200@ieee.org> Sebastian Haase wrote: > Hi, > probably related to this is that > arr[2].real is read-only ... > > I noticed that you cannot assign > to arr[2].real : > No, that's unrelated. 
The problem is that arr[2] is a scalar and so it is immutable. When an array scalar is created you get a *copy* of the data. Setting it would not have the effect you imagine as the original data would go unchanged. The only exception to this is the array of type "void" which *does not* copy the data. -Travis From haase at msg.ucsf.edu Tue Aug 22 15:11:03 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 12:11:03 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux Message-ID: <200608221211.03343.haase@msg.ucsf.edu> Hi, I just ran into more problems with my SWIG typemaps. In the C api the current enum for NPY_INT is 5 NPY_LONG is 7 to match overloaded function I need to check these type values. On 64bit all works fine: my 32bit int function matches NPY_INT - which is "int" in C/C++ my 64bit int function matches NPY_LONG - which is "long" in C/C++ but on 32bit Linux the 32bit int function matches NPY_LONG there is no NPY_INT on 32bit that is: if I have a non overloaded C/C++ function that expects a C "int" - i.e. a 32bit int - I have write different function matching rules !!! REQUEST: Can a 32bit int array get the typenumber code NPY_INT on 32bit Linux !? Then it would work for both 32bit Linux and 64bit Linux the same ! (I don't know about 64bit windows - I have heard that both C int and C long are 64bit - so that is screwed in any case .... ) Thanks, Sebastian Haase From oliphant.travis at ieee.org Tue Aug 22 15:30:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:30:54 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221211.03343.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> Message-ID: <44EB5B6E.5020908@ieee.org> Sebastian Haase wrote: > Hi, > I just ran into more problems with my SWIG > typemaps. 
> In the C api the current enum for > NPY_INT is 5 > NPY_LONG is 7 > > to match overloaded function I need to check these type values. > > On 64bit all works fine: > my 32bit int function matches NPY_INT - which is "int" in C/C++ > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > > but on 32bit Linux > the 32bit int function matches NPY_LONG > there is no NPY_INT on 32bit > Yes there is. Both NPY_INT and NPY_LONG are always there. One matches the int and one matches the long. Perhaps you are confused about what the special defines NPY_INT32 match to? The behavior is that the 'long' type gets "first-dibs" then the 'longlong' type gets a crack. Finally, the 'int' type is chosen. The first one that matches the bit-type is used. > that is: if I have a non overloaded C/C++ function that expects a C "int" > - i.e. a 32bit int - I have write different function matching rules !!! > What you need to do is stop trying to match bit-widths and instead match c-types. That's why NPY_INT and NPY_LONG are both there. Let me know if you have further questions. I don't really understand what the issue is. -Travis From boyle5 at llnl.gov Tue Aug 22 15:38:00 2006 From: boyle5 at llnl.gov (James Boyle) Date: Tue, 22 Aug 2006 12:38:00 -0700 Subject: [Numpy-discussion] numpy/Numeric co-existence Message-ID: <25676ec40523d25046073f1c37ea49e3@llnl.gov> I have some codes which require a Numeric array and others which require a numpy array. I have no control over either code, and not the time to convert all to numpy if I did. The problem is this - say I have a routine that returns a numpy array as a result and I wish to do something to this array using a code that uses Numeric. Just passing the numpy array to the numeric code does not work. In my case the Numeric code thinks that the numpy float is a long int, this is not good. So what does one do in the interim? There are some legacy codes which will never be converted to numpy. 
I have seen discussion as to how to convert Numeric -> numpy, but not how the two can play together. I can appreciate the strong desire to eliminate having two systems, but the practical aspects of getting things done must also be considered. I am using numpy 1.0b1 and Numeric 23.7 . Thanks for any enlightenment - perhaps I am missing something obvious. --Jim From oliphant.travis at ieee.org Tue Aug 22 15:39:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:39:37 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221211.03343.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> Message-ID: <44EB5D79.90806@ieee.org> Sebastian Haase wrote: > Hi, > I just ran into more problems with my SWIG > typemaps. > In the C api the current enum for > NPY_INT is 5 > NPY_LONG is 7 > > to match overloaded function I need to check these type values. > > On 64bit all works fine: > my 32bit int function matches NPY_INT - which is "int" in C/C++ > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > As you noted below, this is not always the case. You can't assume that 64-bit means "long" Let me assume that you are trying to write functions for each of the "data-types". You can proceed in a couple of ways: 1) Use the basic c-types 2) Use "bit-width" types (npy_int32, npy_int64, etc...) The advantage of the former is that it avoids any confusion in terms of what kind of c-type it matches. This is really only important if you are trying to interface with external code that uses basic c-types. The advantage of the latter is that you don't have to write a redundant routine (i.e. on 32-bit linux the int and long routines should be identical machine code), but you will have to be careful in matching to a c-type should you need to call some external routine. 
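The same two naming schemes are visible from Python: numpy exposes both c-type-based scalar names and bit-width aliases, with the aliases bound to whichever c-type wins on the current platform (a small illustrative sketch, not from the thread):

```python
import numpy as np

# Bit-width aliases pin the itemsize regardless of platform.
assert np.dtype(np.int32).itemsize == 4
assert np.dtype(np.int64).itemsize == 8

# c-type-based names track the compiler's types: intc is C "int",
# longlong is C "long long".  On common platforms each coincides with
# a bit-width alias, which is why per-c-type routines can compile to
# identical machine code, as noted above.
assert np.dtype(np.intc) == np.dtype('i')
assert np.dtype(np.longlong).itemsize == 8
```

Code that must match an external C signature should use the c-type names; code that cares only about storage width should use the bit-width aliases.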
The current system gives you as many choices as possible (you can either match external code using the c-types) or you can write to a particular bit-width. This is accomplished through comprehensive checks defined in the arrayobject.h file. -Travis From robert.kern at gmail.com Tue Aug 22 15:49:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 Aug 2006 14:49:17 -0500 Subject: [Numpy-discussion] numpy/Numeric co-existence In-Reply-To: <25676ec40523d25046073f1c37ea49e3@llnl.gov> References: <25676ec40523d25046073f1c37ea49e3@llnl.gov> Message-ID: James Boyle wrote: > I have some codes which require a Numeric array and others which > require a numpy array. > I have no control over either code, and not the time to convert all to > numpy if I did. > The problem is this - say I have a routine that returns a numpy array > as a result and I wish to do something to this array using a code that > uses Numeric. Just passing the numpy array to the numeric code does > not work. In my case the Numeric code thinks that the numpy float is a > long int, this is not good. So what does one do in the interim? There > are some legacy codes which will never be converted to numpy. > > I have seen discussion as to how to convert Numeric -> numpy, but not > how the two can play together. I can appreciate the strong desire to > eliminate having two systems, but the practical aspects of getting > things done must also be considered. > > I am using numpy 1.0b1 and Numeric 23.7 . Upgrade to Numeric 24.2 and use Numeric.asarray(numpy_array) and numpy.asarray(numeric_array) at the interfaces between your codes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kortmann at ideaworks.com Tue Aug 22 16:27:11 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 22 Aug 2006 13:27:11 -0700 (PDT) Subject: [Numpy-discussion] Version 1.0b3 In-Reply-To: References: Message-ID: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> Since no one has downloaded 1.0b3 yet, if someone wants to put up the windows version for python2.3 i would be more than happy to be the first person to download it :) From haase at msg.ucsf.edu Tue Aug 22 16:44:32 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 13:44:32 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <44EB5B6E.5020908@ieee.org> References: <200608221211.03343.haase@msg.ucsf.edu> <44EB5B6E.5020908@ieee.org> Message-ID: <200608221344.33145.haase@msg.ucsf.edu> Thanks for the reply, see question below... On Tuesday 22 August 2006 12:30, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > I just ran into more problems with my SWIG > > typemaps. > > In the C api the current enum for > > NPY_INT is 5 > > NPY_LONG is 7 > > > > to match overloaded function I need to check these type values. > > > > On 64bit all works fine: > > my 32bit int function matches NPY_INT - which is "int" in C/C++ > > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > > > > but on 32bit Linux > > the 32bit int function matches NPY_LONG > > there is no NPY_INT on 32bit > > Yes there is. Both NPY_INT and NPY_LONG are always there. One matches > the int and one matches the long. > > Perhaps you are confused about what the special defines NPY_INT32 match to? > > The behavior is that the 'long' type gets "first-dibs" then the > 'longlong' type gets a crack. Finally, the 'int' type is chosen. The > first one that matches the bit-type is used. > This explains it - my specific function overloads only one of its two array arguments (i.e. 
allow many different types) - the second one must be a C "int". [(a 32bit int) - but SWIG matches the "C signature" ] But what is the type number of " > that is: if I have a non overloaded C/C++ function that expects a C "int" > > - i.e. a 32bit int - I have write different function matching rules !!! > > What you need to do is stop trying to match bit-widths and instead match > c-types. That's why NPY_INT and NPY_LONG are both there. If you are referring to use of the sizeof() operator - I'm not doing that. Thanks as always for your quick and careful replies. - Sebastian From oliphant.travis at ieee.org Tue Aug 22 20:34:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 17:34:26 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221344.33145.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> <44EB5B6E.5020908@ieee.org> <200608221344.33145.haase@msg.ucsf.edu> Message-ID: <44EBA292.8010806@ieee.org> Sebastian Haase wrote: > This explains it - my specific function overloads only one of its two array > arguments (i.e. allow many different types) - the second one must be a > C "int". > [(a 32bit int) - but SWIG matches the "C signature" ] > But what is the type number of " But on 32bitLinux I get NPY_LONG because of that rule. > > My SWIG typemaps want to "double check" that a C function expecting c-type > "int" gets a NPY_INT - (a "long" needs a "NPY_LONG") > Perhaps I can help you do what you want without making assumptions about the platform. I'll assume you are matching on an int* signature and want to "translate" that to an integer array of the correct bit-width. So, you have a PyArrayObject as input I'll call self Just check: (PyArray_ISSIGNED(self) && PyArray_ITEMSIZE(self)==SIZEOF_INT) For your type-map check. This will work on all platforms and allow signed integers of the right type. 
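The check Travis gives can be mirrored in pure Python for experimentation before wiring it into a SWIG typemap (the helper name here is mine, and ctypes stands in for the C-level SIZEOF_INT):

```python
import ctypes
import numpy as np

def matches_c_int(arr):
    """Python rendering of the C-level test
    PyArray_ISSIGNED(self) && PyArray_ITEMSIZE(self) == SIZEOF_INT."""
    return (arr.dtype.kind == 'i' and
            arr.dtype.itemsize == ctypes.sizeof(ctypes.c_int))

assert matches_c_int(np.zeros(3, dtype=np.intc))        # C "int" always passes
assert not matches_c_int(np.zeros(3, dtype=np.uint32))  # unsigned is rejected
# Whether a C "long"-backed array passes depends on sizeof(long) on the
# current platform -- which is exactly what the check accounts for.
```

Because the test compares item sizes rather than type numbers, it accepts whichever of NPY_INT/NPY_LONG happens to be int-sized on the platform at hand.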
> I don't know what the solution should be - but maybe the rule should be > changed based on the assumption that "int" in more common !? > That's not going to happen at this point. Besides in the Python world, the fact that Python integers are "long" means that the "long" is the more common 32-bit integer on 32-bit machines. -Travis From carlosjosepita at yahoo.com.ar Tue Aug 22 22:51:01 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Tue, 22 Aug 2006 23:51:01 -0300 (ART) Subject: [Numpy-discussion] Array pooling Message-ID: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Hi! I'm writting a real time sound synthesis framework where processing units are interconnected via numpy arrays. These buffers are all the same size and type, so it would be easy and convenient pooling them in order to avoid excesive creation/destruction of arrays (consider that thousands of them are acquired and released per second, but just a few dozens used at the same time). But first I would like to know if numpy implements some pooling mechanism by itself. Could you give me some insight on this? Also, is it possible to obtain an uninitialized array? I mean, sometimes I don't feel like wasting valuable cpu clocks filling arrays with zeros, ones or whatever. Thank you in advance. Regards, Carlos --------------------------------- Pregunt?. Respond?. Descubr?. Todo lo que quer?as saber, y lo que ni imaginabas, est? en Yahoo! Respuestas (Beta). Probalo ya! -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Wed Aug 23 08:00:56 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 23 Aug 2006 13:00:56 +0100 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823025101.30020.qmail@web50302.mail.yahoo.com> References: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Message-ID: <20060823130056.576e41cc.simon@arrowtheory.com> On Tue, 22 Aug 2006 23:51:01 -0300 (ART) Carlos Pita wrote: > Hi! 
I'm writting a real time sound synthesis framework where processing units are interconnected via numpy arrays. These buffers are all the same size and type, so it would be easy and convenient pooling them in order to avoid excesive creation/destruction of arrays (consider that thousands of them are acquired and released per second, but just a few dozens used at the same time). But first I would like to know if numpy implements some pooling mechanism by itself. I don't think so. > Could you give me some insight on this? Also, is it possible to obtain an uninitialized array? numpy.empty > I mean, sometimes I don't feel like wasting valuable cpu clocks filling arrays with zeros, ones or whatever. > Thank you in advance. > Regards, > Carlos Sounds like fun. Simon. > > > > > > --------------------------------- > Pregunt?. Respond?. Descubr?. > Todo lo que quer?as saber, y lo que ni imaginabas, > est? en Yahoo! Respuestas (Beta). > Probalo ya! -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From charlesr.harris at gmail.com Tue Aug 22 23:31:32 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 22 Aug 2006 21:31:32 -0600 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823025101.30020.qmail@web50302.mail.yahoo.com> References: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Message-ID: On 8/22/06, Carlos Pita wrote: > > Hi! I'm writting a real time sound synthesis framework where processing > units are interconnected via numpy arrays. These buffers are all the same > size and type, so it would be easy and convenient pooling them in order to > avoid excesive creation/destruction of arrays (consider that thousands of > them are acquired and released per second, but just a few dozens used at the > same time). But first I would like to know if numpy implements some pooling > mechanism by itself. Could you give me some insight on this? 
Also, is it > possible to obtain an uninitialized array? I mean, sometimes I don't feel > like wasting valuable cpu clocks filling arrays with zeros, ones or > whatever. > Is there any reason to keep allocating arrays if you are just using them as data buffers? It seems you should be able to reuse them. If you wanted to be fancy you could keep them in a list, which would retain a reference and keep them from being garbage collected. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlosjosepita at yahoo.com.ar Wed Aug 23 00:11:03 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Wed, 23 Aug 2006 01:11:03 -0300 (ART) Subject: [Numpy-discussion] Array pooling In-Reply-To: Message-ID: <20060823041103.64388.qmail@web50302.mail.yahoo.com> One reason is to use operator syntax: buf1 = buf2 + buf3, instead of add(buf2,buf3, buf1). The other is to spare the final user (synth programmer) any buffer bookkeeping. My idea was to keep track of pooled buffers' reference counts, so that those currently unused would have a refcount of 1 and could be safely deleted (well, if pool policy variables allow it). But as buffers are acquired all the time, even a simple (pure-python) pooling policy implementation is pretty time consuming. In fact, I have benchmarked this against simply creating new zeros-arrays every time, and the non-pooling version just runs faster. That was when I thought that numpy could be doing some internal pooling by itself. Regards, Carlos Is there any reason to keep allocating arrays if you are just using them as data buffers? It seems you should be able to reuse them. If you wanted to be fancy you could keep them in a list, which would retain a reference and keep them from being garbage collected.
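A minimal sketch of the kind of pool being discussed, assuming fixed-size buffers and using numpy.empty to skip the zero-fill Carlos wants to avoid. The ArrayPool class and its method names are hypothetical, not part of NumPy:

```python
import numpy

class ArrayPool:
    """Pool of same-shape, same-dtype scratch buffers.

    acquire() reuses a released buffer when one is available,
    otherwise it allocates a fresh uninitialized array.
    """
    def __init__(self, shape, dtype=complex):
        self.shape = shape
        self.dtype = dtype
        self._free = []

    def acquire(self):
        if self._free:
            return self._free.pop()
        # numpy.empty returns an uninitialized array: no zero-filling cost
        return numpy.empty(self.shape, self.dtype)

    def release(self, arr):
        self._free.append(arr)

pool = ArrayPool((1024,))
a = pool.acquire()
pool.release(a)
b = pool.acquire()
assert b is a   # the released buffer was handed back out, not reallocated
```

As Carlos's benchmark suggests, the Python-level bookkeeping here can easily cost more than NumPy's own allocation, so the sketch is only worth it when buffer construction dominates.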
-------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Aug 23 10:39:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 Aug 2006 08:39:44 -0600 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823041103.64388.qmail@web50302.mail.yahoo.com> References: <20060823041103.64388.qmail@web50302.mail.yahoo.com> Message-ID: Hi Carlos, On 8/22/06, Carlos Pita wrote: > > One reason is to use operator syntax: buf1 = buf2 + buf3, instead of > add(buf2,buf3, buf1). The other is to spare the final user (synth > programmer) any buffer bookkeeping. > I see. My idea was to keep track of pooled buffers' reference counts, so that those > currently unused would have a refcount of 1 and could be safely deleted > (well, if pool policy variables allow it). But as buffers are acquired all > the time, even a simple (pure-python) pooling policy implementation is > pretty time consuming. In fact, I have benchmarked this against simply > creating new zeros-arrays every time, and the non-pooling version just runs > faster. That was when I thought that numpy could be doing some internal > pooling by itself. > I think the language libraries themselves must do some sort of pooling, at least the linux ones seem to. C++ programs do a lot of creation/destruction of structures on the heap and I have found the overhead noticeable but surprisingly small. Numpy arrays are a couple of layers of abstraction up, so maybe not quite as fast. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Wed Aug 23 14:45:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 11:45:29 -0700 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions Message-ID: <44ECA249.3030007@ieee.org> I'm working on some macros that will allow extensions to be "interruptable" (i.e. with Ctrl-C). 
The idea came from SAGE but the implementation is complicated by the possibility of threads and making sure to handle clean-up code correctly when the interrupt returns. I'd like to get this into 1.0 final. Anything needed will not require re-compilation of extension modules built for 1.0b2 however. This will be strictly "extra" and if an extension module doesn't use it there will be no problems. Step 1: Define the interface. Here are a couple of draft proposals. Please comment on them. 1) General purpose interface NPY_SIG_TRY { [code] } NPY_SIG_EXCEPT(signum) { [interrupt handling return] } NPY_SIG_ELSE [normal return] The idea of signum is to hold the signal actually caught. 2) Simpler interface NPY_SIG_TRY { [code] } NPY_SIG_EXCEPT_GOTO(label) [normal return] label: [interrupt handling return] C-extensions often use the notion of a label to handle failure code. If anybody has any thoughts on this, they would be greatly appreciated. Step 2: Implementation. I have the idea to have a single interrupt handler (defined globally in NumPy) that basically uses longjmp to return to the section of code corresponding to the thread that is handling the interrupt. I had thought to use a global variable containing a linked list of jmp_buf structures with a thread-id attached (PyThread_get_thread_ident()) so that the interrupt handler can search it to see if the thread has registered a return location. If it has not, then the interrupt handler will just return normally. In this way a thread that calls setjmp will be sure to return to the correct place when it handles the interrupt. Concern: My thinking is that this mechanism should work whether or not the GIL is held so that we don't have to worry about whether or not the GIL is held except in the interrupt handling case (when Python exceptions are to be set). But, honestly, this gets very confusing.
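For comparison, the flag-polling alternative to setjmp/longjmp (which comes up later in this thread) can be sketched at the Python level: the handler only records that the signal arrived, and the long-running loop checks the flag periodically. All names below are illustrative, not part of any proposed NumPy API:

```python
import signal

interrupted = False

def _sigint_handler(signum, frame):
    # Do as little as possible inside the handler: just record the signal.
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, _sigint_handler)

def long_computation(n):
    """Sums 0..n-1, bailing out early if Ctrl-C was pressed."""
    total = 0
    for i in range(n):
        if interrupted:   # the analogue of a periodic check inside the C loop
            break
        total += i
    return total

signal.raise_signal(signal.SIGINT)   # simulate Ctrl-C (Python 3.8+)
assert interrupted is True
assert long_computation(10**6) == 0  # flag already set, loop exits at once
```

The price of this style is that the check must be sprinkled through the computational code, which is exactly what the macro-bracketed longjmp approach above tries to avoid.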
The sigsetjmp / longjmp mechanism for handling interrupts is not recommended under windows (not sure about mingw), but there we could possibly use Microsoft's __try and __except extension to implement it. Initially, it would be "un-implemented" on platforms where it didn't work. Any comments are greatly appreciated -Travis From paul_midgley2000 at yahoo.co.uk Wed Aug 23 15:12:42 2006 From: paul_midgley2000 at yahoo.co.uk (Paul Midgley) Date: Wed, 23 Aug 2006 19:12:42 +0000 (GMT) Subject: [Numpy-discussion] Newbie question In-Reply-To: Message-ID: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Hello I have been interested in using python for some time for carrying out calculations, but I have not been able to determine if it is possible to use it to print out a report at the end. What I want is to use it, similar to Mathcad, producing structured equations in line with the text, graphs etc. I can produce decent reports using MS Word or open office, but these will not do the calculations and the analysis work that can be done with python and similar languages. What I am trying to achieve is calculations in a template form, where the raw data can be put in, the calculations are carried out, and the result can be printed out in the form of a report. Any help would be appreciated. Regards Paul -------------- next part -------------- An HTML attachment was scrubbed...
URL: From john at nnytech.net Wed Aug 23 15:23:59 2006 From: john at nnytech.net (John Byrnes) Date: Wed, 23 Aug 2006 19:23:59 +0000 Subject: [Numpy-discussion] Newbie question In-Reply-To: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> References: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Message-ID: <200608231924.10768.john@nnytech.net> On Wednesday 23 August 2006 19:12, Paul Midgley wrote: > Hello > > I have been interested in using python for some time for carrying out > calculations, but I have not been able to determine if it is possible to > use it to print out a report at the end. What I want is to use it similar > to Mathcad producing structured equations in line with the text, graphs > etc. > > I can produced decent reports using MS Word or open office, but these will > not do the calculations and the anlysis work that can be done with python > and similar languages. > > What I am trying to achieve is calculations in a template form where the > raw data can be put into it and carries out the calculations and it can be > printed out in the form of a report. > You may be able to use GNU TeXmacs with the Python plugin. I've not tried this so YMMV. TeXmacs: http://www.texmacs.org/ Python Plugin: http://dkbza.org/tmPython.html Enjoy! John -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 191 bytes Desc: not available URL: From aisaac at american.edu Wed Aug 23 15:38:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 23 Aug 2006 15:38:51 -0400 Subject: [Numpy-discussion] Newbie question In-Reply-To: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> References: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Message-ID: On Wed, 23 Aug 2006, (GMT) Paul Midgley apparently wrote: > I have been interested in using python for some time for > carrying out calculations, but I have not been able to > determine if it is possible to use it to print out > a report at the end. http://gael-varoquaux.info/computers/pyreport/ hth, Alan Isaac From oliphant.travis at ieee.org Wed Aug 23 15:59:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 12:59:29 -0700 Subject: [Numpy-discussion] speed degression In-Reply-To: References: <200608200055.32320.joris@ster.kuleuven.be> Message-ID: <44ECB3A1.5050304@ieee.org> Charles R Harris wrote: > Yes, > > On 8/19/06, Joris De Ridder > wrote: > > Hi, > > > > Some of my code is heavily using large complex arrays, and I noticed > a speed > > degression in NumPy 1.0b2 with respect to Numarray. The following > code snippet > > is an example that on my computer runs 10% faster in Numarray than > in NumPy. > > > > >>> A = zeros(1000000, complex) > > >>> for k in range(1000): > > ... A *= zeros(1000000, complex) > > > > (I replaced 'complex' with 'Complex' in Numarray). Can anyone > confirm this? > The multiply (and divide functions) for complex arrays were using the "generic interface" (probably because this is what Numeric did) which calls out to a function to compute each result. I just committed a switch to "in-line" the multiplication and division calls. The speed-up is about that 10%. Now, my numarray and NumPy versions of the test are very similar. 
-Travis From haase at msg.ucsf.edu Wed Aug 23 16:51:02 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 13:51:02 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() Message-ID: <200608231351.02236.haase@msg.ucsf.edu> Hi! numpy renamed the *function* abs to absolute. Most functions like mean, min, max, average, ... have an equivalent array *method*. Why is absolute left out? I think it should be added. Furthermore, looking at some lines of code that have multiple calls to absolute [ like f(absolute(a), absolute(b), absolute(c)) ] I think "some people" might prefer less typing and less reading, like f( a.abs(), b.abs(), c.abs() ). One could even consider not requiring the "function call" parenthesis '()' at all - but I don't know about further implications that might have. Thanks, Sebastian Haase PS: is there any performance hit in using the built-in abs function? From cookedm at physics.mcmaster.ca Wed Aug 23 17:13:45 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 17:13:45 -0400 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231351.02236.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> Message-ID: <20060823171345.786680ad@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 13:51:02 -0700 Sebastian Haase wrote: > Hi! > numpy renamed the *function* abs to absolute. > Most functions like mean, min, max, average, ... > have an equivalent array *method*. > > Why is absolute left out ? > I think it should be added . We've got __abs__ :-) > Furthermore, looking at some line of code that have multiple calls to > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > I think "some people" might prefer less typing and less reading, > like f( a.abs(), b.abs(), c.abs() ). > One could even consider not requiring the "function call" parenthesis '()' > at all - but I don't know about further implications that might have. eh, no.
things that return new arrays should be functions. (As opposed to views of existing arrays, like a.T) > PS: is there any performace hit in using the built-in abs function ? Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays have). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jdawe at eos.ubc.ca Wed Aug 23 17:27:29 2006 From: jdawe at eos.ubc.ca (Jordan Dawe) Date: Wed, 23 Aug 2006 14:27:29 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows Message-ID: <44ECC841.1040304@eos.ubc.ca> I just tried to compile numpy-1.0b3 under windows using mingw. I got this error: compile options: '-Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c' gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o In file included from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arrayobject.c:6643: initializer element is not constant numpy/core/src/arrayobject.c:6643: (near initialization for `PyArray_Type.tp_free') numpy/core/src/arrayobject.c:10312: initializer element is not constant numpy/core/src/arrayobject.c:10312: (near initialization for `PyArrayMultiIter_Type.tp_free') numpy/core/src/arrayobject.c:11189: initializer element is not constant numpy/core/src/arrayobject.c:11189: (near initialization for `PyArrayDescr_Type.tp_hash') error: Command "gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include 
-Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o" failed with exit status 1 Any ideas? Jordan Dawe From svetosch at gmx.net Wed Aug 23 17:34:48 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 23 Aug 2006 23:34:48 +0200 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECC841.1040304@eos.ubc.ca> References: <44ECC841.1040304@eos.ubc.ca> Message-ID: <44ECC9F8.1050108@gmx.net> Jordan Dawe schrieb: > I just tried to compile numpy-1.0b3 under windows using mingw. I got > this error: ... > > Any ideas? > No, except that I ran into the same problem... Hooray, I'm not alone ;-) -sven From perry at stsci.edu Wed Aug 23 17:43:15 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 23 Aug 2006 17:43:15 -0400 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44ECA249.3030007@ieee.org> References: <44ECA249.3030007@ieee.org> Message-ID: I thought it might be useful to give a little more context on the problems involved in handling such interruptions. Basically, one doesn't want to exit out of places where data structures are incompletely set up, or memory isn't properly handled so that later references to these don't cause segfaults (or experience memory leaks). There may be more exotic cases but typically many extensions are as simple as: 1) Figure out what inputs one has and the mode of computation needed 2) allocate and setup output arrays 3) do computation, possibly lengthy, over arrays 4) free temporary arrays and other data structures 5) return results Typically, the interrupt handling is needed only for 3, the part that it may spend a very long time in. 1, 2, 4, and 5 are not worth interrupting, and the area that may cause the most trouble. I'd argue that many things could do with a very simple structure where section 3 is bracketed with macros. 
Something like: NPY_SIG_INTERRUPTABLE [long looping computational code that doesn't create or destroy objects] NPY_SIG_END_INTERRUPTABLE followed by the normal code to do 4 and 5. What happens during an interrupt is the computation code is exited and execution resumes right after the closing macro. Very often one doesn't care that the results in the arrays may be incomplete, or invalid numbers (presumably you know that since you just did control-C, but maybe I'm confused). Any reason that most cases couldn't be handled with something this simple? All cases can't be handled with this, but most should I think. Perry On Aug 23, 2006, at 2:45 PM, Travis Oliphant wrote: > > I'm working on some macros that will allow extensions to be > "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the > implementation is complicated by the possibility of threads and making > sure to handle clean-up code correctly when the interrupt returns. > > I'd like to get this in to 1.0 final. Anything needed will not > require > re-compilation of extension modules built for 1.0b2 however. This > will > be strictly "extra" and if an extension module doesn't use it there > will > be no problems. > > Step 1: > > Define the interface. Here are a couple of draft proposals. Please > comment on them. > > 1) General purpose interface > > NPY_SIG_TRY { > [code] > } > NPY_SIG_EXCEPT(signum) { > [interrupt handling return] > } > NPY_SIG_ELSE > [normal return] > > The idea of signum is to hold the signal actually caught. > > > 2) Simpler interface > > NPY_SIG_TRY { > [code] > } > NPY_SIG_EXCEPT_GOTO(label) > [normal return] > > label: > [interrupt handling return] > > > C-extensions often use the notion of a label to handle failure code. > > If anybody has any thoughts on this, they would be greatly > appreciated. > > > Step 2: > > Implementation. 
I have the idea to have a single interrupt handler > (defined globally in NumPy) that basically uses longjmp to return > to the > section of code corresponding to the thread that is handling the > interrupt. I had thought to use a global variable containing a linked > list of jmp_buf structures with a thread-id attached > (PyThread_get_thread_ident()) so that the interrupt handler can search > it to see if the thread has registered a return location. If it has > not, then the intterupt handler will just return normally. In > this way > a thread that calls setjmpbuf will be sure to return to the correct > place when it handles the interrupt. > > Concern: > > My thinking is that this mechanism should work whether or not the > GIL is > held so that we don't have to worry about whether or not the GIL is > held > except in the interrupt handling case (when Python exceptions are > to be > set). But, honestly, this gets very confusing. > > The sigjmp / longjmp mechanism for handling interrupts is not > recommended under windows (not sure about mingw), but there we could > possibly use Microsoft's __try and __except extension to implement. > Initially, it would be "un-implemented" on platforms where it > didn't work. > > Any comments are greatly appreciated > > -Travis > > > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel? 
> cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From frank at qfin.net Wed Aug 23 17:47:28 2006 From: frank at qfin.net (Frank Conradie) Date: Wed, 23 Aug 2006 14:47:28 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECC9F8.1050108@gmx.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> Message-ID: <44ECCCF0.3080206@qfin.net> Hi Sven and Jordan I wish to add my name to this list ;-) I just got the same error trying to compile for Python 2.3 with latest candidate mingw32, following the instructions at http://www.scipy.org/Installing_SciPy/Windows . Hopefully someone can shed some light on this error - what I've been able to find on the net explains something about C not allowing dynamic initializing of global variables, whereas C++ does...? - Frank Sven Schreiber wrote: > Jordan Dawe schrieb: > >> I just tried to compile numpy-1.0b3 under windows using mingw. I got >> this error: >> > ... > >> Any ideas? >> >> > > No, except that I ran into the same problem... Hooray, I'm not alone ;-) > -sven > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Wed Aug 23 18:13:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 15:13:57 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECCCF0.3080206@qfin.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> Message-ID: <44ECD325.2040204@ieee.org> Frank Conradie wrote: > Hi Sven and Jordan > > I wish to add my name to this list ;-) I just got the same error > trying to compile for Python 2.3 with latest candidate mingw32, > following the instructions at > http://www.scipy.org/Installing_SciPy/Windows . > > Hopefully someone can shed some light on this error - what I've been > able to find on the net explains something about C not allowing > dynamic initializing of global variables, whereas C++ does...? > Edit line 690 of ndarrayobject.h to read #define NPY_USE_PYMEM 0 Hopefully that should fix the error. -Travis From oliphant.travis at ieee.org Wed Aug 23 18:21:41 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 15:21:41 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECD325.2040204@ieee.org> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> Message-ID: <44ECD4F5.9000401@ieee.org> Travis Oliphant wrote: > Frank Conradie wrote: > >> Hi Sven and Jordan >> >> I wish to add my name to this list ;-) I just got the same error >> trying to compile for Python 2.3 with latest candidate mingw32, >> following the instructions at >> http://www.scipy.org/Installing_SciPy/Windows . >> >> Hopefully someone can shed some light on this error - what I've been >> able to find on the net explains something about C not allowing >> dynamic initializing of global variables, whereas C++ does...? >> >> > Edit line 690 of ndarrayobject.h to read > > #define NPY_USE_PYMEM 0 > > Hopefully that should fix the error. 
> You will also have to alter line 11189 so that _Py_HashPointer is replaced by 0 or NULL From wbaxter at gmail.com Wed Aug 23 19:12:31 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 24 Aug 2006 08:12:31 +0900 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <20060823171345.786680ad@arbutus.physics.mcmaster.ca> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: The thing that I find I keep forgetting is that abs() is a built-in, but other simple functions are not. So it's abs(foo), but numpy.floor(foo) and numpy.ceil(foo). And then there's round() which is a built-in but can't be used with arrays, so numpy.round_(foo). Seems like it would be more consistent to just add a numpy.abs() and numpy.round(). But I guess there's nothing numpy can do about it... you can't name a method the same as a built-in function, right? That's why we have numpy.round_() instead of numpy.round(), no? [...goes and checks] Oh, you *can* name a module function the same as a built-in. Hmm... so then why isn't numpy.round_() just numpy.round()? Is it just so "from numpy import *" won't hide the built-in? --bill On 8/24/06, David M. Cooke wrote: > > On Wed, 23 Aug 2006 13:51:02 -0700 > Sebastian Haase wrote: > > > Hi! > > numpy renamed the *function* abs to absolute. > > Most functions like mean, min, max, average, ... > > have an equivalent array *method*. > > > > Why is absolute left out ? > > I think it should be added . > > We've got __abs__ :-) > > > Furthermore, looking at some line of code that have multiple calls to > > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > > I think "some people" might prefer less typing and less reading, > > like f( a.abs(), b.abs(), c.abs() ). > > > One could even consider not requiring the "function call" parenthesis > '()' > > at all - but I don't know about further implications that might have. > > eh, no. 
things that return new arrays should be functions. (As opposed to > views of existing arrays, like a.T) > > > PS: is there any performace hit in using the built-in abs function ? > > Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays have). > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Wed Aug 23 19:22:52 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 16:22:52 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: <200608231622.52266.haase@msg.ucsf.edu> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > The thing that I find I keep forgetting is that abs() is a built-in, but > other simple functions are not. So it's abs(foo), but numpy.floor(foo) and > numpy.ceil(foo). And then there's round() which is a built-in but can't be > used with arrays, so numpy.round_(foo). Seems like it would be more > consistent to just add a numpy.abs() and numpy.round(). > > But I guess there's nothing numpy can do about it... you can't name a > method the same as a built-in function, right? That's why we have > numpy.round_() instead of numpy.round(), no? > [...goes and checks] > Oh, you *can* name a module function the same as a built-in. Hmm... so > then why isn't numpy.round_() just numpy.round()? Is it just so "from > numpy import *" won't hide the built-in? > That is my theory... Even though I try to advertise import numpy as N a) "N." is not *that* much extra typing b) it's much clearer to read code and see what is special from numpy vs. what is builtin c) (most important for me): I use PyShell/PyCrust and when I type the '.'
after 'N' I get a nice pop-up list reminding me of all the functions in numpy ;-) Regarding the original subject: a) "absolute" is impractically too much typing and b) I just thought some (module-) functions might be "forgotten" to be put in as (object-) methods ... !? Cheers, Sebastian > --bill > > On 8/24/06, David M. Cooke wrote: > > On Wed, 23 Aug 2006 13:51:02 -0700 > > > > Sebastian Haase wrote: > > > Hi! > > > numpy renamed the *function* abs to absolute. > > > Most functions like mean, min, max, average, ... > > > have an equivalent array *method*. > > > > > > Why is absolute left out ? > > > I think it should be added . > > > > We've got __abs__ :-) > > > > > Furthermore, looking at some line of code that have multiple calls to > > > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > > > I think "some people" might prefer less typing and less reading, > > > like f( a.abs(), b.abs(), c.abs() ). > > > > > > One could even consider not requiring the "function call" parenthesis > > > > '()' > > > > > at all - but I don't know about further implications that might have. > > > > eh, no. things that return new arrays should be functions. (As opposed to > > views of existing arrays, like a.T) > > > > > PS: is there any performace hit in using the built-in abs function ? > > > > Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays > > have). From cookedm at physics.mcmaster.ca Wed Aug 23 19:40:48 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 19:40:48 -0400 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231622.52266.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> Message-ID: <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 16:22:52 -0700 Sebastian Haase wrote: > On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > > The thing that I find I keep forgetting is that abs() is a built-in, but > > other simple functions are not. So it's abs(foo), but numpy.floor(foo) > > and numpy.ceil(foo). And then there's round() which is a built-in but > > can't be used with arrays, so numpy.round_(foo). Seems like it would > > be more consistent to just add a numpy.abs() and numpy.round(). > > > > Regarding the original subject: > a) "absolute" is impractically too much typing and > b) I just thought some (module-) functions might be "forgotten" to be put > in as (object-) methods ... !? Four-line change, so I added a.abs() (three lines for array, one for MaskedArray).
The idea came from SAGE but the > implementation is complicated by the possibility of threads and making > sure to handle clean-up code correctly when the interrupt returns. > For writing clean-up code, here's some prior art on adding exceptions to C: http://www.ossp.org/pkg/lib/ex/ (BSD license) http://adomas.org/excc/ (GPL'd, so no good) http://ldeniau.web.cern.ch/ldeniau/html/exception/exception.html (no license given) The last one has functions that allow you to add pointers (and their deallocation functions) to a list so that they can be deallocated when an exception is thrown. (You don't necessarily need something like these libraries, but I thought I'd throw it in here, because it's along the same lines) > Step 2: > > Implementation. I have the idea to have a single interrupt handler > (defined globally in NumPy) that basically uses longjmp to return to the > section of code corresponding to the thread that is handling the > interrupt. I had thought to use a global variable containing a linked > list of jmp_buf structures with a thread-id attached > (PyThread_get_thread_ident()) so that the interrupt handler can search > it to see if the thread has registered a return location. If it has > not, then the intterupt handler will just return normally. In this way > a thread that calls setjmpbuf will be sure to return to the correct > place when it handles the interrupt. Signals and threads don't mix well at *all*. With POSIX semantics, synchronous signals (ones caused by the thread itself) should be sent to the handler for that thread. Asynchronous ones (like SIGINT for Ctrl-C) will be sent to an *arbitrary* thread. (Apple, for instance, doesn't make any guarantees on which thread gets it: http://developer.apple.com/qa/qa2001/qa1184.html) Best way I can see this is to have a SIGINT handler installed that sets a global variable, and check that every so often. 
It's such a good way that Python already does this -- Parser/intrcheck.c sets the handler, and you can use PyOS_InterruptOccurred() to check if one happened. So something like while (long running loop) { if (PyOS_InterruptOccurred()) goto error: ... useful stuff ... } error: This could be abstracted to a set of macros (with Perry's syntax): NPY_SIG_INTERRUPTABLE while (long loop) { NPY_CHECK_SIGINT; .. more stuff .. } NPY_SIG_END_INTERRUPTABLE where NPY_CHECK_SIGINT would do a longjmp(). Or come up with a good (fast) way to run stuff in another process :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed Aug 23 19:40:48 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 19:40:48 -0400 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231622.52266.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> Message-ID: <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 16:22:52 -0700 Sebastian Haase wrote: > On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > > The thing that I find I keep forgetting is that abs() is a built-in, but > > other simple functions are not. So it's abs(foo), but numpy.floor(foo) > > and numpy.ceil(foo). And then there's round() which is a built-in but > > can't be used with arrays, so numpy.round_(foo). Seems like it would > > be more consistent to just add a numpy.abs() and numpy.round(). > > > > Regarding the original subject: > a) "absolute" is impractically too much typing and > b) I just thought some (module-) functions might be "forgotten" to be put > in as (object-) methods ... !? Four-line change, so I added a.abs() (three lines for array, one for MaskedArray). 
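The delegation Cooke describes earlier in the thread, builtin abs() looking up x.__abs__(), is easy to check with a small pure-Python class (Meters is just an illustrative name, not anything from NumPy):

```python
class Meters:
    """Toy numeric type showing how the builtin abs() finds __abs__."""
    def __init__(self, value):
        self.value = value

    def __abs__(self):
        # abs(x) calls x.__abs__(), exactly as it does for numpy arrays
        return Meters(abs(self.value))

d = Meters(-3.5)
assert abs(d).value == 3.5
```

This is why abs(arr) on an ndarray carries no extra Python-level cost over calling the absolute ufunc directly: the builtin dispatches straight to the type's __abs__ slot.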
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Aug 23 19:46:15 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 23 Aug 2006 17:46:15 -0600 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: On 8/23/06, Bill Baxter wrote: > The thing that I find I keep forgetting is that abs() is a built-in, but > other simple functions are not. So it's abs(foo), but numpy.floor(foo) and > numpy.ceil(foo). And then there's round() which is a built-in but can't be > used with arrays, so numpy.round_(foo). Seems like it would be more > consistent to just add a numpy.abs() and numpy.round(). > > But I guess there's nothing numpy can do about it... you can't name a > method the same as a built-in function, right? That's why we have > numpy.round_() instead of numpy.round(), no? > [...goes and checks] > Oh, you *can* name a module function the same as a built-in. Hmm... so then > why isn't numpy.round_() just numpy.round()? Is it just so "from numpy > import *" won't hide the built-in? Technically numpy could simply have (illustrated with round, but works also with abs) round = round_ and simply NOT include round in the __all__ list. This would make numpy.round(x) work (clean syntax) while from numpy import * would not clobber the builtin round. That sounds like a decent solution to me. 
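A quick sketch of that aliasing trick with a throwaway module (the `fakenum` name is made up for illustration; this is not the actual numpy source):

```python
import types

fakenum = types.ModuleType("fakenum")
exec(
    "def round_(x):\n"
    "    '''stand-in for numpy.round_'''\n"
    "    return float(int(x + (0.5 if x >= 0 else -0.5)))\n"
    "\n"
    "# the trick: alias the clean name, but leave it out of __all__\n"
    "round = round_\n"
    "__all__ = ['round_']\n",
    fakenum.__dict__,
)

assert fakenum.round(2.6) == 3.0          # qualified access gets the alias
assert "round" not in fakenum.__all__     # 'from fakenum import *' skips it
assert round(2.6) == 3                    # the builtin round is untouched
```

Because star-imports copy only the names listed in `__all__`, the builtin survives while `fakenum.round(x)` still reads cleanly.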
Cheers, f From oliphant.travis at ieee.org Wed Aug 23 21:37:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 18:37:28 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> Message-ID: <44ED02D8.6030401@ieee.org> David M. Cooke wrote: > On Wed, 23 Aug 2006 16:22:52 -0700 > Sebastian Haase wrote: > > >> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >> >>> The thing that I find I keep forgetting is that abs() is a built-in, but >>> other simple functions are not. So it's abs(foo), but numpy.floor(foo) >>> and numpy.ceil(foo). And then there's round() which is a built-in but >>> can't be used with arrays, so numpy.round_(foo). Seems like it would >>> be more consistent to just add a numpy.abs() and numpy.round(). >>> >>> >> Regarding the original subject: >> a) "absolute" is impractically too much typing and >> b) I just thought some (module-) functions might be "forgotten" to be put >> in as (object-) methods ... !? >> > > Four-line change, so I added a.abs() (three lines for array, one > for MaskedArray). > While I appreciate its proactive nature, I don't like this change because it adds another "ufunc" as a method. Right now, I think conj is the only other method like that. Instead, I like better the idea of adding abs, round, max, and min to the "non-import-*" namespace of numpy. 
From haase at msg.ucsf.edu Wed Aug 23 22:02:13 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 19:02:13 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED02D8.6030401@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> Message-ID: <200608231902.13491.haase@msg.ucsf.edu> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: > David M. Cooke wrote: > > On Wed, 23 Aug 2006 16:22:52 -0700 > > > > Sebastian Haase wrote: > >> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > >>> The thing that I find I keep forgetting is that abs() is a built-in, > >>> but other simple functions are not. So it's abs(foo), but > >>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which > >>> is a built-in but can't be used with arrays, so numpy.round_(foo). > >>> Seems like it would be more consistent to just add a numpy.abs() and > >>> numpy.round(). > >> > >> Regarding the original subject: > >> a) "absolute" is impractically too much typing and > >> b) I just thought some (module-) functions might be "forgotten" to be > >> put in as (object-) methods ... !? > > > > Four-line change, so I added a.abs() (three lines for array, one > > for MaskedArray). > > While I appreciate it's proactive nature, I don't like this change > because it adds another "ufunc" as a method. Right now, I think conj is > the only other method like that. > > Instead, I like better the idea of adding abs, round, max, and min to > the "non-import-*" namespace of numpy. > How does this compare with mean, min, max, average ? BTW: I think my choice is now settled on the builtin call: abs(arr) -- short and sweet. (As long as it is really supposed to *always* work and is not *slow* in any way !?!?!?!?) 
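For the record, abs(arr) works because the builtin defers to the object's __abs__ hook; a toy stand-in class (not ndarray itself, whose real __abs__ maps to the ufunc absolute) shows the dispatch:

```python
class Toy:
    """Illustrative container standing in for ndarray."""
    def __init__(self, data):
        self.data = list(data)
    def __abs__(self):
        # the builtin abs() lands here and applies element-wise
        return Toy(abs(x) for x in self.data)

assert abs(Toy([-1.5, 2, -3])).data == [1.5, 2, 3]
```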
Cheers, Sebastian From oliphant.travis at ieee.org Wed Aug 23 22:12:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 19:12:03 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231902.13491.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <200608231902.13491.haase@msg.ucsf.edu> Message-ID: <44ED0AF3.2020601@ieee.org> Sebastian Haase wrote: > On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: > >> David M. Cooke wrote: >> >>> On Wed, 23 Aug 2006 16:22:52 -0700 >>> >>> Sebastian Haase wrote: >>> >>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >>>> >>>>> The thing that I find I keep forgetting is that abs() is a built-in, >>>>> but other simple functions are not. So it's abs(foo), but >>>>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which >>>>> is a built-in but can't be used with arrays, so numpy.round_(foo). >>>>> Seems like it would be more consistent to just add a numpy.abs() and >>>>> numpy.round(). >>>>> >>>> Regarding the original subject: >>>> a) "absolute" is impractically too much typing and >>>> b) I just thought some (module-) functions might be "forgotten" to be >>>> put in as (object-) methods ... !? >>>> >>> Four-line change, so I added a.abs() (three lines for array, one >>> for MaskedArray). >>> >> While I appreciate it's proactive nature, I don't like this change >> because it adds another "ufunc" as a method. Right now, I think conj is >> the only other method like that. >> >> Instead, I like better the idea of adding abs, round, max, and min to >> the "non-import-*" namespace of numpy. >> >> > How does this compare with > mean, min, max, average > ? > I'm not sure what this question is asking, so I'll answer what I think it is asking. The mean, min, max, and average functions are *not* ufuncs. They are methods of particular ufuncs. 
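That relationship can be mimicked in plain Python (a toy sketch, not the NumPy machinery; in NumPy, sum corresponds to add.reduce and min to minimum.reduce):

```python
from functools import reduce

# toy stand-ins for the binary ufuncs numpy.add and numpy.minimum
def add(a, b):
    return a + b

def minimum(a, b):
    return a if a < b else b

# sum/min style functions are reductions built on those binary ufuncs,
# roughly what np.add.reduce and np.minimum.reduce provide
assert reduce(add, [1, 2, 3, 4]) == 10
assert reduce(minimum, [4, 1, 3]) == 1
```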
The abs() should not be slow (because it calls the __abs__ method which for arrays is mapped to the ufunc absolute). Thus, there is one more layer of indirection which will only matter for small arrays. -Travis From david at ar.media.kyoto-u.ac.jp Wed Aug 23 23:11:34 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 24 Aug 2006 12:11:34 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <20060823193549.70728721@arbutus.physics.mcmaster.ca> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> Message-ID: <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> David M. Cooke wrote: > On Wed, 23 Aug 2006 11:45:29 -0700 > Travis Oliphant wrote: > >> I'm working on some macros that will allow extensions to be >> "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the >> implementation is complicated by the possibility of threads and making >> sure to handle clean-up code correctly when the interrupt returns. >> > This is funny, I was just thinking about that yesterday. This is a major problem when writing C extensions in matlab (the manual says use the matlab allocator instead of malloc/new/whatever, but when you call a library, you cannot do that...). > > Best way I can see this is to have a SIGINT handler installed that sets a > global variable, and check that every so often. It's such a good way that > Python already does this -- Parser/intrcheck.c sets the handler, and you can > use PyOS_InterruptOccurred() to check if one happened. So something like This is the way I do it when writing extension under matlab. I am by no means knowledgeable about those kind of things, but this is the simplest solution I came up with so far. I would guess that because it uses one global variable, it should not matter which thread receives the signal ? > > while (long running loop) { > if (PyOS_InterruptOccurred()) goto error: > ... useful stuff ... 
> } > error: > > This could be abstracted to a set of macros (with Perry's syntax): > > NPY_SIG_INTERRUPTABLE > while (long loop) { > NPY_CHECK_SIGINT; > .. more stuff .. > } > NPY_SIG_END_INTERRUPTABLE > > where NPY_CHECK_SIGINT would do a longjmp(). Is there really a need for a longjmp ? What I simply do in this case is to check the global variable, and if its value changes, go to the normal error handling. Let's say you already have good error handling in your function, as Travis described in his email: status = do_stuff(); if (status < 0) { goto cleanup; } Then, to handle sigint, you need a global variable got_sigint which is modified by the signal handler, and check its value (the exact type of this variable is platform specific; on linux, I am using volatile sig_atomic_t, as recommended by the GNU C doc):: /* status is 0 if everything is OK */ status = do_stuff(); if (status < 0) { goto cleanup; } sigprocmask (SIG_BLOCK, &block_sigint, NULL); if (got_sigint) { got_sigint = 0; goto cleanup; } sigprocmask (SIG_UNBLOCK, &block_sigint, NULL); So the error handling does not need to be modified, and no longjmp is needed ? Or maybe I don't understand what you mean. I think the case proposed by Perry is too restrictive: it is really common to use external libraries for which we do not know whether they allocate memory during processing, and there is a need to clean that up too. > > Or come up with a good (fast) way to run stuff in another process :-) > This sounds a bit overkill, and a pain to implement for different platforms ? The checking of signals should be fast, but it has a cost (you have to use a branch) which prevents it from being called too often inside a loop, for example. 
David From haase at msg.ucsf.edu Thu Aug 24 00:22:32 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 21:22:32 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED0AF3.2020601@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <200608231902.13491.haase@msg.ucsf.edu> <44ED0AF3.2020601@ieee.org> Message-ID: <44ED2988.8020501@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: >> >>> David M. Cooke wrote: >>> >>>> On Wed, 23 Aug 2006 16:22:52 -0700 >>>> >>>> Sebastian Haase wrote: >>>> >>>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >>>>> >>>>>> The thing that I find I keep forgetting is that abs() is a built-in, >>>>>> but other simple functions are not. So it's abs(foo), but >>>>>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which >>>>>> is a built-in but can't be used with arrays, so numpy.round_(foo). >>>>>> Seems like it would be more consistent to just add a numpy.abs() and >>>>>> numpy.round(). >>>>>> >>>>> Regarding the original subject: >>>>> a) "absolute" is impractically too much typing and >>>>> b) I just thought some (module-) functions might be "forgotten" to be >>>>> put in as (object-) methods ... !? >>>>> >>>> Four-line change, so I added a.abs() (three lines for array, one >>>> for MaskedArray). >>>> >>> While I appreciate it's proactive nature, I don't like this change >>> because it adds another "ufunc" as a method. Right now, I think conj is >>> the only other method like that. >>> >>> Instead, I like better the idea of adding abs, round, max, and min to >>> the "non-import-*" namespace of numpy. >>> >>> >> How does this compare with >> mean, min, max, average >> ? >> > > I'm not sure what this question is asking, so I'll answer what I think > it is asking. 
> > The mean, min, max, and average functions are *not* ufuncs. They are > methods of particular ufuncs. > Yes - that's what I wanted to hear ! I'm just trying to bring in the "user's" point of view: Not thinking about how they are implemented under the hood: mean, min, max, average have a very similar "feeling" to them as "abs". I'm hoping this ("seeing things from the user p.o.v.") can stay like that for as long as possible ! Numpy should be focused on "scientists, not programmers". (This is just why I posted this comment about "arr.abs()" - but if there is a good reason to not have this for "simplicity reasons 'under the hood'" I can see that perfectly fine !) - Sebastian From wbaxter at gmail.com Thu Aug 24 00:41:50 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 24 Aug 2006 13:41:50 +0900 Subject: [Numpy-discussion] users point of view and ufuncs Message-ID: On 8/24/06, Sebastian Haase wrote: > > > I'm not sure what this question is asking, so I'll answer what I think > > it is asking. > > > > The mean, min, max, and average functions are *not* ufuncs. They are > > methods of particular ufuncs. > > > Yes - that's what I wanted to hear ! I'm just trying to bring in the > "user's" point of view: Not thinking about how they are implemented > under the hood: mean, min, max, average have a very similar "feeling" to > them as "abs". While we're on the subject of the "user's" point of view, the term "ufunc" is not very new-user friendly, yet it gets slung around fairly often. I'm not sure what to do about it exactly, but maybe for starters it would be nice to add a concise definition of "ufunc" to the numpy glossary: http://www.scipy.org/Numpy_Glossary. Can anyone come up with such a definition? Here's my stab at it: ufunc: A function that operates element-wise on arrays. But I have a feeling there's more to it than that. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chanley at stsci.edu Thu Aug 24 08:57:12 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 24 Aug 2006 08:57:12 -0400 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris Message-ID: <44EDA228.20100@stsci.edu> Good Morning, Numpy revision 3056 will not build on either Red Hat Enterprise 3 or Solaris 8. The relevant syntax errors are below: For RHE3: --------- creating build/temp.linux-i686-2.4 creating build/temp.linux-i686-2.4/numpy creating build/temp.linux-i686-2.4/numpy/core creating build/temp.linux-i686-2.4/numpy/core/src compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c' gcc: numpy/core/src/multiarraymodule.c In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h:95: syntax error before "_NPY_SIGINT_BUF" numpy/core/include/numpy/npy_interrupt.h:95: warning: type defaults to `int' in declaration of `_NPY_SIGINT_BUF' numpy/core/include/numpy/npy_interrupt.h:95: warning: data definition has no type or storage class numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:100: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:100: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:100: for each function it appears in.) 
numpy/core/include/numpy/npy_interrupt.h:101: warning: implicit declaration of function `longjmp' numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6441: `SIGINT' undeclared (first use in this function) numpy/core/src/multiarraymodule.c:6441: warning: implicit declaration of function `setjmp' In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h:95: syntax error before "_NPY_SIGINT_BUF" numpy/core/include/numpy/npy_interrupt.h:95: warning: type defaults to `int' in declaration of `_NPY_SIGINT_BUF' numpy/core/include/numpy/npy_interrupt.h:95: warning: data definition has no type or storage class numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:100: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:100: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:100: for each function it appears in.) 
numpy/core/include/numpy/npy_interrupt.h:101: warning: implicit declaration of function `longjmp' numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6441: `SIGINT' undeclared (first use in this function) numpy/core/src/multiarraymodule.c:6441: warning: implicit declaration of function `setjmp' error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 For Solaris 8: -------------- creating build/temp.solaris-2.8-sun4u-2.4 creating build/temp.solaris-2.8-sun4u-2.4/numpy creating build/temp.solaris-2.8-sun4u-2.4/numpy/core creating build/temp.solaris-2.8-sun4u-2.4/numpy/core/src compile options: '-Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c' cc: numpy/core/src/multiarraymodule.c "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: jmp_buf "numpy/core/include/numpy/npy_interrupt.h", line 95: syntax error before or at: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 100: undefined symbol: SIG_IGN "numpy/core/include/numpy/npy_interrupt.h", line 100: warning: improper pointer/integer combination: arg #2 "numpy/core/src/scalartypes.inc.src", line 70: warning: statement not reached "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: 
warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arrayobject.c", line 4338: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, int) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 4444: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to unsigned long, pointer to unsigned long, int, int) returning int "numpy/core/src/arrayobject.c", line 4448: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to char, pointer to char, int, int) returning int "numpy/core/src/arrayobject.c", line 5313: warning: assignment type mismatch: pointer to function(pointer to struct PyArrayObject {int ob_refcnt, pointer to struct _typeobject {..} ob_type, pointer to char data, int nd, pointer to int dimensions, pointer to int strides, pointer to struct _object {..} base, pointer to struct {..} descr, int flags, pointer to struct _object {..} weakreflist}, pointer to struct _object {int ob_refcnt, pointer to struct _typeobject {..} ob_type}) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 7280: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, pointer to void, pointer to void) returning void "=" pointer to void "numpy/core/src/multiarraymodule.c", line 6441: undefined symbol: SIGINT cc: acomp failed for numpy/core/src/multiarraymodule.c "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: jmp_buf "numpy/core/include/numpy/npy_interrupt.h", line 95: syntax error before or at: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style 
declaration or incorrect type for: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 100: undefined symbol: SIG_IGN "numpy/core/include/numpy/npy_interrupt.h", line 100: warning: improper pointer/integer combination: arg #2 "numpy/core/src/scalartypes.inc.src", line 70: warning: statement not reached "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arrayobject.c", line 4338: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, int) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 4444: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to unsigned long, pointer to unsigned long, int, int) returning int "numpy/core/src/arrayobject.c", line 4448: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to char, pointer to char, int, int) returning int "numpy/core/src/arrayobject.c", line 5313: warning: assignment type mismatch: pointer to function(pointer to struct PyArrayObject {int ob_refcnt, pointer to struct _typeobject {..} ob_type, pointer to char data, int nd, pointer to int dimensions, pointer to int strides, pointer to struct _object {..} base, pointer to struct {..} descr, int flags, pointer to struct _object {..} weakreflist}, pointer to struct _object {int ob_refcnt, pointer to struct _typeobject {..} ob_type}) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 7280: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, pointer to 
void, pointer to void) returning void "=" pointer to void "numpy/core/src/multiarraymodule.c", line 6441: undefined symbol: SIGINT cc: acomp failed for numpy/core/src/multiarraymodule.c error: Command "/opt/SUNWspro-6u2/bin/cc -DNDEBUG -O -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.solaris-2.8-sun4u-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 2 Chris From ndarray at mac.com Thu Aug 24 09:27:46 2006 From: ndarray at mac.com (Sasha) Date: Thu, 24 Aug 2006 09:27:46 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Bill Baxter wrote: >[snip] it would be > nice to add a concise definition of "ufunc" to the numpy glossary: > http://www.scipy.org/Numpy_Glossary. > done > Can anyone come up with such a definition? I copied the definition from the old Numeric manual. > Here's my stab at it: > > ufunc: A function that operates element-wise on arrays. > This is not entirely correct. Ufuncs operate on anything that can be passed to asarray: arrays, python lists, tuples or scalars. From frank at qfin.net Thu Aug 24 11:36:20 2006 From: frank at qfin.net (Frank Conradie) Date: Thu, 24 Aug 2006 08:36:20 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECD4F5.9000401@ieee.org> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> <44ECD4F5.9000401@ieee.org> Message-ID: <44EDC774.6050603@qfin.net> Thanks Travis - that did the trick. Is this an issue specifically with mingw and Windows? 
- Frank Travis Oliphant wrote: > Travis Oliphant wrote: > >> Frank Conradie wrote: >> >> >>> Hi Sven and Jordan >>> >>> I wish to add my name to this list ;-) I just got the same error >>> trying to compile for Python 2.3 with latest candidate mingw32, >>> following the instructions at >>> http://www.scipy.org/Installing_SciPy/Windows . >>> >>> Hopefully someone can shed some light on this error - what I've been >>> able to find on the net explains something about C not allowing >>> dynamic initializing of global variables, whereas C++ does...? >>> >>> >>> >> Edit line 690 of ndarrayobject.h to read >> >> #define NPY_USE_PYMEM 0 >> >> Hopefully that should fix the error. >> >> > > You will also have to alter line 11189 so that > > _Py_HashPointer is replaced by 0 or NULL > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Thu Aug 24 12:22:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:22:29 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EDC774.6050603@qfin.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> <44ECD4F5.9000401@ieee.org> <44EDC774.6050603@qfin.net> Message-ID: <44EDD245.6020708@ieee.org> Frank Conradie wrote: > Thanks Travis - that did the trick. 
Is this an issue specifically with > mingw and Windows? > Yes, I keep forgetting that Python functions are not necessarily defined at compile time on Windows. It may also be a problem with MSVC on Windows but I'm not sure. The real fix is now in SVN where these function pointers are initialized before calling PyType_Ready -Travis From oliphant.travis at ieee.org Thu Aug 24 12:24:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:24:04 -0600 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDA228.20100@stsci.edu> References: <44EDA228.20100@stsci.edu> Message-ID: <44EDD2A4.5090606@ieee.org> Christopher Hanley wrote: > Good Morning, > > Numpy revision 3056 will not build on either Red Hat Enterprise 3 or > Solaris 8. The relevant syntax errors are below: > > I'd like to see which platforms do not work with the npy_interrupt.h stuff. If you have a unique platform please try the latest SVN. There is a NPY_NO_SIGNAL define that will "turn off" support for interrupts which we can define on platforms that won't work. -Travis From dd55 at cornell.edu Thu Aug 24 12:36:21 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 24 Aug 2006 12:36:21 -0400 Subject: [Numpy-discussion] =?iso-8859-1?q?numpy_revision_3056_will_not_bu?= =?iso-8859-1?q?ild_on_RHE3_or=09Solaris?= In-Reply-To: <44EDD2A4.5090606@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> Message-ID: <200608241236.21573.dd55@cornell.edu> Hi Travis, On Thursday 24 August 2006 12:24, you wrote: > Christopher Hanley wrote: > > Good Morning, > > > > Numpy revision 3056 will not build on either Red Hat Enterprise 3 or > > Solaris 8. The relevant syntax errors are below: > > I'd like to see which platforms do not work with the npy_interrupt.h > stuff. If you have a unique platform please try the latest SVN. I am able to build on an amd64/gentoo with python 2.4.3 and gcc-4.1.1. 
I am not able to build on 32bit RHEL4: --------------------------------- In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:102: error: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:102: error: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:102: error: for each function it appears in.) numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6439: error: `SIGINT' undeclared (first use in this function) In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:102: error: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:102: error: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:102: error: for each function it appears in.) 
numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6439: error: `SIGINT' undeclared (first use in this function) error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC -Ibuild/src.linux-i686-2.3/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.3 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.3/numpy/core/src/multiarraymodule.o" failed with exit status 1 From kortmann at ideaworks.com Thu Aug 24 12:50:55 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Thu, 24 Aug 2006 09:50:55 -0700 (PDT) Subject: [Numpy-discussion] numpy-1.0b3 under windows Message-ID: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> Sorry for my ignorance, but I have not ever heard of or used mingw32. I am also using python 2.3. Is there any way someone could possibly send me a brief walk through of how to install 1.0b3 on windows32? Also I am not sure that I know how to manipulate the code that you guys said that you have to so that it will work so if that is needed could you post a walk through of that also? From haase at msg.ucsf.edu Thu Aug 24 12:55:37 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 09:55:37 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> References: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> Message-ID: <200608240955.38031.haase@msg.ucsf.edu> On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: > Sorry for my ignorance, but I have not ever heard of or used mingw32. I > am also using python 2.3. http://en.wikipedia.org/wiki/Mingw explains in detail. > > Is there any way someone could possibly send me a brief walk through of > how to install 1.0b3 on windows32? 
Do you know about the ("awesome") wiki website at scipy.org? Try your luck at http://www.scipy.org/Build_for_Windows > > Also I am not sure that I know how to manipulate the code that you guys > said that you have to so that it will work so if that is needed could you > post a walk through of that also? > To my knowledge there is no need to "manipulate code" .... Maybe you should try getting pre-built versions first. Sebastian Haase From oliphant.travis at ieee.org Thu Aug 24 12:34:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:34:47 -0600 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDD2A4.5090606@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> Message-ID: <44EDD527.1040008@ieee.org> Travis Oliphant wrote: > Christopher Hanley wrote: > >> Good Morning, >> >> Numpy revision 3056 will not build on either Red Hat Enterprise 3 or >> Solaris 8. The relevant syntax errors are below: >> >> >> > I'd like to see which platforms do not work with the npy_interrupt.h > stuff. If you have a unique platform please try the latest SVN. > > There is a NPY_NO_SIGNAL define that will "turn off" support for > interrupts which we can define on platforms that won't work. > > In particular, if the signal handling works on your platform, then numpy.core.multiarray.test_interrupt() should be interruptible.
Otherwise, it will continue until the incrementing counter becomes negative, which on my system takes about 10 seconds. -Travis From chanley at stsci.edu Thu Aug 24 13:32:54 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 24 Aug 2006 13:32:54 -0400 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDD527.1040008@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> <44EDD527.1040008@ieee.org> Message-ID: <44EDE2C6.40209@stsci.edu> Travis, Numpy version '1.0b4.dev3060' will now build on both a 32-bit Red Hat Enterprise 3 machine and Solaris 8. Chris From haase at msg.ucsf.edu Thu Aug 24 14:01:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 11:01:20 -0700 Subject: [Numpy-discussion] should a flatiter object get a 'dtype' attribute ? Message-ID: <200608241101.20636.haase@msg.ucsf.edu> Hi, I suppose the answer is no. But converting more code to numpy I got this error AttributeError: 'numpy.flatiter' object has no attribute 'dtype' (I found that I did not need the .flat in the first place ) So I was just wondering if (or how much) a flatiter object should behave like an ndarray ? Also this is an opportunity to have some talk about the relative newcomer "flatiter generator objects" ... Thanks, - Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 15:07:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 13:07:44 -0600 Subject: [Numpy-discussion] should a flatiter object get a 'dtype' attribute ? In-Reply-To: <200608241101.20636.haase@msg.ucsf.edu> References: <200608241101.20636.haase@msg.ucsf.edu> Message-ID: <44EDF900.5070206@ee.byu.edu> Sebastian Haase wrote: >Hi, >I suppose the answer is no .
>But converting more code to numpy I got this error >AttributeError: 'numpy.flatiter' object has no attribute 'dtype' >(I found that I did not need the .flat in the first place ) >So I was just wondering if (or how much) a flatiter object should behave like >an ndarray ? > > It's a good question. Right now, they act like an array when passed to functions, but don't have the same attributes and methods as an ndarray. I've not wanted to add them because I'm not sure how far the notion that a.flat is an actual array will go, so it's probably better not to try to hide the fact that it isn't an array object. I've slowly added a few things (like comparison operators), but the real purpose of the object returned from .flat is for indexing using flat indexes into the array. a.flat[10] = 10 a.flat[30] Beyond that you should use .ravel() (only copies when necessary to create a contiguous chunk of data) and .flatten() (copies all the time). -Travis From oliphant at ee.byu.edu Thu Aug 24 15:18:01 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 13:18:01 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: References: Message-ID: <44EDFB69.8090608@ee.byu.edu> Albert Strasheim wrote: >Dialog pops up: > >--------------------------- >python.exe - Application Error >--------------------------- >The exception unknown software exception (0xc0000029) occurred in the >application at location 0x7c86d474. > > >Click on OK to terminate the program >Click on CANCEL to debug the program >--------------------------- >OK Cancel >--------------------------- > >In the Python console it prints: > >-2147483648 > >If you can give me some idea of what should be happening, I can take a look >at fixing it. > > When does the crash happen? Does it happen when you press Ctrl-C? What's supposed to be happening is that we are registering a handler for Ctrl-C that longjmps back to just after the code between NPY_SIGINT_ON and NPY_SIGINT_OFF.
I'm not sure how to actually accomplish something like that under windows as I've heard mention that longjmp should not be used with signals under win32. The easy "fix" is to just define NPY_NO_SIGNAL in setup.py when on a platform that doesn't support using signals and longjmp (like apparently win32). If you could figure out what to do instead on windows that would be preferable. -Travis From kortmann at ideaworks.com Thu Aug 24 16:10:36 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Thu, 24 Aug 2006 13:10:36 -0700 (PDT) Subject: [Numpy-discussion] (no subject) Message-ID: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> >On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> Sorry for my ignorance, but I have not ever heard of or used mingw32. I >> am also using python 2.3. >http://en.wikipedia.org/wiki/Mingw explains in detail. >> >> Is there any way someone could possibly send me a brief walk through of >> how to install 1.0b3 on windows32? >do you know about the ("awesome" wiki website at scipy.org) >try your luck at >http://www.scipy.org/Build_for_Windows >> >> Also I am not sure that I know how to manipulate the code that you guys >> said that you have to so that it will work so if that is needed could you >> post a walk through of that also? >> >To my knowledge there is no need to "manipulate code" .... >Maybe you should try getting pre-built versions first. >Sebastian Haase Thank you for all of that. I followed the directions carefully: created a numpy folder, checked out the svn via http://svn.scipy.org/svn/numpy/trunk, changed to the numpy directory, and typed python setup.py config --compiler=mingw32 build --compiler=mingw32 install and then reinstalled SciPy because it says to install SciPy after numpy. And then I received this after trying to run my program. Any ideas, anyone?
$HOME=C:\Documents and Settings\Administrator CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib loaded ttfcache file C:\Documents and Settings\Administrator\.matplotlib\ttffont .cache matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data backend WXAgg version 2.6.3.2 Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. abnormal program termination From oliphant at ee.byu.edu Thu Aug 24 16:17:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 14:17:44 -0600 Subject: [Numpy-discussion] (no subject) In-Reply-To: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> References: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> Message-ID: <44EE0968.1030904@ee.byu.edu> kortmann at ideaworks.com wrote: >>On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> >> >>>Sorry for my ignorance, but I have not ever heard of or used mingw32. I >>>am also using python 2.3. >>> >>> >>http://en.wikipedia.org/wiki/Mingw explains in detail. 
>> >> > >$HOME=C:\Documents and Settings\Administrator >CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib >loaded ttfcache file C:\Documents and >Settings\Administrator\.matplotlib\ttffont >.cache >matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data >backend WXAgg version 2.6.3.2 >Overwriting info= from scipy.misc.helpmod >(was <function info at 0x01F896F0> from numpy.lib.utils) >Overwriting who= from scipy.misc.common (was <function who at 0x01F895F0> from numpy.lib.utils) >Overwriting source= from scipy.misc.helpmod >(was > from numpy.lib.utils) >RuntimeError: module compiled against version 1000000 of C-API but this >version >of numpy is 1000002 >Fatal Python error: numpy.core.multiarray failed to import... exiting. > > >abnormal program termination > > You have a module built against an older version of NumPy. What modules are being loaded? Perhaps it is matplotlib or SciPy -Travis From haase at msg.ucsf.edu Thu Aug 24 17:05:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 14:05:21 -0700 Subject: [Numpy-discussion] possible bug in C-API Message-ID: <200608241405.21834.haase@msg.ucsf.edu> Hi, I noticed in numpy/numarray/_capi.c: NA_NewAllFromBuffer() a) the original numarray function could create arrays of any (ndim) shape, while PyArray_FromBuffer() looks to me like the returned array is always 1D. b) in the code part npy_intp size = dtype->elsize; for ... size *= self->dimensions[i]; PyArray_FromBuffer(bufferObject, dtype, size, byteoffset); Is "size" here a multiple of the itemsize !?
I think I got a crash (in my code) that I fixed when I set size to (the equivalent of) N.prod(array.shape) Cheers, Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 18:38:43 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 16:38:43 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> Message-ID: <44EE2A73.2080406@ee.byu.edu> David Cournapeau wrote: >>>I'm working on some macros that will allow extensions to be >>>"interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the >>>implementation is complicated by the possibility of threads and making >>>sure to handle clean-up code correctly when the interrupt returns. >>> >>> >>> >This is funny, I was just thinking about that yesterday. This is a major >problem when writing C extensions in matlab (the manual says use the >matlab allocator instead of malloc/new/whatever, but when you call a >library, you cannot do that...). > > I'm glad many people are thinking about it. There is no reason we can't have a few ways to handle the situation. Currently in SVN, the simple NPY_SIGINT_ON [code] NPY_SIGINT_OFF approach is implemented (for platforms with sigsetjmp/siglongjmp). You can already use the approach suggested: if (PyOS_InterruptOccurred()) goto error to handle interrupts. The drawback of this approach is that the loop executes more slowly because a check for the interrupt occurs many times in the loop, which costs time. The advantage is that it may work with threads (I'm not clear on whether or not PyOS_InterruptOccurred can be called without the GIL, though).
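A pure-Python analogue of the flag-and-poll pattern described above (the SIGINT handler only sets a flag, and the hot loop checks it each iteration) can be sketched as follows. The function and variable names here are illustrative, not NumPy API:

```python
import signal

# Sketch of the PyOS_InterruptOccurred()-style polling pattern in pure
# Python: the SIGINT handler only records that an interrupt arrived,
# and the long-running loop polls the flag so it can exit at a
# well-defined point where cleanup is safe.
_interrupted = False

def _on_sigint(signum, frame):
    global _interrupted
    _interrupted = True

def interruptible_sum(n):
    """Sum 0..n-1, stopping early if Ctrl-C was pressed."""
    global _interrupted
    _interrupted = False
    old_handler = signal.signal(signal.SIGINT, _on_sigint)
    total = 0
    try:
        for i in range(n):
            if _interrupted:   # the per-iteration check that costs time
                break
            total += i
    finally:
        signal.signal(signal.SIGINT, old_handler)  # always restore
    return total
```

Without an interrupt the loop runs to completion; pressing Ctrl-C makes it return the partial sum instead of raising KeyboardInterrupt mid-computation, which mirrors the trade-off discussed in the thread.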
>I think the case proposed by Perry is too restrictive: it is really >common to use external libraries which we do not know whether they use >memory allocation inside the processing, and there is a need to clean >that too. > > If nothing is known about memory allocation of the external library, then I don't see how it can be safely interrupted using any mechanism. What is available now is sufficient. I played far too long with how to handle threads, but was not able to come up with a solution, so for now I've punted. -Travis From hetland at tamu.edu Thu Aug 24 18:42:19 2006 From: hetland at tamu.edu (Rob Hetland) Date: Thu, 24 Aug 2006 17:42:19 -0500 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EDFB69.8090608@ee.byu.edu> References: <44EDFB69.8090608@ee.byu.edu> Message-ID: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> In compiling matplotlib and scipy, I get errors complaining about multiply defined symbols (See below). I tried to fix this with -multiply_defined suppress but this did not work. Is there a way to make this go away?
-Rob Scipy error: c++ -bundle -undefined dynamic_lookup build/temp.macosx-10.4-i386-2.4/ Lib/sandbox/delaunay/_delaunay.o build/temp.macosx-10.4-i386-2.4/Lib/ sandbox/delaunay/VoronoiDiagramGenerator.o build/temp.macosx-10.4- i386-2.4/Lib/sandbox/delaunay/delaunay_utils.o build/temp.macosx-10.4- i386-2.4/Lib/sandbox/delaunay/natneighbors.o -Lbuild/temp.macosx-10.4- i386-2.4 -o build/lib.macosx-10.4-i386-2.4/scipy/sandbox/delaunay/ _delaunay.so /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status error: Command "c++ -bundle -undefined dynamic_lookup build/ temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o build/ temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o build/temp.macosx-10.4-i386-2.4/Lib/sandbox/ delaunay/delaunay_utils.o build/temp.macosx-10.4-i386-2.4/Lib/sandbox/ delaunay/natneighbors.o -Lbuild/temp.macosx-10.4-i386-2.4 -o build/ lib.macosx-10.4-i386-2.4/scipy/sandbox/delaunay/_delaunay.so" failed with exit status 1 matplotlib error: c++ -bundle -undefined dynamic_lookup build/temp.macosx-10.4-i386-2.4/ agg23/src/agg_trans_affine.o build/temp.macosx-10.4-i386-2.4/agg23/ src/agg_path_storage.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_bezier_arc.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_curves.o build/temp.macosx-10.4-i386-2.4/agg23/src/ 
agg_vcgen_dash.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_vcgen_stroke.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_rasterizer_scanline_aa.o build/temp.macosx-10.4-i386-2.4/agg23/ src/agg_image_filters.o build/temp.macosx-10.4-i386-2.4/src/_image.o build/temp.macosx-10.4-i386-2.4/src/ft2font.o build/temp.macosx-10.4- i386-2.4/src/mplutils.o build/temp.macosx-10.4-i386-2.4/CXX/ cxx_extensions.o build/temp.macosx-10.4-i386-2.4/CXX/cxxsupport.o build/temp.macosx-10.4-i386-2.4/CXX/IndirectPythonInterface.o build/ temp.macosx-10.4-i386-2.4/CXX/cxxextensions.o build/temp.macosx-10.4- i386-2.4/src/_ns_backend_agg.o -L/usr/local/lib -L/usr/lib -L/usr/ local/lib -L/usr/lib -lpng -lz -lstdc++ -lm -lfreetype -lz -lstdc++ - lm -o build/lib.macosx-10.4-i386-2.4/matplotlib/backends/ _ns_backend_agg.so /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/src/_image.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/src/_ns_backend_agg.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/src/_image.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/src/_ns_backend_agg.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status error: Command "c++ -bundle -undefined dynamic_lookup build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_trans_affine.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_path_storage.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_bezier_arc.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_curves.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_vcgen_dash.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_vcgen_stroke.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_rasterizer_scanline_aa.o build/temp.macosx-10.4-i386-2.4/agg23/src/agg_image_filters.o build/ 
temp.macosx-10.4-i386-2.4/src/_image.o build/temp.macosx-10.4- i386-2.4/src/ft2font.o build/temp.macosx-10.4-i386-2.4/src/mplutils.o build/temp.macosx-10.4-i386-2.4/CXX/cxx_extensions.o build/ temp.macosx-10.4-i386-2.4/CXX/cxxsupport.o build/temp.macosx-10.4- i386-2.4/CXX/IndirectPythonInterface.o build/temp.macosx-10.4- i386-2.4/CXX/cxxextensions.o build/temp.macosx-10.4-i386-2.4/src/ _ns_backend_agg.o -L/usr/local/lib -L/usr/lib -L/usr/local/lib -L/usr/ lib -lpng -lz -lstdc++ -lm -lfreetype -lz -lstdc++ -lm -o build/ lib.macosx-10.4-i386-2.4/matplotlib/backends/_ns_backend_agg.so" failed with exit status 1 On Aug 24, 2006, at 2:18 PM, Travis Oliphant wrote: > Albert Strasheim wrote: > >> Dialog pops up: >> >> --------------------------- >> python.exe - Application Error >> --------------------------- >> The exception unknown software exception (0xc0000029) occurred in the >> application at location 0x7c86d474. >> >> >> Click on OK to terminate the program >> Click on CANCEL to debug the program >> --------------------------- >> OK Cancel >> --------------------------- >> >> In the Python console it prints: >> >> -2147483648 >> >> If you can give me some idea of what should be happening, I can >> take a look >> at fixing it. >> >> > > When does the crash happen? Does it happen when you press Ctrl-C? > > What's supposed to be happening is that we are registering a > handler for > Ctrl-C that longjmps back to just after the code between NPY_SIGINT_ON > and NPY_SIGINT_OFF. > > I'm not sure how to actually accomplish something like that under > windows as I've heard mention that longjmp should not be used with > signals under win32. > > The easy "fix" is to just define NPY_NO_SIGNAL in setup.py when on a > platform that doesn't support using signals and longjmp (like > apparently > win32). > > If you could figure out what to do instead on windows that would be > preferrable. 
> > -Travis > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From oliphant at ee.byu.edu Thu Aug 24 18:52:25 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 16:52:25 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> Message-ID: <44EE2DA9.50908@ee.byu.edu> Rob Hetland wrote: >In compiling matplotlib and scipy, I get errors complaining about >multiply defined symbols (See below). I tried to fix this with - >multiply_defined suppress but this did not work. Is there a way to >make this go away? > > define NPY_NO_SIGNAL for now. -Travis From paul_midgley2000 at yahoo.co.uk Thu Aug 24 19:28:59 2006 From: paul_midgley2000 at yahoo.co.uk (Paul Midgley) Date: Thu, 24 Aug 2006 23:28:59 +0000 (GMT) Subject: [Numpy-discussion] Numpy-discussion Digest, Vol 3, Issue 61 In-Reply-To: Message-ID: <20060824232859.61786.qmail@web25710.mail.ukl.yahoo.com> Thanks for your help -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Thu Aug 24 19:39:45 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 17:39:45 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> Message-ID: <44EE38C1.8000804@ee.byu.edu> Rob Hetland wrote: >In compiling matplotlib and scipy, I get errors complaining about >multiply defined symbols (See below). I tried to fix this with >-multiply_defined suppress but this did not work. Is there a way to >make this go away? > > Can you try current SVN again, to see if it now works? -Travis From cookedm at physics.mcmaster.ca Thu Aug 24 19:40:55 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 24 Aug 2006 19:40:55 -0400 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE2A73.2080406@ee.byu.edu> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> Message-ID: <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> On Aug 24, 2006, at 18:38 , Travis Oliphant wrote: > > You can already use the approach suggested: > > if (PyOS_InterruptOccurred()) goto error > > to handle interrupts. The drawback of this approach is that the loop > executes more slowly because a check for the interrupt occurs many > times > in the loop which costs time. > > The advantage is that it may work with threads (I'm not clear on > whether > or not PyOS_InterruptOccurred can be called without the GIL, though). It should be; it's pure C code:

int
PyOS_InterruptOccurred(void)
{
    if (!interrupted)
        return 0;
    interrupted = 0;
    return 1;
}

(where interrupted is a static int). -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From haase at msg.ucsf.edu Thu Aug 24 20:09:48 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 17:09:48 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules Message-ID: <200608241709.48522.haase@msg.ucsf.edu> Hi, I get TypeError: array cannot be safely cast to required type when calling hstack() ( which calls concatenate() ) on two arrays being an int32 and a float32 respectively. I understand now that an int32 cannot be safely converted into a float32 but why does concatenate not automatically up(?) cast to float64 ?? Is this really required to be done *explicitly* every time ? ** In general it makes float32 cumbersome to use. ** ( Background: my large image data is float32 (float64 would require too much memory) and the hstack call happens inside scipy plt module when I try to get a 1d line profile and the "y_data" is hstack'ed with the x-axis values (int32)) ) Thanks, Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 20:11:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 18:11:09 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> Message-ID: <44EE401D.3070006@ee.byu.edu> David M. Cooke wrote: >On Aug 24, 2006, at 18:38 , Travis Oliphant wrote: > > > >>You can already use the approach suggested: >> >>if (PyOS_InterruptOccurred()) goto error >> >>to handle interrupts. The drawback of this approach is that the loop >>executes more slowly because a check for the interrupt occurs many >>times >>in the loop which costs time.
>> >>The advantage is that it may work with threads (I'm not clear on >>whether >>or not PyOS_InterruptOccurred can be called without the GIL, though). >> >> > >It should be; it's pure C code: > >int >PyOS_InterruptOccurred(void) >{ > if (!interrupted) > return 0; > interrupted = 0; > return 1; >} > > I tried to test this with threads using the following program and it doesn't seem to respond to interrupts.

import threading
import numpy.core.multiarray as ncm

class mythread(threading.Thread):
    def run(self):
        print "Starting thread", self.getName()
        ncm.test_interrupt(1)
        print "Ending thread", self.getName()

m1 = mythread()
m2 = mythread()
m1.start()
m2.start()

From oliphant at ee.byu.edu Thu Aug 24 20:28:19 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 18:28:19 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241709.48522.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> Message-ID: <44EE4423.2010909@ee.byu.edu> Sebastian Haase wrote: >Hi, >I get >TypeError: array cannot be safely cast to required type > >when calling hstack() ( which calls concatenate() ) >on two arrays being a int32 and a float32 respectively. > >I understand now that a int32 cannot be safely converted into a float32 >but why does concatenate not automatically >up(?) cast to float64 ?? > > Basically, NumPy is following Numeric's behavior of raising an error in this case of unsafe casting in concatenate. For functions that are not universal-function objects, mixed-type behavior works basically just like Numeric did (using the ordering of the types to determine which one to choose as the output). It could be argued that the ufunc-rules should be followed instead.
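The casting issue in this thread can be made concrete with a short sketch. Note that it runs against a modern NumPy, where the int32/float32 mix is upcast to float64 automatically rather than raising; the explicit astype() shown is the workaround Sebastian needed in the 1.0-beta era being discussed:

```python
import numpy as np

x = np.arange(4, dtype=np.int32)   # e.g. integer x-axis values
y = np.ones(4, dtype=np.float32)   # e.g. a float32 line profile

# int32 -> float32 is not a "safe" cast: float32 has only a 24-bit
# mantissa, so large int32 values would lose precision.
assert not np.can_cast(np.int32, np.float32)

# Explicitly casting both operands to float64, which represents every
# int32 and every float32 value exactly, sidesteps the
# "cannot be safely cast" error.
both = np.hstack([x.astype(np.float64), y.astype(np.float64)])
```

The cost is the doubled memory of float64 for the concatenated result, which is exactly the trade-off Sebastian was trying to avoid for his large image arrays.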
-Travis From wbaxter at gmail.com Thu Aug 24 20:39:50 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 25 Aug 2006 09:39:50 +0900 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Sasha wrote: > On 8/24/06, Bill Baxter wrote: > >[snip] it would be > > nice to add a concise definition of "ufunc" to the numpy glossary: > > http://www.scipy.org/Numpy_Glossary. > > > > done > > > Can anyone come up with such a definition? > > I copied the definition from the old Numeric manual. > > > Here's my stab at it: > > > > ufunc: A function that operates element-wise on arrays. > > > This is not entirely correct. Ufuncs operate on anything that can be > passed to asarray: arrays, python lists, tuples or scalars. Hey Sasha. Your defnition may be more correct, but I have to confess I don't understand it. "Universal function. Universal functions follow similar rules for broadcasting, coercion and "element-wise operation"." What is "coercion"? (Who or what is being coerced to do what?) and what does it mean to "follow similar rules for ... coercion"? Similar to what? --bill From haase at msg.ucsf.edu Thu Aug 24 20:47:08 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 17:47:08 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <44EE4423.2010909@ee.byu.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> Message-ID: <200608241747.08195.haase@msg.ucsf.edu> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > Sebastian Haase wrote: > >Hi, > >I get > >TypeError: array cannot be safely cast to required type > > > >when calling hstack() ( which calls concatenate() ) > >on two arrays being a int32 and a float32 respectively. > > > >I understand now that a int32 cannot be safely converted into a float32 > >but why does concatenate not automatically > >up(?) cast to float64 ?? 
> > Basically, NumPy is following Numeric's behavior of raising an error in > this case of unsafe casting in concatenate. For functions that are not > universal-function objects, mixed-type behavior works basically just > like Numeric did (using the ordering of the types to determine which one > to choose as the output). > > It could be argued that the ufunc-rules should be followed instead. > > -Travis > Are you saying the ufunc-rules would convert "int32-float32" to float64 and hence make my code "just work" !? And why are there two sets of rules ? Are the Numeric rules used at many places ? Thanks, Sebastian Haase
From simon at arrowtheory.com Fri Aug 25 07:42:19 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 25 Aug 2006 12:42:19 +0100 Subject: [Numpy-discussion] tensor dot ? Message-ID: <20060825124219.6581a608.simon@arrowtheory.com> >>> numpy.dot.__doc__ matrixproduct(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. Does numpy support summing over arbitrary dimensions, as in tensor calculus ? I could cook up something that uses transpose and dot, but it's reasonably tricky i think :) Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph.
61 02 6249 6940 http://arrowtheory.com From david at ar.media.kyoto-u.ac.jp Thu Aug 24 23:11:26 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 12:11:26 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE2A73.2080406@ee.byu.edu> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> Message-ID: <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > I'm glad many people are thinking about it. There is no reason we > can't have a few ways to handle the situation. > > Currently in SVN, the simple > > NPY_SIGINT_ON > [code] > NPY_SIGINT_OFF > > approach is implemented (for platforms with sigsetjmp/siglongjmp). > > You can already use the approach suggested: > > if (PyOS_InterruptOccurred()) goto error > > to handle interrupts. The drawback of this approach is that the loop > executes more slowly because a check for the interrupt occurs many times > in the loop which costs time. > I am not sure whether there are other solutions... This is the way I saw signal handling done in common programs when I looked for a solution for my matlab extensions. > The advantage is that it may work with threads (I'm not clear on whether > or not PyOS_InterruptOccurred can be called without the GIL, though). > > >> I think the case proposed by Perry is too restrictive: it is really >> common to use external libraries where we do not know whether they use >> memory allocation inside the processing, and there is a need to clean >> that up too. >> >> >> > > If nothing is known about memory allocation of the external library, > then I don't see how it can be safely interrupted using any mechanism. > If the library does nothing w.r.t. signals, then you just have to clean up everything related to the library once you catch a signal. This is no different from cleaning up your own code.
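An aside on Simon's tensor-dot question above: contracting arbitrary pairs of axes can indeed be "cooked up" from transpose, reshape, and dot. A minimal sketch (the function name `tensor_dot` and the axis convention are mine, chosen for illustration; modern NumPy ships `numpy.tensordot`, which does essentially this):

```python
import numpy as np

def tensor_dot(a, b, axes_a, axes_b):
    """Contract a and b over the paired axes axes_a/axes_b using only
    transpose, reshape, and dot."""
    free_a = [i for i in range(a.ndim) if i not in axes_a]
    free_b = [i for i in range(b.ndim) if i not in axes_b]
    # Move the contracted axes of `a` to the end and those of `b` to the
    # front, so the contraction becomes an ordinary matrix product.
    at = a.transpose(free_a + list(axes_a))
    bt = b.transpose(list(axes_b) + free_b)
    n = int(np.prod([a.shape[i] for i in axes_a]))
    out_shape = [a.shape[i] for i in free_a] + [b.shape[i] for i in free_b]
    return np.dot(at.reshape(-1, n), bt.reshape(n, -1)).reshape(out_shape)
```

With `axes_a=[1]`, `axes_b=[0]` this reduces to the ordinary matrix product, so `dot` really is the special case.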
Actually, cleaning libraries is the main reason why I implemented this signal scheme in matlab extensions: since they cannot use the matlab memory allocator, and because they live in the same memory space, calling the same extension several times can quickly corrupt most of the matlab memory space. Maybe there are some problems I am not aware of ? David From oliphant.travis at ieee.org Thu Aug 24 22:46:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 20:46:51 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241747.08195.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> Message-ID: <44EE649B.1020500@ieee.org> Sebastian Haase wrote: > On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > > Are you saying the ufunc-rules would convert "int32-float32" to float64 and > hence make my code "just work" !? > Yes. That's what I'm saying (but you would get float64 out --- but if you didn't want that then you would have to be specific). > And why are there two sets of rules ? > Because there are two modules (multiarray and umath) where the functionality is implemented. > Are the Numeric rules used at many places ? > Not that many. I did abstract the notion to a C-API: PyArray_ConvertToCommonType and implemented the scalars-don't-cause-upcasting part of the ufunc rules in that code. But, I followed the old-style Numeric coercion rules for the rest of it (because I was adapting Numeric). Right now, unless there are strong objections, I'm leaning toward changing that so that the same coercion rules are used whenever a common type is needed. It would not be that difficult of a change.
-Travis From ndarray at mac.com Thu Aug 24 23:10:24 2006 From: ndarray at mac.com (Sasha) Date: Thu, 24 Aug 2006 23:10:24 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Bill Baxter wrote: [snip] > Hey Sasha. Your definition may be more correct, but I have to confess > I don't understand it. > > "Universal function. Universal functions follow similar rules for > broadcasting, coercion and "element-wise operation"." > > What is "coercion"? (Who or what is being coerced to do what?) and > what does it mean to "follow similar rules for ... coercion"? Similar > to what? This is not my definition, I just rephrased the introductory paragraph from the ufunc section of "Numerical Python". Feel free to edit it so that it makes more sense. Please note that I originally intended the "Numpy Glossary" not as a place to learn new terms, but as a guide for those who know more than one meaning of the terms or more than one way to call something. (See the preamble.) This may explain why I did not include "ufunc" to begin with. (I remember deciding not to include "ufunc", but I don't remember the exact reason anymore.) I would welcome an effort to make the glossary more novice friendly, but not at the expense of oversimplifying things. BTW, do you think "Rank ... (2) number of orthogonal dimensions of a matrix" is clear? Considering that a matrix is defined as "an array of rank 2"? Is "rank" in the linear algebra sense common enough in numpy documentation to be included in the glossary? For comparison, here are a few alternative formulations of the matrix rank definition: "The rank of a matrix or a linear map is the dimension of the image of the matrix or the linear map, corresponding to the number of linearly independent rows or columns of the matrix, or to the number of nonzero singular values of the map."
"In linear algebra, the column rank (row rank respectively) of a matrix A with entries in some field is defined to be the maximal number of columns (rows respectively) of A which are linearly independent." From oliphant.travis at ieee.org Thu Aug 24 23:20:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 21:20:45 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> Message-ID: <44EE6C8D.5000208@ieee.org> David Cournapeau wrote: >>> >>> >> If nothing is known about memory allocation of the external library, >> then I don't see how it can be safely interrupted using any mechanism. >> >> > If the library does nothing w.r.t signals, then you just have to clean > all the things related to the library once > you caught a signal. This is no different than cleaning your own code. > Right, as long as you know what to do you are O.K. I was just thinking about a hypothetical situation where the library allocated some temporary memory that it was going to free at the end of the subroutine but then an interrupt jumped out back to your code before it could finish. In a case like this, you would have to use the "check if interrupt has occurred" approach before and after the library call. But, then that library call is not interruptable. I could also see wanting to be able to interrupt a library calculation when you know it isn't allocating memory. So, I like having both possibilities available. So far we haven't actually put anything in the numpy code itself. I'm leaning to putting PyOS_InterruptOccurred-style checks in a few places at some point down the road. 
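The "check a flag set by the signal handler" pattern Travis and David are weighing has a direct pure-Python analogue. A toy sketch (the names `interrupted` and `long_computation` are mine, and `signal.raise_signal` stands in for a real Ctrl+C; the C-extension version would poll `PyOS_InterruptOccurred()` instead of a Python-level flag):

```python
import signal

interrupted = False

def _handler(signum, frame):
    # Only record the interrupt here; the computation notices the
    # flag at its next check and can then free resources cleanly.
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, _handler)

def long_computation(n):
    total = 0
    for i in range(n):
        if interrupted:        # the periodic check costs a little time...
            return None        # ...but lets us bail out in a known state
        total += i
        if i == 5:             # simulate a Ctrl+C arriving mid-loop
            signal.raise_signal(signal.SIGINT)
    return total

print(long_computation(1000))  # the loop sees the flag and bails out
```

This is exactly the trade-off discussed above: the per-iteration check slows the loop slightly, but the interrupted code, not the signal handler, decides when and how to clean up.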
-Travis From haase at msg.ucsf.edu Thu Aug 24 23:59:19 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 20:59:19 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <44EE649B.1020500@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE649B.1020500@ieee.org> Message-ID: <44EE7597.7000908@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: >> >> Are you saying the ufunc-rules would convert "int32-float32" to float64 and >> hence make my code "just work" !? >> > Yes. That's what I'm saying (but you would get float64 out --- but if > you didn't want that then you would have to be specific). > >> And why are there two sets of rules ? >> > Because there are two modules (multiarray and umath) where the > functionality is implemented. > >> Are the Numeric rules used at many places ? >> > Not that many. I did abstract the notion to a C-API: > PyArray_ConvertToCommonType and implemented the > scalars-don't-cause-upcasting part of the ufunc rules in that code. > But, I followed the old-style Numeric coercion rules for the rest of it > (because I was adapting Numeric). > > Right now, unless there are strong objections, I'm leaning to changing > that so that the same coercion rules are used whenever a common type is > needed. If you mean keeping the ufunc rules (which seem more liberal, fix my problem ;-) and might make using float32 in general more painless) - I would be all for it ... simplifying is always good in the long term ... Cheers, Sebastian > > It would not be that difficult of a change. 
From oliphant.travis at ieee.org Fri Aug 25 00:03:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 22:03:10 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241747.08195.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> Message-ID: <44EE767E.7000207@ieee.org> Sebastian Haase wrote: > On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> Hi, >>> I get >>> TypeError: array cannot be safely cast to required type >>> >>> when calling hstack() ( which calls concatenate() ) >>> on two arrays being a int32 and a float32 respectively. >>> >>> I understand now that a int32 cannot be safely converted into a float32 >>> but why does concatenate not automatically >>> up(?) cast to float64 ?? >>> >> Basically, NumPy is following Numeric's behavior of raising an error in >> this case of unsafe casting in concatenate. For functions that are not >> universal-function objects, mixed-type behavior works basically just >> like Numeric did (using the ordering of the types to determine which one >> to choose as the output). >> >> It could be argued that the ufunc-rules should be followed instead. >> >> -Travis >> >> > Are you saying the ufunc-rules would convert "int32-float32" to float64 and > hence make my code "just work" !? > This is now the behavior in SVN. Note that this is different from both Numeric (which gave an error) and numarray (which coerced to float32). But, it is consistent with how mixed-types are handled in calculations and is thus an easier rule to explain. Thanks for the testing. 
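The float64 behavior Travis committed is still the rule, so it can be checked directly with any modern NumPy. A small demo (nothing here is from the original thread beyond the dtypes involved):

```python
import numpy as np

a = np.arange(3, dtype=np.int32)
b = np.arange(3, dtype=np.float32)

# concatenate/hstack uses the same common-type rule as ufuncs...
stacked = np.hstack((a, b))
# ...so it matches what mixed-type arithmetic produces:
summed = a + b
print(stacked.dtype, summed.dtype)   # float64 float64

# The scalars-don't-cause-upcasting part of the rules:
print((b * 2).dtype)                 # float32
```

Since int32 values cannot all be held safely in a float32, both operations promote to float64, while a plain Python scalar leaves the array dtype alone.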
-Travis From david at ar.media.kyoto-u.ac.jp Fri Aug 25 00:39:23 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 13:39:23 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE6C8D.5000208@ieee.org> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> Message-ID: <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > > Right, as long as you know what to do you are O.K. I was just thinking > about a hypothetical situation where the library allocated some > temporary memory that it was going to free at the end of the subroutine > but then an interrupt jumped out back to your code before it could > finish. In a case like this, you would have to use the "check if > interrupt has occurred" approach before and after the library call. Indeed.

By the way, I tried something for python.thread + signals. This is posix specific, and it works as expected on linux:

- first, a C extension which implements the signal handling. It has a function called hello, which is the entry point of the C module, and calls the function process (which does random computation). It checks if it got a SIGINT signal, and returns -1 if caught; it returns 0 if no SIGINT was caught.
- the extension is compiled into a python module (I used boost python because I am too lazy to find out how to do it in C :) )
- a python script creates several threads running the hello function. They run in parallel, and ctrl+C is correctly handled.
I think this is signal specific, and this needs to be improved (this is just meant as a toy example):

import threading
import hello
import time

class mythread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print "Starting thread", self.getName()
        st = 0
        while st == 0:
            st = hello.foo(self.getName())
            # sleep to force the python interpreter to run
            # other threads if available
            time.sleep(1)
        if st == -1:
            print self.getName() + " got signal"
        print "Ending thread", self.getName()

nthread = 5
t = [mythread() for i in range(nthread)]
[i.start() for i in t]

Then, you have something like:

Starting thread Thread-1
Thread-1 processing... done
clean called
Starting thread Thread-5
Thread-5 processing... done
clean called
Starting thread Thread-3
Thread-3 processing... done
clean called
Starting thread Thread-2
Thread-2 processing... done
hello.c:hello signal caught line 56 for thread Thread-2
clean called
Thread-1 processing... done
clean called
Starting thread Thread-4
Thread-4 processing... done
clean called
Thread-5 processing... done
clean called
Thread-3 processing... done
hello.c:hello signal caught line 56 for thread Thread-3
clean called
Thread-2 got signal
Ending thread Thread-2
Thread-1 processing... done
clean called
Thread-4 processing... done
clean called
Thread-5 processing... done
clean called
Thread-3 got signal
Ending thread Thread-3
Thread-1 processing... done
hello.c:hello signal caught line 56 for thread Thread-1
clean called
Thread-4 processing... done
clean called
Thread-5 processing... done
hello.c:hello signal caught line 56 for thread Thread-5
clean called
Thread-1 got signal
Ending thread Thread-1
Thread-4 processing... done
clean called
Thread-5 got signal
Ending thread Thread-5
Thread-4 processing... done
clean called
Thread-4 processing... done
clean called
Thread-4 processing...
done
hello.c:hello signal caught line 56 for thread Thread-4
clean called
Thread-4 got signal
Ending thread Thread-4

(SIGINT is received when you press Ctrl+C on linux) You can find all sources here: http://www.ar.media.kyoto-u.ac.jp/members/david/numpysig/ Please note that I know almost nothing about all this stuff, I just naively implemented it from the GNU C library example, and it always worked for me in matlab on my machine. I do not know if this is portable, if this can work for other signals, etc... David From oliphant.travis at ieee.org Fri Aug 25 02:10:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 00:10:26 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> Message-ID: <44EE9452.1080007@ieee.org> David Cournapeau wrote: > Indeed. > > By the way, I tried something for python.thread + signals. This is posix > specific, and it works as expected on linux: > Am I right that this could be accomplished simply by throwing away all the interrupt handling stuff in the code and checking for PyOS_InterruptOccurred() in the place where you check for the global variable that your signal handler uses? Your signal handler does essentially what Python's signal handler already does, if I'm not mistaken.
-Travis From stefan at sun.ac.za Fri Aug 25 03:45:26 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 25 Aug 2006 09:45:26 +0200 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: <20060825074526.GC17119@mentat.za.net> On Thu, Aug 24, 2006 at 11:10:24PM -0400, Sasha wrote: > I would welcome an effort to make the glossary more novice friendly, > but not at the expense of oversimplifying things. > > BTW, do you think "Rank ... (2) number of orthogonal dimensions of a > matrix" is clear? Considering that matrix is defined a "an array of > rank 2"? Is "rank" in linear algebra sense common enough in numpy > documentation to be included in the glossary? > > For comparison, here are a few alternative formulations of matrix rank > definition: > > "The rank of a matrix or a linear map is the dimension of the image of > the matrix or the linear map, corresponding to the number of linearly > independent rows or columns of the matrix, or to the number of nonzero > singular values of the map." > > > "In linear algebra, the column rank (row rank respectively) of a > matrix A with entries in some field is defined to be the maximal > number of columns (rows respectively) of A which are linearly > independent." > I prefer the last definition. Introductory algebra courses teach the term "linearly independent" before "orthogonal" (IIRC). As for "linear map", it has other names, too, and doesn't (in my mind) clarify the definition of rank in this context. 
Regards Stéfan From david at ar.media.kyoto-u.ac.jp Fri Aug 25 06:12:57 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 19:12:57 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE9452.1080007@ieee.org> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> <44EE9452.1080007@ieee.org> Message-ID: <44EECD29.5070207@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > David Cournapeau wrote: >> Indeed. >> >> By the way, I tried something for python.thread + signals. This is posix >> specific, and it works as expected on linux: >> > Am I right that this could be accomplished simply by throwing away > all the interrupt handling stuff in the code and checking for > PyOS_InterruptOccurred() in the place where you check for the global > variable that your signal handler uses? Your signal handler does > essentially what Python's signal handler already does, if I'm not mistaken. I don't know how the python signal handler works, but I believe it should do more or less the same, indeed. The key idea is that it is important to mask other signals related to interrupting. To have a relatively clear view on this, if you have not seen it, you may take a look at the gnu C doc on signal handling: http://www.gnu.org/software/libc/manual/html_node/Defining-Handlers.html#Defining-Handlers After having given it some thought, I am wondering about what exactly we are trying to do: - the main problem is to be able to interrupt some function which may take a long time to compute, without corrupting the whole python process. - for that, those functions need to be able to trap the usual signals corresponding to interrupt (SIGINT, etc... on Unix, equivalents on windows).
There are two ways to handle a signal:

- check regularly some global (that is, global to the whole process) value, and change this value if a signal is trapped. That's the easier way, but this is not thread safe, as I first thought (I will code an example if I have time).
- the signal handler jumps to another point of the program where cleaning is done: this is more complicated, and I am not sure we need the complication (I have never used this scheme, so I may just miss the point totally). I don't even want to think how it works in a multi-threading environment :)

Now, the threading issue came in, and I am not sure why we need to care: this is a problem if numpy is implemented in a multi-thread way, but I don't believe it to be the case, right ? Another solution, which is used I think in more sophisticated programs, is having one thread with high priority, whose only job is to detect signals, and to mask all signals in all other threads. Again, this seems overkill (and highly non portable) ? And this should be the python interpreter's job, no ? Actually, as this is a generic problem for any python extension code, other really smart people should have thought about that... If I am interpreting correctly what is said here http://docs.python.org/lib/module-signal.html, I believe that what you suggest (using PyOS_InterruptOccurred() at some points) is what shall be done: the python interpreter makes sure that the signal is sent to the main thread, that is, the thread where numpy is executed (that's my understanding of the way the python interpreter works, not a fact). David From faltet at carabos.com Fri Aug 25 06:11:25 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 25 Aug 2006 12:11:25 +0200 Subject: [Numpy-discussion] [ANN] PyTables 1.3.3 released Message-ID: <200608251211.25886.faltet@carabos.com>

===========================
 Announcing PyTables 1.3.3
===========================

I'm happy to announce a new minor release of PyTables.
In this one, we have focused on improving compatibility with the latest beta versions of NumPy (0.9.8, 1.0b2, 1.0b3 and higher), adding some improvements and the typical bunch of fixes (some of them are important, like the possibility of re-using the same nested class in the declaration of table records; see later). Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed.

Changes more in depth
=====================

Improvements:

- Added some workarounds for a couple of 'features' of recent versions of NumPy. Now, PyTables should work with a broad range of NumPy versions, ranging from 0.9.8 up to 1.0b3 (and hopefully beyond, but let's see).

- When a loop appending to a table is not flushed before the node is unbound (and hence becomes ``killed`` in PyTables slang), like in::

    import tables as T

    class Item(T.IsDescription):
        name = T.StringCol(length=16)
        vals = T.Float32Col(0.0)

    fileh = T.openFile("/tmp/test.h5", "w")
    table = fileh.createTable(fileh.root, 'table', Item)
    for i in range(100):
        table.row.append()
    #table.flush()  # uncomment this to prevent the warning
    table = None  # Unbinding table node!

  a ``PerformanceWarning`` is issued telling the user that it is *much* recommended to flush the buffers of a table before unbinding it. Hopefully, this will also prevent other scary errors (like ``Illegal Instruction``, ``Malloc(): trying to call free() twice``, ``Bus Error`` or ``Segmentation fault``) that some people are seeing lately and which are most probably related to this issue.

Bug fixes:

- In situations where the same metaclass is used for declaring several columns in a table, like in::

    class Nested(IsDescription):
        uid = IntCol()
        data = FloatCol()

    class B_Candidate(IsDescription):
        nested1 = Nested()
        nested2 = Nested()

  they were sharing the same column metadata behind the scenes, introducing several inconsistencies in it. This has been fixed.
- More work on the different padding conventions between NumPy/numarray. Now, all trailing spaces in chararrays are stripped off during write/read operations. This means that when retrieving NumPy chararrays, spurious trailing spaces shouldn't appear anymore (not even in the context of recarrays). The drawback is that you will lose *all* the trailing spaces, no matter whether you want them in this place or not. This is not a very comfortable situation to deal with, but hopefully things will get better when NumPy is at the core of PyTables. In the meanwhile, I hope that the current behaviour will be a minor evil in most situations. This closes ticket #13 (again).

- Solved a problem with conversions from numarray chararrays to numpy objects. Before, when saving numpy chararrays with a declared length of N where none of the components reached such a length, the dtype of the numpy chararray retrieved was the maximum length of the component strings. This has been corrected.

- Fixed a minor glitch in the detection of signedness in IntAtom classes. Thanks to Norbert Nemec for reporting this one and providing the fix.

Known bugs:

- Using ``Row.update()`` in tables with some columns marked as indexed gives a ``NotImplemented`` error although it should not. This is fixed in SVN trunk and the functionality will be available in the 1.4.x series. Meanwhile, a workaround is to refrain from declaring columns as indexed and to index them *after* the update process (with Col.createIndex(), for example).

Deprecated features:

- None

Backward-incompatible changes:

- Please see the ``RELEASE-NOTES.txt`` file.

Important note for Windows users
================================

If you are willing to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003.
It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP

What it is
==========

PyTables is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high-performance data storage and retrieval. PyTables runs on top of the HDF5 library and the numarray package (but NumPy and Numeric are also supported) for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds.

Platforms
=========

This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please contact us in case you are experiencing problems.
Resources
=========

Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/

Acknowledgments
===============

Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge, who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables.

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

----

**Enjoy data!**

-- The PyTables Team

From svetosch at gmx.net Fri Aug 25 06:27:51 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 25 Aug 2006 12:27:51 +0200 Subject: [Numpy-discussion] Version 1.0b3 In-Reply-To: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> References: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> Message-ID: <44EED0A7.2000103@gmx.net> kortmann at ideaworks.com wrote: > Since no one has downloaded 1.0b3 yet, if someone wants to put up the > windows version for python2.3 i would be more than happy to be the first > person to download it :) > I'm sorry, this is *not* for python 2.3, but I posted a build of current svn for python 2.4 under windows here (direct download link): http://www.wiwi.uni-frankfurt.de/profs/nautz/downloads/software/numpy-1.0b4.dev3068.win32-py2.4.exe I didn't do anything except checking out and compiling it, so I guess this is not optimized in any way. Maybe it's still useful for some people.
cheers, Sven From charlesr.harris at gmail.com Fri Aug 25 09:34:20 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Aug 2006 07:34:20 -0600 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: <20060825074526.GC17119@mentat.za.net> References: <20060825074526.GC17119@mentat.za.net> Message-ID: Hi, On 8/25/06, Stefan van der Walt wrote: > > On Thu, Aug 24, 2006 at 11:10:24PM -0400, Sasha wrote: > > I would welcome an effort to make the glossary more novice friendly, > > but not at the expense of oversimplifying things. > > > > BTW, do you think "Rank ... (2) number of orthogonal dimensions of a > > matrix" is clear? Considering that matrix is defined a "an array of > > rank 2"? Is "rank" in linear algebra sense common enough in numpy > > documentation to be included in the glossary? > > > > For comparison, here are a few alternative formulations of matrix rank > > definition: > > > > "The rank of a matrix or a linear map is the dimension of the image of > > the matrix or the linear map, corresponding to the number of linearly > > independent rows or columns of the matrix, or to the number of nonzero > > singular values of the map." > > > > > > "In linear algebra, the column rank (row rank respectively) of a > > matrix A with entries in some field is defined to be the maximal > > number of columns (rows respectively) of A which are linearly > > independent." > > > > I prefer the last definition. Introductory algebra courses teach the > term "linearly independent" before "orthogonal" (IIRC). As for > "linear map", it has other names, too, and doesn't (in my mind) > clarify the definition of rank in this context. Matrix rank has nothing to do with numpy rank. Numpy rank is simply the number of indices required to address an element of an ndarray. 
I always thought a better name for the Numpy rank would be dimensionality, but like everything else one gets used to the numpy jargon, it only needs to be defined someplace for what it is. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndarray at mac.com Fri Aug 25 09:48:54 2006 From: ndarray at mac.com (Sasha) Date: Fri, 25 Aug 2006 09:48:54 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: On 8/25/06, Charles R Harris wrote: > Matrix rank has nothing to do with numpy rank. Numpy rank is simply the > number of indices required to address an element of an ndarray. I always > thought a better name for the Numpy rank would be dimensionality, but like > everything else one gets used to the numpy jargon, it only needs to be > defined someplace for what it is. That's my point exactly. The rank(2) definition was added by Sebastian Haase who advocates the use of the term "ndims" instead of "rank". I've discussed the use of "dimensionality" in the preamble. Note that ndims stands for the number of dimensions, not dimensionality. I don't want to remove rank(2) without hearing from Sebastian first and I appreciate his effort to improve the glossary. Maybe we should add a "matrix rank" entry instead.
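The two senses of "rank" being contrasted in this thread can be shown side by side. This is a sketch against a modern NumPy, where `ndarray.ndim` and `numpy.linalg.matrix_rank` spell out the two concepts (the 2006 API differed slightly):

```python
import numpy as np

# A 2-D array whose second row is a multiple of the first:
a = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(a.ndim)                    # numpy "rank": indices needed to address an element -> 2
print(np.linalg.matrix_rank(a))  # linear-algebra rank: linearly independent rows -> 1
```

So a "matrix" in the glossary sense (an array of rank 2) can have a linear-algebra rank of 0, 1, or 2, which is exactly why the overloaded term confuses.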
From haase at msg.ucsf.edu Fri Aug 25 11:18:11 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 08:18:11 -0700 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EE767E.7000207@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> Message-ID: <44EF14B3.2030904@msg.ucsf.edu> was: Re: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules Travis Oliphant wrote: > Sebastian Haase wrote: >> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: >> >>> Sebastian Haase wrote: >>> >>>> Hi, >>>> I get >>>> TypeError: array cannot be safely cast to required type >>>> >>>> when calling hstack() ( which calls concatenate() ) >>>> on two arrays being a int32 and a float32 respectively. >>>> >>>> I understand now that a int32 cannot be safely converted into a float32 >>>> but why does concatenate not automatically >>>> up(?) cast to float64 ?? >>>> >>> Basically, NumPy is following Numeric's behavior of raising an error in >>> this case of unsafe casting in concatenate. For functions that are not >>> universal-function objects, mixed-type behavior works basically just >>> like Numeric did (using the ordering of the types to determine which one >>> to choose as the output). >>> >>> It could be argued that the ufunc-rules should be followed instead. >>> >>> -Travis >>> >>> >> Are you saying the ufunc-rules would convert "int32-float32" to float64 and >> hence make my code "just work" !? >> > > This is now the behavior in SVN. Note that this is different from both > Numeric (which gave an error) and numarray (which coerced to float32). > > But, it is consistent with how mixed-types are handled in calculations > and is thus an easier rule to explain. > > Thanks for the testing. 
> > -Travis After sleeping over this, I am contemplating the cases where one would use float32 in the first place. My case yesterday, where I only had a 1d line profile of my data, I was of course OK with coercion to float64. But if you are working with 3D image data (as in medicine) or large 2D images as in astronomy I would assume the reason to use float32 is that computer memory is too tight to afford 64 bits per pixel. This is probably why numarray tried to keep float32. Float32 can handle a few more digits of precision than int16, but not as much as int32. But I find that I almost always have int32s only because it's the default, whereas I have float32 as a clear choice to save memory. How hard would it be to change the rules back to the numarray behavior ? Who would be negatively affected ? And who positively ? Thanks for the great work. Sebastian From haase at msg.ucsf.edu Fri Aug 25 11:34:25 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 08:34:25 -0700 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: <44EF1881.8020604@msg.ucsf.edu> Sasha wrote: > On 8/25/06, Charles R Harris wrote: >> Matrix rank has nothing to do with numpy rank. Numpy rank is simply the >> number of indices required to address an element of an ndarray. I always >> thought a better name for the Numpy rank would be dimensionality, but like >> everything else one gets used to the numpy jargon, it only needs to be >> defined someplace for what it is. > > That's my point exactly. The rank(2) definition was added by > Sebastian Haase who advocates the use of the term "ndims" instead of > "rank". I've discussed the use of "dimensionality" in the preamble. > Note that ndims stands for the number of dimensions, not > dimensionality. > > I don't want to remove rank(2) without hearing from Sebastian first > and I appreciate his effort to improve the glossary.
Maybe we should > add a "matrix rank" entry instead. My phrasing is certainly suboptimal (I only remember the German wording - and even that only faintly - "linear independent" !?) But I put it in, remembering the discussion in "numpy" on *why* array.rank (numarray) was changed to array.ndim (numpy) I just thought this page might be a good place to 'discourage usage of badly-defined terms' or at least give the argument for "ndim". [ OK: it's not "badly" defined: but there are two separate camps on *what* it should mean --- ndim is clear.] BTW: Does the "matrix" class have an m.rank attribute !? Cheers, Sebastian. From hetland at tamu.edu Fri Aug 25 12:12:55 2006 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 25 Aug 2006 11:12:55 -0500 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EE38C1.8000804@ee.byu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> <44EE38C1.8000804@ee.byu.edu> Message-ID: <03D72D27-4E5B-45AB-B749-77F1926F34B6@tamu.edu> Yes, it works now. Thanks, -Rob On Aug 24, 2006, at 6:39 PM, Travis Oliphant wrote: > Rob Hetland wrote: > >> In compiling matplotlib and scipy, I get errors complaining about >> multiply defined symbols (See below). I tried to fix this with - >> multiply_defined suppress but this did not work. Is there a way to >> make this go away? >> >> > Can you try current SVN again, to see if it now works? > > -Travis > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?
> cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From robert.kern at gmail.com Fri Aug 25 14:02:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Aug 2006 13:02:10 -0500 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: Charles R Harris wrote: > Matrix rank has nothing to do with numpy rank. Numpy rank is simply the > number of indices required to address an element of an ndarray. I always > thought a better name for the Numpy rank would be dimensionality, but > like everything else one gets used to the numpy jargon, it only needs to > be defined someplace for what it is. "numpy rank" derives from "tensor rank" rather than "matrix rank". It's not *wrong*, but as with many things in mathematics, the term is overloaded and can be confusing. "dimensionality" is no better. A "three-dimensional array" might be [1, 2, 3], not [[[1]]]. http://mathworld.wolfram.com/TensorRank.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Fri Aug 25 08:50:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 06:50:32 -0600 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EF14B3.2030904@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> <44EF14B3.2030904@msg.ucsf.edu> Message-ID: <44EEF218.7070103@ieee.org> Sebastian Haase wrote: >> This is now the behavior in SVN. Note that this is different from both >> Numeric (which gave an error) and numarray (which coerced to float32). >> >> But, it is consistent with how mixed-types are handled in calculations >> and is thus an easier rule to explain. >> >> Thanks for the testing. >> >> -Travis >> > > How hard would it be to change the rules back to the numarray behavior ? > It wouldn't be hard, but I'm not so sure that's a good idea. I do see the logic behind that approach and it is worthy of some discussion. I'll give my current opinion: The reason I changed the behavior is to get consistency so there is one set of rules on mixed-type interaction to explain. You can always do what you want by force-casting your int32 arrays to float32. There will always be some people who don't like whichever behavior is selected, but we are trying to move NumPy in a direction of consistency with fewer exceptions to explain (although this is a guideline and not an absolute requirement). Mixed-type interaction is always somewhat ambiguous. Now there is a consistent rule for both universal functions and other functions (move to a precision where both can be safely cast to --- unless one is a scalar and then its precision is ignored). If you don't want that to happen, then be clear about what data-type should be used by casting yourself. 
In this case, we should probably not try and guess about what users really want in mixed data-type situations. -Travis From kwgoodman at gmail.com Fri Aug 25 14:58:06 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 25 Aug 2006 11:58:06 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix Message-ID: How do I delete a row (or list of rows) from a matrix object? To remove the n'th row in octave I use x(n,:) = []. Or n could be a vector of rows to remove. In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the elements of x without changing the size of x. In numpy do I have to turn it around and construct a list of the rows I want to keep? From charlesr.harris at gmail.com Fri Aug 25 15:19:31 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Aug 2006 13:19:31 -0600 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EEF218.7070103@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> <44EF14B3.2030904@msg.ucsf.edu> <44EEF218.7070103@ieee.org> Message-ID: Hi, On 8/25/06, Travis Oliphant wrote: > > Sebastian Haase wrote: > >> This is now the behavior in SVN. Note that this is different from > both > >> Numeric (which gave an error) and numarray (which coerced to float32). > >> > >> But, it is consistent with how mixed-types are handled in calculations > >> and is thus an easier rule to explain. > >> > >> Thanks for the testing. > >> > >> -Travis > >> > > > > How hard would it be to change the rules back to the numarray behavior ? > > > It wouldn't be hard, but I'm not so sure that's a good idea. I do see > the logic behind that approach and it is worthy of some discussion. > I'll give my current opinion: > > The reason I changed the behavior is to get consistency so there is one > set of rules on mixed-type interaction to explain. 
You can always do > what you want by force-casting your int32 arrays to float32. There > will always be some people who don't like whichever behavior is > selected, but we are trying to move NumPy in a direction of consistency > with fewer exceptions to explain (although this is a guideline and not > an absolute requirement). > > Mixed-type interaction is always somewhat ambiguous. Now there is a > consistent rule for both universal functions and other functions (move > to a precision where both can be safely cast to --- unless one is a > scalar and then its precision is ignored). I think this is a good thing. It makes it easy to remember what the function will produce. The only oddity the user has to be aware of is that int32 has more precision than float32. Probably not obvious to a newbie, but a newbie will probably be using the double defaults anyway. Which is another good reason for making double the default type. If you don't want that to happen, then be clear about what data-type > should be used by casting yourself. In this case, we should probably > not try and guess about what users really want in mixed data-type > situations. I wonder if it would be reasonable to add the dtype keyword to hstack itself? Hmmm, what are the conventions for coercions to lesser precision? That could get messy indeed, maybe it is best to leave such things alone and let the programmer deal with it by rethinking the program. In the float case that would probably mean using a float32 array instead of an int32 array. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haase at msg.ucsf.edu Fri Aug 25 15:32:25 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 12:32:25 -0700 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: References: <200608241709.48522.haase@msg.ucsf.edu> <44EEF218.7070103@ieee.org> Message-ID: <200608251232.25199.haase@msg.ucsf.edu> On Friday 25 August 2006 12:19, Charles R Harris wrote: > Hi, > > On 8/25/06, Travis Oliphant wrote: > > Sebastian Haase wrote: > > >> This is now the behavior in SVN. Note that this is different from > > > > both > > > > >> Numeric (which gave an error) and numarray (which coerced to float32). > > >> > > >> But, it is consistent with how mixed-types are handled in calculations > > >> and is thus an easier rule to explain. > > >> > > >> Thanks for the testing. > > >> > > >> -Travis > > > > > > How hard would it be to change the rules back to the numarray behavior > > > ? > > > > It wouldn't be hard, but I'm not so sure that's a good idea. I do see > > the logic behind that approach and it is worthy of some discussion. > > I'll give my current opinion: > > > > The reason I changed the behavior is to get consistency so there is one > > set of rules on mixed-type interaction to explain. You can always do > > what you want by force-casting your int32 arrays to float32. There > > will always be some people who don't like whichever behavior is > > selected, but we are trying to move NumPy in a direction of consistency > > with fewer exceptions to explain (although this is a guideline and not > > an absolute requirement). > > > > Mixed-type interaction is always somewhat ambiguous. Now there is a > > consistent rule for both universal functions and other functions (move > > to a precision where both can be safely cast to --- unless one is a > > scalar and then its precision is ignored). > > I think this is a good thing. It makes it easy to remember what the > function will produce. 
The only oddity the user has to be aware of is that > int32 has more precision than float32. Probably not obvious to a newbie, > but a newbie will probably be using the double defaults anyway. Which is > another good reason for making double the default type. Not true - a numpy-(or numeric-programming) newbie working in medical imaging or astronomy would still get float32 data to work with. He/She would do some operations on the data and be surprised that memory (or disk space) blows up. > > If you don't want that to happen, then be clear about what data-type > > > should be used by casting yourself. In this case, we should probably > > not try and guess about what users really want in mixed data-type > > situations. > > I wonder if it would be reasonable to add the dtype keyword to hstack > itself? Hmmm, what are the conventions for coercions to lesser precision? > That could get messy indeed, maybe it is best to leave such things alone > and let the programmer deal with it by rethinking the program. In the float > case that would probably mean using a float32 array instead of an int32 > array. > > Chuck I think my main argument is that float32 is a very common type in (large) data processing to save memory. But I don't know about how many exceptions like an extra "float32 rule" we can handle ... I would like to hear how the numarray (STScI) folks think about this. Who else works with data of the order of GBs !? - Sebastian From oliphant.travis at ieee.org Fri Aug 25 10:01:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:01:36 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: Message-ID: <44EF02C0.6050408@ieee.org> Keith Goodman wrote: > How do I delete a row (or list of rows) from a matrix object? > > To remove the n'th row in octave I use x(n,:) = []. Or n could be a > vector of rows to remove. 
> > In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the > elements of x without changing the size of x. > > In numpy do I have to turn it around and construct a list of the rows > I want to keep? > Basically, that is true for now. I think it would be worth implementing some kind of function for making this easier. One might think of using: del a[obj] But, the problem with both of those approaches is that once you start removing arbitrary rows (or n-1 dimensional sub-spaces) from an array you very likely will no longer have a chunk of memory that can be described using the n-dimensional array memory model. So, you would have to make memory copies. This could be done, of course, and the data area of "a" altered appropriately. But, such alteration of the memory would break any other objects that have a "view" of the memory area of "a." Right now, there is no way to track which objects have such "views", and therefore no good way to tell (other than the very conservative reference count) if it is safe to re-organize the memory of "a" in this way. So, "in-place" deletion of array objects would not be particularly useful, because it would only work for arrays with no additional reference counts (i.e. simple b=a assignment would increase the reference count and make it impossible to say del a[obj]). However, a function call that returned a new array object with the appropriate rows deleted (implemented by constructing a new array with the remaining rows) would seem to be a good idea. I'll place a prototype (named delete) to that effect into SVN soon. 
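Until such a `delete` function exists, the "keep the rows you want" workaround described above can be written compactly with a boolean mask. A minimal sketch (illustrative only, not the SVN prototype):

```python
import numpy as np

x = np.arange(12).reshape(4, 3)
rows_to_remove = [1, 2]

keep = np.ones(x.shape[0], dtype=bool)  # start by keeping every row
keep[rows_to_remove] = False            # mark the rows to drop

y = x[keep]      # boolean indexing copies the remaining rows into a new array
print(y.shape)   # (2, 3)
```

Note that `y` is necessarily a copy, for exactly the reason given above: the surviving rows are generally not a contiguous chunk of the original buffer.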
-Travis From kortmann at ideaworks.com Fri Aug 25 16:32:33 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Fri, 25 Aug 2006 13:32:33 -0700 (PDT) Subject: [Numpy-discussion] 1.0b3 in windows Message-ID: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> From oliphant at ee.byu.edu Thu Aug 24 16:17:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 14:17:44 -0600 Subject: [Numpy-discussion] (no subject) Message-ID: <44EE0968.1030904@ee.byu.edu> kortmann at ideaworks.com wrote: >>On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> >> >>>Sorry for my ignorance, but I have not ever heard of or used mingw32. I >>>am also using python 2.3. >>> >>> >>http://en.wikipedia.org/wiki/Mingw explains in detail. >> >> > >$HOME=C:\Documents and Settings\Administrator >CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib >loaded ttfcache file C:\Documents and >Settings\Administrator\.matplotlib\ttffont >.cache >matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data >backend WXAgg version 2.6.3.2 >Overwriting info= from scipy.misc.helpmod >(was ction info at 0x01F896F0> from numpy.lib.utils) >Overwriting who= from scipy.misc.common (was >on who at 0x01F895F0> from numpy.lib.utils) >Overwriting source= from scipy.misc.helpmod >(was > from numpy.lib.utils) >RuntimeError: module compiled against version 1000000 of C-API but this >version >of numpy is 1000002 >Fatal Python error: numpy.core.multiarray failed to import... exiting. > > >abnormal program termination > > You have a module built against an older version of NumPy. What modules are being loaded? Perhaps it is matplotlib or SciPy -Travis Travis I tried doing it again with removing scipy and my old version of numpy. I also have matplotlib installed. is there a special way that i have to go about installing this because of matplotlib? 
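Picking up the coercion thread above: the rule Travis describes — int32 mixed with float32 promotes to float64, the smallest type both can be safely cast to — can be checked directly. A sketch against a modern NumPy, where `numpy.result_type` exposes the promotion rules (that helper did not exist in 2006):

```python
import numpy as np

i32 = np.zeros(3, dtype=np.int32)
f32 = np.zeros(3, dtype=np.float32)

print(np.result_type(i32, f32))          # float64: neither type safely holds the other
print((i32 + f32).dtype)                 # ufuncs promote the same way
print(np.concatenate([i32, f32]).dtype)  # and so does concatenate, per the change discussed here
```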
From oliphant.travis at ieee.org Fri Aug 25 10:38:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:38:59 -0600 Subject: [Numpy-discussion] 1.0b3 in windows In-Reply-To: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> References: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> Message-ID: <44EF0B83.6090904@ieee.org> kortmann at ideaworks.com wrote: > Message: 4 > Date: Thu, 24 Aug 2006 14:17:44 -0600 > From: Travis Oliphant > Subject: Re: [Numpy-discussion] (no subject) > To: Discussion of Numerical Python > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > kortmann at ideaworks.com wrote: > > > > You have a module built against an older version of NumPy. What modules > are being loaded? Perhaps it is matplotlib or SciPy > You need to re-build matplotlib. They should be producing a binary that is compatible with 1.0b2 (I'm being careful to make sure future releases are binary compatible with 1.0b2). Also, make sure that you remove the build directory under numpy if you have previously built a version of numpy prior to 1.0b2. -Travis From haase at msg.ucsf.edu Fri Aug 25 16:48:23 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 13:48:23 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF02C0.6050408@ieee.org> References: <44EF02C0.6050408@ieee.org> Message-ID: <200608251348.23730.haase@msg.ucsf.edu> On Friday 25 August 2006 07:01, Travis Oliphant wrote: > Keith Goodman wrote: > > How do I delete a row (or list of rows) from a matrix object? > > > > To remove the n'th row in octave I use x(n,:) = []. Or n could be a > > vector of rows to remove. > > > > In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the > > elements of x without changing the size of x. > > > > In numpy do I have to turn it around and construct a list of the rows > > I want to keep? 
> > Basically, that is true for now. > > I think it would be worth implementing some kind of function for making > this easier. > > One might think of using: > > del a[obj] > > But, the problem with both of those approaches is that once you start > removing arbitrary rows (or n-1 dimensional sub-spaces) from an array > you very likely will no longer have a chunk of memory that can be > described using the n-dimensional array memory model. > > So, you would have to make memory copies. This could be done, of > course, and the data area of "a" altered appropriately. But, such > alteration of the memory would break any other objects that have a > "view" of the memory area of "a." Right now, there is no way to track > which objects have such "views", and therefore no good way to tell > (other than the very conservative reference count) if it is safe to > re-organize the memory of "a" in this way. > > So, "in-place" deletion of array objects would not be particularly > useful, because it would only work for arrays with no additional > reference counts (i.e. simple b=a assignment would increase the > reference count and make it impossible to say del a[obj]). > > However, a function call that returned a new array object with the > appropriate rows deleted (implemented by constructing a new array with > the remaining rows) would seem to be a good idea. > > I'll place a prototype (named delete) to that effect into SVN soon. > > -Travis > Now of course: I often needed to "insert" a column, row or section, ... ? I made a quick and dirty implementation for that myself: def insert(arr, i, entry, axis=0): """returns new array with new element inserted at index i along axis if arr.ndim>1 and if entry is scalar it gets filled in (ref. 
broadcasting) note: (original) arr does not get affected """ if i > arr.shape[axis]: raise IndexError, "index i larger than arr size" shape = list(arr.shape) shape[axis] += 1 a= N.empty(dtype=arr.dtype, shape=shape) aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) aa[:i] = aarr[:i] aa[i+1:] = aarr[i:] aa[i] = entry return a but maybe there is a way to put it in numpy directly. - Sebastian From oliphant.travis at ieee.org Fri Aug 25 10:54:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:54:21 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608251348.23730.haase@msg.ucsf.edu> References: <44EF02C0.6050408@ieee.org> <200608251348.23730.haase@msg.ucsf.edu> Message-ID: <44EF0F1D.3060805@ieee.org> Sebastian Haase wrote: > On Friday 25 August 2006 07:01, Travis Oliphant wrote: > >> Keith Goodman wrote: >> >>> How do I delete a row (or list of rows) from a matrix object? >>> >>> To remove the n'th row in octave I use x(n,:) = []. Or n could be a >>> vector of rows to remove. >>> >>> In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the >>> elements of x without changing the size of x. >>> >>> In numpy do I have to turn it around and construct a list of the rows >>> I want to keep? >>> >> Basically, that is true for now. >> >> I think it would be worth implementing some kind of function for making >> this easier. >> >> One might think of using: >> >> del a[obj] >> >> But, the problem with both of those approaches is that once you start >> removing arbitrary rows (or n-1 dimensional sub-spaces) from an array >> you very likely will no longer have a chunk of memory that can be >> described using the n-dimensional array memory model. >> >> So, you would have to make memory copies. This could be done, of >> course, and the data area of "a" altered appropriately.
But, such >> alteration of the memory would break any other objects that have a >> "view" of the memory area of "a." Right now, there is no way to track >> which objects have such "views", and therefore no good way to tell >> (other than the very conservative reference count) if it is safe to >> re-organize the memory of "a" in this way. >> >> So, "in-place" deletion of array objects would not be particularly >> useful, because it would only work for arrays with no additional >> reference counts (i.e. simple b=a assignment would increase the >> reference count and make it impossible to say del a[obj]). >> >> However, a function call that returned a new array object with the >> appropriate rows deleted (implemented by constructing a new array with >> the remaining rows) would seem to be a good idea. >> >> I'll place a prototype (named delete) to that effect into SVN soon. >> >> -Travis >> >> > Now of course: I often needed to "insert" a column, row or section, ... ? > I made a quick and dirty implementation for that myself: > def insert(arr, i, entry, axis=0): > """returns new array with new element inserted at index i along axis > if arr.ndim>1 and if entry is scalar it gets filled in (ref. broadcasting) > > note: (original) arr does not get affected > """ > if i > arr.shape[axis]: > raise IndexError, "index i larger than arr size" > shape = list(arr.shape) > shape[axis] += 1 > a= N.empty(dtype=arr.dtype, shape=shape) > aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > aa[:i] = aarr[:i] > aa[i+1:] = aarr[i:] > aa[i] = entry > return a > Sure, it makes sense to parallel the delete function. 
-Travis From oliphant.travis at ieee.org Fri Aug 25 11:01:58 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 09:01:58 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF0F1D.3060805@ieee.org> References: <44EF02C0.6050408@ieee.org> <200608251348.23730.haase@msg.ucsf.edu> <44EF0F1D.3060805@ieee.org> Message-ID: <44EF10E6.5080501@ieee.org> Travis Oliphant wrote: >> Now of course: I often needed to "insert" a column, row or section, ... ? >> I made a quick and dirty implementation for that myself: >> def insert(arr, i, entry, axis=0): >> """returns new array with new element inserted at index i along axis >> if arr.ndim>1 and if entry is scalar it gets filled in (ref. broadcasting) >> >> note: (original) arr does not get affected >> """ >> if i > arr.shape[axis]: >> raise IndexError, "index i larger than arr size" >> shape = list(arr.shape) >> shape[axis] += 1 >> a= N.empty(dtype=arr.dtype, shape=shape) >> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) >> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) >> aa[:i] = aarr[:i] >> aa[i+1:] = aarr[i:] >> aa[i] = entry >> return a >> >> > > Sure, it makes sense to parallel the delete function. > Although there is already an insert function present in numpy.... -Travis From haase at msg.ucsf.edu Fri Aug 25 17:47:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 14:47:20 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF10E6.5080501@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> Message-ID: <200608251447.20953.haase@msg.ucsf.edu> On Friday 25 August 2006 08:01, Travis Oliphant wrote: > Travis Oliphant wrote: > >> Now of course: I often needed to "insert" a column, row or section, ... > >> ?
I made a quick and dirty implementation for that myself: > >> def insert(arr, i, entry, axis=0): > >> """returns new array with new element inserted at index i along axis > >> if arr.ndim>1 and if entry is scalar it gets filled in (ref. > >> broadcasting) > >> > >> note: (original) arr does not get affected > >> """ > >> if i > arr.shape[axis]: > >> raise IndexError, "index i larger than arr size" > >> shape = list(arr.shape) > >> shape[axis] += 1 > >> a= N.empty(dtype=arr.dtype, shape=shape) > >> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > >> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > >> aa[:i] = aarr[:i] > >> aa[i+1:] = aarr[i:] > >> aa[i] = entry > >> return a > > > > Sure, it makes sense to parallel the delete function. > > Although there is already an insert function present in numpy.... > > -Travis Yeah - I saw that ... maybe one could introduce consistent namings like arr.copy_insert() arr.copy_delete() arr.copy_append() for the new ones. This emphasizes the fact that a copy is created ... (Append is also a function often asked for when people expect "list capabilities" - did I miss others ?) -Sebastian From oliphant.travis at ieee.org Fri Aug 25 19:16:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 17:16:09 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608251447.20953.haase@msg.ucsf.edu> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> Message-ID: <44EF84B9.5000909@ieee.org> Sebastian Haase wrote: > On Friday 25 August 2006 08:01, Travis Oliphant wrote: > >> Travis Oliphant wrote: > >>>> Now of course: I often needed to "insert" a column, row or section, ... > >>>> ?
I made a quick and dirty implementation for that myself: >>>> def insert(arr, i, entry, axis=0): >>>> """returns new array with new element inserted at index i along axis >>>> if arr.ndim>1 and if entry is scalar it gets filled in (ref. >>>> broadcasting) >>>> >>>> note: (original) arr does not get affected >>>> """ >>>> if i > arr.shape[axis]: >>>> raise IndexError, "index i larger than arr size" >>>> shape = list(arr.shape) >>>> shape[axis] += 1 >>>> a= N.empty(dtype=arr.dtype, shape=shape) >>>> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) >>>> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) >>>> aa[:i] = aarr[:i] >>>> aa[i+1:] = aarr[i:] >>>> aa[i] = entry >>>> return a >>>> >>> Sure, it makes sense to parallel the delete function. >>> >> Although there is already and insert function present in numpy.... >> >> -Travis >> > > Yeah - I saw that ... > maybe one could introduce consistent namings like > arr.copy_insert() > arr.copy_delete() > arr.copy_append() > I've come up with adding the functions (not methods at this point) deletefrom insertinto appendto (syntatic sugar for concatenate but with a separate argument for the array and the extra stuff) --- is this needed? These functions will operate along a particular axis (default is axis=0 to match concatenate). Comments? -Travis From haase at msg.ucsf.edu Fri Aug 25 19:24:47 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 16:24:47 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: <200608251624.47782.haase@msg.ucsf.edu> On Friday 25 August 2006 16:16, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 25 August 2006 08:01, Travis Oliphant wrote: > >> Travis Oliphant wrote: > >>>> Now of course: I often needed to "insert" a column, row or section, > >>>> ... ? 
I made a quick and dirty implementation for that myself: > >>>> def insert(arr, i, entry, axis=0): > >>>> """returns new array with new element inserted at index i along > >>>> axis if arr.ndim>1 and if entry is scalar it gets filled in (ref. > >>>> broadcasting) > >>>> > >>>> note: (original) arr does not get affected > >>>> """ > >>>> if i > arr.shape[axis]: > >>>> raise IndexError, "index i larger than arr size" > >>>> shape = list(arr.shape) > >>>> shape[axis] += 1 > >>>> a= N.empty(dtype=arr.dtype, shape=shape) > >>>> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > >>>> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > >>>> aa[:i] = aarr[:i] > >>>> aa[i+1:] = aarr[i:] > >>>> aa[i] = entry > >>>> return a > >>> > >>> Sure, it makes sense to parallel the delete function. > >> > >> Although there is already and insert function present in numpy.... > >> > >> -Travis > > > > Yeah - I saw that ... > > maybe one could introduce consistent namings like > > arr.copy_insert() > > arr.copy_delete() > > arr.copy_append() > > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntatic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? not for me. -S. From kwgoodman at gmail.com Fri Aug 25 19:47:00 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 25 Aug 2006 16:47:00 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/25/06, Travis Oliphant wrote: > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntatic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? 
> > These functions will operate along a particular axis (default is axis=0 > to match concatenate). It is probably obvious to everyone except me: what is the syntax? If x is 5x5 and I want to delete rows 2 and 4 is it deletfrom(x, [1,3], axis=0)? From robert.kern at gmail.com Fri Aug 25 19:55:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Aug 2006 18:55:51 -0500 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: Travis Oliphant wrote: > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntatic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? > > These functions will operate along a particular axis (default is axis=0 > to match concatenate). > > Comments? I would drop appendto(). I also recommend leaving them as functions and not making methods from them. This will help prevent people from thinking that these modify the arrays in-place. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Fri Aug 25 20:04:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 18:04:57 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: <44EF9029.7020807@ieee.org> Keith Goodman wrote: > On 8/25/06, Travis Oliphant wrote: > > >> I've come up with adding the functions (not methods at this point) >> >> deletefrom >> insertinto >> >> appendto (syntatic sugar for concatenate but with a separate argument >> for the array and the extra stuff) --- is this needed? >> >> These functions will operate along a particular axis (default is axis=0 >> to match concatenate). >> > > It is probably obvious to everyone except me: what is the syntax? > No, I'm sure it isn't obvious to anyone. Here's what I'm implementing (I'm using the default axis=None now which I like because it's consistent with everything else and it forces you to pick an axis for >1d arrays --- this also gives some purpose for the appendonto function) deletefrom(arr, obj, axis=None) where obj is either an integer, a slice object, or a sequence of integers indicating the rows to delete: > If x is 5x5 and I want to delete rows 2 and 4 is it deletfrom(x, [1,3], axis=0)? > Yes, if you are counting from 1. -Travis From torgil.svensson at gmail.com Fri Aug 25 20:22:34 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Sat, 26 Aug 2006 02:22:34 +0200 Subject: [Numpy-discussion] 1.0b3 in windows In-Reply-To: <44EF0B83.6090904@ieee.org> References: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> <44EF0B83.6090904@ieee.org> Message-ID: Not really recommended. But it might "work" with just running the script twice. I'm doing that with beta1 and the matplotlib that was current at the time of that release. Laziness i guess. 
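The deletefrom semantics Travis describes above (obj given as an integer, a slice, or a sequence of integers) can be sketched with a boolean mask. `deletefrom_sketch` is a hypothetical stand-in written for illustration, not the actual implementation; the function that eventually shipped in NumPy is `numpy.delete`:

```python
import numpy as np

def deletefrom_sketch(arr, obj, axis=0):
    """Return a copy of arr with the entries indexed by obj removed along axis.

    Mirrors the deletefrom(arr, obj, axis) behavior described in the thread:
    obj may be an int, a slice, or a sequence of integers.
    """
    keep = np.ones(arr.shape[axis], dtype=bool)  # mask of indices to retain
    keep[obj] = False          # works for an int, a slice, or an index list
    return arr.compress(keep, axis=axis)

x = np.arange(25).reshape(5, 5)
y = deletefrom_sketch(x, [1, 3], axis=0)  # drop the 2nd and 4th rows
```

This answers Keith's 5x5 example directly: counting from zero, rows 2 and 4 are removed with `[1, 3]`.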
//Torgil On 8/25/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > Message: 4 > > Date: Thu, 24 Aug 2006 14:17:44 -0600 > > From: Travis Oliphant > > Subject: Re: [Numpy-discussion] (no subject) > > To: Discussion of Numerical Python > > > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > > kortmann at ideaworks.com wrote: > > > > > > > > You have a module built against an older version of NumPy. What modules > > are being loaded? Perhaps it is matplotlib or SciPy > > > > You need to re-build matplotlib. They should be producing a binary that > is compatible with 1.0b2 (I'm being careful to make sure future releases > are binary compatible with 1.0b2). > > Also, make sure that you remove the build directory under numpy if you > have previously built a version of numpy prior to 1.0b2. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From faltet at carabos.com Sat Aug 26 03:20:04 2006
From: faltet at carabos.com (Francesc Altet)
Date: Sat, 26 Aug 2006 09:20:04 +0200
Subject: [Numpy-discussion] Deleting a row from a matrix
In-Reply-To: References: <44EF84B9.5000909@ieee.org>
Message-ID: <200608260920.05184.faltet@carabos.com>

Hi,

On Saturday 26 August 2006 01:55, Robert Kern wrote:
> Travis Oliphant wrote:
> > I've come up with adding the functions (not methods at this point)
> >
> > deletefrom
> > insertinto
> >
> > appendto (syntactic sugar for concatenate but with a separate argument
> > for the array and the extra stuff) --- is this needed?
> >
> > These functions will operate along a particular axis (default is axis=0
> > to match concatenate).
> >
> > Comments?
>
> I would drop appendto(). I also recommend leaving them as functions and not
> making methods from them. This will help prevent people from thinking that
> these modify the arrays in-place.

But there are already quite a few methods in NumPy that don't modify the array in-place (swapaxes, flatten, ravel or squeeze, and I guess many more). I'm personally addicted to encapsulating as much functionality as possible in methods (but perhaps I'm biased by an insane use of TAB in the ipython console).

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Càrabos Coop. V.
  Enjoy Data "-"

From faltet at carabos.com Sat Aug 26 04:05:39 2006
From: faltet at carabos.com (Francesc Altet)
Date: Sat, 26 Aug 2006 10:05:39 +0200
Subject: [Numpy-discussion] [RFE] Support for version 3 of array protocol in numarray
Message-ID: <200608261005.42388.faltet@carabos.com>

Hi,

I've lately run into problems in numarray-->numpy conversions which are due to a lack of support for version 3 of the array protocol in numarray. For more info on this issue see:

http://projects.scipy.org/scipy/numpy/ticket/256

and

http://projects.scipy.org/scipy/numpy/ticket/266

Question: is the numarray crew going to add this support anytime soon? If not, I'd advocate retaining support for version 2 in NumPy for some time at least (until numarray gets the support), although I don't know whether this will complicate things a lot in NumPy.

I personally don't need this functionality as I've found a workaround for PyTables (i.e. using the numpy.ndarray factory in order to create the NumPy object directly from the numarray buffer), but I think this would be very useful in helping other users (end-users mainly) in the numarray-->NumPy transition.

Thanks,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Càrabos Coop. V.   Enjoy Data "-"

From oliphant.travis at ieee.org Sat Aug 26 04:34:15 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 26 Aug 2006 02:34:15 -0600
Subject: [Numpy-discussion] [RFE] Support for version 3 of array protocol in numarray
In-Reply-To: <200608261005.42388.faltet@carabos.com>
References: <200608261005.42388.faltet@carabos.com>
Message-ID: <44F00787.1020500@ieee.org>

Francesc Altet wrote:
> Hi,
>
> I've lately run into problems in numarray-->numpy conversions which are due to
> a lack of support for version 3 of the array protocol in numarray.
> For more info on this issue see:
>
> http://projects.scipy.org/scipy/numpy/ticket/256
>
> and
>
> http://projects.scipy.org/scipy/numpy/ticket/266
>
> Question: is the numarray crew going to add this support anytime soon? If not,
> I'd advocate retaining support for version 2 in NumPy for some time at least
> (until numarray gets the support), although I don't know whether this will
> complicate things a lot in NumPy.
>
> I personally don't need this functionality as I've found a workaround for
> PyTables (i.e. using the numpy.ndarray factory in order to create the NumPy
> object directly from the numarray buffer), but I think this would be very
> useful in helping other users (end-users mainly) in the numarray-->NumPy
> transition.

Remember it's only the Python side of version 2 of the protocol that is not supported. The C side is still supported. Thus, it's only objects which don't export the C side of the interface that are affected. In numarray that is the chararray and the recarray. Normal numarray arrays should work fine as the C side of version 2 is still supported.

I think the number of objects supporting the Python side of version 2 of the protocol is small enough that it is not worth the extra hassle (and attribute lookup time) in NumPy to support it.

It would be a good thing if numarray supported version 3 of the protocol by adding the __array_interface__ attribute to support the Python side of version 3.

-Travis

From oliphant.travis at ieee.org Sat Aug 26 05:44:34 2006
From: oliphant.travis at ieee.org (Travis E. Oliphant)
Date: Sat, 26 Aug 2006 03:44:34 -0600
Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available
Message-ID: <44F01802.8050505@ieee.org>

The 4th beta release of NumPy 1.0 has just been made available. NumPy 1.0 represents the culmination of over 18 months of work to unify the Numeric and Numarray array packages into a single best-of-breed array package for Python.
NumPy supports all the features of Numeric and Numarray with a healthy dose of its own improved features. It's time to start porting your applications to use NumPy, as Numeric is no longer maintained and Numarray will only be maintained for a few more months. Porting is not difficult, especially using the compatibility layers numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in those packages. The full C-API of Numeric is supported, as is the C-API of Numarray.

More information is available at http://numpy.scipy.org

NumPy Developers

From numpy at mspacek.mm.st Sat Aug 26 06:06:42 2006
From: numpy at mspacek.mm.st (Martin Spacek)
Date: Sat, 26 Aug 2006 03:06:42 -0700
Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array
Message-ID: <44F01D32.9080103@mspacek.mm.st>

Hello,

I'm a bit ignorant of optimization in numpy.

I have a movie with 65535 32x32 frames stored in a 3D array of uint8 with shape (65535, 32, 32). I load it from an open file f like this:

>>> import numpy as np
>>> data = np.fromfile(f, np.uint8, count=65535*32*32)
>>> data = data.reshape(65535, 32, 32)

I'm picking several thousand frames more or less randomly from throughout the movie and finding the mean frame over those frames:

>>> meanframe = data[frameis].mean(axis=0)

frameis is a 1D array of frame indices with no repeated values in it. If it has, say, 4000 indices in it, then the above line takes about 0.5 sec to complete on my system. I'm doing this for a large number of different frameis, some of which can have many more indices in them. All this takes many minutes to complete, so I'm looking for ways to speed it up.

If I divide it into 2 steps:

>>> temp = data[frameis]
>>> meanframe = temp.mean(axis=0)

and time it, I find the first step takes about 0.2 sec, and the second takes about 0.3 sec. So it's not just the mean() step, but also the indexing step that's taking some time.
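Martin's two-step timing can be reproduced with a small sketch like the one below (array sizes are reduced here so it runs quickly, and the absolute timings in the thread are machine-specific). The take() line anticipates Travis's suggestion later in the thread:

```python
import time
import numpy as np

rng = np.random.RandomState(0)                     # reproducible fake movie
data = rng.randint(0, 256, size=(8192, 32, 32)).astype(np.uint8)
frameis = rng.permutation(8192)[:4000]             # unique frame indices

t0 = time.time()
temp = data[frameis]                               # fancy-indexing step
t1 = time.time()
meanframe = temp.mean(axis=0)                      # reduction step
t2 = time.time()

# ndarray.take with a 1-D index array can be faster than fancy indexing
# (per Travis's reply: less per-index checking); the result is the same.
meanframe2 = data.take(frameis, axis=0).mean(axis=0)

print("index: %.3fs  mean: %.3fs" % (t1 - t0, t2 - t1))
```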
If I flatten with ravel: >>> temp = data[frameis].ravel() >>> meanframe = temp.mean(axis=0) then the first step still takes about 0.2 sec, but the mean() step drops to about 0.1 sec. But of course, this is taking a flat average across all pixels in the movie, which isn't what I want to do. I have a feeling that the culprit is the non contiguity of the movie frames being averaged, but I don't know how to proceed. Any ideas? Could reshaping the data somehow speed things up? Would weave.blitz or weave.inline or pyrex help? I'm running numpy 0.9.8 Thanks, Martin From oliphant.travis at ieee.org Sat Aug 26 06:26:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 26 Aug 2006 04:26:32 -0600 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F01D32.9080103@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> Message-ID: <44F021D8.5070002@ieee.org> Martin Spacek wrote: > Hello, > > I'm a bit ignorant of optimization in numpy. > > I have a movie with 65535 32x32 frames stored in a 3D array of uint8 > with shape (65535, 32, 32). I load it from an open file f like this: > > >>> import numpy as np > >>> data = np.fromfile(f, np.uint8, count=65535*32*32) > >>> data = data.reshape(65535, 32, 32) > > I'm picking several thousand frames more or less randomly from > throughout the movie and finding the mean frame over those frames: > > >>> meanframe = data[frameis].mean(axis=0) > > frameis is a 1D array of frame indices with no repeated values in it. If > it has say 4000 indices in it, then the above line takes about 0.5 sec > to complete on my system. I'm doing this for a large number of different > frameis, some of which can have many more indices in them. All this > takes many minutes to complete, so I'm looking for ways to speed it up. 
> > If I divide it into 2 steps: > > >>> temp = data[frameis] > >>> meanframe = temp.mean(axis=0) > > and time it, I find the first step takes about 0.2 sec, and the second > takes about 0.3 sec. So it's not just the mean() step, but also the > indexing step that's taking some time. > If frameis is 1-D, then you should be able to use temp = data.take(frameis,axis=0) for the first step. This can be quite a bit faster (and is a big reason why take is still around). There are several reasons for this (one of which is that index checking is done over the entire list when using indexing). -Travis From wbaxter at gmail.com Sat Aug 26 07:42:32 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 26 Aug 2006 20:42:32 +0900 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608260920.05184.faltet@carabos.com> References: <44EF84B9.5000909@ieee.org> <200608260920.05184.faltet@carabos.com> Message-ID: On 8/26/06, Francesc Altet wrote: > > I'm personally an addict to encapsulate as much functionality as possible > in > methods (but perhaps I'm biased by an insane use of TAB in ipython > console). You can still get tab completion for functions: numpy. Even if it's your custom to "from numpy import *" you can still also do an "import numpy" or "import numpy as N". --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Sat Aug 26 08:13:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 26 Aug 2006 21:13:15 +0900 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/26/06, Travis Oliphant wrote: > > > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto "delete" and "insert" really would be better. The current "insert" function seems inaptly named. 
What it does sounds more like "overlay" or "set_masked". ... or the existing "putmask" which I see does a similar thing.

Actually there seems to be a little doc-bug there or something. numpy.insert claims it differs from putmask in that it only accepts a vector of values with the same number of values as the number of non-zero entries in the mask, but a quick test reveals it's quite happy with a different number and cycles through them.

In [31]: a = numpy.zeros((3,3))
In [32]: numpy.insert(a, [[0,1,0],[1,0,0],[1,0,0]], [4,5])
In [33]: a
Out[33]:
array([[ 0.,  4.,  0.],
       [ 5.,  0.,  0.],
       [ 4.,  0.,  0.]])

Anyway, in the end nothing has really been inserted, existing entries have just been replaced. So "insert" seems like a much better name for a function that actually puts in a new row or column.

--bb

From svetosch at gmx.net Sat Aug 26 08:13:30 2006
From: svetosch at gmx.net (Sven Schreiber)
Date: Sat, 26 Aug 2006 14:13:30 +0200
Subject: [Numpy-discussion] memory corruption bug
Message-ID: <44F03AEA.7010403@gmx.net>

Hi,
I experienced this strange bug which caused a totally unrelated variable to be overwritten (no exception or error was raised, so it took me a while to rule out any errors of my own).

The context here is a method of a class (Vecm.getSW()), and the instance of Vecm is created within a different class (GG.__init__). Now, the affected variable is in the namespace of GG (it's GG.urate), and so I would think that anything local in Vecm.getSW() should not affect GG.urate, right?

Originally I did:

glx[lag:, :] -= temp

But that caused the described problem. Then I tried:

glx[lag:, :] = glx[lag:, :] - temp

But the same problem remains. Then I worked around the slice assignment like this:

temp4 = r_[zeros([lag, n_y]), temp]
glx = glx - temp4

And everything is ok! However, when I alter the second line of this workaround to:

glx -= temp4

The problem reappears!
So I'm not even sure whether this is one or two bugs...

This is with yesterday's numpy svn on windows, but the same thing happens with an earlier svn (~b2) as well. If you need further info, please tell me how to provide it.

Thanks,
Sven

From svetosch at gmx.net Sat Aug 26 08:20:10 2006
From: svetosch at gmx.net (Sven Schreiber)
Date: Sat, 26 Aug 2006 14:20:10 +0200
Subject: [Numpy-discussion] round() bug
Message-ID: <44F03C7A.4060908@gmx.net>

Hi,

is this normal behavior?:

>>> import numpy as n; print n.mat(0.075).round(2); print n.mat(0.575).round(2)
[[ 0.08]]
[[ 0.57]]

Again, yesterday's svn on windows.

cheers,
Sven

From nadavh at visionsense.com Sat Aug 26 09:45:39 2006
From: nadavh at visionsense.com (Nadav Horesh)
Date: Sat, 26 Aug 2006 15:45:39 +0200
Subject: [Numpy-discussion] tensor dot ?
Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8F051@exchange2k.envision.co.il>

I once wrote a function "tensormultiply" which is a part of numarray (undocumented). You can borrow it from there.

Nadav

-----Original Message-----
From: numpy-discussion-bounces at lists.sourceforge.net on behalf of Simon Burton
Sent: Fri 25-Aug-06 14:42
To: numpy-discussion at lists.sourceforge.net
Cc:
Subject: [Numpy-discussion] tensor dot ?

>>> numpy.dot.__doc__
matrixproduct(a,b)
Returns the dot product of a and b for arrays of floating point types.
Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b.
NB: The first argument is not conjugated.

Does numpy support summing over arbitrary dimensions, as in tensor calculus? I could cook up something that uses transpose and dot, but it's reasonably tricky I think :)

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
From wbaxter at gmail.com Sat Aug 26 08:52:01 2006
From: wbaxter at gmail.com (Bill Baxter)
Date: Sat, 26 Aug 2006 21:52:01 +0900
Subject: [Numpy-discussion] memory corruption bug
In-Reply-To: <44F03AEA.7010403@gmx.net>
References: <44F03AEA.7010403@gmx.net>
Message-ID:

You're sure it's not just pass-by-reference semantics biting you? If you make an array and pass it to another class or function, by default they just get a reference to the same array. So, e.g.:

a = numpy.array([1,2,3])
some_class.set_array(a)
a[1] = 10

Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. If you don't want that sharing then you need to make an explicit copy of a by calling a.copy().

Watch out for lists or dicts of arrays too. The Python idiom for copying a list, new_list = list_orig[:], won't copy the contents of elements that are arrays. If you want to be sure to make complete copies of complex data structures, there's the deepcopy function of the copy module: new_list = copy.deepcopy(list_orig).

I found a bunch of these sorts of bugs in some code I ported over from Matlab last week. Matlab uses copy semantics for everything, so if you pass a matrix A to a function in Matlab you can always treat it as a fresh local copy inside the function. Not so with Python. I found that locating and fixing those bugs was the most difficult thing about porting Matlab code to Numpy (that, and when some major toolkit or function you use in Matlab doesn't have an equivalent in Numpy... like eigs()).
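Bill's sharing example can be condensed into a quick check; np.may_share_memory makes the difference between a reference or view and a copy explicit (a minimal sketch, not code from the thread):

```python
import numpy as np

a = np.array([1, 2, 3])
b = a            # plain assignment: b is the very same array object
c = a[0:2]       # slicing: c is a view onto a's buffer
d = a.copy()     # copy(): d owns fresh memory

a[1] = 10
print(b)                          # [ 1 10  3] -- b saw the change
print(c)                          # [ 1 10]    -- so did the view
print(d)                          # [1 2 3]    -- the copy did not

print(np.may_share_memory(a, c))  # True
print(np.may_share_memory(a, d))  # False
```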
--bb On 8/26/06, Sven Schreiber wrote: > > Hi, > I experienced this strange bug which caused a totally unrelated variable > to be overwritten (no exception or error was raised, so it took me while > to rule out any errors of my own). > > The context where this is in is a method of a class (Vecm.getSW()), and > the instance of Vecm is created within a different class (GG.__init__). > Now, the affected variable is in the namespace of GG (it's GG.urate), > and so I would think that anything local in Vecm.getSW() should not > affect GG.urate, right? > > Originally I did: > > glx[lag:, :] -= temp > > But that caused the described problem. Then I tried: > > glx[lag:, :] = glx[lag:, :] - temp > > But the same problem remains. Then I worked around the slice assignment > like this: > > temp4 = r_[zeros([lag, n_y]), temp] > glx = glx - temp4 > > And everything is ok! However, when I alter the second line of this > workaround to: > > glx -= temp4 > > The problem reappears! So I'm not even sure whether this is one or two > bugs... > > This is with yesterday's numpy svn on windows, but the same thing > happens with an earlier svn (~b2) as well. If you need further info, > please tell me how to provide it. > > Thanks, > Sven > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwgoodman at gmail.com Sat Aug 26 10:05:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sat, 26 Aug 2006 07:05:16 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/26/06, Bill Baxter wrote: > On 8/26/06, Travis Oliphant wrote: > > > > > I've come up with adding the functions (not methods at this point) > > > > deletefrom > > insertinto > > > "delete" and "insert" really would be better. The current "insert" > function seems inaptly named. What it does sounds more like "overlay" or > "set_masked". I prefer delete and insert too. I guess it is OK that del and delete are similar (?) From svetosch at gmx.net Sat Aug 26 11:12:31 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 26 Aug 2006 17:12:31 +0200 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: <44F064DF.1090805@gmx.net> I appreciate your warnings, thanks. However, they don't seem to apply here, or why would my described workaround work at all in that case? Also, afaict, the affected variable is not even passed to the class where the problematic assignment happens. -sven Bill Baxter schrieb: > You're sure it's not just pass-by-reference semantics biting you? > If you make an array and pass it to another class or function, by > default they just get a reference to the same array. > so e.g.: > > a = numpy.array ([1,2,3]) > some_class.set_array(a) > a[1] = 10 > > Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. > If you don't want that sharing then you need to make an explicit copy of > a by calling a.copy (). > Watch out for lists or dicts of arrays too. The python idom for > copying a list: new_list = list_orig[:], won't copy the contents of > elements that are array. 
If you want to be sure to make complete copies > of complex data structures, there's the deepcopy method of the copy > module. new_list = copy.deepcopy(list_orig). > > I found a bunch of these sorts of bugs in some code I ported over from > Matlab last week. Matlab uses copy semantics for everything, so if you > pass a matrix A to a function in Matlab you can always treat it as a > fresh local copy inside the function. Not so with Python. I found that > locating and fixing those bugs was the most difficult thing about > porting Matlab code to Numpy (that and the lack of some major toolkit or > function you use in Matlab doesn't have an equivalent in Numpy... like > eigs()). > > --bb > > > > On 8/26/06, *Sven Schreiber* > wrote: > > Hi, > I experienced this strange bug which caused a totally unrelated variable > to be overwritten (no exception or error was raised, so it took me while > to rule out any errors of my own). > > The context where this is in is a method of a class ( Vecm.getSW()), and > the instance of Vecm is created within a different class (GG.__init__). > Now, the affected variable is in the namespace of GG (it's GG.urate), > and so I would think that anything local in Vecm.getSW () should not > affect GG.urate, right? > > Originally I did: > > glx[lag:, :] -= temp > > But that caused the described problem. Then I tried: > > glx[lag:, :] = glx[lag:, :] - temp > > But the same problem remains. Then I worked around the slice assignment > like this: > > temp4 = r_[zeros([lag, n_y]), temp] > glx = glx - temp4 > > And everything is ok! However, when I alter the second line of this > workaround to: > > glx -= temp4 > > The problem reappears! So I'm not even sure whether this is one or two > bugs... > > This is with yesterday's numpy svn on windows, but the same thing > happens with an earlier svn (~b2) as well. If you need further info, > please tell me how to provide it. 
> > Thanks,
> > Sven

From fullung at gmail.com Sat Aug 26 11:20:15 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Sat, 26 Aug 2006 17:20:15 +0200
Subject: [Numpy-discussion] memory corruption bug
In-Reply-To: <44F03AEA.7010403@gmx.net>
Message-ID:

A complete code snippet that reproduces the bug would be most helpful. If there is a memory corruption problem, it might show up if we run the problematic code under Valgrind.

Regards,

Albert

> -----Original Message-----
> From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy-discussion-bounces at lists.sourceforge.net] On Behalf Of Sven Schreiber
> Sent: 26 August 2006 14:14
> To: numpy-discussion
> Subject: [Numpy-discussion] memory corruption bug
>
> Hi,
> I experienced this strange bug which caused a totally unrelated variable
> to be overwritten (no exception or error was raised, so it took me while
> to rule out any errors of my own).
URL: From charlesr.harris at gmail.com Sat Aug 26 12:02:53 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:02:53 -0600 Subject: [Numpy-discussion] round() bug In-Reply-To: <44F03C7A.4060908@gmx.net> References: <44F03C7A.4060908@gmx.net> Message-ID: Hi, On 8/26/06, Sven Schreiber wrote: > > Hi, > > is this normal behavior?: > > >>> import numpy as n; print n.mat(0.075).round(2); print > n.mat(0.575).round(2) > [[ 0.08]] > [[ 0.57]] In [7]: (arange(100)*.5).round() Out[7]: array([ 0., 0., 1., 2., 2., 2., 3., 4., 4., 4., 5., 6., 6., 6., 7., 8., 8., 8., 9., 10., 10., 10., 11., 12., 12., 12., 13., 14., 14., 14., 15., 16., 16., 16., 17., 18., 18., 18., 19., 20., 20., 20., 21., 22., 22., 22., 23., 24., 24., 24., 25., 26., 26., 26., 27., 28., 28., 28., 29., 30., 30., 30., 31., 32., 32., 32., 33., 34., 34., 34., 35., 36., 36., 36., 37., 38., 38., 38., 39., 40., 40., 40., 41., 42., 42., 42., 43., 44., 44., 44., 45., 46., 46., 46., 47., 48., 48., 48., 49., 50.]) It looks like numpy does round to even. Knuth has a discussion of rounding that is worth reading, although he prefers round to odd. The basic idea is to avoid the systematic bias that comes from always rounding in one direction. Another thing to bear in mind is that floating point isn't always what it seems due to the conversion between decimal and binary representation: In [8]: print '%25.18f'%.075 0.074999999999999997 Throw in multiplication, different precisions in the internal computations of the fpu, rounding in the print routine, and other complications, and it is tough to know precisely what should happen. 
For instance: In [15]: '%25.18f'%(mat(0.575)*100) Out[15]: ' 57.499999999999992895' In [16]: '%25.18f'%(around(mat(0.575)*100)) Out[16]: ' 57.000000000000000000' In [17]: '%25.18f'%(around(mat(0.575)*100)/100) Out[17]: ' 0.569999999999999951' And you can see that .575 after conversion to IEEE floating point and scaling was properly rounded down and showed up as .57 after the default print precision is taken into account. Python, on the other hand, always rounds up: In [12]: for i in range(10) : print '%25.18f'%round(i*.5) ....: 0.000000000000000000 1.000000000000000000 1.000000000000000000 2.000000000000000000 2.000000000000000000 3.000000000000000000 3.000000000000000000 4.000000000000000000 4.000000000000000000 5.000000000000000000 Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:22:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:22:33 -0600 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: Hi, On 8/26/06, Bill Baxter wrote: > > You're sure it's not just pass-by-reference semantics biting you? > If you make an array and pass it to another class or function, by default > they just get a reference to the same array. > so e.g.: > > a = numpy.array ([1,2,3]) > some_class.set_array(a) > a[1] = 10 > > Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. > If you don't want that sharing then you need to make an explicit copy of a > by calling a.copy (). > Watch out for lists or dicts of arrays too. The python idom for copying > a list: new_list = list_orig[:], won't copy the contents of elements that > are array. If you want to be sure to make complete copies of complex data > structures, there's the deepcopy method of the copy module. new_list = > copy.deepcopy(list_orig). 
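[Editor's note: Bill's warning about the list-copying idiom can be seen in a short pure-Python sketch. Nested lists stand in for a list of arrays, so the snippet needs nothing beyond the standard library.]

```python
import copy

orig = [[1, 2, 3], [4, 5, 6]]    # stand-in for a list of arrays

shallow = orig[:]                # new outer list, but the SAME inner objects
deep = copy.deepcopy(orig)       # recursively copies the inner objects too

orig[0][1] = 99                  # mutate through the original

print(shallow[0])                # [1, 99, 3] -- shallow copy shares the element
print(deep[0])                   # [1, 2, 3]  -- deep copy is independent
```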
> > I found a bunch of these sorts of bugs in some code I ported over from > Matlab last week. Matlab uses copy semantics for everything, > Matlab does copy on write, so it maintains a reference until an element is modified, at which point it makes a copy. I believe it does this for efficiency and memory conservation, probably the latter because it doesn't seem to have garbage collection. I could be wrong about that, though. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:30:00 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:30:00 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: Hi, On 8/26/06, Keith Goodman wrote: > > On 8/26/06, Bill Baxter wrote: > > On 8/26/06, Travis Oliphant wrote: > > > > > > > > I've come up with adding the functions (not methods at this point) > > > > > > deletefrom > > > insertinto > > > > > > "delete" and "insert" really would be better. The current "insert" > > function seems inaptly named. What it does sounds more like "overlay" > or > > "set_masked". > > I prefer delete and insert too. I guess it is OK that del and delete > are similar (?) Me too, although remove could be used instead of delete. Is there a problem besides compatibility with removing or changing the old insert? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:35:12 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:35:12 -0600 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: Hi, On 8/26/06, Albert Strasheim wrote: > > A complete code snippet that reproduces the bug would be most helpful. +1. 
I too suspect that what you have here is a reference/copy problem. The only thing that is local to the class is the reference (pointer), the data is global. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From torgil.svensson at gmail.com Sat Aug 26 13:02:52 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Sat, 26 Aug 2006 19:02:52 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev Message-ID: Hi ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I first thought I had a performance issue but discovered that std() used lots of memory and therefore caused lots of swapping. I want to get an array where element i is the standard deviation of row i in the 2D array. Using valgrind on the std() function... $ valgrind --tool=massif python -c "from numpy import *; a=reshape(arange(100000*100),(100000,100)).std(axis=1)" ... showed me a peak of 200Mb memory while iterating line by line... $ valgrind --tool=massif python -c "from numpy import *; a=array([x.std() for x in reshape(arange(100000*100),(100000,100))])" ... got a peak of 40Mb memory. This seems unnecessary since we know before calculations what the output shape will be and should therefore be able to preallocate memory. My original problem was to get a moving average and a moving standard deviation (120k rows and N=1000). For average I guess convolve should perform well, but is there anything smart for std()? For now I use ... >>> moving_std=array([a[i:i+n].std() for i in range(len(a)-n)]) which seems to perform quite well. 
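[Editor's note: Torgil's windowed expression can be sketched without NumPy, together with a running-sums variant that updates the window incrementally instead of rescanning it. Both use the divide-by-n population form that ndarray.std computes by default; the helper names are invented for illustration, and the running form trades a little numerical robustness for speed.]

```python
import math

def moving_std_naive(a, n):
    # element i is the std of a[i:i+n], like [a[i:i+n].std() for i in ...]
    out = []
    for i in range(len(a) - n):
        w = a[i:i+n]
        m = sum(w) / n
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / n))
    return out

def moving_std_running(a, n):
    # keep a running sum and sum of squares: O(len(a)) instead of O(len(a)*n)
    out = []
    s = sum(a[:n])
    sq = sum(x * x for x in a[:n])
    for i in range(len(a) - n):
        m = s / n
        var = max(sq / n - m * m, 0.0)   # clip tiny negatives from roundoff
        out.append(math.sqrt(var))
        s += a[i + n] - a[i]             # slide the window one step right
        sq += a[i + n] ** 2 - a[i] ** 2
    return out

a = [1.0, 2.0, 4.0, 7.0, 11.0, 16.0]
print(moving_std_naive(a, 3))    # first value is sqrt(14/9), about 1.2472
print(moving_std_running(a, 3))  # agrees to within roundoff
```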
BR, //Torgil From charlesr.harris at gmail.com Sat Aug 26 13:49:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 11:49:33 -0600 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: On 8/26/06, Torgil Svensson wrote: > > Hi > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > first thought I had a performance issue but discovered that std() used > lots of memory and therefore caused lots of swapping. > > I want to get an array where element i is the stadard deviation of row > i in the 2D array. Using valgrind on the std() function... > > $ valgrind --tool=massif python -c "from numpy import *; > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > ... showed me a peak of 200Mb memory while iterating line by line... > > $ valgrind --tool=massif python -c "from numpy import *; > a=array([x.std() for x in reshape(arange(100000*100),(100000,100))])" > > ... got a peak of 40Mb memory. > > This seems unnecessary since we know before calculations what the > output shape will be and should therefore be able to preallocate > memory. > > > My original problem was to get an moving average and a moving standard > deviation (120k rows and N=1000). For average I guess convolve should > perform good, but is there anything smart for std()? For now I use ... Why not use convolve for the std also? You can't subtract the average first, but you could convolve the square of the vector and then use some variant of std = sqrt((convsqrs - n*avg**2)/(n-1)). There are possible precision problems but they may not matter for your application, especially if the moving window isn't really big. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tim.hochberg at ieee.org Sat Aug 26 13:59:38 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sat, 26 Aug 2006 10:59:38 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F01D32.9080103@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> Message-ID: <44F08C0A.1070008@ieee.org> Martin Spacek wrote: > Hello, > > I'm a bit ignorant of optimization in numpy. > > I have a movie with 65535 32x32 frames stored in a 3D array of uint8 > with shape (65535, 32, 32). I load it from an open file f like this: > > >>> import numpy as np > >>> data = np.fromfile(f, np.uint8, count=65535*32*32) > >>> data = data.reshape(65535, 32, 32) > > I'm picking several thousand frames more or less randomly from > throughout the movie and finding the mean frame over those frames: > > >>> meanframe = data[frameis].mean(axis=0) > > frameis is a 1D array of frame indices with no repeated values in it. If > it has say 4000 indices in it, then the above line takes about 0.5 sec > to complete on my system. I'm doing this for a large number of different > frameis, some of which can have many more indices in them. All this > takes many minutes to complete, so I'm looking for ways to speed it up. > > If I divide it into 2 steps: > > >>> temp = data[frameis] > >>> meanframe = temp.mean(axis=0) > > and time it, I find the first step takes about 0.2 sec, and the second > takes about 0.3 sec. So it's not just the mean() step, but also the > indexing step that's taking some time. > > If I flatten with ravel: > > >>> temp = data[frameis].ravel() > >>> meanframe = temp.mean(axis=0) > > then the first step still takes about 0.2 sec, but the mean() step drops > to about 0.1 sec. But of course, this is taking a flat average across > all pixels in the movie, which isn't what I want to do. > > I have a feeling that the culprit is the non contiguity of the movie > frames being averaged, but I don't know how to proceed. > > Any ideas? 
Could reshaping the data somehow speed things up? Would > weave.blitz or weave.inline or pyrex help? > > I'm running numpy 0.9.8 > > Thanks, > > Martin > Martin, Here's an approach (mean_accumulate) that avoids making any copies of the data. It runs almost 4x as fast as your approach (called baseline here) on my box. Perhaps this will be useful:

frames = 65535
samples = 4000
data = (256 * np.random.random((frames, 32, 32))).astype(np.uint8)
indices = np.arange(frames)
random.shuffle(indices)
indices = indices[:samples]

def mean_baseline(data, indices):
    return data[indices].mean(axis=0)

def mean_accumulate(data, indices):
    result = np.zeros([32, 32], float)
    for i in indices:
        result += data[i]
    result /= len(indices)
    return result

if __name__ == "__main__":
    import timeit
    print mean_baseline(data, indices)[0,:8]
    print timeit.Timer("s.mean_baseline(s.data, s.indices)", "import scratch as s").timeit(10)
    print mean_accumulate(data, indices)[0,:8]
    print timeit.Timer("s.mean_accumulate(s.data, s.indices)", "import scratch as s").timeit(10)

This gives:

[ 126.947 127.39175 128.03725 129.83425 127.98925 126.866 128.5352 127.6205 ]
3.95907664242
[ 126.947 127.39175 128.03725 129.83425 127.98925 126.866 128.53525 127.6205 ]
1.06913644053

I also wondered if sorting indices would help since it would help improve locality of reference, but when I measured that it appeared to help not at all. regards, -tim From nvf at MIT.EDU Sat Aug 26 14:00:51 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Sat, 26 Aug 2006 13:00:51 -0500 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: Message-ID: <88B1CCEA-9383-458A-8DC5-FDEFCCEF01E5@mit.edu> On Aug 26, 2006, at 7:05 AM, Keith Goodman wrote: > On 8/26/06, Bill Baxter wrote: >> On 8/26/06, Travis Oliphant wrote: >> >>> >>> I've come up with adding the functions (not methods at this point) >>> >>> deletefrom >>> insertinto >> >> >> "delete" and "insert" really would be better. 
The current "insert" >> function seems inaptly named. What it does sounds more like >> "overlay" or >> "set_masked". > > I prefer delete and insert too. I guess it is OK that del and delete > are similar (?) How about "deleted" and "inserted" to parallel "sorted"? "delete" and "insert" sound very imperative and side-effects-ish. Nick From schaffer at optonline.net Sat Aug 26 14:07:08 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 14:07:08 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F01802.8050505@ieee.org> References: <44F01802.8050505@ieee.org> Message-ID: <44F08DCC.6060800@optonline.net> Travis E. Oliphant wrote: > Porting is not difficult especially using the compatibility layers > numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in > those packages. The full C-API of Numeric is supported as is the C-API > of Numarray. > this is not true of numpy.core.records (nee numarray.records): 1. numarray's records.py does not show up in numpy.numarray. 2. my code that uses recarrays is now broken if i use numpy.core.records; for one thing, you have no .info attribute. another example: strings pushed into the arrays *apparently* were stripped automagically in the old recarray (so we coded appropriately), but now are not. 3. near zero docstrings for this module, hard to see how the new records works. 4. last year i made a case for the old records to return a list of the column names. it looks like the column names are now attributes of the record object, any chance of getting a list of them recarrayObj.get_colNames() or some such? yes, in working code, we know what the names are, but in test code we are creating recarrays from parsing of Excel spreadsheets, and for testing purposes, its nice to know what records THINKS are the names of all the columns. 
Les Schaffer From robert.kern at gmail.com Sat Aug 26 15:28:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 14:28:20 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: Les Schaffer wrote: > 4. last year i made a case for the old records to return a list of the > column names. it looks like the column names are now attributes of the > record object, any chance of getting a list of them > recarrayObj.get_colNames() or some such? yes, in working code, we know > what the names are, but in test code we are creating recarrays from > parsing of Excel spreadsheets, and for testing purposes, its nice to > know what records THINKS are the names of all the columns. In [2]: from numpy import * In [3]: rec.fromarrays(ones(10, dtype=float) Display all 628 possibilities? (y or n) In [3]: a = rec.fromarrays([ones(10, dtype=float), ones(10, dtype=int)], names='float,int', formats=[float, int]) In [4]: a Out[4]: recarray([(1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1)], dtype=[('float', '>f8'), ('int', '>i4')]) In [6]: a.dtype.names Out[6]: ('float', 'int') -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Aug 26 15:29:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 14:29:39 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: Les Schaffer wrote: > 3. near zero docstrings for this module, hard to see how the new > records works. 
http://www.scipy.org/RecordArrays -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schaffer at optonline.net Sat Aug 26 15:50:29 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 15:50:29 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: <44F0A605.407@optonline.net> Robert Kern wrote: > http://www.scipy.org/RecordArrays > which didn't help one iota. look, someone is charging for documentation, but the claim is the free docstrings have docs. for the records module, this ain't so. documentation means someone knows what is the complete public interface. yes, examples help. earlier, you said: > In [6]: a.dtype.names > Out[6]: ('float', 'int') congratulations, this can be the first docstring in records. now what about the incompatibility between old and new. les schaffer From aisaac at american.edu Sat Aug 26 16:11:56 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 16:11:56 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0A605.407@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> Message-ID: On Sat, 26 Aug 2006, Les Schaffer apparently wrote: > congratulations, this can be the first docstring in > records. now what about the incompatibility between old > and new I am always mystified when someone requesting free help adopts a pissy tone if they do not immediately get what they wish. It reminds me a bit of my youngest child, age 7, whom I am still teaching the advantages of politeness. 
Cheers, Alan Isaac From schaffer at optonline.net Sat Aug 26 16:07:25 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 16:07:25 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> Message-ID: <44F0A9FD.1040809@optonline.net> Alan G Isaac wrote: > I am always mystified when someone requesting free help > adopts a pissy tone if they do not immediately > get what they wish. > > It reminds me a bit of my youngest child, age 7, > whom I am still teaching the advantages of politeness. > you are refering to robert kern i take it???? because i am 52. and relax, i have given plenty of free help in my life, and constantly asked for it, pissy tones and all. so save the moral speech for your friends. les From aisaac at american.edu Sat Aug 26 16:31:45 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 16:31:45 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0A9FD.1040809@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> Message-ID: On Sat, 26 Aug 2006, Les Schaffer apparently wrote: > save the moral speech I did not say anything about morals. I spoke only of *advantages* of politeness, which someone age 52 might still need to ponder. Of course I bothered to write because I read this list and appreciate in addition to its helpfulness that it generally maintains a more polite tone. This too has value. 
Cheers, Alan Isaac From schaffer at optonline.net Sat Aug 26 16:27:50 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 16:27:50 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> Message-ID: <44F0AEC6.2080708@optonline.net> Alan G Isaac wrote: > Of course I bothered to write because I read this list and > appreciate in addition to its helpfulness that it generally > maintains a more polite tone. This too has value. > > > so, you want to work on improving the documentation of this poorly documented module? then lets get down to details. i'll pitch in some time to add docstrings, if i know they will be used. les From robert.kern at gmail.com Sat Aug 26 16:37:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 15:37:43 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0AEC6.2080708@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: Les Schaffer wrote: > i'll pitch in some > time to add docstrings, if i know they will be used. Of course they will. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From aisaac at american.edu Sat Aug 26 17:08:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 17:08:27 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org><44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: > Les Schaffer wrote: >> i'll pitch in some >> time to add docstrings, if i know they will be used. On Sat, 26 Aug 2006, Robert Kern apparently wrote: > Of course they will. Did Albert's initiative get any traction? http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01616.html If so, Les might profit from coordinating with him. Is the preferred approach, as Albert suggested, to submit documentation patches attached to tickets? Cheers, Alan Isaac From faltet at carabos.com Sat Aug 26 17:00:19 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 23:00:19 +0200 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <200608260920.05184.faltet@carabos.com> Message-ID: <200608262300.20721.faltet@carabos.com> On Saturday 26 August 2006 13:42, Bill Baxter wrote: > On 8/26/06, Francesc Altet wrote: > > I'm personally an addict to encapsulate as much functionality as possible > > in > > methods (but perhaps I'm biased by an insane use of TAB in ipython > > console). > > You can still get tab completion for functions: numpy. > Even if it's your custom to "from numpy import *" you can still also do an > "import numpy" or "import numpy as N". Yep, you are right. It is just that I tend to do that on the objects that I manipulate and not with first-level functions in packages. Anyway, I think that I see now that these routines should not be methods because they modify the *actual* data on ndarrays. Sorry for the digression, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. 
Enjoy Data "-" From faltet at carabos.com Sat Aug 26 17:22:00 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 23:22:00 +0200 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F021D8.5070002@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F021D8.5070002@ieee.org> Message-ID: <200608262322.01502.faltet@carabos.com> On Saturday 26 August 2006 12:26, Travis Oliphant wrote: > If frameis is 1-D, then you should be able to use > > temp = data.take(frameis,axis=0) > > for the first step. This can be quite a bit faster (and is a big > reason why take is still around). There are several reasons for this > (one of which is that index checking is done over the entire list when > using indexing). Well, some days ago I stumbled on this as well. The NumPy manual says that .take() is usually faster than fancy indexing, but my timings show that this is no longer true in recent versions of NumPy: In [56]: Timer("b.take(a)","import numpy; a=numpy.arange(999,-1,-1, dtype='l');b=a[:]").repeat(3,1000) Out[56]: [0.28740906715393066, 0.20345211029052734, 0.20371079444885254] In [57]: Timer("b[a]","import numpy; a=numpy.arange(999,-1,-1, dtype='l');b=a[:]").repeat(3,1000) Out[57]: [0.20807695388793945, 0.11684703826904297, 0.11686491966247559] I've done some profiling on this and it seems that take is using a C memmove call so as to copy the data, and this is *very* slow, at least on my platform (Linux on Intel). On the other hand, fancy indexing seems to use an iterator, and copying the elements one-by-one seems faster. I'd say that replacing memmove by memcpy would make .take() much faster. Regards, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. 
Enjoy Data "-" From robert.kern at gmail.com Sat Aug 26 17:38:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 16:38:31 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org><44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: Alan G Isaac wrote: > Did Albert's initiative get any traction? > http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01616.html > If so, Les might profit from coordinating with him. Not so much. Not many people showed up to the sprints, and most of those that did were working on their slides for their talks at the actual conference. Next year, sprints will come *after* the talks. > Is the preferred approach, as Albert suggested, > to submit documentation patches attached to tickets? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mattknox_ca at hotmail.com Sat Aug 26 18:07:29 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sat, 26 Aug 2006 18:07:29 -0400 Subject: [Numpy-discussion] C Api newbie question Message-ID: Hi there. I'm in the unfortunate situation of trying to track down a memory error in someone else's code, and to make matters worse I don't really know jack squat about C programming. The problem seems to arise when several numpy arrays are created from C arrays in the C api and returned to python, and then trying to print out or cast to a string the resulting array. 
I think the problem may be happening due to the following chunk of code:

{
    PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, dbValues);
    PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp,
                                        ((PyArrayObject*)temp)->descr,
                                        DEFAULT_FLAGS | ENSURECOPY);
    Py_DECREF(temp);
    PyDict_SetItemString(returnVal, "data", temp2);
    Py_DECREF(temp2);
}

Let's assume that all my other inputs up to this point are fine and that numobjs, typeNum, and dbValues are fine. Is there anything obviously wrong with the above chunk of code, or does it appear OK? Ultimately the dictionary "returnVal" is returned by the function this code came from, and everything else is discarded. Any help is very greatly appreciated. Thanks in advance, - Matt Knox -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Sun Aug 27 02:37:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:37:17 -0600 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: <44F13D9D.1050902@ieee.org> Les Schaffer wrote: > Travis E. Oliphant wrote: > >> Porting is not difficult especially using the compatibility layers >> numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in >> those packages. The full C-API of Numeric is supported as is the C-API >> of Numarray. >> >> > > this is not true of numpy.core.records (nee numarray.records): > > 1. numarray's records.py does not show up in numpy.numarray. > You're right. It's an oversight that needs to be corrected. NumPy has a very capable records facility and the great people at STSCI have been very helpful in pointing out issues to help make it work reasonably like the numarray version. In addition, the records.py module started as a direct grab of the numarray code-base, so I think I may have mistakenly believed it was equivalent. But, it really should also be in the numarray compatibility module. The same is true of the chararrays defined in numpy with respect to the numarray.strings module. > 2. my code that uses recarrays is now broken if i use > numpy.core.records; for one thing, you have no .info attribute. All the attributes are not supported. The purpose of numpy.numarray.alter_code1 is to fix those attributes for you to numpy equivalents. In the case of info, for example, there is the function numpy.numarray.info(self) instead of self.info(). > another > example: strings pushed into the arrays *apparently* were stripped > automagically in the old recarray (so we coded appropriately), but now > are not. 
> We could try and address this in the compatibility module (there is the raw ability available to deal with this exactly as numarray did). Someone with more experience with numarray would really be able to help here as I'm not as aware of these kinds of issues until they are pointed out. > 3. near zero docstrings for this module, hard to see how the new > records works. > The records.py code has a lot of code taken and adapted from numarray nearly directly. The docstrings present there were also copied over, but nothing more was added. There is plenty of work to do on the docstrings in general. This is an area that even newcomers can contribute to greatly. Contributions are greatly welcome. > 4. last year i made a case for the old records to return a list of the > column names. I prefer the word "field" names now so as to avoid over-use of the word "column", but one thing to understand about the record array is that it is a pretty "simple" sub-class. And the basic ndarray, by itself, contains the essential functionality of record arrays. The whole purpose of the record sub-class is to come up with an interface that is "easier to use" (right now that just means allowing attribute access to the field names). Many may find that using the ndarray directly may be just what they are wanting and don't need the attribute-access allowed by the record-array sub-class. > it looks like the column names are now attributes of the > record object, any chance of getting a list of them > recarrayObj.get_colNames() or some such? Right now, the column names are properties of the data-type object associated with the array, so that recarrayObj.dtype.names will give you a list. The data-type object also has other properties which are useful. Thanks for your review. We really need the help of as many numarray people as possible to make sure that the transition for them is easier. 
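[Editor's note: Travis's recarrayObj.dtype.names suggestion can be seen with a small record array. This is a sketch assuming a reasonably current NumPy; the field names here are invented for illustration, and note that .names actually comes back as a tuple, so wrap it in list() if a real list is wanted.]

```python
import numpy as np

# a record array with two named fields (names chosen for illustration)
ra = np.rec.fromarrays([np.ones(3), np.arange(3)], names='price,qty')

print(ra.dtype.names)        # ('price', 'qty') -- a tuple of field names
print(list(ra.dtype.names))  # ['price', 'qty']
print(ra.price)              # attribute access supplied by the recarray subclass
```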
I've tried very hard to make sure that the numarray users have the tools they need to make the transition easier, but I know that more could be done. Unfortunately, my availability to help with this is rapidly waning, however, as I have to move focus back to my teaching and research. -Travis From oliphant.travis at ieee.org Sun Aug 27 02:45:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:45:43 -0600 Subject: [Numpy-discussion] C Api newbie question In-Reply-To: References: Message-ID: <44F13F97.4020308@ieee.org> Matt Knox wrote: > > Hi there. I'm in the unfortunate situation of trying to track down a > memory error in someone else's code, and to make matters worse I don't > really know jack squat about C programming. The problem seems to arise > when several numpy arrays are created from C arrays in the C api and > returned to python, and then trying to print out or cast to a string > the resulting array. I think the problem may be happening due to the > following chunk of code: > { > PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, > dbValues); > PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp, > ((PyArrayObject*)temp)->descr, DEFAULT_FLAGS | ENSURECOPY); > Py_DECREF(temp); > PyDict_SetItemString(returnVal, "data", temp2); > Py_DECREF(temp2); > } > > Let's assume that all my other inputs up to this point are fine and that > numobjs, typeNum, and dbValues are fine. Is there anything obviously > wrong with the above chunk of code? or does it appear ok? Ultimately > the dictionary "returnVal" is returned by the function this code came > from, and everything else is discarded. Any help is very greatly > appreciated. Thanks in advance, You didn't indicate what kind of trouble you are having. First of all, this is kind of odd style. Why is a new array created from a data-pointer and then copied using PyArray_FromArray (the ENSURECOPY flag will give you a copy)?
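For readers following along in Python rather than C, the pattern in Matt's chunk -- wrap an existing buffer without copying, then force an owned copy -- can be sketched like this; the bytearray below is just a stand-in for the C-owned dbValues memory:

```python
import numpy as np

buf = bytearray(8 * 4)                        # stand-in for the externally owned buffer
temp = np.frombuffer(buf, dtype=np.float64)   # a view: does NOT own or copy the data
temp2 = temp.copy()                           # an owned copy, like PyArray_Copy(temp) in C

assert not temp.flags.owndata   # temp still points into buf
assert temp2.flags.owndata      # temp2 is safe to keep after buf goes away
```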
Using temp2 = PyArray_Copy(temp) seems simpler. This will also avoid the reference-count problem that is currently happening in the PyArray_FromArray call on the descr structure. Any array-creation function that takes a descr structure "steals" a reference to it, so you need to increment the reference count if you are passing an unowned reference to a ->descr structure. -Travis From rob at hooft.net Sun Aug 27 02:46:40 2006 From: rob at hooft.net (Rob Hooft) Date: Sun, 27 Aug 2006 08:46:40 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: <44F13FD0.9000405@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Torgil Svensson wrote: > My original problem was to get a moving average and a moving standard > deviation (120k rows and N=1000). For average I guess convolve should > perform well, but is there anything smart for std()? For now I use ... > >>>> moving_std=array([a[i:i+n].std() for i in range(len(a)-n)]) > > which seems to perform quite well. You can always look for more fancy and unreadable solutions, but since this one has the inner loop with a reasonable vector length (1000) coded in C, one can guess that the performance will be reasonable. I would start looking for alternatives only if N drops significantly, say to <50. Rob - -- Rob W.W.
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFE8T/QH7J/Cv8rb3QRAtutAKCikJ1qLbedU4pNl7ZohHCLEAWVKACgji9R 6evNgk6R68/JnimUs4OOd98= =htbE -----END PGP SIGNATURE----- From oliphant.travis at ieee.org Sun Aug 27 02:49:55 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:49:55 -0600 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: <44F14093.7080001@ieee.org> Torgil Svensson wrote: > Hi > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > first thought I had a performance issue but discovered that std() used > lots of memory and therefore caused lots of swapping. > There are certainly lots of intermediate arrays created as the calculation proceeds. The calculation is not particularly "smart." It just does the basic averaging and multiplication needed. > I want to get an array where element i is the standard deviation of row > i in the 2D array. Using valgrind on the std() function... > > $ valgrind --tool=massif python -c "from numpy import *; > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > ... showed me a peak of 200Mb memory while iterating line by line... > > The C-code is basically a direct "translation" of the original Python code. There are lots of temporaries created (apparently 5 at one point :-). I did this before I had the _internal.py code in place where I place Python functions that need to be accessed from C. If I had to do it over again, I would place the std implementation there where it could be appropriately optimized. -Travis
URL: From numpy at mspacek.mm.st Sun Aug 27 08:05:21 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Sun, 27 Aug 2006 05:05:21 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F021D8.5070002@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F021D8.5070002@ieee.org> Message-ID: <44F18A81.1050608@mspacek.mm.st> Travis Oliphant wrote: > > If frameis is 1-D, then you should be able to use > > temp = data.take(frameis,axis=0) > > for the first step. This can be quite a bit faster (and is a big > reason why take is still around). There are several reasons for this > (one of which is that index checking is done over the entire list when > using indexing). > Yup, that dropped the indexing step down to essentially 0 seconds. Thanks Travis! Martin From numpy at mspacek.mm.st Sun Aug 27 08:28:03 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Sun, 27 Aug 2006 05:28:03 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F08C0A.1070008@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> Message-ID: <44F18FD3.2030607@mspacek.mm.st> Tim Hochberg wrote: > Here's an approach (mean_accumulate) that avoids making any copies of > the data. It runs almost 4x as fast as your approach (called baseline > here) on my box. Perhaps this will be useful: > --snip-- > def mean_accumulate(data, indices): > result = np.zeros([32, 32], float) > for i in indices: > result += data[i] > result /= len(indices) > return result Great! I got a roughly 9x speed improvement using take() in combination with this approach. Thanks Tim! 
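Tim's accumulate-over-indices idea, as a self-contained sketch one can run directly; the array shape and index values here are arbitrary stand-ins for the real frame data:

```python
import numpy as np

def mean_accumulate(data, indices):
    # Sum the selected frames into one buffer instead of materializing
    # a big intermediate copy of all of them.
    result = np.zeros(data.shape[1:], np.float64)
    for i in indices:
        result += data[i]
    result /= len(indices)
    return result

data = np.arange(24.0).reshape(6, 2, 2)   # 6 "frames" of 2x2
indices = [0, 2, 4]
expected = data.take(indices, axis=0).mean(axis=0)
assert np.allclose(mean_accumulate(data, indices), expected)
```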
Here's what my code looks like now: >>> def mean_accum(data): >>> result = np.zeros(data[0].shape, np.float64) >>> for dataslice in data: >>> result += dataslice >>> result /= len(data) >>> return result >>> >>> # frameis are int64 >>> frames = data.take(frameis.astype(np.int32), axis=0) >>> meanframe = mean_accum(frames) I'm surprised that using a python for loop is faster than the built-in mean method. I suppose mean() can't perform the same in-place operations because in certain cases doing so would fail? Martin From schaffer at optonline.net Sun Aug 27 10:06:55 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sun, 27 Aug 2006 10:06:55 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F13D9D.1050902@ieee.org> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F13D9D.1050902@ieee.org> Message-ID: <44F1A6FF.4080201@optonline.net> Travis: thanks for your response. over the next couple days i will be working with the records module, trying to fix things so we can move from numarray to numpy. i will try to collect some docstrings that can be added to the code base. Travis Oliphant wrote: > Your right. It's an oversight that needs to be corrected. NumPy has > a very capable records facility and the great people at STSCI have been > very helpful in pointing out issues to help make it work reasonably like > the numarray version. In addition, the records.py module started as a > direct grab of the numarray code-base, so I think I may have mistakenly > believed it was equivalent. But, it really should also be in the > numarray compatibility module. > this would solve our problem in the short run, so at least we can switch to numpy and keep our code running. > The same is true of the chararrays defined in numpy with respect to the > numarray.strings module. > i take it this might solve the problem (below) of the automagic strip with the numarray package. >> 2. 
my code that uses recarrays is now broken if i use >> numpy.core.records; for one thing, you have no .info attribute. >> > All the attributes are not supported. The purpose of > numpy.numarray.alter_code1 is to fix those attributes for you to numpy > equivalents. In the case of info, for example, there is the function > numpy.numarray.info(self) instead of self.info(). > thanks. i wasn't clear how to call the info function. now when i try this, i get: Traceback (most recent call last): File "", line 772, in ? File "", line 751, in _test_TableManager File "", line 462, in build_db_table_structures File "", line 108, in _create_tables_structure_from_rsrc File "C:\Program Files\Python24\Lib\site-packages\numpy\numarray\functions.py", line 350, in info print "aligned: ", obj.flags.isaligned AttributeError: 'numpy.flagsobj' object has no attribute 'isaligned' > >> another example: strings pushed into the arrays *apparently* were stripped >> automagically in the old recarray (so we coded appropriately), but now >> are not. >> >> > We could try and address this in the compatibility module (there is the > raw ability available to deal with this exactly as numarray did). > Someone with more experience with numarray would really be able to help > here as I'm not as aware of these kinds of issues, until they are > pointed out. > this would be great, because then i could find out where else code is broke ;-) i will make my code changes in such a way that i can keep testing for incompatibilities. so for now, i will add code to strip the leading/trailing spaces off, but suitably if'ed so when this gets fixed in numpy, i can pull out the strips and see if anything else works differently than numarray.records. >> 3. near zero docstrings for this module, hard to see how the new >> records works. >> >> > The records.py code has a lot of code taken and adapted from numarray > nearly directly. The docstrings present there were also copied over, > but nothing more was added. 
There is plenty of work to do on the > docstrings in general. This is an area that even newcomers can > contribute to greatly. Contributions are greatly welcome. > ok, i will try and send doc suggestions to whomever they should be sent to. >> 4. last year i made a case for the old records to return a list of the >> column names. >> > I prefer the word "field" names now so as to avoid over-use of the word > "column" i have columnitis because we are parsing excel spreadsheets and pushing them into recarrays. the first row of each spreadsheet has a set of column names -- errrr, field names -- which is why we were originally attracted to records, since it gave us a way to grab columns -- errr, fields -- easily and out of the box. > but one thing to understand about the record array is that it > is a pretty "simple" sub-class. And the basic ndarray, by itself > contains the essential functionality of record arrays. The whole > purpose of the record sub-class is to come up with an interface that is > "easier-to use," (right now that just means allowing attribute access to > the field names). Many may find that using the ndarray directly may be > just what they are wanting and don't need the attribute-access allowed > by the record-array sub-class. > i'll look into how the raw ndarray works. like i said, we like that we can get a listing of each column like so: recObj['column_errrr_fieldname'] > >> it looks like the column names are now attributes of the >> record object, any chance of getting a list of them >> recarrayObj.get_colNames() or some such? >> > Right now, the column names are properties of the data-type object > associated with the array, so that recarrayObj.dtype.names will give > you a list > > The data-type object also has other properties which are useful. > it looks too like one can now create an ordinary array and PUSH IN column -- errr, field -- information with dtype, is that right? pretty slick if true.
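What Les asks about does work: a structured dtype can be pushed onto a plain ndarray, with no recarray subclass involved. A minimal sketch, with invented field names:

```python
import numpy as np

# An ordinary ndarray whose dtype carries the field ("column") information.
a = np.zeros(3, dtype=[('name', 'U8'), ('score', 'f8')])
a['name'] = ['ann', 'bob', 'cy']
a['score'] = [1.0, 2.5, 4.0]

print(a.dtype.names)   # the field names, straight from the dtype
print(a['score'])      # field access out of the box, no attribute magic needed
```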
i have some comments on the helper functions for creating record and recarray objects, but i will leave that for later. Les > From tim.hochberg at ieee.org Sun Aug 27 11:36:56 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sun, 27 Aug 2006 08:36:56 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F18FD3.2030607@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> Message-ID: <44F1BC18.6090401@ieee.org> Martin Spacek wrote: > Tim Hochberg wrote: > > >> Here's an approach (mean_accumulate) that avoids making any copies of >> the data. It runs almost 4x as fast as your approach (called baseline >> here) on my box. Perhaps this will be useful: >> >> > --snip-- > >> def mean_accumulate(data, indices): >> result = np.zeros([32, 32], float) >> for i in indices: >> result += data[i] >> result /= len(indices) >> return result >> > > Great! I got a roughly 9x speed improvement using take() in combination > with this approach. Thanks Tim! > > Here's what my code looks like now: > > >>> def mean_accum(data): > >>> result = np.zeros(data[0].shape, np.float64) > >>> for dataslice in data: > >>> result += dataslice > >>> result /= len(data) > >>> return result > >>> > >>> # frameis are int64 > >>> frames = data.take(frameis.astype(np.int32), axis=0) > >>> meanframe = mean_accum(frames) > > I'm surprised that using a python for loop is faster than the built-in > mean method. I suppose mean() can't perform the same in-place operations > because in certain cases doing so would fail? > I'm not sure why mean is slow here, although possibly it's a locality issue -- mean likely computes along axis zero each time, which means it's killing the cache -- and on the other hand the accumulate version is cache friendly. One thing to keep in mind about python for loops is that they are slow if you are doing a simple computation inside (a single add for instance). 
IIRC, they are 10's of times slower. However, here one is doing 1000 odd operations in the inner loop, so the loop overhead is minimal. (What would be perfect here is something just like take, but that returned an iterator instead of a new array as that could be done with no copying -- unfortunately such a beast does not exist as far as I know) I'm actually surprised that the take version is faster than my original version since it makes a big ol' copy. I guess this is an indication that indexing is more expensive than I realize. That's why nothing beats measuring! Another experiment was to reshape your data so that it's friendly to mean (assuming it really does operate on axis zero) and try that. However, this turns out to be a huge pessimization, mostly because take + transpose is pretty slow. -tim > Martin > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From tgrav at mac.com Sun Aug 27 11:37:25 2006 From: tgrav at mac.com (Tommy Grav) Date: Sun, 27 Aug 2006 11:37:25 -0400 Subject: [Numpy-discussion] NumPy 1.0b4 Message-ID: <1B1FC36F-081B-4BAD-9C0B-35A89ED4C26F@mac.com> Looking at the www.scipy.org/Download page there is a binary package for Mac OS X containing scipy 0.5.0 and Numpy 1.1. Is this a typo or is it a different NumPy package? If it is just a typo, when will this binary be available with Numpy 1.0b4? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent.
It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Sun Aug 27 15:00:41 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 12:00:41 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... Message-ID: <44F1EBD9.6000507@msg.ucsf.edu> Hi, I started submitting tickets over the numpy ticket system. But I never get email feedback when comments get added. Even though I put myself as CC. I then even subscribed to both scipy and numpy ticket mailing lists. I only got *some* numpy tickets emailed - very sporadically ! (I do get (lots of) email from the svn mailing list.) Do others see similar problems ? -Sebastian Haase From haase at msg.ucsf.edu Sun Aug 27 15:06:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 12:06:22 -0700 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 Message-ID: <44F1ED2E.3030402@msg.ucsf.edu> Hi, I submitted this as ticket #230 3 weeks ago. I apparently assigned it to "somebody" - was that a mistake? Just for reference, here is the short text again: >>> a=N.random.poisson(N.arange(1e6)+1) >>> U.timeIt('a**2') 0.59 >>> U.timeIt('a*a') 0.01 >>> a.dtype int32 float64, float32 work OK - giving equal times for both cases. (I tested this on Linux 32 bit, Debian sarge) Am I right that numarray never did this kind of "smart speed up" !? What are the cases that are sped up like this ? **2, **.5 , ... ??
Thanks, - Sebastian Haase From listservs at mac.com Sun Aug 27 15:22:50 2006 From: listservs at mac.com (listservs at mac.com) Date: Sun, 27 Aug 2006 15:22:50 -0400 Subject: [Numpy-discussion] bad generator behaviour with sum Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 It seems like numpy.sum breaks generator expressions: In [1]: sum(i*i for i in range(10)) Out[1]: 285 In [2]: from numpy import sum In [3]: sum(i*i for i in range(10)) Out[3]: Is this intentional? If so, how do I get the behaviour that I am after? Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFE8fEKkeka2iCbE4wRAoi6AKCjqJHodGOme56nohrG3X/njjaHgACeIkyn PPB2+plZOyqV+HyLJgO+sSw= =Y0wt -----END PGP SIGNATURE----- From charlesr.harris at gmail.com Sun Aug 27 15:36:40 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:36:40 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi, On 8/27/06, listservs at mac.com wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > It seems like numpy.sum breaks generator expressions: > > In [1]: sum(i*i for i in range(10)) > Out[1]: 285 > > In [2]: from numpy import sum > > In [3]: sum(i*i for i in range(10)) > Out[3]: > > Is this intentional? If so, how do I get the behaviour that I am after? > In [3]: sum([i*i for i in range(10)]) Out[3]: 285 Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sun Aug 27 15:43:38 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:43:38 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi Christopher, On 8/27/06, Charles R Harris wrote: > > Hi, > > On 8/27/06, listservs at mac.com wrote: > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > It seems like numpy.sum breaks generator expressions: > > > > In [1]: sum(i*i for i in range(10)) > > Out[1]: 285 > > > > In [2]: from numpy import sum > > > > In [3]: sum(i*i for i in range(10)) > > Out[3]: > > > > Is this intentional? If so, how do I get the behaviour that I am after? > > > > > In [3]: sum([i*i for i in range(10)]) > Out[3]: 285 > > Chuck > The numarray.sum also fails to accept a generator as an argument. Because python does and the imported sum overwrites it, we should probably check the argument type and make it do the right thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sun Aug 27 15:55:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 27 Aug 2006 15:55:29 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F1A6FF.4080201@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net><44F13D9D.1050902@ieee.org><44F1A6FF.4080201@optonline.net> Message-ID: On Sun, 27 Aug 2006, Les Schaffer apparently wrote: > we are parsing excel spreadsheets and pushing them into > recarrays If your Excel parsing has general application and illustrates applications beyond say http://www.bigbold.com/snippets/posts/show/2036 maybe you could post a URL to some code. 
Cheers, Alan Isaac From charlesr.harris at gmail.com Sun Aug 27 15:58:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:58:35 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi, The problem seems to arise in the array constructor, which treats the generator as a python object and creates an array containing that object. So, do we want the possibility of an array of generators or should we interpret it as a sort of list? I vote for the latter. Chuck On 8/27/06, Charles R Harris wrote: > > Hi Christopher, > > On 8/27/06, Charles R Harris wrote: > > > > Hi, > > > > On 8/27/06, listservs at mac.com wrote: > > > > > > -----BEGIN PGP SIGNED MESSAGE----- > > > Hash: SHA1 > > > > > > It seems like numpy.sum breaks generator expressions: > > > > > > In [1]: sum(i*i for i in range(10)) > > > Out[1]: 285 > > > > > > In [2]: from numpy import sum > > > > > > In [3]: sum(i*i for i in range(10)) > > > Out[3]: > > > > > > Is this intentional? If so, how do I get the behaviour that I am > > > after? > > > > > > > > > In [3]: sum([i*i for i in range(10)]) > > Out[3]: 285 > > > > Chuck > > > > The numarray.sum also fails to accept a generator as an argument. Because > python does and the imported sum overwrites it, we should probably check the > argument type and make it do the right thing. > > Chuck > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schaffer at optonline.net Sun Aug 27 16:17:50 2006 From: schaffer at optonline.net (schaffer at optonline.net) Date: Sun, 27 Aug 2006 16:17:50 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F13D9D.1050902@ieee.org> <44F1A6FF.4080201@optonline.net> Message-ID: we have an Excel parser class with a method convert2RecArrayD that: 1.
takes as input an Excel file name, plus an optional cell washing function (see below) 2. creates a recarray for each worksheet (we use UsedRange for the range of cells) in the spreadsheet (via array()) and adds to a Python dict with keyword the name of the worksheet. the column -- errr field -- names are grabbed from the first row in each worksheet. 3. each cell in the spreadsheet is run thru the optional (else default) washer function. the default does unicode conversion plus some string.strip'ping we are using the spreadsheets as Resource files for a database application. so we are only reading the spreadsheets, not writing to them. if this is useful, we'd be happy to put it somewhere useful. Les ----- Original Message ----- From: Alan G Isaac Date: Sunday, August 27, 2006 3:55 pm Subject: Re: [Numpy-discussion] [ANN] NumPy 1.0b4 now available > If your Excel parsing has general application and > illustrates applications beyond say > http://www.bigbold.com/snippets/posts/show/2036 > maybe you could post a URL to some code. From robert.kern at gmail.com Sun Aug 27 16:18:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:18:42 -0500 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: <44F1EBD9.6000507@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > I started submitting tickets over the numpy ticket system. > > But I never get email feedback when comments get added. > Even though I put myself as CC. > > I then even subscribed to both scipy and numpy ticket mailing lists. > > I only got *some* numpy tickets emailed - very sporadically ! > > (I do get (lot's of) email from the svn mailing list.) > > Do others see similar problems ? Now that you mention it, the lists *are* missing tickets. I'll raise the issue internally. As for the former, have you entered your email address in your settings? 
http://projects.scipy.org/scipy/numpy/settings http://projects.scipy.org/scipy/scipy/settings -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Aug 27 16:27:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:27:15 -0500 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 In-Reply-To: <44F1ED2E.3030402@msg.ucsf.edu> References: <44F1ED2E.3030402@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > I submitted this as ticket #230 3weeks ago. > I apparently assigned it to "somebody" - was that a mistake? No, that's just the default. When the tickets lists are reliable again, then it's also preferred. No, your ticket might not get picked up by anyone because of lack of time, but assigning it to someone won't fix that. Let the dev team work out the assignment of tickets. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Sun Aug 27 16:31:16 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 13:31:16 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: References: <44F1EBD9.6000507@msg.ucsf.edu> Message-ID: <44F20114.9030906@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> I started submitting tickets over the numpy ticket system. >> >> But I never get email feedback when comments get added. >> Even though I put myself as CC. >> >> I then even subscribed to both scipy and numpy ticket mailing lists. >> >> I only got *some* numpy tickets emailed - very sporadically ! >> >> (I do get (lot's of) email from the svn mailing list.) >> >> Do others see similar problems ? 
> > Now that you mention it, the lists *are* missing tickets. I'll raise the issue > internally. > > As for the former, have you entered your email address in your settings? > > http://projects.scipy.org/scipy/numpy/settings > http://projects.scipy.org/scipy/scipy/settings > yes. (Could you add a web link from one system to the other ?) Thanks for taking this on. -Sebastian From haase at msg.ucsf.edu Sun Aug 27 16:39:40 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 13:39:40 -0700 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 In-Reply-To: References: <44F1ED2E.3030402@msg.ucsf.edu> Message-ID: <44F2030C.3080908@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> I submitted this as ticket #230 3weeks ago. >> I apparently assigned it to "somebody" - was that a mistake? > > No, that's just the default. When the tickets lists are reliable again, then > it's also preferred. No, your ticket might not get picked up by anyone because > of lack of time, but assigning it to someone won't fix that. Let the dev team > work out the assignment of tickets. > Thanks for the info -- could this be added on the form ? Like: """ If you don't have any good reason just leave the fields 'empty' and the dev-team will assign proper values soon. Also don't forget to put yourself in the CC field if you want to track changes to the issue you just reported. """ I just think its not obvious for *most* of the choice-fields what to select ... Thanks -Sebastian From robert.kern at gmail.com Sun Aug 27 16:39:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:39:48 -0500 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: <44F20114.9030906@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > (Could you add a web link from one system to the other ?) 
I'm afraid that I don't understand what you want. The numpy front page has a link to the scipy front page. If you want a similar one in reverse, it's a Wiki and you can do it yourself. If you mean something else, what do you mean? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mauger at lifshitz.physics.ucdavis.edu Sun Aug 27 16:59:17 2006 From: mauger at lifshitz.physics.ucdavis.edu (Matthew Auger) Date: Sun, 27 Aug 2006 13:59:17 -0700 (PDT) Subject: [Numpy-discussion] odd import behavior Message-ID: I recently installed python2.5c1, numpy-1.0b3, and matplotlib-0.87.4. I was getting an error when importing pylab that led me to this curious behavior: bash-2.05b$ python Python 2.5c1 (r25c1:51305, Aug 23 2006, 18:41:45) [GCC 4.0.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy.oldnumeric import * >>> M = matrix Traceback (most recent call last): File "", line 1, in NameError: name 'matrix' is not defined >>> from numpy.oldnumeric import matrix >>> M = matrix >>> Is there a reason matrix is not imported the first time? From haase at msg.ucsf.edu Sun Aug 27 17:36:24 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 14:36:24 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> Message-ID: <44F21058.2040203@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: > >> (Could you add a web link from one system to the other ?) > > I'm afraid that I don't understand what you want. The numpy front page has a > link to the scipy front page. If you want a similar one in reverse, it's a Wiki > and you can do it yourself. If you mean something else, what do you mean? 
> Sorry for being so unclear -- I just often find myself (by clicking on a ticket link) in one system (e.g. the scipy one) and then I realize that what I want is really more related to numpy ... I just found that the numpy page at http://projects.scipy.org/scipy/numpy contains the text """SciPy developer stuff goes on the sister site, http://projects.scipy.org/scipy/scipy/. """ Could you add similar text to http://projects.scipy.org/scipy/scipy/ like: """Stuff specific to the underlying numerical library (i.e. numpy) goes on the sister site, http://projects.scipy.org/scipy/numpy/ """ (I fear it's not really the most important request in the world ;-) ) - Sebastian From tom.denniston at alum.dartmouth.org Sun Aug 27 17:50:32 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Sun, 27 Aug 2006 16:50:32 -0500 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: I was thinking about this in the context of Guido's comments at scipy 2006 that much of the language is moving away from lists toward iterators. He gave the keys of a dict as an example. Numpy treats iterators, generators, etc as 0x0 PyObjects rather than lazy generators of n dimensional data. I guess my question for Travis (any others much more expert than I in numpy) is: is this intentional or is it something that was never implemented because of the obvious subtleties of defining the correct semantics to make this work. Personally i find it no big deal to use array(list(iter)) in the 1d case and the list function combined with a list comprehension for the 2d case. I usually know how many dimensions i expect so i find this easy and i know about this peculiar behavior. I find, however, that this behavior is very surprising and confusing to the new user and i don't usually have a good justification for it to answer them.
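For the 1-d case Tom mentions, there is already a way to consume a generator without building the intermediate list: np.fromiter. A short sketch:

```python
import numpy as np

# The builtin sum accepts a generator directly:
assert sum(i * i for i in range(10)) == 285

# np.fromiter fills a 1-d array straight from an iterator,
# so no intermediate list is needed:
a = np.fromiter((i * i for i in range(10)), dtype=np.int64)
assert a.sum() == 285
```

Note that fromiter requires an explicit dtype (and only handles the 1-d case), which is exactly the information a generator cannot supply up front.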
The ideal semantics, in my mind, would be if an iterator of iterators of iterators, etc. was no different in numpy than a list of lists of lists, etc. But I have no doubt that there are subtleties I am not considering. Has anyone more familiar than I with the bowels of numpy thought about this problem and seen reasons why this is a bad idea, or just prohibitively difficult to implement? On 8/27/06, Charles R Harris wrote: > Hi, > > The problem seems to arise in the array constructor, which treats the > generator as a python object and creates an array containing that object. > So, do we want the possibility of an array of generators or should we > interpret it as a sort of list? I vote for the latter. > > Chuck > > > On 8/27/06, Charles R Harris wrote: > > > > Hi Christopher, > > > > > > > > On 8/27/06, Charles R Harris <charlesr.harris at gmail.com> wrote: > > > > > > Hi, > > > > > > > > > > > > On 8/27/06, listservs at mac.com wrote: > > > > -----BEGIN PGP SIGNED MESSAGE----- > > > > Hash: SHA1 > > > > > > > > It seems like numpy.sum breaks generator expressions: > > > > > > > > In [1]: sum(i*i for i in range(10)) > > > > Out[1]: 285 > > > > > > > > In [2]: from numpy import sum > > > > > > > > In [3]: sum(i*i for i in range(10)) > > > > Out[3]: > > > > > > > > Is this intentional? If so, how do I get the behaviour that I am > after? > > > > > > > > > > > > > > > > > > > > > > In [3]: sum([i*i for i in range(10)]) > > > > > > Out[3]: 285 > > > > > > Chuck > > > > The numarray.sum also fails to accept a generator as an argument. Because > python's builtin does, and the imported sum overwrites it, we should probably check the > argument type and make it do the right thing. > > > > Chuck > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From listservs at mac.com Sun Aug 27 18:09:32 2006 From: listservs at mac.com (listservs at mac.com) Date: Sun, 27 Aug 2006 18:09:32 -0400 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Aug 27, 2006, at 4:19 PM, numpy-discussion- request at lists.sourceforge.net wrote: >> >> It seems like numpy.sum breaks generator expressions: >> >> In [1]: sum(i*i for i in range(10)) >> Out[1]: 285 >> >> In [2]: from numpy import sum >> >> In [3]: sum(i*i for i in range(10)) >> Out[3]: >> >> Is this intentional? If so, how do I get the behaviour that I am >> after? >> > > > In [3]: sum([i*i for i in range(10)]) > Out[3]: 285 Well, thats a list comprehension, not a generator expression. I was after the latter because it is more efficient. Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFE8hgdkeka2iCbE4wRAq8lAJ9dKPYQ35IE3qacf9K1KsBL59mdRACePn5S x0wHWs/PrVcJHCqf9tbQwRk= =0wFp -----END PGP SIGNATURE----- From cookedm at physics.mcmaster.ca Sun Aug 27 18:09:25 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 27 Aug 2006 18:09:25 -0400 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... 
In-Reply-To: <44F21058.2040203@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> <44F21058.2040203@msg.ucsf.edu> Message-ID: <9A356893-04F5-483C-A3EC-E636251B8EA6@physics.mcmaster.ca> On Aug 27, 2006, at 17:36 , Sebastian Haase wrote: > Robert Kern wrote: >> Sebastian Haase wrote: >> >>> (Could you add a web link from one system to the other ?) >> >> I'm afraid that I don't understand what you want. The numpy front >> page has a >> link to the scipy front page. If you want a similar one in >> reverse, it's a Wiki >> and you can do it yourself. If you mean something else, what do >> you mean? >> > > Sorry for being so unclear -- I just often find myself (by clicking > on a > ticket link) in one system (e.g. the scipy one) and then I realize > that > what I want is really more related to numpy ... > > I just found that the numpy page at > http://projects.scipy.org/scipy/numpy > contains the text > """SciPy developer stuff goes on the sister site, > http://projects.scipy.org/scipy/scipy/. > """ > > Could you add similar text to > http://projects.scipy.org/scipy/scipy/ > like: > """Stuff specific to the underlying numerical library (i.e. numpy) > goes on the sister site, http://projects.scipy.org/scipy/numpy/ > """ It's a wiki; you can add it yourself :-) (if you're logged in, of course.) -- |>|\/|< /------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Sun Aug 27 18:41:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 17:41:36 -0500 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> References: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> Message-ID: listservs at mac.com wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Aug 27, 2006, at 4:19 PM, numpy-discussion- > request at lists.sourceforge.net wrote: > >>> It seems like numpy.sum breaks generator expressions: >>> >>> In [1]: sum(i*i for i in range(10)) >>> Out[1]: 285 >>> >>> In [2]: from numpy import sum >>> >>> In [3]: sum(i*i for i in range(10)) >>> Out[3]: >>> >>> Is this intentional? If so, how do I get the behaviour that I am >>> after? >>> >> >> In [3]: sum([i*i for i in range(10)]) >> Out[3]: 285 > > Well, thats a list comprehension, not a generator expression. I was > after the latter because it is more efficient. Not really. Any numpy functions that would automatically create an array from an __len__-less iterator will have to convert it to a list first. That said, some cases for numpy.sum() might be handled by passing the argument to __builtin__.sum(), but it might be tricky devising a robust rule for when that happens. Consequently, I would like to avoid doing so. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tim.hochberg at ieee.org Sun Aug 27 19:03:03 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sun, 27 Aug 2006 16:03:03 -0700 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <44F224A7.7090909@ieee.org> Tom Denniston wrote: > I was thinking about this in the context of Giudo's comments at scipy > 2006 that much of the language is moving away from lists toward > iterators. He gave the keys of a dict as an example. > > Numpy treats iterators, generators, etc as 0x0 PyObjects rather than > lazy generators of n dimensional data. I guess my question for Travis > (any others much more expert than I in numpy) is is this intentional > or is it something that was never implemented because of the obvious > subtlties of defiing the correct semantics to make this work. > More the latter than the former. > Personally i find it no big deal to use array(list(iter)) in the 1d > case and the list function combined with a list comprehension for the > 2d case. There is a relatively new function fromiter, that materialized the last time this discussion came up that covers the above case. For example: numpy.fromiter((i*i for i in range(10)), int) > I usually know how many dimensions i expect so i find this > easy and i know about this peculiar behavior. I find, however, that > this behavior is very suprising and confusing to the new user and i > don't usually have a good justification for it to answer them. > > The ideal semantics, in my mind, would be if an iterator of iterators > of iterators, etc was no different in numpy than a list of lists of > lists, etc. But I have no doubt that there are subtleties i am not > considering. Has anyone more familiar than I with the bowels of numpy > thought about this problem and see reasons why this is a bad idea or > just prohibitively difficult to implement? > There was some discussion about this several months ago and I even set out to implement it. 
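The fromiter call just shown runs as-is; here it is as a self-contained sketch (modern "import numpy as np" spelling; note that the dtype must be supplied up front, for the reason discussed below):

```python
import numpy as np

# fromiter consumes the generator and fills a 1-D array of the given dtype;
# the dtype is required because the output buffer is allocated while iterating.
a = np.fromiter((i * i for i in range(10)), dtype=int)
print(a.sum())  # 285, matching the builtin sum() of the same generator
```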
I realized after not too long, however, that a complete solution, as you describe above, was going to be difficult and that I only really cared about the 1D case anyway, so I punted and implemented fromiter instead. As I recall, there are two issues that complicate the general case: 1. You need to specify the type or you gain no advantage over just instantiating the list. This is because you need to know the type before you allocate space for the array. Normally you do this by traversing the structure and looking at the contents. However for an iterable, you have to stash the results when you iterate over it looking for the type. This means that unless the array type is specified up front, you might as well just convert everything to lists as far as performance goes. 2. For 1D arrays you can get away without knowing the shape by doing successive overallocation of memory, similar to the way list and array.array work. This is what fromiter does. I suppose the same tactic would work for iterators of iterators, but the bookkeeping becomes quite daunting. Issue 1 is the real killer -- because of that, a solution would either sometimes (mysteriously for the uninitiated) be really inefficient, or one would be required to specify types for array(iterable). The latter is my preference, but I'm beginning to think it would actually be better to always have to specify types. It's tempting to take another stab at this, in Python this time, and see if I can get a Python-level solution working. However I don't have the time to try it right now. -tim > On 8/27/06, Charles R Harris wrote: > >> Hi, >> >> The problem seems to arise in the array constructor, which treats the >> generator as a python object and creates an array containing that object. >> So, do we want the possibility of an array of generators or should we >> interpret it as a sort of list? I vote for the latter.
>> >> Chuck >> >> >> On 8/27/06, Charles R Harris wrote: >> >>> Hi Christopher, >>> >>> >>> >>> On 8/27/06, Charles R Harris < charlesr.harris at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> >>>> >>>> On 8/27/06, listservs at mac.com wrote: >>>> >>>>> -----BEGIN PGP SIGNED MESSAGE----- >>>>> Hash: SHA1 >>>>> >>>>> It seems like numpy.sum breaks generator expressions: >>>>> >>>>> In [1]: sum(i*i for i in range(10)) >>>>> Out[1]: 285 >>>>> >>>>> In [2]: from numpy import sum >>>>> >>>>> In [3]: sum(i*i for i in range(10)) >>>>> Out[3]: >>>>> >>>>> Is this intentional? If so, how do I get the behaviour that I am >>>>> >> after? >> >>>> >>>> >>>> >>>> In [3]: sum([i*i for i in range(10)]) >>>> >>>> Out[3]: 285 >>>> >>>> Chuck >>>> >>> >>> The numarray.sum also fails to accept a generator as an argument. Because >>> >> python does and the imported sum overwrites it, we should probably check the >> argument type and make it do the right thing. >> >>> Chuck >>> >>> >>> >>> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From carlosjosepita at yahoo.com.ar Mon Aug 28 01:55:56 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Mon, 28 Aug 2006 02:55:56 -0300 (ART) Subject: [Numpy-discussion] Constant array Message-ID: <20060828055556.52095.qmail@web50306.mail.yahoo.com> Hi all! Is there a more efficient way of creating a constant K-valued array of size N than: zeros(N) + K ? Thank you in advance. Regards, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Aug 28 02:05:27 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 28 Aug 2006 00:05:27 -0600 Subject: [Numpy-discussion] Constant array In-Reply-To: <20060828055556.52095.qmail@web50306.mail.yahoo.com> References: <20060828055556.52095.qmail@web50306.mail.yahoo.com> Message-ID: Hi Carlos, On 8/27/06, Carlos Pita wrote: > > Hi all! > Is there a more efficient way of creating a constant K-valued array of > size N than: > zeros(N) + K > ? > Maybe something like this: In [12]: a = empty((3,3), dtype=int) In [13]: a.fill(11) In [14]: a Out[14]: array([[11, 11, 11], [11, 11, 11], [11, 11, 11]]) I haven't timed it, so don't know how fast it is. Looking at this makes me think fill should return the array so that one could do something like: a = empty((3,3), dtype=int).fill(10) Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant.travis at ieee.org Mon Aug 28 02:12:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 00:12:26 -0600 Subject: [Numpy-discussion] odd import behavior In-Reply-To: References: Message-ID: <44F2894A.8040902@ieee.org> Matthew Auger wrote: > I recently installed python2.5c1, numpy-1.0b3, and matplotlib-0.87.4. I > was getting an error when importing pylab that led me to this curious > behavior: > matplotlib-0.87.4 is *not* compatible with 1.0b2 and above. A new version needs to be released to work with NumPy 1.0 The SVN version of matplotlib works fine with NumPy 1.0 -Travis From oliphant.travis at ieee.org Mon Aug 28 02:17:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 00:17:59 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <44F28A97.6010102@ieee.org> Tom Denniston wrote: > I was thinking about this in the context of Giudo's comments at scipy > 2006 that much of the language is moving away from lists toward > iterators. He gave the keys of a dict as an example. > > Numpy treats iterators, generators, etc as 0x0 PyObjects rather than > lazy generators of n dimensional data. I guess my question for Travis > (any others much more expert than I in numpy) is is this intentional > or is it something that was never implemented because of the obvious > subtlties of defiing the correct semantics to make this work. > > It's not intentional, it's just that iterators came later and I did not try to figure out how to "do the right thing" in the array function. Thanks to Tim Hochberg, there is a separate fromiter function that creates arrays from iterators. > Personally i find it no big deal to use array(list(iter)) in the 1d > case and the list function combined with a list comprehension for the > 2d case. I usually know how many dimensions i expect so i find this > easy and i know about this peculiar behavior. 
> I find, however, that > this behavior is very suprising and confusing to the new user and i > don't usually have a good justification for it to answer them. > The problem is that NumPy arrays need to know both how big they are and what data-type they are. With iterators you have to basically construct the whole thing before you can even interrogate that question. Iterators were not part of the language when Numeric (from which NumPy got its code base) was created. > The ideal semantics, in my mind, would be if an iterator of iterators > of iterators, etc was no different in numpy than a list of lists of > lists, etc. But I have no doubt that there are subtleties i am not > considering. Has anyone more familiar than I with the bowels of numpy > thought about this problem and see reasons why this is a bad idea or > just prohibitively difficult to implement? > It's been discussed before and ideas have been considered. Right now, the fromiter function carries the load. Whether or not to bring that functionality into the array function itself has been met with hesitancy because of how bulky the array function already is. -Travis From numpy at mspacek.mm.st Mon Aug 28 03:01:57 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Mon, 28 Aug 2006 00:01:57 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F1BC18.6090401@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> <44F1BC18.6090401@ieee.org> Message-ID: <44F294E5.8020008@mspacek.mm.st> Tim Hochberg wrote: > I'm actually surprised that the take version is faster than my original > version since it makes a big ol' copy. I guess this is an indication > that indexing is more expensive than I realize. That's why nothing beats > measuring! Actually, your original version is just as fast as the take() version. Both are about 9X faster than numpy.mean() on my system.
I prefer the take() version because you only have to pass a single argument to mean_accum() Martin From numpy at mspacek.mm.st Mon Aug 28 03:13:14 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Mon, 28 Aug 2006 00:13:14 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F294E5.8020008@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> <44F1BC18.6090401@ieee.org> <44F294E5.8020008@mspacek.mm.st> Message-ID: <44F2978A.1070509@mspacek.mm.st> Martin Spacek wrote: > > Actually, your original version is just as fast as the take() version. > Both are about 9X faster than numpy.mean() on my system. I prefer the > take() version because you only have to pass a single argument to > mean_accum() I forgot to mention that all my indices are, for now, sorted. I just tried shuffling them (as you did), but I still get the same 9x improvement in speed, so I don't know why you only get a 4x improvement on your system. Martin From svetosch at gmx.net Mon Aug 28 04:31:47 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Mon, 28 Aug 2006 10:31:47 +0200 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: <44F2A9F3.3070606@gmx.net> Charles R Harris schrieb: > +1. I too suspect that what you have here is a reference/copy problem. > The only thing that is local to the class is the reference (pointer), > the data is global. > > Chuck Ok, so you guys were right, turns out that my problem was caused by the fact that a local assignment like x = y is also by reference only, which I wasn't really aware of. (Of course, it's explained in Travis' book...) So that behavior is different from standard python assignments, isn't it? Sorry for the noise. 
-Sven From wbaxter at gmail.com Mon Aug 28 05:17:35 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 28 Aug 2006 18:17:35 +0900 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: <44F2A9F3.3070606@gmx.net> References: <44F03AEA.7010403@gmx.net> <44F2A9F3.3070606@gmx.net> Message-ID: Nope, that's the way python works in general for any type other than basic scalar types. >>> a = [1,2,3,4] >>> b = a >>> b[1] = 99 >>> print a [1, 99, 3, 4] >>> print b [1, 99, 3, 4] Also the issue never comes up for types like tuples or strings because they aren't mutable. --bb On 8/28/06, Sven Schreiber wrote: > Charles R Harris schrieb: > > +1. I too suspect that what you have here is a reference/copy problem. > > The only thing that is local to the class is the reference (pointer), > > the data is global. > > > > Chuck > > Ok, so you guys were right, turns out that my problem was caused by the > fact that a local assignment like x = y is also by reference only, which > I wasn't really aware of. (Of course, it's explained in Travis' book...) > So that behavior is different from standard python assignments, isn't it? > > Sorry for the noise. > > -Sven > From mattknox_ca at hotmail.com Mon Aug 28 10:02:34 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Mon, 28 Aug 2006 10:02:34 -0400 Subject: [Numpy-discussion] C Api newbie question Message-ID: > Matt Knox wrote: >> Hi there.
>> I'm in the unfortunate situation of trying to track down a memory error in someone else's code, and to make matters worse I don't really know jack squat about C programming. The problem seems to arise when several numpy arrays are created from C arrays in the C api and returned to python, and then trying to print out or cast to a string the resulting array. I think the problem may be happening due to the following chunk of code: >> { >> PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, dbValues); >> PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp, ((PyArrayObject*)temp)->descr, DEFAULT_FLAGS | ENSURECOPY); >> Py_DECREF(temp); >> PyDict_SetItemString(returnVal, "data", temp2); >> Py_DECREF(temp2); >> } >> Let's assume that all my other inputs up to this point are fine and that numobjs, typeNum, and dbValues are fine. Is there anything obviously wrong with the above chunk of code, or does it appear ok? Ultimately the dictionary "returnVal" is returned by the function this code came from, and everything else is discarded. Any help is very greatly appreciated. Thanks in advance, > You didn't indicate what kind of trouble you are having. > First of all, this is kind of odd style. Why is a new array created from a data-pointer and then copied using PyArray_FromArray (the ENSURECOPY flag will give you a copy)? Using > temp2 = PyArray_Copy(temp) > seems simpler. This will also avoid the reference-count problem that is currently happening in the PyArray_FromArray call on the descr structure. Any array-creation function that takes a descr structure "steals" a reference to it, so you need to increment the reference count if you are passing an unowned reference to a ->descr structure. > -Travis Sorry. Yeah, the problem was the interpreter crashing on exit, which after your response definitely seems like it was a reference count issue. I changed the PyArray_FromArray call to be PyArray_Copy and it seems to work fine.
Thank you very much! Love the numpy stuff (when I can stay in the python world and not mess withthe C stuff :) ). Keep up the great work! - Matt _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From rex at nosyntax.com Mon Aug 28 10:36:38 2006 From: rex at nosyntax.com (rex) Date: Mon, 28 Aug 2006 07:36:38 -0700 Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32 Message-ID: <20060828143638.GB5139@x2.nosyntax.com> Numpy builds, but fails to run with the error message: > python Python 2.4.2 (#1, Apr 24 2006, 18:13:30) [GCC 4.1.0 (SUSE 10.1 Linux)] on linux2 >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ? import core File "/usr/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ? import multiarray ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode Build was without BLAS or LAPACK. Results were the same when Intel MKL was used. python setup.py install >& inst.log Running from numpy source directory. non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_3078 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE [...] running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. adding 'build/src.linux-i686-2.4/numpy/core/src' to include_dirs. 
numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/src/scalartypes .inc', 'build/src.linux-i686-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-i686-2.4/nu mpy/core/config.h', 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h' to sources. adding 'build/src.linux-i686-2.4/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/src/scalartypes .inc', 'build/src.linux-i686-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-i686-2.4/nu mpy/core/config.h', 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/config.h', 'bui ld/src.linux-i686-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h' to sources. 
numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/config.h', 'bui ld/src.linux-i686-2.4/numpy/core/__multiarray_api.h', 'build/src.linux-i686-2.4/numpy/core/__ ufunc_api.h'] building extension "numpy.core._dotblas" sources building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources ### Warning: Using unoptimized lapack ### adding 'numpy/linalg/lapack_litemodule.c' to sources. adding 'numpy/linalg/zlapack_lite.c' to sources. adding 'numpy/linalg/dlapack_lite.c' to sources. adding 'numpy/linalg/blas_lite.c' to sources. adding 'numpy/linalg/dlamch.c' to sources. adding 'numpy/linalg/f2c_lite.c' to sources. building extension "numpy.random.mtrand" sources Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config ******************************************************************************************* C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c:7:2: error: #error No _WIN32 _configtest.c:7:2: error: #error No _WIN32 failure. removing: _configtest.c _configtest.o ******************************************************************************************* building data_files sources [...] 
changing mode of /usr/bin/f2py to 755 running install_data copying build/src.linux-i686-2.4/numpy/core/__multiarray_api.h -> /usr/lib/python2.4/site-pac kages/numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/multiarray_api.txt -> /usr/lib/python2.4/site-pac kages/numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/__ufunc_api.h -> /usr/lib/python2.4/site-packages /numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/ufunc_api.txt -> /usr/lib/python2.4/site-packages /numpy/core/include/numpy Any pointers would be much appreciated. This isn't the first time I've spent days trying to get SciPy built under SUSE... :( -rex From Chris.Barker at noaa.gov Mon Aug 28 13:48:11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 10:48:11 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED02D8.6030401@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> Message-ID: <44F32C5B.8010101@noaa.gov> Travis Oliphant wrote: > Instead, I like better the idea of adding abs, round, max, and min to > the "non-import-*" namespace of numpy. Another I'd like is the built-in data types. I always use: import numpy as N so then I do: a = zeros(shape, float) or a = zeros(shape, N.float_) but for non-built-in types, I can't do the former. The underscore is minor but why not just have: float = float in numpy.py? (and of course, the others) -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Mon Aug 28 15:34:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 13:34:03 -0600 Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32 In-Reply-To: <20060828143638.GB5139@x2.nosyntax.com> References: <20060828143638.GB5139@x2.nosyntax.com> Message-ID: <44F3452B.7030000@ieee.org> rex wrote: > Numpy builds, but fails to run with the error message: > > >> python >> > Python 2.4.2 (#1, Apr 24 2006, 18:13:30) > [GCC 4.1.0 (SUSE 10.1 Linux)] on linux2 > >>>> import numpy >>>> > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ? > import core > File "/usr/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ? > import multiarray > ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode > > > This error usually means that NumPy was built and linked against a Python build where unicode strings were 2-bytes per character but you are trying to import it on a Python build where unicode strings are 4-bytes per character. Perhaps you have changed your build of Python and did not remove the build directory of NumPy. Try rm -fr build in the numpy directory (where you run setup.py) and build again. 
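[A minimal sketch of how to see which flavor the running interpreter was built with; this uses only stdlib behavior (sys.maxunicode) and is not part of the original message.]

```python
import sys

# On a narrow (UCS2) build sys.maxunicode is 0xFFFF (65535);
# on a wide (UCS4) build it is 0x10FFFF (1114111).
if sys.maxunicode == 0xFFFF:
    print("narrow build: 2 bytes per unicode character")
else:
    print("wide build: 4 bytes per unicode character")
```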
You can tell how many bytes-per-unicode character your system is built with by looking at the output of sys.maxunicode From oliphant.travis at ieee.org Mon Aug 28 15:36:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 13:36:24 -0600 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44F32C5B.8010101@noaa.gov> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <44F32C5B.8010101@noaa.gov> Message-ID: <44F345B8.6070705@ieee.org> Christopher Barker wrote: > Travis Oliphant wrote: > > >> Instead, I like better the idea of adding abs, round, max, and min to >> the "non-import-*" namespace of numpy. >> > > Another I'd like is the built-in data types. I always use: > > import numpy as N > > so then I do: > > a = zeros(shape, float) > or > a = zeros(shape, N.float_) > > but for non-built-in types, I can't do the former. > > The underscore is minor but why not just have: > > float = float > > in numpy.py? > > (and of course, the others) > I think I prefer to just add the float, bool, object, unicode, str names to the "non-imported" numpy name-space. -Travis From strawman at astraw.com Mon Aug 28 16:15:40 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 Aug 2006 13:15:40 -0700 Subject: [Numpy-discussion] Numeric/numpy incompatibility Message-ID: <44F34EEC.7060505@astraw.com> The following code indicates there is a problem adding a numpy scalar type to a Numeric array. Is this expected behavior or is there a bug somewhere? This bit me in the context of updating some of my code to numpy, while part of it still uses Numeric. 
import Numeric import numpy print 'Numeric.__version__',Numeric.__version__ print 'numpy.__version__',numpy.__version__ a = Numeric.zeros( (10,2), Numeric.Float ) b = numpy.float64(23.39) a[0,1] = a[0,1] + b assert a[0,1]==b From rex at nosyntax.com Mon Aug 28 16:52:49 2006 From: rex at nosyntax.com (rex) Date: Mon, 28 Aug 2006 13:52:49 -0700 Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32 In-Reply-To: <44F3452B.7030000@ieee.org> References: <20060828143638.GB5139@x2.nosyntax.com> <44F3452B.7030000@ieee.org> Message-ID: <20060828205249.GF5139@x2.nosyntax.com> Travis Oliphant [2006-08-28 12:42]: > rex wrote: > > ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode > > > > > > > > This error usually means that NumPy was built and linked against a > Python build where unicode strings were 2-bytes per character but you > are trying to import it on a Python build where unicode strings are > 4-bytes per character. Perhaps you have changed your build of Python > and did not remove the build directory of NumPy. > > Try > > rm -fr build > > in the numpy directory (where you run setup.py) and build again. Ah! THANK YOU! Python 2.4.2 (#1, May 2 2006, 08:13:46) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 155 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 10 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 4 tests for numpy.ctypeslib Found 39 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 8 tests for numpy.core.records Found 26 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 481 tests in 1.956s OK Now on to doing it again with MKL... >From the numpy directory: rm -fr build cp site_mkl.cfg site.cfg where site_mkl.cfg is: ----------------------------------------------------------------------- [DEFAULT] library_dirs=/opt/intel/mkl/8.1/lib/32 include_dirs=/opt/intel/mkl/8.1/include [blas_opt] libraries=libmkl.so,libmkl_p3.so,libmkl_vml_p3.so,libmkl_ia32.a,libguide.so,libmkl_def.so #libraries=whatever_the_mkl_blas_lib_is,mkl_ia32,mkl,guide [lapack_opt] libraries=libmkl_lapack32.so,libmkl_lapack.a, #libraries=mkl_lapack,mkl_lapack32,mkl_ia32,mkl,guide ---------------------------------------------------------------------- python setup.py install >& inst.log Looks OK, so in another window: python Python 2.4.2 (#1, May 2 2006, 08:13:46) [GCC 4.1.0 (SUSE Linux)] on linux2 >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 39, in ? import linalg File "/usr/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? 
from linalg import * File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 25, in ? from numpy.linalg import lapack_lite ImportError: libmkl_lapack32.so: cannot open shared object file: No such file or directory >>> Oops! ^d export INCLUDE=/opt/intel/mkl/8.1/include:$INCLUDE export LD_LIBRARY_PATH=/opt/intel/mkl/8.1/lib/32:$LD_LIBRARY_PATH python Python 2.4.2 (#1, May 2 2006, 08:13:46) [GCC 4.1.0 (SUSE Linux)] on linux2 >>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 155 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 10 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 4 tests for numpy.ctypeslib Found 39 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 8 tests for numpy.core.records Found 26 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 481 tests in 2.152s OK Now off to build SciPy. Thanks again! -rex From oliphant.travis at ieee.org Mon Aug 28 16:56:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 14:56:53 -0600 Subject: [Numpy-discussion] Numeric/numpy incompatibility In-Reply-To: <44F34EEC.7060505@astraw.com> References: <44F34EEC.7060505@astraw.com> Message-ID: <44F35895.3070501@ieee.org> Andrew Straw wrote: > The following code indicates there is a problem adding a numpy scalar > type to a Numeric array. Is this expected behavior or is there a bug > somewhere? 
> There was a bug in the __array_struct__ attribute of array flags wherein the NOTSWAPPED flag was not being set as it should be. This is fixed in SVN. -Travis From carlosjosepita at yahoo.com.ar Mon Aug 28 17:16:36 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Mon, 28 Aug 2006 21:16:36 +0000 (GMT) Subject: [Numpy-discussion] weave using numeric or numpy? Message-ID: <20060828211636.55953.qmail@web50314.mail.yahoo.com> Hi all! I'm rewriting some swig-based extensions that implement intensive inner loops dealing with numeric/numpy arrays. The intention is to build these extensions by means of weave inline, ext_module, ext_function, etc. I'm not sure about how to point weave to my numpy installation. By default it tries to include "Numeric/arrayobject.h" and fails even if you hack things to get that resolved to the numpy arrayobject.h (for example, it complains that PyArray_SBYTE is undefined). Anyway, even if I managed to force weave to compile against numpy/arrayobject.h, I'd still not be sure about the "runtime" that will be chosen. I'm very confused at this point: no library flags are provided at compile/link time, so how is the runtime selected between numpy, Numeric (or even numarray)? Thank you in advance. Best regards, Carlos -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kortmann at ideaworks.com Mon Aug 28 17:35:59 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Mon, 28 Aug 2006 14:35:59 -0700 (PDT) Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 Message-ID: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> On 8/25/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > Message: 4 > > Date: Thu, 24 Aug 2006 14:17:44 -0600 > > From: Travis Oliphant > > Subject: Re: [Numpy-discussion] (no subject) > > To: Discussion of Numerical Python > > > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > > kortmann at ideaworks.com wrote: > > > > > > > > You have a module built against an older version of NumPy. What modules > > are being loaded? Perhaps it is matplotlib or SciPy > > > > You need to re-build matplotlib. They should be producing a binary that > is compatible with 1.0b2 (I'm being careful to make sure future releases > are binary compatible with 1.0b2). > > Also, make sure that you remove the build directory under numpy if you > have previously built a version of numpy prior to 1.0b2. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Travis I have recompiled everything. I removed SciPy, numpy and matplotlib. 
I installed the numpy 1.0b4 win32exe, then installed SciPy 0.5, and then the latest matplotlib 0.87.4. I received this error at first, which is a matplotlib error: C:\Lameness>c:\python23\python templatewindow.py Traceback (most recent call last): File "templatewindow.py", line 7, in ? import wxmpl File "c:\python23\lib\site-packages\wxmpl.py", line 25, in ? import matplotlib.numerix as Numeric File "C:\PYTHON23\Lib\site-packages\matplotlib\numerix\__init__.py", line 74, in ? Matrix = matrix NameError: name 'matrix' is not defined I then switched matplotlib to use Numeric, and I receive this error once again: Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. abnormal program termination I googled the error and also found this thread, but have not found a solution: http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01700.html any help? 
thanks -Kenny From oliphant.travis at ieee.org Mon Aug 28 17:51:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 15:51:53 -0600 Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 In-Reply-To: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> References: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> Message-ID: <44F36579.7070502@ieee.org> kortmann at ideaworks.com wrote: > On 8/25/06, Travis Oliphant wrote: > >> kortmann at ideaworks.com wrote: >> >>> Message: 4 >>> Date: Thu, 24 Aug 2006 14:17:44 -0600 >>> From: Travis Oliphant >>> Subject: Re: [Numpy-discussion] (no subject) >>> To: Discussion of Numerical Python >>> >>> Message-ID: <44EE0968.1030904 at ee.byu.edu> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>> >>> kortmann at ideaworks.com wrote: >>> >>> >>> >>> You have a module built against an older version of NumPy. What modules >>> are being loaded? Perhaps it is matplotlib or SciPy >>> >>> >> You need to re-build matplotlib. They should be producing a binary that >> is compatible with 1.0b2 (I'm being careful to make sure future releases >> are binary compatible with 1.0b2). >> >> Also, make sure that you remove the build directory under numpy if you >> have previously built a version of numpy prior to 1.0b2. >> You have to download the SVN version of matplotlib. The released version does not support 1.0b2 and above yet. 
-Travis From Chris.Barker at noaa.gov Mon Aug 28 19:11:46 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 16:11:46 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44F345B8.6070705@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <44F32C5B.8010101@noaa.gov> <44F345B8.6070705@ieee.org> Message-ID: <44F37832.2020804@noaa.gov> Travis Oliphant wrote: > I think I prefer to just add the float, bool, object, unicode, str names > to the "non-imported" numpy > name-space. which mean you get it with: import numpy as N N.float but not with from numpy import * ? If that's what you mean, then I'm all for it! -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Mon Aug 28 19:25:44 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 16:25:44 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? Message-ID: <44F37B78.2050009@noaa.gov> HI all, Robin Dunn has been working on adding better support for dumping data directly to wxPython from the num* packages. I've been talking to him about the new array interface, and he might well support it (particularly if one of us contributes code), but in the meantime, he's got a number of things working with python buffers. For instance: wx.Image.SetDataBuffer(dataBuffer) That sets the data for a wxImage to the buffer handed in. This isn't as nice as the array protocol, as it has no way of checking anything other than if the length of the buffer is correct, but it is a good way to maximize performance for this sort of thing. 
he's now working on adding methods for creating wx.Bitmaps directly from buffers. In the process if testing some of this, I discovered that numarray (which Robin is testing with) works fine, but numpy does not. I get: File "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", line 2814, in SetDataBuffer return _core_.Image_SetDataBuffer(*args, **kwargs) TypeError: non-character array cannot be interpreted as character buffer If I try to pass in a numpy array, while it works great with a numarray array. While I'm a great advocate of the new array protocol, it seems supporting the buffer protocol also would be a good idea. I've enclosed some simple test code. It works with numarray, but not numpy 1.0b4 Tested with Python 2.4.3, wxPython 2.6.3.0, Linux fedora core4 -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- A non-text attachment was scrubbed... Name: ImageBuffer2.py Type: text/x-python Size: 793 bytes Desc: not available URL: From oliphant.travis at ieee.org Mon Aug 28 19:32:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 17:32:20 -0600 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37B78.2050009@noaa.gov> References: <44F37B78.2050009@noaa.gov> Message-ID: <44F37D04.10807@ieee.org> Christopher Barker wrote: > HI all, > > File > "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", > line 2814, in SetDataBuffer > return _core_.Image_SetDataBuffer(*args, **kwargs) > TypeError: non-character array cannot be interpreted as character buffer > > If I try to pass in a numpy array, while it works great with a > numarray array. This error sounds like wx is using the *wrong* buffer protocol. Don't use bf_getcharbuffer as it is of uncertain utility. 
It is slated for removal from Python 3000. It was meant to be used as a way to determine buffers that were supposed to contain characters (not arbitrary data). Just use bf_getreadbuffer and bf_getwritebuffer from tp_as_buffer. More support for the buffer protocol all the way around is a good idea. NumPy has always supported it very well (just make sure to use it correctly). FYI, I'm going to write a PEP to get the array protocol placed as an add-on to the buffer protocol for Python 2.6 -Travis From robert.kern at gmail.com Mon Aug 28 19:37:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Aug 2006 18:37:57 -0500 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37B78.2050009@noaa.gov> References: <44F37B78.2050009@noaa.gov> Message-ID: Christopher Barker wrote: > While I'm a great advocate of the new array protocol, it seems > supporting the buffer protocol also would be a good idea. I've enclosed > some simple test code. It works with numarray, but not numpy 1.0b4 Instead of I.SetDataBuffer(some_array) you can use I.SetDataBuffer(buffer(some_array)) and it seems to work on OS X with Python 2.4, numpy 1.0b2 and wxMac 2.6.3.3 . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From torgil.svensson at gmail.com Mon Aug 28 20:24:37 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 02:24:37 +0200 Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 In-Reply-To: <44F36579.7070502@ieee.org> References: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> <44F36579.7070502@ieee.org> Message-ID: This is really a matplotlib problem. 
>From matplotlib users mailing-list archives: > From: Charlie Moad > Snapshot build for use with numpy-1.0b3 > 2006-08-23 06:11 > > Here is a snapshot of svn this morning for those wanting to work with the numpy beta. Both builds are for python2.4 and windows. > > exe: http://tinyurl.com/gf299 > egg: http://tinyurl.com/fbjmg > > -Charlie That exe-file worked for me. //Torgil On 8/28/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > On 8/25/06, Travis Oliphant wrote: > > > >> kortmann at ideaworks.com wrote: > >> > >>> Message: 4 > >>> Date: Thu, 24 Aug 2006 14:17:44 -0600 > >>> From: Travis Oliphant > >>> Subject: Re: [Numpy-discussion] (no subject) > >>> To: Discussion of Numerical Python > >>> > >>> Message-ID: <44EE0968.1030904 at ee.byu.edu> > >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed > >>> > >>> kortmann at ideaworks.com wrote: > >>> > >>> > >>> > >>> You have a module built against an older version of NumPy. What modules > >>> are being loaded? Perhaps it is matplotlib or SciPy > >>> > >>> > >> You need to re-build matplotlib. They should be producing a binary that > >> is compatible with 1.0b2 (I'm being careful to make sure future releases > >> are binary compatible with 1.0b2). > >> > >> Also, make sure that you remove the build directory under numpy if you > >> have previously built a version of numpy prior to 1.0b2. > >> > > You have to download the SVN version of matplotlib. The released > version does not support 1.0b2 and above yet. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From torgil.svensson at gmail.com Mon Aug 28 20:30:58 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 02:30:58 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: <44F14093.7080001@ieee.org> References: <44F14093.7080001@ieee.org> Message-ID: > The C-code is basically a directy "translation" of the original Python > code. ... > If I had to do it over again, I would place the std implementation there where > it could be appropriately optimized. Isn't C-code a good place for optimizations? //Torgil On 8/27/06, Travis Oliphant wrote: > Torgil Svensson wrote: > > Hi > > > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > > first thought I had a performance issue but discovered that std() used > > lots of memory and therefore caused lots of swapping. > > > There are certainly lots of intermediate arrays created as the > calculation proceeds. The calculation is not particularly "smart." It > just does the basic averaging and multiplication needed. > > > I want to get an array where element i is the stadard deviation of row > > i in the 2D array. Using valgrind on the std() function... > > > > $ valgrind --tool=massif python -c "from numpy import *; > > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > > > ... showed me a peak of 200Mb memory while iterating line by line... > > > > > The C-code is basically a directy "translation" of the original Python > code. There are lots of temporaries created (apparently 5 at one point > :-). 
I did this before I had the _internal.py code in place where I > place Python functions that need to be accessed from C. If I had to do > it over again, I would place the std implementation there where it could > be appropriately optimized. > > > > -Travis > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Mon Aug 28 23:03:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 21:03:29 -0600 Subject: [Numpy-discussion] tensor dot ? In-Reply-To: <20060825124219.6581a608.simon@arrowtheory.com> References: <20060825124219.6581a608.simon@arrowtheory.com> Message-ID: <44F3AE81.7010305@ieee.org> Simon Burton wrote: >>>> numpy.dot.__doc__ >>>> > matrixproduct(a,b) > Returns the dot product of a and b for arrays of floating point types. > Like the generic numpy equivalent the product sum is over > the last dimension of a and the second-to-last dimension of b. > NB: The first argument is not conjugated. > > Does numpy support summing over arbitrary dimensions, > as in tensor calculus ? > > I could cook up something that uses transpose and dot, but it's > reasonably tricky i think :) > I've just added tensordot to NumPy (adapted and enhanced from numarray). It allows you to sum over an arbitrary number of axes. It uses a 2-d dot-product internally as that is optimized if you have a fast blas installed. 
Example: If a.shape is (3,4,5) and b.shape is (4,3,2) Then tensordot(a, b, axes=([1,0],[0,1])) returns a (5,2) array which is equivalent to the code: c = zeros((5,2)) for i in range(5): for j in range(2): for k in range(3): for l in range(4): c[i,j] += a[k,l,i]*b[l,k,j] -Travis From wbaxter at gmail.com Mon Aug 28 23:55:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 29 Aug 2006 12:55:06 +0900 Subject: [Numpy-discussion] tensor dot ? In-Reply-To: <44F3AE81.7010305@ieee.org> References: <20060825124219.6581a608.simon@arrowtheory.com> <44F3AE81.7010305@ieee.org> Message-ID: On 8/29/06, Travis Oliphant wrote: > Example: > > If a.shape is (3,4,5) > and b.shape is (4,3,2) > > Then > > tensordot(a, b, axes=([1,0],[0,1])) > > returns a (5,2) array which is equivalent to the code: > > c = zeros((5,2)) > for i in range(5): > for j in range(2): > for k in range(3): > for l in range(4): > c[i,j] += a[k,l,i]*b[l,k,j] That's pretty cool. >From there it shouldn't be too hard to make a wrapper that would allow you to write c_ji = a_kli * b_lkj (w/sum over k and l) like: tensordot_ez(a,'kli', b,'lkj', out='ji') or maybe with numexpr-like syntax: tensor_expr('_ji = a_kli * b_lkj') [pulling a and b out of the globals()/locals()] Might be neat to be able to build a callable function for repeated reuse: tprod = tensor_func('_ji = [0]_kli * [1]_lkj') # [0] and [1] become parameters 0 and 1 c = tprod(a, b) or to pass the output through a (potentially reused) array argument: tprod1 = tensor_func('[0]_ji = [1]_kli * [2]_lkj') tprod1(c, a, b) --bb From pgmdevlist at gmail.com Tue Aug 29 01:25:25 2006 From: pgmdevlist at gmail.com (PGM) Date: Tue, 29 Aug 2006 01:25:25 -0400 Subject: [Numpy-discussion] A minor annoyance with MA Message-ID: <200608290125.25232.pgmdevlist@gmail.com> Folks, I keep running into the following problem since some recent update (I'm currently running 1.0b3, but the problem occurred roughly around 0.9.8): >>> import numpy.core.ma as MA >>> 
x=MA.array([[1],[2]],mask=False) >>> x.sum(None) /usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, target, axis, dtype) 393 m.shape = (1,) 394 if m is nomask: --> 395 return masked_array (self.f.reduce (t, axis)) 396 else: 397 t = masked_array (t, m) TypeError: an integer is required #................................ Note that x.sum(0) and x.sum(1) work fine. I know some consensus seems to be lacking with MA, but still, I can't see why axis=None is not recognized. Corollary: with masked array, the default axis for sum is 0, when it's None for regular arrays. Is there a reason for this inconsistency ? Thanks a lot From robin at alldunn.com Tue Aug 29 02:09:35 2006 From: robin at alldunn.com (Robin Dunn) Date: Mon, 28 Aug 2006 23:09:35 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37D04.10807@ieee.org> References: <44F37B78.2050009@noaa.gov> <44F37D04.10807@ieee.org> Message-ID: <44F3DA1F.4020007@alldunn.com> Travis Oliphant wrote: > Christopher Barker wrote: >> HI all, >> >> File >> "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", >> line 2814, in SetDataBuffer >> return _core_.Image_SetDataBuffer(*args, **kwargs) >> TypeError: non-character array cannot be interpreted as character buffer >> >> If I try to pass in a numpy array, while it works great with a >> numarray array. > > This error sounds like wx is using the *wrong* buffer protocol. Don't > use bf_getcharbuffer as it is of uncertain utility. It is slated for > removal from Python 3000. It was meant to be used as a way to determine > buffers that were supposed to contain characters (not arbitrary data). > > Just use bf_getreadbuffer and bf_getwritebuffer from tp_as_buffer. I'm using PyArg_Parse($input, "t#", ...) to get the buffer pointer and size. Is there another format specifier to use for the buffer pointer using the other slots or do I need to drop down to a lower level API to get it? 
I didn't realize there was a distinction between buffer and character buffer. Another read of the PyArg_Parse docs with that new fact makes things a little more clear. Looking at the code I guess "s#" will do it, I guess I thought it would try to coerce the object to a PyString like some other APIs do, which I was trying to avoid, but it doesn't appear to do that, (only encoding a unicode object if that is passed.) I think I'll take a shot at using tp_as_buffer directly to avoid any confusion in the future and avoid the arg parse overhead... Any other suggestions? BTW Chris, try using buffer(RGB) and buffer(Alpha) in your sample, I expect that will work with the current code. -- Robin Dunn Software Craftsman http://wxPython.org Java give you jitters? Relax with wxPython! From bruce.who.hk at gmail.com Tue Aug 29 02:03:10 2006 From: bruce.who.hk at gmail.com (Bruce Who) Date: Tue, 29 Aug 2006 14:03:10 +0800 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F341E4.7000003@ieee.org> References: <44F01802.8050505@ieee.org> <200608281448353906004@gmail.com> <44F341E4.7000003@ieee.org> Message-ID: Hi, Travis I can pack my scripts into an executable with py2exe, but errors occur once it runs: No scipy-style subpackage 'random' found in D:\test\dist\numpy. Ignoring: No module named info import core -> failed: No module named _internal import lib -> failed: 'module' object has no attribute '_ARRAY_API' import linalg -> failed: 'module' object has no attribute '_ARRAY_API' import dft -> failed: 'module' object has no attribute '_ARRAY_API' Traceback (most recent call last): File "main.py", line 9, in ? File "numpy\__init__.pyc", line 49, in ? File "numpy\add_newdocs.pyc", line 2, in ? File "numpy\lib\__init__.pyc", line 5, in ? File "numpy\lib\type_check.pyc", line 8, in ? File "numpy\core\__init__.pyc", line 6, in ? File "numpy\core\umath.pyc", line 12, in ? 
File "numpy\core\umath.pyc", line 10, in __load
AttributeError: 'module' object has no attribute '_ARRAY_API'

This is the main.py file:

#=======================================
# filename: main.py
import wx
import numpy

class myFrame(wx.Frame):
    def __init__(self, *args, **kwds):
        wx.Frame.__init__(self, *args, **kwds)
        ##------ your widgets
        ##------ put stuff into sizer
        self.sizer_ = wx.BoxSizer(wx.VERTICAL)
        ## self.sizer_.Add(your_ctrl, proportion = 1, flag = wx.EXPAND)
        ## apply sizer
        self.SetSizer(self.sizer_)
        self.SetAutoLayout(True)

def main():
    ## {{{
    app = wx.PySimpleApp(0)
    frame = myFrame(None, -1, title = '')
    frame.Show(True)
    app.SetTopWindow(frame)
    app.MainLoop()
    ## }}}

if __name__ == "__main__": main()

#=======================================
# filename: setup.py
import glob
import sys
from distutils.core import setup
import py2exe

includes = ["encodings", "encodings.*", ]
excludes = ["javax.comm"]
options = { "py2exe": { #"compressed": 1,
                        #"optimize": 0,
                        #"bundle_files": 2,
                        "skip_archive": 1,
                        "includes": includes,
                        'excludes': excludes } }
setup(
    version = "0.1",
    description = "",
    name = "test",
    options = options,
    windows = [ { "script": "main.py", } ],
    #zipfile = None,
)

and I run this command to compile the scripts:

python setup.py py2exe

The packages I use are: python2.4.3, numpy-0.98, py2exe-0.6.5, wxpython-2.6.3.2. I uninstalled Numeric before I compiled the scripts. If you google "numpy py2exe", you can easily find other guys who stumbled on the same issue:

http://aspn.activestate.com/ASPN/Mail/Message/py2exe-users/3249182
http://www.nabble.com/matplotlib,-numpy-and-py2exe-t1901429.html

I just hope this can be fixed in the next stable release of numpy.

On 8/29/06, Travis Oliphant wrote:
> bruce.who.hk wrote:
> > Hi, Travis
> >
> > I just wonder if NumPy 1.0b4 can get along with py2exe? Just a few
> > weeks ago I made an application in Python.
> > At first I used NumPy; it worked OK, but I could not pack it into a
> > workable executable with py2exe, and the XXX.log said that numpy could
> > not find some module. I found some hints in the py2exe wiki, but it
> > still doesn't work. At last I tried Numeric instead and it worked. I
> > just hope that you do not stop the maintenance of Numeric before you
> > are sure that NumPy can work with py2exe.
>
> We've already stopped maintenance of Numeric nearly a year ago. If
> NumPy doesn't work with py2exe then we need help figuring out why. The
> beta-release period is the perfect time to fix that. I've never used
> py2exe myself, but I seem to recall that some have been able to make it
> work.
>
> The problem may just be listing the right set of modules to carry along
> because you may not be able to get that with just the Python-side
> imports. Post any errors you receive to
> numpy-discussion at lists.sourceforge.net
>
> Thanks,
>
> -Travis

Bruce Who

From tcorcelle at yahoo.fr Tue Aug 29 06:01:32 2006
From: tcorcelle at yahoo.fr (tristan CORCELLE)
Date: Tue, 29 Aug 2006 10:01:32 +0000 (GMT)
Subject: [Numpy-discussion] Py2exe / numpy troubles
Message-ID: <20060829100135.12485.qmail@web26511.mail.ukl.yahoo.com>

Hello,

I am having troubles with py2exe and numpy/matplotlib...
Configuration : Windows XP pro ActivePython 2.4.2.10 Scipy 0.4.9 Numpy 0.9.8 MatplotLib 0.87.1 Py2exe 0.6.5 WxPython 2.6 I am using the following setup.py file: #--------------------------------------------------------- from distutils.core import setup import py2exe from distutils.filelist import findall import os import matplotlib matplotlibdatadir = matplotlib.get_data_path() matplotlibdata = findall(matplotlibdatadir) matplotlibdata_files = [] for f in matplotlibdata: dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) matplotlibdata_files.append((os.path.split(dirname)[0], [f])) packages = ['matplotlib', 'pytz'] includes = [] excludes = [] dll_excludes = ['libgdk_pixbuf-2.0-0.dll', 'libgobject-2.0-0.dll', 'libgdk-win32-2.0-0.dll', 'wxmsw26uh_vc.dll'] opts = { 'py2exe': { 'packages' : packages, 'includes' : includes, 'excludes' : excludes, 'dll_excludes' : dll_excludes } } setup ( console=['test.py'], options = opts, data_files = matplotlibdata_files ) #----------------------------- EOF --------------------------- I compile the application by running ">setup.py py2exe" At the end of compilation phase, it is written : The following modules appear to be missing ['AppKit', 'FFT', 'Foundation', 'Image', 'LinearAlgebra', 'MA', 'MLab', 'Matrix', 'Numeric', 'PyObjCTools', 'P yQt4', 'Pyrex', 'Pyrex.Compiler', 'RandomArray', '_curses', '_ssl', 'backends.draw_if_interactive', 'backends. new_figure_manager', 'backends.pylab_setup', 'backends.show', 'cairo', 'cairo.gtk', 'fcompiler.FCompiler', 'fc ompiler.show_fcompilers', 'fltk', 'gd', 'gobject', 'gtk', 'lib.add_newdoc', 'matplotlib.enthought.pyface.actio n', 'mlab.amax', 'mlab.amin', 'numarray', 'numarray.convolve', 'numarray.fft', 'numarray.ieeespecial', 'numarr ay.linear_algebra', 'numarray.linear_algebra.mlab', 'numarray.ma', 'numarray.numeric', 'numarray.random_array' , 'numerix.ArrayType', 'numerix.Complex', 'numerix.Complex32', 'numerix.Complex64', 'numerix.Float', 'numerix. 
Float32', 'numerix.Float64', 'numerix.Int', 'numerix.Int16', 'numerix.Int32', 'numerix.Int8', 'numerix.NewAxis ', 'numerix.UInt16', 'numerix.UInt32', 'numerix.UInt8', 'numerix.absolute', 'numerix.add', 'numerix.all', 'num erix.allclose', 'numerix.alltrue', 'numerix.arange', 'numerix.arccos', 'numerix.arccosh', 'numerix.arcsin', 'n umerix.arcsinh', 'numerix.arctan', 'numerix.arctan2', 'numerix.arctanh', 'numerix.argmax', 'numerix.argmin', ' numerix.argsort', 'numerix.around', 'numerix.array', 'numerix.arrayrange', 'numerix.asarray', 'numerix.asum', 'numerix.bitwise_and', 'numerix.bitwise_or', 'numerix.bitwise_xor', 'numerix.ceil', 'numerix.choose', 'numerix .clip', 'numerix.compress', 'numerix.concatenate', 'numerix.conjugate', 'numerix.convolve', 'numerix.cos', 'nu merix.cosh', 'numerix.cross_correlate', 'numerix.cumproduct', 'numerix.cumsum', 'numerix.diagonal', 'numerix.d ivide', 'numerix.dot', 'numerix.equal', 'numerix.exp', 'numerix.fabs', 'numerix.fft.fft', 'numerix.fft.inverse _fft', 'numerix.floor', 'numerix.fmod', 'numerix.fromfunction', 'numerix.fromstring', 'numerix.greater', 'nume rix.greater_equal', 'numerix.hypot', 'numerix.identity', 'numerix.indices', 'numerix.innerproduct', 'numerix.i scontiguous', 'numerix.less', 'numerix.less_equal', 'numerix.log', 'numerix.log10', 'numerix.logical_and', 'nu merix.logical_not', 'numerix.logical_or', 'numerix.logical_xor', 'numerix.matrixmultiply', 'numerix.maximum', 'numerix.minimum', 'numerix.mlab.amax', 'numerix.mlab.amin', 'numerix.mlab.cov', 'numerix.mlab.diff', 'numerix .mlab.hanning', 'numerix.mlab.rand', 'numerix.mlab.std', 'numerix.mlab.svd', 'numerix.multiply', 'numerix.nega tive', 'numerix.newaxis', 'numerix.nonzero', 'numerix.not_equal', 'numerix.nx', 'numerix.ones', 'numerix.outer product', 'numerix.pi', 'numerix.power', 'numerix.product', 'numerix.put', 'numerix.putmask', 'numerix.rank', 'numerix.ravel', 'numerix.repeat', 'numerix.reshape', 'numerix.resize', 'numerix.searchsorted', 'numerix.shape ', 
'numerix.sin', 'numerix.sinh', 'numerix.size', 'numerix.sometrue', 'numerix.sort', 'numerix.sqrt', 'numerix .subtract', 'numerix.swapaxes', 'numerix.take', 'numerix.tan', 'numerix.tanh', 'numerix.trace', 'numerix.trans pose', 'numerix.typecode', 'numerix.typecodes', 'numerix.where', 'numerix.which', 'numerix.zeros', 'numpy.Comp lex', 'numpy.Complex32', 'numpy.Complex64', 'numpy.Float', 'numpy.Float32', 'numpy.Float64', 'numpy.Infinity', 'numpy.Int', 'numpy.Int16', 'numpy.Int32', 'numpy.Int8', 'numpy.UInt16', 'numpy.UInt32', 'numpy.UInt8', 'nump y.inf', 'numpy.infty', 'numpy.oldnumeric', 'objc', 'paint', 'pango', 'pre', 'pyemf', 'qt', 'setuptools', 'setu ptools.command', 'setuptools.command.egg_info', 'trait_sheet', 'matplotlib.numerix.Float', 'matplotlib.numerix .Float32', 'matplotlib.numerix.absolute', 'matplotlib.numerix.alltrue', 'matplotlib.numerix.asarray', 'matplot lib.numerix.ceil', 'matplotlib.numerix.equal', 'matplotlib.numerix.fromstring', 'matplotlib.numerix.indices', 'matplotlib.numerix.put', 'matplotlib.numerix.ravel', 'matplotlib.numerix.sqrt', 'matplotlib.numerix.take', 'm atplotlib.numerix.transpose', 'matplotlib.numerix.where', 'numpy.core.conjugate', 'numpy.core.equal', 'numpy.c ore.less', 'numpy.core.less_equal', 'numpy.dft.old', 'numpy.random.rand', 'numpy.random.randn'] 1) First Problem: numpy\core\_internal.pyc not included in Library.zip No scipy-style subpackage 'core' found in C:\WinCE\Traces\py2exe test\dist\library.zip\numpy. Ignoring: No module named _internal Traceback (most recent call last): File "profiler_ftt.py", line 15, in ? from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ File "matplotlib\backends\backend_wx.pyc", line 152, in ? File "matplotlib\backend_bases.pyc", line 10, in ? File "matplotlib\colors.pyc", line 33, in ? File "matplotlib\numerix\__init__.pyc", line 67, in ? File "numpy\__init__.pyc", line 35, in ? 
File "numpy\_import_tools.pyc", line 173, in __call__
File "numpy\_import_tools.pyc", line 68, in _init_info_modules
File "", line 1, in ?
File "numpy\lib\__init__.pyc", line 5, in ?
File "numpy\lib\type_check.pyc", line 8, in ?
File "numpy\core\__init__.pyc", line 6, in ?
File "numpy\core\umath.pyc", line 12, in ?
File "numpy\core\umath.pyc", line 10, in __load
AttributeError: 'module' object has no attribute '_ARRAY_API'

I resolved that issue by adding the file
...\Python24\Lib\site-packages\numpy\core\_internal.pyc
in ...\test\dist\library.zip\numpy\core. Each time I compile that executable, I add the file by hand. Does anybody know how to add that file automatically?

2) Second problem: I don't know how to resolve this issue:

Traceback (most recent call last):
  File "profiler_ftt.py", line 15, in ?
    from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\
  File "matplotlib\backends\backend_wx.pyc", line 152, in ?
  File "matplotlib\backend_bases.pyc", line 10, in ?
  File "matplotlib\colors.pyc", line 33, in ?
  File "matplotlib\numerix\__init__.pyc", line 67, in ?
  File "numpy\__init__.pyc", line 35, in ?
  File "numpy\_import_tools.pyc", line 173, in __call__
  File "numpy\_import_tools.pyc", line 68, in _init_info_modules
  File "", line 1, in ?
  File "numpy\random\__init__.pyc", line 3, in ?
  File "numpy\random\mtrand.pyc", line 12, in ?
  File "numpy\random\mtrand.pyc", line 10, in __load
  File "numpy.pxi", line 32, in mtrand
AttributeError: 'module' object has no attribute 'dtype'

I can't find the file numpy.pxi in my file tree, nor in \test\dist\library.zip. I browsed the web in the hope of finding a solution, but found nothing. It seems that this issue is well known, but no solution is provided in the mailing lists.

What is that file "numpy.pxi"? Where can it be found, or how is it generated? How do I resolve that execution issue?

Thanks, Regards,

Tristan

-------------- next part --------------
An HTML attachment was scrubbed...
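One note on the _internal.pyc workaround described above: numpy.core imports _internal from C code, so py2exe's dependency scanner never sees it. Naming the module explicitly in the includes option of setup.py should make py2exe bundle it automatically. This is an untested sketch; option behaviour may vary across py2exe and numpy versions:

```python
# setup.py fragment (untested sketch): name the modules numpy imports
# from C code so py2exe bundles them even though its scanner misses them.
opts = {
    'py2exe': {
        'includes': ['numpy.core._internal'],
    }
}
```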
URL: From mattknox_ca at hotmail.com Tue Aug 29 08:59:10 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 29 Aug 2006 08:59:10 -0400 Subject: [Numpy-discussion] possible bug with numpy.object_ Message-ID: is the following behaviour expected? or is this a bug with numpy.object_ ? I'm using numpy 1.0b1 >>> print numpy.array([],numpy.float64).size0 >>> print numpy.array([],numpy.object_).size1 Should the size of an array initialized from an empty list not always be 1 ? or am I just crazy? Thanks, - Matt Knox _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattknox_ca at hotmail.com Tue Aug 29 10:05:27 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 29 Aug 2006 10:05:27 -0400 Subject: [Numpy-discussion] possible bug with numpy.object_ Message-ID: # is the following behaviour expected? or is this a bug with numpy.object_ ? I'm using numpy 1.0b1# # >>> print numpy.array([],numpy.float64).size# 0## >>> print numpy.array([],numpy.object_).size# 1## Should the size of an array initialized from an empty list not always be 1 ? or am I just crazy?## Thanks,# # - Matt Knox Correction... I mean shouldn't it always be 0, not 1 _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From cssmwbs at gmail.com Tue Aug 29 11:15:21 2006 From: cssmwbs at gmail.com (W. Bryan Smith) Date: Tue, 29 Aug 2006 08:15:21 -0700 Subject: [Numpy-discussion] error in ctypes example from the numpy book? 
Message-ID: <7c13686f0608290815i1078a347s18dbbd196dd429af@mail.gmail.com>

hi, i posted this to the forum, but it looks like the email list gets much more traffic, so here goes. i am attempting to reproduce a portion of the example on using ctypes from the current version of the numpy book (the example can be found on pp. 313-316). here is what i am trying to do:

import numpy
import interface

x = numpy.array(range(1,1))
y = numpy.ones_like(x)
z = interface.add(x,y)

which prints the following error:

BEGIN ERROR>>
     26         b = N.require(b, dtype, requires)
     27         c = N.empty_like(a)
---> 28         func(a,b,c,a.size)
     29         return c
     30

ArgumentError: argument 1: exceptions.TypeError: Don't know how to convert parameter 1
<>

/* Add arrays of contiguous data */
typedef struct {double real;} cdouble;
typedef struct {float real;} cfloat;

void dadd(double *a, double *b, double *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}

void sadd(float *a, float *b, float *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}

<>

__all__ = ['add']

import numpy as N
from ctypes import *
import os

_path = os.path.dirname('__file__')
lib = N.ctypeslib.ctypes_load_library('testAddInt', _path)
for name in ['sadd','dadd']:
    getattr(lib,name).restype = None

def select(dtype):
    if dtype.char in ['?bBhHf']:
        return lib.sadd, single
    else:
        return lib.dadd, float
    return func, ntype

def add(a,b):
    requires = ['CONTIGUOUS','ALIGNED']
    a = N.asanyarray(a)
    func, dtype = select(a.dtype)
    a = N.require(a, dtype, requires)
    b = N.require(b, dtype, requires)
    c = N.empty_like(a)
    func(a,b,c,a.size)
    return c

<

From kwgoodman at gmail.com Tue Aug 29 11:43:34 2006
From: kwgoodman at gmail.com (Keith Goodman)
Date: Tue, 29 Aug 2006 08:43:34 -0700
Subject: [Numpy-discussion] Problem with randn
Message-ID:

randn incorrectly returns random numbers only between 0 and 1 in numpy 1.0b1. random.randn works.
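A one-line sanity check separates the two behaviours Keith reports: draws from a standard normal are negative about half the time, while a uniform [0, 1) generator never goes negative. A minimal sketch:

```python
import numpy as np

samples = np.random.randn(1000)           # standard normal deviates
# A uniform [0, 1) generator would make both of these checks fail.
assert (samples < 0).any()                # negatives must occur
assert samples.min() < 0 < samples.max()  # values on both sides of zero
```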
>> from numpy.matlib import *
>> randn(3,4)
matrix([[ 0.60856413,  0.35500732,  0.48089868,  0.7044022 ],
        [ 0.71098538,  0.8506885 ,  0.56154652,  0.4243273 ],
        [ 0.89655777,  0.92339559,  0.62247685,  0.70340003]])
>> randn(3,4)
matrix([[ 0.84349201,  0.55638171,  0.19052097,  0.0927636 ],
        [ 0.60144183,  0.3788309 ,  0.41451568,  0.61766302],
        [ 0.98992704,  0.94276652,  0.18569066,  0.69976656]])
>> randn(3,4)
matrix([[ 0.69003273,  0.07171546,  0.34549767,  0.20901683],
        [ 0.1333439 ,  0.4086678 ,  0.80960253,  0.86864547],
        [ 0.75329427,  0.6760677 ,  0.32496542,  0.99402779]])
>> random.randn(3,4)
array([[ 1.00107604,  0.41418557, -0.07923699,  0.19203247],
       [-0.29386593,  0.02343702, -0.42366834, -1.27978993],
       [ 0.25722357, -0.53765827,  0.50569238, -2.44592854]])

From oliphant.travis at ieee.org Tue Aug 29 12:49:58 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 29 Aug 2006 10:49:58 -0600
Subject: [Numpy-discussion] possible bug with numpy.object_
In-Reply-To:
References:
Message-ID: <44F47036.8040300@ieee.org>

Matt Knox wrote:
> is the following behaviour expected? or is this a bug with
> numpy.object_ ? I'm using numpy 1.0b1
>
> >>> print numpy.array([],numpy.float64).size
> 0
>
> >>> print numpy.array([],numpy.object_).size
> 1
>
> Should the size of an array initialized from an empty list not always
> be 1 ? or am I just crazy?
>
Not in this case. Explicitly creating an object array from any object (even the empty-list object) gives you a 0-d array containing that object. When you explicitly create an object array, a different section of code handles it and gives this result.

This is a recent change, and I don't think this use-case was considered as a backward incompatibility (which I believe it is). Perhaps we should make it so array([],....) always returns an empty array. I'm not sure. Comments?
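For reference, the empty-array behaviour floated here is what post-1.0 numpy releases settled on; against a modern numpy both constructions give a size-0 array:

```python
import numpy as np

# Post-1.0 behaviour: an empty list yields an empty array for every dtype,
# including object, rather than a 0-d array holding the list itself.
assert np.array([], dtype=np.float64).size == 0
assert np.array([], dtype=object).size == 0
assert np.array([], dtype=object).shape == (0,)
```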
-Travis From Chris.Barker at noaa.gov Tue Aug 29 13:12:04 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 29 Aug 2006 10:12:04 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F3DA1F.4020007@alldunn.com> References: <44F37B78.2050009@noaa.gov> <44F37D04.10807@ieee.org> <44F3DA1F.4020007@alldunn.com> Message-ID: <44F47564.4070208@noaa.gov> Robin Dunn wrote: > BTW Chris, try using buffer(RGB) and buffer(Alpha) in your sample, I > expect that will work with the current code. yup. that does work. I was concerned that it would make a copy, but it looks like it makes a new buffer object, but using the same data buffer, so that should be fine. Thanks for all this, -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From kortmann at ideaworks.com Tue Aug 29 13:18:44 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 10:18:44 -0700 (PDT) Subject: [Numpy-discussion] py2exe error Message-ID: <3345.12.216.231.149.1156871924.squirrel@webmail.ideaworks.com> >Hi, Travis >I can pack my scripts into an executable with py2exe, but errors occur >once it runs: >No scipy-style subpackage 'random' found in D:\test\dist\numpy. >Ignoring: No module named info >import core -> failed: No module named _internal >import lib -> failed: 'module' object has no attribute '_ARRAY_API' >import linalg -> failed: 'module' object has no attribute '_ARRAY_API' >import dft -> failed: 'module' object has no attribute '_ARRAY_API' >Traceback (most recent call last): > File "main.py", line 9, in ? > File "numpy\__init__.pyc", line 49, in ? >  > File "numpy\add_newdocs.pyc", line 2, in ? > gkDc > File "numpy\lib\__init__.pyc", line 5, in ? > > File "numpy\lib\type_check.pyc", line 8, in ? > > File "numpy\core\__init__.pyc", line 6, in ? 
> > File "numpy\core\umath.pyc", line 12, in ? > > File "numpy\core\umath.pyc", line 10, in __load I am cross referencing this from the py2exe mailing list. There seems to have been a fix for this problem #---------------------------begining of setup.py--------------------# from distutils.core import setup import py2exe from distutils.filelist import findall import os import matplotlib matplotlibdatadir = matplotlib.get_data_path() matplotlibdata = findall(matplotlibdatadir) matplotlibdata_files = [] for f in matplotlibdata: dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) matplotlibdata_files.append((os.path.split(dirname)[0], [f])) packages = ['matplotlib', 'pytz'] includes = [] excludes = [] dll_excludes = ['libgdk_pixbuf-2.0-0.dll', 'libgobject-2.0-0.dll', 'libgdk-win32-2.0-0.dll', 'wxmsw26uh_vc.dll'] opts = { 'py2exe': { 'packages' : packages, 'includes' : includes, 'excludes' : excludes, 'dll_excludes' : dll_excludes } } setup ( console=['test.py'], options = opts, data_files = matplotlibdata_files ) #--------------------------End of setup.py--------------# >>1) First Problem: numpy\core\_internal.pyc not included in Library.zip >>No scipy-style subpackage 'core' found in C:\WinCE\Traces\py2exe >>test\dist\library.zip\numpy. Ignoring: No module named _internal >>Traceback (most recent call last): >> File "profiler_ftt.py", line 15, in ? >> from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ >> File "matplotlib\backends\backend_wx.pyc", line 152, in ? >> File "matplotlib\backend_bases.pyc", line 10, in ? >> File "matplotlib\colors.pyc", line 33, in ? >> File "matplotlib\numerix\__init__.pyc", line 67, in ? >> File "numpy\__init__.pyc", line 35, in ? >> File "numpy\_import_tools.pyc", line 173, in __call__ >> File "numpy\_import_tools.pyc", line 68, in _init_info_modules >> File "", line 1, in ? >> File "numpy\lib\__init__.pyc", line 5, in ? >> File "numpy\lib\type_check.pyc", line 8, in ? 
>> File "numpy\core\__init__.pyc", line 6, in ? >> File "numpy\core\umath.pyc", line 12, in ? >> File "numpy\core\umath.pyc", line 10, in __load >>AttributeError: 'module' object has no attribute '_ARRAY_API' >>I resolved that issue by adding the file >>...\Python24\Lib\site-packages\numpy\core\_internal.pyc in >>...\test\dist\library.zip\numpy\core. >>Each time I compile that executable, I add the file by hand. >>Does anybody know how to automatically add that file? the setup.py was from the person who wrote the instructions for this fix. also here is my setup.py just for reference although mine is probably incorrect due to me being new with py2exe #------------------------setup.py------------------------# from distutils.core import setup import py2exe from distutils.filelist import findall import os import matplotlib matplotlibdatadir = matplotlib.get_data_path() matplotlibdata = findall(matplotlibdatadir) matplotlibdata_files = [] for f in matplotlibdata: dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) matplotlibdata_files.append((os.path.split(dirname)[0], [f])) setup( console=['templatewindow.py'], options={ "py2exe": { "compressed": 1, "optimize": 2, "packages": ["encodings", "kinterbasdb", "pytz.zoneinfo.UTC", "matplotlib.numerix", ], "dll_excludes": ["tcl84.dll", "tk84.dll"] } mpldata = glob.glob(r'C:\Python24\share\matplotlib\*') mpldata.append(r'C:\Python24\share\matplotlib\.matplotlibrc') data_files = [("prog\\locale\\fr\\LC_MESSAGES", mylocaleFR), ("prog\\locale\\de\\LC_MESSAGES", mylocaleDE), ("prog\\locale\\en\\LC_MESSAGES", mylocaleEN), ... 
("matplotlibdata", mpldata), ("prog\\amaradata", amaradata), ("prog\\amaradata\\Schemata", amaraschemata), ] ) #-----------------------EOF-----------------# I was receiving this same "AttributeError: 'module' object has no attribute '_ARRAY_API'" error, and i did the same thing this person did, unzipped the folder, put the _internal.pyc file in the numpy/core folder and then rezipped the folder and I am receiving a wx error, but the numpy array_api error is gone. You may want to check this out and let us know if it works for you also. -Kenny p.s. i tried sending this 4 times prior but believe it did not send because it was alot longer so i shortened it, sorry if it posted 4 times From charlesr.harris at gmail.com Tue Aug 29 13:32:54 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 11:32:54 -0600 Subject: [Numpy-discussion] Documentation Message-ID: Hi All, I've finished moving all the docstrings in arraymethods to add_newdocs. Much of the documentation is still incomplete and needs nicer formatting, so if you are so inclined, or even annoyed with some of the help messages, feel free to fix things up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Tue Aug 29 13:57:26 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 10:57:26 -0700 Subject: [Numpy-discussion] For loop tips Message-ID: I have a very long list that contains many repeated elements. The elements of the list can be either all numbers, or all strings, or all dates [datetime.date]. I want to convert the list into a matrix where each unique element of the list is assigned a consecutive integer starting from zero. I've done it by brute force below. Any tips for making it faster? (5x would make it useful; 10x would be a dream.) 
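One vectorized route to this unique-to-integer mapping, in numpy versions that provide numpy.unique with a return_inverse flag, is sketched below; note that the codes follow the sorted order of the unique values rather than order of first appearance:

```python
import numpy as np

L = ['b', 'a', 'c', 'a', 'b']
# uniques is sorted; idx[i] is the integer code of L[i] in that order.
uniques, idx = np.unique(L, return_inverse=True)
assert list(uniques) == ['a', 'b', 'c']
assert list(idx) == [1, 0, 2, 0, 1]
```

The same call works for numeric and date lists as long as the elements are orderable.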
>> list2index.test()
Numbers: 5.84955787659 seconds
Characters: 24.3192870617 seconds
Dates: 39.288228035 seconds

import datetime, time
from numpy import nan, asmatrix, ones

def list2index(L):

    # Find unique elements in list
    uL = dict.fromkeys(L).keys()

    # Convert list to matrix
    L = asmatrix(L).T

    # Initialize return matrix
    idx = nan * ones((L.size, 1))

    # Assign numbers to unique L values
    for i, uLi in enumerate(uL):
        idx[L == uLi,:] = i

def test():

    L = 5000*range(255)
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Numbers:', t2-t1, 'seconds'

    L = 5000*[chr(z) for z in range(255)]
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Characters:', t2-t1, 'seconds'

    d = datetime.date
    step = datetime.timedelta
    L = 5000*[d(2006,1,1)+step(z) for z in range(255)]
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Dates:', t2-t1, 'seconds'

From tim.hochberg at ieee.org Tue Aug 29 14:40:11 2006
From: tim.hochberg at ieee.org (Tim Hochberg)
Date: Tue, 29 Aug 2006 11:40:11 -0700
Subject: [Numpy-discussion] For loop tips
In-Reply-To:
References:
Message-ID: <44F48A0B.7020401@ieee.org>

Keith Goodman wrote:
> I have a very long list that contains many repeated elements. The
> elements of the list can be either all numbers, or all strings, or all
> dates [datetime.date].
>
> I want to convert the list into a matrix where each unique element of
> the list is assigned a consecutive integer starting from zero.
>
If what you want is that the first unique element gets zero, the second
one, and so on, I don't think the code below will work in general since
the dict does not preserve order. You might want to look at the results
for the character case to see what I mean. If you're looking for
something else, you'll need to elaborate a bit. Since list2index doesn't
return anything, it's not entirely clear what the answer consists of.
Just idx? Idx plus uL?

> I've done it by brute force below. Any tips for making it faster?
(5x > would make it useful; 10x would be a dream.) > Assuming I understand what you're trying to do, this might help: def list2index2(L): idx = ones([len(L)]) map = {} for i, x in enumerate(L): index = map.get(x) if index is None: map[x] = index = len(map) idx[i] = index return idx It's almost 10x faster for numbers and about 40x faster for characters and dates. However it produces different results from list2index in the second two cases. That may or may not be a good thing depending on what you're really trying to do. -tim > >>> list2index.test() >>> > Numbers: 5.84955787659 seconds > Characters: 24.3192870617 seconds > Dates: 39.288228035 seconds > > > import datetime, time > from numpy import nan, asmatrix, ones > > def list2index(L): > > # Find unique elements in list > uL = dict.fromkeys(L).keys() > > # Convert list to matrix > L = asmatrix(L).T > > # Initialize return matrix > idx = nan * ones((L.size, 1)) > > # Assign numbers to unique L values > for i, uLi in enumerate(uL): > idx[L == uLi,:] = i > > def test(): > > L = 5000*range(255) > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Numbers:', t2-t1, 'seconds' > > L = 5000*[chr(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Characters:', t2-t1, 'seconds' > > d = datetime.date > step = datetime.timedelta > L = 5000*[d(2006,1,1)+step(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Dates:', t2-t1, 'seconds' > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From tim.hochberg at ieee.org Tue Aug 29 14:48:19 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 11:48:19 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: <44F48A0B.7020401@ieee.org> References: <44F48A0B.7020401@ieee.org> Message-ID: <44F48BF3.9090108@ieee.org> Tim Hochberg wrote: > Keith Goodman wrote: > >> I have a very long list that contains many repeated elements. The >> elements of the list can be either all numbers, or all strings, or all >> dates [datetime.date]. >> >> I want to convert the list into a matrix where each unique element of >> the list is assigned a consecutive integer starting from zero. >> >> > If what you want is that the first unique element get's zero, the second > one, I don't think the code below will work in general since the dict > does not preserve order. You might want to look at the results for the > character case to see what I mean. If you're looking for something else, > you'll need to elaborate a bit. Since list2index doesn't return > anything, it's not entirely clear what the answer consists of. Just idx? > Idx plus uL? > > >> I've done it by brute force below. Any tips for making it faster? (5x >> would make it useful; 10x would be a dream.) 
>> >> > Assuming I understand what you're trying to do, this might help: > > def list2index2(L): > idx = ones([len(L)]) > map = {} > for i, x in enumerate(L): > index = map.get(x) > if index is None: > map[x] = index = len(map) > idx[i] = index > return idx > > > It's almost 10x faster for numbers and about 40x faster for characters > and dates. However it produces different results from list2index in the > second two cases. That may or may not be a good thing depending on what > you're really trying to do. > Ugh! I fell victim to premature optimization disease. The following is both clearer and faster: Sigh. def list2index3(L): idx = ones([len(L)]) map = {} for i, x in enumerate(L): if x not in map: map[x] = len(map) idx[i] = map[x] return idx > -tim > > >> >> >>>> list2index.test() >>>> >>>> >> Numbers: 5.84955787659 seconds >> Characters: 24.3192870617 seconds >> Dates: 39.288228035 seconds >> >> >> import datetime, time >> from numpy import nan, asmatrix, ones >> >> def list2index(L): >> >> # Find unique elements in list >> uL = dict.fromkeys(L).keys() >> >> # Convert list to matrix >> L = asmatrix(L).T >> >> # Initialize return matrix >> idx = nan * ones((L.size, 1)) >> >> # Assign numbers to unique L values >> for i, uLi in enumerate(uL): >> idx[L == uLi,:] = i >> >> def test(): >> >> L = 5000*range(255) >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Numbers:', t2-t1, 'seconds' >> >> L = 5000*[chr(z) for z in range(255)] >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Characters:', t2-t1, 'seconds' >> >> d = datetime.date >> step = datetime.timedelta >> L = 5000*[d(2006,1,1)+step(z) for z in range(255)] >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Dates:', t2-t1, 'seconds' >> >>

From oliphant.travis at ieee.org Tue Aug 29 14:57:30 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 29 Aug 2006 12:57:30 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend Message-ID: <44F48E1A.1020006@ieee.org> Hi all, Classes start for me next Tuesday, and I'm teaching a class for which I will be using NumPy / SciPy extensively. I need to have a release of these two (and hopefully matplotlib) that work with each other. Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend (probably Monday), and also get a release of SciPy out as well. At that point, I'll only be available for bug-fixes to 1.0. Therefore, the next release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). To facilitate that, after 1.0b5 there will be a feature-freeze (except for in the compatibility modules and the alter_code scripts which can still be modified to ease the transition burden).
The 1.0rc1 release of NumPy will be mid-September I suspect. Also, I recognize that the default-axis switch is a burden for those who have already transitioned code to use NumPy (for those just starting out it's not a big deal because of the compatibility layer). As a result, I've added a module called fix_default_axis whose converttree method will walk a hierarchy and change all .py files to fix the default axis problem in those files. This can be done in one of two ways (depending on the boolean argument import_change). If import_change is False a) Add an axis= keyword argument to any function whose default changed in 1.0b2 or 1.0b3, which does not already have the axis argument --- this method does not distinguish where the function came from and so can do the wrong thing with similarly named functions from other modules (e.g. builtin sum and itertools.repeat). If import_change is True b) Change the location where the function is imported from numpy to numpy.oldnumeric where the default axis is the same as before. This approach looks for several flavors of the import statement and alters the import location for any function whose default axis argument changed --- this can get confused if you use from numpy import sum as mysum --- it will not replace that usage of sum. I used this script on the scipy tree in mode a) as a test (followed by a manual replacement of any incorrect substitutions). I hope it helps. I know it's annoying to have such things change. But, it does make NumPy much more consistent with respect to the default axis argument. With a few exceptions (concatenate, diff, trapz, split, array_split), the rule is that you need to specify the axis if there is more than 1 dimension or it will ravel the input.
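The mode a) rewrite described above can be sketched in a few lines. This is only a toy illustration: the function list and the regex below are invented for the example and are far cruder than the real fix_default_axis script.

```python
import re

# Hypothetical subset of the functions whose default axis changed;
# the real script covers many more names and cases.
CHANGED = ("sum", "cumsum", "prod", "mean")

def add_axis_keyword(source, axis=0):
    """Rewrite e.g. 'sum(x)' -> 'sum(x, axis=0)' when no axis= is given."""
    def fix(match):
        name, args = match.group(1), match.group(2)
        if "axis=" in args:
            return match.group(0)  # axis already explicit: leave the call alone
        return "%s(%s, axis=%d)" % (name, args, axis)
    # Only matches calls without nested parentheses -- a deliberate simplification.
    pattern = r"\b(%s)\(([^()]*)\)" % "|".join(CHANGED)
    return re.sub(pattern, fix, source)

print(add_axis_keyword("y = sum(x) + prod(z, axis=1)"))
# y = sum(x, axis=0) + prod(z, axis=1)
```

A real tool needs a tokenizer or AST pass; a regex like this fails on nested calls, which is exactly why the script's output had to be checked by hand afterwards.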
-Travis From torgil.svensson at gmail.com Tue Aug 29 14:59:55 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 20:59:55 +0200 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: Message-ID: def list2index(L): idx=dict((y,x) for x,y in enumerate(set(L))) return asmatrix(fromiter((idx[x] for x in L),dtype=int)) # old $ python test.py Numbers: 29.4062280655 seconds Characters: 84.6239070892 seconds Dates: 117.560418844 seconds # new $ python test.py Numbers: 1.79700994492 seconds Characters: 1.6025249958 seconds Dates: 1.7974088192 seconds 16, 52 and 100 times faster //Torgil On 8/29/06, Keith Goodman wrote: > I have a very long list that contains many repeated elements. The > elements of the list can be either all numbers, or all strings, or all > dates [datetime.date]. > > I want to convert the list into a matrix where each unique element of > the list is assigned a consecutive integer starting from zero. > > I've done it by brute force below. Any tips for making it faster? (5x > would make it useful; 10x would be a dream.) 
> > >> list2index.test() > Numbers: 5.84955787659 seconds > Characters: 24.3192870617 seconds > Dates: 39.288228035 seconds > > > import datetime, time > from numpy import nan, asmatrix, ones > > def list2index(L): > > # Find unique elements in list > uL = dict.fromkeys(L).keys() > > # Convert list to matrix > L = asmatrix(L).T > > # Initialize return matrix > idx = nan * ones((L.size, 1)) > > # Assign numbers to unique L values > for i, uLi in enumerate(uL): > idx[L == uLi,:] = i > > def test(): > > L = 5000*range(255) > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Numbers:', t2-t1, 'seconds' > > L = 5000*[chr(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Characters:', t2-t1, 'seconds' > > d = datetime.date > step = datetime.timedelta > L = 5000*[d(2006,1,1)+step(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Dates:', t2-t1, 'seconds' > >

From aisaac at american.edu Tue Aug 29 15:13:54 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 15:13:54 -0400 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: Message-ID: You can get some speed up for numeric data: def list2index2(L): aL = asarray(L) eL = empty_like(L) for v,k in enumerate(set(L)): eL[aL == k] = v return numpy.asmatrix(eL).T fwiw, Alan Isaac From charlesr.harris at gmail.com Tue Aug 29 15:06:38 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 13:06:38 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F48E1A.1020006@ieee.org> References: <44F48E1A.1020006@ieee.org> Message-ID: Hi Travis, On 8/29/06, Travis Oliphant wrote: > > > Hi all, > > Classes start for me next Tuesday, and I'm teaching a class for which I > will be using NumPy / SciPy extensively. I need to have a release of > these two (and hopefully matplotlib) that work with each other. > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > (probably Monday), and also get a release of SciPy out as well. At that > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). > > To facilitate that, after 1.0b5 there will be a feature-freeze (except > for in the compatibility modules and the alter_code scripts which can > still be modified to ease the transition burden). Speaking of features, I wonder if more of the methods should return references.
For instance, it might be nice to write something like: a.sort().searchsorted([...]) instead of making two statements out of it. The 1.0rc1 release of NumPy will be mid September I suspect. > > Also, I recognize that the default-axis switch is a burden for those who > have already transitioned code to use NumPy (for those just starting out > it's not a big deal because of the compatibility layer). I am curious as to why you made this switch. Not complaining, mind. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Aug 29 15:11:47 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 13:11:47 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: On 8/29/06, Charles R Harris wrote: > Speaking of features, I wonder if more of the methods should return > references. For instance, it might be nice to write something like: > > a.sort().searchsorted([...]) > > instead of making two statements out of it. +1 for more 'return self' at the end of methods which currently don't return anything (well, we get the default None), as long as it's sensible. I really like this 'message chaining' style of programming, and it annoys me that much of the python stdlib gratuitously prevents it by NOT returning self in places where it would be a perfectly sensible thing to do. I find it much cleaner to write x = foo.bar().baz(param).frob() than foo.bar() foo.baz(param) x = foo.frob() but perhaps others disagree. Cheers, f From rudolphv at gmail.com Tue Aug 29 15:15:30 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Tue, 29 Aug 2006 21:15:30 +0200 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> This definitely gets my vote as well (for what it's worth). R. 
On 8/29/06, Fernando Perez wrote: > +1 for more 'return self' at the end of methods which currently don't > return anything (well, we get the default None), as long as it's > sensible. I really like this 'message chaining' style of programming, > and it annoys me that much of the python stdlib gratuitously prevents > it by NOT returning self in places where it would be a perfectly > sensible thing to do. > > I find it much cleaner to write > > x = foo.bar().baz(param).frob() > > than > > foo.bar() > foo.baz(param) > x = foo.frob() > > but perhaps others disagree. > > Cheers, > > f -- Rudolph van der Merwe Karoo Array Telescope / Square Kilometer Array - http://www.ska.ac.za From charlesr.harris at gmail.com Tue Aug 29 15:25:14 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 13:25:14 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: Hi Fernando, On 8/29/06, Fernando Perez wrote: > > On 8/29/06, Charles R Harris wrote: > > > Speaking of features, I wonder if more of the methods should return > > references. For instance, it might be nice to write something like: > > > > a.sort().searchsorted([...]) > > > > instead of making two statements out of it. > > +1 for more 'return self' at the end of methods which currently don't > return anything (well, we get the default None), as long as it's > sensible. I really like this 'message chaining' style of programming, > and it annoys me that much of the python stdlib gratuitously prevents > it by NOT returning self in places where it would be a perfectly > sensible thing to do. My pet peeve example: a.reverse() I would also like to see simple methods for "+=" operator and such. Then one could write x = a.copy().add(10) One could make a whole reverse polish translator out of such operations and a few parenthesis. I have in mind some sort of code optimizer. 
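The chaining style being proposed here is easy to prototype outside numpy. A minimal sketch follows; the Chained class and its method names are invented for illustration and are not numpy API:

```python
class Chained:
    """Toy container whose mutating methods return self, allowing the
    x = foo.bar().baz(param).frob() style under discussion."""

    def __init__(self, data):
        self.data = list(data)

    def copy(self):
        # returns a new object, like a.copy()
        return Chained(self.data)

    def add(self, k):
        # in-place elementwise "+=", but chainable because it returns self
        self.data = [v + k for v in self.data]
        return self

    def sort(self):
        # in-place sort, but chainable
        self.data.sort()
        return self

a = Chained([3, 1, 2])
x = a.copy().add(10).sort()   # a is untouched; x is a new, shifted, sorted object
print(x.data)   # [11, 12, 13]
print(a.data)   # [3, 1, 2]
```

The trade-off debated in the rest of the thread is visible here: `a.add(10)` both mutates `a` and returns it, so a reader cannot tell from the call site whether a copy was made.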
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.hochberg at ieee.org Tue Aug 29 15:26:06 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 12:26:06 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> Message-ID: <44F494CE.1080008@ieee.org> -0.5 from me if what we're talking about here is having mutating methods return self rather than None. Chaining stuff is pretty, but having methods that mutate self and return self looks like a source of elusive bugs to me. -tim Rudolph van der Merwe wrote: > This definitely gets my vote as well (for what it's worth). > > R. > > On 8/29/06, Fernando Perez wrote: > >> +1 for more 'return self' at the end of methods which currently don't >> return anything (well, we get the default None), as long as it's >> sensible. I really like this 'message chaining' style of programming, >> and it annoys me that much of the python stdlib gratuitously prevents >> it by NOT returning self in places where it would be a perfectly >> sensible thing to do. >> >> I find it much cleaner to write >> >> x = foo.bar().baz(param).frob() >> >> than >> >> foo.bar() >> foo.baz(param) >> x = foo.frob() >> >> but perhaps others disagree. >> >> Cheers, >> >> f >> > > From kwgoodman at gmail.com Tue Aug 29 15:27:34 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 12:27:34 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: <44F48A0B.7020401@ieee.org> References: <44F48A0B.7020401@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > Keith Goodman wrote: > > I have a very long list that contains many repeated elements. The > > elements of the list can be either all numbers, or all strings, or all > > dates [datetime.date]. 
> > > > I want to convert the list into a matrix where each unique element of > > the list is assigned a consecutive integer starting from zero. > > > If what you want is that the first unique element gets zero, the second > gets one, I don't think the code below will work in general since the dict > does not preserve order. You might want to look at the results for the > character case to see what I mean. If you're looking for something else, > you'll need to elaborate a bit. Since list2index doesn't return > anything, it's not entirely clear what the answer consists of. Just idx? > Idx plus uL? The output I wanted (in my mind, but unfortunately not in my previous email) is idx and uL where uL[0] corresponds to the zeros in idx, uL[1] corresponds to the ones in idx. etc. I'd also like the uL's to be ordered (now I see that characters and dates aren't ordered, ooops, thanks for telling me about that). Or optionally ordered by a second list input which if present would be used instead of the unique values of L. Thank you all for the huge improvements to my code. I'll learn a lot studying all of them. From charlesr.harris at gmail.com Tue Aug 29 15:36:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 13:36:33 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: Hi, On 8/29/06, Tim Hochberg wrote: > > > -0.5 from me if what we're talking about here is having mutating methods > return self rather than None. Chaining stuff is pretty, but having > methods that mutate self and return self looks like a source of elusive > bugs to me. > > -tim But how is that any worse than the current mutating operators? I think the operating principle is that methods generally work in place, functions make copies. The exceptions to this rule need to be noted.
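For comparison, core Python's own containers take the opposite convention: mutating methods return None and the copying variants are separate functions, so the two behaviors cannot be confused at the call site:

```python
a = [3, 1, 2]

b = sorted(a)        # function: returns a new sorted list, original untouched
print(b)             # [1, 2, 3]
print(a)             # [3, 1, 2]

c = a.sort()         # method: sorts in place and returns None
print(a)             # [1, 2, 3]
print(c)             # None -- so 'c = a.sort()' is never silently a sorted copy
```

Returning None makes chaining impossible, but it also makes the in-place mutation hard to miss; that is the design tension the thread is circling.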
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Aug 29 15:49:25 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 15:49:25 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: On Tue, 29 Aug 2006, Tim Hochberg apparently wrote: > -0.5 from me if what we're talking about here is having > mutating methods return self rather than None. Chaining > stuff is pretty, but having methods that mutate self and > return self looks like a source of elusive bugs to me. I believe this reasoning was the basis of sort (method, returns None) and sorted (function, returns new object) in Python. I believe that was a long and divisive discussion ... Cheers, Alan Isaac From torgil.svensson at gmail.com Tue Aug 29 15:44:11 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 21:44:11 +0200 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: something like this? def list2index(L): uL=sorted(set(L)) idx=dict((y,x) for x,y in enumerate(uL)) return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Tim Hochberg wrote: > > Keith Goodman wrote: > > > I have a very long list that contains many repeated elements. The > > > elements of the list can be either all numbers, or all strings, or all > > > dates [datetime.date]. > > > > > > I want to convert the list into a matrix where each unique element of > > > the list is assigned a consecutive integer starting from zero. > > > > > If what you want is that the first unique element get's zero, the second > > one, I don't think the code below will work in general since the dict > > does not preserve order. 
You might want to look at the results for the > > character case to see what I mean. If you're looking for something else, > > you'll need to elaborate a bit. Since list2index doesn't return > > anything, it's not entirely clear what the answer consists of. Just idx? > > Idx plus uL? > > The output I wanted (in my mind, but unfortunately not in my previous > email) is idx and uL where uL[0] corresponds to the zeros in idx, > uL[1] corresponds to the ones in idx. etc. > > I'd also like the uL's to be ordered (now I see that characters and > dates aren't ordered, ooops, thanks for telling me about that). Or > optionally ordered by a second list input which if present would be > used instead of the unique values of L. > > Thank you all for the huge improvements to my code. I'll learn a lot > studying all of them. From tim.hochberg at ieee.org Tue Aug 29 16:00:50 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 13:00:50 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: <44F49CF2.5020505@ieee.org> Charles R Harris wrote: > Hi, > > On 8/29/06, *Tim Hochberg* > wrote: > > > -0.5 from me if what we're talking about here is having mutating > methods > return self rather than None.
Chaining stuff is pretty, but having > methods that mutate self and return self looks like a source of > elusive > bugs to me. > > -tim > > > But how is that any worse than the current mutating operators? I think > the operating principal is that methods generally work in place, > functions make copies. The exceptions to this rule need to be noted. Is that really the case? I was more under the impression that there wasn't much rhyme nor reason to this. Let's do a quick dir(somearray) and see what we get (I'll strip out the __XXX__ names): 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view' Hmmm. Without taking too much time to go through these one at a time, I'm pretty certain that they do not in general mutate things in place. Probably at least half return, or can return new arrays, sometimes with references to the original data, but new shapes, sometimes with completely new data. In fact, other than sort, I'm not sure which of these does mutate in place. -tim From kwgoodman at gmail.com Tue Aug 29 16:02:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 13:02:10 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: On 8/29/06, Torgil Svensson wrote: > something like this? 
> > def list2index(L): > uL=sorted(set(L)) > idx=dict((y,x) for x,y in enumerate(uL)) > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) Wow. That's amazing. Thank you. From charlesr.harris at gmail.com Tue Aug 29 16:17:29 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 14:17:29 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F49CF2.5020505@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > > Charles R Harris wrote: > > Hi, > > > > On 8/29/06, *Tim Hochberg* > > wrote: > > > > > > -0.5 from me if what we're talking about here is having mutating > > methods > > return self rather than None. Chaining stuff is pretty, but having > > methods that mutate self and return self looks like a source of > > elusive > > bugs to me. > > > > -tim > > > > > > But how is that any worse than the current mutating operators? I think > > the operating principal is that methods generally work in place, > > functions make copies. The exceptions to this rule need to be noted. > Is that really the case? I was more under the impression that there > wasn't much rhyme nor reason to this. 
Let's do a quick dir(somearray) > and see what we get (I'll strip out the __XXX__ names): > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', 'repeat', > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', > 'view' There are certainly many methods where inplace operations make no sense. But for such things as conjugate and clip I think it should be preferred. Think of them as analogs of the "+=" operators that allow memory efficient inplace operations. At the moment there are too few such operators, IMHO, and that makes it hard to write memory efficient code when you want to do so. If you need a copy, the functional form should be the preferred way to go and can easily be implement by constructions like a.copy().sort(). Hmmm. Without taking too much time to go through these one at a time, > I'm pretty certain that they do not in general mutate things in place. > Probably at least half return, or can return new arrays, sometimes with > references to the original data, but new shapes, sometimes with > completely new data. In fact, other than sort, I'm not sure which of > these does mutate in place. > > -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Tue Aug 29 16:36:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 14:36:09 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <44F4A539.3090702@ee.byu.edu> Charles R Harris wrote: > > The 1.0rc1 release of NumPy will be mid September I suspect. > > Also, I recognize that the default-axis switch is a burden for > those who > have already transitioned code to use NumPy (for those just > starting out > it's not a big deal because of the compatibility layer). > > > I am curious as to why you made this switch. Not complaining, mind. New-comers to NumPy asked why there were different conventions on the methods and the functions for the axis argument. The only reason was backward compatibility. Because we had already created a compatibility layer for code transitioning, that seemed like a weak reason to keep the current behavior. The problem is it left early NumPy adopters (including me :-) ) in a bit of a bind, when it comes to code (like SciPy) that had already been converted. Arguments like Fernando's: "it's better to have a bit of pain now, then regrets later" also were convincing. -Travis From oliphant at ee.byu.edu Tue Aug 29 16:43:14 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 14:43:14 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: <44F4A6E2.1070002@ee.byu.edu> Tim Hochberg wrote: >-0.5 from me if what we're talking about here is having mutating methods >return self rather than None. Chaining stuff is pretty, but having >methods that mutate self and return self looks like a source of elusive >bugs to me. 
> > I'm generally +0 on this idea (it seems like the clarity in writing comes largely for interactive users), and don't see much difficulty in separating the constructs. On the other hand, I don't see much problem in returning a reference to self either. I guess you are worried about the situation where you write b = a.sort() and think you have a new array, but in fact have a new reference to the already-altered 'a'? Hmm.. So, how is this different from the fact that b = a[1:10:3] already returns a reference to 'a' (I suppose in the fact that it actually returns a new object just one that happens to share the same data with a). However, I suppose that other methods don't return a reference to an already-altered object, do they. Tim's argument has moved me from +0 to -0 -Travis From Chris.Barker at noaa.gov Tue Aug 29 16:49:20 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 29 Aug 2006 13:49:20 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <44F4A850.3030903@noaa.gov> Fernando Perez wrote: > more 'return self' at the end of methods which currently don't > return anything (well, we get the default None), as long as it's > sensible. +1 Though I'm a bit hesitant: if it's really consistent that methods that alter the object in place NEVER return themselves, then there is something to be said for that. -Chris -- Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at ieee.org Tue Aug 29 17:03:39 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 14:03:39 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> Message-ID: <44F4ABAB.3090508@ieee.org> Charles R Harris wrote: > > > On 8/29/06, *Tim Hochberg* > wrote: > > Charles R Harris wrote: > > Hi, > > > > On 8/29/06, *Tim Hochberg* > > >> > wrote: > > > > > > -0.5 from me if what we're talking about here is having mutating > > methods > > return self rather than None. Chaining stuff is pretty, but > having > > methods that mutate self and return self looks like a source of > > elusive > > bugs to me. > > > > -tim > > > > > > But how is that any worse than the current mutating operators? I > think > > the operating principal is that methods generally work in place, > > functions make copies. The exceptions to this rule need to be noted. > Is that really the case? I was more under the impression that there > wasn't much rhyme nor reason to this. 
Let's do a quick dir(somearray) > and see what we get (I'll strip out the __XXX__ names): > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', > 'item', > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', > 'repeat', > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', > 'swapaxes', > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', > 'var', 'view' > > > There are certainly many methods where inplace operations make no > sense. But for such things as conjugate and clip I think it should be > preferred. Think of them as analogs of the "+=" operators that allow > memory efficient inplace operations. At the moment there are too few > such operators, IMHO, and that makes it hard to write memory efficient > code when you want to do so. If you need a copy, the functional form > should be the preferred way to go and can easily be implement by > constructions like a.copy().sort(). So let's make this clear; what you are proposing is more than just returning self for more operations. You are proposing changing the meaning of the existing methods to operate in place rather than return new objects. It seems awfully late in the day to be considering this being that we're on the edge of 1.0 and this could break any existing numpy code that is out there. Just for grins let's look at the operations that could potentially benefit from being done in place.
I think they are: byteswap clip conjugate round sort Of these, clip, conjugate and round support an 'out' argument like that supported by ufunces; byteswap has a boolean argument telling it whether to perform operations in place; and sort always operates in place. Noting that the ufunc-like methods (max, argmax, etc) appear to support the 'out' argument as well although it's not documented for most of them, it looks to me as if the two odd methods are byteswap and sort. The method situation could be made more consistent by swapping the boolean inplace flag in byteswapped with another 'out' argument and also having sort not operate in place by default, but also supply an out argument there. Thus: b = a.sort() # Returns a copy a.sort(out=a) # Sorts a in place a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() in this case since we don't want to rewrite the sort routines) On the whole I think that this would be an improvement, but it may be too late in the day to actually implement it since 1.0 is coming up. There would still be a few methods (fill, put, etc) that modify the array in place and return None, but I haven't heard any complaints about those. -tim > > Hmmm. Without taking too much time to go through these one at a time, > I'm pretty certain that they do not in general mutate things in place. > Probably at least half return, or can return new arrays, sometimes > with > references to the original data, but new shapes, sometimes with > completely new data. In fact, other than sort, I'm not sure which of > these does mutate in place. > > -tim > > > Chuck > > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From kortmann at ideaworks.com Tue Aug 29 17:16:12 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 14:16:12 -0700 (PDT) Subject: [Numpy-discussion] Release of 1.0b5 this weekend Message-ID: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> >I find it much cleaner to write >x = foo.bar().baz(param).frob() >than >foo.bar() >foo.baz(param) >x = foo.frob() >but perhaps others disagree. Both of these look "clean", but I do not think that moving 3 lines to one line makes code "cleaner". They both do the same thing, and for someone who does not know what .bar(), .baz(param) and .frob() are, IMO the second version that takes place on three lines would be easier to understand. >I'm generally +0 on this idea (it seems like the clarity in writing >comes largely for interactive users), and don't see much difficulty in >separating the constructs. On the other hand, I don't see much problem >in returning a reference to self either. >I guess you are worried about the situation where you write >b = a.sort() >and think you have a new array, but in fact have a new reference to the >already-altered 'a'? >Hmm.. So, how is this different from the fact that >b = a[1:10:3] already returns a reference to 'a' >(I suppose in the fact that it actually returns a new object just one >that happens to share the same data with a). >However, I suppose that other methods don't return a reference to an >already-altered object, do they.
>Tim's argument has moved me from +0 to -0 >-Travis I couldn't agree more with you and Tim on this. I would rather have code that works all the time and will not possibly confuse people later, like the example of >b = a.sort() >and think you have a new array, but in fact have a new reference to the >already-altered 'a'? A lot of people have problems grasping this "memory management" type of programming... or at least in my C class half of the kids dropped out because they couldn't keep track of b = a.sort() meaning that b was actually just referencing a, and if you changed b then a was changed also. But then again, who on this list has problems remembering things like that anyways, right?... ~Kenny From cookedm at physics.mcmaster.ca Tue Aug 29 17:19:50 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 Aug 2006 17:19:50 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> On Tue, 29 Aug 2006 14:03:39 -0700 Tim Hochberg wrote: > Of these, clip, conjugate and round support an 'out' argument like that > supported by ufuncs; byteswap has a boolean argument telling it > whether to perform operations in place; and sort always operates in > place. Noting that the ufunc-like methods (max, argmax, etc) appear to > support the 'out' argument as well although it's not documented for most > of them, it looks to me as if the two odd methods are byteswap and sort. > The method situation could be made more consistent by swapping the > boolean inplace flag in byteswap with another 'out' argument and also > having sort not operate in place by default, but also supply an out > argument there.
Thus: > > b = a.sort() # Returns a copy > a.sort(out=a) # Sorts a in place > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > in this case since we don't want to rewrite the sort routines) Ugh. That's completely different semantics from sort() on lists, so I think it would be a source of bugs (at least, it would mean keeping two different ideas of .sort() in my head). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From charlesr.harris at gmail.com Tue Aug 29 17:20:24 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 15:20:24 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: Hi Tim, On 8/29/06, Tim Hochberg wrote: > > Charles R Harris wrote: > > > > > > On 8/29/06, *Tim Hochberg* > > wrote: > > > > Charles R Harris wrote: > > > Hi, > > > > > > On 8/29/06, *Tim Hochberg* > > > > >> > > wrote: > > > > > > > > > -0.5 from me if what we're talking about here is having > mutating > > > methods > > > return self rather than None. Chaining stuff is pretty, but > > having > > > methods that mutate self and return self looks like a source > of > > > elusive > > > bugs to me. > > > > > > -tim > > > > > > > > > But how is that any worse than the current mutating operators? I > > think > > > the operating principal is that methods generally work in place, > > > functions make copies. The exceptions to this rule need to be > noted. > > Is that really the case? I was more under the impression that there > > wasn't much rhyme nor reason to this. 
Let's do a quick > dir(somearray) > > and see what we get (I'll strip out the __XXX__ names): > > > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', > 'copy', > > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', > > 'item', > > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', > > 'repeat', > > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', > 'setflags', > > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', > > 'swapaxes', > > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', > > 'var', 'view' > > > > > > There are certainly many methods where inplace operations make no > > sense. But for such things as conjugate and clip I think it should be > > preferred. Think of them as analogs of the "+=" operators that allow > > memory efficient inplace operations. At the moment there are too few > > such operators, IMHO, and that makes it hard to write memory efficient > > code when you want to do so. If you need a copy, the functional form > > should be the preferred way to go and can easily be implement by > > constructions like a.copy().sort(). > So let's make this clear; what you are proposing is more that just > returning self for more operations. You are proposing changing the > meaning of the existing methods to operate in place rather than return > new objects. It seems awfully late in the day to be considering this > being that we're on the edge of 1.0 and this would could break any > existing numpy code that is out there. > > Just for grins let's look at the operations that could potentially > benefit from being done in place. 
I think they are: > byteswap > clip > conjugate > round > sort > > Of these, clip, conjugate and round support an 'out' argument like that > supported by ufunces; byteswap has a boolean argument telling it > whether to perform operations in place; and sort always operates in > place. Noting that the ufunc-like methods (max, argmax, etc) appear to > support the 'out' argument as well although it's not documented for most > of them, it looks to me as if the two odd methods are byteswap and sort. > The method situation could be made more consistent by swapping the > boolean inplace flag in byteswapped with another 'out' argument and also > having sort not operate in place by default, but also supply an out > argument there. Thus: > > b = a.sort() # Returns a copy > a.sort(out=a) # Sorts a in place > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > in this case since we don't want to rewrite the sort routines) > > On the whole I think that this would be an improvement, but it may be > too late in the day to actually implement it since 1.0 is coming up. > There would still be a few methods (fill, put, etc) that modify the > array in place and return None, but I haven't heard any complaints about > those. That sounds like a good idea. One could keep the present behaviour in most cases by supplying a default value, although the out keyword might need a None value to indicate "copy" and a 'Self' value that means in place, or something like that, and then have all reasonable methods return values. That way the change would be transparent. The changes to the sort method would all be upper level, the low level sorting routines would remain unchanged. Methods are new, so code that needs to be changed is code specifically written for Numpy and now is the time to make these sort of decisions. -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cookedm at physics.mcmaster.ca Tue Aug 29 17:21:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 Aug 2006 17:21:40 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <20060829172140.29db40dd@arbutus.physics.mcmaster.ca> On Tue, 29 Aug 2006 13:25:14 -0600 "Charles R Harris" wrote: > Hi Fernando, > > On 8/29/06, Fernando Perez wrote: > > > > On 8/29/06, Charles R Harris wrote: > > > > > Speaking of features, I wonder if more of the methods should return > > > references. For instance, it might be nice to write something like: > > > > > > a.sort().searchsorted([...]) > > > > > > instead of making two statements out of it. > > > > +1 for more 'return self' at the end of methods which currently don't > > return anything (well, we get the default None), as long as it's > > sensible. I really like this 'message chaining' style of programming, > > and it annoys me that much of the python stdlib gratuitously prevents > > it by NOT returning self in places where it would be a perfectly > > sensible thing to do. -1, for the same reasons l.sort() doesn't (for a list l). For lists, the reason .sort() returns None is because it makes it clear it's a mutation. Returning self would make it look like it was doing a copy. > My pet peeve example: a.reverse() > > I would also like to see simple methods for the "+=" operator and such. Then one > could write > > x = a.copy().add(10) There are: x = a.copy().__add__(10) or, for +=: x.__iadd__(10) > One could make a whole reverse polish translator out of such operations and > a few parentheses. I have in mind some sort of code optimizer. It wouldn't be any more efficient than the other way. For a code optimizer, you'll either have to parse the python code or use special objects (much like numexpr does), and then you might as well use the operators.
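The distinction David draws between __add__ and __iadd__ can be checked directly; a minimal sketch (the array names are illustrative, not from the thread):

```python
import numpy as np

a = np.arange(5.0)

# __add__ allocates a new array; the original operand is untouched.
x = a.copy().__add__(10)   # equivalent to x = a + 10

# __iadd__ mutates its operand's buffer in place (and returns it).
y = np.arange(5.0)
y.__iadd__(10)             # equivalent to y += 10

print(x.tolist())   # [10.0, 11.0, 12.0, 13.0, 14.0]
print(a.tolist())   # [0.0, 1.0, 2.0, 3.0, 4.0] -- unchanged
print(y.tolist())   # [10.0, 11.0, 12.0, 13.0, 14.0]
```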
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Tue Aug 29 17:25:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 15:25:08 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> Message-ID: On 8/29/06, David M. Cooke wrote: > On Tue, 29 Aug 2006 14:03:39 -0700 > Tim Hochberg wrote: > > b = a.sort() # Returns a copy > > a.sort(out=a) # Sorts a in place > > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > > in this case since we don't want to rewrite the sort routines) > > Ugh. That's completely different semantics from sort() on lists, so I think > it would be a source of bugs (at least, it would mean keeping two different > ideas of .sort() in my head). Agreed. Except where very well justified (such as slicing returning views for memory reasons), let's keep numpy arrays similar to native lists in their behavior... Special cases aren't special enough to break the rules. 
and all that :) Cheers, f From charlesr.harris at gmail.com Tue Aug 29 17:32:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 15:32:25 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> References: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> Message-ID: Hi, On 8/29/06, kortmann at ideaworks.com wrote: > > >I find it much cleaner to write > > >x = foo.bar().baz(param).frob() > > >than > > >foo.bar() > >foo.baz(param) > >x = foo.frob() > > >but perhaps others disagree. > > Both of these look "clean" but i do not think that moving 3 lines to one > line makes code "cleaner" They both do the same thing and if someone that > does not know what .bar() .baz(param) and .frob() are IMO the second > version that takes place on three lines would be easier to understand. > > > > >I'm generally +0 on this idea (it seems like the clarity in writing > >comes largely for interactive users), and don't see much difficulty in > >separating the constructs. On the other hand, I don't see much problem > >in returning a reference to self either. > > >I guess you are worried about the situation where you write > > >b = a.sort() > > >and think you have a new array, but in fact have a new reference to the > >already-altered 'a'? > > >Hmm.. So, how is this different from the fact that > > >b = a[1:10:3] already returns a reference to 'a' > > >(I suppose in the fact that it actually returns a new object just one > >that happens to share the same data with a). > > >However, I suppose that other methods don't return a reference to an > >already-altered object, do they. > > >Tim's argument has moved me from +0 to -0 > > >-Travis > > > I couldn't agree more with you and Tim on this. 
I would rather have code > that works all the time and will not possibly confuse people later, like > the example of > > >b = a.sort() > >and think you have a new array, but in fact have a new reference to the > >already-altered 'a'? > > alot of people have problems grasping this "memory management" type of > programming...or at least in my C class half of the kids dropped out > because the couldnt keep track of > > b = a.sort() meaning that b was actually just referencing a and if you > changed b then a was changed also. Maybe they should start with assembly (or mix ;) instead of C? In any case, references are pointer wrappers and pointers seem to be the biggest bugaboo in C. Maybe everyone should start with Fortran where most everything was a reference. I say "was" because the last Fortran I used was F77 and I have no idea what the current situation is. I suppose the in/out specs make a difference. But then again who on this list has problems remembering things like that > anyways right?... > ~Kenny Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Aug 29 17:46:57 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 17:46:57 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: On Tue, 29 Aug 2006, Tim Hochberg apparently wrote: > b = a.sort() # Returns a copy Given the extant Python vocabulary, this seems like a bad idea to me. (Better to call it 'sorted' in this case.) 
fwiw, Alan Isaac From torgil.svensson at gmail.com Tue Aug 29 17:43:48 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 23:43:48 +0200 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips Message-ID: > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) Is it possible for fromiter to take an optional shape (or count) argument in addition to the dtype argument? If both are given it could preallocate memory and we would only have to iterate over L once. //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Torgil Svensson wrote: > > something like this? > > > > def list2index(L): > > uL=sorted(set(L)) > > idx=dict((y,x) for x,y in enumerate(uL)) > > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > > Wow. That's amazing. Thank you. From tim.hochberg at ieee.org Tue Aug 29 17:49:26 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 14:49:26 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> Message-ID: <44F4B666.3070901@ieee.org> David M.
Cooke wrote: > On Tue, 29 Aug 2006 14:03:39 -0700 > Tim Hochberg wrote: > > >> Of these, clip, conjugate and round support an 'out' argument like that >> supported by ufuncs; byteswap has a boolean argument telling it >> whether to perform operations in place; and sort always operates in >> place. Noting that the ufunc-like methods (max, argmax, etc) appear to >> support the 'out' argument as well although it's not documented for most >> of them, it looks to me as if the two odd methods are byteswap and sort. >> The method situation could be made more consistent by swapping the >> boolean inplace flag in byteswap with another 'out' argument and also >> having sort not operate in place by default, but also supply an out >> argument there. Thus: >> >> b = a.sort() # Returns a copy >> a.sort(out=a) # Sorts a in place >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() >> in this case since we don't want to rewrite the sort routines) >> > > Ugh. That's completely different semantics from sort() on lists, so I think > it would be a source of bugs (at least, it would mean keeping two different > ideas of .sort() in my head). > Thinking about it a bit more, I'd leave sort alone (returning None and all). I was (over)reacting to changing sort to return self, which makes the set of methods less consistent within itself, less consistent with Python, and more error prone IMO, which seems the worst possibility. For the moment at least I do stand by the suggestion of changing byteswap to match the rest of the methods, as that would remove one outlier in the set of methods.
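For comparison, the split Tim and David are weighing is essentially the one NumPy exposes through the method/function pair: the ndarray.sort() method mutates in place and, like list.sort(), returns None, while the free function numpy.sort() returns a sorted copy. A minimal sketch:

```python
import numpy as np

a = np.array([3, 1, 2])

b = np.sort(a)    # function form: returns a sorted copy; a is untouched here
ret = a.sort()    # method form: sorts a in place and, like list.sort(), returns None

print(b.tolist())    # [1, 2, 3]
print(a.tolist())    # [1, 2, 3]
print(ret)           # None
```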
-tim From tcorcelle at yahoo.fr Tue Aug 29 17:57:59 2006 From: tcorcelle at yahoo.fr (tristan CORCELLE) Date: Tue, 29 Aug 2006 21:57:59 +0000 (GMT) Subject: [Numpy-discussion] Py2exe / numpy troubles Message-ID: <20060829215759.67527.qmail@web26509.mail.ukl.yahoo.com> > >1) First Problem: numpy\core\_internal.pyc not included in Library.zip > >C:\Lameness\dist>templatewindow.exe > Traceback (most recent call last): > File "templatewindow.py", line 7, in ? > File "wxmpl.pyc", line 25, in ? > File "matplotlib\numerix\__init__.pyc", line 60, in ? > File "Numeric.pyc", line 91, in ? > File "numpy\__init__.pyc", line 35, in ? > File "numpy\core\__init__.pyc", line 6, in ? > File "numpy\core\umath.pyc", line 12, in ? > File "numpy\core\umath.pyc", line 10, in __load > AttributeError: 'module' object has no attribute '_ARRAY_API' > > > > >I resolved that issue by adding the file > >...\Python24\Lib\site-packages\numpy\core\_internal.pyc in > >...\test\dist\library.zip\numpy\core. > >Each time I compile that executable, I add the file by hand. > >Does anybody know how to automatically add that file? > > although mine was in \python23 respectively :) > > thanks for this fix > now i have this problem > > C:\Lameness\dist>templatewindow.exe > Traceback (most recent call last): > File "c:\python23\lib\site-packages\py2exe\boot_common.py", line 92, in ? > import linecache > ImportError: No module named linecache > Traceback (most recent call last): > File "templatewindow.py", line 1, in ? 
> ImportError: No module named wx > > C:\Lameness\dist> > > > current setup.py = > > ######################################################## > from distutils.core import setup > import py2exe > from distutils.filelist import findall > import os > import matplotlib > matplotlibdatadir = matplotlib.get_data_path() > matplotlibdata = findall(matplotlibdatadir) > matplotlibdata_files = [] > for f in matplotlibdata: > dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) > matplotlibdata_files.append((os.path.split(dirname)[0], [f])) > > > packages = ['matplotlib', 'pytz'] > includes = [] > excludes = [] > dll_excludes = ['libgdk_pixbuf-2.0-0.dll', > 'libgobject-2.0-0.dll', > 'libgdk-win32-2.0-0.dll', > 'wxmsw26uh_vc.dll'] > > > opts = { 'py2exe': { 'packages' : packages, > 'includes' : includes, > 'excludes' : excludes, > 'dll_excludes' : dll_excludes > } > } > > setup ( console=['templatewindow.py'], > options = opts, > data_files = matplotlibdata_files > ) > ########################################################## > > anyone seen this problem before? > > first line of template window = import wx > My Configuration : Windows XP pro, ActivePython 2.4.2.10, Scipy 0.4.9, Numpy 0.9.8, MatplotLib 0.87.1, Py2exe 0.6.5, WxPython 2.6 ---- 1) Be very careful about how you generate the file "...\dist\library.zip". I don't know why, but the zip file generated by hand doesn't work. Check its size! Specific zip format? Specific options to generate it? I didn't check the source files to learn how library.zip is generated. My method is the following one: - Extract the ...\test\dist\library.zip file in ...\test\dist\library - Add the file ...\Python24\Lib\site-packages\numpy\core\_internal.pyc in ...\test\dist\library\numpy\core. - Use Winzip to add the ...\test\dist\library\numpy directory to the ...\dist\library.zip file. I know it is not really beautiful, but it seems to work. It is a temporary solution for debugging.
I am new in Python so my style is not really "academic" ---- 2) If you use my setup.py file, one more time, be careful because of the wx-specific dll: wxmsw26uh_vc.dll. I don't know why, but Py2Exe doesn't find it. I remove that dll from the compilation phase and I copy it by hand into the ...\test\dist directory. An idea may be the modification of the setup.py file to indicate the path of that dll, or something like that. DOES ANYONE HAVE THE SOLUTION? ---- 3) I am still blocked on my second issue > >2) Second problem: I don't know how to resolve that issue: > > > >Traceback (most recent call last): > > File "profiler_ftt.py", line 15, in ? > > from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ > > File "matplotlib\backends\backend_wx.pyc", line 152, in ? > > File "matplotlib\backend_bases.pyc", line 10, in ? > > File "matplotlib\colors.pyc", line 33, in ? > > File "matplotlib\numerix\__init__.pyc", line 67, in ? > > File "numpy\__init__.pyc", line 35, in ? > > File "numpy\_import_tools.pyc", line 173, in __call__ > > File "numpy\_import_tools.pyc", line 68, in _init_info_modules > > File "", line 1, in ? > > File "numpy\random\__init__.pyc", line 3, in ? > > File "numpy\random\mtrand.pyc", line 12, in ? > > File "numpy\random\mtrand.pyc", line 10, in __load > > File "numpy.pxi", line 32, in mtrand > >AttributeError: 'module' object has no attribute 'dtype' > > > >I don't find the file numpy.pxi in my file tree nor in \test\dist\library.zip. > >I browsed the web in the hope of finding a solution, but found nothing. > >It seems that this issue is well known, but no solution is provided in the mailing lists. > > > >What is that file "numpy.pxi"? Where do I find it, or how is it generated? > >How do I resolve that execution issue? Regards, Tristan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kortmann at ideaworks.com Tue Aug 29 18:16:41 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 15:16:41 -0700 (PDT) Subject: [Numpy-discussion] Py2exe / numpy troubles Message-ID: <2588.12.216.231.149.1156889801.squirrel@webmail.ideaworks.com> My Configuration : Windows XP pro, ActivePython 2.4.2.10, Scipy 0.4.9, Numpy 0.9.8, MatplotLib 0.87.1, Py2exe 0.6.5, WxPython 2.6 ---- 1) Be very careful about how you generate the file "...\dist\library.zip". I don't know why, but the zip file generated by hand doesn't work. Check its size! Specific zip format? Specific options to generate it? I didn't check the source files to learn how library.zip is generated. My method is the following one: - Extract the ...\test\dist\library.zip file in ...\test\dist\library - Add the file ...\Python24\Lib\site-packages\numpy\core\_internal.pyc in ...\test\dist\library\numpy\core. - Use Winzip to add the ...\test\dist\library\numpy directory to the ...\dist\library.zip file. I know it is not really beautiful, but it seems to work. It is a temporary solution for debugging. I am new in Python so my style is not really "academic" ---- 2) If you use my setup.py file, one more time, be careful because of the wx-specific dll: wxmsw26uh_vc.dll. I don't know why, but Py2Exe doesn't find it. I remove that dll from the compilation phase and I copy it by hand into the ...\test\dist directory. An idea may be the modification of the setup.py file to indicate the path of that dll, or something like that. DOES ANYONE HAVE THE SOLUTION? ---- 3) I am still blocked on my second issue > >2) Second problem: I don't know how to resolve that issue: > > > >Traceback (most recent call last): > > File "profiler_ftt.py", line 15, in ? > > from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ > > File "matplotlib\backends\backend_wx.pyc", line 152, in ? > > File "matplotlib\backend_bases.pyc", line 10, in ? > > File "matplotlib\colors.pyc", line 33, in ?
> > File "matplotlib\numerix\__init__.pyc", line 67, in ? > > File "numpy\__init__.pyc", line 35, in ? > > File "numpy\_import_tools.pyc", line 173, in __call__ > > File "numpy\_import_tools.pyc", line 68, in _init_info_modules > > File "", line 1, in ? > > File "numpy\random\__init__.pyc", line 3, in ? > > File "numpy\random\mtrand.pyc", line 12, in ? > > File "numpy\random\mtrand.pyc", line 10, in __load > > File "numpy.pxi", line 32, in mtrand > > AttributeError: 'module' object has no attribute 'dtype' > > > >I don't find the file numpy.pxi in my file tree nor in \test\dist\library.zip. > >I browsed the web in the hope of finding a solution, but found nothing. > >It seems that this issue is well known, but no solution is provided in the mailing lists. > > > >What is that file "numpy.pxi"? Where do I find it, or how is it generated? > >How do I resolve that execution issue? Regards, Tristan Could you post your setup file please? I can look at it. I may not be much help, but some is better than none. From pfdubois at gmail.com Tue Aug 29 18:20:44 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Tue, 29 Aug 2006 15:20:44 -0700 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <200608290125.25232.pgmdevlist@gmail.com> References: <200608290125.25232.pgmdevlist@gmail.com> Message-ID: Whatever the current state of the implementation, the original intention was that ma be, where it makes sense, a "drop-in" replacement for numpy arrays. Being retired I don't read this list all that carefully, but I did see some subjects concerning axis defaults (about the 98th time we have had that discussion, I suppose) and perhaps ma and numpy got out of sync, even if they were in sync to begin with. For sum, x.sum() should be the sum of the entire array, no? And that implies a default of None, doesn't it? So a default of zero or one would be wrong. Oh well, back to my nap.
On 28 Aug 2006 22:26:54 -0700, PGM wrote: > > Folks, > I keep running into the following problem since some recent update (I'm > currently running 1.0b3, but the problem occurred roughly around 0.9.8): > > >>> import numpy.core.ma as MA > >>> x=MA.array([[1],[2]],mask=False) > >>> x.sum(None) > /usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, > target, > axis, dtype) > 393 m.shape = (1,) > 394 if m is nomask: > --> 395 return masked_array (self.f.reduce (t, axis)) > 396 else: > 397 t = masked_array (t, m) > > TypeError: an integer is required > #................................ > > Note that x.sum(0) and x.sum(1) work fine. I know some consensus seems to be > lacking with MA, but still, I can't see why axis=None is not recognized. > > Corollary: with masked array, the default axis for sum is 0, when it's None > for regular arrays. Is there a reason for this inconsistency ? > > Thanks a lot > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant at ee.byu.edu Tue Aug 29 18:36:55 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 16:36:55 -0600 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <200608290125.25232.pgmdevlist@gmail.com> References: <200608290125.25232.pgmdevlist@gmail.com> Message-ID: <44F4C187.10102@ee.byu.edu> PGM wrote: >Folks, >I keep running into the following problem since some recent update (I'm >currently running 1.0b3, but the problem occurred roughly around 0.9.8): > > > >>>>import numpy.core.ma as MA >>>>x=MA.array([[1],[2]],mask=False) >>>>x.sum(None) >>>> >>>> >/usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, target, >axis, dtype) > 393 m.shape = (1,) > 394 if m is nomask: >--> 395 return masked_array (self.f.reduce (t, axis)) > 396 else: > 397 t = masked_array (t, m) > >TypeError: an integer is required >#................................ > > This bug has hopefully been fixed (in SVN). Please let us know if it still persists. -Travis From charlesr.harris at gmail.com Tue Aug 29 18:42:23 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 16:42:23 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4B666.3070901@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> <44F4B666.3070901@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > > David M. Cooke wrote: > > On Tue, 29 Aug 2006 14:03:39 -0700 > > Tim Hochberg wrote: > > > > > >> Of these, clip, conjugate and round support an 'out' argument like > that > >> supported by ufunces; byteswap has a boolean argument telling it > >> whether to perform operations in place; and sort always operates in > >> place. 
Noting that the ufunc-like methods (max, argmax, etc) appear to > >> support the 'out' argument as well although it's not documented for > most > >> of them, it looks to me as if the two odd methods are byteswap and > sort. > >> The method situation could be made more consistent by swapping the > >> boolean inplace flag in byteswapped with another 'out' argument and > also > >> having sort not operate in place by default, but also supply an out > >> argument there. Thus: > >> > >> b = a.sort() # Returns a copy > >> a.sort(out=a) # Sorts a in place > >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort > () > >> in this case since we don't want to rewrite the sort routines) > >> > > > > Ugh. That's completely different semantics from sort() on lists, so I > think > > it would be a source of bugs (at least, it would mean keeping two > different > > ideas of .sort() in my head). > > > Thinking about it a bit more, I'd leave sort alone (returning None and > all).. I was (over)reacting to changing to sort to return self, which > makes the set of methods both less consistent within itself, less > consistent with python and more error prone IMO, which seems the worst > possibility. Here is Guido on sort: I'd like to explain once more why I'm so adamant that *sort*() shouldn't *return* 'self'. This comes from a coding style (popular in various other languages, I believe especially Lisp revels in it) where a series of side effects on a single object can be chained like this: x.compress().chop(y).*sort*(z) which would be the same as x.compress() x.chop(y) x.*sort*(z) I find the chaining form a threat to readability; it requires that the reader must be intimately familiar with each of the methods. 
The second form makes it clear that each of these calls acts on the same object, and so even if you don't know the class and its methods very well, you can understand that the second and third call are applied to x (and that all calls are made for their side-effects), and not to something else. I'd like to reserve chaining for operations that *return* new values, like string processing operations: y = x.rstrip("\n").split(":").lower() There are a few standard library modules that encourage chaining of side-effect calls (pstat comes to mind). There shouldn't be any new ones; pstat slipped through my filter when it was weak. So it seems you are correct in light of the Python philosophy. For those operators that allow specification of out I would still like to see a special value that means inplace, I think it would make the code clearer. Of course, merely having the out flag violates Guido's intent. The idea seems to be that we want some way to avoid allocating new memory. So maybe byteswap should be inplace and return None, while a copyto method could be added. Then one would do a.copyto(b) b.byteswap() instead of b = a.byteswap() -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Tue Aug 29 18:46:45 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 29 Aug 2006 15:46:45 -0700 Subject: [Numpy-discussion] stumped numpy user seeks help Message-ID: <44F4C3D5.80600@jpl.nasa.gov> My head is about to explode. I have an M by N array of floats. Associated with the columns are character labels ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates are contiguous I want to replace the 2 'b' columns with the sum of the 2 columns. Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. The resulting array still has M rows but less than N columns. Anyone? Could be any harder than Sudoku. 
Mathew From kwgoodman at gmail.com Tue Aug 29 19:09:34 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 16:09:34 -0700 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: On 8/29/06, Mathew Yeates wrote: > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. Make a cumsum of the array. Find the index of the last 'a', last 'b', etc and make the reduced array from that. Then take the diff of the columns. I know that's vague, but so is my understanding of python/numpy. Or even more vague: make a function that does what you want. From charlesr.harris at gmail.com Tue Aug 29 19:17:36 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 17:17:36 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> <44F4B666.3070901@ieee.org> Message-ID: On 8/29/06, Charles R Harris wrote: > > On 8/29/06, Tim Hochberg wrote: > > > David M. Cooke wrote: > > > On Tue, 29 Aug 2006 14:03:39 -0700 > > > Tim Hochberg wrote: > > > > > > > > >> Of these, clip, conjugate and round support an 'out' argument like > > that > > >> supported by ufunces; byteswap has a boolean argument telling it > > >> whether to perform operations in place; and sort always operates in > > >> place. 
Noting that the ufunc-like methods (max, argmax, etc) appear > > to > > >> support the 'out' argument as well although it's not documented for > > most > > >> of them, it looks to me as if the two odd methods are byteswap and > > sort. > > >> The method situation could be made more consistent by swapping the > > >> boolean inplace flag in byteswapped with another 'out' argument and > > also > > >> having sort not operate in place by default, but also supply an out > > >> argument there. Thus: > > >> > > >> b = a.sort() # Returns a copy > > >> a.sort(out=a) # Sorts a in place > > >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = > > a.sort() > > >> in this case since we don't want to rewrite the sort routines) > > >> > > > > > > Ugh. That's completely different semantics from sort() on lists, so I > > think > > > it would be a source of bugs (at least, it would mean keeping two > > different > > > ideas of .sort() in my head). > > > > > Thinking about it a bit more, I'd leave sort alone (returning None and > > all).. I was (over)reacting to changing to sort to return self, which > > makes the set of methods both less consistent within itself, less > > consistent with python and more error prone IMO, which seems the worst > > possibility. > > > Here is Guido on sort: > > I'd like to explain once more why I'm so adamant that * > sort*() shouldn't > *return* 'self'. > > This comes from a coding style (popular in various other languages, I > believe especially Lisp revels in it) where a series of side effects > > on a single object can be chained like this: > > x.compress().chop(y).*sort*(z) > > which would be the same as > > x.compress() > x.chop > (y) > x.*sort*(z) > > I find the chaining form a threat to readability; it requires that the > reader must be intimately familiar with each of the methods. 
The > > second form makes it clear that each of these calls acts on the same > object, and so even if you don't know the class and its methods very > well, you can understand that the second and third call are applied to > > x (and that all calls are made for their side-effects), and not to > something else. > > I'd like to reserve chaining for operations that *return* new values, > > like string processing operations: > > y = x.rstrip("\n").split(":").lower() > > There are a few standard library modules that encourage chaining of > side-effect calls (pstat comes to mind). There shouldn't be any new > > ones; pstat slipped through my filter when it was weak. > > So it seems you are correct in light of the Python philosophy. For those > operators that allow specification of out I would still like to see a > special value that means inplace, I think it would make the code clearer. Of > course, merely having the out flag violates Guido's intent. The idea seems > to be that we want some way to avoid allocating new memory. So maybe > byteswap should be inplace and return None, while a copyto method could be > added. Then one would do > > a.copyto(b) > b.byteswap() > > instead of > > b = a.byteswap() > > To expand on this a bit. Guidos philosophy, combined with a desire for memory efficiency, means that methods like byteswap and clip, which use the same memory, should operate inplace and return None. Thus, instead of b = a.clip(...) use b = a.copy() b.clip(...) Hey, it's a risc machine. If we did this, then functions could always return copies: b = clip(a,...) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
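The memory-reuse idea being debated here (avoid allocating a new array by writing into caller-supplied storage) is what the existing `out` argument already provides for clip; a small illustration, assuming nothing beyond NumPy's public API:

```python
import numpy as np

a = np.arange(10, dtype=float)
buf = np.empty_like(a)

# Write the clipped values into a preallocated buffer: no new allocation.
np.clip(a, 2, 7, out=buf)
assert buf.min() == 2 and buf.max() == 7

# Or clip fully in place, overwriting a itself.
np.clip(a, 2, 7, out=a)
assert (a == buf).all()
```

Passing `out=a` gives the in-place behaviour Guido's style argues for, while `out=buf` keeps the source intact; either way only one destination buffer exists.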
URL: From pgmdevlist at gmail.com Tue Aug 29 19:22:05 2006 From: pgmdevlist at gmail.com (PGM) Date: Tue, 29 Aug 2006 19:22:05 -0400 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <44F4C187.10102@ee.byu.edu> References: <200608290125.25232.pgmdevlist@gmail.com> <44F4C187.10102@ee.byu.edu> Message-ID: <200608291922.05664.pgmdevlist@gmail.com> Travis, > This bug has hopefully been fixed (in SVN). Please let us know if it > still persists. It seems to work quite fine with the latest version of ma. Thanks a lot ! P. From fperez.net at gmail.com Tue Aug 29 19:24:52 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 17:24:52 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F48E1A.1020006@ieee.org> References: <44F48E1A.1020006@ieee.org> Message-ID: On 8/29/06, Travis Oliphant wrote: > > Hi all, > > Classes start for me next Tuesday, and I'm teaching a class for which I > will be using NumPy / SciPy extensively. I need to have a release of > these two (and hopefully matplotlib) that work with each other. > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > (probably Monday), and also get a release of SciPy out as well. At that > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). What's the status of these 'overwriting' messages? planck[/tmp]> python -c 'import scipy;scipy.test()' Overwriting info= from scipy.misc (was from numpy.lib.utils) Overwriting fft= from scipy.fftpack.basic (was from /home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc) ... I was under the impression you'd decided to quiet them out, but they seem to be making a comeback. 
Cheers, f From charlesr.harris at gmail.com Tue Aug 29 19:26:23 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 17:26:23 -0600 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: On 8/29/06, Keith Goodman wrote: > > On 8/29/06, Mathew Yeates wrote: > > > I have an M by N array of floats. Associated with the columns are > > character labels > > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > > are contiguous > > > > I want to replace the 2 'b' columns with the sum of the 2 columns. > > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. > > Make a cumsum of the array. Find the index of the last 'a', last 'b', > etc and make the reduced array from that. Then take the diff of the > columns. > > I know that's vague, but so is my understanding of python/numpy. > > Or even more vague: make a function that does what you want. Or you could use searchsorted on the labels to get a sequence of ranges. What you have is a sort of binning applied to columns instead of values in a vector. Or, if the overhead isn't to much, use a dictionary of with (keys: array) entries. Index thru the columns adding keys, when the key is new insert a column copy, when it is already present add the new column to the old one. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
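Both suggestions in this subthread (cumsum-and-diff, searchsorted binning) amount to summing contiguous runs of columns; `np.add.reduceat` does exactly that in one call. A sketch using Mathew's labels and made-up data:

```python
import numpy as np

m = np.arange(24, dtype=float).reshape(3, 8)        # made-up M x N data
labels = ['a', 'b', 'b', 'c', 'd', 'e', 'e', 'e']   # sorted, runs contiguous

# Index of the first column of each run of equal labels.
starts = [i for i, lab in enumerate(labels) if i == 0 or labels[i - 1] != lab]

# Sum each run of columns; the result keeps M rows, one column per label.
out = np.add.reduceat(m, starts, axis=1)
assert out.shape == (3, 5)
assert out[0].tolist() == [0.0, 3.0, 3.0, 4.0, 18.0]
```

The two 'b' columns and the three 'e' columns collapse to one column each, which is precisely the reduction the original post asks for.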
URL: From rkanwar at geol.sc.edu Tue Aug 29 19:57:45 2006 From: rkanwar at geol.sc.edu (Rahul Kanwar) Date: Tue, 29 Aug 2006 19:57:45 -0400 Subject: [Numpy-discussion] array indexing problem Message-ID: <1156895865.5499.5.camel@hydro.geol.sc.edu> Hello, I am trying to extract a column from a 2D array here is what is have done: -------------------------------------------- In [3]: a = array([[1,2,3],[1,2,3]]) In [4]: a Out[4]: array([[1, 2, 3], [1, 2, 3]]) In [5]: a[:, 1] Out[5]: array([2, 2]) In [6]: a[:, 1:2] Out[6]: array([[2], [2]]) -------------------------------------------- when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am i doing something wrong here or is there some reason for this behavior ? regards, Rahul From wbaxter at gmail.com Tue Aug 29 20:02:24 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 30 Aug 2006 09:02:24 +0900 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: That's just the way it works in numpy. Slices return arrays of lower rank. If you want arrays that behave like they do in linear algebra you can use 'matrix' instead. Check out the Numpy for Matlab users page for more info on array vs. matrix. http://www.scipy.org/NumPy_for_Matlab_Users --bb On 8/30/06, Rahul Kanwar wrote: > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. 
Am > i doing something wrong here or is there some reason for this behavior ? > > regards, > Rahul > From rahul.kanwar at gmail.com Tue Aug 29 20:05:04 2006 From: rahul.kanwar at gmail.com (Rahul Kanwar) Date: Tue, 29 Aug 2006 20:05:04 -0400 Subject: [Numpy-discussion] array indexing problem Message-ID: <63dec5bf0608291705l793865cag4dc59884a1542f92@mail.gmail.com> Hello, I am trying to extract a column from a 2D array here is what is have done: -------------------------------------------- In [3]: a = array([[1,2,3],[1,2,3]]) In [4]: a Out[4]: array([[1, 2, 3], [1, 2, 3]]) In [5]: a[:, 1] Out[5]: array([2, 2]) In [6]: a[:, 1:2] Out[6]: array([[2], [2]]) -------------------------------------------- when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am i doing something wrong here or is there some reason for this behavior ?
regards, Rahul From charlesr.harris at gmail.com Tue Aug 29 20:11:13 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 18:11:13 -0600 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: On 8/29/06, Rahul Kanwar wrote: > > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am > i doing something wrong here or is there some reason for this behavior ? The behaviour is expected. a[:,1] is returned with one less dimension, just as for a one dimensional array b[1] is zero dimensional (a scalar). For instance In [65]: int64(2).shape Out[65]: () You can get what you expect using matrices: In [67]: a = mat(arange(6).reshape(2,3)) In [68]: a[:,1] Out[68]: matrix([[1], [4]]) But generally it is best to just use arrays and get used to the conventions. regards, > Rahul Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
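The rule in these replies (an integer index removes an axis, a slice preserves it), plus the usual idioms for recovering a column vector without switching to matrices, in a minimal sketch:

```python
import numpy as np

a = np.array([[1, 2, 3], [1, 2, 3]])

assert a[:, 1].shape == (2,)      # integer index: the axis is dropped
assert a[:, 1:2].shape == (2, 1)  # slice: the axis is kept

# Two common ways to get the column back with shape (2, 1):
assert a[:, 1][:, np.newaxis].shape == (2, 1)
assert a[:, 1].reshape(-1, 1).shape == (2, 1)
```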
URL: From robert.kern at gmail.com Tue Aug 29 20:13:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Aug 2006 19:13:39 -0500 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: Rahul Kanwar wrote: > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am > i doing something wrong here or is there some reason for this behavior ? Indexing reduces the rank of the array. Slicing does not. In the first instance, you do not get a 1x2 array; you get an array with shape (2,). This choice dates from the earliest days of Numeric. It ends up being quite useful in most contexts. However, it is somewhat less so when you want to treat these arrays as matrices and row and column vectors. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From rw679aq02 at sneakemail.com Wed Aug 30 01:47:02 2006 From: rw679aq02 at sneakemail.com (rw679aq02 at sneakemail.com) Date: Tue, 29 Aug 2006 22:47:02 -0700 Subject: [Numpy-discussion] Irregular arrays Message-ID: <1156916822.16010.269732980@webmail.messagingengine.com> Many problems are best solved with irregular array structures. These are aggregations not having a rectangular shape. To motivate, here's one example, http://lambda-the-ultimate.org/files/HammingNumbersDeclarative.7z - from http://lambda-the-ultimate.org/node/608#comment-5746 Irregularity here changes an O(N^3) solution to O(N). (The file format is a 7zip archive with a MathReader file inside, readable in Windows or Unix with free software.) These cases also arise in simulations where physical geometry determines array shape. Here memory consumption is the minimization goal that makes irregularity desirable. The access function will return NaN or zero for out-of-bounds requests. There is no need to consume memory storing NaNs and zeros. Please advise how much support numpy/Scipy has for these structures, if any, including future plans. If support exists, could you kindly supply a Scipy declaration matching the first example. Thank you very much.
From oliphant.travis at ieee.org Wed Aug 30 02:58:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Aug 2006 00:58:53 -0600 Subject: [Numpy-discussion] Irregular arrays In-Reply-To: <1156916822.16010.269732980@webmail.messagingengine.com> References: <1156916822.16010.269732980@webmail.messagingengine.com> Message-ID: <44F5372D.40206@ieee.org> rw679aq02 at sneakemail.com wrote: > Many problems are best solved with irregular array structures. These > are aggregations not having a rectangular shape. To motivate, here's > one example, > > http://lambda-the-ultimate.org/files/HammingNumbersDeclarative.7z > - from http://lambda-the-ultimate.org/node/608#comment-5746 > > Irregularity here changes an O(N^3) solution to O(N). (The file format > is a 7zip archive with a MathReader file inside, readable in Windows or > Unix with free software.) > > These cases also arise in simulations where physical geometry determines > array shape. Here memory consumption is the minimization goal that > makes irregularity desirable. The access function will return NaN or > zero for out-of-bounds requests. There is no need to consume memory > storing NaNs and zeros > > Please advise how much support numpy/Scipy has for these structures, if > any, including future plans. If support exists, could you kindly supply > a Scipy declaration matching the first example. > SciPy has sparse matrix support (scipy.sparse) with several storage formats You can also construct irregular arrays using arrays of objects or just lists of lists. 
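A minimal sketch of the object-array route Travis mentions: each row is a tight float array and only the outer container is ragged. The triangular shape here is a made-up stand-in for the poster's cut-corner cube:

```python
import numpy as np

# Hypothetical ragged structure: row i holds i + 1 floats.
rows = [np.zeros(i + 1) for i in range(4)]

ragged = np.empty(len(rows), dtype=object)
for i, r in enumerate(rows):
    ragged[i] = r

assert ragged[3].shape == (4,)

# Elementwise operations still work row by row.
ragged[2] += 1.0
assert ragged[2].sum() == 3.0
```

scipy.sparse is the right tool when the irregularity is sparsity in a 2-D matrix; object arrays or plain lists of arrays cover the general ragged case, at the cost of per-row Python overhead.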
-Travis From bruce.who.hk at gmail.com Wed Aug 30 04:06:01 2006 From: bruce.who.hk at gmail.com (bruce.who.hk) Date: Wed, 30 Aug 2006 16:06:01 +0800 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available References: <44F01802.8050505@ieee.org> <200608281448353906004@gmail.com> <44F341E4.7000003@ieee.org> Message-ID: <200608301605580156650@gmail.com> Hi, Travis I tried numpy1.0b4 and add this to setup.py includes = ["numpy.core._internal"] then it works! And all scripts can be packed into a single executables with "bundle_files":2, "skip_archive":0, zipfile = None, --skip_archive option is not needed now. ------------------------------------------------------------- >I suspect you need to force-include the numpy/core/_internal.py file by >specifying it in your setup.py file as explained on the py2exe site. >That module is only imported by the multiarraymodule.c file which I >suspect py2exe can't automatically discern. > >In 1.0 we removed the package-loader issues which are probably giving >the scipy-style subpackage errors. So, very likely you might be O.K. >with the beta releases of 1.0 as long as you tell py2exe about >numpy/core/_internal.py so that it gets included in the distribution. > >Please post any successes. > >Best, > >-Travis > >-- >http://mail.python.org/mailman/listinfo/python-list ------------------ bruce.who.hk 2006-08-30 From rw679aq02 at sneakemail.com Wed Aug 30 04:21:05 2006 From: rw679aq02 at sneakemail.com (rw679aq02 at sneakemail.com) Date: Wed, 30 Aug 2006 01:21:05 -0700 Subject: [Numpy-discussion] Irregular arrays In-Reply-To: <1156916822.16010.269732980@webmail.messagingengine.com> References: <1156916822.16010.269732980@webmail.messagingengine.com> Message-ID: <1156926065.27232.269739888@webmail.messagingengine.com> Travis, A sparse matrix is a different animal serving a different purpose, i.e., solution of linear systems. Those storage formats are geared for that application: upper diagonal, block diagonal, stripwise, etc. 
To be more specific: here tight numerical arrays are presumably discussed. Python and other languages could define an "irregular list of irregular lists" or "aggregation of objects" configuration. Probably Lisp would be better for that. But it is not my driving interest. My interest is packed storage minimizing memory consumption and access time, with bonus points for integration with numerical recipes and element-wise operations. Again, actual demonstration would be appreciated. I selected an example with minimal deviation from a regular array to simplify things. The shape is essentially a cube with a planar cut across one corner. The Mathematica code shows it is very easy to define in that language. (I am not sure whether it is tightly packed but it shows O(N) performance graphs.) From svetosch at gmx.net Wed Aug 30 05:57:52 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 30 Aug 2006 11:57:52 +0200 Subject: [Numpy-discussion] array indexing problem In-Reply-To: References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: <44F56120.5040404@gmx.net> Charles R Harris schrieb: > You can get what you expect using matrices: > ... > But generally it is best to just use arrays and get used to the conventions. > Well, there are different views on this subject, and I'm happy that the numpy crew is really trying (and good at it) to make array *and* matrix users happy. So please let us coexist peacefully. -sven From landriu at discovery.saclay.cea.fr Wed Aug 30 06:28:40 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Wed, 30 Aug 2006 12:28:40 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package Message-ID: <200608301029.k7UATQ4v013493@discovery.saclay.cea.fr> Hello, is it necessary to install numarray separately to use numpy ? Indeed, after numpy installation, when I try to use it in the code, I get the same error as below : .../... 
Python 2.4.1 (#1, May 13 2005, 13:45:18) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? from util import * File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? from numpy import geterr ImportError: No module named numpy >>> Thanks for your answer, Cheers, David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From a.h.jaffe at gmail.com Wed Aug 30 07:04:22 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 30 Aug 2006 12:04:22 +0100 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? Message-ID: Hi all, the current implementation of fftfreq (which is meant to return the appropriate frequencies for an FFT) does the following: k = range(0,(n-1)/2+1)+range(-(n/2),0) return array(k,'d')/(n*d) I have tried this with very long (2**24) arrays, and it is ridiculously slow. Should this instead use arange (or linspace?) and concatenate rather than converting the above list? This seems to result in acceptable performance, but we could also perhaps even pre-allocate the space. The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to produce lots of duplicated frequencies, contrary to the actual output of rfft: def rfftfreq(n,d=1.0): """ rfftfreq(n, d=1.0) -> f DFT sample frequencies (for usage with rfft,irfft). 
The returned float array contains the frequency bins in cycles/unit (with zero at the start) given a window length n and a sample spacing d: f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd **** None of these should be doubled, right? """ assert isinstance(n,int) return array(range(1,n+1),dtype=int)/2/float(n*d) Thanks, Andrew From a.h.jaffe at gmail.com Wed Aug 30 07:17:51 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 30 Aug 2006 12:17:51 +0100 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? In-Reply-To: References: Message-ID: [copied to the scipy list since rfftfreq is only in scipy] Andrew Jaffe wrote: > Hi all, > > the current implementation of fftfreq (which is meant to return the > appropriate frequencies for an FFT) does the following: > > k = range(0,(n-1)/2+1)+range(-(n/2),0) > return array(k,'d')/(n*d) > > I have tried this with very long (2**24) arrays, and it is ridiculously > slow. Should this instead use arange (or linspace?) and concatenate > rather than converting the above list? This seems to result in > acceptable performance, but we could also perhaps even pre-allocate the > space. > > The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to > produce lots of duplicated frequencies, contrary to the actual output of > rfft: > > def rfftfreq(n,d=1.0): > """ rfftfreq(n, d=1.0) -> f > > DFT sample frequencies (for usage with rfft,irfft). > > The returned float array contains the frequency bins in > cycles/unit (with zero at the start) given a window length n and a > sample spacing d: > > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd > > **** None of these should be doubled, right? 
> > """ > assert isinstance(n,int) > return array(range(1,n+1),dtype=int)/2/float(n*d) > > Thanks, > > Andrew > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 From stefan at sun.ac.za Wed Aug 30 08:04:16 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 14:04:16 +0200 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F47036.8040300@ieee.org> References: <44F47036.8040300@ieee.org> Message-ID: <20060830120415.GQ23074@mentat.za.net> On Tue, Aug 29, 2006 at 10:49:58AM -0600, Travis Oliphant wrote: > Matt Knox wrote: > > is the following behaviour expected? or is this a bug with > > numpy.object_ ? I'm using numpy 1.0b1 > > > > >>> print numpy.array([],numpy.float64).size > > 0 > > > > >>> print numpy.array([],numpy.object_).size > > 1 > > > > Should the size of an array initialized from an empty list not always > > be 1 ? or am I just crazy? > > > Not in this case. Explictly creating an object array from any object > (even the empty-list object) gives you a 0-d array containing that > object. When you explicitly create an object array a different section > of code handles it and gives this result. This is a recent change, and > I don't think this use-case was considered as a backward incompatibility > (which I believe it is). Perhaps we should make it so array([],....) > always returns an empty array. I'm not sure. Comments? 
The current behaviour makes sense, but is maybe not consistent: N.array([],dtype=object).size == 1 N.array([[],[]],dtype=object).size == 2 Regards Stéfan From svetosch at gmx.net Wed Aug 30 08:31:50 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 30 Aug 2006 14:31:50 +0200 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: <44F58536.7030806@gmx.net> Mathew Yeates schrieb: > My head is about to explode. > > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. > > The resulting array still has M rows but less than N columns. Anyone? > Couldn't be any harder than Sudoku. > Hi, I don't have time for this ;-), but I learnt something useful along the way... import numpy as n m = n.ones([2,6]) a = ['b', 'c', 'c', 'd', 'd', 'd'] startindices = set([a.index(x) for x in a]) out = n.empty([m.shape[0], 0]) for i in startindices: temp = n.mat(m[:, i : i + a.count(a[i])]).sum(axis = 1) out = n.hstack([out, temp]) print out Not sure if axis = 1 is needed, but until the defaults have settled a bit it can't hurt. You need python 2.4 for the built-in set, and out will be a numpy matrix; use asarray if you don't like that.
But here it's really nice to work with matrices, because otherwise .sum() will give you a 1-d array sometimes, and that will suddenly look like a row to hstack (instead of a nice column vector) and wouldn't work -- that's why matrices are so great and everybody should be using them ;-) hth, sven From landriu at discovery.saclay.cea.fr Wed Aug 30 08:51:51 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Wed, 30 Aug 2006 14:51:51 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] Message-ID: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Hello, I come back to my question: how to use numarray with the numpy installation? After some update in the system there is another error message: >> AttributeError: 'module' object has no attribute 'NewAxis' It seems, from the advice of the system manager, that a kind of alias failed to execute the right action. Thanks in advance for your answer, Cheers, David Landriu ------------- Begin Forwarded Message ------------- >Date: Wed, 30 Aug 2006 14:14:27 +0200 (MEST) >To: LANDRIU David SAp >Subject: Re: Use of numarray from numpy package [# INC NO 24609] >From: User Support >Error-to: Jean-Rene Rouet >X-CEA-Source: externe >X-CEA-DebugSpam: 7% >X-CEA-Spam-Report: No antispam rules were triggered by this message >X-CEA-Spam-Hits: __HAS_MSGID 0, __MIME_TEXT_ONLY 0, __SANE_MSGID 0, __STOCK_CRUFT 0 >MIME-Version: 1.0 >Content-Transfer-Encoding: 8bit >X-Spam-Checker-Version: SpamAssassin 2.63 (2004-01-11) on discovery >X-Spam-Status: No, hits=0.1 required=4.0 tests=AWL autolearn=no version=2.63 >X-Spam-Level: > > >Response from User Support to your question: >------------------------------------------ > >Hello again >Please try now > >WW Here is what I get now: {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy {ccali22}~(0)> {ccali22}~(0)> {ccali22}~(0)>python Python 2.3.5 (#2, Oct 17 2005, 17:20:02) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'NewAxis' >>> ############################################## ############################################## Hello, is it necessary to install numarray separately to use numpy? Indeed, after numpy installation, when I try to use it in the code, I get the same error as below: .../... Python 2.4.1 (#1, May 13 2005, 13:45:18) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? from util import * File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ?
from numpy import geterr ImportError: No module named numpy >>> Thanks for your answer, Cheers, David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From joris at ster.kuleuven.be Wed Aug 30 09:42:54 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Wed, 30 Aug 2006 15:42:54 +0200 Subject: [Numpy-discussion] Use of numarray from numpy package In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <200608301542.54416.joris@ster.kuleuven.be> Hi David, Numeric, numarray and numpy are three different packages that can live independently, but that can also coexist if you like. If you're new to these packages, you should stick to numpy, as the other ones are getting phased out. It's difficult to see what's going wrong without having seen how you installed it. I see that you tried >>> from numarray import * Perhaps a stupid question, but you did import numpy with >>> from numpy import * didn't you? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
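[For reference on the error above: numarray's NewAxis corresponds to numpy's own newaxis, which is simply an alias for None. A minimal sketch of the numpy spelling, independent of the numpy.numarray compatibility layer discussed in this thread:]

```python
import numpy as np

# numarray's NewAxis is spelled newaxis in numpy; both are aliases for None.
print(np.newaxis is None)   # True

a = np.arange(3)
col = a[:, np.newaxis]      # insert a new axis: shape (3,) -> (3, 1)
print(col.shape)            # (3, 1)
```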
From tim.hochberg at ieee.org Wed Aug 30 10:33:25 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 30 Aug 2006 07:33:25 -0700 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: References: Message-ID: <44F5A1B5.7090409@ieee.org> Torgil Svensson wrote: >> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >> > > Is it possible for fromiter to take an optional shape (or count) > argument in addition to the dtype argument? Yes. fromiter(iterable, dtype, count) works. > If both are given it could > preallocate memory and we only have to iterate over L once. > Regardless, L is only iterated over once. In general you can't rewind iterators, so that's a requirement. This is accomplished by doing successive overallocation similar to the way appending to a list is handled. By specifying the count up front you save a bunch of reallocs, but not any iteration. -tim > //Torgil > > On 8/29/06, Keith Goodman wrote: > >> On 8/29/06, Torgil Svensson wrote: >> >>> something like this? >>> >>> def list2index(L): >>> uL=sorted(set(L)) >>> idx=dict((y,x) for x,y in enumerate(uL)) >>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>> >> Wow. That's amazing. Thank you.
From perry at stsci.edu Wed Aug 30 10:43:26 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 30 Aug 2006 10:43:26 -0400 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <56849779-37DC-444D-B260-14CBFDAEE201@stsci.edu> On Aug 30, 2006, at 8:51 AM, LANDRIU David SAp wrote: > Hello, > > I come back to my question : how to use numarray > with the numpy installation ? > If you are using both at the same time, one thing you don't want to do is from numpy import * from numarray import * You can do that with one or the other but not both. Are you doing that? Perry Greenfield From stefan at sun.ac.za Wed Aug 30 10:51:52 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 16:51:52 +0200 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: <20060830145152.GT23074@mentat.za.net> On Tue, Aug 29, 2006 at 03:46:45PM -0700, Mathew Yeates wrote: > My head is about to explode. > > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns.
> > The resulting array still has M rows but less than N columns. Anyone? > Couldn't be any harder than Sudoku. I attach one possible solution (allowing for the same column name occurring in different places, i.e. ['a','b','b','a']). I'd be glad for any suggestions on how to clean up the code. Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed... Name: arsum.py Type: text/x-python Size: 572 bytes Desc: not available URL: From fperez.net at gmail.com Wed Aug 30 11:11:43 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 09:11:43 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <20060830120415.GQ23074@mentat.za.net> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> Message-ID: On 8/30/06, Stefan van der Walt wrote: > The current behaviour makes sense, but is maybe not consistent: > > N.array([],dtype=object).size == 1 > N.array([[],[]],dtype=object).size == 2 Yes, including one more term in this check: In [5]: N.array([],dtype=object).size Out[5]: 1 In [6]: N.array([[]],dtype=object).size Out[6]: 1 In [7]: N.array([[],[]],dtype=object).size Out[7]: 2 Intuitively, I'd have expected the answers to be 0,1,2, instead of 1,1,2. Cheers, f From kwgoodman at gmail.com Wed Aug 30 11:53:45 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 30 Aug 2006 08:53:45 -0700 Subject: [Numpy-discussion] amd64 support Message-ID: I plan to build an amd64 box and run debian etch. Are there any big, 64-bit, show-stopping problems in numpy? Any minor annoyances?
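[Stefan's arsum.py attachment above was scrubbed by the archiver. For the record, here is a rough sketch of one way to sum identically-labelled columns — not the original attachment; the function name and layout are my own. Like Stefan's variant, it does not require duplicate labels to be contiguous:]

```python
import numpy as np

def sum_columns_by_label(m, labels):
    """Collapse groups of identically-labelled columns by summing them.

    Keeps the first-seen order of the labels; duplicates need not be
    contiguous.
    """
    seen = []
    cols = []
    for lab in labels:
        if lab not in seen:
            seen.append(lab)
            idx = [i for i, l in enumerate(labels) if l == lab]
            cols.append(m[:, idx].sum(axis=1))
    return np.column_stack(cols)

m = np.arange(12.0).reshape(2, 6)
labels = ['a', 'b', 'b', 'c', 'e', 'e']
print(sum_columns_by_label(m, labels))
```

The result keeps M rows and has one column per distinct label, as Mathew asked for.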
From strawman at astraw.com Wed Aug 30 12:13:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 30 Aug 2006 09:13:16 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <44F5B91C.5090202@astraw.com> LANDRIU David SAp wrote: > Hello, > > I come back to my question : how to use numarray > with the numpy installation ? > > {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy > Here's where you went wrong. You want: setenv PYTHONPATH /usr/local/lib/python2.3/site-packages > {ccali22}~(0)>python > Python 2.3.5 (#2, Oct 17 2005, 17:20:02) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> from numarray import * >>>> > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? > from util import * > File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? > from numpy import geterr > ImportError: No module named numpy > Note that you're actually importing a numarray within numpy's directory structure. That's because of your PYTHONPATH. numpy ships numpy.numarray to provide backwards compatibility. To use it, you must do "import numpy.numarray as numarray" Cheers! Andrew From stefan at sun.ac.za Wed Aug 30 12:41:49 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 18:41:49 +0200 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? 
In-Reply-To: References: Message-ID: <20060830164149.GV23074@mentat.za.net> On Wed, Aug 30, 2006 at 12:04:22PM +0100, Andrew Jaffe wrote: > the current implementation of fftfreq (which is meant to return the > appropriate frequencies for an FFT) does the following: > > k = range(0,(n-1)/2+1)+range(-(n/2),0) > return array(k,'d')/(n*d) > > I have tried this with very long (2**24) arrays, and it is ridiculously > slow. Should this instead use arange (or linspace?) and concatenate > rather than converting the above list? This seems to result in > acceptable performance, but we could also perhaps even pre-allocate the > space. Please try the attached benchmark. > The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to > produce lots of duplicated frequencies, contrary to the actual output of > rfft: > > def rfftfreq(n,d=1.0): > """ rfftfreq(n, d=1.0) -> f > > DFT sample frequencies (for usage with rfft,irfft). > > The returned float array contains the frequency bins in > cycles/unit (with zero at the start) given a window length n and a > sample spacing d: > > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd > > **** None of these should be doubled, right? > > """ > assert isinstance(n,int) > return array(range(1,n+1),dtype=int)/2/float(n*d) Please produce a code snippet to demonstrate the problem. We can then fix the bug and use your code as a unit test. Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed...
Name: fftfreq_bench.py Type: text/x-python Size: 2201 bytes Desc: not available URL: From lfriedri at imtek.de Wed Aug 30 12:39:43 2006 From: lfriedri at imtek.de (Lars Friedrich) Date: Wed, 30 Aug 2006 18:39:43 +0200 Subject: [Numpy-discussion] upcast In-Reply-To: References: Message-ID: <1156955983.6572.13.camel@localhost> Hello, I would like to discuss the following code: #***start*** import numpy as N a = N.array((200), dtype = N.uint8) print (a * 100) / 100 b = N.array((200, 200), dtype = N.uint8) print (b * 100) / 100 #***stop*** The first print statement will print "200" because the uint8-value is cast "upwards", I suppose. The second statement prints "[0 0]". I suppose this is due to overflows during the calculation. How can I tell numpy to do the upcast also in the second case, returning "[200 200]"? I am interested in the fastest solution regarding execution time. In my application I would like to store the result in an Numeric.UInt8-array. Thanks for every comment Lars From Chris.Barker at noaa.gov Wed Aug 30 13:18:49 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 30 Aug 2006 10:18:49 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <44F5B91C.5090202@astraw.com> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> <44F5B91C.5090202@astraw.com> Message-ID: <44F5C879.3040404@noaa.gov> Andrew Straw wrote: >> {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy >> > Here's where you went wrong. You want: > > setenv PYTHONPATH /usr/local/lib/python2.3/site-packages Which you shouldn't need at all. site-packages should be on sys.path by default. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ghalib at sent.com Wed Aug 30 13:20:22 2006 From: ghalib at sent.com (Ghalib Suleiman) Date: Wed, 30 Aug 2006 13:20:22 -0400 Subject: [Numpy-discussion] Interfacing with PIL? Message-ID: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> I'm somewhat new to both libraries...is there any way to create a 2D array of pixel values from an image object from the Python Image Library? I'd like to do some arithmetic on the values. From a.u.r.e.l.i.a.n at gmx.net Wed Aug 30 14:10:59 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 30 Aug 2006 20:10:59 +0200 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> Message-ID: <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> Am Mittwoch, 30. August 2006 19:20 schrieb Ghalib Suleiman: > I'm somewhat new to both libraries...is there any way to create a 2D > array of pixel values from an image object from the Python Image > Library? I'd like to do some arithmetic on the values. Yes. To transport the data: >>> import numpy >>> image = >>> arr = numpy.fromstring(image.tostring(), dtype=numpy.uint8) (alternately use dtype=numpy.uint32 if you want RGBA packed in one number). arr will be a 1d array with length (height * width * b(ytes)pp). Use reshape to get it into a reasonable form. HTH, Johannes From tim.hochberg at ieee.org Wed Aug 30 14:16:58 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 30 Aug 2006 11:16:58 -0700 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <44F5D61A.4080503@ieee.org> Johannes Loehnert wrote: > Am Mittwoch, 30. 
August 2006 19:20 schrieb Ghalib Suleiman: > >> I'm somewhat new to both libraries...is there any way to create a 2D >> array of pixel values from an image object from the Python Image >> Library? I'd like to do some arithmetic on the values. >> > > Yes. > > To transport the data: > >>>> import numpy >>>> image = >>>> arr = numpy.fromstring(image.tostring(), dtype=numpy.uint8) >>>> > > (alternately use dtype=numpy.uint32 if you want RGBA packed in one number). > > arr will be a 1d array with length (height * width * b(ytes)pp). Use reshape > to get it into a reasonable form. > On a related note, does anyone have a good recipe for converting a PIL image to a wxPython image? The last time I tried this, the best I could come up with was: stream = cStringIO.StringIO() img.save(stream, "png") # img is PIL Image stream.seek(0) image = wx.ImageFromStream(stream) # image is a wxPython Image -tim From Chris.Barker at noaa.gov Wed Aug 30 15:15:15 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 30 Aug 2006 12:15:15 -0700 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <44F5D61A.4080503@ieee.org> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> <44F5D61A.4080503@ieee.org> Message-ID: <44F5E3C3.5030300@noaa.gov> Tim Hochberg wrote: > Johannes Loehnert wrote: >>> I'm somewhat new to both libraries...is there any way to create a 2D >>> array of pixel values from an image object from the Python Image >>> Library? I'd like to do some arithmetic on the values. the latest version of PIL (maybe not released yet) supports the array interface, so you may be able to do something like: A = numpy.asarray(PIL_image) see the PIL page: http://effbot.org/zone/pil-changes-116.htm where it says: Changes from release 1.1.5 to 1.1.6 Added "fromarray" function, which takes an object implementing the NumPy array interface and creates a PIL Image from it. (from Travis Oliphant). 
Added NumPy array interface support (__array_interface__) to the Image class (based on code by Travis Oliphant). This allows you to easily convert between PIL image memories and NumPy arrays: import numpy, Image i = Image.open('lena.jpg') a = numpy.asarray(i) # a is readonly i = Image.fromarray(a) > On a related note, does anyone have a good recipe for converting a PIL > image to a wxPython image? Does a PIL image support the buffer protocol? There will be a: wx.ImageFromBuffer() soon, and there is now: wx.Image.SetDataBuffer() if not, I think this will work: I = wx.EmptyImage(width, height) DataString = PIL_image.tostring() I.SetDataBuffer(DataString) This will only work if the PIL image is a 24 bit RGB image, of course. Just make sure to keep DataString around, so that the data buffer doesn't get deleted. wx.ImageFromBuffer() will do that for you, but it's not available until 2.7 comes out. Ideally, both PIL and wx will support the array interface, and we can just do: I = wx.ImageFromArray(PIL_Image) and not get any data copying as well. Also, Robin has just added some methods to directly manipulate wxBitmaps, so you can use a numpy array as the data buffer for a wx.Bitmap. This can help prevent a lot of data copies. see a test here: http://cvs.wxwidgets.org/viewcvs.cgi/wxWidgets/wxPython/demo/RawBitmapAccess.py?rev=1.3&content-type=text/vnd.viewcvs-markup -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From thilo.wehrmann at gmx.net Wed Aug 30 17:14:21 2006 From: thilo.wehrmann at gmx.net (Thilo Wehrmann) Date: Wed, 30 Aug 2006 23:14:21 +0200 Subject: [Numpy-discussion] (no subject) Message-ID: <20060830211421.193280@gmx.net> Hi, currently I'm trying to compile the latest numpy version (1.0b4) under an SGI IRIX 6.5 environment. I'm using the gcc 3.4.6 compiler and python 2.4.3 (self compiled).
During the compilation of numpy.core I get a nasty error message: ... copying build/src.irix64-6.5-2.4/numpy/__config__.py -> build/lib.irix64-6.5-2.4/numpy copying build/src.irix64-6.5-2.4/numpy/distutils/__config__.py -> build/lib.irix64-6.5-2.4/numpy/distutils running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize MipsFCompiler customize MipsFCompiler customize MipsFCompiler using build_ext building 'numpy.core.umath' extension compiling C sources C compiler: gcc -fno-strict-aliasing -DNDEBUG -D_FILE_OFFSET_BITS=64 -DHAVE_LARGEFILE_SUPPORT -fmessage-length=0 -Wall -O2 compile options: '-Ibuild/src.irix64-6.5-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.irix64-6.5-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.4 -c' gcc: build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.c numpy/core/src/umathmodule.c.src: In function `nc_sqrtf': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `hypotf' numpy/core/src/umathmodule.c.src: In function `nc_sqrtl': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `fabsl' ... ... lots of math functions ... ... numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpl' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpl' In file included from numpy/core/src/umathmodule.c.src:2011: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c: At top level: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: `acosl' undeclared here (not in a function) build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: (near initialization for `arccos_data[2]') ... ... 
lots of math functions ... ... build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: (near initialization for `tanh_data[2]') numpy/core/include/numpy/ufuncobject.h:328: warning: 'generate_overflow_error' defined but not used numpy/core/src/umathmodule.c.src: In function `nc_sqrtf': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `hypotf' ... ... lots of math functions ... ... numpy/core/src/umathmodule.c.src: In function `FLOAT_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpf' numpy/core/src/umathmodule.c.src: In function `FLOAT_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpf' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpl' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpl' In file included from numpy/core/src/umathmodule.c.src:2011: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c: At top level: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: `acosl' undeclared here (not in a function) build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: initializer element is not constant ... ... lots of math functions ... ... 
build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: (near initialization for `tanh_data[2]') numpy/core/include/numpy/ufuncobject.h:328: warning: 'generate_overflow_error' defined but not used error: Command "gcc -fno-strict-aliasing -DNDEBUG -D_FILE_OFFSET_BITS=64 -DHAVE_LARGEFILE_SUPPORT -fmessage-length=0 -Wall -O2 -Ibuild/src.irix64-6.5-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.irix64-6.5-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.4 -c build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.c -o build/temp.irix64-6.5-2.4/build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.o" failed with exit status 1 Can somebody explain to me what's going wrong? It seems there are some header files missing. thanks, thilo -- The GMX SmartSurfer helps you save up to 70% of your online costs! Ideal for modem and ISDN: http://www.gmx.net/de/go/smartsurfer From wbaxter at gmail.com Wed Aug 30 17:18:34 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 31 Aug 2006 06:18:34 +0900 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F58536.7030806@gmx.net> References: <44F4C3D5.80600@jpl.nasa.gov> <44F58536.7030806@gmx.net> Message-ID: On 8/30/06, Sven Schreiber wrote: > Mathew Yeates schrieb: > out will be a numpy matrix, use asarray if you don't like that. But here > it's really nice to work with matrices, because otherwise .sum() will > give you a 1-d array sometimes, and that will suddenly look like a row > to hstack (instead of a nice column vector) and wouldn't work -- > that's why matrices are so great and everybody should be using them ;-) column_stack would work perfectly in place of hstack there if it only didn't have the silly behavior of transposing arguments that already are 2-d.
For reminders, here's the replacement implementation of column_stack I proposed on July 21: def column_stack(tup): def transpose_1d(array): if array.ndim<2: return _nx.transpose(atleast_2d(array)) else: return array arrays = map(transpose_1d,map(atleast_1d,tup)) return _nx.concatenate(arrays,1) This was in a big ticket I submitted about overhauling r_,c_,etc, which was largely ignored. Maybe I should resubmit this by itself... --bb From fperez.net at gmail.com Wed Aug 30 17:57:16 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 15:57:16 -0600 Subject: [Numpy-discussion] Changing Fatal error into ImportError? Message-ID: Hi all, this was mentioned in the past, but I think it fell through the cracks: Python 2.3.4 (#1, Mar 10 2006, 06:12:09) [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import mwadap Overwriting info= from scipy.misc (was from numpy.lib.utils) RuntimeError: module compiled against version 90909 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. I really think that this should raise ImportError, but NOT kill the python interpreter. If this happens in the middle of a long-running interactive session, you'll lose all of your current state/work, where a simple ImportError would have been enough to tell you that this particular module needed recompilation. FatalError should be reserved for situations where the internal state of the Python VM itself can not realistically be expected to be sane (corruption, complete memory exhaustion for even internal allocations, etc.) But killing the user's session for a failed import is a bit much, IMHO. Cheers, f From robert.kern at gmail.com Wed Aug 30 18:11:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Aug 2006 17:11:21 -0500 Subject: [Numpy-discussion] Changing Fatal error into ImportError? 
In-Reply-To: References: Message-ID: Fernando Perez wrote: > Hi all, > > this was mentioned in the past, but I think it fell through the cracks: > > Python 2.3.4 (#1, Mar 10 2006, 06:12:09) > [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import mwadap > Overwriting info= from scipy.misc (was > from numpy.lib.utils) > RuntimeError: module compiled against version 90909 of C-API but this > version of numpy is 1000002 > Fatal Python error: numpy.core.multiarray failed to import... exiting. > > I really think that this should raise ImportError, but NOT kill the > python interpreter. If this happens in the middle of a long-running > interactive session, you'll lose all of your current state/work, where > a simple ImportError would have been enough to tell you that this > particular module needed recompilation. > > FatalError should be reserved for situations where the internal state > of the Python VM itself can not realistically be expected to be sane > (corruption, complete memory exhaustion for even internal allocations, > etc.) But killing the user's session for a failed import is a bit > much, IMHO. I don't see where we're calling Py_FatalError. The problem might be in Python or mwadap. Indeed, import_array() raises a PyExc_ImportError. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Wed Aug 30 18:36:19 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 16:36:19 -0600 Subject: [Numpy-discussion] Changing Fatal error into ImportError? In-Reply-To: References: Message-ID: On 8/30/06, Robert Kern wrote: > I don't see where we're calling Py_FatalError. The problem might be in Python or > mwadap. Indeed, import_array() raises a PyExc_ImportError. 
Sorry for the noise: it looks like this was already fixed: http://projects.scipy.org/scipy/numpy/changeset/3044 since the code causing problems had been built /before/ 3044, we got the FatalError. But with modules built post-3044, it's all good (I artificially hacked the number to force the error): In [1]: import mwadap Overwriting info= from scipy.misc (was from numpy.lib.utils) --------------------------------------------------------------------------- exceptions.RuntimeError Traceback (most recent call last) RuntimeError: module compiled against version 1000001 of C-API but this version of numpy is 1000002 --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/fperez/research/code/mwadap-merge/mwadap/test/ /home/fperez/usr/lib/python2.3/site-packages/mwadap/__init__.py 9 glob,loc = globals(),locals() 10 for name in __all__: ---> 11 __import__(name,glob,loc,[]) 12 13 # Namespace cleanup /home/fperez/usr/lib/python2.3/site-packages/mwadap/Operator.py 18 19 # Our own packages ---> 20 import mwrep 21 from mwadap import mwqmfl, utils, Function, flinalg 22 ImportError: numpy.core.multiarray failed to import In [2]: Cheers, f From charlesr.harris at gmail.com Wed Aug 30 19:12:14 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 30 Aug 2006 17:12:14 -0600 Subject: [Numpy-discussion] upcast In-Reply-To: <1156955983.6572.13.camel@localhost> References: <1156955983.6572.13.camel@localhost> Message-ID: On 8/30/06, Lars Friedrich wrote: > > Hello, > > I would like to discuss the following code: > > #***start*** > import numpy as N > > a = N.array((200), dtype = N.uint8) > print (a * 100) / 100 This is actually a scalar, i.e., a zero dimensional array. N.uint8(200) would give you the same thing, because (200) is a number, not a tuple like (200,). 
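[Chuck's point about (200) being a number while (200,) is a tuple is easy to verify directly; a small sketch, with floor division written as // for modern Python:]

```python
import numpy as np

a = np.array(200, dtype=np.uint8)     # (200) is just the number 200 -> 0-d array
b = np.array((200,), dtype=np.uint8)  # (200,) is a one-element tuple -> 1-d array
print(a.ndim, a.shape)                # 0 ()
print(b.ndim, b.shape)                # 1 (1,)

# Upcasting explicitly before multiplying avoids the mod-256 wrap-around:
c = (b.astype(np.int64) * 100) // 100
print(c)                              # [200]
```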
In any case In [44]:a = array([200], dtype=uint8) In [45]:a*100 Out[45]:array([32], dtype=uint8) In [46]:uint8(100)*100 Out[46]:10000 i.e. , the array arithmetic is carried out in mod 256 because Numpy keeps the array type when multiplying by scalars. On the other hand, when multiplying a *scalar* by a number, the lower precision scalars are upconverted in the conventional way. Numpy makes the choices it does for space efficiency. If you want to work in uint8 you don't have to recast every time you multiply by a small integer. I suppose one could demand using uint8(1) instead of 1, but the latter is more convenient. Integers can be tricky once the ordinary precision is exceeded and modular arithmetic takes over, it just happens more easily for uint8 than for uint32. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Aug 30 19:24:34 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 30 Aug 2006 17:24:34 -0600 Subject: [Numpy-discussion] upcast In-Reply-To: <1156955983.6572.13.camel@localhost> References: <1156955983.6572.13.camel@localhost> Message-ID: On 8/30/06, Lars Friedrich wrote: > > Hello, > > I would like to discuss the following code: > > #***start*** > import numpy as N > > a = N.array((200), dtype = N.uint8) > print (a * 100) / 100 > > b = N.array((200, 200), dtype = N.uint8) > print (b * 100) / 100 > #***stop*** > > The first print statement will print "200" because the uint8-value is > cast "upwards", I suppose. The second statement prints "[0 0]". I > suppose this is due to overflows during the calculation. > > How can I tell numpy to do the upcast also in the second case, returning > "[200 200]"? I am interested in the fastest solution regarding execution > time. In my application I would like to store the result in an > Numeric.UInt8-array. 
> > Thanks for every comment To answer the original question, you need to use a higher precision array or explicitly cast it to higher precision. In [49]:(a.astype(int)*100)/100 Out[49]:array([200]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Thu Aug 31 01:02:35 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 30 Aug 2006 22:02:35 -0700 Subject: [Numpy-discussion] amd64 support In-Reply-To: References: Message-ID: <44F66D6B.5030506@msg.ucsf.edu> Keith Goodman wrote: > I plan to build an amd64 box and run debian etch. Are there any big, > 64-bit, show-stopping problems in numpy? Any minor annoyances? > I am not aware of any - it works fine for us on 32bit and 64bit with debian sarge and etch. -Sebastian Haase From haase at msg.ucsf.edu Thu Aug 31 01:11:05 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 30 Aug 2006 22:11:05 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <44F5B91C.5090202@astraw.com> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> <44F5B91C.5090202@astraw.com> Message-ID: <44F66F69.1010305@msg.ucsf.edu> Andrew Straw wrote: > LANDRIU David SAp wrote: >> Hello, >> >> I come back to my question : how to use numarray >> with the numpy installation ? >> >> {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy >> > Here's where you went wrong. You want: > > setenv PYTHONPATH /usr/local/lib/python2.3/site-packages > >> {ccali22}~(0)>python >> Python 2.3.5 (#2, Oct 17 2005, 17:20:02) >> [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>>>> from numarray import * >>>>> >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ?
>> from util import * >> File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? >> from numpy import geterr >> ImportError: No module named numpy >> > > Note that you're actually importing a numarray within numpy's directory > structure. That's because of your PYTHONPATH. numpy ships numpy.numarray > to provide backwards compatibility. To use it, you must do "import > numpy.numarray as numarray" > Just to explain -- there is only a numarray directory inside numpy to provide some special treatment for people making the transition from numarray to numpy - meaning: they can do something like from numpy import numarray and get a "numpy(!) version" that behaves more like numarray than the straight numpy ... Similar for "from numarray import oldnumeric as Numeric" (for people coming from Numeric) Yes - it is actually confusing, but that's the baggage of having 2 (now 3) numerical Python packages in human history. The future will be much brighter - forget all of the above, and just use import numpy (I like "import numpy as N" for less typing - others prefer even "from numpy import *" ) Hope that helps, - Sebastian Haase From lfriedri at imtek.de Thu Aug 31 01:25:40 2006 From: lfriedri at imtek.de (Lars Friedrich) Date: Thu, 31 Aug 2006 07:25:40 +0200 Subject: [Numpy-discussion] upcast In-Reply-To: References: <1156955983.6572.13.camel@localhost> Message-ID: <1157001940.6670.4.camel@gdur.breisach> > To answer the original question, you need to use a higher precision > array or explicitly cast it to higher precision. > > In [49]:(a.astype(int)*100)/100 > Out[49]:array([200]) Thank you. This is what I wanted to know.
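The upcast thread above can be condensed into a plain-Python sketch of the two behaviours — the modulus here stands in for uint8's fixed-width wrap-around (a model of what the array does, not NumPy's implementation):

```python
def uint8_mul(values, k):
    """Elementwise multiply the way a uint8 array does: results wrap mod 256."""
    return [(v * k) % 256 for v in values]

b = [200, 200]
# like (b * 100) / 100 on a uint8 array: 20000 wraps to 32, and 32 // 100 == 0
print([x // 100 for x in uint8_mul(b, 100)])   # [0, 0]
# like (b.astype(int) * 100) / 100: upcast first, so nothing wraps
print([(v * 100) // 100 for v in b])           # [200, 200]
```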
Lars From torgil.svensson at gmail.com Thu Aug 31 02:15:36 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 08:15:36 +0200 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 Message-ID: I'm using windows datetimes (100nano-seconds since 0001,1,1) as time in a numpy array and was hit by this behaviour. >>> numpy.__version__ '1.0b4' >>> a=numpy.array([632925394330000000L],numpy.uint64) >>> t=a[0] >>> t 632925394330000000L >>> type(t) >>> t+1 6.3292539433e+017 >>> type(t+1) >>> t==(t+1) True I was trying to set t larger than any time in an array. Is there any reason for the scalar to upcast in this case? //Torgil From landriu at discovery.saclay.cea.fr Thu Aug 31 06:19:45 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Thu, 31 Aug 2006 12:19:45 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package Message-ID: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> Hello, I learned you answered me, but I did not get your message : can you send it to me again ? Thanks , David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From lists.steve at arachnedesign.net Thu Aug 31 09:23:51 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Thu, 31 Aug 2006 09:23:51 -0400 Subject: [Numpy-discussion] Use of numarray from numpy package In-Reply-To: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> References: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> Message-ID: <32ED73BE-DF47-4C4E-B6A7-3A79D72D0B25@arachnedesign.net> On Aug 31, 2006, at 6:19 AM, LANDRIU David SAp wrote: > I learned you answered me, but I did not get > your message : can you send it to me again ? Does this help? http://sourceforge.net/mailarchive/forum.php? 
thread_id=30384097&forum_id=4890 -steve From oliphant.travis at ieee.org Thu Aug 31 09:40:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 31 Aug 2006 07:40:28 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> Message-ID: <44F6E6CC.70206@ieee.org> Fernando Perez wrote: > On 8/30/06, Stefan van der Walt wrote: > > >> The current behaviour makes sense, but is maybe not consistent: >> >> N.array([],dtype=object).size == 1 >> N.array([[],[]],dtype=object).size == 2 >> > > Yes, including one more term in this check: > > In [5]: N.array([],dtype=object).size > Out[5]: 1 > > In [6]: N.array([[]],dtype=object).size > Out[6]: 1 > > In [7]: N.array([[],[]],dtype=object).size > Out[7]: 2 > > Intuitively, I'd have expected the answers to be 0,1,2, instead of 1,1,2. > > What about N.array(3).size N.array([3]).size N.array([3,3]).size Essentially, the [] is being treated as an object when you explicitly ask for an object array in exactly the same way as 3 is being treated as a number in the default case. It's just that '[' ']' is "also" being used as the dimension delimiter and thus the confusion. It is consistent. It's a corner case, and I have no problem fixing the special-case code running when dtype=object so that array([], dtype=object) returns an empty array, if that is the consensus. -Travis > Cheers, > > f > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 31 09:45:46 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 31 Aug 2006 07:45:46 -0600 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 In-Reply-To: References: Message-ID: <44F6E80A.90508@ieee.org> Torgil Svensson wrote: > I'm using windows datetimes (100nano-seconds since 0001,1,1) as time > in a numpy array and was hit by this behaviour. > > >>>> numpy.__version__ >>>> > '1.0b4' > >>>> a=numpy.array([632925394330000000L],numpy.uint64) >>>> t=a[0] >>>> t >>>> > 632925394330000000L > >>>> type(t) >>>> > > >>>> t+1 >>>> > 6.3292539433e+017 > >>>> type(t+1) >>>> > > >>>> t==(t+1) >>>> > True > > I was trying to set t larger than any time in an array. Is there any > reason for the scalar to upcast in this case? > Yes, because you are adding a signed scalar to an unsigned scalar and a float64 is the only thing that can handle it (well actually it should be the long double scalar but we've made a special case here because long doubles are not that common). Add an unsigned scalar t+numpy.uint64(1) to get what you want. -Travis > //Torgil > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tom.denniston at alum.dartmouth.org Thu Aug 31 09:47:31 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 08:47:31 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 Message-ID: In version 0.9.6 one used to be able to do this: In [4]: numpy.__version__ Out[4]: '0.9.6' In [6]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], dtype=object).shape Out[6]: (2, 3) In beta 1 numpy.PyObject no longer exists. I was trying to get the same behavior with dtype=object but it doesn't work: In [7]: numpy.__version__ Out[7]: '1.0b1' In [8]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], dtype=object).shape Out[8]: (2,) Is this an intentional change? From jonathan.taylor at utoronto.ca Thu Aug 31 10:19:19 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 31 Aug 2006 10:19:19 -0400 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 Message-ID: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> When trying to install 1.0b4 I had trouble getting it to detect my installed atlas. This was because the shipped site.cfg had: [atlas] library_dirs = /usr/lib/atlas/3dnow/ atlas_libs = lapack, blas but I had to change 3dnow to sse2 due to my current state of pentiumness. In any case it should probably look in all the possible locations instead of just AMD's location. Cheers. Jon.
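Returning to Torgil's uint64 thread above: besides the promotion rule Travis describes, the reason t == (t + 1) comes out True is float64 precision — 632925394330000000 needs about 60 significant bits, while a float64 mantissa holds 53. A plain-Python sketch (Python floats are the same IEEE-754 doubles):

```python
t = 632925394330000000   # ~6.3e17; exact representation needs ~60 bits

# As Python ints (arbitrary precision) the two values differ...
print(t + 1 > t)                 # True
# ...but as 53-bit doubles they round to the same value, which is why
# the upcast comparison t == (t + 1) returned True in the session above.
print(float(t) == float(t + 1))  # True
```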
From dd55 at cornell.edu Thu Aug 31 09:57:44 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 31 Aug 2006 09:57:44 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <200608310957.44947.dd55@cornell.edu> On Tuesday 29 August 2006 19:24, Fernando Perez wrote: > On 8/29/06, Travis Oliphant wrote: > > Hi all, > > > > Classes start for me next Tuesday, and I'm teaching a class for which I > > will be using NumPy / SciPy extensively. I need to have a release of > > these two (and hopefully matplotlib) that work with each other. > > > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > > (probably Monday), and also get a release of SciPy out as well. At that > > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). > > What's the status of these 'overwriting' messages? > > planck[/tmp]> python -c 'import scipy;scipy.test()' > Overwriting info= from scipy.misc (was > from numpy.lib.utils) > Overwriting fft= from scipy.fftpack.basic > (was '/home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc' >> from > /home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc) > ... > > I was under the impression you'd decided to quiet them out, but they > seem to be making a comeback. Will these messages be included in NumPy-1.0? From Christophe.Blondeau at onera.fr Thu Aug 31 10:15:47 2006 From: Christophe.Blondeau at onera.fr (Christophe-Blondeau) Date: Thu, 31 Aug 2006 16:15:47 +0200 Subject: [Numpy-discussion] numpy/f2py module import segfault on HP-UX11.11 Message-ID: <44F6EF13.6030905@onera.fr> An HTML attachment was scrubbed... 
URL: From torgil.svensson at gmail.com Thu Aug 31 10:57:27 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 16:57:27 +0200 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 In-Reply-To: <44F6E80A.90508@ieee.org> References: <44F6E80A.90508@ieee.org> Message-ID: > Yes, because you are adding a signed scalar to an unsigned scalar and a > float64 is the only thing that can handle it > > t+numpy.uint64(1) Thanks, this makes sense. This is a good thing to keep in the back of the head. //Torgil From fperez.net at gmail.com Thu Aug 31 11:08:36 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 31 Aug 2006 09:08:36 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F6E6CC.70206@ieee.org> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: On 8/31/06, Travis Oliphant wrote: > What about > > N.array(3).size > > N.array([3]).size > > N.array([3,3]).size > > Essentially, the [] is being treated as an object when you explicitly > ask for an object array in exactly the same way as 3 is being treated as > a number in the default case. It's just that '[' ']' is "also" being > used as the dimension delimiter and thus the confusion. > > It is consistent. It's a corner case, and I have no problem fixing the > special-case code running when dtype=object so that array([], > dtype=object) returns an empty array, if that is the consensus.
I wasn't really complaining: these are corner cases I've never seen in real use, so I'm not really sure how critical it is to worry about them. Though I could see code which does automatic size/shape checks tripping on some of them. The shape tuples shed a bit of light on what's going on for the surprised (like myself): In [8]: N.array(3).shape Out[8]: () In [9]: N.array([3]).shape Out[9]: (1,) In [10]: N.array([3,3]).shape Out[10]: (2,) In [11]: N.array([]).shape Out[11]: (0,) In [12]: N.array([[]]).shape Out[12]: (1, 0) In [13]: N.array([[],[]]).shape Out[13]: (2, 0) I won't really vote for any changes one way or another, as far as I'm concerned it's one of those 'learn the library' things. I do realize that the near-ambiguity between '[]' as an empty object and '[]' as the syntactic delimiter for a container makes this case a bit of a gotcha. I guess my only remaining question is: what is the difference between outputs #8 and #11 above? Is an empty shape tuple == array scalar, while a (0,) shape indicates a one-dimensional array with no elements? If this interpretation is correct, what is the usage of the latter kind of object, given how it can't even be indexed? In [15]: N.array([])[0] --------------------------------------------------------------------------- exceptions.IndexError Traceback (most recent call last) /home/fperez/research/code/mjmdim/pycode/ IndexError: index out of bounds And is this really expected? In [18]: N.array([]).any() Out[18]: False In [19]: N.array([]).all() Out[19]: True It's a bit funny to have an array for which 'no elements are true' (any==false), yet 'all are true' (all==true), isn't it? 
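The any()/all() behaviour Fernando asks about is standard vacuous truth, and Python's own built-ins make exactly the same choice over empty sequences:

```python
# "there exists an element that is true" over nothing: False.
print(any([]))   # False
# "every element is true" over nothing: vacuously True.
print(all([]))   # True
```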
Regards, f From charlesr.harris at gmail.com Thu Aug 31 11:33:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 09:33:25 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: On 8/31/06, Fernando Perez wrote: > > On 8/31/06, Travis Oliphant wrote: > > > What about > > > > N.array(3).size > > > > N.array([3]).size > > > > N.array([3,3]).size > > > > Essentially, the [] is being treated as an object when you explicitly > > ask for an object array in exactly the same way as 3 is being treated as > > a number in the default case. It's just that '[' ']' is "also" being > > used as the dimension delimiter and thus the confusion. > > > > It is consistent. It's a corner case, and I have no problem fixing the > > special-case code running when dtype=object so that array([], > > dtype=object) returns an empty array, if that is the consensus. > > I wasn't really complaining: these are corner cases I've never seen in > real use, so I'm not really sure how critical it is to worry about > them. Though I could see code which does automatic size/shape checks > tripping on some of them. The shape tuples shed a bit of light on > what's going on for the surprised (like myself): > > In [8]: N.array(3).shape > Out[8]: () > > In [9]: N.array([3]).shape > Out[9]: (1,) > > In [10]: N.array([3,3]).shape > Out[10]: (2,) > > In [11]: N.array([]).shape > Out[11]: (0,) > > In [12]: N.array([[]]).shape > Out[12]: (1, 0) > > In [13]: N.array([[],[]]).shape > Out[13]: (2, 0) > > > I won't really vote for any changes one way or another, as far as I'm > concerned it's one of those 'learn the library' things. I do realize > that the near-ambiguity between '[]' as an empty object and '[]' as > the syntactic delimiter for a container makes this case a bit of a > gotcha. 
> > I guess my only remaining question is: what is the difference between > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > while a (0,) shape indicates a one-dimensional array with no elements? > If this interpretation is correct, what is the usage of the latter > kind of object, given how it can't even be indexed? > > In [15]: N.array([])[0] > > --------------------------------------------------------------------------- > exceptions.IndexError Traceback (most > recent call last) > > /home/fperez/research/code/mjmdim/pycode/ > > IndexError: index out of bounds > > > And is this really expected? > > In [18]: N.array([]).any() > Out[18]: False This could be interpreted as : exists x, x element of array, s.t. x is true. In [19]: N.array([]).all() > Out[19]: True Seems right: for all x, x element of array, x is true. It's a bit funny to have an array for which 'no elements are true' > (any==false), yet 'all are true' (all==true), isn't it? Fun with empty sets! The question is, is a zero dimensional array an empty container or does it contain its value. The numpy choice of treating zero dimensional arrays as both empty containers and scalar values makes the determination a bit ambiguous although it is consistent with the indexing convention. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From humufr at yahoo.fr Thu Aug 31 11:43:59 2006 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Thu, 31 Aug 2006 11:43:59 -0400 Subject: [Numpy-discussion] numpy and dtype Message-ID: <200608311143.59711.humufr@yahoo.fr> Hi, sorry to bother you with probably plenty of stupid question but I would like to clarify my mind with dtype. I have a problem to view a recarray, I'm not sure but I suspect a bug or at least a problem I have an array with some data, the array is very big but I have no problem with numpy. 
In [44]: data_end Out[44]: array([[ 2.66000000e+02, 5.16300000e+04, 1.00000000e+00, ..., -1.04130435e+00, 1.47304565e+02, 4.27402449e+00], [ 2.66000000e+02, 5.16300000e+04, 2.00000000e+00, ..., -6.52190626e-01, 1.64214981e+02, 1.58334379e+01], [ 2.66000000e+02, 5.16300000e+04, 4.00000000e+00, ..., -7.65136838e-01, 1.33340195e+02, 9.84033298e+00], ..., [ 9.78000000e+02, 5.24310000e+04, 6.32000000e+02, ..., 3.06083832e+01, 6.71210251e+01, 1.18813887e+01], [ 9.78000000e+02, 5.24310000e+04, 6.36000000e+02, ..., 3.05993423e+01, 1.10403000e+02, 5.81539488e+00], [ 9.78000000e+02, 5.24310000e+04, 6.40000000e+02, ..., 3.05382938e+01, 1.26916304e+01, 3.25683937e+01]]) In [45]: data_end.shape Out[45]: (567486, 7) In [46]: data_end.dtype Out[46]: dtype('i2','>i4','>i2','>f4','>f4','>f4','>f4']}) In [49]: b = numpy.rec.fromarrays(data_end.transpose(),type_descr) In [50]: b[:1] Out[50]: recarray([ (266, 51630, 1, 146.71420288085938, -1.041304349899292, 147.3045654296875, 4.274024486541748)], dtype=[('PLATEID', '>i2'), ('MJD', '>i4'), ('FIBERID', '>i2'), ('RA', '>f4'), ('DEC', '>f4'), ('V_DISP', '>f4'), ('V_DISP_ERR', '>f4')]) In [51]: b[1] Out[51]: (266, 51630, 2, 146.74412536621094, -0.65219062566757202, 164.21498107910156, 15.833437919616699) but I obtain an error when I want to print the recarray b (it's working for smallest array): In [54]: b[:10] Out[54]: recarray([ (266, 51630, 1, 146.71420288085938, -1.041304349899292, 147.3045654296875, 4.274024486541748), (266, 51630, 2, 146.74412536621094, -0.65219062566757202, 164.21498107910156, 15.833437919616699), (266, 51630, 4, 146.62857055664062, -0.76513683795928955, 133.34019470214844, 9.8403329849243164), (266, 51630, 6, 146.63166809082031, -0.98827779293060303, 146.91035461425781, 30.08709716796875), (266, 51630, 7, 146.91944885253906, -0.99049174785614014, 152.96893310546875, 12.429832458496094), (266, 51630, 9, 146.76339721679688, -0.81043314933776855, 347.72918701171875, 41.387767791748047), (266, 51630, 10, 
146.62281799316406, -0.9513852596282959, 162.53567504882812, 24.676788330078125), (266, 51630, 11, 146.93409729003906, -0.67040395736694336, 266.56011962890625, 10.875675201416016), (266, 51630, 12, 146.96389770507812, -0.54500257968902588, 92.040328979492188, 18.999214172363281), (266, 51630, 13, 146.9635009765625, -0.75935173034667969, 72.828048706054688, 13.028598785400391)], dtype=[('PLATEID', '>i2'), ('MJD', '>i4'), ('FIBERID', '>i2'), ('RA', '>f4'), ('DEC', '>f4'), ('V_DISP', '>f4'), ('V_DISP_ERR', '>f4')]) So I would like to know if it's normal. And another question is it possile to do, in theory, something like: b = numpy.array(data_end,dtype=type_descr) or all individual array element must have the same dtype? To replace the context, I have a big fits table, I want to use only some columns from the table so I did: table = pyfits.getdata('gal_info_dr4_v5_1b.fit') #pyfits can't read, at least now the gzip file #the file is a fits table file so we look in the pyfits doc to read it! 
fields = ['PLATEID', 'MJD', 'FIBERID', 'RA', 'DEC','V_DISP','V_DISP_ERR'] type_descr = numpy.dtype({'names':fields,'formats': [' /home/gruel/usr/lib/python2.4/site-packages/IPython/Prompts.py in __call__(self, arg) 514 515 # and now call a possibly user-defined print mechanism --> 516 manipulated_val = self.display(arg) 517 518 # user display hooks can change the variable to be stored in /home/gruel/usr/lib/python2.4/site-packages/IPython/Prompts.py in _display(self, arg) 538 """ 539 --> 540 return self.shell.hooks.result_display(arg) 541 542 # Assign the default display method: /home/gruel/usr/lib/python2.4/site-packages/IPython/hooks.py in __call__(self, *args, **kw) 132 #print "prio",prio,"cmd",cmd #dbg 133 try: --> 134 ret = cmd(*args, **kw) 135 return ret 136 except ipapi.TryNext, exc: /home/gruel/usr/lib/python2.4/site-packages/IPython/hooks.py in result_display(self, arg) 153 154 if self.rc.pprint: --> 155 out = pformat(arg) 156 if '\n' in out: 157 # So that multi-line strings line up with the left column of /usr/lib/python2.4/pprint.py in pformat(self, object) 108 def pformat(self, object): 109 sio = _StringIO() --> 110 self._format(object, sio, 0, 0, {}, 0) 111 return sio.getvalue() 112 /usr/lib/python2.4/pprint.py in _format(self, object, stream, indent, allowance, context, level) 126 self._readable = False 127 return --> 128 rep = self._repr(object, context, level - 1) 129 typ = _type(object) 130 sepLines = _len(rep) > (self._width - 1 - indent - allowance) /usr/lib/python2.4/pprint.py in _repr(self, object, context, level) 192 def _repr(self, object, context, level): 193 repr, readable, recursive = self.format(object, context.copy(), --> 194 self._depth, level) 195 if not readable: 196 self._readable = False /usr/lib/python2.4/pprint.py in format(self, object, context, maxlevels, level) 204 and whether the object represents a recursive construct. 
205 """ --> 206 return _safe_repr(object, context, maxlevels, level) 207 208 /usr/lib/python2.4/pprint.py in _safe_repr(object, context, maxlevels, level) 289 return format % _commajoin(components), readable, recursive 290 --> 291 rep = repr(object) 292 return rep, (rep and not rep.startswith('<')), False 293 /home/gruel/usr/lib/python2.4/site-packages/numpy/core/numeric.py in array_repr(arr, max_line_width, precision, suppress_small) 389 if arr.size > 0 or arr.shape==(0,): 390 lst = array2string(arr, max_line_width, precision, suppress_small, --> 391 ', ', "array(") 392 else: # show zero-length shape unless it is (0,) 393 lst = "[], shape=%s" % (repr(arr.shape),) /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in array2string(a, max_line_width, precision, suppress_small, separator, prefix, style) 202 else: 203 lst = _array2string(a, max_line_width, precision, suppress_small, --> 204 separator, prefix) 205 return lst 206 /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in _array2string(a, max_line_width, precision, suppress_small, separator, prefix) 137 if a.size > _summaryThreshold: 138 summary_insert = "..., " --> 139 data = _leading_trailing(a) 140 else: 141 summary_insert = "" /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in _leading_trailing(a) 108 if a.ndim == 1: 109 if len(a) > 2*_summaryEdgeItems: --> 110 b = _gen.concatenate((a[:_summaryEdgeItems], 111 a[-_summaryEdgeItems:])) 112 else: TypeError: expected a readable buffer object Out[53]: From charlesr.harris at gmail.com Thu Aug 31 11:44:14 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 09:44:14 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > In version 0.9.6 one used to be able to do this: > > In [4]: numpy.__version__ > Out[4]: '0.9.6' > > > In [6]: numpy.array([numpy.array([4,5,6]), 
numpy.array([1,2,3])], > dtype=object).shape > Out[6]: (2, 3) > > > In beta 1 numpy.PyObject no longer exists. I was trying to get the > same behavior with dtype=object but it doesn't work: > > In [7]: numpy.__version__ > Out[7]: '1.0b1' > > In [8]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], > dtype=object).shape > Out[8]: (2,) The latter looks more correct, in that it produces an array of objects. To get the previous behaviour there is the function vstack: In [6]: a = array([1,2,3]) In [7]: b = array([4,5,6]) In [8]: vstack([a,b]) Out[8]: array([[1, 2, 3], [4, 5, 6]]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 11:59:36 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 10:59:36 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: For this simple example yes, but one of the nice things about the array constructors is that they know that lists, tuples and arrays are just sequences and any combination of them is valid numpy input. So for instance a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a 3d array. A list of 1d arrays yields a 2d array. This was the case consistently across all dtypes. Now it is the case across all of them except for dtype=object, which has this unusual behavior.
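The nesting rule Tom describes — lists, tuples, and arrays all count as sequences, and uniform nesting depth determines dimensionality — can be sketched with a toy shape-inference function (an illustration of the rule, not NumPy's actual constructor code):

```python
def guess_shape(seq):
    """Infer the shape the pre-1.0 constructor derived from uniform nesting."""
    if not isinstance(seq, (list, tuple)):
        return ()                        # a scalar contributes no dimensions
    inner = [guess_shape(s) for s in seq]
    if inner and all(i == inner[0] for i in inner):
        return (len(seq),) + inner[0]    # uniform children extend the shape
    return (len(seq),)                   # ragged or empty: stop recursing

print(guess_shape([(1, 2), (3, 4)]))          # (2, 2): list of tuples -> 2d
print(guess_shape([[1, 2], [3, 4], [5, 6]]))  # (3, 2): list of lists  -> 2d
print(guess_shape([1, 2, 3]))                 # (3,):   flat list      -> 1d
```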
I agree that vstack is a better choice when you know you have a list of arrays but it is less useful when you don't know and you can't force a type in the vstack function so there is no longer an equivalent to the dtype=object behavior: In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], dtype=object) Out[7]: array([[1, 2, 3], [4, 5, 6]], dtype=object) In [8]: numpy.vstack([numpy.array([1,2,3]), numpy.array([4,5,6])], dtype=object) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) TypeError: vstack() got an unexpected keyword argument 'dtype' On 8/31/06, Charles R Harris wrote: > On 8/31/06, Tom Denniston > wrote: > > > In version 0.9.6 one used to be able to do this: > > > > In [4]: numpy.__version__ > > Out[4]: '0.9.6' > > > > > > In [6]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], > > dtype=object).shape > > Out[6]: (2, 3) > > > > > > In beta 1 numpy.PyObject no longer exists. I was trying to get the > > same behavior with dtype=object but it doesn't work: > > > > In [7]: numpy.__version__ > > Out[7]: '1.0b1' > > > > In [8]: numpy.array([numpy.array ([4,5,6]), numpy.array([1,2,3])], > > dtype=object).shape > > Out[8]: (2,) > > > The latter looks more correct, in that is produces an array of objects. To > get the previous behaviour there is the function vstack: > > In [6]: a = array([1,2,3]) > > In [7]: b = array([4,5,6]) > > In [8]: vstack([a,b]) > Out[8]: > array([[1, 2, 3], > [4, 5, 6]]) > > Chuck > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Aug 31 12:24:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 10:24:35 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > For this simple example yes, but if one of the nice things about the array > constructors is that they know that lists, tuple and arrays are just > sequences and any combination of them is valid numpy input. So for instance > a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a > 3d array. A list of 1d arrays yields 2d array. This was the case > consistently across all dtypes. Now it is the case across all of them > except for the dtype=object which has this unusual behavior. I agree that > vstack is a better choice when you know you have a list of arrays but it is > less useful when you don't know and you can't force a type in the vstack > function so there is no longer an equivalent to the dtype=object behavior: > > In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], > dtype=object) > Out[7]: > array([[1, 2, 3], > [4, 5, 6]], dtype=object) > What are you trying to do? 
If you want integers: In [32]: a = array([array([1,2,3]), array([4,5,6])], dtype=int) In [33]: a.shape Out[33]: (2, 3) If you want objects, you have them: In [30]: a = array([array([1,2,3]), array([4,5,6])], dtype=object) In [31]: a.shape Out[31]: (2,) i.e, a is an array containing two array objects. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Aug 31 12:36:08 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 09:36:08 -0700 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 In-Reply-To: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> References: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> Message-ID: <44F70FF8.6090801@noaa.gov> Jonathan Taylor wrote: > When trying to install 1.0b4 I had trouble getting it to detect my > installed atlas. This was because the shipped site.cfg had; > > [atlas] > library_dirs = /usr/lib/atlas/3dnow/ > atlas_libs = lapack, blas > > but I had to change 3dnow to sse2 due to my current state of > pentiumness. In any case it should probabally look in all the > possible locations instead of just AMD's location. "All possible locations" is pretty much impossible. There really isn't any choice but for individuals to customize site.cfg for their setup. that's why it's called "site".cfg. I would like to see a pretty good collection of examples, most of them commented out, in there, however. i.e.: ## for AMD atlas: #library_dirs = /usr/lib/atlas/3dnow/ #atlas_libs = lapack, blas ## for Fedora Core 4 sse2 atlas: #library_dirs = /usr/lib/sse2/ #atlas_libs = lapack, blas etc, etc. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Aug 31 12:46:06 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 09:46:06 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: <44F7124E.7010702@noaa.gov> Fernando Perez wrote: > In [8]: N.array(3).shape > Out[8]: () > In [11]: N.array([]).shape > Out[11]: (0,) > I guess my only remaining question is: what is the difference between > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > while a (0,) shape indicates a one-dimensional array with no elements? > If this interpretation is correct, what is the usage of the latter > kind of object, given how it can't even be indexed? It can be iterated over (with zero iterations): >>> a = N.array([]) >>> for i in a: ... print i ... whereas the scalar can not: >>> b = N.array(3) >>> b array(3) >>> for i in b: ... print i ... Traceback (most recent call last): File "", line 1, in ? TypeError: iteration over a scalar (0-dim array) Of course the scalar isn't empty, so ti's different in that way too. Can there be an empty scalar? It doesn't look like it. In fact, this looks like it may be a bug: >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a () 1 6 That's what I'd expect, but what if you start with a (0,) array: >>> a = N.array([]).sum(); a.shape; a.size; a () 1 0 where did that zero come from? >>> N.__version__ '1.0b4' -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Thu Aug 31 12:51:01 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 10:51:01 -0600 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 In-Reply-To: <44F70FF8.6090801@noaa.gov> References: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> <44F70FF8.6090801@noaa.gov> Message-ID: On 8/31/06, Christopher Barker wrote: > > Jonathan Taylor wrote: > > When trying to install 1.0b4 I had trouble getting it to detect my > > installed atlas. This was because the shipped site.cfg had; > > > > [atlas] > > library_dirs = /usr/lib/atlas/3dnow/ > > atlas_libs = lapack, blas > > > > but I had to change 3dnow to sse2 due to my current state of > > pentiumness. In any case it should probabally look in all the > > possible locations instead of just AMD's location. > > "All possible locations" is pretty much impossible. There really isn't > any choice but for individuals to customize site.cfg for their setup. > that's why it's called "site".cfg. > > I would like to see a pretty good collection of examples, most of them > commented out, in there, however. i.e.: I need this on fc5 x86_64 [atlas] library_dirs = /usr/lib64/atlas atlas_libs = lapack, blas, cblas, atlas I think this should be automatic. Apart from debian, the /usr/lib64 directory is pretty much standard for 64bit linux distros. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tim.hochberg at ieee.org Thu Aug 31 12:57:25 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Thu, 31 Aug 2006 09:57:25 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F7124E.7010702@noaa.gov> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> Message-ID: <44F714F5.9050305@ieee.org> Christopher Barker wrote: > Fernando Perez wrote: > >> In [8]: N.array(3).shape >> Out[8]: () >> > > >> In [11]: N.array([]).shape >> Out[11]: (0,) >> > > >> I guess my only remaining question is: what is the difference between >> outputs #8 and #11 above? Is an empty shape tuple == array scalar, >> while a (0,) shape indicates a one-dimensional array with no elements? >> If this interpretation is correct, what is the usage of the latter >> kind of object, given how it can't even be indexed? >> > > It can be iterated over (with zero iterations): > > >>> a = N.array([]) > >>> for i in a: > ... print i > ... > > whereas the scalar can not: > > >>> b = N.array(3) > >>> b > array(3) > >>> for i in b: > ... print i > ... > Traceback (most recent call last): > File "", line 1, in ? > TypeError: iteration over a scalar (0-dim array) > > Of course the scalar isn't empty, so it's different in that way too. Can > there be an empty scalar? It doesn't look like it. In fact, this looks > like it may be a bug: > >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a > () > 1 > 6 > > That's what I'd expect, but what if you start with a (0,) array: > >>> a = N.array([]).sum(); a.shape; a.size; a > () > 1 > 0 > > where did that zero come from? > More or less from: >>> numpy.add.identity 0 All the ufuncs have an identity value that they use as a starting point for reduce and accumulate. Sum doesn't appear to actually have one, but since it's more or less the same as add.reduce it's probably good that it has the same behavior.
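Tim's point carries over to plain Python: functools.reduce takes the identity as an explicit initializer, and without one an empty reduction has no defined value. This is a generic analogy, not numpy code:

```python
from functools import reduce
import operator

# supplying the additive identity makes an empty reduction well defined;
# this is what add.reduce does implicitly, which is where the 0 comes from
print(reduce(operator.add, [], 0))         # 0
print(reduce(operator.add, [1, 2, 3], 0))  # 6

# with no identity, reducing an empty sequence raises
try:
    reduce(operator.add, [])
except TypeError as err:
    print("empty reduce without identity:", err)
```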
Note that this also matches the behavior of python's built-in sum, although there the identity is called 'start'. -tim > >>> N.__version__ > '1.0b4' > > -Chris > > > > From tom.denniston at alum.dartmouth.org Thu Aug 31 13:00:06 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 12:00:06 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: But I have heterogeneous arrays that have numbers and strings and NoneType, etc. Take for instance: In [11]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object) Out[11]: array([[1, A, None], [2, 2, Some string]], dtype=object) In [12]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object).shape Out[12]: (2, 3) Works fine in Numeric and pre beta numpy but in beta numpy versions I get: In [6]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object) Out[6]: array([[1 A None], [2 2 Some string]], dtype=object) In [7]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object).shape Out[7]: (2,) But a list of lists still gives: In [9]: numpy.array([[1,'A', None], [2,2,'Some string']], dtype=object).shape Out[9]: (2, 3) And if you omit the dtype and use a list of arrays then you get a string array with shape (2, 3): In [11]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])]).shape Out[11]: (2, 3) This new behavior strikes me as inconsistent. One of the things I love about numpy is the ufuncs, array constructors, etc. don't care about whether you pass a combination of lists, arrays, tuples, etc. They just know what you _mean_. And what you _mean_ by a list of lists, tuple of arrays, list of arrays, etc is very consistent across constructors and functions.
I think making an exception for dtype=object introduces a lot of inconsistencies and it isn't clear to me what is gained. Does anyone commonly use the array interface in a manner that this new behavior is actuallly favorable? I may be overlooking a common use case or something like that. On 8/31/06, Charles R Harris wrote: > > > > On 8/31/06, Tom Denniston > wrote: > > > > For this simple example yes, but if one of the nice things about the array > constructors is that they know that lists, tuple and arrays are just > sequences and any combination of them is valid numpy input. So for instance > a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a > 3d array. A list of 1d arrays yields 2d array. This was the case > consistently across all dtypes. Now it is the case across all of them > except for the dtype=object which has this unusual behavior. I agree that > vstack is a better choice when you know you have a list of arrays but it is > less useful when you don't know and you can't force a type in the vstack > function so there is no longer an equivalent to the dtype=object behavior: > > > > In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], > dtype=object) > > Out[7]: > > array([[1, 2, 3], > > [4, 5, 6]], dtype=object) > > > What are you trying to do? If you want integers: > > In [32]: a = array([array([1,2,3]), array([4,5,6])], dtype=int) > > In [33]: a.shape > Out[33]: (2, 3) > > > If you want objects, you have them: > > In [30]: a = array([array([1,2,3]), array([4,5,6])], dtype=object) > > In [31]: a.shape > Out[31]: (2,) > > i.e, a is an array containing two array objects. > > Chuck > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From charlesr.harris at gmail.com Thu Aug 31 13:26:15 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:26:15 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F7124E.7010702@noaa.gov> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> Message-ID: On 8/31/06, Christopher Barker wrote: > > Fernando Perez wrote: > > In [8]: N.array(3).shape > > Out[8]: () > > > In [11]: N.array([]).shape > > Out[11]: (0,) > > > I guess my only remaining question is: what is the difference between > > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > > while a (0,) shape indicates a one-dimensional array with no elements? > > If this interpretation is correct, what is the usage of the latter > > kind of object, given how it can't even be indexed? > > It can be iterated over (with zero iterations): > > >>> a = N.array([]) > >>> for i in a: > ... print i > ... > > whereas the scalar can not: > > >>> b = N.array(3) > >>> b > array(3) > >>> for i in b: > ... print i > ... > Traceback (most recent call last): > File "", line 1, in ? > TypeError: iteration over a scalar (0-dim array) > > Of course the scalar isn't empty, so ti's different in that way too. Can > there be an empty scalar? It doesn't look like it. 
In fact, this looks > like it may be a bug: > >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a > () > 1 > 6 > > That's what I'd expect, but what if you start with a (0,) array: > >>> a = N.array([]).sum(); a.shape; a.size; a > () > 1 > 0 > > where did that zero come from? I think that is correct, sums over empty sets are conventionally set to zero because they are conceived of as adding all the values in the set to zero. Typically this would be implemented as sum = 0 for i in set : sum += i; Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Aug 31 13:36:16 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:36:16 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > But i have hetergenious arrays that have numbers and strings and NoneType, > etc. > > Take for instance: > > In [11]: numpy.array([numpy.array([1,'A', None]), > numpy.array([2,2,'Some string'])], dtype=object) > Out[11]: > array([[1, A, None], > [2, 2, Some string]], dtype=object) > > In [12]: numpy.array([numpy.array([1,'A', None]), > numpy.array([2,2,'Some string'])], dtype=object).shape > Out[12]: (2, 3) > > Works fine in Numeric and pre beta numpy but in beta numpy versions i get: I think you want: In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some string'],dtype=object)]) In [60]: a.shape Out[60]: (2, 3) Which makes good sense to me. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Thu Aug 31 13:57:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:57:44 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Charles R Harris wrote: > > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. > > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > Which makes good sense to me. > OK, I changed my mind. I think you are right and this is a bug. Using svn revision 3098 I get In [2]: a = array([1,'A', None]) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/charris/ TypeError: expected a readable buffer object Which is different than you get with beta 1 in any case. I think array should cast the objects in the list to the first common dtype, object in this case. So the previous should be shorthand for: In [3]: a = array([1,'A', None], dtype=object) In [4]: a.shape Out[4]: (3,) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 14:08:17 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 13:08:17 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: Yes one can take a toy example and hack it to work but I don't necessarily have control over the input as to whether it is a list of object arrays, list of 1d heterogenous arrays, etc. Before I didn't need to worry about the input because numpy understood that a list of 1d arrays is a 2d piece of data. Now it understands this for all dtypes except object. My question was is this new set of semantics preferable to the old. I think your example kind of proves my point. Does it really make any sense for the following two ways of specifying an array give such different results? They strike me as _meaning_ the same thing. Doesn't it seem inconsistent to you? In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some string'],dtype=object)], dtype=object).shape Out[13]: (2,) and In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some string'],dtype=object)]).shape Out[14]: (2, 3) So my question is what is the _advantage_ of the new semantics? The two examples above used to give the same results. In what cases is it preferable for them to give different results? How does it make life simpler? On 8/31/06, Charles R Harris wrote: > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. 
> > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > > Which makes good sense to me. > > Chuck > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 14:11:22 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 13:11:22 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: I wrote the last email before reading your a = array([1,'A', None]) comment. I definitely agree with you on that. On 8/31/06, Tom Denniston wrote: > > Yes one can take a toy example and hack it to work but I don't > necessarily have control over the input as to whether it is a list of object > arrays, list of 1d heterogeneous arrays, etc.
Before I didn't need to worry > about the input because numpy understood that a list of 1d arrays is a > 2d piece of data. Now it understands this for all dtypes except object. My > question was is this new set of semantics preferable to the old. > > I think your example kind of proves my point. Does it really make any > sense for the following two ways of specifying an array give such different > results? They strike me as _meaning_ the same thing. Doesn't it seem > inconsistent to you? > > > In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some > string'],dtype=object)], dtype=object).shape > Out[13]: (2,) > > and > > In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some > string'],dtype=object)]).shape > Out[14]: (2, 3) > So my question is what is the _advantage_ of the new semantics? The two > examples above used to give the same results. In what cases is it > preferable for them to give different results? How does it make life > simpler? > > > On 8/31/06, Charles R Harris wrote: > > > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. > > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([ numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > > Which makes good sense to me. > > Chuck > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From secchi at sssup.it Thu Aug 31 14:13:29 2006 From: secchi at sssup.it (Angelo Secchi) Date: Thu, 31 Aug 2006 20:13:29 +0200 Subject: [Numpy-discussion] Strange exp Message-ID: <20060831201329.49946c4e.secchi@sssup.it> Hi, I have the following script import fileinput import string from math import * from scipy import * from rpy import * import Numeric import shelve import sys def dpolya1(n,N,b,a): a=float(a) b=float(b) L=784 probs=((special.gammaln(N+1)+special.gammaln(L*(a/b))+special.gammaln((a/b)+n)+special.gammaln((a/b)*(L-1)+N-n))-(special.gammaln(L*(a/b)+N)+special.gammaln(a/b)+special.gammaln(n+1)+special.gammaln(N-n+1)+special.gammaln(L*(a/b)-(a/b))))#) return probs and I observe the following "strange" (for me of course) behaviour >>> dpolya1(1,2,0.5,0.4) -5.9741312822170585 >>> type(dpolya1(1,2,0.5,0.4)) >>> exp(dpolya1(1,2,0.5,0.4)) Traceback (most recent call last): File "", line 1, in ? AttributeError: 'numpy.ndarray' object has no attribute 'exp' I do not understand what's wrong. Any help? Thanks Angelo From torgil.svensson at gmail.com Thu Aug 31 14:21:50 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 20:21:50 +0200 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: <44F5A1B5.7090409@ieee.org> References: <44F5A1B5.7090409@ieee.org> Message-ID: > Yes. fromiter(iterable, dtype, count) works. Oh. Thanks. I probably had too old documentation to see this (15 June). 
If it's not updated since I'll give Travis a rest about this, at least until 1.0 is released :) > Regardless, L is only iterated over once. How can this be true? If no size is given, mustn't numpy either loop over L twice or build an internal representation on which it'll iterate or copy in chunks? I just found out that this works >>> import numpy,itertools >>> rec_dt=numpy.dtype(">i4,S10,f8") >>> rec_iter=itertools.cycle([(1,'s',4.0),(5,'y',190.0),(2,'h',-8)]) >>> numpy.fromiter(rec_iter,rec_dt,10).view(recarray) recarray([(1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0)], dtype=[('f0', '>i4'), ('f1', '|S10'), ('f2', '>> d2_dt=numpy.dtype("4f8") >>> d2_iter=itertools.cycle([(1.0,numpy.nan,-1e10,14.0)]) >>> numpy.fromiter(d2_iter,d2_dt,10) Traceback (most recent call last): File "", line 1, in ? TypeError: a float is required >>> numpy.__version__ '1.0b4' //Torgil On 8/30/06, Tim Hochberg wrote: > Torgil Svensson wrote: > >> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > >> > > > > Is it possible for fromiter to take an optional shape (or count) > > argument in addition to the dtype argument? > Yes. fromiter(iterable, dtype, count) works. > > > If both is given it could > > preallocate memory and we only have to iterate over L once. > > > Regardless, L is only iterated over once. In general you can't rewind > iterators, so that's a requirement. This is accomplished by doing > successive overallocation similar to the way appending to a list is > handled. By specifying the count up front you save a bunch of reallocs, > but no iteration. > > -tim > > > > > //Torgil > > > > On 8/29/06, Keith Goodman wrote: > > > >> On 8/29/06, Torgil Svensson wrote: > >> > >>> something like this? 
> >>> > >>> def list2index(L): > >>> uL=sorted(set(L)) > >>> idx=dict((y,x) for x,y in enumerate(uL)) > >>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > >>> > >> Wow. That's amazing. Thank you. > >> > >> ------------------------------------------------------------------------- > >> Using Tomcat but need to do more? Need to support web services, security? > >> Get stuff done quickly with pre-integrated technology to make your job easier > >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > >> _______________________________________________ > >> Numpy-discussion mailing list > >> Numpy-discussion at lists.sourceforge.net > >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > >> > >> > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From torgil.svensson at gmail.com Thu Aug 31 14:25:12 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 20:25:12 +0200 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: def list2index(L): uL=sorted(set(L)) idx=dict((y,x) for x,y in enumerate(uL)) return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int,count=len(L))) adding the count will save you a little more time, and temporary memory [see related thread]. //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Torgil Svensson wrote: > > something like this? > > > > def list2index(L): > > uL=sorted(set(L)) > > idx=dict((y,x) for x,y in enumerate(uL)) > > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > > Wow. That's amazing. Thank you. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From charlesr.harris at gmail.com Thu Aug 31 14:35:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 12:35:25 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: I submitted a ticket for this. On 8/31/06, Tom Denniston wrote: > > wrote the last email before reading your a = array([1,'A', None]) > comment. I definately agree with you on that. > > > On 8/31/06, Tom Denniston wrote: > > > > Yes one can take a toy example and hack it to work but I don't > > necessarily have control over the input as to whether it is a list of object > > arrays, list of 1d heterogenous arrays, etc. Before I didn't need to worry > > about the input because numpy understood that a list of 1d arrays is a > > 2d piece of data. Now it understands this for all dtypes except object. My > > question was is this new set of semantics preferable to the old. > > > > I think your example kind of proves my point. Does it really make any > > sense for the following two ways of specifying an array give such different > > results? They strike me as _meaning_ the same thing. Doesn't it seem > > inconsistent to you? > > > > > > In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some > > string'],dtype=object)], dtype=object).shape > > Out[13]: (2,) > > > > and > > > > In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some > > string'],dtype=object)]).shape > > Out[14]: (2, 3) > > So my question is what is the _advantage_ of the new semantics? 
The two > > examples above used to give the same results. In what cases is it > > preferable for them to give different results? How does it make life > > simpler? > > > > > > On 8/31/06, Charles R Harris wrote: > > > > > On 8/31/06, Tom Denniston wrote: > > > > > But i have hetergenious arrays that have numbers and strings and > > > NoneType, etc. > > > > > > Take for instance: > > > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > > numpy.array([2,2,'Some string'])], dtype=object) > > > Out[11]: > > > array([[1, A, None], > > > [2, 2, Some string]], dtype=object) > > > > > > In [12]: numpy.array([ numpy.array([1,'A', None]), > > > numpy.array([2,2,'Some string'])], dtype=object).shape > > > Out[12]: (2, 3) > > > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > > get: > > > > > > I think you want: > > > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > > string'],dtype=object)]) > > > > In [60]: a.shape > > Out[60]: (2, 3) > > > > > > Which makes good sense to me. > > > > Chuck > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, > > security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache > > Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Aug 31 14:35:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 Aug 2006 13:35:11 -0500 Subject: [Numpy-discussion] Strange exp In-Reply-To: <20060831201329.49946c4e.secchi@sssup.it> References: <20060831201329.49946c4e.secchi@sssup.it> Message-ID: Angelo Secchi wrote: > Hi, > I have the following script > > import fileinput > import string > from math import * > from scipy import * > from rpy import * > import Numeric > import shelve > import sys > > def dpolya1(n,N,b,a): > a=float(a) > b=float(b) > L=784 > probs=((special.gammaln(N+1)+special.gammaln(L*(a/b))+special.gammaln((a/b)+n)+special.gammaln((a/b)*(L-1)+N-n))-(special.gammaln(L*(a/b)+N)+special.gammaln(a/b)+special.gammaln(n+1)+special.gammaln(N-n+1)+special.gammaln(L*(a/b)-(a/b))))#) > return probs > > and I observe the following "strange" (for me of course) behaviour > >>>> dpolya1(1,2,0.5,0.4) > -5.9741312822170585 >>>> type(dpolya1(1,2,0.5,0.4)) > <type 'numpy.ndarray'> >>>> exp(dpolya1(1,2,0.5,0.4)) > Traceback (most recent call last): > File "<stdin>", line 1, in ? > AttributeError: 'numpy.ndarray' object has no attribute 'exp' > > I do not understand what's wrong. Any help? Probably rpy (which still uses Numeric, right?) is exposing Numeric's exp() implementation and overriding the one that you got from scipy (which is numpy's, I presume). When Numeric's exp() is confronted with an object that it doesn't recognize, it looks for a .exp() method to call.
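The shadowing itself is easy to reproduce with two standard-library modules (a sketch of the mechanism only, not of the rpy/scipy setup above):

```python
# Two wildcard imports in a row: the second silently rebinds any names
# the first had already brought in -- the same mechanism by which rpy's
# Numeric-based exp() can displace the one scipy provided.
from cmath import *   # binds exp, sqrt, ... to the complex versions
from math import *    # silently rebinds them to the real-valued versions
import cmath, math

assert exp is math.exp        # the last wildcard import "wins"
assert exp is not cmath.exp
```

Whichever wildcard import runs last owns the name, and nothing warns you about it.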
If you want to avoid this situation in the future, don't use the "from foo import *" form. It makes debugging problems like this difficult. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.hochberg at ieee.org Thu Aug 31 14:43:22 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Thu, 31 Aug 2006 11:43:22 -0700 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: References: <44F5A1B5.7090409@ieee.org> Message-ID: <44F72DCA.9050700@ieee.org> Torgil Svensson wrote: >> Yes. fromiter(iterable, dtype, count) works. >> > > Oh. Thanks. I probably had too old documentation to see this (15 > June). If it's not been updated since, I'll give Travis a rest about this, > at least until 1.0 is released :) > Actually I just knew 'cause I wrote it. I don't see a docstring for fromiter, although I thought I wrote one. Maybe I just forgot? >> Regardless, L is only iterated over once. >> > > How can this be true? If no size is given, mustn't numpy either loop > over L twice or build an internal representation on which it'll > iterate or copy in chunks? > Well, it can't in general loop over L twice since the only method that L is guaranteed to have is next(); that's the extent of the iterator protocol. What it does is allocate an initial chunk of memory (the size of which I forget -- I did some tuning) and start filling it up. Once it's full, it does a realloc, which expands the existing chunk of memory, if possible, or returns a new, larger, chunk of memory with the data copied into it. Then we iterate on L some more until we fill up the new larger chunk, in which case we go get another one, etc. This is exactly how list.append works, although in that case the chunk of data is actually a chunk of pointers to objects.
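In pure Python, the strategy looks roughly like this (a sketch only; the real code grows a raw memory buffer, and its initial chunk size and growth factor are internal tuning details):

```python
def fromiter_sketch(iterable, count=None):
    """Consume an iterator in a single pass, growing the buffer as needed."""
    it = iter(iterable)
    if count is not None:
        # Known count: allocate exactly once, no reallocs needed.
        return [next(it) for _ in range(count)]
    buf, used, cap = [None] * 8, 0, 8   # small initial chunk
    for item in it:
        if used == cap:                 # buffer full: "realloc" to a bigger one
            cap *= 2
            buf.extend([None] * (cap - used))
        buf[used] = item
        used += 1
    return buf[:used]                   # trim the unused tail

# One pass either way; the count just skips the intermediate growth steps.
assert fromiter_sketch(x * x for x in range(100)) == [x * x for x in range(100)]
assert fromiter_sketch(range(10), count=5) == [0, 1, 2, 3, 4]
```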
-tim > > I just found out that this works > >>>> import numpy,itertools >>>> rec_dt=numpy.dtype(">i4,S10,f8") >>>> rec_iter=itertools.cycle([(1,'s',4.0),(5,'y',190.0),(2,'h',-8)]) >>>> numpy.fromiter(rec_iter,rec_dt,10).view(recarray) >>>> > recarray([(1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), > (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), > (2, 'h', -8.0), (1, 's', 4.0)], > dtype=[('f0', '>i4'), ('f1', '|S10'), ('f2', '<f8')]) > > but what's wrong with this? > > >>>> d2_dt=numpy.dtype("4f8") >>>> d2_iter=itertools.cycle([(1.0,numpy.nan,-1e10,14.0)]) >>>> numpy.fromiter(d2_iter,d2_dt,10) >>>> > Traceback (most recent call last): > File "<stdin>", line 1, in ? > TypeError: a float is required > >>>> numpy.__version__ >>>> > '1.0b4' > > //Torgil > > > > On 8/30/06, Tim Hochberg wrote: > >> Torgil Svensson wrote: >> >>>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>>> >>>> >>> Is it possible for fromiter to take an optional shape (or count) >>> argument in addition to the dtype argument? >>> >> Yes. fromiter(iterable, dtype, count) works. >> >> >>> If both are given it could >>> preallocate memory and we only have to iterate over L once. >>> >>> >> Regardless, L is only iterated over once. In general you can't rewind >> iterators, so that's a requirement. This is accomplished by doing >> successive overallocation similar to the way appending to a list is >> handled. By specifying the count up front you save a bunch of reallocs, >> but no iteration. >> >> -tim >> >> >> >> >>> //Torgil >>> >>> On 8/29/06, Keith Goodman wrote: >>> >>> >>>> On 8/29/06, Torgil Svensson wrote: >>>> >>>> >>>>> something like this? >>>>> >>>>> def list2index(L): >>>>> uL=sorted(set(L)) >>>>> idx=dict((y,x) for x,y in enumerate(uL)) >>>>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>>>> >>>>> >>>> Wow. That's amazing. Thank you.
>>>> >>>> ------------------------------------------------------------------------- >>>> Using Tomcat but need to do more? Need to support web services, security? >>>> Get stuff done quickly with pre-integrated technology to make your job easier >>>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >>>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >>>> _______________________________________________ >>>> Numpy-discussion mailing list >>>> Numpy-discussion at lists.sourceforge.net >>>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>>> >>>> >>>> >>> ------------------------------------------------------------------------- >>> Using Tomcat but need to do more? Need to support web services, security? >>> Get stuff done quickly with pre-integrated technology to make your job easier >>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at lists.sourceforge.net >>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>> >>> >>> >>> >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From Chris.Barker at noaa.gov Thu Aug 31 14:51:33 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 11:51:33 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F714F5.9050305@ieee.org> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> <44F714F5.9050305@ieee.org> Message-ID: <44F72FB5.2070300@noaa.gov> Tim Hochberg wrote: >> That's what I'd expect, but what if you start with a (0,) array: >> >>> a = N.array([]).sum(); a.shape; a.size; a >> () >> 1 >> 0 >> >> where did that zero come from? >> > More or less from: > > >>> numpy.add.identity > 0 I'm not totally sure, but I think I'd rather it raise an exception. However, if it's not going to, then 0 is really the only reasonable answer. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Aug 31 15:08:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 12:08:51 -0700 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: <44F733C3.7000307@noaa.gov> Tom Denniston wrote: > So my question is what is the _advantage_ of the new semantics? 
what if the lists don't have the same length, and therefore cannot be made into an array? Then you get a weird result: >>>N.array([N.array([1,'A',None],dtype=object),N.array([2,2,'Somestring',5],dtype=object)]).shape () Now you get an Object scalar. but: >>>N.array([N.array([1,'A',None],dtype=object),N.array([2,2,'Somestring',5],dtype=object)],dtype=object).shape (2,) Now you get a length 2 array, just like before: far more consistent. With the old semantics, if you test your code with arrays of different lengths, you'll get one thing, but if they then happen to be the same length in some production use, the whole thing breaks -- this is a Bad Idea. Object arrays are just plain weird; there is nothing you can do that will satisfy every need. I think it's best for the array constructor to not try to guess at the hierarchy of sequences you *meant* to use. You can (and probably should) always be explicit with: >>> A = N.empty((2,), dtype=object) >>> A array([None, None], dtype=object) >>> A[:] = [N.array([1,'A', None], dtype=object),N.array([2,2,'Somestring',5],dtype=object)] >>> A array([[1 A None], [2 2 Somestring 5]], dtype=object) -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tom.denniston at alum.dartmouth.org Thu Aug 31 15:29:15 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 14:29:15 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: <44F733C3.7000307@noaa.gov> References: <44F733C3.7000307@noaa.gov> Message-ID: I would think one would want to throw an error when the data has inconsistent dimensions.
This is what numpy does for other dtypes: In [10]: numpy.array(([1,2,3], [4,5,6])) Out[10]: array([[1, 2, 3], [4, 5, 6]]) In [11]: numpy.array(([1,3], [4,5,6])) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) TypeError: an integer is required On 8/31/06, Christopher Barker wrote: > > Tom Denniston wrote: > > So my question is what is the _advantage_ of the new semantics? > > what if the list don't have the same length, and therefor can not be > made into an array, now you get a weird result: > > >>>N.array([N.array([1,'A',None],dtype=object),N.array > ([2,2,'Somestring',5],dtype=object)]).shape > () > > Now you get an Object scalar. > > but: > >>>N.array([N.array([1,'A',None],dtype=object),N.array > ([2,2,'Somestring',5],dtype=object)],dtype=object).shape > (2,) > > Now you get a length 2 array, just like before: far more consistent. > With the old semantics, if you test your code with arrays of different > lengths, you'll get one thing, but if they then happen to be the same > length in some production use, the whole thing breaks -- this is a Bad > Idea. > > Object arrays are just plain weird, there is nothing you can do that > will satisfy every need. I think it's best for the array constructor to > not try to guess at what the hierarchy of sequences you *meant* to use. > You can (and probably should) always be explicit with: > > >>> A = N.empty((2,), dtype=object) > >>> A > array([None, None], dtype=object) > >>> A[:] = [N.array([1,'A', None], > dtype=object),N.array([2,2,'Somestring',5],dtype=object)] > >>> A > array([[1 A None], [2 2 Somestring 5]], dtype=object) > > -Chris > > > > > > -- > Christopher Barker, Ph.D. 
> Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Aug 31 15:51:07 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 12:51:07 -0700 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: <44F733C3.7000307@noaa.gov> Message-ID: <44F73DAB.3020100@noaa.gov> Tom Denniston wrote: > I would think one would want to throw an error when the data has > inconsistent dimensions. But it doesn't have inconsistent dimensions - they are perfectly consistent with a (2,) array of objects. How is the code to know what you intended? With numeric types, it is unambiguous to march down through the sequences until you get a number. As a sequence is an object, there is no way to unambiguously do this automatically. Perhaps the way to solve this is for the array constructor to take a "shape" or "rank" argument, so you could specify what you intend. But that's really just syntactic sugar to avoid calling numpy.empty() first. Perhaps a numpy.object_array() constructor would be useful, although as I think about it, even specifying a shape or rank would not be unambiguous! This is a useful discussion.
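To make the explicit construction suggested earlier concrete (a sketch that runs on current numpy; assigning element by element keeps the constructor from guessing anything):

```python
import numpy as np

# Build the (2,) object array explicitly, so numpy never has to guess
# the intended hierarchy of sequences -- even for ragged contents.
A = np.empty((2,), dtype=object)
A[0] = np.array([1, 'A', None], dtype=object)
A[1] = np.array([2, 2, 'Somestring', 5], dtype=object)

assert A.shape == (2,)       # always a length-2 array of objects...
assert A[0].shape == (3,)    # ...regardless of the element lengths
assert A[1].shape == (4,)
```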
If we ever get a nd-array into the standard lib, I suspect that object arrays will get heavy use -- better to clean up the semantics now. Perhaps a Wiki page on building object arrays is called for. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Thu Aug 31 15:59:40 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 13:59:40 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: <44F73DAB.3020100@noaa.gov> References: <44F733C3.7000307@noaa.gov> <44F73DAB.3020100@noaa.gov> Message-ID: On 8/31/06, Christopher Barker wrote: > > Tom Denniston wrote: > > I would think one would want to throw an error when the data has > > inconsistent dimensions. > > But it doesn't have inconsistent dimensions - they are perfectly > consistent with a (2,) array of objects. How is the code to know what > you intended? Same as it produces a float array from array([1,2,3.0]). Array is a complicated function for precisely these sort of reasons, but the convenience makes it worthwhile. So, if a list contains something that can only be interpreted as an object, dtype should be set to object. With numeric types, it is unambiguous to march down through the > sequences until you get a number. As a sequence is an object, there no > way to unambiguously do this automatically. > > Perhaps the way to solve this is for the array constructor to take a > "shape" or "rank" argument, so you could specify what you intend. But > that's really just syntactic sugar to avoid for calling numpy.empty() > first. > > Perhaps a numpy.object_array() constructor would be useful, although as > I think about it, even specifying a shape or rank would not be > unambiguous! > > This is a useful discussion. 
If we ever get a nd-array into the standard > lib, I suspect that object arrays will get heavy use -- better to clean > up the semantics now. > > Perhaps a Wiki page on building object arrays is called for. > > -Chris Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cookedm at physics.mcmaster.ca Thu Aug 31 19:11:01 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 31 Aug 2006 19:11:01 -0400 Subject: [Numpy-discussion] amd64 support In-Reply-To: References: Message-ID: <23DA5221-A8C4-4B67-B404-953F3CBC3C69@physics.mcmaster.ca> On Aug 30, 2006, at 11:53 , Keith Goodman wrote: > I plan to build an amd64 box and run debian etch. Are there any big, > 64-bit, show-stopping problems in numpy? Any minor annoyances? Shouldn't be; I regularly build and test it on an amd64 box running Debian unstable, and I know several others use amd64 boxes too. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From nwagner at iam.uni-stuttgart.de Tue Aug 1 02:24:37 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 01 Aug 2006 08:24:37 +0200 Subject: [Numpy-discussion] svn install failure on amd64 In-Reply-To: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <44CEF3A5.1010200@iam.uni-stuttgart.de> Angus McMorland wrote: > Hi people who know what's going on, > > I'm getting an install failure with the latest numpy from svn (revision > 2940). This is on an amd64 machine running python 2.4.4c0. 
> > The build halts at: > > compile options: '-Ibuild/src.linux-x86_64-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' > gcc: numpy/core/src/multiarraymodule.c > In file included from numpy/core/src/arrayobject.c:508, > from numpy/core/src/multiarraymodule.c:64: > numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': > numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' > undeclared (first use in this function) > numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared > identifier is reported only once > numpy/core/src/arraytypes.inc.src:2139: error: for each function it > appears in.) > In file included from numpy/core/src/arrayobject.c:508, > from numpy/core/src/multiarraymodule.c:64: > numpy/core/src/arraytypes.inc.src: In function 'set_typeinfo': > numpy/core/src/arraytypes.inc.src:2139: error: 'PyIntpArrType_Type' > undeclared (first use in this function) > numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared > identifier is reported only once > numpy/core/src/arraytypes.inc.src:2139: error: for each function it > appears in.) > error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall > -Wstrict-prototypes -fPIC -Ibuild/src.linux-x86_64-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c > numpy/core/src/multiarraymodule.c -o > build/temp.linux-x86_64-2.4/numpy/core/src/multiarraymodule.o" failed > with exit status 1 > > Am I missing something or might this be a bug? > > Cheers, > > Angus. > I can build numpy on a 32-bit machine but it fails on a 64-bit machine. Travis, please can you have a look at this issue. 
In file included from numpy/core/src/arrayobject.c:508, from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arraytypes.inc.src: In function ?set_typeinfo?: numpy/core/src/arraytypes.inc.src:2139: error: ?PyIntpArrType_Type? undeclared (first use in this function) numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared identifier is reported only once numpy/core/src/arraytypes.inc.src:2139: error: for each function it appears in.) In file included from numpy/core/src/arrayobject.c:508, from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arraytypes.inc.src: In function ?set_typeinfo?: numpy/core/src/arraytypes.inc.src:2139: error: ?PyIntpArrType_Type? undeclared (first use in this function) numpy/core/src/arraytypes.inc.src:2139: error: (Each undeclared identifier is reported only once numpy/core/src/arraytypes.inc.src:2139: error: for each function it appears in.) error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -Ibuild/src.linux-x86_64-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-x86_64-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 Nils From oliphant.travis at ieee.org Tue Aug 1 02:54:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 01 Aug 2006 00:54:54 -0600 Subject: [Numpy-discussion] svn install failure on amd64 In-Reply-To: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1154397584.14839.17.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <44CEFABE.6060804@ieee.org> Angus McMorland wrote: > Hi people who know what's going on, > > I'm getting an install failure with the latest numpy from svn (revision > 2940). This is on an amd64 machine running python 2.4.4c0. > This was my fault. 
Revision 2931 contained a mistaken deletion of a line from arrayobject.h that should not have happened, which affected only 64-bit builds. This problem is corrected in revision 2941. -Travis From lcordier at point45.com Tue Aug 1 04:05:46 2006 From: lcordier at point45.com (Louis Cordier) Date: Tue, 1 Aug 2006 10:05:46 +0200 (SAST) Subject: [Numpy-discussion] numpy vs numarray In-Reply-To: <44CE3EF5.9030508@ieee.org> References: <44CE3EF5.9030508@ieee.org> Message-ID: > I listened to this and it looks like Sergio Ray is giving an intro class > on scientific computing with Python and has some concepts confused. We > should take this as a sign that we need to keep doing a good job of > educating people. I'm on UTC+02:00 so only just saw there have been a few posts. Basically my issue was with numarray going to replace NumPy, and that the recording was only a few months old, sitting on the web where newcomers to Python will undoubtedly find it. I thought the proper thing to do would be to ask the 411 site to just append a footnote explaining that some of the info is out-dated. I just didn't want to do it without getting the group's opinion first. Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From klemm at phys.ethz.ch Tue Aug 1 07:25:02 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 01 Aug 2006 13:25:02 +0200 Subject: [Numpy-discussion] unexpected behaviour of numpy.var Message-ID: Hello, numpy.var exhibits a rather dangerous behaviour, as I have just noticed. In some cases, numpy.var calculates the variance, and in some cases the standard deviation (=square root of variance). Is this intended? I have to admit that I use numpy 0.9.6 at the moment. Has this been changed in more recent versions? Below is a sample session Python 2.4.3 (#1, May 8 2006, 18:35:42) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> a = [1,2,3,4,5] >>> numpy.var(a) 2.5 >>> numpy.std(a) 1.5811388300841898 >>> numpy.sqrt(2.5) 1.5811388300841898 >>> a1 = numpy.array([[1],[2],[3],[4],[5]]) >>> a1 array([[1], [2], [3], [4], [5]]) >>> numpy.var(a1) array([ 1.58113883]) >>> numpy.std(a1) array([ 1.58113883]) >>> a =numpy.array([1,2,3,4,5]) >>> numpy.std(a) 1.5811388300841898 >>> numpy.var(a) 1.5811388300841898 >>> numpy.__version__ '0.9.6' Hanno -- Hanno Klemm klemm at phys.ethz.ch From David.L.Goldsmith at noaa.gov Tue Aug 1 11:59:16 2006 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Tue, 01 Aug 2006 08:59:16 -0700 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: <44CF7A54.5050609@noaa.gov> Hi, Hanno. I ran your sample session in numpy 0.9.8 (on a Mac, just so you know; I don't yet have numpy installed on my Windows platform, and I don't have immediate access to a *nix box) and could not reproduce the problem, i.e., it does appear to have been fixed in 0.9.8. DG Hanno Klemm wrote: > Hello, > > numpy.var exhibits a rather dangereous behviour, as I have just > noticed. In some cases, numpy.var calculates the variance, and in some > cases the standard deviation (=square root of variance). Is this > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > this been changed in more recent versions? > > Below a sample session > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>>> import numpy >>>> a = [1,2,3,4,5] >>>> numpy.var(a) >>>> > 2.5 > >>>> numpy.std(a) >>>> > 1.5811388300841898 > >>>> numpy.sqrt(2.5) >>>> > 1.5811388300841898 > >>>> a1 = numpy.array([[1],[2],[3],[4],[5]]) >>>> a1 >>>> > array([[1], > [2], > [3], > [4], > [5]]) > >>>> numpy.var(a1) >>>> > array([ 1.58113883]) > >>>> numpy.std(a1) >>>> > array([ 1.58113883]) > >>>> a =numpy.array([1,2,3,4,5]) >>>> numpy.std(a) >>>> > 1.5811388300841898 > >>>> numpy.var(a) >>>> > 1.5811388300841898 > >>>> numpy.__version__ >>>> > '0.9.6' > > > > Hanno > > -- HMRD/ORR/NOS/NOAA From ivilata at carabos.com Tue Aug 1 12:02:01 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue, 01 Aug 2006 18:02:01 +0200 Subject: [Numpy-discussion] Int64 and string support for numexpr Message-ID: <44CF7AF9.2070200@carabos.com> Hi all, I'm attaching some patches that enable the current version of numexpr (r2142) to: 1. Handle int64 integers in addition to int32 (constants, variables and arrays). Python int objects are considered int32 if they fit in 32 bits. Python long objects and int objects that don't fit in 32 bits (for 64-bit platforms) are considered int64. 2. Handle string constants, variables and arrays (not Unicode), with support for comparison operators (==, !=, <, <=, >=, >). (This brings the old ``memsizes`` variable back.) String temporaries (necessary for other kinds of operations) are not supported. The patches also include test cases and some minor corrections (e.g. removing odd carriage returns in some lines in compile.py). There are three patches to ease their individual review: * numexpr-int64.diff only contains the changes for int64 support. * numexpr-str.diff only contains the changes for string support. * numexpr-int64str.diff contains all changes. The task has been somewhat difficult, but I think the result fits quite well in numexpr. So, what's your opinion about the patches? Are they worth integrating into the main branch? Thanks!
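In rough Python, the classification rule for constants described in point 1 looks like this (a sketch of the rule as stated, not of numexpr's actual code, and ignoring the other kinds numexpr knows about):

```python
def constant_kind(value):
    """Classify a constant the way the patch description says:
    ints fitting in 32 bits are int32, wider ints are int64,
    and byte strings are str constants."""
    if isinstance(value, int):
        return 'int32' if -2**31 <= value < 2**31 else 'int64'
    if isinstance(value, bytes):
        return 'str'
    raise TypeError("unsupported constant: %r" % (value,))

assert constant_kind(42) == 'int32'
assert constant_kind(-2**31) == 'int32'   # still fits in 32 bits
assert constant_kind(2**40) == 'int64'    # too wide, promoted
assert constant_kind(b'abc') == 'str'
```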
:: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ C?rabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: numexpr-int64str.tar.gz Type: application/x-gzip Size: 24891 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ndarray at mac.com Tue Aug 1 12:07:33 2006 From: ndarray at mac.com (Sasha) Date: Tue, 1 Aug 2006 12:07:33 -0400 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: I cannot reproduce your results, but I wonder if the following is right: >>> a = array([1,2,3,4,5]) >>> var(a[newaxis,:]) array([ 0., 0., 0., 0., 0.]) >>> a[newaxis,:].var() 2.0 >>> a[newaxis,:].var(axis=0) array([ 0., 0., 0., 0., 0.]) Are method and function supposed to have different defaults? It looks like the method defaults to variance over all axes while the function defaults to axis=0. >>> __version__ '1.0b2.dev2192' On 8/1/06, Hanno Klemm wrote: > > Hello, > > numpy.var exhibits a rather dangereous behviour, as I have just > noticed. In some cases, numpy.var calculates the variance, and in some > cases the standard deviation (=square root of variance). Is this > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > this been changed in more recent versions? > > Below a sample session > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> import numpy > >>> a = [1,2,3,4,5] > >>> numpy.var(a) > 2.5 > >>> numpy.std(a) > 1.5811388300841898 > >>> numpy.sqrt(2.5) > 1.5811388300841898 > >>> a1 = numpy.array([[1],[2],[3],[4],[5]]) > >>> a1 > array([[1], > [2], > [3], > [4], > [5]]) > >>> numpy.var(a1) > array([ 1.58113883]) > >>> numpy.std(a1) > array([ 1.58113883]) > >>> a =numpy.array([1,2,3,4,5]) > >>> numpy.std(a) > 1.5811388300841898 > >>> numpy.var(a) > 1.5811388300841898 > >>> numpy.__version__ > '0.9.6' > > > > Hanno > > -- > Hanno Klemm > klemm at phys.ethz.ch > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From davidgrant at gmail.com Tue Aug 1 12:56:15 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 09:56:15 -0700 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: <44CF7A54.5050609@noaa.gov> References: <44CF7A54.5050609@noaa.gov> Message-ID: I also couldn't reproduce it on my 0.9.8 on Linux. DG On 8/1/06, David L Goldsmith wrote: > > Hi, Hanno. I ran your sample session in numpy 0.9.8 (on a Mac, just so > you know; I don't yet have numpy installed on my Windows platform, and I > don't have immediate access to a *nix box) and could not reproduce the > problem, i.e., it does appear to have been fixed in 0.9.8. > > DG > > Hanno Klemm wrote: > > Hello, > > > > numpy.var exhibits a rather dangereous behviour, as I have just > > noticed. 
In some cases, numpy.var calculates the variance, and in some > > cases the standard deviation (=square root of variance). Is this > > intended? I have to admit that I use numpy 0.9.6 at the moment. Has > > this been changed in more recent versions? > > > > Below a sample session > > > > > > Python 2.4.3 (#1, May 8 2006, 18:35:42) > > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > > >>>> import numpy > >>>> a = [1,2,3,4,5] > >>>> numpy.var(a) > >>>> > > 2.5 > > > >>>> numpy.std(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.sqrt(2.5) > >>>> > > 1.5811388300841898 > > > >>>> a1 = numpy.array([[1],[2],[3],[4],[5]]) > >>>> a1 > >>>> > > array([[1], > > [2], > > [3], > > [4], > > [5]]) > > > >>>> numpy.var(a1) > >>>> > > array([ 1.58113883]) > > > >>>> numpy.std(a1) > >>>> > > array([ 1.58113883]) > > > >>>> a =numpy.array([1,2,3,4,5]) > >>>> numpy.std(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.var(a) > >>>> > > 1.5811388300841898 > > > >>>> numpy.__version__ > >>>> > > '0.9.6' > > > > > > > > Hanno > > > > > > > -- > HMRD/ORR/NOS/NOAA > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From davidgrant at gmail.com Tue Aug 1 13:40:35 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 10:40:35 -0700 Subject: [Numpy-discussion] Graph class Message-ID: I have written my own graph class, it doesn't really do much, just has a few methods, it might do more later. Up until now it has just had one piece of data, an adjacency matrix, so it looks something like this: class Graph: def __init__(self, Adj): self.Adj = Adj I had the idea of changing Graph to inherit numpy.ndarray instead, so then I can just access itself directly rather than having to type self.Adj. Is this the right way to go about it? To inherit from numpy.ndarray? The reason I'm using a numpy array to store the graph by the way is the following: -Memory is not a concern (yet) so I don't need to use a sparse structure like a sparse array or a dictionary -I run a lot of sums on it, argmin, blanking out of certain rows and columns using fancy indexing, grabbing subgraphs using vector indexing -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Tue Aug 1 14:41:07 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 2 Aug 2006 03:41:07 +0900 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Hi David, For a graph, the fact that it's stored as a matrix, or stored as linked nodes, or dicts, etc, is an implementation detail. So from a classical OO point of view, inheritance is not what you want. Inheritance says "this is a kind of that". But a graph is not a kind of matrix. A matrix is merely one possible way to represent a graph. Many matrix operations don't even make sense on a graph (although a lot of them do...). Also you say "memory is not a concern (yet)", but maybe it will be later, and then you'll want to change the underlying representation. 
Ideally you will be able to do this in such a way that all your graph-using code works completely without modification. This will be harder to do if you derive from ndarray. Because to prevent existing code from breaking you have to duplicate ndarray's interface exactly, because you don't know which ndarray methods are being used by all existing Graph-using code. On the other hand, in the short term it's probably easier to derive from ndarray directly if all you need is something quick and dirty. But maybe then you don't even need to make a graph class. All you need is Graph = ndarray I've seen plenty of Matlab code that just uses raw matrices to represent graphs without introducing any new type or class. It may be that's good enough for what you want to do. Python is not really a "Classical OO" language, in the sense that there's no real data hiding, etc. Python's philosophy seems to be more like "whatever makes your life the easiest". So do what you think will make your life easiest based on the totality of your circumstances (including need for future maintenance). If memory is your only concern, then if/when it becomes an issue, a switch to scipy.sparse matrix shouldn't be too bad if you want to just use the ndarray interface. --bill On 8/2/06, David Grant wrote: > I have written my own graph class, it doesn't really do much, just has a few > methods, it might do more later. Up until now it has just had one piece of > data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then I > can just access itself directly rather than having to type self.Adj. Is this > the right way to go about it? To inherit from numpy.ndarray? 
> > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and columns > using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From charlesr.harris at gmail.com Tue Aug 1 15:49:00 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 1 Aug 2006 13:49:00 -0600 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Hi David, I often have several thousand nodes in a graph, sometimes clustered into connected components. I suspect that using an adjacency matrix is an inefficient representation for graphs of that size while for smaller graphs the overhead of more complicated structures wouldn't be noticeable. Have you looked at the boost graph library? I don't like all their stuff but it is a good start with lots of code and a suitable license. Chuck On 8/1/06, David Grant wrote: > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. 
Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? > > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Tue Aug 1 15:54:46 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Aug 2006 13:54:46 -0600 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: References: Message-ID: <44CFB186.8020802@ee.byu.edu> Sasha wrote: >I cannot reproduce your results, but I wonder if the following is right: > > > >>>>a = array([1,2,3,4,5]) >>>>var(a[newaxis,:]) >>>> >>>> >array([ 0., 0., 0., 0., 0.]) > > >>>>a[newaxis,:].var() >>>> >>>> >2.0 > > >>>>a[newaxis,:].var(axis=0) >>>> >>>> >array([ 0., 0., 0., 0., 0.]) > >Are method and function supposed to have different defaults? It looks >like the method defaults to variance over all axes while the function >defaults to axis=0. > > > They are supposed to have different defaults because the functional forms are largely for backward compatibility where axis=0 was the default. -Travis From davidgrant at gmail.com Tue Aug 1 16:31:35 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 13:31:35 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: Thanks Bill, I think you are right, I think what I have is what I want (ie. not extending ndarray). I guess do go along with the "whatever makes your life the easiest" mantra, all I am really missing right now is the ability to access my Graph object like this g[blah] with square brackets and to do vector indexing and all that. What is the name of the double-underscored method that I should implement (and then call the underlying datastructure's corresponding method)? I see __getitem__ and __getslice__... hmm, this could get messy. Maybe the way I have it is ok. Maybe I can live with G.Adj. Dave On 8/1/06, Bill Baxter wrote: > > Hi David, > > For a graph, the fact that it's stored as a matrix, or stored as > linked nodes, or dicts, etc, is an implementation detail. So from a > classical OO point of view, inheritance is not what you want. > Inheritance says "this is a kind of that". 
But a graph is not a kind > of matrix. A matrix is merely one possible way to represent a graph. > Many matrix operations don't even make sense on a graph (although a > lot of them do...). Also you say "memory is not a concern (yet)", but > maybe it will be later, and then you'll want to change the underlying > representation. Ideally you will be able to do this in such a way > that all your graph-using code works completely without modification. > This will be harder to do if you derive from ndarray. Because to > prevent existing code from breaking you have to duplicate ndarray's > interface exactly, because you don't know which ndarray methods are > being used by all existing Graph-using code. > > On the other hand, in the short term it's probably easier to derive > from ndarray directly if all you need is something quick and dirty. > But maybe then you don't even need to make a graph class. All you > need is > > Graph = ndarray > > I've seen plenty of Matlab code that just uses raw matrices to > represent graphs without introducing any new type or class. It may be > that's good enough for what you want to do. > > Python is not really a "Classical OO" language, in the sense that > there's.no real data hiding, etc. Python's philosophy seems to be > more like "whatever makes your life the easiest". So do what you > think will make your life easiest based on the totality of your > circumstances (including need for future maintenance). > > If memory is your only concern, then if/when it becomes and issue, a > switch to scipy.sparse matrix shouldn't be too bad if you want to just > use the ndarray interface. > > --bill > > > On 8/2/06, David Grant wrote: > > I have written my own graph class, it doesn't really do much, just has a > few > > methods, it might do more later. 
Up until now it has just had one piece > of > > data, an adjacency matrix, so it looks something like this: > > > > class Graph: > > def __init__(self, Adj): > > self.Adj = Adj > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so > then I > > can just access itself directly rather than having to type self.Adj. Is > this > > the right way to go about it? To inherit from numpy.ndarray? > > > > The reason I'm using a numpy array to store the graph by the way is the > > following: > > -Memory is not a concern (yet) so I don't need to use a sparse structure > > like a sparse array or a dictionary > > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns > > using fancy indexing, grabbing subgraphs using vector indexing > > > > -- > > David Grant > > http://www.davidgrant.ca > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Aug 1 16:36:16 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 13:36:16 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: I actually just looked into the boost graph library and hit a wall. I basically had trouble running bjam on it. It complained about a missing build file or something like that. Anyways, for now I can live with non-sparse implementation. 
This is mostly prototyping code for integration into a largely Java system (with some things written in C). So this will be ported to Java or C eventually. Whether or not I will need to prototype something that scales to thousands of nodes remains to be seen. Dave On 8/1/06, Charles R Harris wrote: > > Hi David, > > I often have several thousand nodes in a graph, sometimes clustered into > connected components. I suspect that using an adjacency matrix is an > inefficient representation for graphs of that size while for smaller graphs > the overhead of more complicated structures wouldn't be noticeable. Have you > looked at the boost graph library? I don't like all their stuff but it is a > good start with lots of code and a suitable license. > > Chuck > > On 8/1/06, David Grant wrote: > > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > class Graph: > def __init__(self, Adj): > self.Adj = Adj > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? > > The reason I'm using a numpy array to store the graph by the way is the > following: > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > -- > David Grant > http://www.davidgrant.ca > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Tue Aug 1 16:46:57 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 01 Aug 2006 13:46:57 -0700 Subject: [Numpy-discussion] a few problems and fixes Message-ID: <44CFBDC1.1000602@jpl.nasa.gov> Here are few problems I had with numpy and scipy 1) Compiling scipy on solaris requires running ld -G instead of gcc -shared. Apparently, gcc was not passing the correct args to my nongnu ld. I could not figure out how to alter setup.py to link using ld instead of gcc so I had to link by hand. 2) memmap has to be modified to remove "flush" on Windows. If calls to flush are allowed, Python (ActiveState) crashes at program exit. 3) savemat in scipy.io.mio had to be modified to remove type check since I am using the class memmap which derives from ndarray. In savemat a check is made that the object being save is an Array. Mathew From pau.gargallo at gmail.com Tue Aug 1 17:44:59 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue, 1 Aug 2006 23:44:59 +0200 Subject: [Numpy-discussion] Graph class In-Reply-To: References: Message-ID: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> you may be interested in this python graph library https://networkx.lanl.gov/ pau On 8/1/06, David Grant wrote: > I actually just looked into the boost graph library and hit a wall. I > basically had trouble running bjam on it. 
It complained about a missing > build file or something like that. > > Anyways, for now I can live with non-sparse implementation. This is mostly > prototyping code for integeration in to a largely Java system (with some > things written in C). So this will be ported to Java or C eventually. > Whether or not I will need to protoype something that scales to thousands of > nodes remains to be seen. > > Dave > > > On 8/1/06, Charles R Harris wrote: > > > > Hi David, > > > > I often have several thousand nodes in a graph, sometimes clustered into > connected components. I suspect that using an adjacency matrix is an > inefficient representation for graphs of that size while for smaller graphs > the overhead of more complicated structures wouldn't be noticeable. Have you > looked at the boost graph library? I don't like all their stuff but it is a > good start with lots of code and a suitable license. > > > > Chuck > > > > > > > > On 8/1/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > > > > I have written my own graph class, it doesn't really do much, just has a > few methods, it might do more later. Up until now it has just had one piece > of data, an adjacency matrix, so it looks something like this: > > > > class Graph: > > def __init__(self, Adj): > > self.Adj = Adj > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so then > I can just access itself directly rather than having to type self.Adj. Is > this the right way to go about it? To inherit from numpy.ndarray? 
> > > > The reason I'm using a numpy array to store the graph by the way is the > following: > > -Memory is not a concern (yet) so I don't need to use a sparse structure > like a sparse array or a dictionary > > -I run a lot of sums on it, argmin, blanking out of certain rows and > columns using fancy indexing, grabbing subgraphs using vector indexing > > > > > > -- > > David Grant > > http://www.davidgrant.ca > > > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > -- > David Grant > http://www.davidgrant.ca > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From davidgrant at gmail.com Tue Aug 1 18:20:00 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 1 Aug 2006 15:20:00 -0700 Subject: [Numpy-discussion] Graph class In-Reply-To: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> References: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> Message-ID: I saw that one as well. Looks neat! 
Too bad they rarely mention the word "graph" so they never come up on my google searches. I found them through del.icio.us by searching for python and graph. Dave On 8/1/06, Pau Gargallo wrote: > > you may be interested in this python graph library > https://networkx.lanl.gov/ > > pau > > On 8/1/06, David Grant wrote: > > I actually just looked into the boost graph library and hit a wall. I > > basically had trouble running bjam on it. It complained about a missing > > build file or something like that. > > > > Anyways, for now I can live with non-sparse implementation. This is > mostly > > prototyping code for integeration in to a largely Java system (with some > > things written in C). So this will be ported to Java or C eventually. > > Whether or not I will need to protoype something that scales to > thousands of > > nodes remains to be seen. > > > > Dave > > > > > > On 8/1/06, Charles R Harris wrote: > > > > > > Hi David, > > > > > > I often have several thousand nodes in a graph, sometimes clustered > into > > connected components. I suspect that using an adjacency matrix is an > > inefficient representation for graphs of that size while for smaller > graphs > > the overhead of more complicated structures wouldn't be noticeable. Have > you > > looked at the boost graph library? I don't like all their stuff but it > is a > > good start with lots of code and a suitable license. > > > > > > Chuck > > > > > > > > > > > > On 8/1/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > > > > > > > > > I have written my own graph class, it doesn't really do much, just has > a > > few methods, it might do more later. Up until now it has just had one > piece > > of data, an adjacency matrix, so it looks something like this: > > > > > > class Graph: > > > def __init__(self, Adj): > > > self.Adj = Adj > > > > > > I had the idea of changing Graph to inherit numpy.ndarray instead, so > then > > I can just access itself directly rather than having to type self.Adj. 
> Is > > this the right way to go about it? To inherit from numpy.ndarray? > > > > > > The reason I'm using a numpy array to store the graph by the way is > the > > following: > > > -Memory is not a concern (yet) so I don't need to use a sparse > structure > > like a sparse array or a dictionary > > > -I run a lot of sums on it, argmin, blanking out of certain rows and > > columns using fancy indexing, grabbing subgraphs using vector indexing > > > > > > > > > -- > > > David Grant > > > http://www.davidgrant.ca > > > > > > > > > ------------------------------------------------------------------------- > > > Take Surveys. Earn Cash. Influence the Future of IT > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > your > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > > > > > > > > > -- > > David Grant > > http://www.davidgrant.ca > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From torgil.svensson at gmail.com Tue Aug 1 18:45:38 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Wed, 2 Aug 2006 00:45:38 +0200 Subject: [Numpy-discussion] unexpected behaviour of numpy.var In-Reply-To: <44CFB186.8020802@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> Message-ID: > They are supposed to have different defaults because the functional > forms are largely for backward compatibility where axis=0 was the default. > > -Travis Isn't backwards compatibility what "oldnumeric" is for? +1 for consistent defaults. From oliphant at ee.byu.edu Tue Aug 1 20:21:49 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Aug 2006 18:21:49 -0600 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: References: <44CFB186.8020802@ee.byu.edu> Message-ID: <44CFF01D.4030800@ee.byu.edu> Torgil Svensson wrote: >>They are supposed to have different defaults because the functional >>forms are largely for backward compatibility where axis=0 was the default. >> >>-Travis >> >> > >Isn't backwards compatibility what "oldnumeric" is for? > > > As this discussion indicates there has been a switch from numpy 0.9.8 to numpy 1.0b of how to handle backward compatibility. Instead of importing old names a new sub-package numpy.oldnumeric was created. This mechanism is incomplete in the sense that there are still some backward-compatible items in numpy such as defaults on the axis keyword for functions versus methods and you still have to make the changes that convertcode.py makes to the code to get it to work. I'm wondering about whether or not some additional effort should be placed in numpy.oldnumeric so that replacing Numeric with numpy.oldnumeric actually gives no compatibility issues (i.e. the only thing you have to change is replace imports with new names). In other words a simple array sub-class could be created that mimics the old Numeric array and the old functions could be created as well with the same arguments. 
The very same thing could be done with numarray. This would make conversion almost trivial. Then, the convertcode script could be improved to make all the changes that would take a oldnumeric-based module to a more modern numpy-based module. A similar numarray script could be developed as well. What do people think? Is it worth it? This could be a coding-sprint effort at SciPy. -Travis From stefan at sun.ac.za Wed Aug 2 07:35:38 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 2 Aug 2006 13:35:38 +0200 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: <20060802113538.GB21448@mentat.za.net> On Tue, Aug 01, 2006 at 06:21:49PM -0600, Travis Oliphant wrote: > I'm wondering about whether or not some additional effort should be > placed in numpy.oldnumeric so that replacing Numeric with > numpy.oldnumeric actually gives no compatibility issues (i.e. the only > thing you have to change is replace imports with new names). In > other words a simple array sub-class could be created that mimics the > old Numeric array and the old functions could be created as well with > the same arguments. > > The very same thing could be done with numarray. This would make > conversion almost trivial. > > Then, the convertcode script could be improved to make all the changes > that would take a oldnumeric-based module to a more modern numpy-based > module. A similar numarray script could be developed as well. > > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. This sounds like a very good idea to me. I hope that those of us who cannot attend SciPy 2006 can still take part in the coding sprints, be it via IRC or some other communications media. 
Cheers Stéfan From bhendrix at enthought.com Wed Aug 2 13:46:12 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 02 Aug 2006 12:46:12 -0500 Subject: [Numpy-discussion] ANN: Python Enthought Edition 1.0.0 Released Message-ID: <44D0E4E4.4020304@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0 (http://code.enthought.com/enthon/) -- a python distribution for Windows. About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy SciPy IPython Enthought Tool Suite wxPython PIL mingw MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com 1.0.0 Release Notes ------------------------- A lot of work has gone into testing this release, and it is our most stable release to date, but there are a couple of caveats: * The generated documentation index entries are missing. The full text search does work and the table of contents is complete, so this feature will be pushed to version 1.1.0. * IPython may cause problems when starting the first time if a previous version of IPython was run. If you see "WARNING: could not import user config", follow the directions that follow the warning. * Some users are reporting that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list (http://sourceforge.net/mail/?group_id=80706) for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute or report a bug, you can do so at https://svn.enthought.com/enthought. 
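Returning to Travis's backward-compatibility proposal a few messages up (old-style functions recreated with their old arguments so that replacing Numeric imports is the only change needed): a minimal sketch of what such axis=0-default wrappers might look like. This is purely illustrative — the function names here are made up, and this is not the actual numpy.oldnumeric code.

```python
import numpy

# Hypothetical compatibility wrappers: old Numeric functions defaulted to
# axis=0, while the modern ndarray methods default to operating over the
# whole (flattened) array. A shim layer only needs to restore the default.

def old_style_var(a, axis=0):
    """Variance with the old Numeric-style default of axis=0."""
    return numpy.asarray(a).var(axis=axis)

def old_style_std(a, axis=0):
    """Standard deviation with the old Numeric-style default of axis=0."""
    return numpy.asarray(a).std(axis=axis)
```

On a 2-D array the difference shows up immediately: numpy.asarray(a).var() reduces over every element, while old_style_var(a) reduces down each column, matching the old functional behaviour Travis describes.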
From oliphant.travis at ieee.org Wed Aug 2 14:06:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 12:06:45 -0600 Subject: [Numpy-discussion] Release Notes for 1.0 posted Message-ID: <44D0E9B5.3080001@ieee.org> http://www.scipy.org/ReleaseNotes/NumPy_1.0 Please correct problems and add to them as needed. -Travis From torgil.svensson at gmail.com Wed Aug 2 14:31:15 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Wed, 2 Aug 2006 20:31:15 +0200 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. > > > -Travis Sounds like a good idea. This should make old code work while not imposing unnecessary restrictions on numpy due to backward compatibility. //Torgil From nvf at MIT.EDU Wed Aug 2 15:18:12 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Wed, 2 Aug 2006 15:18:12 -0400 Subject: [Numpy-discussion] Release Notes for 1.0 posted In-Reply-To: References: Message-ID: > Message: 2 > Date: Wed, 02 Aug 2006 12:06:45 -0600 > From: Travis Oliphant > Subject: [Numpy-discussion] Release Notes for 1.0 posted > To: numpy-discussion > Message-ID: <44D0E9B5.3080001 at ieee.org> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > http://www.scipy.org/ReleaseNotes/NumPy_1.0 > > Please correct problems and add to them as needed. > > -Travis > What's not clear to me upon reading this page is what diff set this is describing. Are these the changes between 0.9.8 and 1.0b1? 
Especially if this page is to be updated with each release, we should be explicit about what changed when. This is a helpful document. Thanks. Take care, Nick From loredo at astro.cornell.edu Wed Aug 2 15:46:57 2006 From: loredo at astro.cornell.edu (Tom Loredo) Date: Wed, 2 Aug 2006 15:46:57 -0400 Subject: [Numpy-discussion] Release Notes for 1.0 posted In-Reply-To: References: Message-ID: <1154548017.44d101312dabd@astrosun2.astro.cornell.edu> > http://www.scipy.org/ReleaseNotes/NumPy_1.0 > > Please correct problems and add to them as needed. This is incredibly helpful---quite a few things I wasn't aware of. Many, many thanks! -Tom Loredo ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From st at sigmasquared.net Wed Aug 2 15:52:05 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed, 02 Aug 2006 21:52:05 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users Message-ID: <44D10265.5010103@sigmasquared.net> Hi A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" to ask for help on his installation problems. How can I a) contact him in order to ask him to post his questions on the mailing lists, and b) most easily revert changes to wiki-pages? Any hint would be appreciated. Regards, Stephan From davidlinke at tiscali.de Wed Aug 2 16:13:28 2006 From: davidlinke at tiscali.de (David) Date: Wed, 02 Aug 2006 22:13:28 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10265.5010103@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> Message-ID: <44D10768.8030005@tiscali.de> Stephan Tolksdorf wrote: > Hi > > A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" > to ask for help on his installation problems. How can I > a) contact him in order to ask him to post his questions on the mailing > lists, and You cannot find out his email address as a normal wiki-user. 
Alternatively, you may add a note at the top of the wiki-page. > b) most easily revert changes to wiki-pages? "Normally", you will have a revert link at each version (if you have 'admin'-permission) at the page-info: http://new.scipy.org/Wiki/Installing_SciPy/Windows?action=info I assume that the people listed on http://new.scipy.org/Wiki/Installing_SciPy/EditorsGroup have this 'admin' permission. Maybe you can be added. Regards, David > Any hint would be appreciated. > > Regards, > Stephan From robert.kern at gmail.com Wed Aug 2 16:14:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 15:14:34 -0500 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10265.5010103@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> Message-ID: Stephan Tolksdorf wrote: > Hi > > A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" > to ask for help on his installation problems. How can I > a) contact him in order to ask him to post his questions on the mailing > lists, and Not sure. > b) most easily revert changes to wiki-pages? Click the "info" button on the page. There will be a list of revisions. Old revisions will have a "revert" link in the right-hand column. I believe (although I recommend checking the MoinMoin documentation before trying this) that clicking that link will revert the text back to whatever it was at that revision. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mark at mitre.org Wed Aug 2 16:51:07 2006 From: mark at mitre.org (Mark Heslep) Date: Wed, 02 Aug 2006 16:51:07 -0400 Subject: [Numpy-discussion] Fastest binary threshold? 
Message-ID: <44D1103B.9000808@mitre.org> I need a binary threshold and numpy.where() seems very slow on numpy 0.9.9.2800: python -m timeit -n 10 -s "import numpy as n;a=n.ones((512,512), n.uint8)*129" "a_bin=n.where( a>128, 128,0)" 10 loops, best of 3: 37.9 msec per loop I'm thinking the conversion of the min, max constants from python ints to n.uint8 might be slowing it down? Is there a better way? Scipy is also an option. Ive search up list quickly and nothing jumps out. For comparison Ive got some ctypes wrapped OpenCv code (that I'd like to avoid) doing the same thing in < 1 msec: Cv images here are unsigned 8 bit as above: python -m timeit -n 50 -s "import cv;sz=cv.cvSize(512,512);a=cv.cvCreateImage(sz, 8, 1); a_bin=cv.cvCreateImage(sz,8,1)" "cv.cvThreshold(a, a_bin, float(128), float(255), cv.CV_THRESH_BINARY )" 50 loops, best of 3: 348 usec per loop And with the Intel IPP optimizations turned on < 0.1msec: python -m timeit -n 50 -s "import cv; sz=cv.cvSize(512,512); a=cv.cvCreateImage(sz, 8, 1); a_bin=cv.cvCreateImage(sz,8,1)" "cv.cvThreshold(a, a_bin, float(128), float(255), cv.CV_THRESH_BINARY )" 50 loops, best of 3: 59.5 usec per loop Regards, Mark From strawman at astraw.com Wed Aug 2 16:53:48 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 02 Aug 2006 13:53:48 -0700 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D10768.8030005@tiscali.de> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> Message-ID: <44D110DC.7070901@astraw.com> David wrote: >Stephan Tolksdorf wrote: > > >>Hi >> >>A user named jlc46 is misusing the wiki page "Installing SciPy/Windows" >>to ask for help on his installation problems. How can I >>a) contact him in order to ask him to post his questions on the mailing >>lists, and >> >> > >You cannot find out his email address as a normal wiki-user. >Alternatively, you may add a note at the top of the wiki-page. 
> > > >>b) most easily revert changes to wiki-pages? >> >> > >"Normally", you will have a revert link at each version (if you have >'admin'-permission) at the page-info: >http://new.scipy.org/Wiki/Installing_SciPy/Windows?action=info > >I assume that the people listed on >http://new.scipy.org/Wiki/Installing_SciPy/EditorsGroup >have this 'admin' permission. Maybe you can be added. > > Stephan, I just added you to http://scipy.org/Wiki/EditorsGroup , so you should now have "revert" among your options in the "get info" page. The changes by jlc46, I agree, don't look like what we want up there in the long term. However, they do look like valid issues (s)he had while trying to follow the instructions on that page. Not being much of a Windows user myself, I have no idea what the issues involved are, but perhaps before simply reverting them you could get to the bottom of the issue? From st at sigmasquared.net Wed Aug 2 17:42:52 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed, 02 Aug 2006 23:42:52 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D110DC.7070901@astraw.com> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> Message-ID: <44D11C5C.8090700@sigmasquared.net> > The changes by jlc46, I agree, don't look like what we want up there in > the long term. However, they do look like valid issues (s)he had while > trying to follow the instructions on that page. Not being much of a > Windows user myself, I have no idea what the issues involved are, but > perhaps before simply reverting them you could get to the bottom of the > issue? I think these questions should be posted on the mailing list so that everybody gets a chance to answer them, not only the people subscribing to the particular Wiki page. Regarding the installation problems on Windows: A while ago I put some effort into writing a patch to correct a few build issues on windows. 
Due to unfortunate reasons nobody tried to apply the patch until part of it was obsoleted by changes of David M. Cooke to system_info.py. As I didn't keep track of David's changes to the build system I asked him for advice regarding the integration of my patch, but I never got a reply. Seems like I will have to bite the bullet and replicate some of my earlier efforts... Regards, Stephan From tim.hochberg at ieee.org Wed Aug 2 18:09:52 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 02 Aug 2006 15:09:52 -0700 Subject: [Numpy-discussion] Int64 and string support for numexpr In-Reply-To: <44CF7AF9.2070200@carabos.com> References: <44CF7AF9.2070200@carabos.com> Message-ID: <44D122B0.9030909@ieee.org> Ivan Vilata i Balaguer wrote: > Hi all, > > I'm attaching some patches that enable the current version of numexpr > (r2142) to: > > 1. Handle int64 integers in addition to int32 (constants, variables and > arrays). Python int objects are considered int32 if they fit in 32 > bits. Python long objects and int objects that don't fit in 32 bits > (for 64-bit platforms) are considered int64. > > 2. Handle string constants, variables and arrays (not Unicode), with > support for comparison operators (==, !=, <, <=, >=, >). (This > brings the old ``memsizes`` variable back.) String temporaries > (necessary for other kinds of operations) are not supported. > > The patches also include test cases and some minor corrections (e.g. > removing odd carriage returns in some lines in compile.py). There are > three patches to ease their individual review: > > * numexpr-int64.diff only contains the changes for int64 support. > * numexpr-str.diff only contains the changes for string support. > * numexpr-int64str.diff contains all changes. > > The task has been somehow difficult, but I think the result fits quite > well in numexpr. So, what's your opinion about the patches? Are they > worth integrating into the main branch? Thanks! 
> Unfortunately, I'm in the process of moving everything over to a new box, so my build environment is all broken and I can't try them out right now. However, just so you don't think everyone is ignoring you, I figured I'd reply. What use cases do you have in mind for the string comparison stuff? Strings are one of those features of numpy that I've personally never seen a use for, so I'm not that enthusiastic about them in numexpr, particularly since it sounds like support is likely to only be partial. However, feel free to convince me otherwise. Or just convince David Cooke ;-) -tim > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From cookedm at physics.mcmaster.ca Wed Aug 2 18:33:24 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Wed, 2 Aug 2006 18:33:24 -0400 Subject: [Numpy-discussion] Graph class In-Reply-To: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> References: <6ef8f3380608011444h120fd82fj49e8e530382af4cd@mail.gmail.com> Message-ID: <20060802183324.3e9b0e29@arbutus.physics.mcmaster.ca> On Tue, 1 Aug 2006 23:44:59 +0200 "Pau Gargallo" wrote: > you may be interested in this python graph library > https://networkx.lanl.gov/ There's also http://wiki.python.org/moin/PythonGraphApi, which lists a bunch. It's the result of a discussion on c.l.py a few years ago about trying to come up with a standard API for graphs. I don't believe they came up with anything, but that page contains ideas to consider. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed Aug 2 18:36:37 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 2 Aug 2006 18:36:37 -0400 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D11C5C.8090700@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> Message-ID: <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> On Wed, 02 Aug 2006 23:42:52 +0200 Stephan Tolksdorf wrote: > > The changes by jlc46, I agree, don't look like what we want up there in > > the long term. However, they do look like valid issues (s)he had while > > trying to follow the instructions on that page. Not being much of a > > Windows user myself, I have no idea what the issues involved are, but > > perhaps before simply reverting them you could get to the bottom of the > > issue? > > I think these questions should be posted on the mailing list so that > everybody gets a chance to answer them, not only the people subscribing > to the particular Wiki page. 
> > Regarding the installation problems on Windows: A while ago I put some > effort into writing a patch to correct a few build issues on windows. > Due to unfortunate reasons nobody tried to apply the patch until part of > it was obsoleted by changes of David M. Cooke to system_info.py. As I > didn't keep track of David's changes to the build system I asked him for > advice regarding the integration of my patch, but I never got a reply. > Seems like I will have to bite the bullet and replicate some of my > earlier efforts... I updated that patch to work (it's in ticket #114, btw, for those following along), and integrated it last week. Please give the current svn a try to see how it works. I had it done mid-July, but I guess you didn't get the Trac email? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From st at sigmasquared.net Wed Aug 2 19:00:06 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu, 03 Aug 2006 01:00:06 +0200 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> Message-ID: <44D12E76.7080801@sigmasquared.net> Hi David, > I updated that patch to work (it's in ticket #114, btw, for those following > along), and integrated it last week. Please give the current svn a try to see > how it works. > I'm really sorry I overlooked your changes. Thanks a lot for your efforts. I will try the various windows builds in the next days and address the remaining issues. > I had it done mid-July, but I guess you didn't get the Trac email? I haven't received any email notfication from Trac. Is there something I can do about the missing notifications? 
Stephan From cookedm at physics.mcmaster.ca Wed Aug 2 19:22:25 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 2 Aug 2006 19:22:25 -0400 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D12E76.7080801@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> <44D12E76.7080801@sigmasquared.net> Message-ID: <20060802192225.5b7efb42@arbutus.physics.mcmaster.ca> On Thu, 03 Aug 2006 01:00:06 +0200 Stephan Tolksdorf wrote: > I haven't received any email notfication from Trac. Is there something I > can do about the missing notifications? When logged in, check "Settings" (upper-right corner, besides Logout). Make sure your email address is in there. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Wed Aug 2 19:24:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 18:24:57 -0500 Subject: [Numpy-discussion] Reverting changes on Wiki, contacting users In-Reply-To: <44D12E76.7080801@sigmasquared.net> References: <44D10265.5010103@sigmasquared.net> <44D10768.8030005@tiscali.de> <44D110DC.7070901@astraw.com> <44D11C5C.8090700@sigmasquared.net> <20060802183637.7a33c0cf@arbutus.physics.mcmaster.ca> <44D12E76.7080801@sigmasquared.net> Message-ID: Stephan Tolksdorf wrote: > Hi David, > >> I updated that patch to work (it's in ticket #114, btw, for those following >> along), and integrated it last week. Please give the current svn a try to see >> how it works. > > I'm really sorry I overlooked your changes. Thanks a lot for your > efforts. I will try the various windows builds in the next days and > address the remaining issues. 
> > > I had it done mid-July, but I guess you didn't get the Trac email? > > I haven't received any email notification from Trac. Is there something I > can do about the missing notifications? You can sign up for the numpy-tickets mailing list. http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Wed Aug 2 20:45:22 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 3 Aug 2006 02:45:22 +0200 Subject: [Numpy-discussion] Fastest binary threshold? In-Reply-To: <44D1103B.9000808@mitre.org> References: <44D1103B.9000808@mitre.org> Message-ID: <20060803004522.GC6682@mentat.za.net> On Wed, Aug 02, 2006 at 04:51:07PM -0400, Mark Heslep wrote: > I need a binary threshold and numpy.where() seems very slow on numpy > 0.9.9.2800: > > python -m timeit -n 10 -s "import numpy as n;a=n.ones((512,512), > n.uint8)*129" > "a_bin=n.where( a>128, 128,0)" > 10 loops, best of 3: 37.9 msec per loop Using numpy indexing brings the time down by a factor of 10 or so: In [46]: timeit b = N.where(a>128,128,0) 10 loops, best of 3: 27.1 ms per loop In [47]: timeit b = (a > 128).astype(N.uint8) * 128 100 loops, best of 3: 3.45 ms per loop Binary thresholding can be added to ndimage easily, if further speed improvement is needed. Regards Stéfan From haase at msg.ucsf.edu Thu Aug 3 00:31:38 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 21:31:38 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' Message-ID: <44D17C2A.2050601@msg.ucsf.edu> Hi! I just finished maybe a total of 5 hours tracking down a nasty bug. So I thought I would share this: I'm keeping a version of (old) SciPy's 'plt' module around. (I know about matplotlib - anyway - ...) I changed the code some time ago from Numeric to numarray - no problem.
Now I switched to numpy ... and suddenly the zooming does not work anymore: it always zooms to "full view". Finally I traced the problem down to a utility function: "is_number" - it is simply implemented as def is_number(val): return (type(val) in [type(0.0),type(0)]) As I said - now I finally saw that I always got False since the type of my number (0.025) is <type 'float64scalar'> and that's neither <type 'float'> nor <type 'int'> OK - how should this have been done right ? Anyway, I'm excited about the new numpy and am looking forward to its progress Thanks, Sebastian Haase From robert.kern at gmail.com Thu Aug 3 00:43:44 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 23:43:44 -0500 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi! > I just finished maybe a total of 5 hours tracking down a nasty bug. > So I thought I would share this: > I'm keeping a version of (old) SciPy's 'plt' module around. > (I know about matplotlib - anyway - ...) > I changed the code some time ago from Numeric to numarray - no problem. > Now I switched to numpy ... and suddenly the zooming does not work > anymore: it always zooms to "full view". > > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > > As I said - now I finally saw that I always got > False since the type of my number (0.025) is > <type 'float64scalar'> > and that's neither <type 'float'> nor <type 'int'> > > OK - how should this have been done right ? It depends on how is_number() is actually used. Probably the best thing to do would be to take a step back and reorganize whatever is calling it to not require specific types. Quick-and-dirty: use isinstance() instead since float64scalar inherits from float. However, float32scalar does not, so this is not a real solution, just a hack to get you on your merry way.
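[Editor's note: to see the failure concretely, here is a minimal sketch of the strict check alongside a duck-typed alternative. It uses modern Python 3 / numpy spellings (`np.float64`, `numbers.Number`), which postdate the 2006 API under discussion; `is_number_strict` reconstructs the plt-style helper, and `is_number` is one possible replacement, not the original code.]

```python
import numbers
import numpy as np

def is_number_strict(val):
    # The plt-style check from the thread: an exact type match,
    # so numpy scalar types such as float64 never pass.
    return type(val) in [type(0.0), type(0)]

def is_number(val):
    # Duck-typed alternative: accept anything registered as a Number,
    # plus numpy's scalar hierarchy (float32, int16, ...).
    return isinstance(val, (numbers.Number, np.number))

x = np.float64(0.025)
print(type(x).__name__, is_number_strict(x), is_number(x))
# prints: float64 False True
```

As the reply above suggests, the sturdier fix is to drop the type check entirely and simply attempt the numeric operation, catching the exception.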
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Thu Aug 3 00:55:34 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 21:55:34 -0700 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44CFF01D.4030800@ee.byu.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu> Message-ID: <44D181C6.5060605@msg.ucsf.edu> Travis Oliphant wrote: > Torgil Svensson wrote: > >>> They are supposed to have different defaults because the functional >>> forms are largely for backward compatibility where axis=0 was the default. >>> >>> -Travis >>> >>> >> Isn't backwards compatibility what "oldnumeric" is for? >> >> >> > > As this discussion indicates there has been a switch from numpy 0.9.8 to > numpy 1.0b of how to handle backward compatibility. Instead of > importing old names a new sub-package numpy.oldnumeric was created. > This mechanism is incomplete in the sense that there are still some > backward-compatible items in numpy such as defaults on the axis keyword > for functions versus methods and you still have to make the changes that > convertcode.py makes to the code to get it to work. > > I'm wondering about whether or not some additional effort should be > placed in numpy.oldnumeric so that replacing Numeric with > numpy.oldnumeric actually gives no compatibility issues (i.e. the only > thing you have to change is replace imports with new names). In > other words a simple array sub-class could be created that mimics the > old Numeric array and the old functions could be created as well with > the same arguments. > > The very same thing could be done with numarray. This would make > conversion almost trivial. 
> > Then, the convertcode script could be improved to make all the changes > that would take a oldnumeric-based module to a more modern numpy-based > module. A similar numarray script could be developed as well. > > What do people think? Is it worth it? This could be a coding-sprint > effort at SciPy. > > > -Travis Hi, Just as thought of cautiousness: If people actually get "too much" encouraged to just always say " from numpy.oldnumeric import * " or as suggested maybe soon also something like " from numpy.oldnumarray import * " - could this not soon lead to a great state of confusion when later people on this mailing list ask questions and nobody really knows which of the submodules they are referring to !? Recently someone (Torgil Svensson) here suggested to unify the default argument between a method and a function - I think the discussion was about numpy.var and it's "axis" argument. I would be a clear +1 on unifying these and have a clean design of numpy. Consequently the old way of different defaults should be absorbed by the oldnumeric sub module. All I'm saying then is that this could cause confusion later on - and therefore the whole idea of "easy backwards compatibility" should be qualified by encouraging people to adopt the most problematic changes (like new default values) rather sooner than later. I'm hoping that numpy will find soon an increasingly broader acceptance in the whole Python community (and the entire scientific community for that matter ;-) ). Thanks for all your work, Sebastian Haase From oliphant.travis at ieee.org Thu Aug 3 01:02:39 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 23:02:39 -0600 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: <44D1836F.6070809@ieee.org> Sebastian Haase wrote: > Hi! > I just finished maybe a total of 5 hours tracking down a nasty bug. 
> > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > > As I said - now I finally saw that I always got > False since the type of my number (0.025) is > > and that's neither nor > > OK - how should this have been done right ? > > Code that depends on specific types like this is going to be hard to maintain in Python because many types could reasonably act like a number. I do see code like this pop up from time to time and it will bite you more with NumPy (which has a whole slew of scalar types). The scalar-types are in a hierarchy and so you could replace the code with def is_number(val): return isinstance(val, (int, float, numpy.number)) But, this will break with other "scalar-types" that it really should work with. It's best to look at what is calling is_number and think about what it wants to do with the object and just try it and catch the exception. -Travis From haase at msg.ucsf.edu Thu Aug 3 01:16:59 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 02 Aug 2006 22:16:59 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D1836F.6070809@ieee.org> References: <44D17C2A.2050601@msg.ucsf.edu> <44D1836F.6070809@ieee.org> Message-ID: <44D186CB.6000307@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi! >> I just finished maybe a total of 5 hours tracking down a nasty bug. >> >> Finally I traced the problem down to a utility function: >> "is_number" - it is simply implemented as >> def is_number(val): >> return (type(val) in [type(0.0),type(0)]) >> >> As I said - now I finally saw that I always got >> False since the type of my number (0.025) is >> >> and that's neither nor >> >> OK - how should this have been done right ? 
>> >> > > Code that depends on specific types like this is going to be hard to > maintain in Python because many types could reasonably act like a > number. I do see code like this pop up from time to time and it will > bite you more with NumPy (which has a whole slew of scalar types). > > The scalar-types are in a hierarchy and so you could replace the code with > > def is_number(val): > return isinstance(val, (int, float, numpy.number)) > > But, this will break with other "scalar-types" that it really should > work with. It's best to look at what is calling is_number and think > about what it wants to do with the object and just try it and catch the > exception. > > -Travis > Thanks, I just found numpy.isscalar() and numpy.issctype() ? These sound like they would do what I need - what is the difference between the two ? (I found that issctype worked OK while isscalar gave some exception in some cases !? ) - Sebastian From aisaac at american.edu Thu Aug 3 01:42:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 3 Aug 2006 01:42:05 -0400 Subject: [Numpy-discussion] Handling of backward compatibility In-Reply-To: <44D181C6.5060605@msg.ucsf.edu> References: <44CFB186.8020802@ee.byu.edu> <44CFF01D.4030800@ee.byu.edu><44D181C6.5060605@msg.ucsf.edu> Message-ID: On Wed, 02 Aug 2006, Sebastian Haase apparently wrote: > Recently someone (Torgil Svensson) here suggested to unify > the default argument between a method and a function > - I think the discussion was about numpy.var and it's > "axis" argument. I would be a clear +1 on unifying these > and have a clean design of numpy. Consequently the old way > of different defaults should be absorbed by the oldnumeric > sub module. +1 I think this consistency is *really* important for the easy acceptance of numpy by new users. (For a user's perspective, I also think is is just good design.) I expect many new users to be "burned" by this inconsistency. 
However, as an intermediate run (say 1 year) transition measure to the consistent use, I would be comfortable with the numpy functions requiring an axis argument. One user's view, Alan Isaac From pruggera at gmail.com Thu Aug 3 01:41:13 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Wed, 2 Aug 2006 22:41:13 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: Message-ID: A variation of the proposed convolve routine is very fast: regular python took: 1.150214 sec. numpy mean slice took: 2.427513 sec. numpy convolve took: 0.546854 sec. numpy convolve noloop took: 0.058611 sec. Code: # mean of n values within an array import numpy, time def nmean(list,n): a = [] for i in range(1,len(list)+1): start = i-n divisor = n if start < 0: start = 0 divisor = i a.append(sum(list[start:i])/divisor) return a t = [1.0*i for i in range(1400)] start = time.clock() for x in range(100): reg = nmean(t,50) print "regular python took: %f sec."%(time.clock() - start) def numpy_nmean(list,n): a = numpy.empty(len(list),dtype=float) for i in range(1,len(list)+1): start = i-n if start < 0: start = 0 a[i-1] = list[start:i].mean(0) return a t = numpy.arange(0,1400,dtype=float) start = time.clock() for x in range(100): npm = numpy_nmean(t,50) print "numpy mean slice took: %f sec."%(time.clock() - start) def numpy_nmean_conv(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") for i in range(0,len(list)): if i < n : a[i] /= i + 1 else : a[i] /= n return a[:len(list)] t = numpy.arange(0,1400,dtype=float) start = time.clock() for x in range(100): npc = numpy_nmean_conv(t,50) print "numpy convolve took: %f sec."%(time.clock() - start) def numpy_nmean_conv_nl(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") for i in range(n): a[i] /= i + 1 a[n:] /= n return a[:len(list)] t = numpy.arange(0,1400,dtype=float) start = time.clock() for x in range(100): npn = numpy_nmean_conv_nl(t,50) print "numpy convolve 
noloop took: %f sec."%(time.clock() - start) numpy.testing.assert_equal(reg,npm) numpy.testing.assert_equal(reg,npc) numpy.testing.assert_equal(reg,npn) On 7/29/06, David Grant wrote: > > > > On 7/29/06, Charles R Harris wrote: > > > > Hmmm, > > > > I rewrote the subroutine a bit. > > > > > > def numpy_nmean(list,n): > > a = numpy.empty(len(list),dtype=float) > > > > b = numpy.cumsum(list) > > for i in range(0,len(list)): > > if i < n : > > a[i] = b[i]/(i+1) > > else : > > a[i] = (b[i] - b[i-n])/(i+1) > > return a > > > > and got > > > > regular python took: 0.750000 sec. > > numpy took: 0.380000 sec. > > > I got rid of the for loop entirely. Usually this is the thing to do, at > least this will always give speedups in Matlab and also in my limited > experience with Numpy/Numeric: > > def numpy_nmean2(list,n): > > a = numpy.empty(len(list),dtype=float) > b = numpy.cumsum(list) > c = concatenate((b[n:],b[:n])) > a[:n] = b[:n]/(i+1) > a[n:] = (b[n:] - c[n:])/(i+1) > return a > > I got no noticeable speedup from doing this which I thought was pretty > amazing. I even profiled all the functions, the original, the one written by > Charles, and mine, using hotspot just to make sure nothing funny was going > on. I guess plain old Python can be better than you'd expect in certain > situtations. > > -- > David Grant From oliphant.travis at ieee.org Thu Aug 3 01:43:48 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 02 Aug 2006 23:43:48 -0600 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D186CB.6000307@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> <44D1836F.6070809@ieee.org> <44D186CB.6000307@msg.ucsf.edu> Message-ID: <44D18D14.8030609@ieee.org> Sebastian Haase wrote: > Thanks, > I just found > numpy.isscalar() and numpy.issctype() ? > These sound like they would do what I need - what is the difference > between the two ? > Oh, yeah. 
numpy.issctype works with type objects numpy.isscalar works with instances Neither of them distinguish between scalars and "numbers." If you get errors with isscalar it would be nice to know what they are. -Travis From rvandermerwe at ska.ac.za Thu Aug 3 05:02:11 2006 From: rvandermerwe at ska.ac.za (Rudolph van der Merwe) Date: Thu, 3 Aug 2006 11:02:11 +0200 Subject: [Numpy-discussion] Confusion re. version numbers Message-ID: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> Is the current 1.0b1 version of Numpy a maintenace release of the stable 1.0 release, or is it a BETA release for the UPCOMMING 1.0 release of Numpy? -- Rudolph van der Merwe From cookedm at physics.mcmaster.ca Thu Aug 3 05:26:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 3 Aug 2006 05:26:32 -0400 Subject: [Numpy-discussion] Confusion re. version numbers In-Reply-To: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> References: <97670e910608030202i591fd9cbybbd1d297307204c2@mail.gmail.com> Message-ID: <20060803092632.GA10364@arbutus.physics.mcmaster.ca> On Thu, Aug 03, 2006 at 11:02:11AM +0200, Rudolph van der Merwe wrote: > Is the current 1.0b1 version of Numpy a maintenace release of the > stable 1.0 release, or is it a BETA release for the UPCOMMING 1.0 > release of Numpy? Beta. Maintenance releases will have version numbers like 1.0.1. -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From mfajer at gmail.com Thu Aug 3 10:49:43 2006 From: mfajer at gmail.com (Mikolai Fajer) Date: Thu, 3 Aug 2006 10:49:43 -0400 Subject: [Numpy-discussion] Histogram versus histogram2d Message-ID: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> Hello, I have noticed that the 1d histogram and the 2d histogram behave differently. The histogram function bins everything between the elements of edges, and then includes everything greater than the last edge element in the last bin. The histogram2d function only bins in the range specified by edges. Is there a reason these two functions do not operate in the same way? -- -Mikolai Fajer-
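The two binning conventions being contrasted here can be illustrated without either histogram function. A small sketch (the data, edges, and variable names below are invented for the example, not taken from the thread):

```python
import numpy as np

data = np.array([0.5, 1.5, 2.5, 99.0])
edges = np.array([0.0, 1.0, 2.0, 3.0])

# 1-D histogram behavior as described: values beyond the last edge are
# folded into the last bin.
idx = np.searchsorted(edges, data, side="right") - 1
idx = np.clip(idx, 0, len(edges) - 2)
counts_folded = np.bincount(idx, minlength=len(edges) - 1)
print(counts_folded)  # the out-of-range 99.0 lands in the last bin

# histogram2d-style behavior as described: values outside the edge range
# are simply dropped.
inside = (data >= edges[0]) & (data < edges[-1])
counts_clipped = np.bincount(idx[inside], minlength=len(edges) - 1)
print(counts_clipped)  # the 99.0 is not counted at all
```

With these inputs the folded counts are [1, 1, 2] and the clipped counts are [1, 1, 1]; the difference is exactly the sample past the last edge.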
From mark at mitre.org Thu Aug 3 11:25:26 2006 From: mark at mitre.org (Mark Heslep) Date: Thu, 03 Aug 2006 11:25:26 -0400 Subject: [Numpy-discussion] Fastest binary threshold? In-Reply-To: <20060803004522.GC6682@mentat.za.net> References: <44D1103B.9000808@mitre.org> <20060803004522.GC6682@mentat.za.net> Message-ID: <44D21566.9060708@mitre.org> Stefan van der Walt wrote: > Binary thresholding can be added to ndimage easily, if further speed > improvement is needed. > > Regards > Stéfan Yes, I'd like to become involved in that effort. What's the status of ndimage now? Has it all been brought over from numarray and placed, where?
Is there a template of some kind for adding new code? Regards, Mark From charlesr.harris at gmail.com Thu Aug 3 11:38:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 3 Aug 2006 09:38:25 -0600 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: Message-ID: Heh, This is fun. Two more variations with 1000 reps instead of 100 for better timing: def numpy_nmean_conv_nl_tweak1(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n + 1) a[n:] /= n return a[:len(list)] def numpy_nmean_conv_nl_tweak2(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n + 1) a[n:] *= 1.0/n return a[:len(list)] Which gives numpy convolve took: 2.630000 sec. numpy convolve noloop took: 0.320000 sec. numpy convolve noloop tweak1 took: 0.250000 sec. numpy convolve noloop tweak2 took: 0.240000 sec. Chuck On 8/2/06, Phil Ruggera wrote: > > A variation of the proposed convolve routine is very fast: > > regular python took: 1.150214 sec. > numpy mean slice took: 2.427513 sec. > numpy convolve took: 0.546854 sec. > numpy convolve noloop took: 0.058611 sec. 
> > Code: > > # mean of n values within an array > import numpy, time > def nmean(list,n): > a = [] > for i in range(1,len(list)+1): > start = i-n > divisor = n > if start < 0: > start = 0 > divisor = i > a.append(sum(list[start:i])/divisor) > return a > > t = [1.0*i for i in range(1400)] > start = time.clock() > for x in range(100): > reg = nmean(t,50) > print "regular python took: %f sec."%(time.clock() - start) > > def numpy_nmean(list,n): > a = numpy.empty(len(list),dtype=float) > for i in range(1,len(list)+1): > start = i-n > if start < 0: > start = 0 > a[i-1] = list[start:i].mean(0) > return a > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npm = numpy_nmean(t,50) > print "numpy mean slice took: %f sec."%(time.clock() - start) > > def numpy_nmean_conv(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(0,len(list)): > if i < n : > a[i] /= i + 1 > else : > a[i] /= n > return a[:len(list)] > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npc = numpy_nmean_conv(t,50) > print "numpy convolve took: %f sec."%(time.clock() - start) > > def numpy_nmean_conv_nl(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(n): > a[i] /= i + 1 > a[n:] /= n > return a[:len(list)] > > t = numpy.arange(0,1400,dtype=float) > start = time.clock() > for x in range(100): > npn = numpy_nmean_conv_nl(t,50) > print "numpy convolve noloop took: %f sec."%(time.clock() - start) > > numpy.testing.assert_equal(reg,npm) > numpy.testing.assert_equal(reg,npc) > numpy.testing.assert_equal(reg,npn) > > On 7/29/06, David Grant wrote: > > > > > > > > On 7/29/06, Charles R Harris wrote: > > > > > > Hmmm, > > > > > > I rewrote the subroutine a bit. 
> > > > > > > > > def numpy_nmean(list,n): > > > a = numpy.empty(len(list),dtype=float) > > > > > > b = numpy.cumsum(list) > > > for i in range(0,len(list)): > > > if i < n : > > > a[i] = b[i]/(i+1) > > > else : > > > a[i] = (b[i] - b[i-n])/(i+1) > > > return a > > > > > > and got > > > > > > regular python took: 0.750000 sec. > > > numpy took: 0.380000 sec. > > > > > > I got rid of the for loop entirely. Usually this is the thing to do, at > > least this will always give speedups in Matlab and also in my limited > > experience with Numpy/Numeric: > > > > def numpy_nmean2(list,n): > > > > a = numpy.empty(len(list),dtype=float) > > b = numpy.cumsum(list) > > c = concatenate((b[n:],b[:n])) > > a[:n] = b[:n]/(i+1) > > a[n:] = (b[n:] - c[n:])/(i+1) > > return a > > > > I got no noticeable speedup from doing this which I thought was pretty > > amazing. I even profiled all the functions, the original, the one > written by > > Charles, and mine, using hotspot just to make sure nothing funny was > going > > on. I guess plain old Python can be better than you'd expect in certain > > situtations. > > > > -- > > David Grant > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
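The trailing mean benchmarked throughout this thread can also be written with no Python loop at all using `cumsum`. A sketch (the function name `running_mean` is invented here; the edge handling matches the thread's `nmean` baseline, where the first n positions average over however many values exist so far):

```python
import numpy as np

def running_mean(x, n):
    """Mean of the trailing (up to) n values at each position, in O(len(x))."""
    x = np.asarray(x, dtype=float)
    c = np.cumsum(x)
    out = np.empty_like(c)
    m = min(n, len(x))
    out[:m] = c[:m] / np.arange(1, m + 1)   # partial windows at the start
    out[n:] = (c[n:] - c[:-n]) / n          # full windows of width n
    return out

print(running_mean([1.0, 2.0, 3.0, 4.0, 5.0], 2))  # means 1.0, 1.5, 2.5, 3.5, 4.5
```

Note that the full-window branch divides by n, not i+1: the quoted `numpy_nmean` rewrite's `a[i] = (b[i] - b[i-n])/(i+1)` divides a width-n window sum by the wrong count.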
URL: From haase at msg.ucsf.edu Thu Aug 3 12:32:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 3 Aug 2006 09:32:30 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D18D14.8030609@ieee.org> References: <44D17C2A.2050601@msg.ucsf.edu> <44D186CB.6000307@msg.ucsf.edu> <44D18D14.8030609@ieee.org> Message-ID: <200608030932.31118.haase@msg.ucsf.edu> On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: > Sebastian Haase wrote: > > Thanks, > > I just found > > numpy.isscalar() and numpy.issctype() ? > > These sound like they would do what I need - what is the difference > > between the two ? > > Oh, yeah. > > numpy.issctype works with type objects > numpy.isscalar works with instances > > Neither of them distinguish between scalars and "numbers." > > If you get errors with isscalar it would be nice to know what they are. I'm still trying to reproduce the exception, but here is a first comparison that - honestly - does not make much sense to me: (type vs. instance seems to get mostly the same results and why is there a difference with a string ('12') ) >>> N.isscalar(12) True >>> N.issctype(12) True >>> N.isscalar('12') True >>> N.issctype('12') False >>> N.isscalar(N.array([1])) False >>> N.issctype(N.array([1])) True >>> N.isscalar(N.array([1]).dtype) False >>> N.issctype(N.array([1]).dtype) False # apparently new 'scalars' have a dtype attribute ! >>> N.isscalar(N.array([1])[0].dtype) False >>> N.issctype(N.array([1])[0].dtype) False >>> N.isscalar(N.array([1])[0]) True >>> N.issctype(N.array([1])[0]) True -Sebastian From Chris.Barker at noaa.gov Thu Aug 3 13:33:54 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 03 Aug 2006 10:33:54 -0700 Subject: [Numpy-discussion] help!
type 'float64scalar' is not type 'float' In-Reply-To: <44D17C2A.2050601@msg.ucsf.edu> References: <44D17C2A.2050601@msg.ucsf.edu> Message-ID: <44D23382.50606@noaa.gov> Sebastian Haase wrote: > Finally I traced the problem down to a utility function: > "is_number" - it is simply implemented as > def is_number(val): > return (type(val) in [type(0.0),type(0)]) > OK - how should this have been done right ? Well, as others have said, Python uses "duck typing", so you really shouldn't be checking for specific types anyway -- if whatever is passed in acts like it should, that's all you need to know. However, sometimes it does make sense to catch the error sooner, rather than later, so that it can be obvious, or handled properly, or give a better error message, or whatever. In this case, I still use a "duck typing" approach: I don't need to know exactly what type it is, I just need to know that I can use it in the way I want, and an easy way to do that is to turn it into a known type: def is_number(val): try: float(val) return True except (TypeError, ValueError): return False Though more often, I'd just call float on it, and pass that along, rather than explicitly checking. This works at least with numpy float64scalar and float32scalar, and it should work with all numpy scalar types, except perhaps the long types that don't fit into a Python float. It'll also turn strings into floats if it can, which may or may not be what you want. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From stefan at sun.ac.za Thu Aug 3 14:35:34 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 3 Aug 2006 20:35:34 +0200 Subject: [Numpy-discussion] Fastest binary threshold?
In-Reply-To: <44D21566.9060708@mitre.org> References: <44D1103B.9000808@mitre.org> <20060803004522.GC6682@mentat.za.net> <44D21566.9060708@mitre.org> Message-ID: <20060803183534.GF6682@mentat.za.net> Hi Mark On Thu, Aug 03, 2006 at 11:25:26AM -0400, Mark Heslep wrote: > Stefan van der Walt wrote: > > Binary thresholding can be added to ndimage easily, if further speed > > improvement is needed. > > > > Regards > > Stéfan > Yes, I'd like to become involved in that effort. What's the status of > ndimage now? Has it all been brought over from numarray and placed, > where? Is there a template of some kind for adding new code? You can find 'ndimage' in scipy. Travis also recently added the STSCI image processing tools to the sandbox. Stéfan From oliphant at ee.byu.edu Thu Aug 3 16:37:33 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 3 Aug 2006 13:37:33 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' Message-ID: <200608031337.33596.oliphant@ee.byu.edu> Sebastian Haase wrote: >On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: >>Sebastian Haase wrote: >>>Thanks, >>>I just found >>>numpy.isscalar() and numpy.issctype() ? >>>These sound like they would do what I need - what is the difference >>>between the two ? >> >>Oh, yeah. >> >>numpy.issctype works with type objects >>numpy.isscalar works with instances >> >>Neither of them distinguish between scalars and "numbers." >> >>If you get errors with isscalar it would be nice to know what they are. > >I'm still trying to reproduce the exception, but here is a first comparison >that - honestly - does not make much sense to me: >(type vs. instance seems to get mostly the same results and why is there a >difference with a string ('12') ) These routines are a little buggy. I've cleaned them up in SVN to reflect what they should do. When the dtype object came into existence a lot of what the scalar types were being used for was no longer needed.
Some of these functions weren't updated to deal with the dtype objects correctly either. This is what you get now: >>> import numpy as N >>> N.isscalar(12) True >>> N.issctype(12) False >>> N.isscalar('12') True >>> N.issctype('12') False >>> N.isscalar(N.array([1])) False >>> N.issctype(N.array([1])) False >>> N.isscalar(N.array([1]).dtype) False >>> N.issctype(N.array([1]).dtype) True >>> N.isscalar(N.array([1])[0].dtype) False >>> N.issctype(N.array([1])[0].dtype) True >>> N.isscalar(N.array([1])[0]) True >>> N.issctype(N.array([1])[0]) False -Travis >>>>N.isscalar(12) > >True > >>>>N.issctype(12) > >True > >>>>N.isscalar('12') > >True > >>>>N.issctype('12') > >False > >>>>N.isscalar(N.array([1])) > >False > >>>>N.issctype(N.array([1])) > >True > >>>>N.isscalar(N.array([1]).dtype) > >False > >>>>N.issctype(N.array([1]).dtype) > >False > > # apparently new 'scalars' have a dtype attribute ! > >>>>N.isscalar(N.array([1])[0].dtype) > >False > >>>>N.issctype(N.array([1])[0].dtype) > >False > >>>>N.isscalar(N.array([1])[0]) > >True > >>>>N.issctype(N.array([1])[0]) > >True > >-Sebastian From haase at msg.ucsf.edu Thu Aug 3 16:42:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 3 Aug 2006 13:42:30 -0700 Subject: [Numpy-discussion] help! type 'float64scalar' is not type 'float' In-Reply-To: <44D25A59.4010907@ee.byu.edu> References: <44D17C2A.2050601@msg.ucsf.edu> <200608030932.31118.haase@msg.ucsf.edu> <44D25A59.4010907@ee.byu.edu> Message-ID: <200608031342.30977.haase@msg.ucsf.edu> On Thursday 03 August 2006 13:19, Travis Oliphant wrote: > Sebastian Haase wrote: > >On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: > >>Sebastian Haase wrote: > >>>Thanks, > >>>I just found > >>>numpy.isscalar() and numpy.issctype() ? > >>>These sound like they would do what I need - what is the difference > >>>between the two ? > >> > >>Oh, yeah. 
> >> > >>numpy.issctype works with type objects > >>numpy.isscalar works with instances > >> > >>Neither of them distinguish between scalars and "numbers." > >> > >>If you get errors with isscalar it would be nice to know what they are. > > > >I'm still trying to reproduce the exception, but here is a first > > comparison that - honestly - does not make much sense to me: > >(type vs. instance seems to get mostly the same results and why is there > > a difference with a string ('12') ) > > These routines are a little buggy. I've cleaned them up in SVN to > reflect what they should do. When the dtype object came into > existence a lot of what the scalar types where being used for was no > longer needed. Some of these functions weren't updated to deal with > the dtype objects correctly either. > > This is what you get now: > >>> import numpy as N > >>> N.isscalar(12) > > True > > >>> N.issctype(12) > > False > > >>> N.isscalar('12') > > True > > >>> N.issctype('12') > > False > > >>> N.isscalar(N.array([1])) > > False > > >>> N.issctype(N.array([1])) > > False > > >>> N.isscalar(N.array([1]).dtype) > > False > > >>> N.issctype(N.array([1]).dtype) > > True > > >>> N.isscalar(N.array([1])[0].dtype) > > False > > >>> N.issctype(N.array([1])[0].dtype) > > True > > >>> N.isscalar(N.array([1])[0]) > > True > > >>> N.issctype(N.array([1])[0]) > > False > > > -Travis Great! Just wanted to point out that '12' is a scalar - I suppose that's what it is. (To determine if something is a number it seems best to implement a try: ... except: ... something like float(x) - as Chris has suggested ) -S. From myeates at jpl.nasa.gov Thu Aug 3 19:30:33 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 03 Aug 2006 16:30:33 -0700 Subject: [Numpy-discussion] help! 
type 'float64scalar' is not type 'float' In-Reply-To: <200608031337.33596.oliphant@ee.byu.edu> References: <200608031337.33596.oliphant@ee.byu.edu> Message-ID: <44D28719.7020703@jpl.nasa.gov> Here is a similar problem I wish could be fixed. In scipy.io.mio is savemat with the line if type(var) != ArrayType which, I believe should be changed to if not isinstance(var,ArrayType): so I can use savemat with memory mapped arrays. Mathew Travis Oliphant wrote: > Sebastian Haase wrote: > >> On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: >> >>> Sebastian Haase wrote: >>> >>>> Thanks, >>>> I just found >>>> numpy.isscalar() and numpy.issctype() ? >>>> These sound like they would do what I need - what is the difference >>>> between the two ? >>>> >>> Oh, yeah. >>> >>> numpy.issctype works with type objects >>> numpy.isscalar works with instances >>> >>> Neither of them distinguish between scalars and "numbers." >>> >>> If you get errors with isscalar it would be nice to know what they are. >>> >> I'm still trying to reproduce the exception, but here is a first comparison >> that - honestly - does not make much sense to me: >> (type vs. instance seems to get mostly the same results and why is there a >> difference with a string ('12') ) >> > > These routines are a little buggy. I've cleaned them up in SVN to > reflect what they should do. When the dtype object came into > existence a lot of what the scalar types where being used for was no > longer needed. Some of these functions weren't updated to deal with > the dtype objects correctly either. 
> > This is what you get now: > >>> import numpy as N > >>> N.isscalar(12) > > True > > >>> N.issctype(12) > > False > > >>> N.isscalar('12') > > True > > >>> N.issctype('12') > > False > > >>> N.isscalar(N.array([1])) > > False > > >>> N.issctype(N.array([1])) > > False > > >>> N.isscalar(N.array([1]).dtype) > > False > > >>> N.issctype(N.array([1]).dtype) > > True > > >>> N.isscalar(N.array([1])[0].dtype) > > False > > >>> N.issctype(N.array([1])[0].dtype) > > True > > >>> N.isscalar(N.array([1])[0]) > > True > > >>> N.issctype(N.array([1])[0]) > > False > > > -Travis > > >>>>> N.isscalar(12) >>>>> >> True >> >> >>>>> N.issctype(12) >>>>> >> True >> >> >>>>> N.isscalar('12') >>>>> >> True >> >> >>>>> N.issctype('12') >>>>> >> False >> >> >>>>> N.isscalar(N.array([1])) >>>>> >> False >> >> >>>>> N.issctype(N.array([1])) >>>>> >> True >> >> >>>>> N.isscalar(N.array([1]).dtype) >>>>> >> False >> >> >>>>> N.issctype(N.array([1]).dtype) >>>>> >> False >> >> # apparently new 'scalars' have a dtype attribute ! >> >> >>>>> N.isscalar(N.array([1])[0].dtype) >>>>> >> False >> >> >>>>> N.issctype(N.array([1])[0].dtype) >>>>> >> False >> >> >>>>> N.isscalar(N.array([1])[0]) >>>>> >> True >> >> >>>>> N.issctype(N.array([1])[0]) >>>>> >> True >> >> -Sebastian >> > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > >
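The `type(var) != ArrayType` problem Mathew describes above is the classic exact-type-check pitfall: `numpy.memmap` is an `ndarray` subclass, so an exact `type()` comparison rejects it while `isinstance` accepts it. A minimal sketch (the `MappedArray` subclass here is a stand-in invented for the example, not `memmap` itself):

```python
import numpy as np

class MappedArray(np.ndarray):
    """Stand-in for an ndarray subclass such as numpy.memmap."""
    pass

# view() re-types the array as the subclass without copying data
a = np.zeros(3).view(MappedArray)

print(type(a) is np.ndarray)      # False: exact-type check rejects subclasses
print(isinstance(a, np.ndarray))  # True: isinstance follows the class hierarchy
```

This is why changing the savemat check to `isinstance(var, ArrayType)` lets memory-mapped arrays through.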
?????????????????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????????????????????????? ?----------------------------------------------------------------- ??????????????????????????? ????????????????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ????????????????????????? ?----------------------------------------------------------------- ????????????????????????? ????????????????????????????????? ?http://love-match.bz/pc/07 ??????????????????????????????????? ??? ??500???????????????? ?----------------------------------------------------------------- ???????/???? ???????????????????? ????????????????????????????????? ???????????????????????????????? ?????????????????????????? ?????????????????????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/?????? ?????????????????????????????????? ??????????????????????????????????? ?????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/????? ?????????????????????????????????? ???????????????????????????????? ?????????????????????????(^^) ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/???? ??????????????????????????????? ?????????????????????????????? ?????????????????????????????? ???????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ????????/??? ???????????????1??? ????????????????????????? ????????????????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/??????? ????18?????????????????????????? ????????????????????????????? ????????????????????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???`????/??? 
From oliphant.travis at ieee.org Thu Aug 3 23:48:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 03 Aug 2006 21:48:42 -0600 Subject: [Numpy-discussion] Numpy 1.0b2 for this weekend Message-ID: <44D2C39A.1070400@ieee.org> I'd like to release NumPy 1.0b2 on Saturday to get ready for the SciPy 2006 conference. Please post any bugs and commit any fixes by then. I suspect there will be 4 or 5 beta releases and then a couple of release candidates before the final release comes out at the first of October. -Travis From haase at msg.ucsf.edu Fri Aug 4 00:00:38 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 03 Aug 2006 21:00:38 -0700 Subject: [Numpy-discussion] link to numpy ticket tracker on the wiki Message-ID: <44D2C666.2080503@msg.ucsf.edu> Hi! I would like to suggest putting a link to the bug/wishlist tracker web site on the scipy.org wiki site. http://projects.scipy.org/scipy/numpy/ticket I did not do it myself because I could not decide what the best place for it would be - I think it should be rather exposed ... The only link I could find was somewhere inside an FAQ for the SciPy package and it was only for the scipy-bug tracker.
Thanks, Sebastian Haase From haase at msg.ucsf.edu Fri Aug 4 00:20:07 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 03 Aug 2006 21:20:07 -0700 Subject: [Numpy-discussion] bug tracker to cc email address by default Message-ID: <44D2CAF7.6090900@msg.ucsf.edu> Hi, Is it possible to have 'cc'-ing the poster of a bug ticket be the default !? Or is/can this be set in a per user preference somehow ? Thanks, Sebastian Haase From robert.kern at gmail.com Fri Aug 4 00:27:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 03 Aug 2006 23:27:00 -0500 Subject: [Numpy-discussion] bug tracker to cc email address by default In-Reply-To: <44D2CAF7.6090900@msg.ucsf.edu> References: <44D2CAF7.6090900@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > Is it possible to have > 'cc'-ing the poster of a bug ticket be the default !? > Or is/can this be set in a per user preference somehow ? IIRC, if you supply your email address in your "Settings", you will get notification emails. http://projects.scipy.org/scipy/numpy/settings Otherwise, subscribe to the numpy-tickets email list, and you will get notifications of all tickets. http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pruggera at gmail.com Fri Aug 4 00:46:04 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Thu, 3 Aug 2006 21:46:04 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: Tweek2 is slightly faster, but does not produce the same result as the regular python baseline: regular python took: 11.997997 sec. numpy convolve took: 0.611996 sec. numpy convolve tweek 1 took: 0.442029 sec. numpy convolve tweek 2 took: 0.418857 sec. Traceback (most recent call last): File "G:\Python\Dev\mean.py", line 57, in ? 
numpy.testing.assert_equal(reg, np3) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 130, in assert_equal return assert_array_equal(actual, desired, err_msg) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 217, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 17.1428571429%): Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000... Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000... Code: # mean of n values within an array import numpy, time def nmean(list,n): a = [] for i in range(1,len(list)+1): start = i-n divisor = n if start < 0: start = 0 divisor = i a.append(sum(list[start:i])/divisor) return a def testNP(code, text): start = time.clock() for x in range(1000): np = code(t,50) print text, "took: %f sec."%(time.clock() - start) return np t = [1.3*i for i in range(1400)] reg = testNP(nmean, 'regular python') t = numpy.array(t,dtype=float) def numpy_nmean_conv(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") for i in range(n): a[i] /= i + 1 a[n:] /= n return a[:len(list)] np1 = testNP(numpy_nmean_conv, 'numpy convolve') def numpy_nmean_conv_nl_tweak1(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n+1) a[n:] /= n return a[:len(list)] np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve tweek 1') def numpy_nmean_conv_nl_tweak2(list,n): b = numpy.ones(n,dtype=float) a = numpy.convolve(list,b,mode="full") a[:n] /= numpy.arange(1, n + 1) a[n:] *= 1.0/n return a[:len(list)] np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve tweek 2') numpy.testing.assert_equal(reg, np1) numpy.testing.assert_equal(reg, np2) numpy.testing.assert_equal(reg, np3) On 8/3/06, Charles R Harris wrote: > Hi Scott, > > > On 8/3/06, Scott Ransom wrote: > > You should be able to modify the kernel so 
that you can avoid > > many of the divides at the end. Something like: > > > > def numpy_nmean_conv_nl2(list,n): > > b = numpy.ones(n,dtype=float) / n > > a = numpy.convolve (c,b,mode="full") > > # Note: something magic in here to fix the first 'n' values > > return a[:len(list)] > > > Yep, I tried that but it wasn't any faster. It might help for really *big* > arrays. The first n-1 values still need to be fixed after. > > Chuck > > > I played with it a bit, but don't have time to figure out exactly > > how convolve is mangling the first n return values... > > > > Scott > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > Heh, > > > > > > This is fun. Two more variations with 1000 reps instead of 100 for > better > > > timing: > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > b = numpy.ones(n,dtype=float) > > > a = numpy.convolve(list,b,mode="full") > > > a[:n] /= numpy.arange(1, n + 1) > > > a[n:] /= n > > > return a[:len(list)] > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > b = numpy.ones(n,dtype=float) > > > a = numpy.convolve(list,b,mode="full") > > > a[:n] /= numpy.arange(1, n + 1) > > > a[n:] *= 1.0/n > > > return a[:len(list)] > > > > > > Which gives > > > > > > numpy convolve took: 2.630000 sec. > > > numpy convolve noloop took: 0.320000 sec. > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > Chuck > > > > > > On 8/2/06, Phil Ruggera wrote: > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > >regular python took: 1.150214 sec. > > > >numpy mean slice took: 2.427513 sec. > > > >numpy convolve took: 0.546854 sec. > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > >Code: > > > > > > > ># mean of n values within an array > > > >import numpy, time > > > >def nmean(list,n): > > > > a = [] > > > > for i in range(1,len(list)+1): > > > > start = i-n > > > > divisor = n > > > > if start < 0: > > > > start = 0 > > > > divisor = i > > > > a.append(sum(list[start:i])/divisor) > > > > return a > > > > > > > >t = [1.0*i for i in range(1400)] > > > >start = time.clock() > > > >for x in range(100): > > > > reg = nmean(t,50) > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean(list,n): > > > > a = numpy.empty(len(list),dtype=float) > > > > for i in range(1,len(list)+1): > > > > start = i-n > > > > if start < 0: > > > > start = 0 > > > > a[i-1] = list[start:i].mean(0) > > > > return a > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > >start = time.clock() > > > >for x in range(100): > > > > npm = numpy_nmean(t,50) > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean_conv(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > for i in range(0,len(list)): > > > > if i < n : > > > > a[i] /= i + 1 > > > > else : > > > > a[i] /= n > > > > return a[:len(list)] > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > >start = time.clock () > > > >for x in range(100): > > > > npc = numpy_nmean_conv(t,50) > > > >print "numpy convolve took: %f sec."%(time.clock() - start) > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > for i in range(n): > > > > a[i] /= i + 1 > > > > a[n:] /= n > > > > return a[:len(list)] > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > >start = time.clock() > > > >for x in range(100): > > > > npn = numpy_nmean_conv_nl(t,50) > > > >print "numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > 
>numpy.testing.assert_equal(reg,npc) > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > >> > > > >> > > > >> > > > >> On 7/29/06, Charles R Harris wrote: > > > >> > > > > >> > Hmmm, > > > >> > > > > >> > I rewrote the subroutine a bit. > > > >> > > > > >> > > > > >> > def numpy_nmean(list,n): > > > >> > a = numpy.empty(len(list),dtype=float) > > > >> > > > > >> > b = numpy.cumsum(list) > > > >> > for i in range(0,len(list)): > > > >> > if i < n : > > > >> > a[i] = b[i]/(i+1) > > > >> > else : > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > >> > return a > > > >> > > > > >> > and got > > > >> > > > > >> > regular python took: 0.750000 sec. > > > >> > numpy took: 0.380000 sec. > > > >> > > > >> > > > >> I got rid of the for loop entirely. Usually this is the thing to do, > at > > > >> least this will always give speedups in Matlab and also in my limited > > > >> experience with Numpy/Numeric: > > > >> > > > >> def numpy_nmean2(list,n): > > > >> > > > >> a = numpy.empty(len(list),dtype=float) > > > >> b = numpy.cumsum(list) > > > >> c = concatenate((b[n:],b[:n])) > > > >> a[:n] = b[:n]/(i+1) > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > >> return a > > > >> > > > >> I got no noticeable speedup from doing this which I thought was > pretty > > > >> amazing. I even profiled all the functions, the original, the one > > > >written by > > > >> Charles, and mine, using hotspot just to make sure nothing funny was > > > >going > > > >> on. I guess plain old Python can be better than you'd expect in > certain > > > >> situtations. > > > >> > > > >> -- > > > >> David Grant > > > > > > > > >------------------------------------------------------------------------- > > > >Take Surveys. Earn Cash. 
Influence the Future of IT > > > >Join SourceForge.net's Techsay panel and you'll get the chance to share > > > >your > > > >opinions on IT & business topics through brief surveys -- and earn cash > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > >_______________________________________________ > > > >Numpy-discussion mailing list > > > > Numpy-discussion at lists.sourceforge.net > > > > >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > ------------------------------------------------------------------------- > > > Take Surveys. Earn Cash. Influence the Future of IT > > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > > opinions on IT & business topics through brief surveys -- and earn cash > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > -- > > -- > > Scott M. Ransom Address: NRAO > > Phone: (434) 296-0320 520 Edgemont Rd. > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From charlesr.harris at gmail.com Fri Aug 4 02:40:13 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Aug 2006 00:40:13 -0600 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: Hi Phil, Curious. It works fine here in the original form. I even expected a tiny difference because of floating point voodoo but there was none at all. Now if I copy your program and run it there *is* a small difference over the slice [1:] (to avoid division by zero). 
index of max fractional difference: 234 max fractional difference: 2.077e-16 reg at max fractional difference: 1.098e+03 Which is just about roundoff error (1.11e-16) for double precision, so it lost a bit of precision. Still, I am not clear why the results should differ at all between the original and your new code. Cue spooky music. Chuck On 8/3/06, Phil Ruggera wrote: > > Tweek2 is slightly faster, but does not produce the same result as the > regular python baseline: > > regular python took: 11.997997 sec. > numpy convolve took: 0.611996 sec. > numpy convolve tweek 1 took: 0.442029 sec. > numpy convolve tweek 2 took: 0.418857 sec. > Traceback (most recent call last): > File "G:\Python\Dev\mean.py", line 57, in ? > numpy.testing.assert_equal(reg, np3) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line > 130, in assert_equal > return assert_array_equal(actual, desired, err_msg) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line > 217, in assert_array_equal > assert cond,\ > AssertionError: > Arrays are not equal (mismatch 17.1428571429%): > Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > ..., 1.7842500000000002e+03 1.785550000... > Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > ..., 1.7842500000000002e+03 1.785550000... 
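The mismatch reported above is pure double-precision roundoff (about 1e-16 relative); numpy.testing also offers a decimal-places comparison that tolerates differences of that size. A sketch with illustrative values, not the thread's actual arrays:

```python
import numpy

a = numpy.array([0.0, 0.65, 1.3, 1784.25])
b = a * (1.0 + 1e-15)  # perturb at the level of double roundoff

# assert_equal demands exact equality and fails on such differences;
# assert_array_almost_equal compares to a given number of decimal places.
numpy.testing.assert_array_almost_equal(a, b, decimal=10)
```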
Code: > > # mean of n values within an array > import numpy, time > def nmean(list,n): > a = [] > for i in range(1,len(list)+1): > start = i-n > divisor = n > if start < 0: > start = 0 > divisor = i > a.append(sum(list[start:i])/divisor) > return a > > def testNP(code, text): > start = time.clock() > for x in range(1000): > np = code(t,50) > print text, "took: %f sec."%(time.clock() - start) > return np > > t = [1.3*i for i in range(1400)] > reg = testNP(nmean, 'regular python') > > t = numpy.array(t,dtype=float) > > def numpy_nmean_conv(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > for i in range(n): > a[i] /= i + 1 > a[n:] /= n > return a[:len(list)] > > np1 = testNP(numpy_nmean_conv, 'numpy convolve') > > def numpy_nmean_conv_nl_tweak1(list,n): > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > a[:n] /= numpy.arange(1, n+1) > a[n:] /= n > return a[:len(list)] > > np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve tweek 1') > > def numpy_nmean_conv_nl_tweak2(list,n): > > b = numpy.ones(n,dtype=float) > a = numpy.convolve(list,b,mode="full") > a[:n] /= numpy.arange(1, n + 1) > a[n:] *= 1.0/n > return a[:len(list)] > > np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve tweek 2') > > numpy.testing.assert_equal(reg, np1) > numpy.testing.assert_equal(reg, np2) > numpy.testing.assert_equal(reg, np3) > > On 8/3/06, Charles R Harris wrote: > > Hi Scott, > > > > > > On 8/3/06, Scott Ransom wrote: > > > You should be able to modify the kernel so that you can avoid > > > many of the divides at the end. Something like: > > > > > > def numpy_nmean_conv_nl2(list,n): > > > b = numpy.ones(n,dtype=float) / n > > > a = numpy.convolve (c,b,mode="full") > > > # Note: something magic in here to fix the first 'n' values > > > return a[:len(list)] > > > > > > Yep, I tried that but it wasn't any faster. It might help for really > *big* > > arrays. The first n-1 values still need to be fixed after. 
> > > > Chuck > > > > > I played with it a bit, but don't have time to figure out exactly > > > how convolve is mangling the first n return values... > > > > > > Scott > > > > > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > > Heh, > > > > > > > > This is fun. Two more variations with 1000 reps instead of 100 for > > better > > > > timing: > > > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > a[:n] /= numpy.arange(1, n + 1) > > > > a[n:] /= n > > > > return a[:len(list)] > > > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > > b = numpy.ones(n,dtype=float) > > > > a = numpy.convolve(list,b,mode="full") > > > > a[:n] /= numpy.arange(1, n + 1) > > > > a[n:] *= 1.0/n > > > > return a[:len(list)] > > > > > > > > Which gives > > > > > > > > numpy convolve took: 2.630000 sec. > > > > numpy convolve noloop took: 0.320000 sec. > > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > > > Chuck > > > > > > > > On 8/2/06, Phil Ruggera wrote: > > > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > > > >regular python took: 1.150214 sec. > > > > >numpy mean slice took: 2.427513 sec. > > > > >numpy convolve took: 0.546854 sec. > > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > > > >Code: > > > > > > > > > ># mean of n values within an array > > > > >import numpy, time > > > > >def nmean(list,n): > > > > > a = [] > > > > > for i in range(1,len(list)+1): > > > > > start = i-n > > > > > divisor = n > > > > > if start < 0: > > > > > start = 0 > > > > > divisor = i > > > > > a.append(sum(list[start:i])/divisor) > > > > > return a > > > > > > > > > >t = [1.0*i for i in range(1400)] > > > > >start = time.clock() > > > > >for x in range(100): > > > > > reg = nmean(t,50) > > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean(list,n): > > > > > a = numpy.empty(len(list),dtype=float) > > > > > for i in range(1,len(list)+1): > > > > > start = i-n > > > > > if start < 0: > > > > > start = 0 > > > > > a[i-1] = list[start:i].mean(0) > > > > > return a > > > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > > >start = time.clock() > > > > >for x in range(100): > > > > > npm = numpy_nmean(t,50) > > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean_conv(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > for i in range(0,len(list)): > > > > > if i < n : > > > > > a[i] /= i + 1 > > > > > else : > > > > > a[i] /= n > > > > > return a[:len(list)] > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > >start = time.clock () > > > > >for x in range(100): > > > > > npc = numpy_nmean_conv(t,50) > > > > >print "numpy convolve took: %f sec."%(time.clock() - start) > > > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > for i in range(n): > > > > > a[i] /= i + 1 > > > > > a[n:] /= n > > > > > return a[:len(list)] > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > >start = time.clock() > > > > >for x in range(100): > > > > > npn = numpy_nmean_conv_nl(t,50) > > > > >print 
"numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > > >numpy.testing.assert_equal(reg,npc) > > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > > >> > > > > >> > > > > >> > > > > >> On 7/29/06, Charles R Harris wrote: > > > > >> > > > > > >> > Hmmm, > > > > >> > > > > > >> > I rewrote the subroutine a bit. > > > > >> > > > > > >> > > > > > >> > def numpy_nmean(list,n): > > > > >> > a = numpy.empty(len(list),dtype=float) > > > > >> > > > > > >> > b = numpy.cumsum(list) > > > > >> > for i in range(0,len(list)): > > > > >> > if i < n : > > > > >> > a[i] = b[i]/(i+1) > > > > >> > else : > > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > > >> > return a > > > > >> > > > > > >> > and got > > > > >> > > > > > >> > regular python took: 0.750000 sec. > > > > >> > numpy took: 0.380000 sec. > > > > >> > > > > >> > > > > >> I got rid of the for loop entirely. Usually this is the thing to > do, > > at > > > > >> least this will always give speedups in Matlab and also in my > limited > > > > >> experience with Numpy/Numeric: > > > > >> > > > > >> def numpy_nmean2(list,n): > > > > >> > > > > >> a = numpy.empty(len(list),dtype=float) > > > > >> b = numpy.cumsum(list) > > > > >> c = concatenate((b[n:],b[:n])) > > > > >> a[:n] = b[:n]/(i+1) > > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > > >> return a > > > > >> > > > > >> I got no noticeable speedup from doing this which I thought was > > pretty > > > > >> amazing. I even profiled all the functions, the original, the one > > > > >written by > > > > >> Charles, and mine, using hotspot just to make sure nothing funny > was > > > > >going > > > > >> on. I guess plain old Python can be better than you'd expect in > > certain > > > > >> situtations. 
> > > > >> > > > > >> -- > > > > >> David Grant > > > > > > > > > > > > >------------------------------------------------------------------------- > > > > >Take Surveys. Earn Cash. Influence the Future of IT > > > > >Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > > >your > > > > >opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > >_______________________________________________ > > > > >Numpy-discussion mailing list > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > Take Surveys. Earn Cash. Influence the Future of IT > > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > your > > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > _______________________________________________ > > > > Numpy-discussion mailing list > > > > Numpy-discussion at lists.sourceforge.net > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > -- > > > -- > > > Scott M. Ransom Address: NRAO > > > Phone: (434) 296-0320 520 Edgemont Rd. > > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pruggera at gmail.com Fri Aug 4 11:12:51 2006 From: pruggera at gmail.com (Phil Ruggera) Date: Fri, 4 Aug 2006 08:12:51 -0700 Subject: [Numpy-discussion] Mean of n values within an array In-Reply-To: References: <20060803184425.GC17862@ssh.cv.nrao.edu> Message-ID: The spook is in t = [1.3*i for i in range(1400)]. It used to be t = [1.0*i for i in range(1400)] but I changed it to shake out algorithms that produce differences. But a max difference of 2.077e-16 is immaterial for my application. I should use a less strict compare. On 8/3/06, Charles R Harris wrote: > Hi Phil, > > Curious. It works fine here in the original form. I even expected a tiny > difference because of floating point voodoo but there was none at all. Now > if I copy your program and run it there *is* a small difference over the > slice [1:] (to avoid division by zero). > > index of max fractional difference: 234 > max fractional difference: 2.077e-16 > reg at max fractional difference: 1.098e+03 > > Which is just about roundoff error (1.11e-16) for double precision, so it > lost a bit of precision. > > Still, I am not clear why the results should differ at all between the > original and your new code. Cue spooky music. > > Chuck > > On 8/3/06, Phil Ruggera wrote: > > Tweek2 is slightly faster, but does not produce the same result as the > > regular python baseline: > > > > regular python took: 11.997997 sec. > > numpy convolve took: 0.611996 sec. 
> > numpy convolve tweek 1 took: 0.442029 sec. > > numpy convolve tweek 2 took: 0.418857 sec. > > Traceback (most recent call last): > > File "G:\Python\Dev\mean.py", line 57, in ? > > numpy.testing.assert_equal(reg, np3) > > File > "C:\Python24\Lib\site-packages\numpy\testing\utils.py", > line > > 130, in assert_equal > > return assert_array_equal(actual, desired, err_msg) > > File > "C:\Python24\Lib\site-packages\numpy\testing\utils.py", > line > > 217, in assert_array_equal > > assert cond,\ > > AssertionError: > > Arrays are not equal (mismatch 17.1428571429%): > > Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > > ..., 1.7842500000000002e+03 1.785550000... > > Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 > 1.3000000000000000e+00 > > ..., 1.7842500000000002e+03 1.785550000... > > > > > Code: > > > > # mean of n values within an array > > import numpy, time > > def nmean(list,n): > > a = [] > > for i in range(1,len(list)+1): > > start = i-n > > divisor = n > > if start < 0: > > start = 0 > > divisor = i > > a.append(sum(list[start:i])/divisor) > > return a > > > > def testNP(code, text): > > start = time.clock() > > for x in range(1000): > > np = code(t,50) > > print text, "took: %f sec."%( time.clock() - start) > > return np > > > > t = [1.3*i for i in range(1400)] > > reg = testNP(nmean, 'regular python') > > > > t = numpy.array(t,dtype=float) > > > > def numpy_nmean_conv(list,n): > > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > for i in range(n): > > a[i] /= i + 1 > > a[n:] /= n > > return a[:len(list)] > > > > np1 = testNP(numpy_nmean_conv, 'numpy convolve') > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > a[:n] /= numpy.arange(1, n+1) > > a[n:] /= n > > return a[:len(list)] > > > > np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve > tweek 1') > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > 
> > b = numpy.ones(n,dtype=float) > > a = numpy.convolve(list,b,mode="full") > > a[:n] /= numpy.arange(1, n + 1) > > a[n:] *= 1.0/n > > return a[:len(list)] > > > > np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve > tweek 2') > > > > numpy.testing.assert_equal(reg, np1) > > numpy.testing.assert_equal(reg, np2) > > numpy.testing.assert_equal(reg, np3) > > > > On 8/3/06, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > Hi Scott, > > > > > > > > > On 8/3/06, Scott Ransom wrote: > > > > You should be able to modify the kernel so that you can avoid > > > > many of the divides at the end. Something like: > > > > > > > > def numpy_nmean_conv_nl2(list,n): > > > > b = numpy.ones (n,dtype=float) / n > > > > a = numpy.convolve (c,b,mode="full") > > > > # Note: something magic in here to fix the first 'n' values > > > > return a[:len(list)] > > > > > > > > > Yep, I tried that but it wasn't any faster. It might help for really > *big* > > > arrays. The first n-1 values still need to be fixed after. > > > > > > Chuck > > > > > > > I played with it a bit, but don't have time to figure out exactly > > > > how convolve is mangling the first n return values... > > > > > > > > Scott > > > > > > > > > > > > > > > > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote: > > > > > Heh, > > > > > > > > > > This is fun. 
Two more variations with 1000 reps instead of 100 for > > > better > > > > > timing: > > > > > > > > > > def numpy_nmean_conv_nl_tweak1(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > a[:n] /= numpy.arange(1, n + 1) > > > > > a[n:] /= n > > > > > return a[:len(list)] > > > > > > > > > > def numpy_nmean_conv_nl_tweak2(list,n): > > > > > b = numpy.ones(n,dtype=float) > > > > > a = numpy.convolve(list,b,mode="full") > > > > > a[:n] /= numpy.arange(1, n + 1) > > > > > a[n:] *= 1.0/n > > > > > return a[:len(list)] > > > > > > > > > > Which gives > > > > > > > > > > numpy convolve took: 2.630000 sec. > > > > > numpy convolve noloop took: 0.320000 sec. > > > > > numpy convolve noloop tweak1 took: 0.250000 sec. > > > > > numpy convolve noloop tweak2 took: 0.240000 sec. > > > > > > > > > > Chuck > > > > > > > > > > On 8/2/06, Phil Ruggera < pruggera at gmail.com> wrote: > > > > > > > > > > > >A variation of the proposed convolve routine is very fast: > > > > > > > > > > > >regular python took: 1.150214 sec. > > > > > >numpy mean slice took: 2.427513 sec. > > > > > >numpy convolve took: 0.546854 sec. > > > > > >numpy convolve noloop took: 0.058611 sec. 
> > > > > > > > > > > >Code: > > > > > > > > > > > ># mean of n values within an array > > > > > >import numpy, time > > > > > >def nmean(list,n): > > > > > > a = [] > > > > > > for i in range(1,len(list)+1): > > > > > > start = i-n > > > > > > divisor = n > > > > > > if start < 0: > > > > > > start = 0 > > > > > > divisor = i > > > > > > a.append(sum(list[start:i])/divisor) > > > > > > return a > > > > > > > > > > > >t = [1.0*i for i in range(1400)] > > > > > >start = time.clock () > > > > > >for x in range(100): > > > > > > reg = nmean(t,50) > > > > > >print "regular python took: %f sec."%(time.clock() - start) > > > > > > > > > > > >def numpy_nmean(list,n): > > > > > > a = numpy.empty(len(list),dtype=float) > > > > > > for i in range(1,len(list)+1): > > > > > > start = i-n > > > > > > if start < 0: > > > > > > start = 0 > > > > > > a[i-1] = list[start:i].mean(0) > > > > > > return a > > > > > > > > > > > >t = numpy.arange (0,1400,dtype=float) > > > > > >start = time.clock() > > > > > >for x in range(100): > > > > > > npm = numpy_nmean(t,50) > > > > > >print "numpy mean slice took: %f sec."%(time.clock() - start) > > > > > > > > > > > >def numpy_nmean_conv(list,n): > > > > > > b = numpy.ones(n,dtype=float) > > > > > > a = numpy.convolve(list,b,mode="full") > > > > > > for i in range(0,len(list)): > > > > > > if i < n : > > > > > > a[i] /= i + 1 > > > > > > else : > > > > > > a[i] /= n > > > > > > return a[:len(list)] > > > > > > > > > > > >t = numpy.arange(0,1400,dtype=float) > > > > > >start = time.clock () > > > > > >for x in range(100): > > > > > > npc = numpy_nmean_conv(t,50) > > > > > >print "numpy convolve took: %f sec."%( time.clock() - start) > > > > > > > > > > > >def numpy_nmean_conv_nl(list,n): > > > > > > b = numpy.ones(n,dtype=float) > > > > > > a = numpy.convolve(list,b,mode="full") > > > > > > for i in range(n): > > > > > > a[i] /= i + 1 > > > > > > a[n:] /= n > > > > > > return a[:len(list)] > > > > > > > > > > > >t = 
numpy.arange(0,1400,dtype=float) > > > > > >start = time.clock() > > > > > >for x in range(100): > > > > > > npn = numpy_nmean_conv_nl(t,50) > > > > > >print "numpy convolve noloop took: %f sec."%( time.clock() - start) > > > > > > > > > > > >numpy.testing.assert_equal(reg,npm) > > > > > >numpy.testing.assert_equal(reg,npc) > > > > > >numpy.testing.assert_equal(reg,npn) > > > > > > > > > > > >On 7/29/06, David Grant < davidgrant at gmail.com> wrote: > > > > > >> > > > > > >> > > > > > >> > > > > > >> On 7/29/06, Charles R Harris wrote: > > > > > >> > > > > > > >> > Hmmm, > > > > > >> > > > > > > >> > I rewrote the subroutine a bit. > > > > > >> > > > > > > >> > > > > > > >> > def numpy_nmean(list,n): > > > > > >> > a = numpy.empty(len(list),dtype=float) > > > > > >> > > > > > > >> > b = numpy.cumsum(list) > > > > > >> > for i in range(0,len(list)): > > > > > >> > if i < n : > > > > > >> > a[i] = b[i]/(i+1) > > > > > >> > else : > > > > > >> > a[i] = (b[i] - b[i-n])/(i+1) > > > > > >> > return a > > > > > >> > > > > > > >> > and got > > > > > >> > > > > > > >> > regular python took: 0.750000 sec. > > > > > >> > numpy took: 0.380000 sec. > > > > > >> > > > > > >> > > > > > >> I got rid of the for loop entirely. Usually this is the thing to > do, > > > at > > > > > >> least this will always give speedups in Matlab and also in my > limited > > > > > >> experience with Numpy/Numeric: > > > > > >> > > > > > >> def numpy_nmean2(list,n): > > > > > >> > > > > > >> a = numpy.empty(len(list),dtype=float) > > > > > >> b = numpy.cumsum(list) > > > > > >> c = concatenate((b[n:],b[:n])) > > > > > >> a[:n] = b[:n]/(i+1) > > > > > >> a[n:] = (b[n:] - c[n:])/(i+1) > > > > > >> return a > > > > > >> > > > > > >> I got no noticeable speedup from doing this which I thought was > > > pretty > > > > > >> amazing. 
I even profiled all the functions, the original, the one > > > > > >written by > > > > > >> Charles, and mine, using hotspot just to make sure nothing funny > was > > > > > >going > > > > > >> on. I guess plain old Python can be better than you'd expect in > > > certain > > > > > >> situtations. > > > > > >> > > > > > >> -- > > > > > >> David Grant > > > > > > > > > > > > > > > >------------------------------------------------------------------------- > > > > > >Take Surveys. Earn Cash. Influence the Future of IT > > > > > >Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > > > >your > > > > > >opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > >_______________________________________________ > > > > > >Numpy-discussion mailing list > > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > Take Surveys. Earn Cash. Influence the Future of IT > > > > > Join SourceForge.net's Techsay panel and you'll get the chance to > share > > > your > > > > > opinions on IT & business topics through brief surveys -- and earn > cash > > > > > > > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > > > > _______________________________________________ > > > > > Numpy-discussion mailing list > > > > > Numpy-discussion at lists.sourceforge.net > > > > > > > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > -- > > > > -- > > > > Scott M. Ransom Address: NRAO > > > > Phone: (434) 296-0320 520 Edgemont Rd. 
> > > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > > > > ------------------------------------------------------------------------- > > Take Surveys. Earn Cash. Influence the Future of IT > > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > > opinions on IT & business topics through brief surveys -- and earn cash > > > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From oliphant.travis at ieee.org Fri Aug 4 15:07:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 04 Aug 2006 13:07:21 -0600 Subject: [Numpy-discussion] Backward compatibility plans Message-ID: <44D39AE9.6060106@ieee.org> For backward-compatibility with Numeric and Numarray I'm leaning to the following plan: * Do not create compatibility array objects. I initially thought we could sub-class in order to create objects that had the expected attributes and methods of Numeric arrays or Numarray arrays. After some experimentation, I'm ditching this plan. I think this would create too many array-like objects floating around and make unification even harder as these objects interact in difficult-to-predict ways. Instead, I'm planning to: 1) Create compatibility functions in oldnumeric and numarray sub-packages that create NumPy arrays but do it with the same function syntax as the old packages. 2) Create 4 scripts for assisting in conversion (2 for Numeric and 2 for Numarray). a) An initial script that just alters imports (to the compatibility layer) and fixes method and attribute access. 
b) A secondary script that alters the imports from the compatibility layer and fixes as much as possible the things that need to change in order to make the switch away from the compatibility layer to work correctly. While it is not foolproof, I think this will cover most of the issues and make conversion relatively easy. This will also let us develop NumPy without undue concern for compatibility with older packages. This must all be in place before 1.0 release candidate 1 comes out. Comments and criticisms welcome. -Travis From haase at msg.ucsf.edu Fri Aug 4 18:35:51 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 4 Aug 2006 15:35:51 -0700 Subject: [Numpy-discussion] a**2 60 times slower than a*a - ONLY for int32 Message-ID: <200608041535.51321.haase@msg.ucsf.edu> Hi, >>> a=N.random.poisson(N.arange(1e6)+1) >>> U.timeIt('a**2') 0.59 >>> U.timeIt('a*a') 0.01 >>> a.dtype int32 my U.timeIt function just returns the difference of time in seconds before and after evaluation of the string. For >>> c=N.random.normal(1000, 100, 1e6) >>> c.dtype float64 i get .014 seconds for either c*c or c**2 (I averaged over 100 runs). After converting this to float32 I get 0.008 secs for both. Can the int32 case be speed up the same way !? Thanks, Sebastian Haase From charlesr.harris at gmail.com Fri Aug 4 19:34:09 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 4 Aug 2006 17:34:09 -0600 Subject: [Numpy-discussion] Backward compatibility plans In-Reply-To: <44D39AE9.6060106@ieee.org> References: <44D39AE9.6060106@ieee.org> Message-ID: Hi Travis, I wonder if it is possible to adapt these modules so they can flag all the incompatibilities, maybe with a note on the fix. This would be a useful tool for those having to port code. That might not be the easiest route to go but at least there is a partial list of the functions involved. 
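A conversion helper that merely flags old idioms, as suggested here, could start from a pattern table like the following sketch (the patterns and notes are illustrative, not the actual oldnumeric conversion tooling):

```python
import re

# Illustrative patterns only; a real tool would cover the full
# Numeric/numarray API surface.
OLD_IDIOMS = [
    (re.compile(r"\bimport Numeric\b"),
     "use 'import numpy' (or the oldnumeric compatibility layer)"),
    (re.compile(r"\.typecode\(\)"),
     "use the .dtype attribute instead"),
    (re.compile(r"\.itemsize\(\)"),
     ".itemsize is an attribute in numpy, not a method"),
]

def flag_incompatibilities(source):
    """Return (lineno, line, note) for each suspect source line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, note in OLD_IDIOMS:
            if pattern.search(line):
                hits.append((lineno, line.strip(), note))
    return hits
```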
Chuck On 8/4/06, Travis Oliphant wrote: > > > For backward-compatibility with Numeric and Numarray I'm leaning to the > following plan: > > * Do not create compatibility array objects. I initially thought we > could sub-class in order to > create objects that had the expected attributes and methods of Numeric > arrays or Numarray arrays. After some experimentation, I'm ditching > this plan. I think this would create too many array-like objects > floating around and make unification even harder as these objects > interact in difficult-to-predict ways. > > Instead, I'm planning to: > > 1) Create compatibility functions in oldnumeric and numarray > sub-packages that create NumPy arrays but do it with the same function > syntax as the old packages. > > 2) Create 4 scripts for assisting in conversion (2 for Numeric and 2 for > Numarray). > > a) An initial script that just alters imports (to the compatibility > layer) > and fixes method and attribute access. > > b) A secondary script that alters the imports from the compatibility > layer > and fixes as much as possible the things that need to change in > order to > make the switch away from the compatibility layer to work > correctly. > > > While it is not foolproof, I think this will cover most of the issues > and make conversion relatively easy. This will also let us develop > NumPy without undue concern for compatibility with older packages. > > This must all be in place before 1.0 release candidate 1 comes out. > > Comments and criticisms welcome. > > -Travis > > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Wellary at yahoo.com Fri Aug 4 22:45:08 2006 From: Wellary at yahoo.com (Larry Welenc) Date: Fri, 4 Aug 2006 19:45:08 -0700 Subject: [Numpy-discussion] ImportError: cannot import name oldnumeric Message-ID: I receive an error message when trying to import scipy: import scipy File "C:\Python24\Lib\site-packages\scipy\__init__.py", line 32, in -toplevel- from numpy import oldnumeric ImportError: cannot import name oldnumeric Numpy is installed. How do I correct this problem? Larry W -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Sat Aug 5 03:59:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 05 Aug 2006 01:59:33 -0600 Subject: [Numpy-discussion] SciPy SVN and NumPy SVN should work together now Message-ID: <44D44FE5.7030808@ieee.org> I've finished the updates to backward compatibility to Numeric. SciPy passes all tests. Please report any outstanding issues you may encounter. It would be nice to remove dependency on oldnumeric from SciPy entirely. -Travis
From fullung at gmail.com Sat Aug 5 18:11:23 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 00:11:23 +0200 Subject: [Numpy-discussion] NumPy documentation Message-ID: Hello all With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation a bit before the final release. Some things we might want to think about: 1. Documentation Sprint This page: http://www.scipy.org/SciPy2006/CodingSprints mentions a possible Documentation Sprint at SciPy 2006. Does anybody know if this is going to happen? 2. Tickets for functions missing docstrings Would it be helpful to create tickets for functions that currently don't have docstrings? If not, is there a better way we can keep track of the state of the documentation? 3. Examples in documentation Do we want to include examples in the docstrings? Some functions already do, and I think this can be quite useful when one is exploring the library.
Maybe the example list: http://www.scipy.org/Numpy_Example_List should be incorporated into the docstrings? Then we can also set up doctests to make sure that all the examples really work. 4. Documentation format If someone wants to submit documentation to be included, say as patches attached to tickets, what kind of format do we want? There's already various PEPs dealing with this topic: Docstring Processing System Framework http://www.python.org/dev/peps/pep-0256/ Docstring Conventions http://www.python.org/dev/peps/pep-0257/ Docutils Design Specification http://www.python.org/dev/peps/pep-0258/ reStructuredText Docstring Format http://www.python.org/dev/peps/pep-0287/ 5. Documentation tools A quick search turned up docutils: http://docutils.sourceforge.net/ and epydoc: http://epydoc.sourceforge.net/ Both of these support restructured text, so that looks like the way to go. I think epydoc can handle LaTeX equations and some LaTeX support has also been added to docutils recently. This might be useful for describing some functions. Something else to consider is pydoc compatibility. NumPy currently breaks pydoc: http://projects.scipy.org/scipy/numpy/ticket/232 It also breaks epydoc 3.0a2 (maybe an epydoc bug): http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 455&atid=405618 Anything else? How should we proceed to improve NumPy's documentation? Regards, Albert From gruben at bigpond.net.au Sat Aug 5 22:28:19 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sun, 06 Aug 2006 12:28:19 +1000 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: <44D553C3.4010107@bigpond.net.au> All excellent suggestions Albert. What about creating a numpy version of either the main Numeric or numarray document? I would like to see examples included in numpy of all functions. 
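As a concrete sketch of the doctest idea raised above: an example written in >>> form inside a docstring can be re-run automatically, so a wrong example fails loudly instead of rotting in the docs. scaled_arange here is a made-up function purely for illustration:

```python
import doctest

import numpy as np


def scaled_arange(n, factor):
    """Return factor * arange(n).

    Hypothetical function, made up to illustrate docstring examples:

    >>> scaled_arange(4, 2)
    array([0, 2, 4, 6])
    """
    return factor * np.arange(n)


# doctest collects the >>> example from the docstring, executes it,
# and compares the printed output against the expected text.
result = doctest.testmod(verbose=False)
print(result)
```
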
However, I think a better way to do this would be to place all examples in a separate module and create a function such as example() which would then allow something like example(arange) to spit out the example code. This would make it easier to include multiple examples for each command and to actually execute the example code, which I think is a necessary ability to make the examples testable. Examples could go in like doctests with some sort of delimiting so that they can have numbers generated and be referred to, so that you could execute, say, the 3rd example for the arange() function. Perhaps a runexample() function should be created for this, or perhaps provide arguments for the example() function like example(name, number, run). The Maxima CAS package has something like this and also has an apropos() command which lists commands with similar sounding names to the argument. We could implement something similar but better by searching the examples module for similar commands, but also listing "See Also" cross references like those in the Numpy_Example_List. Gary R. Albert Strasheim wrote: > Hello all > > With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation > a bit before the final release. Some things we might want to think about: > > 1. Documentation Sprint > > This page: > > http://www.scipy.org/SciPy2006/CodingSprints > > mentions a possible Documentation Sprint at SciPy 2006. Does anybody know if > this is going to happen? > > 2. Tickets for missing functions missing docstrings > > Would it be helpful to create tickets for functions that currently don't > have docstrings? If not, is there a better way we can keep track of the > state of the documentation? > > 3. Examples in documentation > > Do we want to include examples in the docstrings? Some functions already do, > and I think think this can be quite useful when one is exploring the > library.
> > Maybe the example list: > > http://www.scipy.org/Numpy_Example_List > > should be incorporated into the docstrings? Then we can also set up doctests > to make sure that all the examples really work. > > 4. Documentation format > > If someone wants to submit documentation to be included, say as patches > attached to tickets, what kind of format do we want? > > There's already various PEPs dealing with this topic: > > Docstring Processing System Framework > http://www.python.org/dev/peps/pep-0256/ > > Docstring Conventions > http://www.python.org/dev/peps/pep-0257/ > > Docutils Design Specification > http://www.python.org/dev/peps/pep-0258/ > > reStructuredText Docstring Format > http://www.python.org/dev/peps/pep-0287/ > > 5. Documentation tools > > A quick search turned up docutils: > > http://docutils.sourceforge.net/ > > and epydoc: > > http://epydoc.sourceforge.net/ > > Both of these support restructured text, so that looks like the way to go. I > think epydoc can handle LaTeX equations and some LaTeX support has also been > added to docutils recently. This might be useful for describing some > functions. > > Something else to consider is pydoc compatibility. NumPy currently breaks > pydoc: > > http://projects.scipy.org/scipy/numpy/ticket/232 > > It also breaks epydoc 3.0a2 (maybe an epydoc bug): > > http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 > 455&atid=405618 > > Anything else? How should we proceed to improve NumPy's documentation? > > Regards, > > Albert From davidgrant at gmail.com Sat Aug 5 23:45:49 2006 From: davidgrant at gmail.com (David Grant) Date: Sat, 5 Aug 2006 20:45:49 -0700 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: What about the documentation that already exists here: http://www.tramy.us/ I think the more people that buy it the better since that money goes to support Travis does it not? 
Dave On 8/5/06, Albert Strasheim wrote: > > Hello all > > With NumPy 1.0 mere weeks away, I'm hoping we can improve the > documentation > a bit before the final release. Some things we might want to think about: > > 1. Documentation Sprint > > This page: > > http://www.scipy.org/SciPy2006/CodingSprints > > mentions a possible Documentation Sprint at SciPy 2006. Does anybody know > if > this is going to happen? > > 2. Tickets for missing functions missing docstrings > > Would it be helpful to create tickets for functions that currently don't > have docstrings? If not, is there a better way we can keep track of the > state of the documentation? > > 3. Examples in documentation > > Do we want to include examples in the docstrings? Some functions already > do, > and I think think this can be quite useful when one is exploring the > library. > > Maybe the example list: > > http://www.scipy.org/Numpy_Example_List > > should be incorporated into the docstrings? Then we can also set up > doctests > to make sure that all the examples really work. > > 4. Documentation format > > If someone wants to submit documentation to be included, say as patches > attached to tickets, what kind of format do we want? > > There's already various PEPs dealing with this topic: > > Docstring Processing System Framework > http://www.python.org/dev/peps/pep-0256/ > > Docstring Conventions > http://www.python.org/dev/peps/pep-0257/ > > Docutils Design Specification > http://www.python.org/dev/peps/pep-0258/ > > reStructuredText Docstring Format > http://www.python.org/dev/peps/pep-0287/ > > 5. Documentation tools > > A quick search turned up docutils: > > http://docutils.sourceforge.net/ > > and epydoc: > > http://epydoc.sourceforge.net/ > > Both of these support restructured text, so that looks like the way to go. > I > think epydoc can handle LaTeX equations and some LaTeX support has also > been > added to docutils recently. This might be useful for describing some > functions. 
> Something else to consider is pydoc compatibility. NumPy currently breaks > pydoc: > > http://projects.scipy.org/scipy/numpy/ticket/232 > > It also breaks epydoc 3.0a2 (maybe an epydoc bug): > > > http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32 > 455&atid=405618 > > Anything else? How should we proceed to improve NumPy's documentation? > > Regards, > > Albert > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Aug 6 03:51:54 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 06 Aug 2006 02:51:54 -0500 Subject: [Numpy-discussion] NumPy documentation In-Reply-To: References: Message-ID: David Grant wrote: > What about the documentation that already exists here: http://www.tramy.us/ Essentially every function and class needs a docstring whether or not there is a manual available. Neither one invalidates the need for the other. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From davidgrant at gmail.com Sun Aug 6 04:14:58 2006 From: davidgrant at gmail.com (David Grant) Date: Sun, 6 Aug 2006 01:14:58 -0700 Subject: [Numpy-discussion] divmod issue Message-ID: The following lines of code: from numpy import floor div, mod = divmod(floor(1.5), 12) generate an exception: ValueError: need more than 0 values to unpack in numpy-0.9.8. Does anyone else see this? It might be due to the fact that floor returns a float64scalar. Should I be forced to cast that to an int before calling divmod with it? -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Aug 6 04:18:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 06 Aug 2006 03:18:23 -0500 Subject: [Numpy-discussion] divmod issue In-Reply-To: References: Message-ID: David Grant wrote: > The following lines of code: > > from numpy import floor > div, mod = divmod(floor(1.5), 12) > > generate an exception: > > ValueError: need more than 0 values to unpack > > in numpy-0.9.8. Does anyone else see this? It might be due to the fact > that floor returns a float64scalar. Should I be forced to cast that to > an int before calling divmod with it? I don't see an exception with a more recent numpy (r2881, to be precise). Please try a later version. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
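For reference, the divmod call from the thread above can be checked directly on a current numpy; this is a sketch of the working behaviour Robert reports for r2881, where no cast to int is needed:

```python
import numpy as np

# numpy.floor returns a numpy float64 scalar rather than a Python float.
x = np.floor(1.5)

# On a recent numpy this unpacks fine; on 0.9.8 this line is exactly
# where the "need more than 0 values to unpack" error was raised.
div, mod = divmod(x, 12)
print(div, mod)  # 0.0 1.0
```
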
-- Umberto Eco From svetosch at gmx.net Sun Aug 6 15:03:32 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sun, 06 Aug 2006 21:03:32 +0200 Subject: [Numpy-discussion] fixing diag() for matrices In-Reply-To: References: <44C64AA2.7070906@gmx.net> <44C652C0.9040806@gmx.net> <44CA0716.2000707@gmx.net> <44CA3EC0.9020404@noaa.gov> <44CA9283.5030108@gmx.net> Message-ID: <44D63D04.9060600@gmx.net> Charles R Harris schrieb: > Hi Sven, > > On 7/28/06, *Sven Schreiber* > wrote: > > Here's my attempt at summarizing the diag-discussion. > > > > > 2) Deprecate the use of diag which is overloaded with making diagonal > matrices as well as getting diagonals. Instead, use the existing > .diagonal() for getting a diagonal, and introduce a new make_diag() > function which could easily work for numpy-arrays and numpy-matrices > alike. > > > This would be my preference, but with functions {get,put}diag. We could > also add a method or function asdiag, which would always return a > diagonal matrix made from *all* the elements of the matrix taken in > order. For (1,n) or (n,1) this would do what you want. For other > matrices the result would be something new and probably useless, but at > least it wouldn't hurt. > This seems to have been implemented now by the new diagflat() function. So, matrix users can now use m.diagonal() for the matrix->vector direction of diag(), and diagflat(v) for the vector->matrix side of diag(), and always get numpy-matrix output for numpy-matrix input. Thanks a lot for making this possible! One (really minor) comment: "diagflat" as a name is not optimal imho. Are other suggestions welcome, or is there a compelling reason for this name? Thanks, sven From wbaxter at gmail.com Mon Aug 7 01:02:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 7 Aug 2006 14:02:05 +0900 Subject: [Numpy-discussion] comments on r_ and c_ ? 
In-Reply-To: <44CE3EE5.1000904@ieee.org> References: <44CE3EE5.1000904@ieee.org> Message-ID: On 8/1/06, Travis Oliphant wrote: > Bill Baxter wrote: > > When you have a chance, could the powers that be make some comment on > > the r_ and c_ situation? > r_ and c_ were in SciPy and have been there for several years. > > For NumPy, c_ has been deprecated (but not removed because it is used in > SciPy). > > The functionality of c_ is in r_ so it doesn't add anything. I don't see how r_ offers the ability to stack columns like this: >>> c_[ [[0],[1],[2]], [[4],[5],[6]] ] array([[0, 4], [1, 5], [2, 6]]) > There is going to be overlap with long-name functions because of > this. I have not had time to review Bill's suggestions yet --- were > they filed as a ticket? A ticket is the best way to keep track of > issues at this point. I just filed it as #235. But then I noticed I had already filed it previously as #201. Sorry about that. Anyway, it's definitely in there now. Regards, --Bill From klemm at phys.ethz.ch Mon Aug 7 04:52:58 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Mon, 07 Aug 2006 10:52:58 +0200 Subject: [Numpy-discussion] numpy compilation question Message-ID: Hello, I try to compile numpy-1.0b1 with blas and lapack support. I have compiled blas and lapack according to the instructions in http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 . I copied the libraries to /scratch/python2.4/lib and set the environment variables accordingly. python setup.py config in the numpy directory then finds the libraries. If I then do python setup.py build, the compilation dies with the error message: ..
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x28ae): In function `dotblas_vdot': numpy/core/blasdot/_dotblas.c:971: undefined reference to `PyArg_ParseTuple' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b45):numpy/core/blasdot/_dotblas.c:1002: undefined reference to `PyTuple_New' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b59):numpy/core/blasdot/_dotblas.c:83: undefined reference to `PyArg_ParseTuple' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b6d):numpy/core/blasdot/_dotblas.c:107: undefined reference to `_Py_NoneStruct' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cba):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyExc_ValueError' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cc9):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyErr_SetString' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d1c):numpy/core/blasdot/_dotblas.c:1029: undefined reference to `PyEval_SaveThread' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d3f):numpy/core/blasdot/_dotblas.c:1049: undefined reference to `PyEval_RestoreThread' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d63):numpy/core/blasdot/_dotblas.c:1045: undefined reference to `cblas_cdotc_sub' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d84):numpy/core/blasdot/_dotblas.c:1041: undefined reference to `cblas_zdotc_sub' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2da1):numpy/core/blasdot/_dotblas.c:1037: undefined reference to `cblas_sdot' build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2dc6):numpy/core/blasdot/_dotblas.c:1033: undefined reference to `cblas_ddot' /usr/lib/gcc-lib/x86_64-redhat-linux/3.2.3/libfrtbegin.a(frtbegin.o)(.text+0x22): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status error:
Command "/usr/bin/g77 -L/scratch/apps/lib build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/scratch/python2.4/lib -lfblas -lg2c -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1 I try this on a dual processor Xeon machine with gcc 3.2.3 under an old redhat distribution. Therefore, using the libraries delivered with the distro doesn't work, as they are broken. At first I tried to compile numpy with atlas support but I got similar problems. I have attached the full output of the failed build. I would be very grateful if somebody with a little more experience with compilers could have a look at it and maybe point me in the right direction. Many thanks in advance, Hanno -- Hanno Klemm klemm at phys.ethz.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log.gz Type: application/x-gzip Size: 4478 bytes Desc: not available URL: From david.huard at gmail.com Mon Aug 7 08:48:52 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 7 Aug 2006 08:48:52 -0400 Subject: [Numpy-discussion] Histogram versus histogram2d In-Reply-To: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> References: <3ff66ae00608030749h42e53469j5aa0901628622d79@mail.gmail.com> Message-ID: <91cf711d0608070548j2ebda5bat1a92a1932a04388b@mail.gmail.com> I have noticed some differences between the 1d histogram and 2d histogram. The > histogram function bins everything between the elements of edges, and > then includes everything greater than the last edge element in the > last bin. The histogram2d function only bins in the range specified > by edges. Is there a reason these two functions do not operate in the > same way?
> Hi Mikolai, The reason is that I didn't like the way histogram handled outliers, so I wrote histogram1d, histogram2d, and histogramdd to handle 1d, 2d and nd data series. I submitted those functions and only histogram2d got included in numpy, hence the clash. Travis suggested that histogram1d and histogramdd could go into scipy, but with the new compatibility paradigm, I suggest that the old histogram is moved into the compatibility module and histogram1d is renamed to histogram and put into the main namespace. histogramdd could indeed go into scipy.stats. I'll submit a new patch if there is some interest. The new function takes an axis argument so you can make a histogram out of an nd array rowwise or columnwise. Outliers are not counted, and the bin array has length (nbin + 1) (+1 for the right-hand side edge). The new function will break some code relying on the old behavior, so its inclusion presupposes the agreement of the users. You can find the code at ticket 189. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From meesters at uni-mainz.de Mon Aug 7 13:29:44 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 7 Aug 2006 19:29:44 +0200 Subject: [Numpy-discussion] numpy and unittests Message-ID: <200608071929.44796.meesters@uni-mainz.de> Hi, I used to work with some unittest scripts for a bigger project of mine. Now that I started the project again the tests don't work anymore, using numpy version '0.9.5.2100'.
The errors I get look like this: ERROR: _normalize() should return dataset scaled between 0 and 1 ---------------------------------------------------------------------- Traceback (most recent call last): File "testingSAXS.py", line 265, in testNormalization self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5) File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual if not first == second: File "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", line 174, in __eq__ if self.intensity == other.intensity: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() The 'self.intensity' objects are 1D-arrays containing integers <= 1E6. The unittest script looks like: if __name__=='__main__': from Data import * from Utils import * import unittest def test__eq__(self): """__eq__ should return True with identical array data""" self.assert_(self.test1 == self.test2) suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(Test_SAXS_Sanity)) unittest.TextTestRunner(verbosity=1).run(suite) Any ideas what I have to change? (Possibly trivial, but I have no clue.) TIA Cheers Christian From robert.kern at gmail.com Mon Aug 7 14:04:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 07 Aug 2006 13:04:24 -0500 Subject: [Numpy-discussion] numpy and unittests In-Reply-To: <200608071929.44796.meesters@uni-mainz.de> References: <200608071929.44796.meesters@uni-mainz.de> Message-ID: Christian Meesters wrote: > Hi, > > I used to work with some unittest scripts for a bigger project of mine. Now > that I started the project again the tests don't work anymore, using numpy > version '0.9.5.2100' .
> > The errors I get look are like this: > > ERROR: _normalize() should return dataset scaled between 0 and 1 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "testingSAXS.py", line 265, in testNormalization > > self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5) > File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual > if not first == second: > File > "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", > line 174, in __eq__ > if self.intensity == other.intensity: > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > The 'self.intensity' objects are 1D-arrays containing integers <= 1E6. > > The unittest script looks like: > > if __name__=='__main__': > from Data import * > from Utils import * > import unittest > > > def test__eq__(self): > """__eq__ should return True with identical array data""" > self.assert_(self.test1 == self.test2) > > suite = unittest.TestSuite() > suite.addTest(unittest.makeSuite(Test_SAXS_Sanity)) > > unittest.TextTestRunner(verbosity=1).run(suite) > > Any ideas what I have to change? (Possibly trivial, but I have no clue.) self.assert_((self.test1 == self.test2).all()) I'm afraid that your test was always broken. Numeric used the convention that if *any* value in a boolean array was True, then the array would evaluate to True when used as a truth value in an if: clause. However, you almost certainly wanted to test that *all* of the values were True. This is why we now raise an exception; lots of people got tripped up over that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
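Robert's fix can be seen in a minimal, self-contained test case; the arrays here are hypothetical stand-ins for Christian's intensity data:

```python
import unittest

import numpy as np


class TestArrayEquality(unittest.TestCase):
    def test_eq(self):
        a = np.array([1, 2, 3])  # stand-ins for the intensity arrays
        b = np.array([1, 2, 3])
        # Using the comparison array as a truth value is what raised
        # the ValueError in the traceback above:
        self.assertRaises(ValueError, bool, a == b)
        # Reduce the boolean array explicitly instead:
        self.assertTrue((a == b).all())


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArrayEquality)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```
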
-- Umberto Eco From wbaxter at gmail.com Mon Aug 7 23:18:17 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 8 Aug 2006 12:18:17 +0900 Subject: [Numpy-discussion] Examples of basic C API usage? Message-ID: I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an example of just a simple straightforward usage of the C-API. For instance make a few arrays by hand in C and then call numpy.multiply() on them. So far my attempts to call PyArray_SimpleNewFromData all result in segfaults. Anyone have such an example? --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From klemm at phys.ethz.ch Tue Aug 8 05:22:46 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 08 Aug 2006 11:22:46 +0200 Subject: [Numpy-discussion] numpy import problem Message-ID: Hello, finally after sorting out some homemade problems I managed to compile numpy-1.0b1. If I then start it from the directory where I compiled it, it works fine. However after I installed numpy with python setup.py install --prefix=/scratch/python2.4 I get the error message: Python 2.4.3 (#7, Aug 2 2006, 18:55:46) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/scratch/python2.4/lib/python2.4/site-packages/numpy/__init__.py", line 39, in ? import linalg File "/scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? from linalg import * File "/scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 24, in ? from numpy.linalg import lapack_lite ImportError: /scratch/python2.4/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: atl_f77wrap_zgemv__ >>> I suppose I have to set a path somewhere to the directory where atlas is installed. How do I do this? 
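One thing worth checking in a situation like Hanno's is which BLAS/LAPACK libraries the numpy build actually recorded; numpy's own show_config() report lists them. This is only a diagnostic sketch, not a fix, and the output format varies between numpy versions:

```python
import numpy as np

# Prints the BLAS/LAPACK configuration this numpy build was compiled
# against; if ATLAS was picked up, its library directories appear here,
# and the dynamic loader must be able to find them at import time.
np.show_config()
```
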
Hanno -- Hanno Klemm klemm at phys.ethz.ch
From karol.langner at kn.pl Tue Aug 8 09:45:49 2006 From: karol.langner at kn.pl (Karol Langner) Date: Tue, 8 Aug 2006 15:45:49 +0200 Subject: [Numpy-discussion] Examples of basic C API usage? In-Reply-To: References: Message-ID: <200608081545.50274.karol.langner@kn.pl> On Tuesday 08 of August 2006 05:18, Bill Baxter wrote: > I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an > example of just a simple straightforward usage of the C-API. > For instance make a few arrays by hand in C and then call numpy.multiply() > on them. So far my attempts to call PyArray_SimpleNewFromData all result > in segfaults. > Anyone have such an example? > > --Bill Have you looked here? http://numeric.scipy.org/numpydoc/numpy-13.html#pgfId-36640 Karol -- written by Karol Langner wto sie 8 15:45:16 CEST 2006
From ggumas at gmail.com Tue Aug 8 17:02:32 2006 From: ggumas at gmail.com (George Gumas) Date: Tue, 8 Aug 2006 17:02:32 -0400 Subject: [Numpy-discussion] numpy and matplotlib Message-ID: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> I downloaded numpy 10000 and matplotlib and when running numpy i get the error message below from matplotlib._ns_cntr import * RuntimeError: module compiled against version 90709 of C-API but this version of numpy is 1000000 How do I go about chaning the version of rither numpy or matplotlib Thanks George -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dd55 at cornell.edu Tue Aug 8 17:11:49 2006 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 8 Aug 2006 17:11:49 -0400 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: <200608081711.49315.dd55@cornell.edu> On Tuesday 08 August 2006 17:02, George Gumas wrote: > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib This question is more appropriate for the mpl list, and it was discussed there late last week. The next matplotlib release will support numpy beta 1 and 2. Darren From wbaxter at gmail.com Tue Aug 8 17:11:57 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 06:11:57 +0900 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: Matplotlib needs to be recompiled against the latest Numpy. They should release a new version compatible with Numpy 1.0 beta soon. --bb On 8/9/06, George Gumas wrote: > > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib > > Thanks > George > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From fullung at gmail.com Tue Aug 8 17:24:08 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 8 Aug 2006 23:24:08 +0200 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes Message-ID: Hello all With the nice ctypes integration in NumPy, and with Python 2.5 which will include ctypes around the corner, a remote possibility exists that within the next year or two, I might not be the only person that wants to use NumPy with ctypes. This is probably going to mean that this someone is going to want to build a shared library for use with ctypes. This is all well and good if you're using a build tool that knows about shared libraries, but in case this person is stuck with distutils, here is what we might want to do. Following this thread from SciPy-dev: http://projects.scipy.org/pipermail/scipy-dev/2006-April/005708.html I came up with the following plan. As it happens, pretending your shared library is a Python extension mostly works. In your setup.py you can do something like this: config = Configuration(package_name,parent_package,top_path) config.add_extension('libsvm_', define_macros=[('LIBSVM_EXPORTS', None), ('LIBSVM_DLL', None)], sources=[join('libsvm-2.82', 'svm.cpp')], depends=[join('libsvm-2.82', 'svm.h')]) First caveat: on Windows, distutils forces the linker to look for an exported symbol called init{modulename}.
In your code you'll have to add an empty function like this: void initlibsvm_() {} This gets us a compiled Python extension, which also happens to be a shared library on every platform I know of, which is Linux and Windows. Counter-examples, anyone? Next caveat: on Windows, shared libraries, aka DLLs, typically have a .dll extension. However, Python extensions have a .pyd extension. We have a utility function in NumPy called ctypes_load_library which handles finding and loading of shared libraries with ctypes. Currently, shared library extensions (.dll, .so, .dylib) are hardcoded in this function. I propose we modify this function to look something like this: def ctypes_load_library(libname, loader_path, distutils_hack=False): ... If distutils_hack is True, instead of the default mechanism (which is currently hardcoded extensions), ctypes_load_library should do: import distutils.sysconfig so_ext = distutils.sysconfig.get_config_var('SO') to figure out the extension it should use to load shared libraries. This should make it reasonably easy for people to build shared libraries with distutils and use them with NumPy and ctypes. Comments appreciated. Someone checking something along these lines into SVN appreciated more. A solution that doesn't make me want to cry appreciated most. Thanks for reading. Regards, Albert P.S. As it happens, the OOF2 guys have already created a SharedLibrary builder for distutils, but integrating this into numpy.distutils is probably non-trivial. http://www.ctcms.nist.gov/oof/oof2.html
From wbaxter at gmail.com Tue Aug 8 17:25:10 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 06:25:10 +0900 Subject: [Numpy-discussion] Examples of basic C API usage? In-Reply-To: <200608081545.50274.karol.langner@kn.pl> References: <200608081545.50274.karol.langner@kn.pl> Message-ID: Ah, great. That is helpful, though it does seem to be a bit outdated.
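[A frequent cause of the PyArray_SimpleNewFromData segfaults mentioned in this thread is calling the array C-API without first running import_array() in the module init function, which leaves the API function table NULL. The sketch below illustrates this; the module and function names are invented, and it needs the Python and numpy headers to compile, so it is not runnable in isolation:]

```c
/* Minimal extension-module sketch; "example"/"make_array" are
 * illustrative names, not from the thread. */
#include <Python.h>
#include <numpy/arrayobject.h>

static PyObject *make_array(PyObject *self, PyObject *args)
{
    /* The buffer must outlive the returned array. */
    static double data[] = {1.0, 2.0, 3.0};
    npy_intp dims[1] = {3};
    /* Without import_array() in the init function, this call goes
     * through a NULL API table and segfaults. */
    return PyArray_SimpleNewFromData(1, dims, NPY_DOUBLE, data);
}

static PyMethodDef methods[] = {
    {"make_array", make_array, METH_NOARGS, "wrap a static C buffer"},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initexample(void)
{
    Py_InitModule("example", methods);
    import_array();   /* the step whose omission causes the segfault */
}
```

[Built as a normal extension, `import example; example.make_array()` then hands back an ndarray view of the static C buffer, which can be passed to numpy.multiply() like any other array.]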
--bb On 8/8/06, Karol Langner wrote: > > On Tuesday 08 of August 2006 05:18, Bill Baxter wrote: > > I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be > an > > example of just a simple straightforward usage of the C-API. > > For instance make a few arrays by hand in C and then call numpy.multiply > () > > on them. So far my attempts to call PyArray_SimpleNewFromData all > result > > in segfaults. > > Anyone have such an example? > > > > --Bill > > Have you looked here? > > http://numeric.scipy.org/numpydoc/numpy-13.html#pgfId-36640 > > Karol > > -- > written by Karol Langner > wto sie 8 15:45:16 CEST 2006
From wbaxter at gmail.com Tue Aug 8 20:22:37 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 9 Aug 2006 09:22:37 +0900 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: On 8/9/06, Albert Strasheim wrote: > > Next caveat: on Windows, shared libraries aka DLLs, typically have a .dll > extension. However, Python extensions have a .pyd extension. > > We have a utility function in NumPy called ctypes_load_library which > handles > finding and loading of shared libraries with ctypes. Currently, shared > library extensions (.dll, .so, .dylib) are hardcoded in this function.
> > I propose we modify this function to look something like this: > > def ctypes_load_library(libname, loader_path, distutils_hack=False): > ... > > If distutils_hack is True, instead of the default mechanism (which is > currently hardcoded extensions), ctypes_load_library should do: > > import distutils.config > so_ext = distutils.sysconfig.get_config_var('SO') > > to figure out the extension it should use to load shared libraries. This > should make it reasonably easy for people to build shared libraries with > distutils and use them with NumPy and ctypes. Wouldn't it make more sense to just rename the .pyd generated by distutils to .dll or .so? Especially since the .pyd generated by distutils won't actually be a python extension module. This renaming could be automated by a simple python script that wraps distutils. The addition of the init{modulename} function could also be done by that script. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From cwmoad at gmail.com Tue Aug 8 20:52:06 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Tue, 8 Aug 2006 20:52:06 -0400 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> Message-ID: <6382066a0608081752q51300fder958703d566881f4f@mail.gmail.com> We're waiting on some possible changes in the numpy c-api before scipy. Hopefully we will have a working release in the next week. On 8/8/06, Bill Baxter wrote: > Matplotlib needs to be recompiled against the latest Numpy. > They should release a new version compatible with Numpy 1.0 beta soon. 
> --bb > > > On 8/9/06, George Gumas wrote: > > > > I downloaded numpy 10000 and matplotlib and when running numpy i get the > error message below > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > > How do I go about chaning the version of rither numpy or matplotlib > > Thanks > George >
From robert.kern at gmail.com Tue Aug 8 21:47:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 08 Aug 2006 20:47:03 -0500 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: Bill Baxter wrote: > Wouldn't it make more sense to just rename the .pyd generated by > distutils to .dll or .so?
Especially since the .pyd generated by > distutils won't actually be a python extension module. This renaming > could be automated by a simple python script that wraps distutils. The > addition of the init{modulename} function could also be done by that > script. The strategy of "post-processing" after the setup() is not really robust. I've encountered a number of packages that try to do things like that, and I've never had one work right. And no, it won't solve the init{modulename} problem, either. It's a problem that occurs at build-time, not import-time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From strawman at astraw.com Tue Aug 8 21:52:12 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 08 Aug 2006 18:52:12 -0700 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: <44D93FCC.3070405@astraw.com> Dear Albert, I have started to use numpy and ctypes together and I've been quite pleased. Thanks for your efforts and writings on the wiki. On the topic of ctypes but not directly following from your email: I noticed immediately that the .ctypes attribute of an array is going to be a de-facto array interface, and wondered whether it would actually be better to write some code that takes the __array_struct__ interface and exposes that as an object with ctypes-providing attributes. This way, it could be used by all software exposing the __array_struct__ interface. Still, even with today's implementation, this could be achieved with numpy.asarray( my_array_struct_object ).ctypes. Back to your email: I don't understand why you're trying to build a shared library with distutils. What's wrong with a plain old c-compiler and linker (and mt.exe if you're using MS VC 8)? You can build shared libraries this way with Makefiles, scons, Visual Studio, and about a billion other solutions that have evolved since early C days.
You can build shared libraries this way with Makefiles, scons, Visual Studio, and about a billion other solutions that have evolved since early C days. I can understand the desire of getting "python setup.py install" to work, but I suspect spawning an appropriate subprocess to do the compilation would be easier and more robust than attempting to get distutils to do something it's not designed for. (Then again, to see what numpy distutils can do, well, let's just say I'm amazed.) Along these lines, I noticed that ctypes-itself seems to have put some hooks into setup.py to perform at least part of the configure/make dance on linux, although I haven't investigated any further yet. Perhaps that's a better way to go than bending distutils to your will? Finally, the ctypes_load_library() function was broken for me and so I just ended up using the appropriate ctypes calls directly. (I should report this bug, I know, and I haven't yet... Bad Andrew.) But the bigger issue for me is that this is a ctypes-level convenience function, and I can't see why it should be in numpy. Is there any reason it should go in numpy and not into ctypes itself where it would surely receive more review and widespread use if it's useful? Albert Strasheim wrote: >Hello all > >With the nice ctypes integration in NumPy, and with Python 2.5 which will >include ctypes around the corner, a remote possibility exists that within >the next year or two, I might not be the only person that wants to use NumPy >with ctypes. > >This is probably going to mean that this someone is going to want to build a >shared library for use with ctypes. This is all well and good if you're >using a build tool that knows about shared libraries, but in case this >person is stuck with distutils, here is what we might want to do. > >Following this thread from SciPy-dev: > >http://projects.scipy.org/pipermail/scipy-dev/2006-April/005708.html > >I came up with the following plan. 
> >As it happens, pretending your shared library is a Python extension mostly >works. In your setup.py you can do something like this: > >config = Configuration(package_name,parent_package,top_path) >config.add_extension('libsvm_', > define_macros=[('LIBSVM_EXPORTS', None), > ('LIBSVM_DLL', None)], > sources=[join('libsvm-2.82', 'svm.cpp')], > depends=[join('libsvm-2.82', 'svm.h')]) > >First caveat: on Windows, distutils forces the linker to look for an >exported symbol called init. In your code you'll have to >add an empty function like this: > >void initlibsvm_() {} > >This gets us a compiled Python extension, which also happens to be a shared >library on every platform I know of, which is Linux and Windows. >Counter-examples anyone?. > >Next caveat: on Windows, shared libraries aka DLLs, typically have a .dll >extension. However, Python extensions have a .pyd extension. > >We have a utility function in NumPy called ctypes_load_library which handles >finding and loading of shared libraries with ctypes. Currently, shared >library extensions (.dll, .so, .dylib) are hardcoded in this function. > >I propose we modify this function to look something like this: > >def ctypes_load_library(libname, loader_path, distutils_hack=False): > ... > >If distutils_hack is True, instead of the default mechanism (which is >currently hardcoded extensions), ctypes_load_library should do: > >import distutils.config >so_ext = distutils.sysconfig.get_config_var('SO') > >to figure out the extension it should use to load shared libraries. This >should make it reasonably easy for people to build shared libraries with >distutils and use them with NumPy and ctypes. > >Comments appreciated. Someone checking something along these lines into SVN >appreciated more. A solution that doesn't make me want to cry appreciated >most. > >Thanks for reading. > >Regards, > >Albert > >P.S. 
As it happens, the OOF2 guys have already created a SharedLibrary >builder for distutils, but integrating this into numpy.distutils is probably >non-trivial. > >http://www.ctcms.nist.gov/oof/oof2.html > >
From matthew.brett at gmail.com Tue Aug 8 21:52:11 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 9 Aug 2006 02:52:11 +0100 Subject: [Numpy-discussion] astype char conversion Message-ID: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com> Hi, Sorry if this is silly question, but should this work to convert from int8 to character type? a = array([104, 105], dtype=N.int8) a.astype('|S1') I was a bit surprised by the output: array([1, 1], dtype='|S1') Thanks a lot, Matthew
From robert.kern at gmail.com Tue Aug 8 22:02:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 08 Aug 2006 21:02:11 -0500 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: <44D93FCC.3070405@astraw.com> References: <44D93FCC.3070405@astraw.com> Message-ID: Andrew Straw wrote: > Back to your email: I don't understand why you're trying to build a > shared library with distutils. What's wrong with a plain old c-compiler > and linker (and mt.exe if you're using MS VC 8)? You can build shared > libraries this way with Makefiles, scons, Visual Studio, and about a > billion other solutions that have evolved since early C days.
I can > understand the desire of getting "python setup.py install" to work, but > I suspect spawning an appropriate subprocess to do the compilation would > be easier and more robust than attempting to get distutils to do > something it's not designed for. (Then again, to see what numpy > distutils can do, well, let's just say I'm amazed.) Along these lines, I > noticed that ctypes-itself seems to have put some hooks into setup.py to > perform at least part of the configure/make dance on linux, although I > haven't investigated any further yet. Perhaps that's a better way to go > than bending distutils to your will? Well, the wrapper he's writing is destined for scipy, so "python setup.py build" must work. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From robert.kern at gmail.com Tue Aug 8 22:12:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 08 Aug 2006 21:12:23 -0500 Subject: [Numpy-discussion] NumPy, shared libraries and ctypes In-Reply-To: References: Message-ID: Albert Strasheim wrote: > Comments appreciated. Someone checking something along these lines into SVN > appreciated more. A solution that doesn't make me want to cry appreciated > most. > P.S. As it happens, the OOF2 guys have already created a SharedLibrary > builder for distutils, but integrating this into numpy.distutils is probably > non-trivial. > > http://www.ctcms.nist.gov/oof/oof2.html I recommend using OOF2's stuff, not the .pyd hack. The latter makes *me* want to cry. If you come up with a patch, post it to the numpy Trac, and I'll check it in. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco
From david at ar.media.kyoto-u.ac.jp Tue Aug 8 23:01:56 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 09 Aug 2006 12:01:56 +0900 Subject: [Numpy-discussion] numpy and matplotlib In-Reply-To: <200608081711.49315.dd55@cornell.edu> References: <3761931b0608081402y7306b15gc8ae2948f088187a@mail.gmail.com> <200608081711.49315.dd55@cornell.edu> Message-ID: <44D95024.1090905@ar.media.kyoto-u.ac.jp> Darren Dale wrote: > On Tuesday 08 August 2006 17:02, George Gumas wrote: > >> I downloaded numpy 10000 and matplotlib and when running numpy i get the >> error message below >> from matplotlib._ns_cntr import * >> RuntimeError: module compiled against version 90709 of C-API but this >> version of numpy is 1000000 >> >> This error may happen if you forgot to rebuild all of matplotlib against the new numpy. Did you try recompiling everything by removing the build directory of matplotlib? David
From benjamin at decideur.info Wed Aug 9 04:25:10 2006 From: benjamin at decideur.info (Benjamin Thyreau) Date: Wed, 9 Aug 2006 10:25:10 +0200 Subject: [Numpy-discussion] Examples of basic C API usage? In-Reply-To: References: Message-ID: <200608091025.10363.benjamin@decideur.info> On Tuesday 8 August 2006 at 05:18, Bill Baxter wrote: > I see Pyrex and SWIG examples in numpy/doc but there doesn't seem to be an > example of just a simple straightforward usage of the C-API. > For instance make a few arrays by hand in C and then call numpy.multiply() > on them. So far my attempts to call PyArray_SimpleNewFromData all result > in segfaults. > Anyone have such an example? > > --Bill For our neuroimaging lib, I had to write some simple, straightforward wrappers for the C library GSL, which you might be interested in taking a quick look at. Trac entry: http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/bindings/lightwrappers.h http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/bindings/lightwrappers.c and half-commented example usage..
http://projects.scipy.org/neuroimaging/ni/browser/fff/trunk/pythonTests/fffctests/lightmoduleExample.c -- Benjamin Thyreau CEA Orsay From david.huard at gmail.com Wed Aug 9 10:35:43 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 9 Aug 2006 10:35:43 -0400 Subject: [Numpy-discussion] Moving docstrings from C to Python In-Reply-To: References: <20060728145400.GN6338@mentat.za.net> Message-ID: <91cf711d0608090735h40ec64f7sbaa0d34ceb6e4978@mail.gmail.com> I started to do the same with array methods, but before I spend too much time on it, I'd like to be sure I'm doing the right thing. 1. In add_newdocs.py, add from numpy.core import ndarray 2. then add an entry for each method, eg add_docstring(ndarray.var, """a.var(axis=None, dtype=None) Return the variance, a measure of the spread of a distribution. The variance is the average of the squared deviations from the mean, i.e. var = mean((x - x.mean())**2). See also: std """) 3. in arraymethods.c, delete static char doc_var[] = ... remove doc_var in {"var", (PyCFunction)array_variance, METH_VARARGS|METH_KEYWORDS, doc_var}, David 2006/7/28, Sasha : > > On 7/28/06, Stefan van der Walt wrote: > > > Would anyone mind if the change was made? If not, where should they > > go? (numpy/add_newdocs.py or numpy/core/something) > > Another +1 for numpy/add_newdocs.py and a suggestion: check for > Py_OptimizeFlag > 1 in add_newdoc so that docstrings are not loaded if > python is invoked with -OO option. This will improve import numpy > time and reduce the memory footprint. I'll make the change if no one > objects. > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. 
Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From haase at msg.ucsf.edu Wed Aug 9 00:53:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 08 Aug 2006 21:53:36 -0700 Subject: [Numpy-discussion] how to reference Numerical Python in a scientific publication Message-ID: <44D96A50.7080002@msg.ucsf.edu> Hi, we have been using Numerical Python as an integral part of a microscope development project over the last few years. So far we have been using exclusively numarray, but now I decided it's time to slowly but surely migrate to numpy. What is the proper way to reference these packages? Thanks to everyone involved, Sebastian Haase UCSF
From haase at msg.ucsf.edu Wed Aug 9 17:02:14 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 9 Aug 2006 14:02:14 -0700 Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept zero length tuple Message-ID: <200608091402.14810.haase@msg.ucsf.edu> Hi! I have a problem with the record array type descriptor. With numarray this used to work. My records are made of n integers and m floats, so I used to be able to specify formats="%di4,%df4"%(self.numInts,self.numFloats) in numarray, which would translate to byteorder = self.isByteSwapped and '>' or '<' type_descr = [("int", "%s%di4" %(byteorder,self.numInts)), ("float", "%s%df4" %(byteorder,self.numFloats))] The problem occurs when numInts or numFloats is zero !?
Could numpy be changed to silently accept this case? Here is the complete traceback + some debug info:
'>0i4'
Traceback (most recent call last): File "", line 1, in ? File "/home/haase/PrLinN/Priithon/Mrc.py", line 56, in bindFile a = Mrc(fn, mode) File "/home/haase/PrLinN/Priithon/Mrc.py", line 204, in __init__ self.doExtHdrMap() File "/home/haase/PrLinN/Priithon/Mrc.py", line 271, in doExtHdrMap self.extHdrArray.dtype = type_descr File "/home/haase/qqq/lib/python/numpy/core/records.py", line 194, in __setattr__ return object.__setattr__(self, attr, val) TypeError: invalid data-type for array >>> U.debug() > /home/haase/qqq/lib/python/numpy/core/records.py(196)__setattr__() -> pass (Pdb) l 191 192 def __setattr__(self, attr, val): 193 try: 194 return object.__setattr__(self, attr, val) 195 except AttributeError: # Must be a fieldname 196 -> pass 197 fielddict = sb.ndarray.__getattribute__(self,'dtype').fields 198 try: 199 res = fielddict[attr][:2] 200 except (TypeError,KeyError): 201 raise AttributeError, "record array has no attribute %s" % attr (Pdb) p val [('int', '>0i4'), ('float', '>2f4')] (Pdb) p attr 'dtype' Thanks, Sebastian Haase
From oliphant.travis at ieee.org Wed Aug 9 18:11:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 09 Aug 2006 16:11:49 -0600 Subject: [Numpy-discussion] astype char conversion In-Reply-To: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com> References: <1e2af89e0608081852s6b5e16c0yd67a3ab2958da067@mail.gmail.com> Message-ID: <44DA5DA5.1010700@ieee.org> Matthew Brett wrote: > Hi, > > Sorry if this is silly question, but should this work to convert from > int8 to character type? > > a = array([104, 105], dtype=N.int8) > a.astype('|S1') > I'm not sure what you are trying to do here, but the standard coercion to strings will generate ['104', '105']. However, you are only allowing 1-character strings, so you get the first character.
If you are wanting to get characters with ASCII codes 104 and 105 you can do that without coercion by simply viewing the memory as a different data-type: a.view('S1') array([h, i], dtype='|S1') -Travis From oliphant.travis at ieee.org Wed Aug 9 18:18:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 09 Aug 2006 16:18:10 -0600 Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept zero length tuple In-Reply-To: <200608091402.14810.haase@msg.ucsf.edu> References: <200608091402.14810.haase@msg.ucsf.edu> Message-ID: <44DA5F22.7080404@ieee.org> Sebastian Haase wrote: > Hi! > I have a problem with record array type descriptor. > With numarray this used to work. > My records made of n integers and m floats. So I used to be able specify > formats="%di4,%df4"%(self.numInts,self.numFloats) in numarray which would > translate to > byteorder = self.isByteSwapped and '>' or '<' > type_descr = [("int", "%s%di4" %(byteorder,self.numInts)), > ("float", "%s%df4" %(byteorder,self.numFloats))] > > The problem occurs when numInts or numFloats is zero !? > Could numpy be changed to silently accept this case? > Here is the complete traceback + some debug info: > If numarray supported it, then we should get NumPy to support it as well unless there is a compelling reason not to. I can't think of any except that it might be hard to make it work. What is '0i4' supposed to mean exactly? Do you get a zero-sized field or is the field not included? I think the former will be much easier than the latter. Would that be O.K.? -Travis From haase at msg.ucsf.edu Wed Aug 9 18:41:00 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 9 Aug 2006 15:41:00 -0700 Subject: [Numpy-discussion] bug !?
dtype type_descriptor does not accept zero length tuple In-Reply-To: <44DA5F22.7080404@ieee.org> References: <200608091402.14810.haase@msg.ucsf.edu> <44DA5F22.7080404@ieee.org> Message-ID: <200608091541.00208.haase@msg.ucsf.edu> On Wednesday 09 August 2006 15:18, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi! > > I have a problem with record array type descriptor. > > With numarray this used to work. > > My records made of n integers and m floats. So I used to be able > > specify formats="%di4,%df4"%(self.numInts,self.numFloats) in numarray > > which would translate to > > byteorder = self.isByteSwapped and '>' or '<' > > type_descr = [("int", "%s%di4" %(byteorder,self.numInts)), > > ("float", "%s%df4" %(byteorder,self.numFloats))] > > > > The problem occurs when numInts or numFloats is zero !? > > Could it numpy be changed to silectly accept this case > > Here is the complete traceback + some debug info: > > If numarray supported it, then we should get NumPy to support it as well > unless there is a compelling reason not to. I can't think of any except > that it might be hard to make it work. What is '0i4' supposed to mean > exactly? Do you get a zero-sized field or is the field not included? > I think the former will be much easier than the latter. Would that be > O.K.? That's exactly what numarray did. The rest of my code is assuming that all fields exist (even if they are empty). Otherwise I get a name error which is worse than getting an empty array. 
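With a present-day numpy, the zero-sized-field layout under discussion can be written with an explicit subarray shape. This is a sketch, not Sebastian's original numarray code; the field names and the numInts/numFloats stand-ins are illustrative:

```python
import numpy as np

# Illustrative stand-ins for Sebastian's self.numInts / self.numFloats;
# numInts == 0 is the problematic '0i4' case from the thread.
numInts, numFloats = 0, 2

# The '0i4,2f4' record layout expressed with explicit subarray shapes;
# the integer field is zero-sized but still present by name.
dt = np.dtype([('int', '<i4', (numInts,)),
               ('float', '<f4', (numFloats,))])

a = np.zeros(5, dtype=dt)
print(a['int'].shape)    # (5, 0) -- the field exists but holds no data
print(a['float'].shape)  # (5, 2)
print(dt.itemsize)       # 8 -- only the two float32 values per record
```

Indexing the empty field by name still works, which is exactly the property Sebastian's code relies on.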
Thanks, Sebastian Haase From jek-cygwin1 at kleckner.net Wed Aug 9 20:18:46 2006 From: jek-cygwin1 at kleckner.net (Jim Kleckner) Date: Wed, 09 Aug 2006 17:18:46 -0700 Subject: [Numpy-discussion] Infinite loop in Numeric-24.2 for eigenvalues Message-ID: <44DA7B66.9030802@kleckner.net> It seems that this old problem of compiling Numeric is a problem again (even on my Linux box, not just cygwin): http://sourceforge.net/tracker/index.php?func=detail&aid=732520&group_id=1369&atid=301369 (The issue was the dlamch.f code) The patch recommended to run: python setup.py config in order to work around the problem. Note that this no longer runs and gives the error message: unable to execute _configtest.exe: No such file or directory The setup.py and customize.py code interact in complex ways with Python's build tools. Anyone out there familiar with these and what is going on? BTW, it looks as though the default Makefile in python2.4/config dir now has -O3 turned on which is stimulating this problem. Jim From jek-cygwin1 at kleckner.net Wed Aug 9 20:47:30 2006 From: jek-cygwin1 at kleckner.net (Jim Kleckner) Date: Wed, 09 Aug 2006 17:47:30 -0700 Subject: [Numpy-discussion] Infinite loop in Numeric-24.2 for eigenvalues In-Reply-To: <44DA7B66.9030802@kleckner.net> References: <44DA7B66.9030802@kleckner.net> Message-ID: <44DA8222.5090908@kleckner.net> Jim Kleckner wrote: > It seems that this old problem of compiling Numeric is a problem again > (even on my Linux box, not just cygwin): > http://sourceforge.net/tracker/index.php?func=detail&aid=732520&group_id=1369&atid=301369 > > > (The issue was the dlamch.f code) > > The patch recommended to run: > python setup.py config > in order to work around the problem. > > Note that this no longer runs and gives the error message: > unable to execute _configtest.exe: No such file or directory > > > The setup.py and customize.py code interact in complex ways with > Python's build tools. 
> > Anyone out there familiar with these and what is going on? > > BTW, it looks as though the default Makefile in python2.4/config dir now > has -O3 turned on which is stimulating this problem. > > Jim > A workaround for this problem in setup.py is to run this simple script to create the config.h file that is failing (probably due to the compile flags):
gcc -fno-strict-aliasing -DNDEBUG -g -Wall -Wstrict-prototypes -IInclude -IPackages/FFT/Include -IPackages/RNG/Include -I/usr/include/python2.4 Src/config.c -o mkconfigh
./mkconfigh
mv config.h Src
From haase at msg.ucsf.edu Thu Aug 10 00:35:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 09 Aug 2006 21:35:30 -0700 Subject: [Numpy-discussion] bug !? dtype type_descriptor does not accept zero length tuple In-Reply-To: <44DA7C9A.7010507@ieee.org> References: <200608091402.14810.haase@msg.ucsf.edu> <200608091541.00208.haase@msg.ucsf.edu> <44DA658C.9050205@ieee.org> <200608091600.09607.haase@msg.ucsf.edu> <44DA7C9A.7010507@ieee.org> Message-ID: <44DAB792.2010503@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> On Wednesday 09 August 2006 15:45, you wrote: >> >>> Sebastian Haase wrote: >>> >>>> On Wednesday 09 August 2006 15:18, Travis Oliphant wrote: >>>> >>>>> If numarray supported it, then we should get NumPy to support it as >>>>> well >>>>> unless there is a compelling reason not to. I can't think of any >>>>> except >>>>> that it might be hard to make it work. What is '0i4' supposed to mean >>>>> exactly? Do you get a zero-sized field or is the field not included? >>>>> I think the former will be much easier than the latter. Would >>>>> that be >>>>> O.K.? >>>>> >>>> That's exactly what numarray did. The rest of my code is assuming that >>>> all fields exist (even if they are empty). Otherwise I get a name >>>> error which is worse than getting an empty array. >>>> >>> Do you have a simple code snippet that I could use as a test?
>>> -Travis >>> >> >> I think this should do it: >> >> a = N.arange(10, dtype=N.float32) >> a.shape = 5,2 >> type_descr = [("int", "<0i4"),("float", "<2f4")] >> a.dtype = type_descr >> >> > > I'm not sure what a.shape = (5,2) is supposed to do. I left it out of the > unit-test because assigning to the data-type you just defined > already results in > > a['float'].shape being (5,2) > > If it is left in, then an extra dimension is pushed in and > > a['float'].shape is (5,1,2) > > > This is due to the default behavior of assigning data-types when the new > data-type has a larger but compatible itemsize than the old itemsize. I have to admit that I don't understand that statement. I thought - just "visually" - that a.shape = 5,2 would make a "table" with 2 columns. Then I could go on and give those columns names... Or is the problem that the type "2f4" refers to (some sort of) a "single column" with 2 floats grouped together !? Thanks for implementing it so quickly, Sebastian Haase From haase at msg.ucsf.edu Thu Aug 10 00:36:49 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 09 Aug 2006 21:36:49 -0700 Subject: [Numpy-discussion] how to reference Numerical Python in a scientific publication Message-ID: <44DAB7E1.8090108@msg.ucsf.edu> Hi, we have been using numerical python as an integral part of a microscope development project over the last few years. So far we have used numarray exclusively, but now I decided it's time to slowly but surely migrate to numpy. What is the proper way to reference these packages? Thanks to everyone involved, Sebastian Haase UCSF From pfdubois at gmail.com Thu Aug 10 00:55:37 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Wed, 9 Aug 2006 21:55:37 -0700 Subject: [Numpy-discussion] how to reference Numerical Python in a scientific publication In-Reply-To: <44DAB7E1.8090108@msg.ucsf.edu> References: <44DAB7E1.8090108@msg.ucsf.edu> Message-ID: P. F. Dubois, K. Hinsen, and J.
Hugunin, "Numerical Python", Computers in Physics, v. 10, #3, May/June 1996. is one reference people have used. Others simply refer to the website. The new book might be the best for numpy itself, dunno. Related papers are: David Ascher, P. F. Dubois, Konrad Hinsen, James Hugunin, and Travis Oliphant, "Numerical Python", UCRL-MA-128569, 93 pp., Lawrence Livermore National Laboratory, Livermore, CA; 1999. -- this is the 'official' Numerical Python documentation as first released. P. F. Dubois, "Extending Python with Fortran", Computing in Science and Engineering, v. 1 #5, Sept./Oct. 1999., p.66-73. David Scherer, Paul Dubois, and Bruce Sherwood, "VPython: 3D Interactive Scientific Graphics for Students", Computing in Science and Engineering, v. 2 #5, Sep./Oct. 2000, p. 56-62. On 09 Aug 2006 21:37:39 -0700, Sebastian Haase wrote: > Hi, > we are using numerical python as an integral part of a microscope > development project over last few years. > > So far we have been using exclusively numarray but now I decided it's > time to slowly but sure migrate to numpy. > > What is the proper way to reference these packages ? > > Thanks to everyone involved, > Sebastian Haase > UCSF > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From drife at ucar.edu Thu Aug 10 01:45:48 2006 From: drife at ucar.edu (Daran L. 
Rife) Date: Wed, 9 Aug 2006 23:45:48 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) Message-ID: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Hello, I recently switched from a Debian Linux box to a Mac G5 PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the Python Numeric package extensively, and have come to depend upon it. In my view, this piece of software is truly first rate, and it has greatly improved my productivity in the area of scientific analysis. Unfortunately, I am experiencing a problem that I cannot sort out. I am running Python 2.4.3 on a Debian box (V3.1), using gcc version 4.0.1, and the Apple vecLib.framework which has an optimized BLAS and LAPACK. When building Numeric 24.0, 24.1, or 24.2 everything seems to go AOK. But when I run code which makes use of the Numeric package (masked arrays, dot product, LinearAlgebra, etc.) my code crashes hard and unpredictably. When it crashes I simply get a "Segmentation Fault". I'm sorry that I can't be more specific about what seems to happen just before the crash...I've tried to trace it but to no avail. Interestingly, I can get Numeric version 23.8 to build and run just fine, but it appears that the dotblas (BLAS optimized matrixmultiply/dot/innerproduct) does not properly get built in. Thus, all my matrix operations are -very- slow. Has anyone seen this problem, or know where I might look to solve it? Perhaps I have overlooked a crucial step in the build/install of Numeric 24.x on the Mac. I searched round the Net with google, and have sifted through the numpy/scipy/numeric Web pages, various mailing lists, user groups, etc., and can't seem to find any relevant info. Alternatively, can someone explain how to get Numeric 23.8 to compile on OS X 10.4 Tiger, with the dotblas module?
Thanks very much for your help, Daran From pbdr at cmp.uea.ac.uk Thu Aug 10 07:38:37 2006 From: pbdr at cmp.uea.ac.uk (Pierre Barbier de Reuille) Date: Thu, 10 Aug 2006 12:38:37 +0100 Subject: [Numpy-discussion] Change of signature for copyswap function ? Message-ID: <44DB1ABD.6010703@cmp.uea.ac.uk> Hi, in my documentation, the copyswap function in the PyArray_ArrFuncs structure is supposed to have this signature: copyswap (void) (void* dest, void* src, int swap, int itemsize) However, in the latest version of NumPy, the signature is: copyswap (void) (void*, void*, int, void*) My question is: what correspond to the last void* ? Thanks, Pierre From oliphant.travis at ieee.org Thu Aug 10 08:55:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 06:55:29 -0600 Subject: [Numpy-discussion] Change of signature for copyswap function ?
In-Reply-To: <44DB1ABD.6010703@cmp.uea.ac.uk> References: <44DB1ABD.6010703@cmp.uea.ac.uk> Message-ID: <44DB2CC1.7090806@ieee.org> Pierre Barbier de Reuille wrote: > Hi, > > in my documentation, the copyswap function in the PyArray_ArrFuncs > structure is supposed to have this signature: > > copyswap (void) (void* dest, void* src, int swap, int itemsize) > > However, in the latest version of NumPy, the signature is: > > copyswap (void) (void*, void*, int, void*) > > My question is: what correspond to the last void* ? > It's only needed for FLEXIBLE arrays (STRING, UNICODE, VOID); then you pass in an array whose ->descr member has the right itemsize. Look in core/src/arraytypes for the definitions of the copyswap functions, which can be helpful to see if arguments are actually needed. -Travis From drife at ucar.edu Thu Aug 10 09:33:44 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 07:33:44 -0600 (MDT) Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) Message-ID: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Hello, I am a veteran user of Numeric and am trying out the latest version of numpy (numpy 1.01b) on Mac OS X 10.4 Tiger (8.7.0). When trying to invert a matrix with numpy.linalg.inv I get the following error: ----> Traceback (most recent call last): File "./bias_correction.py", line 381, in ?
if __name__ == "__main__": main() File "./bias_correction.py", line 373, in main (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, obs, infile_obs, grids, infile_grids) File "./bias_correction.py", line 297, in calc_bias_and_innov K = make_kalman_gain(R, P_local, H) File "./bias_correction.py", line 157, in make_kalman_gain K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P, MA.transpose(H))) + R ) ) File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 149, in inv return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) TypeError: __array_wrap__() takes exactly 3 arguments (2 given) <---- Is this a known problem, and if so, what is the fix? Thanks very much, Daran From drife at ucar.edu Thu Aug 10 10:02:23 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 08:02:23 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Message-ID: <34498.64.17.89.52.1155218543.squirrel@imap.rap.ucar.edu> Hi group, Sorry, but there was an error on my previous message, 2nd paragraph, 2nd sentence. It should read: Unfortunately, I am experiencing a problem that I cannot sort out. I am running Python 2.4.3 on a Mac G5 running OS X 10.4 Tiger (8.7.0), using gcc version 4.0.1, and the Apple vecLib.framework which has an optimized BLAS and LAPACK. When building Numeric 24.0, 24.1, or 24.2 everything seems to go AOK. But when I run code which makes use of the Numeric package (masked arrays, dot product, LinearAlgebra, etc.) my code crashes hard and unpredictably. When it crashes I simply get a "Segmentation Fault". I'm sorry that I can't be more specific about what seems to happen just before the crash... I've tried to trace it but to no avail. Thanks again for your help.
Daran -- > I recently switched from a Debian Linux box to a Mac G5 > PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the > Python Numeric package extensively, and have come to > depend upon it. In my view, this piece of software is > truly first rate, and it has greatly improved my > productivity in the area of scientific analysis. > > Unfortunately, I am experiencing a problem that I cannot sort > out. I am running Python 2.4.3 on a Debian box (V3.1), using > gcc version 4.0.1, and the Apple vecLib.framework which has > an optimized BLAS and LAPACK. When building Numeric 24.0, > 24.1, or 24.2 everything seems to go AOK. But when I run > code which makes use of the Numeric package (maksed arrays, > dot product, LinearAlgebra, etc.) my code crashes hard and > unpredictably. When it crashes I simply get a "Segmentation > Fault". I'm sorry that I can't be more specific about what > seems to happen just before the crash...I've tried to trace > it but to no avail. > > Interestingly, I can get Numeric version 23.8 to build and > run just fine, but it appears that the dotblas (BLAS > optimized matrixmultiply/dot/innerproduct) does not properly > get built in. Thus, all my matrix operations are -very- slow. > > Has anyone seen this problem, or know where I might look > to solve it? Perhaps I have overlooked a crucial step in > the build/install of Numeric 24.x on the Mac. > > I searched round the Net with google, and have sifted through > the numpy/scipy/numeric Web pages, various mailing lists, user > groups, etc., and can't seem to find any relevant info. > > Alternatively, can someone explain how to get Numeric 23.8 > to compile on OS X 10.4 Tiger, with the dotblas module? > > > Thanks very much for your help, > > > Daran > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From klemm at phys.ethz.ch Thu Aug 10 10:12:50 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Thu, 10 Aug 2006 16:12:50 +0200 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34498.64.17.89.52.1155218543.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Message-ID: Daran, I had a similar behaviour when I tried to use a module compiled with an older f2py with a newer version of numpy. So is it maybe possible that some *.so files are used from an earlier build? Hanno "Daran L. Rife" said: > Hi group, > > Sorry, but there was an error on my previous message, > 2nd paragraph, 2nd sentence. It should read: > > Unfortunately, I am experiencing a problem that I cannot sort > out. I am running Python 2.4.3 on a Mac G5 running OS X 10.4 > Tiger (8.7.0), using gcc version 4.0.1, and the Apple > vecLib.framework which has an optimized BLAS and LAPACK. > When building Numeric 24.0, 24.1, or 24.2 everything seems > to go AOK. But when I run code which makes use of the Numeric > package (masked arrays, dot product, LinearAlgebra, etc.) my > code crashes hard and unpredictably. When it crashes I simply > get a "Segmentation Fault". I'm sorry that I can't be more > specific about what seems to happen just before the crash... > I've tried to trace it but to no avail. > > Thanks again for your help.
> > > Daran > > -- > > > I recently switched from a Debian Linux box to a Mac G5 > > PowerPC, running Mac OS X 10.4 Tiger (8.7.0). I use the > > Python Numeric package extensively, and have come to > > depend upon it. In my view, this piece of software is > > truly first rate, and it has greatly improved my > > productivity in the area of scientific analysis. > > > > Unfortunately, I am experiencing a problem that I cannot sort > > out. I am running Python 2.4.3 on a Debian box (V3.1), using > > gcc version 4.0.1, and the Apple vecLib.framework which has > > an optimized BLAS and LAPACK. When building Numeric 24.0, > > 24.1, or 24.2 everything seems to go AOK. But when I run > > code which makes use of the Numeric package (maksed arrays, > > dot product, LinearAlgebra, etc.) my code crashes hard and > > unpredictably. When it crashes I simply get a "Segmentation > > Fault". I'm sorry that I can't be more specific about what > > seems to happen just before the crash...I've tried to trace > > it but to no avail. > > > > Interestingly, I can get Numeric version 23.8 to build and > > run just fine, but it appears that the dotblas (BLAS > > optimized matrixmultiply/dot/innerproduct) does not properly > > get built in. Thus, all my matrix operations are -very- slow. > > > > Has anyone seen this problem, or know where I might look > > to solve it? Perhaps I have overlooked a crucial step in > > the build/install of Numeric 24.x on the Mac. > > > > I searched round the Net with google, and have sifted through > > the numpy/scipy/numeric Web pages, various mailing lists, user > > groups, etc., and can't seem to find any relevant info. > > > > Alternatively, can someone explain how to get Numeric 23.8 > > to compile on OS X 10.4 Tiger, with the dotblas module? > > > > > > Thanks very much for your help, > > > > > > Daran > > > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? 
Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- Hanno Klemm klemm at phys.ethz.ch From drife at ucar.edu Thu Aug 10 10:58:47 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 08:58:47 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> Message-ID: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> Hi Hanno, > I had a similar behaviour when I tried to use a module compiled with an > older f2py with a newer version of numpy. So is it maybe possible that > some *.so files are used from an earlier build? Many thanks for the reply. This was my first attempt to build and use numpy; I have no previous version. May I ask how you specifically solved the problem on your machine?
Thanks, Daran -- From Chris.Barker at noaa.gov Thu Aug 10 12:13:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 10 Aug 2006 09:13:51 -0700 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> Message-ID: <44DB5B3F.9080203@noaa.gov> Daran L. Rife wrote: > Many thanks for the reply. This was my first attempt > to build and use numpy; "numpy" used to be a generic name for the Numerical extensions to Python. Now there are three versions: "Numeric": The original, now at version 24.2. This is probably the last version that will be produced. "numarray": This was designed to be the "next generation" array package. It has some nice additional features that Numeric does not have, but is missing some as well. It is at version 1.5.1. It may see some bug fix releases in the future, but probably won't see any more major development. "numpy": this is the "grand unification" array package. It is based on the Numeric code base, and is designed to have the best features of Numeric and numarray, plus some extra good stuff. It is now at version 1.0beta, with an expected release date for 1.0final sometime this fall. It is under active development, the API is pretty stable now, and it appears to have the consensus of the numerical python community as the "way of the future" I wrote all that out so that you can be clear which package you are having trouble with -- you've used both the term "Numeric" and "numpy" in your posts, and there is some confusion. If you are working on a project that does not need to be released for a few months (i.e. after numpy has reached 1.0 final), I'd use numpy, rather than Numeric or numarray.
Also: on OS-X, there are far too many ways to build Python. When you report a problem, you need to define exactly which python build you are using, and this goes beyond python version -- fink? darwinports? built-it-from-source? Framework? Universal, etc... The MacPython community is doing its best to standardize on the Universal Build of 2.4.3 that you can find here: http://www.pythonmac.org/packages/py24-fat/ There you will also find pre-built packages for Numeric 24.2, numarray 1.5.1, and numpy 0.9.8. Have you tried any of those? They should be built against Apple's vecLib. There isn't a package for numpy 1.0beta there yet. I may add one soon. > Interestingly, I can get Numeric version 23.8 to build and > run just fine, but it appears that the dotblas (BLAS > optimized matrixmultiply/dot/innerproduct) does not properly > get built in. Thus, all my matrix operations are -very- slow. I'm not sure of the dates, but that is probably a version that didn't have the check for Apple's vecLib in the setup.py, so it built with the built-in lapack-lite instead. You can compare the setup.py files from that and newer versions to see how to make it build against vecLib, but I suspect if you do that, you'll see the same problems. Also, please send a small test script that crashes for you, so others can test it. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From drife at ucar.edu Thu Aug 10 12:36:15 2006 From: drife at ucar.edu (Daran L.
Rife) Date: Thu, 10 Aug 2006 10:36:15 -0600 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB5B3F.9080203@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> Message-ID: <44DB607F.9060903@ucar.edu> Hi Chris, Thanks very much for your reply. My apologies for the confusion. To be clear, I am a veteran user of Numeric, not numpy. I tried installing four versions of Numeric: 23.8, 24.0, 24.1, and 24.2. My Python distro is built from source, using the GCC 4.0.1 suite of compilers. I am running all of this on a Mac G5 PowerPC with Mac OS X 10.4 Tiger (8.7.0). All branches of Numeric 24.x cause a "Segmentation Fault". The scripts I was running this against are a bit complex, so it is not so easy for me to sort out when/where the failure occurs. I'll keep doing some testing and see if I can get a better idea for what seems to be the issue. I'd very much like to move to numpy, but I have code that needs to be working -now-, so at this point I am more interested in Numeric; I am an adept user of Numeric, and I know it works well on Debian Linux boxes. I will try your suggestion of installing and running the pre-built packages at . Thanks again for your patience and for your help. Daran -- > Daran L. Rife wrote: >> Many thanks for the reply. This was my first attempt >> to build and use numpy; > > "numpy" used to be a generic name for the Numerical extensions to > Python. Now there are three versions: > > "Numeric": The original, now at version 24.2 This is probably the last > version that will be produced. > > "numarray": This was designed to be the "next generation" array package. > It has some nice additional features that Numeric does not have, but is > missing some as well. It is at version 1.5.1.
it may see some bug fix > releases in the future, but probably won't see any more major development. > > "numpy": this is the "grand unification" array package. It is based on > the Numeric code base, and is designed to have the best features of > Numeric and numarray, plus some extra good stuff. It is now at version > 1.0beta, with an expected release date for 1.0final sometime this fall. > It is under active development, the API is pretty stable now, and it > appears to have the consensus of the numerical python community as the > "way of the future" > > I wrote all that out so that you can be clear which package you are > having trouble with -- you've used both the term "Numeric" and "numpy" > in your posts, and there is some confusion. > > If you are working on a project that does not need to be released for a > few months (i.e. after numpy has reached 1.0 final), I'd use numpy, > rather than Numeric or numarray. > > Also: on OS-X, there are far to many ways to build Python. When you > report a problem, you need to define exactly which python build you are > using, and this goes beyond python version -- fink? darwinports? > built-it-from-source? Framework? Universal, etc... > > The MacPython community is doing it's best to standardize on the > Universal Build of 2.4.3 that you can find here: > > http://www.pythonmac.org/packages/py24-fat/ > > There you will also find pre-built packages for Numeric24.2, > numarray1.5.1, and numpy0.9.8 > > Have you tried any of those? They should be built against Apple's > vectLib. There isn't a package for numpy 1.0beta there yet. I may add > one soon. > >> Interestingly, I can get Numeric version 23.8 to build and >> run just fine, but it appears that the dotblas (BLAS >> optimized matrixmultiply/dot/innerproduct) does not properly >> get built in. Thus, all my matrix operations are -very- slow. 
> > I'm not sure of the dates, but that is probably a version that didn't > have the check for Apple's vecLib in the setup.py, so it built with the > built-in lapack-lite instead. You can compare the setup.py files from > that and newer versions to see how to make it build against vecLib, but > I suspect if you do that, you'll see the same problems. > > Also, please send a small test script that crashes for you, so others > can test it. > > -Chris From klemm at phys.ethz.ch Thu Aug 10 12:50:38 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Thu, 10 Aug 2006 18:50:38 +0200 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu>, <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> , Message-ID: Hi Daran, I fortunately never had the need to run different versions in parallel, so I basically removed the earlier versions of numpy. However, as you possibly know, you can build wrapper functions for fortran code with f2py (which is now shipped with numpy). And that is where I got the segfault behaviour: I had a module compiled for numpy 0.9.6 and then tried to use it with numpy 1.0b. Therefore I thought if you have similar stuff running on your machine that might be a reason. The obvious solution is to recompile the fortran code with the newer version of f2py. But from what you write, your problem seems to be different. Regards, Hanno "Daran L. Rife" said: > Hi Hanno, > > > I had a similar behaviour when I tried to use a module compiled with an > > older f2py with a newer version of numpy. So is it maybe possible that > > some *.so files are used from an earlier build? > > > Many thanks for the reply. This was my first attempt > to build and use numpy; I have no previous version. > May I ask how you specifically solved the problem > on your machine? 
> > Thanks, > > Daran > > -- > > -- Hanno Klemm klemm at phys.ethz.ch From bhendrix at enthought.com Thu Aug 10 12:53:54 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 10 Aug 2006 11:53:54 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available Message-ID: <44DB64A2.60203@enthought.com> For those not able to make SciPy 2006 next week, or who would like to download the ISO a few days early, it's available at http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. We squashed a lot onto the CD, so I also had to trim > 100 MB of packages that ship with the standard Ubuntu CD. Here's what I was able to add: * SciPy built from svn (Wed, 12:00 CST) * NumPy built from svn (Wed, 12:00 CST) * Matplotlib built from svn (Wed, 12:00 CST) * IPython built from svn (Wed, 12:00 CST) * Enthought built from svn (Wed, 16:00 CST) * ctypes 1.0.0 * hdf5 1.6.5 * networkx 0.31 * Pyrex 0.9.4.1 * pytables 1.3.2 All of the svn checkouts are zipped in /src, if you'd like to build from a svn version newer than what was shipped, simply copy the compressed package to your home dir, uncompress it, run "svn update", and build it. Please note: This ISO was built rather hastily, uses un-official code, and received very little testing. Please don't even consider using this in a production environment. Bryce From cookedm at physics.mcmaster.ca Thu Aug 10 14:22:36 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 10 Aug 2006 14:22:36 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: <20060810142236.6770032a@arbutus.physics.mcmaster.ca> On Thu, 10 Aug 2006 07:33:44 -0600 (MDT) "Daran L. 
Rife" wrote: > Hello, > > I am a veteran user of Numeric and am trying > out the latest version of numpy (numpy 1.01b) > on Mac OS X 10.4 Tiger (8.7.0). > > When trying to invert a matrix with > numpy.linalg.inv I get the following error: > > ----> > > Traceback (most recent call last): > File "./bias_correction.py", line 381, in ? > if __name__ == "__main__": main() > File "./bias_correction.py", line 373, in main > (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, > obs, infile_obs, grids, infile_grids) > File "./bias_correction.py", line 297, in calc_bias_and_innov > K = make_kalman_gain(R, P_local, H) > File "./bias_correction.py", line 157, in make_kalman_gain > K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P, > MA.transpose(H))) + R ) ) > File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 149, in inv > return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) > TypeError: __array_wrap__() takes exactly 3 arguments (2 given) > > <---- > > Is this a known problem, and if so, what is the fix? It looks like the problem is that numpy.core.ma.MaskedArray.__array_map__ expects a "context" argument, but none gets passed. I'm not familiar with that, so I don't know what the fix is ... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ndarray at mac.com Thu Aug 10 14:41:35 2006 From: ndarray at mac.com (Sasha) Date: Thu, 10 Aug 2006 14:41:35 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: Inverting a matrix with masked values does not make much sense. Call "filled" method with an appropriate fill value before passing the matrix to "inv". On 8/10/06, Daran L. 
Rife wrote: > Hello, > > I am a veteran user of Numeric and am trying > out the latest version of numpy (numpy 1.01b) > on Mac OS X 10.4 Tiger (8.7.0). > > When trying to invert a matrix with > numpy.linalg.inv I get the following error: > > ----> > > Traceback (most recent call last): > File "./bias_correction.py", line 381, in ? > if __name__ == "__main__": main() > File "./bias_correction.py", line 373, in main > (index_to_stnid, bias_and_innov) = calc_bias_and_innov(cf, stn_info, > obs, infile_obs, grids, infile_grids) > File "./bias_correction.py", line 297, in calc_bias_and_innov > K = make_kalman_gain(R, P_local, H) > File "./bias_correction.py", line 157, in make_kalman_gain > K = MA.dot( MA.dot(P, MA.transpose(H)), inv(MA.dot(H, MA.dot(P, > MA.transpose(H))) + R ) ) > File "/opt/python/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 149, in inv > return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) > TypeError: __array_wrap__() takes exactly 3 arguments (2 given) > > <---- > > Is this a known problem, and if so, what is the fix? > > > Thanks very much, > > > Daran > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 10 15:10:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 13:10:47 -0600 Subject: [Numpy-discussion] Numarray compatibility module available Message-ID: <44DB84B7.4060409@ieee.org> I've just finished a first version of the numarray compatibility module. It does not include all the names from the numarray name-space but it does include the most important ones, I believe. It also includes a slightly modified form of the numarray type-objects so that NumPy can recognize them as dtypes. I do not have a lot of code to test the compatibility layer with so any help will be appreciated. The compatibility layer still requires changes to certain methods and attributes on arrays. This is performed by the alter_code1.py module which I will be finishing over the next few hours. Once that is ready (and I've updated NumPy to work with the latest version of Python 2.5 in SVN) I want to make a 1.0b2 release (no later than Friday). I would appreciate it if several people could test the current SVN version of NumPy. In order to support several of the features of NumArray that I had missed, I engaged in a marathon coding sprint last night from about 6:00pm to 6:00am during which time I added output arguments to many of the functions in NumPy, and a clipmode argument to several others. I also added the C-API functions PyArray_OutputConverter and PyArray_ClipmodeConverter to make it easy to get these arguments from Python to C. 
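For illustration, the output and clipmode arguments Travis describes can be sketched like this in present-day numpy (a hedged example with made-up data; the spellings are the `out=` and `mode=` keywords of the modern API):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# output argument: the result is written into a preallocated array,
# avoiding a temporary
out = np.empty(3, dtype=a.dtype)
np.sum(a, axis=0, out=out)
assert out.tolist() == [3, 5, 7]

# clipmode argument: out-of-range indices are clipped to the valid
# range instead of raising an IndexError
taken = np.take(np.arange(10) * 10, [0, 5, 99], mode='clip')
assert taken.tolist() == [0, 50, 90]
```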
This caused a change in the C-API that will require re-compilation for 1.0b2. I'm sorry about that. I'm really pushing for stability on the C-API. Now that the numarray compatibility module is complete, I'm more confident that we won't need any more changes to the C-API for version 1.0. Of course, only when numpy 1.0final comes out will that be a guarantee. While I'm relatively confident about the changes to NumPy, the changes were extensive enough that more testing is warranted, including another round of Valgrind tests. Unit-tests written to take advantage of the new output arguments on several of the functions (take, put, compress, clip, conjugate, argmax, argmin, and any function based on a ufunc method -- like sum, product, any, all, etc.) are particularly needed. If serious problems are discovered, then the 1.0b2 might be delayed again, but I'm really pushing to get 1.0b2 out the door soon. The numarray compatibility module and the oldnumeric compatibility module should hopefully help people adapt their code more quickly to NumPy. It's not fool-proof, though, so the best strategy is still to write to NumPy :-) as soon as you can. -Travis From drife at ucar.edu Thu Aug 10 15:33:52 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 13:33:52 -0600 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> Message-ID: <44DB8A20.3070605@ucar.edu> Hi Sasha, > Inverting a matrix with masked values does not make much sense. Call > "filled" method with an appropriate fill value before passing the > matrix to "inv". In principle you are right, but even though I use masked arrays in this operation, when the operation itself is done no masked values remain. Thus, my code works very well with the "old" Numeric--and has worked well for some time. 
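Sasha's "filled" suggestion amounts to something like the following sketch, written against present-day numpy (numpy.ma rather than the numpy.core.ma of the time; the matrix values are illustrative only):

```python
import numpy as np
import numpy.ma as ma

# a matrix whose zero entries happen to be masked, as can occur in a
# masked-array workflow
P = ma.masked_values([[4.0, 0.0], [0.0, 2.0]], 0.0)

# replace the masked entries with a concrete value, yielding a plain
# ndarray, before handing it to linalg
K = np.linalg.inv(P.filled(0.0))
```

Filling first sidesteps the `__array_wrap__` issue entirely, since linalg then never sees a masked array.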
That said, I will try your suggestion of doing a "filled" on the matrix before sending it off to the inverse module. Thanks, Daran From ndarray at mac.com Thu Aug 10 16:07:17 2006 From: ndarray at mac.com (Sasha) Date: Thu, 10 Aug 2006 16:07:17 -0400 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB8A20.3070605@ucar.edu> References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> <44DB8A20.3070605@ucar.edu> Message-ID: I see that Travis just fixed that by making context optional . I am not sure it is a good idea to allow use of ufuncs for which domain is not defined in ma. This may lead to hard to find bugs coming from ma arrays with nans in the data. I would rather see linalg passing the (func,args) context to wrap. That would not fix the reported problem, but will make diagnostic clearer. On 8/10/06, Daran L. Rife wrote: > Hi Sasha, > > > Inverting a matrix with masked values does not make much sense. Call > > "filled" method with an appropriate fill value before passing the > > matrix to "inv". > > In principle you are right, but even though I use masked arrays > in this operation, when the operation itself is done no masked > values remain. Thus, my code works very well with the "old" > Numeric--and has worked well for some time. That said, I will > try your suggestion of doing a "filled" on the matrix before > sending it off to the inverse module. > > > Thanks, > > > Daran > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 10 16:22:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 14:22:21 -0600 Subject: [Numpy-discussion] Problem with numpy.linalg.inv in numpy 1.01b on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: References: <34401.64.17.89.52.1155216824.squirrel@imap.rap.ucar.edu> <44DB8A20.3070605@ucar.edu> Message-ID: <44DB957D.5020008@ieee.org> Sasha wrote: > I see that Travis just fixed that by making context optional > . I am not sure > it is a good idea to allow use of ufuncs for which domain is not > defined in ma. This may lead to hard to find bugs coming from ma > arrays with nans in the data. I would rather see linalg passing the > (func,args) context to wrap. That would not fix the reported problem, > but will make diagnostic clearer. > > This can be done as well. The problem is that __array_wrap__ is used in quite a few places (without context) and ma needs to have a default behavior when context is not supplied. -Travis From haase at msg.ucsf.edu Thu Aug 10 19:43:09 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 16:43:09 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' Message-ID: <200608101643.10100.haase@msg.ucsf.edu> Hi, trying to convert my memmap - records - numarray code for reading a image file format (Mrc). There are 10 fields of strings (each 80 chars long) in the header: in numarray I used the format string '10a80' This results in a single value in numpy. Same after changing it to '10S80'. 
Am I doing something wrong !? Thanks, Sebastian Haase From haase at msg.ucsf.edu Thu Aug 10 20:23:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 17:23:12 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' In-Reply-To: <44DBC7E4.1010904@ieee.org> References: <200608101643.10100.haase@msg.ucsf.edu> <44DBC7E4.1010904@ieee.org> Message-ID: <200608101723.12514.haase@msg.ucsf.edu> On Thursday 10 August 2006 16:57, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > trying to convert my memmap - records - numarray code for reading a > > image file format (Mrc). > > There are 10 fields of strings (each 80 chars long) in the header: > > in numarray I used the format string '10a80' > > This results in a single value in numpy. > > Same after changing it to '10S80'. > > > > Am I doing something wrong !? > > Not that I can see. But, it's possible that there is a > misunderstanding of what '10a80' represents. > > What is giving you the value? > > For example, I can create a file with 10, 80-character strings and open it > using memmap and a data-type of > > dt = numpy.dtype('10a80') > > and it seems to work fine. > > -Travis This is what I get: It claims that the 'title' field (the last one) is 10 times 'S80' but trying to read that array from the first (and only) record (a.Mrc._hdrArray.title[0]) I just get None... >>> a=Mrc.bindFile('Heather2/GFPtublive-Vecta43') TODO: byteorder >>> repr(a.Mrc._hdrArray.dtype) 'dtype([('Num', '>> a.Mrc._hdrArray.NumTitles [3] >>> a.Mrc._hdrArray.NumTitles[0] 3 >>> type(a.Mrc._hdrArray.title[0]) >>> type(a.Mrc._hdrArray.title[1]) Traceback (most recent call last): File "", line 1, in ? File "/home/haase/qqq/lib/python/numpy/core/defchararray.py", line 45, in __getitem__ val = ndarray.__getitem__(self, obj) IndexError: index out of bounds I get the same on byteswapped data and non-byteswapped data. 
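As a reference point, a '10a80'-style field can be sketched as a record dtype with a (10,)-shaped sub-array of 80-byte strings (a hedged example; the field names here are illustrative, not the actual Mrc header layout):

```python
import numpy as np

# ten 80-byte strings per record, alongside an integer count field
dt = np.dtype([('NumTitles', '<i4'), ('title', 'S80', (10,))])
hdr = np.zeros(1, dtype=dt)
hdr['title'][0, 0] = b'first title'

# the sub-array shape is appended to the record-array shape
assert hdr['title'].shape == (1, 10)
assert hdr[0]['title'].shape == (10,)
```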
-Sebastian From haase at msg.ucsf.edu Thu Aug 10 20:42:51 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 17:42:51 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? Message-ID: <200608101742.51914.haase@msg.ucsf.edu> Hi, Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is non-native byteorder ? If not, what functions does ? - Sebastian Haase From oliphant.travis at ieee.org Thu Aug 10 21:45:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 10 Aug 2006 19:45:10 -0600 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <200608101742.51914.haase@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> Message-ID: <44DBE126.7030001@ieee.org> Sebastian Haase wrote: > Hi, > Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is non-native > byteorder ? > > If not, what functions does ? > It can if you pass in a data-type with the right byteorder (or use a native built-in data-type). In NumPy, it's the data-type that carries the "byte-order" information. So, there are lots of ways to "fix" the byte-order. Of course there is still the difference between "fixing" the byte-order and simply "viewing" the memory in the correct byte-order. The former physically flips bytes around, the latter just flips them on calculation and presentation. 
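The "fixing" versus "viewing" distinction can be sketched like this (a hedged example; `dtype.newbyteorder` is the convenient way to ask for "the same dtype, but in a given byte order"):

```python
import numpy as np

# big-endian 32-bit ints, e.g. as mapped from a byteswapped file
a = np.arange(3, dtype='>i4')

# "viewing": the bytes stay put; numpy flips them on access because
# the dtype records the byte order
assert a.tolist() == [0, 1, 2]

# "fixing": physically convert to the native-order equivalent dtype
fixed = np.ascontiguousarray(a, dtype=a.dtype.newbyteorder('='))
assert fixed.dtype.isnative
assert fixed.tolist() == [0, 1, 2]
```

The `fixed` array is contiguous and native-order, so it can be handed to C code that does not understand strides or byte-swapping.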
-Travis From haase at msg.ucsf.edu Fri Aug 11 00:25:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 21:25:22 -0700 Subject: [Numpy-discussion] format typestr for "String" ( 10 strings: '10a80' ) gives just 'None' In-Reply-To: <44DBE6B9.7000007@ieee.org> References: <200608101643.10100.haase@msg.ucsf.edu> <44DBC7E4.1010904@ieee.org> <200608101723.12514.haase@msg.ucsf.edu> <44DBE6B9.7000007@ieee.org> Message-ID: <44DC06B2.5000306@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> This is what I get: It claims that the 'title' field (the last one) >> is 10 times 'S80' but trying to read that array from the first (and >> only) record (a.Mrc._hdrArray.title[0]) I just get None... >> > Hopefully that problem is resolved now. I should discuss a little bit > about how the 10-element sub-array field is handled by NumPy. > > Any sub-array present causes the shape of the returned array for a given > field to grow by the sub-array size. > > So, in your case you have a (10,)-shape subarray in the title field. > > Thus if g is a record-array of shape gshape g.title will be a chararray > of shape gshape + (10,) > > In this case of a 1-d array with 1-element we have gshape = (1,). > Therefore, g.title will be a (1,10) chararray and g[0].title will be a > (10,)-shaped chararray. > > -Travis > Thanks for fixing everything so quickly - I'll test it tomorrow. BTW: are you intentionally sending the last few messages ONLY to me and NOT to the mailing list !? I actually think the mailing list should be configured that a "normal reply" automatically defaults to go only (!) to the list. (I'm on some other lists that know how to do that). Who would be able to change that for the numpy and the scipy list !? Thanks again, Sebastian From haase at msg.ucsf.edu Fri Aug 11 00:32:28 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 10 Aug 2006 21:32:28 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <44DBE126.7030001@ieee.org> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> Message-ID: <44DC085C.7010009@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi, >> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is >> non-native byteorder ? >> >> If not, what functions does ? >> > > It can if you pass in a data-type with the right byteorder (or use a > native built-in data-type). > > In NumPy, it's the data-type that carries the "byte-order" > information. So, there are lots of ways to "fix" the byte-order. > So then the question is: what is the easiest way to say: give me the equivalent type of dtype, but with byteorder '<' (or '=') !? It would be cumbersome (and ugly ;-) ) if one would have to "manually assemble" such a construct every time ... > Of course there is still the difference between "fixing" the byte-order > and simply "viewing" the memory in the correct byte-order. The former > physically flips bytes around, the latter just flips them on calculation > and presentation. I understand. 
I need something that I can feed into my C routines that are too dumb to handle non-contiguous or byte-swapped data. - Sebastian From drife at ucar.edu Fri Aug 11 01:50:27 2006 From: drife at ucar.edu (Daran L. Rife) Date: Thu, 10 Aug 2006 23:50:27 -0600 (MDT) Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44DB5B3F.9080203@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> Message-ID: <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> Hi Chris, I tried your suggestion of installing and running the pre-built packages at . I am sorry to report that the pre-built MacPython and Numeric 24.2 package did not work. I get the same "Segmentation Fault" that I got when I built Python 2.4.3 and Numeric 24.2 from source. I tried running my code with debug prints in various places to try and pin down where the problem arises. Thus, I ran my code a number of times. Strangely, it never crashes in the same place twice. I'm not sure what to do next, but I will keep at it. As a last resort, I may build ATLAS and LAPACK from source, then build Numeric 23.8 against these, and try installing this into MacPython. I hate having to try this, but I cannot do any development without a functioning Python and Numeric. Thanks again, Daran -- > Daran L. Rife wrote: >> Many thanks for the reply. This was my first attempt >> to build and use numpy; > > "numpy" used to be a generic name for the Numerical extensions to Python. Now there are three versions: > > "Numeric": The original, now at version 24.2 This is probably the last version that will be produced. > > "numarray": This was designed to be the "next generation" array package. It has some nice additional features that Numeric does not have, but is missing some as well. It is at version 1.5.1. 
it may see some bug fix releases in the future, but probably won't see any more major development. > > "numpy": this is the "grand unification" array package. It is based on the Numeric code base, and is designed to have the best features of Numeric and numarray, plus some extra good stuff. It is now at version 1.0beta, with an expected release date for 1.0final sometime this fall. It is under active development, the API is pretty stable now, and it appears to have the consensus of the numerical python community as the "way of the future" > > I wrote all that out so that you can be clear which package you are having trouble with -- you've used both the term "Numeric" and "numpy" in your posts, and there is some confusion. > > If you are working on a project that does not need to be released for a few months (i.e. after numpy has reached 1.0 final), I'd use numpy, rather than Numeric or numarray. > > Also: on OS-X, there are far to many ways to build Python. When you report a problem, you need to define exactly which python build you are using, and this goes beyond python version -- fink? darwinports? built-it-from-source? Framework? Universal, etc... > > The MacPython community is doing it's best to standardize on the Universal Build of 2.4.3 that you can find here: > > http://www.pythonmac.org/packages/py24-fat/ > > There you will also find pre-built packages for Numeric24.2, > numarray1.5.1, and numpy0.9.8 > > Have you tried any of those? They should be built against Apple's vectLib. There isn't a package for numpy 1.0beta there yet. I may add one soon. > > > Interestingly, I can get Numeric version 23.8 to build and > > run just fine, but it appears that the dotblas (BLAS > > optimized matrixmultiply/dot/innerproduct) does not properly > > get built in. Thus, all my matrix operations are -very- slow. 
> > I'm not sure of the dates, but that is probably a version that didn't have the check for Apple's vecLib in the setup.py, so it built with the built-in lapack-lite instead. You can compare the setup.py files from that and newer versions to see how to make it build against vectLib, but I suspect if you do that, you'll see the same problems. > > Also, please send a small test script that crashes for you, so others can test it. > > -Chris > > > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > From ainulinde at gmail.com Fri Aug 11 08:41:54 2006 From: ainulinde at gmail.com (ainulinde) Date: Fri, 11 Aug 2006 20:41:54 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: <44DB64A2.60203@enthought.com> References: <44DB64A2.60203@enthought.com> Message-ID: can't get any seeds for this torrent and any other download methods? thanks On 8/11/06, Bryce Hendrix wrote: > For those not able to make SciPy 2006 next week, or who would like to > download the ISO a few days early, its available at > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > packages that ship with the standard Ubuntu CD. Here's what I was able > to add: > > * SciPy build from svn (Wed, 12:00 CST) > * NumPy built from svn (Wed, 12:00 CST) > * Matplotlib built from svn (Wed, 12:00 CST) > * IPython built from svn (Wed, 12:00 CST) > * Enthought built from svn (Wed, 16:00 CST) > * ctypes 1.0.0 > * hdf5 1.6.5 > * networkx 0.31 > * Pyrex 0.9.4.1 > * pytables 1.3.2 > > All of the svn checkouts are zipped in /src, if you'd like to build from > a svn version newer than what was shipped, simple copy the compressed > package to your home dir, uncompress it, run "svn upate", and built it. 
> > Please note: This ISO was built rather hastily, uses un-official code, > and received very little testing. Please don't even consider using this > in a production environment. > > Bryce > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From bhendrix at enthought.com Fri Aug 11 11:56:38 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 11 Aug 2006 10:56:38 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: <44DCA8B6.6010807@enthought.com> For those behind firewalls or have other problems connecting via bittorrent, the ISO can also be found here: http://code.enthought.com/downloads/scipy2006-i386.iso Bryce ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > >> For those not able to make SciPy 2006 next week, or who would like to >> download the ISO a few days early, its available at >> http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. >> >> We squashed a lot onto the CD, so I also had to trim > 100 MB of >> packages that ship with the standard Ubuntu CD. 
Here's what I was able >> to add: >> >> * SciPy build from svn (Wed, 12:00 CST) >> * NumPy built from svn (Wed, 12:00 CST) >> * Matplotlib built from svn (Wed, 12:00 CST) >> * IPython built from svn (Wed, 12:00 CST) >> * Enthought built from svn (Wed, 16:00 CST) >> * ctypes 1.0.0 >> * hdf5 1.6.5 >> * networkx 0.31 >> * Pyrex 0.9.4.1 >> * pytables 1.3.2 >> >> All of the svn checkouts are zipped in /src, if you'd like to build from >> a svn version newer than what was shipped, simple copy the compressed >> package to your home dir, uncompress it, run "svn upate", and built it. >> >> Please note: This ISO was built rather hastily, uses un-official code, >> and received very little testing. Please don't even consider using this >> in a production environment. >> >> Bryce >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ainulinde at gmail.com Fri Aug 11 12:49:27 2006 From: ainulinde at gmail.com (ainulinde) Date: Sat, 12 Aug 2006 00:49:27 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: <44DCA8B6.6010807@enthought.com> References: <44DB64A2.60203@enthought.com> <44DCA8B6.6010807@enthought.com> Message-ID: Bryce, thanks. this http works for me, the download speed is about 30k/s and the bt can't download anything, just one ip in the userlist(can't download anything from him/her).don't know why. maybe there is sth wrong with my network. On 8/11/06, Bryce Hendrix wrote: > > For those behind firewalls or have other problems connecting via > bittorrent, the ISO can also be found here: > > > http://code.enthought.com/downloads/scipy2006-i386.iso > > Bryce > > > ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > > > For those not able to make SciPy 2006 next week, or who would like to > download the ISO a few days early, its available at > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > packages that ship with the standard Ubuntu CD. 
Here's what I was able
> to add:
>
> * SciPy built from svn (Wed, 12:00 CST)
> * NumPy built from svn (Wed, 12:00 CST)
> * Matplotlib built from svn (Wed, 12:00 CST)
> * IPython built from svn (Wed, 12:00 CST)
> * Enthought built from svn (Wed, 16:00 CST)
> * ctypes 1.0.0
> * hdf5 1.6.5
> * networkx 0.31
> * Pyrex 0.9.4.1
> * pytables 1.3.2
>
> All of the svn checkouts are zipped in /src; if you'd like to build from
> a svn version newer than what was shipped, simply copy the compressed
> package to your home dir, uncompress it, run "svn update", and build it.
>
> Please note: This ISO was built rather hastily, uses un-official code,
> and received very little testing. Please don't even consider using this
> in a production environment.
>
> Bryce
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job
> easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > From haase at msg.ucsf.edu Fri Aug 11 15:22:01 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 12:22:01 -0700 Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !? In-Reply-To: <44DC085C.7010009@msg.ucsf.edu> References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu> Message-ID: <200608111222.01938.haase@msg.ucsf.edu> On Thursday 10 August 2006 21:32, Sebastian Haase wrote: > Travis Oliphant wrote: > > Sebastian Haase wrote: > >> Hi, > >> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is > >> non-native byteorder ? > >> > >> If not, what functions does ? > > > > It can if you pass in a data-type with the right byteorder (or use a > > native built-in data-type). > > > > In NumPy, it's the data-type that carries the "byte-order" > > information. So, there are lot's of ways to "fix" the byte-order. > > So then the question is: what is the easiest way to say: > give me the equivalent type of dtype, but with byteorder '<' (or '=') !? > I would be cumbersome (and ugly ;-) ) if one would have to "manually > assemble" such a construct every time ... I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py def normalize_descr(descr): "Normalize a description adding the platform byteorder." 
    out = []
    for item in descr:
        dtype = item[1]
        if isinstance(dtype, str):
            if dtype[0] not in ['|','<','>']:
                onebyte = dtype[1:] == "1"
                if onebyte or dtype[0] in ['S', 'V', 'b']:
                    dtype = "|" + dtype
                else:
                    dtype = byteorder + dtype
            if len(item) > 2 and item[2] > 1:
                nitem = (item[0], dtype, item[2])
            else:
                nitem = (item[0], dtype)
            out.append(nitem)
        elif isinstance(item[1], list):
            l = []
            for j in normalize_descr(item[1]):
                l.append(j)
            out.append((item[0], l))
        else:
            raise ValueError("Expected a str or list and got %s" % \
                  (type(item)))
    return out

Is that what I was talking about !? It's quite a big animal.
Would this be needed "everytime" I want to get a "systembyte-ordered version"
of a given type !?

- Sebastian

> > Of course there is still the difference between "fixing" the byte-order
> > and simply "viewing" the memory in the correct byte-order. The former
> > physically flips bytes around, the latter just flips them on calculation
> > and presentation.
>
> I understand. I need something that I can feed into my C routines that
> are too dumb to handle non-contiguous or byte-swapped data.
>
> - Sebastian

From oliphant.travis at ieee.org Fri Aug 11 16:02:28 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 14:02:28 -0600
Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !?
In-Reply-To: <200608111222.01938.haase@msg.ucsf.edu>
References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu> <200608111222.01938.haase@msg.ucsf.edu>
Message-ID: <44DCE254.9020107@ieee.org>

Sebastian Haase wrote:
> I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py
>
> def normalize_descr(descr):
>     "Normalize a description adding the platform byteorder."
>
>     return out
>
> Is that what I was talking about !? It's quite a big animal.
> Would this be needed "everytime" I want to get a "systembyte-ordered version"
> of a given type !?
>
No, I'm not even sure why exactly that was written but it's just in the
testing code.

I think the email I sent yesterday got lost because I sent it CC:
numpy-discussion with no To: address. Here's what I said (more or less) in
that email:

You can use the .newbyteorder(endian='s') method of the dtype object to get
a new data-type with a different byteorder. The possibilities for endian
are 'swap', 'big' ('>'), 'little' ('<'), or 'native' ('='). This will
descend down a complicated data-type and change all the byte-orders
appropriately.

Then you can use .astype(newtype) to convert to the new byteorder.

The .isnative attribute of the data-type will tell you if the data-type (or
all of its fields in recent SVN) are in native byte-order.

-Travis

From faltet at carabos.com Fri Aug 11 16:30:28 2006
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 11 Aug 2006 22:30:28 +0200
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
Message-ID: <200608112230.28727.faltet@carabos.com>

Hi,

I was tracking down a memory leak in PyTables and it boiled down to a problem
in the array protocol. The issue is easily exposed by:

for i in range(1000000):
    numarray.array(numpy.zeros(dtype=numpy.float64, shape=3))

and looking at the memory consumption of the process. The same happens with:

for i in range(1000000):
    numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3))

However, the numpy<--numarray sense seems to work well.

for i in range(1000000):
    numpy.array(numarray.zeros(type="Float64", shape=3))

Using numarray 1.5.1 and numpy 1.0b1

I think this is a relatively important problem, because it somewhat prevents a
smooth transition from numarray to NumPy.

Thanks,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.
Enjoy Data
 "-"

From jmiller at stsci.edu Fri Aug 11 17:13:33 2006
From: jmiller at stsci.edu (Todd Miller)
Date: Fri, 11 Aug 2006 17:13:33 -0400
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
In-Reply-To: <200608112230.28727.faltet@carabos.com>
References: <200608112230.28727.faltet@carabos.com>
Message-ID: <44DCF2FD.3000602@stsci.edu>

Francesc Altet wrote:
> Hi,
>
> I was tracking down a memory leak in PyTables and it boiled down to a problem
> in the array protocol. The issue is easily exposed by:
>
> for i in range(1000000):
>     numarray.array(numpy.zeros(dtype=numpy.float64, shape=3))
>
> and looking at the memory consumption of the process. The same happens with:
>
> for i in range(1000000):
>     numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3))
>
> However, the numpy<--numarray sense seems to work well.
>
> for i in range(1000000):
>     numpy.array(numarray.zeros(type="Float64", shape=3))
>
> Using numarray 1.5.1 and numpy 1.0b1
>
> I think this is a relatively important problem, because it somewhat prevents a
> smooth transition from numarray to NumPy.
>
> Thanks,
>
I looked at this a little with a debug python and figure it's a bug in
numpy.zeros():

>>> numpy.zeros(dtype=numpy.float64, shape=3)
array([ 0.,  0.,  0.])
[147752 refs]
>>> numpy.zeros(dtype=numpy.float64, shape=3)
array([ 0.,  0.,  0.])
[147753 refs]
>>> numpy.zeros(dtype=numpy.float64, shape=3)
array([ 0.,  0.,  0.])
[147754 refs]
>>> numarray.array([1,2,3,4])
array([1, 2, 3, 4])
[147772 refs]
>>> numarray.array([1,2,3,4])
array([1, 2, 3, 4])
[147772 refs]
>>> numarray.array([1,2,3,4])
array([1, 2, 3, 4])
[147772 refs]

Regards,
Todd

From haase at msg.ucsf.edu Fri Aug 11 17:44:16 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Fri, 11 Aug 2006 14:44:16 -0700
Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range
Message-ID: <200608111444.16236.haase@msg.ucsf.edu>

Hi!
b is a non-native byteorder array of type int16
but see further down: same after converting to native ...

>>> repr(b.dtype)
"dtype('>i2')"
>>> b.dtype.isnative
False
>>> b.shape
(38, 512, 512)
>>> b.max()
1279
>>> b.min()
0
>>> b.mean()
-65.279878014
>>> U.mmms(b)   # my "useful" function for min,max,mean,stddev
(0, 1279, 365.878016723, 123.112379036)
>>> c = b.copy()
>>> c.max()
1279
>>> c.min()
0
>>> c.mean()
-65.279878014
>>> d = N.asarray(b, b.dtype.newbyteorder('='))
>>> d.dtype.isnative
True
>>> d.max()
1279
>>> d.min()
0
>>> d.mean()
-65.279878014
>>> N.__version__
'1.0b2.dev2996'

Sorry that I don't have a simple example - what could be wrong !?

- Sebastian Haase

From faltet at carabos.com Fri Aug 11 16:55:06 2006
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 11 Aug 2006 22:55:06 +0200
Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !?
In-Reply-To: <44DCE254.9020107@ieee.org>
References: <200608101742.51914.haase@msg.ucsf.edu> <200608111222.01938.haase@msg.ucsf.edu> <44DCE254.9020107@ieee.org>
Message-ID: <200608112255.08049.faltet@carabos.com>

On Friday 11 August 2006 22:02, Travis Oliphant wrote:
> Sebastian Haase wrote:
> > I just found this in myCVS/numpy/numpy/core/tests/test_numerictypes.py
> >
> > def normalize_descr(descr):
> >     "Normalize a description adding the platform byteorder."
> >
> >     return out
> >
> > Is that what I was talking about !? It's quite a big animal.
> > Would this be needed "everytime" I want to get a "systembyte-ordered
> > version" of a given type !?
>
> No, I'm not even sure why exactly that was written but it's just in the
> testing code.

I think this is my fault. Some months ago I contributed some testing code
for checking numerical types, and ended with this 'animal'. Sorry about
that ;-)

Cheers!

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.
Enjoy Data
 "-"

From oliphant.travis at ieee.org Fri Aug 11 18:06:12 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 16:06:12 -0600
Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range
In-Reply-To: <200608111444.16236.haase@msg.ucsf.edu>
References: <200608111444.16236.haase@msg.ucsf.edu>
Message-ID: <44DCFF54.7070701@ieee.org>

Sebastian Haase wrote:
> Hi!
> b is a non-native byteorder array of type int16
> but see further down: same after converting to native ...
>
> >>> repr(b.dtype)
> "dtype('>i2')"
>
The problem is no doubt related to "wrapping" for integers. Your total is
getting too large to fit into the reducing data-type.

What does d.sum() give you?

You can add d.mean(dtype='d') to force reduction over doubles.

-Travis

From oliphant.travis at ieee.org Fri Aug 11 18:11:03 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 16:11:03 -0600
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
In-Reply-To: <44DCF2FD.3000602@stsci.edu>
References: <200608112230.28727.faltet@carabos.com> <44DCF2FD.3000602@stsci.edu>
Message-ID: <44DD0077.4030403@ieee.org>

Todd Miller wrote:
> I looked at this a little with a debug python and figure it's a bug in
> numpy.zeros():
>
Hmmm. I thought of that, but could not get any memory leak by just creating
zeros in a for loop. In other words:

for i in xrange(10000000):
    numpy.zeros(dtype=numpy.float64, shape=3)

does not leak. So, it seems to be related to the array protocol. I have not
been able to spot what is going on though. There does not seem to be any
reference-counting problem that I can see.

-Travis

From svetosch at gmx.net Fri Aug 11 18:23:01 2006
From: svetosch at gmx.net (Sven Schreiber)
Date: Sat, 12 Aug 2006 00:23:01 +0200
Subject: [Numpy-discussion] why is default axis always different?
Message-ID: <44DD0345.9000102@gmx.net>

Hi,
notice the (confusing, imho) different defaults for the axis of the
following related functions:

nansum(a, axis=-1)
    Sum the array over the given axis, treating NaNs as 0.

sum(x, axis=None, dtype=None)
    Sum the array over the given axis. The optional dtype argument
    is the data type for intermediate calculations.

average(a, axis=0, weights=None, returned=False)
    Average the array over the given axis. If the axis is None, average
    over all dimensions of the array. Equivalent to a.mean(axis), but
    with a default axis of 0 instead of None.

>>> numpy.__version__
'1.0b2.dev2973'

Shouldn't those kinds of functions have the same default behavior? So is
this a bug or am I missing something?

Thanks for enlightenment,
Sven

From oliphant.travis at ieee.org Fri Aug 11 18:30:51 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 16:30:51 -0600
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
In-Reply-To: <200608112230.28727.faltet@carabos.com>
References: <200608112230.28727.faltet@carabos.com>
Message-ID: <44DD051B.1000603@ieee.org>

Francesc Altet wrote:
> Hi,
>
> I was tracking down a memory leak in PyTables and it boiled down to a problem
> in the array protocol. The issue is easily exposed by:
>
> for i in range(1000000):
>     numarray.array(numpy.zeros(dtype=numpy.float64, shape=3))
>
More data: The following code does not leak:

import numpy
import sys

for i in xrange(10000000):
    a = numpy.zeros(dtype=numpy.float64, shape=3)
    b = a.__array_struct__

as verified by watching the memory growth. As far as numpy knows this is all
it's supposed to do. This seems to indicate that something is going on
inside numarray.array(a), because once you add that line to the loop, memory
consumption shows up.

In fact, you can just add the line

a = _numarray._array_from_array_struct(a)

to see the memory growth problem.
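Process-memory growth is one way to see such a leak; a Python-level refcount check is another. A minimal sketch with the stdlib, where the hypothetical leaky_convert stands in for any conversion routine that takes a reference to its argument and never releases it (the general failure mode being hunted in this thread, not numarray's actual code):

```python
import sys

# "_leaked" models the reference that is taken but never dropped --
# the Python-level analogue of a C extension missing a Py_DECREF.
_leaked = []

def leaky_convert(obj):          # hypothetical stand-in, for illustration
    _leaked.append(obj)          # keeps a reference to obj forever
    return list(obj)

data = [1.0, 2.0, 3.0]
before = sys.getrefcount(data)
for _ in range(100):
    leaky_convert(data)
after = sys.getrefcount(data)
print(after - before)            # 100: one reference leaked per call
```

A correct converter would leave the refcount unchanged, which is exactly what the totals in Todd's debug-python session are measuring in aggregate.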
-Travis

From oliphant.travis at ieee.org Fri Aug 11 18:52:15 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 16:52:15 -0600
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
In-Reply-To: <200608112230.28727.faltet@carabos.com>
References: <200608112230.28727.faltet@carabos.com>
Message-ID: <44DD0A1F.4010509@ieee.org>

Francesc Altet wrote:
> Hi,
>
> I was tracking down a memory leak in PyTables and it boiled down to a problem
> in the array protocol. The issue is easily exposed by:
>
> for i in range(1000000):
>     numarray.array(numpy.zeros(dtype=numpy.float64, shape=3))
>
> and looking at the memory consumption of the process. The same happens with:
>
> for i in range(1000000):
>     numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3))
>
> However, the numpy<--numarray sense seems to work well.
>
> for i in range(1000000):
>     numpy.array(numarray.zeros(type="Float64", shape=3))
>
> Using numarray 1.5.1 and numpy 1.0b1
>
> I think this is a relatively important problem, because it somewhat prevents a
> smooth transition from numarray to NumPy.
>
I tracked the leak to the numarray function

NA_FromDimsStridesDescrAndData

This function calls NA_NewAllFromBuffer with a brand-new buffer object when
data is passed in (like in the case with the array protocol). That function
then takes a reference to the buffer object but then the calling function
never releases the reference it already holds. This creates the leak.

I added the line

if (data) {Py_DECREF(buf);}

right after the call to NA_NewAllFromBuffer and the leak disappeared.

For what it's worth, I also think the base object for the new numarray
object should be the object passed in and not the C-object that is created
from it.
In other words, in the NA_FromArrayStruct function

a->base = cobj

should be replaced with

Py_INCREF(obj)
a->base = obj
Py_DECREF(cobj)

Best,

-Travis

From haase at msg.ucsf.edu Fri Aug 11 23:40:27 2006
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Fri, 11 Aug 2006 20:40:27 -0700
Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() .. arr.max() range
In-Reply-To: <44DCFF54.7070701@ieee.org>
References: <200608111444.16236.haase@msg.ucsf.edu> <44DCFF54.7070701@ieee.org>
Message-ID: <44DD4DAB.5040509@msg.ucsf.edu>

Travis Oliphant wrote:
> Sebastian Haase wrote:
>> Hi!
>> b is a non-native byteorder array of type int16
>> but see further down: same after converting to native ...
>>
>> >>> repr(b.dtype)
>> "dtype('>i2')"
>>
> The problem is no doubt related to "wrapping" for integers. Your total is
> getting too large to fit into the reducing data-type.
>
> What does d.sum() give you?
I can't check that particular array until Monday...

> You can add d.mean(dtype='d') to force reduction over doubles.
This almost sounds like what I reported is something like a feature !?
Is there a sensible / generic way to avoid those "accidents" ? Maybe it
must be the default to reduce int8, uint8, int16, uint16 into doubles !?

- Sebastian

From charlesr.harris at gmail.com Sat Aug 12 00:04:44 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 11 Aug 2006 22:04:44 -0600
Subject: [Numpy-discussion] bug ! arr.mean() outside arr.min() ..
arr.max() range In-Reply-To: <44DD4DAB.5040509@msg.ucsf.edu> References: <200608111444.16236.haase@msg.ucsf.edu> <44DCFF54.7070701@ieee.org> <44DD4DAB.5040509@msg.ucsf.edu> Message-ID: On 8/11/06, Sebastian Haase wrote: > > Travis Oliphant wrote: > > Sebastian Haase wrote: > >> Hi! > >> b is a non-native byteorder array of type int16 > >> but see further down: same after converting to native ... > >> > >>>>> repr(b.dtype) > >>>>> > >> 'dtype('>i2')' > >> > > > > The problem is no-doubt related to "wrapping" for integers. Your total > is > > getting too large to fit into the reducing data-type. > > > > What does > > > > d.sum() give you? > I can't check that particular array until Monday... > > > > > You can add d.mean(dtype='d') to force reduction over doubles. > This almost sound like what I reported is something like a feature !? > Is there a sensible / generic way to avoid those "accident" ? Maybe it > must be the default to reduce int8, uint8, int16, uint16 into doubles !? Hard to say. I always bear the precision in mind when accumulating numbers but even so it is possible to get unexpected results. Even doubles can give problems if there are a few large numbers mixed with many small numbers. That said, folks probably expect means to be accurate and don't want modular arithmetic, so doubles would probably be a better default. It would be slower though. I think there was a discussion of this problem previously in regard to the reduce methods. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Sat Aug 12 00:10:45 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:10:45 -0700 Subject: [Numpy-discussion] is cygwin patch from from ticket #114 still working !? Message-ID: <44DD54C5.9000100@msg.ucsf.edu> This is what I get ? haase at doe:~/myCVS/numpy: patch.exe -b -p0 < ~/winbuilding3.diff patching file numpy/distutils/misc_util.py Reversed (or previously applied) patch detected! 
Assume -R? [n] Thanks, Sebastian From haase at msg.ucsf.edu Sat Aug 12 00:18:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:18:36 -0700 Subject: [Numpy-discussion] is cygwin patch from from ticket #114 still working !? In-Reply-To: <44DD54C5.9000100@msg.ucsf.edu> References: <44DD54C5.9000100@msg.ucsf.edu> Message-ID: <44DD569C.6050105@msg.ucsf.edu> Sebastian Haase wrote: > This is what I get ? > > haase at doe:~/myCVS/numpy: patch.exe -b -p0 < ~/winbuilding3.diff > patching file numpy/distutils/misc_util.py > Reversed (or previously applied) patch detected! Assume -R? [n] > > Thanks, > Sebastian OK - I think I can answer myself. No, but it's not needed anymore ! It looks like it compiled fine without applying it - Sebastian From haase at msg.ucsf.edu Sat Aug 12 00:31:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 11 Aug 2006 21:31:20 -0700 Subject: [Numpy-discussion] Does a C-API mismatch require a fatal(!) program termination !? (crash on import !) Message-ID: <44DD5998.7000301@msg.ucsf.edu> Hi, I was just wondering if it might be possible to raise an ImportError instead of terminating python; look what I get: haase at doe:~: python Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> import sys >>> sys.path.append("PrCyg") >>> from Priithon import seb RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. haase at doe:~: Assume that you are running an interactive session, analysing some important[;-)] data. Then you think: "Oh, I should try this one (maybe little old) module on this" ... so you try to import ... and ... 
suddenly the entire python application crashes. When your shell application runs without a terminal you don't even get to read the error message ! - Sebastian Haase From jmiller at stsci.edu Sat Aug 12 07:05:51 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sat, 12 Aug 2006 07:05:51 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <44DD051B.1000603@ieee.org> References: <200608112230.28727.faltet@carabos.com> <44DD051B.1000603@ieee.org> Message-ID: <44DDB60F.9050009@stsci.edu> Travis Oliphant wrote: > As far as numpy knows this is all it's supposed to do. This seems to > indicate that something is going on inside numarray.array(a) > > because once you had that line to the loop, memory consumption shows up. > > In fact, you can just add the line > > a = _numarray._array_from_array_struct(a) > This does demonstrate a huge leak I'll look into. Thanks. Regards, Todd From jmiller at stsci.edu Sat Aug 12 08:37:39 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sat, 12 Aug 2006 08:37:39 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy In-Reply-To: <44DD0A1F.4010509@ieee.org> References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> Message-ID: <44DDCB93.5080103@stsci.edu> I agree with all of Travis' comments below and committed the suggested changes to numarray CVS. I found one other numarray change needed for Francesc's examples to run (apparently) leak-free: Py_INCREF(obj) Py_XDECREF(a->base) a->base = obj Py_DECREF(cobj) Thanks Travis! Regards, Todd Travis Oliphant wrote: > Francesc Altet wrote: > >> Hi, >> >> I was tracking down a memory leak in PyTables and it boiled down to a problem >> in the array protocol. The issue is easily exposed by: >> >> for i in range(1000000): >> numarray.array(numpy.zeros(dtype=numpy.float64, shape=3)) >> >> and looking at the memory consumption of the process. 
The same happens with: >> >> for i in range(1000000): >> numarray.asarray(numpy.zeros(dtype=numpy.float64, shape=3)) >> >> However, the numpy<--numarray sense seems to work well. >> >> for i in range(1000000): >> numpy.array(numarray.zeros(type="Float64", shape=3)) >> >> Using numarray 1.5.1 and numpy 1.0b1 >> >> I think this is a relatively important problem, because it somewhat prevents a >> smooth transition from numarray to NumPy. >> >> >> > > I tracked the leak to the numarray function > > NA_FromDimsStridesDescrAndData > > This function calls NA_NewAllFromBuffer with a brand-new buffer object > when data is passed in (like in the case with the array protocol). That > function then takes a reference to the buffer object but then the > calling function never releases the reference it already holds. This > creates the leak. > > I added the line > > if (data) {Py_DECREF(buf);} > > right after the call to NA_NewAllFromBuffer and the leak disappeared. > > For what it's worth, I also think the base object for the new numarray > object should be the object passed in and not the C-object that is > created from it. > > In other words in the NA_FromArrayStruct function > > a->base = cobj > > should be replaced with > > Py_INCREF(obj) > a->base = obj > Py_DECREF(cobj) > > > Best, > > > -Travis > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From faltet at carabos.com Sat Aug 12 11:53:31 2006
From: faltet at carabos.com (Francesc Altet)
Date: Sat, 12 Aug 2006 17:53:31 +0200
Subject: [Numpy-discussion] Memory leak in array protocol numarray<--numpy
In-Reply-To: <44DDCB93.5080103@stsci.edu>
References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> <44DDCB93.5080103@stsci.edu>
Message-ID: <200608121753.33150.faltet@carabos.com>

On Saturday 12 August 2006 14:37, Todd Miller wrote:
> I agree with all of Travis' comments below and committed the suggested
> changes to numarray CVS. I found one other numarray change needed
> for Francesc's examples to run (apparently) leak-free:
>
> Py_INCREF(obj)
> Py_XDECREF(a->base)
> a->base = obj
> Py_DECREF(cobj)
>
> Thanks Travis!

Hey! I checked this morning Travis' patch and it seems to work well for me.
I'll add yours as well later on and see... BTW, where exactly do I have to
add the above lines?

Many thanks Travis and Todd. You are great!

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.
Enjoy Data
 "-"

From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 12:27:07 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Sat, 12 Aug 2006 11:27:07 -0500
Subject: [Numpy-discussion] build bug
Message-ID: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu>

Just tried to build svn 2999 on OSX 10.3 w/ python2.3 and encountered
a bug in numpy/core/setup.py on line 102

if sys.version[:3] < '2.4':
    #kws_args['headers'].append('stdlib.h')
    if check_func('strtod'):
        moredefs.append(('PyOS_ascii_strtod', 'strtod'))

I've commented out the kws_args because it is not defined in this
function. Appeared to build fine w/o it.

JDH

From cookedm at physics.mcmaster.ca Sat Aug 12 14:27:26 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Sat, 12 Aug 2006 14:27:26 -0400
Subject: [Numpy-discussion] build bug
In-Reply-To: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu>
References: <87lkpt3oo4.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <20060812182726.GA930@arbutus.physics.mcmaster.ca>

On Sat, Aug 12, 2006 at 11:27:07AM -0500, John Hunter wrote:
>
> Just tried to build svn 2999 on OSX 10.3 w/ python2.3 and encountered
> a bug in numpy/core/setup.py on line 102
>
> if sys.version[:3] < '2.4':
>     #kws_args['headers'].append('stdlib.h')
>     if check_func('strtod'):
>         moredefs.append(('PyOS_ascii_strtod', 'strtod'))
>
> I've commented out the kws_args because it is not defined in this
> function. Appeared to build fine w/o it.

Whoops, missed that one. Fixed in svn.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From oliphant.travis at ieee.org Fri Aug 11 03:12:45 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 11 Aug 2006 01:12:45 -0600
Subject: [Numpy-discussion] numpy.ascontiguousarray on byteswapped data !?
In-Reply-To: <44DC085C.7010009@msg.ucsf.edu>
References: <200608101742.51914.haase@msg.ucsf.edu> <44DBE126.7030001@ieee.org> <44DC085C.7010009@msg.ucsf.edu>
Message-ID: <44DC2DED.7010102@ieee.org>

Sebastian Haase wrote:
> Travis Oliphant wrote:
>> Sebastian Haase wrote:
>>> Hi,
>>> Does numpy.ascontiguousarray(arr) "fix" the byteorder when arr is
>>> non-native byteorder ?
>>>
>>> If not, what functions does ?
>>>
>> It can if you pass in a data-type with the right byteorder (or use a
>> native built-in data-type).
>>
>> In NumPy, it's the data-type that carries the "byte-order"
>> information. So, there are lots of ways to "fix" the byte-order.
>>
> So then the question is: what is the easiest way to say:
> give me the equivalent type of dtype, but with byteorder '<' (or '=') !?
> It would be cumbersome (and ugly ;-) ) if one would have to "manually
> assemble" such a construct every time ...

Two things. Every dtype object has the method self.newbyteorder(endian)
which can be used to either swap the byte order or apply a new one to every
sub-field. endian can be '<', '>', '=', 'swap', 'little', 'big'.

If you want to swap bytes based on whether or not the data-type is machine
native you can do something like the following:

if not a.dtype.isnative:
    a = a.astype(a.dtype.newbyteorder())

You can make sure the array has the correct data-type using .astype(newtype)
or array(a, newtype).

You can also set the data-type of the array

a.dtype = newtype

but this won't change anything, just how they are viewed.

You can always byteswap the data explicitly:

a.byteswap(True)

will do it in-place.
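The distinction drawn here between re-viewing bytes and physically swapping them can be sketched with the standard-library struct module alone (no numpy; the two bytes b'\x04\xff' are an arbitrary example value):

```python
import struct

raw = b'\x04\xff'                      # two arbitrary bytes of "array data"

# "Viewing" the same memory with a different byte order changes the value
# without touching the bytes -- this is what reassigning the dtype does.
big = struct.unpack('>h', raw)[0]      # big-endian int16 view    -> 1279
little = struct.unpack('<h', raw)[0]   # little-endian int16 view -> -252

# "Fixing" the byte order physically swaps the bytes (like byteswap(True)),
# so the old big-endian view of the swapped data now yields the
# little-endian value.
swapped = raw[::-1]
print(big, little, struct.unpack('>h', swapped)[0])   # 1279 -252 -252
```

Doing only one of the two operations (swap the bytes, or relabel the dtype) changes the numbers an array reports; doing both together is a no-op on the values, which is why the pair of lines in the follow-up message below come as a matched set.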
So, you can change both the data-type and the way it's stored using a.byteswap(True) # Changes the data but not the data-type a.dtype = a.dtype.newbyteorder() # changes the data-type but not the data -Travis From svetosch at gmx.net Sat Aug 12 17:35:36 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 12 Aug 2006 23:35:36 +0200 Subject: [Numpy-discussion] why is default axis always different? In-Reply-To: <44DD0345.9000102@gmx.net> References: <44DD0345.9000102@gmx.net> Message-ID: <44DE49A8.4010606@gmx.net> Sven Schreiber wrote: > Hi, > notice the (confusing, imho) different defaults for the axis of the > following related functions: > > nansum(a, axis=-1) > Sum the array over the given axis, treating NaNs as 0. > > sum(x, axis=None, dtype=None) > Sum the array over the given axis. The optional dtype argument > is the data type for intermediate calculations. > > > average(a, axis=0, weights=None, returned=False) > > Average the array over the given axis. If the axis is None, average > over all dimensions of the array. Equivalent to a.mean(axis), but > with a default axis of 0 instead of None. > >>>> numpy.__version__ > '1.0b2.dev2973' > > Shouldn't those kind of functions have the same default behavior? So is > this a bug or am I missing something? > > Thanks for enlightenment, > Sven > Perhaps this is useful for others, so I'll share my self-enlightenment (please correct me if I got it wrong): - sum's axis=None default actually conforms to what's in the numpy 1.0 release notes (functions that match methods should also get their default, which for such methods is axis=None) - nansum's axis=-1 default is normal for functions which don't match equivalent methods - However, I still don't understand why then average() doesn't have axis=-1 as its default like other functions...? Apparently the axis=0 default of average() is its main feature, explaining its existence vis-à-vis .mean.
But that seems inconsistent to me, as it breaks all the rules: It doesn't conform to the standard axis=-1 default for functions, and if it's viewed as equivalent to the .mean method (which it is), it doesn't conform to the rule that it should share the latter's default axis=None. So imho it seems like there's no real use for average() other than creating confusion. (Well that sounds a bit too strong, but anyway...) I therefore suggest to officially deprecate it and move it to some compatibility module. I'm going to file a corresponding ticket tomorrow unless somebody tells me not to. Cheers, Sven
From jmiller at stsci.edu Sun Aug 13 08:58:52 2006 From: jmiller at stsci.edu (Todd Miller) Date: Sun, 13 Aug 2006 08:58:52 -0400 Subject: [Numpy-discussion] Memory leak in array protocol numarray <--numpy In-Reply-To: <200608121753.33150.faltet@carabos.com> References: <200608112230.28727.faltet@carabos.com> <44DD0A1F.4010509@ieee.org> <44DDCB93.5080103@stsci.edu> <200608121753.33150.faltet@carabos.com> Message-ID: <44DF220C.2020600@stsci.edu> Francesc Altet wrote: > On Saturday 12 August 2006 14:37, Todd Miller wrote: > >> I agree with all of Travis' comments below and committed the suggested >> changes to numarray CVS. I found one other numarray change needed >> for Francesc's examples to run (apparently) leak-free: >> >> Py_INCREF(obj) >> Py_XDECREF(a->base) >> a->base = obj >> Py_DECREF(cobj) >> >> Thanks Travis! >> > > Hey! I checked this morning Travis' patch and it seems to work well for me. I'll add yours as well later on and see... BTW, where exactly do I have to add the above lines?
> The lines above are a modification to Travis' patch, so basically the same place: ******* a = NA_FromDimsStridesTypeAndData(arrayif->nd, shape, strides, t, arrayif->data); if (!a) goto _fail; ! a->base = cobj; return a; ------- a = NA_FromDimsStridesTypeAndData(arrayif->nd, shape, strides, t, arrayif->data); if (!a) goto _fail; ! Py_INCREF(obj); ! Py_XDECREF(a->base); ! a->base = obj; ! Py_DECREF(cobj); return a; Todd From jdhunter at ace.bsd.uchicago.edu Sun Aug 13 16:02:13 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sun, 13 Aug 2006 15:02:13 -0500 Subject: [Numpy-discussion] numarray cov function Message-ID: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> I was surprised to see that numarray.mlab.cov is returning a rank-0 complex number when given two 1D arrays as inputs rather than the standard 2x2 covariance array I am used to seeing. Is this a feature or a bug? In [2]: import numarray.mlab as nam In [3]: x = nam.rand(10) In [4]: y = nam.rand(10) In [5]: nam.cov(x, y) Out[5]: array((0.014697855954587828+0j)) In [6]: import numpy.oldnumeric.mlab as npm In [7]: x = npm.rand(10) In [8]: y = npm.rand(10) In [9]: npm.cov(x, y) Out[9]: array([[ 0.13243082, 0.0520454 ], [ 0.0520454 , 0.07435816]]) In [10]: import numarray In [11]: numarray.__version__ Out[11]: '1.3.3' In [12]: import numpy In [13]: numpy.__version__ Out[13]: '1.0b2.dev2999' From oliphant.travis at ieee.org Sun Aug 13 17:33:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 15:33:28 -0600 Subject: [Numpy-discussion] numarray cov function In-Reply-To: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> References: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44DF9AA8.8080802@ieee.org> John Hunter wrote: > I was surprised to see that numarray.mlab.cov is returning a rank-0 > complex number when given two 1D arrays as inputs rather than the > standard 2x2 covariance array I am used to seeing. Is this a feature > or a bug? 
> > > In [2]: import numarray.mlab as nam > > In [3]: x = nam.rand(10) > > In [4]: y = nam.rand(10) > > In [5]: nam.cov(x, y) > Out[5]: array((0.014697855954587828+0j)) > > In [6]: import numpy.oldnumeric.mlab as npm > > In [7]: x = npm.rand(10) > > In [8]: y = npm.rand(10) > > In [9]: npm.cov(x, y) > Out[9]: > array([[ 0.13243082, 0.0520454 ], > [ 0.0520454 , 0.07435816]]) > > In [10]: import numarray > > In [11]: numarray.__version__ > Out[11]: '1.3.3' > > In [12]: import numpy > > In [13]: numpy.__version__ > Out[13]: '1.0b2.dev2999' > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Sun Aug 13 17:35:00 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 15:35:00 -0600 Subject: [Numpy-discussion] numarray cov function In-Reply-To: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> References: <871wrkqu9m.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44DF9B04.3010401@ieee.org> John Hunter wrote: > I was surprised to see that numarray.mlab.cov is returning a rank-0 > complex number when given two 1D arrays as inputs rather than the > standard 2x2 covariance array I am used to seeing. Is this a feature > or a bug? > This was the old behavior of the Numeric cov function which numarray borrowed. We changed the behavior of cov in NumPy because it makes more sense to return the full covariance matrix in this case. 
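The NumPy behaviour described in this thread can be checked with a short sketch (not from the original mails; the data is made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])

# Two 1-D inputs are treated as two variables: the result is the
# full 2x2 covariance matrix, not a scalar.
c = np.cov(x, y)

assert c.shape == (2, 2)
assert np.allclose(c, c.T)                 # symmetric
assert np.isclose(c[0, 0], x.var(ddof=1))  # diagonal holds sample variances
```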
-Travis From haase at msg.ucsf.edu Sun Aug 13 18:48:41 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 13 Aug 2006 15:48:41 -0700 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy Message-ID: <44DFAC49.1060903@msg.ucsf.edu> Hi, I just wanted to point out that the default of the copy argument changed from numpy to numarray. Don't forget about that in the conversion script ... Cheers, Sebastian Haase
From oliphant.travis at ieee.org Sun Aug 13 18:57:13 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 13 Aug 2006 16:57:13 -0600 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy In-Reply-To: <44DFAC49.1060903@msg.ucsf.edu> References: <44DFAC49.1060903@msg.ucsf.edu> Message-ID: <44DFAE49.4080601@ieee.org> Sebastian Haase wrote: > Hi, > I just wanted to point out that the default of the copy argument changed > from numpy to numarray. > Don't forget about that in the conversion script ... > Hmm.. I don't see what you are talking about. The default for the copy argument in the array function is still copy=True. If there is something else then it is a bug. -Travis
From haase at msg.ucsf.edu Sun Aug 13 20:28:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 13 Aug 2006 17:28:36 -0700 Subject: [Numpy-discussion] conversion warning: numarray to numpy - now array defaults to not copy In-Reply-To: <44DFAE49.4080601@ieee.org> References: <44DFAC49.1060903@msg.ucsf.edu> <44DFAE49.4080601@ieee.org> Message-ID: <44DFC3B4.2030901@msg.ucsf.edu> SORRY FOR THE CONFUSION !! I must have been on drugs ! Maybe I did not get enough sleep. asarray() is the function that does not create a copy - both in numpy and in numarray. Sorry, Sebastian Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi, >> I just wanted to point out that the default of the copy argument changed >> from numpy to numarray. >> Don't forget about that in the conversion script ... >> > > Hmm.. I don't see what you are talking about. The default for the copy > argument in the array function is still copy=True. If there is > something else then it is a bug. > > -Travis
From davidgrant at gmail.com Mon Aug 14 02:33:54 2006 From: davidgrant at gmail.com (David Grant) Date: Sun, 13 Aug 2006 23:33:54 -0700 Subject: [Numpy-discussion] Profiling line-by-line In-Reply-To: References: Message-ID: Could this http://oubiwann.blogspot.com/2006/08/python-and-kcachegrind.html lead to line-by-line profiling with numpy functions?
Dave On 7/26/06, David Grant wrote: > > Does anyone know if this issue related to profiling with numpy is a python > problem or a numpy problem? > > Dave > > > On 7/20/06, David Grant < davidgrant at gmail.com> wrote: > > > > > > > > On 7/20/06, Arnd Baecker wrote: > > > > > > > > > More importantly note that profiling in connection > > > with ufuncs seems problematic: > > > > > > Yes, that seems to be my problem... I read the threads you provided > > links to. Do you know why this is the case? > > > > I have tried hotshot2calltree by the way, and I didn't find out anything > > new. > > > > -- > > David Grant > > > > > > -- > David Grant > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ainulinde at gmail.com Mon Aug 14 13:21:39 2006 From: ainulinde at gmail.com (ainulinde) Date: Tue, 15 Aug 2006 01:21:39 +0800 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: FYI, I chang my bt client from bitcomet to uTorrent, and it works now. and I have downloaded the iso by http://blabla. in the vmware vitural machine, the livecd boot and i can use ipython/import numpy... is there any more feature or special scipy conference stuff on the cd? On 8/11/06, ainulinde wrote: > can't get any seeds for this torrent and any other download methods? thanks > > On 8/11/06, Bryce Hendrix wrote: > > For those not able to make SciPy 2006 next week, or who would like to > > download the ISO a few days early, its available at > > http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. > > > > We squashed a lot onto the CD, so I also had to trim > 100 MB of > > packages that ship with the standard Ubuntu CD.
Here's what I was able > > to add: > > > > * SciPy built from svn (Wed, 12:00 CST) > > * NumPy built from svn (Wed, 12:00 CST) > > * Matplotlib built from svn (Wed, 12:00 CST) > > * IPython built from svn (Wed, 12:00 CST) > > * Enthought built from svn (Wed, 16:00 CST) > > * ctypes 1.0.0 > > * hdf5 1.6.5 > > * networkx 0.31 > > * Pyrex 0.9.4.1 > > * pytables 1.3.2 > > > > All of the svn checkouts are zipped in /src, if you'd like to build from > > a svn version newer than what was shipped, simply copy the compressed > > package to your home dir, uncompress it, run "svn update", and build it. > > > > Please note: This ISO was built rather hastily, uses un-official code, > > and received very little testing. Please don't even consider using this > > in a production environment. > > Bryce From matthew.brett at gmail.com Mon Aug 14 13:23:07 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 Aug 2006 18:23:07 +0100 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays Message-ID: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> Hi, I am sorry if this is obvious, but: I am working on the scipy loadmat module, and would like to use numpy to reformat the fortran order arrays that matlab saves. I was not sure how to do this, and would like to ask for advice. Let us say that I have some raw binary data as a string.
The data contains 4 integers, for a 2x2 array, stored in fortran order. For example, here is 0,1,2,3 as int32 str = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' What is the best way of me putting this into a 2x2 array object so that the array recognizes the data is in fortran order. Sort of: a = somefunction(str, shape=(2,2), dtype=int32, order='F') such that a.shape = (2,2) and a[1,0] == 1, rather than 2. Sorry if that's obvious, but I couldn't see it immediately.... Thanks a lot, Matthew From bhendrix at enthought.com Mon Aug 14 13:36:35 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 14 Aug 2006 12:36:35 -0500 Subject: [Numpy-discussion] SciPy 2006 LiveCD torrent is available In-Reply-To: References: <44DB64A2.60203@enthought.com> Message-ID: <44E0B4A3.3000307@enthought.com> The Live CD is meant to be paired with the tutorial sessions, but contains just the latest builds + svn checkouts. Once the tutorials are available, we should add them to the same wiki page for downloading. I built the CD's in a VMWare virtual machine, if anyone is interested in the VMWare image, I can make it available via bittorrent too, maybe even with instructions on how to update the files and re-master the ISO :) Bryce ainulinde wrote: > FYI, I chang my bt client from bitcomet to uTorrent, and it works now. > and I have downloaded the iso by http://blabla. > in the vmware vitural machine, the livecd boot and i can use > ipython/import numpy... > is there any more feature or special scipy conference stuff on the cd? 
> > From oliphant.travis at ieee.org Mon Aug 14 13:55:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 11:55:40 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> Message-ID: <44E0B91C.8070807@ieee.org> Matthew Brett wrote: > Hi, > > I am sorry if this is obvious, but: > It's O.K. I don't think many people are used to the fortran-order stuff. So, I doubt it's obvious. > For example, here is 0,1,2,3 as int32 > > str = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > What is the best way of me putting this into a 2x2 array object so > that the array recognizes the data is in fortran order. Sort of: > > a = somefunction(str, shape=(2,2), dtype=int32, order='F') > There isn't really a function like this because the fromstring function only creates 1-d arrays that must be reshaped later (it also copies the data from the string). However, you can use the ndarray creation function itself to do what you want: a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') This will use the memory of the string as the new array memory. -Travis From oliphant.travis at ieee.org Mon Aug 14 14:01:48 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 12:01:48 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <44E0B91C.8070807@ieee.org> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> Message-ID: <44E0BA8C.2070801@ieee.org> Travis Oliphant wrote: > However, you can use the ndarray creation function itself to do what you > want: > > a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') > > This will use the memory of the string as the new array memory. > Incidentally, the new array will be read-only. 
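A runnable sketch of this recipe (illustrative, not from the original mail; it uses a bytes object and an explicit little-endian dtype in place of the original string):

```python
import numpy as np

# 0, 1, 2, 3 as little-endian int32, laid out in Fortran (column-major) order
buf = (b'\x00\x00\x00\x00' b'\x01\x00\x00\x00'
       b'\x02\x00\x00\x00' b'\x03\x00\x00\x00')

# ndarray() wraps the buffer's memory directly; no copy is made
a = np.ndarray(shape=(2, 2), dtype='<i4', buffer=buf, order='F')

assert a.shape == (2, 2)
assert a[1, 0] == 1 and a[0, 1] == 2  # Fortran order: first index varies fastest
assert not a.flags.writeable          # bytes buffer gives a read-only array
```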
But, you can fix this in two ways: 1) a.flags.writeable = True --- This is a cheat that avoids the extra copy on pickle-load and let's you use strings as writeable buffers. Don't abuse it. It will disappear once Python 3k has a proper bytes type. 2) a = a.copy() -Travis From haase at msg.ucsf.edu Mon Aug 14 14:02:53 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 11:02:53 -0700 Subject: [Numpy-discussion] trivial question: how to compare dtype - but ignoring byteorder ? In-Reply-To: <44C52158.3050600@ieee.org> References: <44C450CA.3010609@msg.ucsf.edu> <44C52158.3050600@ieee.org> Message-ID: <200608141102.53945.haase@msg.ucsf.edu> On Monday 24 July 2006 12:36, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > if I have a numpy array 'a' > > and say: > > a.dtype == numpy.float32 > > > > Is the result independent of a's byteorder ? > > (That's what I would expect ! Just checking !) > > I think I misread the question and saw "==" as "=" > > But, the answer I gave should still help: the byteorder is a property > of the data-type. There is no such thing as "a's" byteorder. Thus, > numpy.float32 (which is actually an array-scalar and not a true > data-type) is interepreted as a machine-byte-order IEEE floating-point > data-type with 32 bits. Thus, the result will depend on whether or not > a.dtype is machine-order or not. > > -Travis Hi, I just realized that this question did actually not get sorted out. Now I'm just about to convert my code to compare arr.dtype.type to the (default scalar!) dtype numpy.uint8 like this: if self.img.dtype.type == N.uint8: self.hist_min, self.hist_max = 0, 1<<8 elif self.img.dtype.type == N.uint16: self.hist_min, self.hist_max = 0, 1<<16 ... This seems to work independent of byteorder - (but looks ugly(er)) ... Is this the best way of doing this ? 
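For reference, a quick illustrative check (not part of the original exchange) of how dtype comparison interacts with byte order:

```python
import numpy as np

big = np.dtype('>u2')     # big-endian unsigned 16-bit
little = np.dtype('<u2')  # little-endian unsigned 16-bit

# dtype objects compare unequal when only the byte order differs...
assert big != little

# ...while the underlying scalar type object ignores byte order entirely
assert big.type is little.type
assert big.type is np.uint16
```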
- Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 15:32:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 13:32:05 -0600 Subject: [Numpy-discussion] trivial question: how to compare dtype - but ignoring byteorder ? In-Reply-To: <200608141102.53945.haase@msg.ucsf.edu> References: <44C450CA.3010609@msg.ucsf.edu> <44C52158.3050600@ieee.org> <200608141102.53945.haase@msg.ucsf.edu> Message-ID: <44E0CFB5.9060801@ieee.org> > Hi, > I just realized that this question did actually not get sorted out. > Now I'm just about to convert my code to compare > arr.dtype.type to the (default scalar!) dtype numpy.uint8 > like this: > if self.img.dtype.type == N.uint8: > self.hist_min, self.hist_max = 0, 1<<8 > elif self.img.dtype.type == N.uint16: > self.hist_min, self.hist_max = 0, 1<<16 > ... > > Yes, you can do this and it should work independent of byteorder. The dtype comparison will take into account the byte-order but comparing the type objects directly won't. So, if that is your intent, then great. -Travis From satyaupadhya at yahoo.co.in Mon Aug 14 15:44:05 2006 From: satyaupadhya at yahoo.co.in (Satya Upadhya) Date: Mon, 14 Aug 2006 20:44:05 +0100 (BST) Subject: [Numpy-discussion] Regarding Matrices Message-ID: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> Dear All, Just a few queries regarding matrices. On my python shell i typed: >>> from Numeric import * >>> from LinearAlgebra import * >>> A = [1,2,3,4,5,6,7,8,9] >>> B = reshape(A,(3,3)) >>> B array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> X = identity(3) >>> X array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) >>> D = power(B,0) >>> D array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) the power function is giving a resultant matrix in which each element of matrix B is raised to the power of 0 so as to make it 1. But, taken as a whole i.e. matrix B to the power of 0 should have given the identity matrix. 
Also, what is the procedure for taking the log of an entire matrix (log(A) where A is a matrix takes the log of every individual element in A, but thats not the same as taking the log of the entire matrix) Thanking you, Satya --------------------------------- Here's a new way to find what you're looking for - Yahoo! Answers Send FREE SMS to your friend's mobile from Yahoo! Messenger Version 8. Get it NOW -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetosch at gmx.net Mon Aug 14 15:58:50 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Mon, 14 Aug 2006 21:58:50 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> Message-ID: <44E0D5FA.9040505@gmx.net> Hi, Satya Upadhya schrieb: >>>> from Numeric import * Well this list is about the numpy package, but anyway... > the power function is giving a resultant matrix in which each element of > matrix B is raised to the power of 0 so as to make it 1. But, taken as a > whole i.e. matrix B to the power of 0 should have given the identity > matrix. afaik, in numpy terms if you are dealing with a numpy array, such functions are elementwise by design. In contrast, if you have a numpy matrix (a special subclass of the array class) --constructed e.g. as mat(eye(3))-- then power is redefined to be the matrix power; at least that's the rule for the ** operator, not 100% sure if for the explicit power() function as well, but I suppose so. > > Also, what is the procedure for taking the log of an entire matrix > (log(A) where A is a matrix takes the log of every individual element in > A, but thats not the same as taking the log of the entire matrix) I don't understand what you want, how do you take the log of a matrix mathematically? 
-Sven From nwagner at iam.uni-stuttgart.de Mon Aug 14 16:24:20 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Aug 2006 22:24:20 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E0D5FA.9040505@gmx.net> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: On Mon, 14 Aug 2006 21:58:50 +0200 Sven Schreiber wrote: > Hi, > > Satya Upadhya wrote: > >>>>> from Numeric import * > > Well this list is about the numpy package, but anyway... > >> the power function is giving a resultant matrix in which each element of >> matrix B is raised to the power of 0 so as to make it 1. But, taken as a >> whole i.e. matrix B to the power of 0 should have given the identity >> matrix. > > afaik, in numpy terms if you are dealing with a numpy array, such > functions are elementwise by design. > In contrast, if you have a numpy matrix (a special subclass of the array > class) --constructed e.g. as mat(eye(3))-- then power is redefined to be > the matrix power; at least that's the rule for the ** operator, not 100% > sure if for the explicit power() function as well, but I suppose so. > >> >> Also, what is the procedure for taking the log of an entire matrix >> (log(A) where A is a matrix takes the log of every individual element in >> A, but thats not the same as taking the log of the entire matrix) > > I don't understand what you want, how do you take the log of a matrix > mathematically? > > -Sven Help on function logm in module scipy.linalg.matfuncs: logm(A, disp=1) Matrix logarithm, inverse of expm. Nils
From torgil.svensson at gmail.com Mon Aug 14 17:03:04 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Mon, 14 Aug 2006 23:03:04 +0200 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: >>> import numpy >>> numpy.__version__ '1.0b1' >>> from numpy import * >>> A = [1,2,3,4,5,6,7,8,9] >>> B = asmatrix(reshape(A,(3,3))) >>> B matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> B**0 matrix([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> power(B,0) matrix([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) Shouldn't power() and the ** operator return the same result for matrices?
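The difference shown in this transcript can be reproduced with a short sketch (modern import style; note np.matrix is deprecated in current NumPy, and is used here only to mirror the thread):

```python
import numpy as np

B = np.matrix(np.arange(1, 10).reshape(3, 3))

# ** on a matrix object is the matrix power: B**0 is the identity
assert (B ** 0 == np.eye(3)).all()

# power() is always the elementwise ufunc, regardless of the subclass
assert (np.power(B, 0) == np.ones((3, 3))).all()
```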
//Torgil From oliphant.travis at ieee.org Mon Aug 14 17:13:50 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 15:13:50 -0600 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E0E78E.5060600@ieee.org> Torgil Svensson wrote: >>>> import numpy >>>> numpy.__version__ >>>> > '1.0b1' > >>>> from numpy import * >>>> A = [1,2,3,4,5,6,7,8,9] >>>> B = asmatrix(reshape(A,(3,3))) >>>> B >>>> > matrix([[1, 2, 3], > [4, 5, 6], > [7, 8, 9]]) > >>>> B**0 >>>> > matrix([[ 1., 0., 0.], > [ 0., 1., 0.], > [ 0., 0., 1.]]) > >>>> power(B,0) >>>> > matrix([[1, 1, 1], > [1, 1, 1], > [1, 1, 1]]) > > Shouldn't power() and the ** operator return the same result for matrixes? > No. power is always the ufunc which does element-by-element raising to a power. This is actually a feature in that you can use the function call to do raising to a power without caring what kind of array subclass is used. In the same manner, multiply is *always* the ufunc. -Travis From fullung at gmail.com Mon Aug 14 17:16:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon, 14 Aug 2006 23:16:06 +0200 Subject: [Numpy-discussion] ctypes and ndpointer Message-ID: Hello all Just a quick note on the ndpointer function that Travis recently added to NumPy (thanks Travis!). When wrapping functions with ctypes, one can specify the argument types of the function. ctypes then checks that the parameters are valid before invoking the C function. 
This is described here in detail: http://docs.python.org/dev/lib/ctypes-specifying-required-argument-types.html The argtypes list is optional, and I think previously Travis suggested not specifying the argtypes because it would require one to write something like this: bar.argtypes = [POINTER(c_double)] x = N.array([...]) bar(x.ctypes.data_as(POINTER(c_double))) instead of simply: bar(x) What ndpointer allows one to do is to build classes with a from_param method that knows about the details of ndarrays and how to convert them to something that ctypes can send to a C function. For example, suppose you have the following function: void bar(int* data, double x); You know that bar expects a 20x30 array of big-endian integers in Fortran order. You can make sure it gets only this kind of array by doing: _foolib = N.ctypes_load_library('foolib_', '.') bar = _foolib.bar bar.restype = None p = N.ndpointer(dtype='>i4', ndim=2, shape=(20,30), flags='FORTRAN') bar.argtypes = [p, ctypes.c_double] x = N.zeros((20,30),dtype='>i4',order='F') bar(x, 123.0) If you want your function to accept any kind of ndarray, you can do: bar.argtypes = [N.ndpointer(),...] In this case it will probably still make sense to wrap the C function in a Python function that also passes the .ctypes.strides and .ctypes.shape of the array. Cheers, Albert P.S. Sidebar: do we want these ctypes functions in the top-level namespace? Maybe not. Also, I'm starting to wonder whether ctypes_load_library deserves to exist or whether we should hear from the ctypes guys if there is a better way to accomplish what it does (which is to make it easy to load a shared library/DLL/dylib relative to some file in your module on any platform). From oliphant.travis at ieee.org Mon Aug 14 17:25:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 15:25:53 -0600 Subject: [Numpy-discussion] ctypes and ndpointer In-Reply-To: References: Message-ID: <44E0EA61.7010807@ieee.org> Albert Strasheim wrote: > P.S. 
Sidebar: do we want these ctypes functions in the top-level namespace? > Maybe not. Also, I'm starting to wonder whether ctypes_load_library deserves > to exist or whether we should hear from the ctypes guys if there is a better > way to accomplish what it does (which is to make it easy to load a shared > library/DLL/dylib relative to some file in your module on any platform). > I'm happy to move them from the top-level name-space to something else prior to 1.0 final. It's probably a good idea. -Travis From Chris.Barker at noaa.gov Mon Aug 14 19:37:31 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 14 Aug 2006 16:37:31 -0700 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E1093B.6040405@noaa.gov> Torgil Svensson wrote: > Shouldn't power() and the ** operator return the same result for matrixes? no, but the built-in pow() should -- does it? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Mon Aug 14 19:40:31 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 14 Aug 2006 16:40:31 -0700 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> Message-ID: <44E109EF.9040700@noaa.gov> Daran L. Rife wrote: > I tried your suggestion of installing and running the pre-built > packages at . 
I am > sorry to report that the pre-built MacPython and Numeric 24.2 > package did not work. I get the same "Segmentation Fault" that > I got when I built Python 2.4.3 and Numeric 24.2 from source. Darn. My few simple tests all work. If you can figure out which functions are failing, and make a small sample that fails, post it here and to the python-mac list. There are some smart folks there that might be able to help. > As a last resort, I may build ATLAS and LAPACK from source, > then build Numeric 23.8 against these, and try installing > this into MacPython. I hate having to try this, but I cannot > do any development without a functioning Python and Numeric. However, it might be easier to port to numpy that do all that. And you'll definitely get more help solving any problems you have with numpy. good luck. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From drife at ucar.edu Mon Aug 14 19:56:52 2006 From: drife at ucar.edu (Daran Rife) Date: Mon, 14 Aug 2006 17:56:52 -0600 Subject: [Numpy-discussion] Segmentation Fault with Numeric 24.2 on Mac OS X 10.4 Tiger (8.7.0) In-Reply-To: <44E109EF.9040700@noaa.gov> References: <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34102.64.17.89.52.1155188748.squirrel@imap.rap.ucar.edu> <34543.64.17.89.52.1155221927.squirrel@imap.rap.ucar.edu> <44DB5B3F.9080203@noaa.gov> <35522.64.17.89.52.1155275427.squirrel@imap.rap.ucar.edu> <44E109EF.9040700@noaa.gov> Message-ID: <44E10DC4.2040609@ucar.edu> Hi Chris, > Darn. My few simple tests all work. If you can figure out which > functions are failing, and make a small sample that fails, post it here > and to the python-mac list. There are some smart folks there that might > be able to help. I will try to do so, but like you, I think my time is better spent transitioning to Numpy. 
Incidentally, I am now using the MacPython distro--thanks for pointing me toward that. I also got Numeric 23.8 to work well with MacPython, including the optimized vecLib framework. I got the harebrained idea to try compiling and installing Numeric 23.8 using the setup.py and customize.py files from Numeric 24.x, since they seem to get the Apple veclib stuff compiled in properly, especially the optimized matrix math libs. The one tweak I had to make was in setup.py, where I pointed it to the new vecLib in: /System/Library/Frameworks/Accelerate.framework > However, it might be easier to port to numpy that do all that. And > you'll definitely get more help solving any problems you have with numpy. Agreed. I am looking forward to the first official release of numpy. In the meantime, I will experiment with the Beta version. Thanks again, Daran From haase at msg.ucsf.edu Mon Aug 14 20:26:33 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 17:26:33 -0700 Subject: [Numpy-discussion] How to share memory when bArr is smaller-sized than aArr Message-ID: <200608141726.33317.haase@msg.ucsf.edu> Hi, in numarray I could do this >>> import numarray as na >>> a = na.arange(10) >>> b = na.array(a._data, type=na.int32, shape=8) b would use the beginning part of a. This is actually important for inplace FFT (where in real-to-complex-fft the input has 2 "columns" more memory than the output) I found that in numpy there is no shape argument in array() at all anymore ! How can this be done with numpy ? 
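[Editor's note: one current possibility, sketched here for reference: numpy's ndarray constructor accepts a buffer argument, so the new array can reuse part of the old one's memory.]

```python
import numpy as np

a = np.arange(10)

# Reinterpret the first 8 elements of a's buffer as a new array;
# b shares a's memory rather than copying it.
b = np.ndarray(shape=(8,), dtype=a.dtype, buffer=a)

b[0] = 99     # writes through b ...
print(a[0])   # ... are visible in a: prints 99
```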
Thanks, Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 20:38:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 18:38:02 -0600 Subject: [Numpy-discussion] How to share memory when bArr is smaller-sized than aArr In-Reply-To: <200608141726.33317.haase@msg.ucsf.edu> References: <200608141726.33317.haase@msg.ucsf.edu> Message-ID: <44E1176A.4020005@ieee.org> Sebastian Haase wrote: > Hi, > in numarray I could do this > >>>> import numarray as na >>>> a = na.arange(10) >>>> b = na.array(a._data, type=na.int32, shape=8) >>>> > > b would use the beginning part of a. > > This is actually important for inplace FFT (where in real-to-complex-fft the > input has 2 "columns" more memory than the output) > > I found that in numpy there is no shape argument in array() at all anymore ! > > No, there is no shape argument anymore. But, the ndarray() constructor does have the shape argument and can be used in this way. so import numpy as na b = na.ndarray(buffer=a, dtype=na.int32, shape=9) should work. -Travis From haase at msg.ucsf.edu Mon Aug 14 21:02:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 18:02:21 -0700 Subject: [Numpy-discussion] please comment on scalar types Message-ID: <200608141802.21883.haase@msg.ucsf.edu> Hi! I have a record array with a field 'mode' Mode is a small integer that I use to choose a "PixelType" So I did: >>> print PixelTypes[ mode ] TypeError: tuple indices must be integers >>> pdb.pm() > /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() -> return PixelTypes[ mode ] (Pdb) p mode 1 (Pdb) p type(mode) (Pdb) p isinstance(mode, int) False Since numpy introduced special scalar types a simple statement like this doesn't work anymore ! Would it work if int32scalar was derived from int ? I actually thought it was ... Comments ? 
- Sebastian Haase From oliphant.travis at ieee.org Mon Aug 14 21:18:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 14 Aug 2006 19:18:04 -0600 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <200608141802.21883.haase@msg.ucsf.edu> References: <200608141802.21883.haase@msg.ucsf.edu> Message-ID: <44E120CC.8050400@ieee.org> Sebastian Haase wrote: > Hi! > I have a record array with a field 'mode' > Mode is a small integer that I use to choose a "PixelType" > So I did: > >>>> print PixelTypes[ mode ] >>>> > TypeError: tuple indices must be integers > >>>> pdb.pm() >>>> >> /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() >> > -> return PixelTypes[ mode ] > (Pdb) p mode > 1 > (Pdb) p type(mode) > > (Pdb) p isinstance(mode, int) > False > > Since numpy introduced special scalar types a simple statement like this > doesn't work anymore ! Would it work if int32scalar was derived from int ? I > actually thought it was ... > It does sub-class from int unless you are on a system where a c-long is 64-bit then int64scalar inherits from int. On my 32-bit system: isinstance(array([1,2,3])[0],int) is true. -Travis From haase at msg.ucsf.edu Mon Aug 14 22:40:49 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 14 Aug 2006 19:40:49 -0700 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <44E120CC.8050400@ieee.org> References: <200608141802.21883.haase@msg.ucsf.edu> <44E120CC.8050400@ieee.org> Message-ID: <44E13431.2040205@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> Hi! 
>> I have a record array with a field 'mode' >> Mode is a small integer that I use to choose a "PixelType" >> So I did: >> >>>>> print PixelTypes[ mode ] >>>>> >> TypeError: tuple indices must be integers >> >>>>> pdb.pm() >>>>> >>> /home/haase/PrLinN64/Priithon/Mrc.py(813)MrcMode2numType() >>> >> -> return PixelTypes[ mode ] >> (Pdb) p mode >> 1 >> (Pdb) p type(mode) >> >> (Pdb) p isinstance(mode, int) >> False >> >> Since numpy introduced special scalar types a simple statement like this >> doesn't work anymore ! Would it work if int32scalar was derived from int ? I >> actually thought it was ... >> > It does sub-class from int unless you are on a system where a c-long is > 64-bit then int64scalar inherits from int. > > On my 32-bit system: > > isinstance(array([1,2,3])[0],int) is true. > > > > -Travis I see - yes I forgot - that test was indeed run on 64bit Linux. And that automatically implies that there a 32bit-int cannot be used in place of a "normal python integer" !? I could see wanting to use int16 or event uint8 as a tuple index. Logically a small type would be save to use in place of a bigger one ... - Sebastian From oliphant at ee.byu.edu Mon Aug 14 23:13:37 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Aug 2006 21:13:37 -0600 Subject: [Numpy-discussion] please comment on scalar types In-Reply-To: <44E13431.2040205@msg.ucsf.edu> References: <200608141802.21883.haase@msg.ucsf.edu> <44E120CC.8050400@ieee.org> <44E13431.2040205@msg.ucsf.edu> Message-ID: <44E13BE1.7080607@ee.byu.edu> Sebastian Haase wrote: >Travis Oliphant wrote: > > >And that automatically implies that there a 32bit-int cannot be used in >place of a "normal python integer" !? >I could see wanting to use int16 or event uint8 as a tuple index. >Logically a small type would be save to use in place of a bigger one ... > > That is the purpose behind the __index__ attribute I added to Python 2.5 (see PEP 357). 
This allows all the scalar integers to be used in place of integers inside of Python. -Travis From fperez.net at gmail.com Tue Aug 15 00:06:29 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Aug 2006 22:06:29 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: <44E0BA8C.2070801@ieee.org> References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> <44E0BA8C.2070801@ieee.org> Message-ID: On 8/14/06, Travis Oliphant wrote: > Travis Oliphant wrote: > > However, you can use the ndarray creation function itself to do what you > > want: > > > > a = ndarray(shape=(2,2), dtype=int32, buffer=str, order='F') > > > > This will use the memory of the string as the new array memory. > > > Incidentally, the new array will be read-only. But, you can fix this in > two ways: > > 1) a.flags.writeable = True Sweet! We now finally have mutable strings for Python: In [2]: astr = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' In [4]: a = N.ndarray(shape=(2,2), dtype=N.int32, buffer=astr, order='F') In [5]: astr Out[5]: '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' In [6]: a.flags.writeable = True In [7]: a Out[7]: array([[0, 2], [1, 3]]) In [8]: a[0] = 1 In [9]: astr Out[9]: '\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00' Guido's going to kill you on Thursday, you know ;) f From strawman at astraw.com Tue Aug 15 01:37:22 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 14 Aug 2006 22:37:22 -0700 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E0D5FA.9040505@gmx.net> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com> <44E0D5FA.9040505@gmx.net> Message-ID: <44E15D92.1060805@astraw.com> Sven Schreiber wrote: > Hi, > > Satya Upadhya schrieb: > > >>>>> from Numeric import * >>>>> > > Well this list is about the numpy package, but anyway... > This list is for numpy, numarray, and Numeric. 
There's just a lot more numpy talk going on these days, but "numpy-discussion" comes from the bad old days where no one realized that allowing your software package to be called multiple things (Numeric, Numeric Python, numpy) might result in confusion years later. Cheers! Andrew From oliphant.travis at ieee.org Tue Aug 15 02:01:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 00:01:51 -0600 Subject: [Numpy-discussion] Creating and reshaping fortran order arrays In-Reply-To: References: <1e2af89e0608141023v5c9ee071yf6295b945ac1dbec@mail.gmail.com> <44E0B91C.8070807@ieee.org> <44E0BA8C.2070801@ieee.org> Message-ID: <44E1634F.3050201@ieee.org> Fernando Perez wrote: > Sweet! We now finally have mutable strings for Python: > > In [2]: astr = '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > In [4]: a = N.ndarray(shape=(2,2), dtype=N.int32, buffer=astr, order='F') > > In [5]: astr > Out[5]: '\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > In [6]: a.flags.writeable = True > > In [7]: a > Out[7]: > array([[0, 2], > [1, 3]]) > > In [8]: a[0] = 1 > > In [9]: astr > Out[9]: '\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00' > > > Guido's going to kill you on Thursday, you know ;) > Don't tell him ;-) But, if he had provided a suitable bytes type already (that was pickleable) we wouldn't need to do this :-) Notice it's not writeable by default, so at least you have to "know what you are doing" to shoot yourself in the foot. -Travis From pauli.virtanen at iki.fi Tue Aug 15 02:07:57 2006 From: pauli.virtanen at iki.fi (Pauli Virtanen) Date: Tue, 15 Aug 2006 09:07:57 +0300 Subject: [Numpy-discussion] Numpy 1.0b2 crash Message-ID: <200608150907.57881.pauli.virtanen@iki.fi> Hi all, The following code causes a segmentation fault in Numpy 1.0b2 and 1.0b1. 
import numpy as N v = N.array([1,2,3,4,5,6,7,8,9,10]) N.lexsort(v) Stack trace =========== $ gdb --args python crash.py GNU gdb 6.4-debian Copyright 2005 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i486-linux-gnu"...Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1". (gdb) run Starting program: /usr/bin/python crash.py [Thread debugging using libthread_db enabled] [New Thread -1209857824 (LWP 22827)] Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1209857824 (LWP 22827)] 0xb7d48f8d in PyArray_LexSort (sort_keys=0x81ed7e0, axis=) at arrayobject.c:8483 8483 arrayobject.c: No such file or directory. in arrayobject.c (gdb) bt #0 0xb7d48f8d in PyArray_LexSort (sort_keys=0x81ed7e0, axis=) at arrayobject.c:8483 #1 0xb7d49da5 in array_lexsort (ignored=0x0, args=0x822cb18, kwds=0x822cb18) at numpy/core/src/multiarraymodule.c:6271 #2 0x080b62c7 in PyEval_EvalFrame (f=0x8185c24) at ../Python/ceval.c:3563 #3 0x080b771f in PyEval_EvalCodeEx (co=0xb7e27ce0, globals=0xb7e08824, locals=0xb7e08824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2736 #4 0x080b7965 in PyEval_EvalCode (co=0x822cb18, globals=0x822cb18, locals=0x822cb18) at ../Python/ceval.c:484 #5 0x080d94cc in PyRun_FileExFlags (fp=0x813e008, filename=0xbfcc98f3 "crash.py", start=136497944, globals=0x822cb18, locals=0x822cb18, closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:1265 #6 0x080d976c in PyRun_SimpleFileExFlags (fp=, filename=0xbfcc98f3 "crash.py", closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:860 #7 0x08055b33 in Py_Main (argc=1, argv=0xbfcc9274) at ../Modules/main.c:493 #8 0xb7e45ea2 in __libc_start_main () from 
/lib/tls/i686/cmov/libc.so.6 #9 0x08054fa1 in _start () at ../sysdeps/i386/elf/start.S:119 (gdb) #0 0xb7d48f8d in PyArray_LexSort (sort_keys=0x81ed7e0, axis=) at arrayobject.c:8483 #1 0xb7d49da5 in array_lexsort (ignored=0x0, args=0x822cb18, kwds=0x822cb18) at numpy/core/src/multiarraymodule.c:6271 #2 0x080b62c7 in PyEval_EvalFrame (f=0x8185c24) at ../Python/ceval.c:3563 #3 0x080b771f in PyEval_EvalCodeEx (co=0xb7e27ce0, globals=0xb7e08824, locals=0xb7e08824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2736 #4 0x080b7965 in PyEval_EvalCode (co=0x822cb18, globals=0x822cb18, locals=0x822cb18) at ../Python/ceval.c:484 #5 0x080d94cc in PyRun_FileExFlags (fp=0x813e008, filename=0xbfcc98f3 "crash.py", start=136497944, globals=0x822cb18, locals=0x822cb18, closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:1265 #6 0x080d976c in PyRun_SimpleFileExFlags (fp=, filename=0xbfcc98f3 "crash.py", closeit=1, flags=0xbfcc91d4) at ../Python/pythonrun.c:860 #7 0x08055b33 in Py_Main (argc=1, argv=0xbfcc9274) at ../Modules/main.c:493 #8 0xb7e45ea2 in __libc_start_main () from /lib/tls/i686/cmov/libc.so.6 #9 0x08054fa1 in _start () at ../sysdeps/i386/elf/start.S:119 From drswalton at gmail.com Tue Aug 15 04:04:58 2006 From: drswalton at gmail.com (Stephen Walton) Date: Tue, 15 Aug 2006 01:04:58 -0700 Subject: [Numpy-discussion] site.cfg problems Message-ID: <693733870608150104q5fe24d5ag27eacdbd24780830@mail.gmail.com> Does site.cfg actually work? I ask because I want to test numpy (and soon scipy) against ATLAS 3.7.13. For simplicity I used the "make install" with that distribution, which puts the files in /usr/local/atlas/lib, /usr/local/atlas/include, and so on. 
No problem, so I created a site.cfg in the numpy root directory with [atlas] library_dirs = /usr/local/atlas/lib atlas_libs = lapack, blas, cblas, atlas include_dirs = /usr/local/atlas/include/ The numpy build did not find atlas; the output of "python setup.py build" shows no sign of even having checked the listed directory above for the libraries. Did I do something wrong? Should site.cfg be in numpy/numpy/distutils instead? From aisaac at american.edu Tue Aug 15 10:56:54 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 15 Aug 2006 10:56:54 -0400 Subject: [Numpy-discussion] Regarding Matrices In-Reply-To: <44E1093B.6040405@noaa.gov> References: <20060814194406.32239.qmail@web8508.mail.in.yahoo.com><44E0D5FA.9040505@gmx.net> <44E1093B.6040405@noaa.gov> Message-ID: > Torgil Svensson wrote: >> Shouldn't power() and the ** operator return the same result for matrixes? On Mon, 14 Aug 2006, Christopher Barker apparently wrote: > no, but the built-in pow() should -- does it? The "try it and see" approach says that it does. Cheers, Alan Isaac From elcorto at gmx.net Tue Aug 15 12:02:00 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 15 Aug 2006 18:02:00 +0200 Subject: [Numpy-discussion] test fails Message-ID: <44E1EFF8.9050100@gmx.net> The test in In [2]: numpy.__version__ Out[2]: '1.0b2.dev3007' fails: [...] check_1D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array2 (numpy.lib.tests.test_shape_base.test_vstack) ... 
ok ====================================================================== ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", line 120, in check_ascii a = fromstring('1 , 2 , 3 , 4',sep=',') ValueError: don't know how to read character strings for given array type ---------------------------------------------------------------------- Ran 476 tests in 1.291s FAILED (errors=1) -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From nwagner at iam.uni-stuttgart.de Tue Aug 15 12:06:10 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Aug 2006 18:06:10 +0200 Subject: [Numpy-discussion] test fails In-Reply-To: <44E1EFF8.9050100@gmx.net> References: <44E1EFF8.9050100@gmx.net> Message-ID: <44E1F0F2.3050704@iam.uni-stuttgart.de> Steve Schmerler wrote: > The test in > > In [2]: numpy.__version__ > Out[2]: '1.0b2.dev3007' > > fails: > > > [...] > check_1D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok > check_2D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok > check_2D_array2 (numpy.lib.tests.test_shape_base.test_vstack) ... 
ok > > ====================================================================== > ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", > line 120, in check_ascii > a = fromstring('1 , 2 , 3 , 4',sep=',') > ValueError: don't know how to read character strings for given array type > > ---------------------------------------------------------------------- > Ran 476 tests in 1.291s > > FAILED (errors=1) > > > I cannot reproduce it here Numpy version 1.0b3.dev3025 python /usr/lib64/python2.4/site-packages/numpy/core/tests/test_multiarray.py Found 153 tests for numpy.core.multiarray Found 0 tests for __main__ ......................................................................................................................................................... ---------------------------------------------------------------------- Ran 153 tests in 0.047s OK Nils From etc2103 at columbia.edu Tue Aug 15 13:27:06 2006 From: etc2103 at columbia.edu (Ethan T Coon) Date: Tue, 15 Aug 2006 13:27:06 -0400 (EDT) Subject: [Numpy-discussion] f2py --include_paths from command line Message-ID: Hi all, The following line: f2py -c -m _test --include_paths ./include test.f (where test.f contains the line " include 'test_inc.h' " and 'test_inc.h' exists in the directory './include' ) results in the errors: ------------------------------------------------------------------ running build running config_fc running build_src building extension "_test" sources f2py options: [] f2py:> /tmp/tmpJqhFcQ/src.linux-i686-2.4/_testmodule.c creating /tmp/tmpJqhFcQ creating /tmp/tmpJqhFcQ/src.linux-i686-2.4 Reading fortran codes... Reading file 'test.f' (format:fix,strict) Line #6 in test.f:" INCLUDE 'test_inc.h'" readfortrancode: could not find include file 'test_inc.h'. Ignoring. 
Post-processing... Block: _test Block: test In: :_test:test.f:test getarrlen:variable "n" undefined Post-processing (stage 2)... Building modules... Building module "_test"... Constructing wrapper function "test"... a = test() Wrote C/API module "_test" to file "/tmp/tmpJqhFcQ/src.linux-i686-2.4/_testmodule.c" adding '/tmp/tmpJqhFcQ/src.linux-i686-2.4/fortranobject.c' to sources. adding '/tmp/tmpJqhFcQ/src.linux-i686-2.4' to include_dirs. copying /packages/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpJqhFcQ/src.linux-i686-2.4 copying /packages/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpJqhFcQ/src.linux-i686-2.4 running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building '_test' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' error: unknown file type '' (from '--include_paths') --------------------------------------------------------------- Similar versions ( --include_paths=./include , --include_paths "./include" , --include_paths `pwd`/include ) fail similarly. Everything works fine from a distutils extension/setup call, but not from the command line. Thanks, Ethan ------------------------------------------- Ethan Coon DOE CSGF - Graduate Student Dept. Applied Physics & Applied Mathematics Columbia University 212-854-0415 http://www.ldeo.columbia.edu/~ecoon/ ------------------------------------------- From davidgrant at gmail.com Tue Aug 15 13:34:17 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 15 Aug 2006 10:34:17 -0700 Subject: [Numpy-discussion] scipy_distutils Message-ID: Where can I find the Extension module now? In the f2py documentation, the following import is used: from scipy_distutils.core import Extension but that doesn't work, and I read that this was moved into numpy along with f2py. 
I can't seem to find it anywhere. What's the current way of doing this? -- David Grant http://www.davidgrant.ca From robert.kern at gmail.com Tue Aug 15 14:01:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Aug 2006 11:01:27 -0700 Subject: [Numpy-discussion] scipy_distutils In-Reply-To: References: Message-ID: David Grant wrote: > Where can I find the Extension module now? In the f2py documentation, > the following import is used: > > from scipy_distutils.core import Extension > > but that doesn't work, and I read that this was moved into numpy along > with f2py. I can't seem to find it anywhere. What's the current way of > doing this? That documentation is no longer up-to-date wrt building. I don't think that Pearu has done a comprehensive update of that section. The best place to look for documentation is numpy/doc/DISTUTILS.txt . numpy itself and scipy provide excellent examples of use, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Tue Aug 15 14:50:36 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 15 Aug 2006 11:50:36 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() Message-ID: <200608151150.36554.haase@msg.ucsf.edu> Hi! numpy renamed the *function* abs to absolute. Most functions like mean, min, max, average, ... have an equivalent array *method*. Why is absolute left out ? I think it should be added . Furthermore, looking at some line of code that have multiple calls to absolute [ like f(absolute(a), absolute(b), absolute(c)) ] I think "some people" might prefer less typing and less reading, like f( a.abs(), b.abs(), c.abs() ). One could even consider not requiring the "function call" parenthesis '()' at all - but I don't know about further implications that might have. 
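[Editor's note: for what it's worth, the function spellings are already interchangeable; a quick sketch of the current behavior:]

```python
import numpy as np

a = np.array([-1.5, 2.0, -3.0])

# absolute is the elementwise ufunc; the builtin abs() dispatches to
# a.__abs__(), which invokes the same ufunc, so both agree.
print(np.absolute(a))
print(abs(a))

# np.abs is simply an alias for np.absolute.
print(np.abs is np.absolute)  # True
```

Since the builtin abs only adds one method dispatch before reaching the ufunc, any performance difference should be negligible.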
Thanks, Sebastian Haase PS: is there any performance hit in using the built-in abs function ? From drswalton at gmail.com Tue Aug 15 19:08:55 2006 From: drswalton at gmail.com (Stephen Walton) Date: Tue, 15 Aug 2006 16:08:55 -0700 Subject: [Numpy-discussion] f2py --include_paths from command line In-Reply-To: References: Message-ID: <693733870608151608w4cb133a1hc1156f8479ba8e4f@mail.gmail.com> On 8/15/06, Ethan T Coon wrote: > > Hi all, > > The following line: > > f2py -c -m _test --include_paths ./include test.f Typing f2py alone seems to indicate the syntax should be f2py -I./include [other args] test.f I tried this and it seems to work here. From mmt at cs.ubc.ca Tue Aug 15 21:34:06 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 18:34:06 -0700 Subject: [Numpy-discussion] numpy 1.0b2 problems Message-ID: Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From davidgrant at gmail.com Tue Aug 15 22:06:51 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 15 Aug 2006 19:06:51 -0700 Subject: [Numpy-discussion] some work on arpack Message-ID: Building an arpack extension turned out to be surprisingly simple. For example for dsaupd: f2py -c dsaupd.f -m dsaupd -L/usr/lib/blas/atlas:/usr/lib/lapack/atlas -llapack -lblas -larpack It took me a long time to get the command down to something that simple. Took me a while even to figure out I could just use the arpack library on my computer rather than re-linking all of arpack! I was able to import the dsaupd.so python module just fine and I was also able to call it just fine. I'll have to tweak the pyf file in order to get some proper output. But this gives me confidence that arpack is easy to hook into which is what others have said in the past, but without any experience with f2py I had no idea myself. 
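[Editor's note: a wrapped dsaupd can be sanity-checked against a dense computation; below is a hypothetical numpy-only reference, where eigs_sym is a made-up helper, not ARPACK, and only suitable for small dense symmetric input:]

```python
import numpy as np

def eigs_sym(A, k):
    """Return the k largest-magnitude eigenpairs of a dense symmetric
    matrix; a brute-force stand-in for what an ARPACK dsaupd/dseupd
    wrapper would compute iteratively on large sparse input."""
    w, v = np.linalg.eigh(A)
    order = np.argsort(np.abs(w))[::-1][:k]
    return w[order], v[:, order]

A = np.diag([1.0, 5.0, 3.0])
w, v = eigs_sym(A, 2)
print(np.sort(w))  # the two largest eigenvalues, 3 and 5
```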
f2py is awesome, for anyone who doesn't know. Matlab has interfaces for the arpack functions like dsaupd, dseupd, dnaupd, znaupd, zneupd (the mex file documentation claims those are the only ones, but they have more). Matlab has a C interface to these functions in arpackc.mex* and the script eigs.m does the grunt work, providing a very high-level interface as well as doing some linear algebra (the same type of stuff that is done in arpack's examples directory I gather) and various other things. My idea is (if I have time) to write an eigs-like function in python that will only perform a subset of what Matlab's eigs does. It will, for example, compute a certain number of eigenvalues and eigenvectors for a real, sparse, symmetric matrix (the case I'm interested in)... I hope that this subset-of-matlab's-eigs function will not be too hard to write. Then more functionality can be added on to eigs.py later... Does this make sense? Has anyone else started work on arpack integration at all? -- David Grant http://www.davidgrant.ca From mmt at cs.ubc.ca Tue Aug 15 22:12:49 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 19:12:49 -0700 Subject: [Numpy-discussion] Numpy 1.0b2 install issues Message-ID: <84E05761-5BA0-4098-A408-7D3D42C8D91C@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler This pretty much borks everything. I have to remove it before I can try to install other packages and stuff. Any thoughts? Thanks Matt From mmt at cs.ubc.ca Tue Aug 15 22:51:27 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 19:51:27 -0700 Subject: [Numpy-discussion] numpy 1.0b2 problems Message-ID: <560C253F-DA6F-4BAB-8F13-28AD0800F4FC@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? 
File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Once installed, it messes up trying to install anything else, so I have to move it out of the way in the short term. Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From mmt at cs.ubc.ca Tue Aug 15 23:19:47 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 20:19:47 -0700 Subject: [Numpy-discussion] Fwd: numpy 1.0b2 problems References: <560C253F-DA6F-4BAB-8F13-28AD0800F4FC@cs.ubc.ca> Message-ID: <86D6645D-0257-401E-98B7-3AC77623398B@cs.ubc.ca> Hey. I'm trying to get numpy up and running on SuSE 10.1 and not having much luck. I've been working with 1.0b2 and can get it to install without any errors, but can't do anything with it. I run a local install of python 2.4.3 just to keep out of whatever weirdness gets installed on my machine by our sysadmins. 
Pretty standard fare, untar the ball, and './setup.py install --prefix=$HOME/local' It will complete that without issue, but when I try to run the test, I get: Python 2.4.3 (#1, Aug 15 2006, 18:09:56) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(1) Traceback (most recent call last): File "", line 1, in ? File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ __init__.py", line 77, in test return NumpyTest().test(level, verbosity) File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ numpytest.py", line 285, in __init__ from numpy.distutils.misc_util import get_frame File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/__init__.py", line 5, in ? import ccompiler File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ distutils/ccompiler.py", line 7, in ? from distutils import ccompiler ImportError: cannot import name ccompiler Once installed, it messes up trying to install anything else, so I have to move it out of the way in the short term. Any thoughts? Thanks Matt [ matthew m trentacoste mmt at cs.ubc.ca ] [ ] [ graduate student lead software developer ] [ university of british columbia brightside technologies ] [ http://www.cs.ubc.ca/~mmt http://brightsidetech.com ] [ +1 (604) 827-3979 +1 (604) 228-4624 ] From oliphant.travis at ieee.org Wed Aug 16 00:18:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 22:18:04 -0600 Subject: [Numpy-discussion] numpy 1.0b2 problems In-Reply-To: References: Message-ID: <44E29C7C.2050509@ieee.org> Matthew Trentacoste wrote: > Hey. I'm trying to get numpy up and running on SuSE 10.1 and not > having much luck. 
> > I've been working with 1.0b2 and can get it to install without any > errors, but can't do anything with it. I run a local install of > python 2.4.3 just to keep out of whatever weirdness gets installed on > my machine by our sysadmins. Pretty standard fare, untar the ball, > and './setup.py install --prefix=$HOME/local' > Do you need to specify --prefix if you've already got Python installed somewhere? Are you missing it. From oliphant.travis at ieee.org Wed Aug 16 00:19:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 15 Aug 2006 22:19:29 -0600 Subject: [Numpy-discussion] numpy 1.0b2 problems In-Reply-To: References: Message-ID: <44E29CD1.4090509@ieee.org> Matthew Trentacoste wrote: > Hey. I'm trying to get numpy up and running on SuSE 10.1 and not > having much luck. > > I've been working with 1.0b2 and can get it to install without any > errors, but can't do anything with it. I run a local install of > python 2.4.3 just to keep out of whatever weirdness gets installed on > my machine by our sysadmins. Pretty standard fare, untar the ball, > and './setup.py install --prefix=$HOME/local' > > It will complete that without issue, but when I try to run the test, > I get: > > Python 2.4.3 (#1, Aug 15 2006, 18:09:56) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> numpy.test(1) > Traceback (most recent call last): > File "", line 1, in ? > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > __init__.py", line 77, in test > return NumpyTest().test(level, verbosity) > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/testing/ > numpytest.py", line 285, in __init__ > from numpy.distutils.misc_util import get_frame > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/__init__.py", line 5, in ? > import ccompiler > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/ccompiler.py", line 6, in ? 
> from distutils.ccompiler import * > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/__init__.py", line 5, in ? > import ccompiler > File "/home/m/mmt/local/lib/python2.4/site-packages/numpy/ > distutils/ccompiler.py", line 7, in ? > from distutils import ccompiler > ImportError: cannot import name ccompiler > > This seems to be a path issue. Can you give us import sys print sys.path -Travis From mmt at cs.ubc.ca Wed Aug 16 02:58:53 2006 From: mmt at cs.ubc.ca (Matthew Trentacoste) Date: Tue, 15 Aug 2006 23:58:53 -0700 Subject: [Numpy-discussion] Numpy-discussion Digest, Vol 3, Issue 42 In-Reply-To: References: Message-ID: For starters, wow. I'm sorry. I didn't mean to spam my problem 5 times. My mail server decided to fritz out today and I thought it was Sourceforge rejecting my emails since they didn't originate from the address I'm registered as. My apologies. > Do you need to specify --prefix if you've already got Python installed > somewhere? > > Are you missing it. I tried it again without setting it. No more luck. > This seems to be a path issue. Can you give us > > import sys > print sys.path [ '', '/home/m/mmt/local/lib/python2.4/site-packages', '/home/m/mmt/local/lib/python2.4/site-packages/PIL', '/home/m/mmt/local/lib/python2.4/site-packages/numpy', '/grads/mmt/local/lib/python24.zip', '/grads/mmt/local/lib/python2.4', '/grads/mmt/local/lib/python2.4/plat-linux2', '/grads/mmt/local/lib/python2.4/lib-tk', '/grads/mmt/local/lib/python2.4/lib-dynload', '/grads/mmt/local/lib/python2.4/site-packages', '/grads/mmt/local/lib/python2.4/site-packages/PIL' ] The top 3 are added to my python path by myself, the rest are included by default. FYI: /grads/mmt and /home/m/mmt map to the same directory. Sorry again about the repeat emails.
Matt From kwgoodman at gmail.com Wed Aug 16 09:45:14 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 16 Aug 2006 06:45:14 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/15/06, David Grant wrote: > My idea is (if I have time) to write an eigs-like function in python > that will only perform a subset of what Matlab's eigs does for. It > will, for example, compute a certain number of eigenvalues and > eigenvectors for a real, sparse, symmetric matrix (the case I'm > interested in) Will it also work for a real, dense, symmetric matrix? That's the case I'm interested in. But even if it doesn't, your work is great news for numpy. From nwagner at iam.uni-stuttgart.de Wed Aug 16 10:14:30 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 16:14:30 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: <44E32846.5000508@iam.uni-stuttgart.de> Keith Goodman wrote: > On 8/15/06, David Grant wrote: > > >> My idea is (if I have time) to write an eigs-like function in python >> that will only perform a subset of what Matlab's eigs does for. It >> will, for example, compute a certain number of eigenvalues and >> eigenvectors for a real, sparse, symmetric matrix (the case I'm >> interested in) >> > > AFAIK, pysparse (in the sandbox) includes a module that implements a Jacobi-Davidson eigenvalue solver for the symmetric, generalised matrix eigenvalue problem (JDSYM). Did someone test pysparse ? Nils > Will it also work for a real, dense, symmetric matrix? That's the case > I'm interested in. But even if it doesn't, your work is great news for > numpy. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From david.huard at gmail.com Wed Aug 16 10:16:59 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 16 Aug 2006 10:16:59 -0400 Subject: [Numpy-discussion] array equivalent to string.split(sep) Message-ID: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> Hi, I have a time series that I want to split into contiguous groups differentiated by a condition. I didn't find a vectorized way to that, so I ended up doing a for loop... I know there are split functions that split arrays into equal lengths subarrays, but is there a swell trick to return a sequence of arrays separated by a condition ? For instance, I would like to do something like: >>> a = array([1,1,1,1,1,5,1,1,1,1,1,1,6,2,1,1]) >>> a.argsplit(a>1) [[0,1,2,3,4], [6,7,8,9,10,11], [14,15]] Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Aug 16 10:28:35 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 16:28:35 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: <44E32846.5000508@iam.uni-stuttgart.de> References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: <44E32B93.7020609@iam.uni-stuttgart.de> Nils Wagner wrote: > Keith Goodman wrote: > >> On 8/15/06, David Grant wrote: >> >> >> >>> My idea is (if I have time) to write an eigs-like function in python >>> that will only perform a subset of what Matlab's eigs does for. 
It >>> will, for example, compute a certain number of eigenvalues and >>> eigenvectors for a real, sparse, symmetric matrix (the case I'm >>> interested in) >>> >>> >> >> > AFAIK, pysparse (in the sandbox) includes a module that implements a > Jacobi-Davidson > eigenvalue solver for the symmetric, generalised matrix eigenvalue > problem (JDSYM). > Did someone test pysparse ? > > Nils > > >> Will it also work for a real, dense, symmetric matrix? That's the case >> I'm interested in. But even if it doesn't, your work is great news for >> numpy. >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Ok it's not ready... gcc: Lib/sandbox/pysparse/src/spmatrixmodule.c In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:17: Lib/sandbox/pysparse/src/ll_mat.c: In function ?LLMat_matvec_transp?: Lib/sandbox/pysparse/src/ll_mat.c:760: error: ?CONTIGUOUS? 
undeclared (first use in this function) Lib/sandbox/pysparse/src/ll_mat.c:760: error: (Each undeclared identifier is reported only once Lib/sandbox/pysparse/src/ll_mat.c:760: error: for each function it appears in.) Lib/sandbox/pysparse/src/ll_mat.c: In function ?LLMat_matvec?: Lib/sandbox/pysparse/src/ll_mat.c:797: error: ?CONTIGUOUS? undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:18: Lib/sandbox/pysparse/src/csr_mat.c: In function ?CSRMat_matvec_transp?: Lib/sandbox/pysparse/src/csr_mat.c:119: error: ?CONTIGUOUS? undeclared (first use in this function) Lib/sandbox/pysparse/src/csr_mat.c: In function ?CSRMat_matvec?: Lib/sandbox/pysparse/src/csr_mat.c:146: error: ?CONTIGUOUS? undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:19: Lib/sandbox/pysparse/src/sss_mat.c: In function ?SSSMat_matvec?: Lib/sandbox/pysparse/src/sss_mat.c:83: error: ?CONTIGUOUS? undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:17: Lib/sandbox/pysparse/src/ll_mat.c: In function ?LLMat_matvec_transp?: Lib/sandbox/pysparse/src/ll_mat.c:760: error: ?CONTIGUOUS? undeclared (first use in this function) Lib/sandbox/pysparse/src/ll_mat.c:760: error: (Each undeclared identifier is reported only once Lib/sandbox/pysparse/src/ll_mat.c:760: error: for each function it appears in.) Lib/sandbox/pysparse/src/ll_mat.c: In function ?LLMat_matvec?: Lib/sandbox/pysparse/src/ll_mat.c:797: error: ?CONTIGUOUS? undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:18: Lib/sandbox/pysparse/src/csr_mat.c: In function ?CSRMat_matvec_transp?: Lib/sandbox/pysparse/src/csr_mat.c:119: error: ?CONTIGUOUS? undeclared (first use in this function) Lib/sandbox/pysparse/src/csr_mat.c: In function ?CSRMat_matvec?: Lib/sandbox/pysparse/src/csr_mat.c:146: error: ?CONTIGUOUS? 
undeclared (first use in this function) In file included from Lib/sandbox/pysparse/src/spmatrixmodule.c:19: Lib/sandbox/pysparse/src/sss_mat.c: In function ?SSSMat_matvec?: Lib/sandbox/pysparse/src/sss_mat.c:83: error: ?CONTIGUOUS? undeclared (first use in this function) error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -ILib/sandbox/pysparse/include/ -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/sandbox/pysparse/src/spmatrixmodule.c -o build/temp.linux-x86_64-2.4/Lib/sandbox/pysparse/src/spmatrixmodule.o" failed with exit status 1 Nils From davidgrant at gmail.com Wed Aug 16 11:10:35 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 08:10:35 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/16/06, Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > My idea is (if I have time) to write an eigs-like function in python > > that will only perform a subset of what Matlab's eigs does for. It > > will, for example, compute a certain number of eigenvalues and > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > interested in) > > Will it also work for a real, dense, symmetric matrix? That's the case > I'm interested in. But even if it doesn't, your work is great news for > numpy. > Real, dense, symmetric, well doesn't scipy already have something for this? I'm honestly not sure on the arpack side of things, I thought arpack was only useful (over other tools) for sparse matrices, I could be wrong. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fullung at gmail.com Wed Aug 16 11:23:05 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 16 Aug 2006 17:23:05 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: Message-ID: Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of David Grant > Sent: 16 August 2006 17:11 > To: Discussion of Numerical Python > Subject: Re: [Numpy-discussion] some work on arpack > > > > On 8/16/06, Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > My idea is (if I have time) to write an eigs-like function in > python > > that will only perform a subset of what Matlab's eigs does for. It > > will, for example, compute a certain number of eigenvalues and > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > interested in) > > Will it also work for a real, dense, symmetric matrix? That's the > case > I'm interested in. But even if it doesn't, your work is great news > for > numpy. > > Real, dense, symmetric, well doesn't scipy already have something for > this? I'm honestly not sure on the arpack side of things, I thought arpack > was only useful (over other tools) for sparse matrices, I could be wrong. Maybe SciPy can also do this, but what makes ARPACK useful is that it can get you a few eigenvalues and eigenvectors of a massive matrix without having to have the whole thing in memory. Instead, you provide ARPACK with a function that does A*x on your matrix. ARPACK passes a few x's to your function and a few eigenvalues and eigenvectors fall out. I recently used MATLAB's eigs to do exactly this. I had a dense matrix A with dimensions m x n, where m >> n. I wanted the eigenvalues of A'A (which has dimensions m x m, which is too large to keep in memory). But I could keep A and A' in memory, so I could quickly calculate A'A*x, which is what ARPACK needs.
Cheers, Albert From fullung at gmail.com Wed Aug 16 11:29:51 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 16 Aug 2006 17:29:51 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: Message-ID: Argh... > I recently used MATLAB's eigs to do exactly this. I had a dense matrix A > with dimensions m x n, where m >> n. I wanted the eigenvalues of A'A > (which > has dimensions m x m, which is too large to keep in memory). But I could Make that AA'. Cheers, Albert From aisaac at american.edu Wed Aug 16 11:13:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Aug 2006 11:13:03 -0400 Subject: [Numpy-discussion] array equivalent to string.split(sep) In-Reply-To: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> References: <91cf711d0608160716p5d52c18fr1f68297fdcbee6f3@mail.gmail.com> Message-ID: On Wed, 16 Aug 2006, David Huard apparently wrote: > I have a time series that I want to split into contiguous > groups differentiated by a condition. Perhaps itertools.groupby()? fwiw, Alan Isaac From davidgrant at gmail.com Wed Aug 16 12:26:07 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 09:26:07 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: Message-ID: On 8/16/06, Albert Strasheim wrote: > > Hello all > > > -----Original Message----- > > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > > discussion-bounces at lists.sourceforge.net] On Behalf Of David Grant > > Sent: 16 August 2006 17:11 > > To: Discussion of Numerical Python > > Subject: Re: [Numpy-discussion] some work on arpack > > > > > > > > On 8/16/06, Keith Goodman wrote: > > > > On 8/15/06, David Grant wrote: > > > > > My idea is (if I have time) to write an eigs-like function in > > python > > > that will only perform a subset of what Matlab's eigs does for. 
> It > > > will, for example, compute a certain number of eigenvalues and > > > eigenvectors for a real, sparse, symmetric matrix (the case I'm > > > interested in) > > > > Will it also work for a real, dense, symmetric matrix? That's the > > case > > I'm interested in. But even if it doesn't, your work is great news > > for > > numpy. > > > > Real, dense, symmetric, well doesn't scipy already have something for > > this? I'm honestly not sure on the arpack side of things, I thought > arpack > > was only useful (over other tools) for sparse matrices, I could be > wrong. > > Maybe SciPy can also do this, but what makes ARPACK useful is that it can > get you a few eigenvalues and eigenvectors of a massive matrix without > having to have the whole thing in memory. Instead, you provide ARPACK with > a > function that does A*x on your matrix. ARPACK passes a few x's to your > function and a few eigenvalues and eigenvectors fall out. Cool, thanks for the info. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Wed Aug 16 11:08:16 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 08:08:16 -0700 Subject: [Numpy-discussion] some work on arpack In-Reply-To: <44E32846.5000508@iam.uni-stuttgart.de> References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: On 8/16/06, Nils Wagner wrote: > > Keith Goodman wrote: > > On 8/15/06, David Grant wrote: > > > > > >> My idea is (if I have time) to write an eigs-like function in python > >> that will only perform a subset of what Matlab's eigs does for. It > >> will, for example, compute a certain number of eigenvalues and > >> eigenvectors for a real, sparse, symmetric matrix (the case I'm > >> interested in) > >> > > > > > AFAIK, pysparse (in the sandbox) includes a module that implements a > Jacobi-Davidson > eigenvalue solver for the symmetric, generalised matrix eigenvalue > problem (JDSYM). 
> Did someone test pysparse ? > > I did try pysparse a few years ago (I think right before sparse stuff came into scipy). I think there is probably an old post asking the list about sparse stuff and I think Travis had just written it and told me about it... can't remember. Can JDSYM just return the k lowest eigenvalues/eigenvectors? -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Aug 16 12:50:05 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Aug 2006 18:50:05 +0200 Subject: [Numpy-discussion] some work on arpack In-Reply-To: References: <44E32846.5000508@iam.uni-stuttgart.de> Message-ID: On Wed, 16 Aug 2006 08:08:16 -0700 "David Grant" wrote: > On 8/16/06, Nils Wagner >wrote: >> >> Keith Goodman wrote: >> > On 8/15/06, David Grant wrote: >> > >> > >> >> My idea is (if I have time) to write an eigs-like >>function in python >> >> that will only perform a subset of what Matlab's eigs >>does for. It >> >> will, for example, compute a certain number of >>eigenvalues and >> >> eigenvectors for a real, sparse, symmetric matrix >>(the case I'm >> >> interested in) >> >> >> > >> > >> AFAIK, pysparse (in the sandbox) includes a module that >>implements a >> Jacobi-Davidson >> eigenvalue solver for the symmetric, generalised matrix >>eigenvalue >> problem (JDSYM). >> Did someone test pysparse ? >> >> I did try pysparse a few years ago (I think right before >>sparse stuff came > into scipy). I think there is probably an old post >asking the list about > sparse stuff and I think Travis had just written it and >told me about it... > can't remember. Can JDSYM just return the k lowest >eigenvalues/eigenvectors? > > -- > David Grant > http://www.davidgrant.ca Yes. See http://people.web.psi.ch/geus/pyfemax/pysparse_examples.html for details. 
Nils From davidgrant at gmail.com Wed Aug 16 14:45:27 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 16 Aug 2006 11:45:27 -0700 Subject: [Numpy-discussion] log can't handle big ints Message-ID: I am using numpy-0.9.8 and it seems that numpy's log2 function can't handle large integers? In [19]: a=11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 In [20]: math.log(a,2) Out[20]: 292.48167544353294 In [21]: numpy.log2(a) --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/david/ /usr/lib/python2.4/site-packages/numpy/lib/ufunclike.py in log2(x, y) 52 x = asarray(x) 53 if y is None: ---> 54 y = umath.log(x) 55 else: 56 umath.log(x, y) AttributeError: 'long' object has no attribute 'log' Does anyone else get this in numpy? if not, what version are you using? -- David Grant http://www.davidgrant.ca From elijah.gregory at gmail.com Wed Aug 16 15:15:51 2006 From: elijah.gregory at gmail.com (Elijah Gregory) Date: Wed, 16 Aug 2006 12:15:51 -0700 Subject: [Numpy-discussion] Installation and Uninstallation Message-ID: Dear NumPy Users, I am attempting to install numpy-0.9.8 as a user on unix system. When I install numpy by typing "python setup.py install" as per the (only) instructions in the README.txt file everything proceeds smoothly until some point where the script attempts to write a file to the root-level /usr/lib64. How can I configure the setup.py script to use my user-level directories which I do have access to? Also, given that the install exited with an error, how do I clean up the aborted installation? Thank you for your help, regards, Elijah Gregory -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bhendrix at enthought.com Wed Aug 16 15:18:22 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 16 Aug 2006 14:18:22 -0500 Subject: [Numpy-discussion] Installation and Uninstallation In-Reply-To: References: Message-ID: <44E36F7E.1080307@enthought.com> python setup.py install --prefix=your_path You shouldn't have to clean up the previous install; if it got to the point where it was copying files, the first one would have failed. Next time you run setup.py with the --prefix option, it will pick up where the previous install left off. Bryce Elijah Gregory wrote: > Dear NumPy Users, > > I am attempting to install numpy-0.9.8 as a user on unix system. > When I install numpy by typing "python setup.py install" as per the > (only) instructions in the README.txt file everything proceeds > smoothly until some point where the script attempts to write a file to > the root-level /usr/lib64. How can I configure the setup.py script to > use my user-level directories which I do have access to? Also, given > that the install exited with an error, how do I clean up the aborted > installation? Thank you for your help, > > regards, > > Elijah Gregory > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant.travis at ieee.org Wed Aug 16 15:20:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 16 Aug 2006 12:20:43 -0700 Subject: [Numpy-discussion] log can't handle big ints In-Reply-To: References: Message-ID: <44E3700B.1060802@ieee.org> David Grant wrote: > I am using numpy-0.9.8 and it seems that numpy's log2 function can't > handle large integers? > > In [19]: a=11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 > > In [20]: math.log(a,2) > Out[20]: 292.48167544353294 > > In [21]: numpy.log2(a) > Ufuncs on objects (like the long object) work by looking for the corresponding method. It's not found for long objects. Convert the long object to a float first. I'm not sure of any other way to "fix" it. I suppose if no method is found an attempt to convert them to floats could be performed under the covers on all object array inputs. -Travis From kortmann at ideaworks.com Wed Aug 16 16:54:35 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Wed, 16 Aug 2006 13:54:35 -0700 (PDT) Subject: [Numpy-discussion] numpy.linalg.linalg.LinAlgError: Singular matrix Message-ID: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> all of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and sxt3 are all floats. A = array([[N, st, st2, st3],[st, st2, st3, st4], [st2, st3, st4, st5], [st3, st4, st5, st6]]) B = array ([sx, sxt, sxt2, sxt3]) lina = linalg.solve(A, B) is there something wrong with this code? it is returning File "C:\PYTHON23\Lib\site-packages\numpy\linalg\linalg.py", line 138, in solve raise LinAlgError, 'Singular matrix' numpy.linalg.linalg.LinAlgError: Singular matrix Does anyone know what I am doing wrong? 
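As an aside to Kenny's question: the 4x4 matrix here is the moment matrix of a cubic least-squares fit, and such matrices become singular when the underlying data points are degenerate. A minimal sketch (hypothetical values, not Kenny's actual data) that reproduces and diagnoses the same error:

```python
import numpy as np

# Hypothetical data: when all abscissae t are equal, the Vandermonde/moment
# matrix has identical rows, so it is rank-deficient and linalg.solve fails.
t = np.array([2.0, 2.0, 2.0, 2.0])
A = np.vander(t)                      # 4x4 matrix with identical rows

print(np.linalg.det(A))               # 0.0 for this exactly singular matrix
try:
    np.linalg.solve(A, np.ones(4))
except np.linalg.LinAlgError as err:
    print("solve failed:", err)       # the same "Singular matrix" error
```

Checking `det` (or `numpy.linalg.cond`) before solving is a quick way to tell whether the error is in the data rather than in the call.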
-Kenny From oliphant.travis at ieee.org Wed Aug 16 17:10:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 16 Aug 2006 14:10:17 -0700 Subject: [Numpy-discussion] Installation and Uninstallation In-Reply-To: References: Message-ID: <44E389B9.1030905@ieee.org> Elijah Gregory wrote: > Dear NumPy Users, > > I am attempting to install numpy-0.9.8 as a user on unix system. > When I install numpy by typing "python setup.py install" as per the > (only) instructions in the README.txt file everything proceeds > smoothly until some point where the script attempts to write a file to > the root-level /usr/lib64. How can I configure the setup.py script to > use my user-level directories which I do have access to? Also, given > that the install exited with an error, how do I clean up the aborted > installation? Is there a particular reason you are installing numpy-0.9.8? Please use the latest version as 0.9.8 is a pre-beta release. -Travis From yatimameiji at gmail.com Wed Aug 16 19:29:11 2006 From: yatimameiji at gmail.com (Yatima Meiji) Date: Wed, 16 Aug 2006 18:29:11 -0500 Subject: [Numpy-discussion] Atempt to build numpy-1.0b2 fail on distutils.ccompiler Message-ID: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> I'm currently running a fresh install of Suse 10.1. I ran the numpy setup script using "python setup.py install" and it fails with this error: Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 89, in ? setup_package() File "setup.py", line 59, in setup_package from numpy.distutils.core import setup File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ? import ccompiler File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ? from distutils.ccompiler import * ImportError: No module named distutils.ccompiler I checked ccompiler.py to see what was wrong. I'm not much of a programmer, but it seems strange to have ccompiler.py reference itself. 
I'm guessing others have compiled numpy just fine, so what's wrong with me? Thanks in advance.

-- "Physics is like sex: sure, it may give some practical results, but that's not why we do it." -- Richard P. Feynman
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Wed Aug 16 19:33:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Aug 2006 16:33:43 -0700 Subject: [Numpy-discussion] Atempt to build numpy-1.0b2 fail on distutils.ccompiler In-Reply-To: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> References: <877dd2d00608161629t71f98125m913165f6693ab41f@mail.gmail.com> Message-ID:

Yatima Meiji wrote:
> I'm currently running a fresh install of Suse 10.1. I ran the numpy
> setup script using "python setup.py install" and it fails with this error:
>
> Running from numpy source directory.
> Traceback (most recent call last):
> File "setup.py", line 89, in ?
> setup_package()
> File "setup.py", line 59, in setup_package
> from numpy.distutils.core import setup
> File "/home/xxx/numpy-1.0b2/numpy/distutils/__init__.py", line 5, in ?
> import ccompiler
> File "/home/xxx/numpy-1.0b2/numpy/distutils/ccompiler.py", line 6, in ?
> from distutils.ccompiler import *
> ImportError: No module named distutils.ccompiler
>
> I checked ccompiler.py to see what was wrong. I'm not much of a
> programmer, but it seems strange to have ccompiler.py reference itself.

It's not; it's trying to import from the standard library's distutils.ccompiler module. Suse, like several other Linux distributions, separates distutils from the rest of the standard library in a separate package which you will need to install. It will be called something like python-dev or python-devel.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From drswalton at gmail.com Wed Aug 16 19:51:24 2006 From: drswalton at gmail.com (Stephen Walton) Date: Wed, 16 Aug 2006 16:51:24 -0700 Subject: [Numpy-discussion] numpy.linalg.linalg.LinAlgError: Singular matrix In-Reply-To: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> References: <1377.12.216.231.149.1155761675.squirrel@webmail.ideaworks.com> Message-ID: <693733870608161651j77732739w6a90e449bf6670b2@mail.gmail.com> On 8/16/06, kortmann at ideaworks.com wrote: > > all of the variables n, st, st2, st3, st4, st5, st6, sx, sxt, sxt2, and > sxt3 are all floats. > > > A = array([[N, st, st2, st3],[st, st2, st3, st4], [st2, st3, st4, st5], > [st3, st4, st5, st6]]) > B = array ([sx, sxt, sxt2, sxt3]) > lina = linalg.solve(A, B) Is your matrix A in fact singular? Without numerical values of A, st, etc., it is hard to know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Thu Aug 17 08:23:03 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 17 Aug 2006 08:23:03 -0400 Subject: [Numpy-discussion] numpy.bool8 Message-ID: <44E45FA7.7080209@stsci.edu> What happened to numpy.bool8? I realize that bool_ is just as good. I was just wondering what motivated the change? Chris From aisaac at american.edu Thu Aug 17 12:37:47 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 17 Aug 2006 12:37:47 -0400 Subject: [Numpy-discussion] how to reference Numerical Python in a scientific publication In-Reply-To: References: <44DAB7E1.8090108@msg.ucsf.edu> Message-ID: In BibTeX format. fwiw, Alan Isaac @MANUAL{Oliphant:2006, author = {Oliphant, Travis E.}, year = 2006, title = {Guide to NumPy}, month = mar, address = {Provo, UT}, institution = {Brigham Young University} } @ARTICLE{Dubois+etal:1996, author = {Dubois, Paul F. 
and Konrad Hinsen and James Hugunin},
  year = {1996},
  title = {Numerical Python},
  journal = {Computers in Physics},
  volume = 10,
  number = 3,
  month = {May/June}
}
@ARTICLE{Dubois:1999,
  author = {Dubois, Paul F.},
  year = 1999,
  title = {Extending Python with Fortran},
  journal = {Computing in Science and Engineering},
  volume = 1,
  number = 5,
  month = {Sep/Oct},
  pages = {66--73}
}
@ARTICLE{Scherer+etal:2000,
  author = {Scherer, David and Paul Dubois and Bruce Sherwood},
  year = 2000,
  title = {VPython: 3D Interactive Scientific Graphics for Students},
  journal = {Computing in Science and Engineering},
  volume = 2,
  number = 5,
  month = {Sep/Oct},
  pages = {56--62}
}
@MANUAL{Ascher+etal:1999,
  author = {Ascher, David and Paul F. Dubois and Konrad Hinsen and James Hugunin and Travis Oliphant},
  year = 1999,
  title = {Numerical Python},
  edition = {UCRL-MA-128569},
  address = {Livermore, CA},
  organization = {Lawrence Livermore National Laboratory}
}

From christopher.e.kees at erdc.usace.army.mil Thu Aug 17 13:01:13 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Thu, 17 Aug 2006 12:01:13 -0500 Subject: [Numpy-discussion] convertcode.py Message-ID: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil>

Hi, I just ran convertcode.py on my code (from the latest svn source of numpy) and it looks like it just changed the import statements to import numpy.oldnumeric as Numeric So it doesn't look like it's really helping me move over to the new usage. Is there a script that will convert code to use the new numpy as it's intended to be used? Thanks, Chris

From davidgrant at gmail.com Thu Aug 17 15:48:35 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 12:48:35 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 Message-ID:

I'm contemplating upgrading to 1.0b2. The main reason is that I am experiencing a major memory leak and before I report a bug I think the developers would appreciate it if I were using the most recent version. Am I correct in that the only major change that might actually break my code is that the following functions: take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, amin, prod, cumprod, mean, std, var now have axis=None as argument? BTW, how come alter_code2.py ( http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) says in the docstring that it "converts functions that don't give axis= keyword that have changed" but I don't see it actually doing that anywhere in the code? Thanks, David

-- David Grant http://www.davidgrant.ca
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From davidgrant at gmail.com Thu Aug 17 19:25:12 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 16:25:12 -0700 Subject: [Numpy-discussion] Interesting memory leak Message-ID:

Hello all, I had a massive memory leak in some of my code. It would basically end up using up all 1GB of my RAM or more if I don't kill the application.
I managed to finally figure out which portion of the code was causing the leak (with great difficulty) and have a little example which exposes the leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is already fixed in 1.0b2. Run this through valgrind with appropriate options (I used the recommended valgrind_py.sh that I found on scipy's site somewhere) and this will leak 100kB. Increase the xrange on the big loop and you can watch the memory increase over time in top. The interesting thing is that the only difference between the leaky and non-leaky code is: if not adjacencyMatrix[anInt2,anInt1] == 0: (leaky) vs. if not adjacencyMatrix[anInt2][anInt1] == 0: (non-leaky) however another way to make the leaky code non-leaky is to change anArrayOfInts to just be [1] Here's the code: from numpy import array def leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix): ys = set() for aList in aListOfArrays: for anInt1 in anArrayOfInts: for anInt2 in aList: if not adjacencyMatrix[anInt2,anInt1] == 0: ys.add(anInt1) return ys def nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix): ys = set() for aList in aListOfArrays: for anInt1 in anArrayOfInts: for anInt2 in aList: if not adjacencyMatrix[anInt2][anInt1] == 0: ys.add(anInt1) return ys if __name__ == "__main__": for i in xrange(10000): aListOfArrays = [[0, 1]] anArrayOfInts = array([1]) adjacencyMatrix = array([[0,1],[1,0]]) #COMMENT OUT ONE OF THE 2 LINES BELOW #bar = nonLeakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix) bar = leakyCode(aListOfArrays, anArrayOfInts, adjacencyMatrix) -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Thu Aug 17 19:30:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Aug 2006 16:30:23 -0700 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID:

David Grant wrote:
> Hello all,
>
> I had a massive memory leak in some of my code. It would basically end
> up using up all 1GB of my RAM or more if I don't kill the application. I
> managed to finally figure out which portion of the code was causing the
> leak (with great difficulty) and have a little example which exposes the
> leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is
> already fixed in 1.0b2. Run this through valgrind with appropriate
> options (I used the recommended valgrind_py.sh that I found on scipy's
> site somewhere) and this will leak 100kB. Increase the xrange on the big
> loop and you can watch the memory increase over time in top.

I don't see a leak in 1.0b2.dev3002.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From davidgrant at gmail.com Thu Aug 17 20:08:28 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 17 Aug 2006 17:08:28 -0700 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID:

On 8/17/06, Robert Kern wrote:
> > David Grant wrote:
> > Hello all,
> >
> > I had a massive memory leak in some of my code. It would basically end
> > up using up all 1GB of my RAM or more if I don't kill the application.
I > > managed to finally figure out which portion of the code was causing the > > leak (with great difficulty) and have a little example which exposes the > > leak. I am using numpy-0.9.8 and I'm wondering if perhaps this is > > already fixed in 1.0b2. Run this through valgrind with appropriate > > options (I used the recommended valgrind_py.sh that I found on scipy's > > site somewhere) and this will leak 100kB. Increase the xrange on the big > > loop and you can watch the memory increase over time in top. > > I don't see a leak in 1.0b2.dev3002.

Thanks Robert. I decided to upgrade to 1.0b2 just to see what I get and now I get 7kB of "possibly lost" memory, coming from PyObject_Malloc (in /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it isn't getting any larger if I increase the loop iterations. Looks good then. I don't really know the meaning of this "possibly lost" memory.

-- David Grant http://www.davidgrant.ca
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From wbaxter at gmail.com Fri Aug 18 00:13:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 18 Aug 2006 13:13:06 +0900 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output Message-ID:

If you do this:
>>> numpy.linalg.eig(numpy.random.rand(3,3))
You'll (almost always) get a wrong answer back from numpy. Something like:

(array([ 1.72167898, -0.07251007, -0.07251007]),
array([[ 0.47908847, 0.72095163, 0.72095163],
[ 0.56659142, -0.46403504, -0.46403504],
[ 0.67040914, 0.01361572, 0.01361572]]))

The return value should be complex (unless rand() just happens to return something symmetric). It really needs to either throw an exception, or preferably for this function, just go ahead and return something complex, like the numpy.dft functions do.
On the other hand, it would be nice to stick with plain doubles if the output isn't complex, but I'd rather get the right output all the time than get the minimal type that will handle the output.

This is with beta 1.

Incidentally, I tried logging into the Trac here: http://projects.scipy.org/scipy/scipy to file a bug, but it wouldn't let me in under the account I've been using for a while now. Is the login system broken? Were passwords reset or something?

--bb
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From david at ar.media.kyoto-u.ac.jp Fri Aug 18 00:54:44 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 13:54:44 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? Message-ID: <44E54814.7030803@ar.media.kyoto-u.ac.jp>

Hi, I am investigating the use of ctypes to write C extensions for numpy/scipy. First, thank you for the wiki, it makes it easy to implement in a few minutes a wrapper for a C function taking arrays as arguments. I am running recent SVN versions of numpy and scipy, and I couldn't make load_library work as I expected. Let's say I have a libhello.so library on linux, which contains the C function int sum(const int* in, size_t n). To wrap it, I use:

import numpy as N
from ctypes import cdll, POINTER, c_int, c_uint

_hello = cdll.LoadLibrary('libhello.so')

_hello.sum.restype = c_int
_hello.sum.argtypes = [POINTER(c_int), c_uint]

def sum(data):
    return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data))

n = 10
data = N.arange(n)

print data
print "sum(data) is " + str(sum(data))

That works OK, but to avoid the platform dependency, I would like to use load_library from numpy: I just replace the cdll.LoadLibrary call by:

_hello = N.ctypeslib.load_library('hello', '.')

which does not work.
The python interpreter returns a strange error message, because it says hello.so.so is not found, and it is looking for the library in the directory usr/$(PWD), which does not make sense to me. Is it a bug, or am I just not understanding how to use the load_library function ? David From joris at ster.kuleuven.be Fri Aug 18 02:21:39 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 18 Aug 2006 08:21:39 +0200 Subject: [Numpy-discussion] numpy installation problem Message-ID: <200608180821.39074.joris@ster.kuleuven.be> Hi, In the README.txt of the numpy installation it says that one could use a site.cfg file to specify non-standard locations of ATLAS en LAPACK libraries, but it doesn't explain how. I have a directory software/atlas3.6.0/lib/Linux_PPROSSE2/ which contains libcombinedlapack.a libatlas.a libcblas.a libf77blas.a liblapack.a libtstatlas.a where liblapack.a are the few lapack routines provided by ATLAS, and libcombinedlapack.a (> 5 MB) contains the full LAPACK library including the few optimized routines of ATLAS. From the example in numpy/distutils/system_info.py I figured that my site.cfg file should look like --- site.cfg --- [atlas] library_dirs = /software/atlas3.6.0/lib/Linux_PPROSSE2/ atlas_libs = combinedlapack, f77blas, cblas, atlas --------------- However, during numpy installation, he says: FOUND: libraries = ['combinedlapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/software/atlas3.6.0/lib/Linux_PPROSSE2/'] which is good, but afterwards he also says: Lapack library (from ATLAS) is probably incomplete: size of /software/atlas3.6.0/lib/Linux_PPROSSE2/liblapack.a is 305k (expected >4000k) which he shouldn't use at all. Strangely enough, renaming libcombinedlapack.a to liblapack.a and adapting the site.cfg file accordingly still gives the same message. Any pointers? 
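As an aside to the site.cfg discussion: once a build completes, the BLAS/LAPACK libraries and directories that numpy's setup actually recorded can be inspected from Python, which helps confirm whether a custom site.cfg was honored. A quick sketch (not from the thread; the output format varies between numpy versions):

```python
import numpy as np

# Print the library names and search paths recorded at build time.
# If the custom ATLAS/LAPACK libraries were picked up, they appear here.
np.show_config()
```

Comparing this output against the FOUND: lines printed during the build shows whether the installed numpy and the configured site.cfg agree.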
Joris From nwagner at iam.uni-stuttgart.de Fri Aug 18 03:21:38 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 18 Aug 2006 09:21:38 +0200 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output In-Reply-To: References: Message-ID: <44E56A82.2030604@iam.uni-stuttgart.de> Bill Baxter wrote: > If you do this: > >>> numpy.linalg.eig(numpy.random.rand(3,3)) > > You'll (almost always) get a wrong answer back from numpy. Something > like: > > (array([ 1.72167898, -0.07251007, -0.07251007]), > array([[ 0.47908847, 0.72095163, 0.72095163], > [ 0.56659142, -0.46403504, -0.46403504], > [ 0.67040914, 0.01361572, 0.01361572]])) > > The return value should be complex (unless rand() just happens to > return something symmetric). > > It really needs to either throw an exception, or preferably for this > function, just go ahead and return something complex, like the > numpy.dft functions do. > On the other hand it, would be nice to stick with plain doubles if the > output isn't complex, but I'd rather get the right output all the time > than get the minimal type that will handle the output. > > This is with beta 1. > > Incidentally, I tried logging into the Trac here: > http://projects.scipy.org/scipy/scipy > to file a bug, but it wouldn't let me in under the account I've been > using for a while now. Is the login system broken? Were passwords > reset or something? > > > --bb > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > AFAIK this problem is fixed. http://projects.scipy.org/scipy/numpy/ticket/215 I have no problem wrt the Trac system. Nils From wbaxter at gmail.com Fri Aug 18 04:06:41 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 18 Aug 2006 17:06:41 +0900 Subject: [Numpy-discussion] bug with numpy.linalg.eig for complex output In-Reply-To: <44E56A82.2030604@iam.uni-stuttgart.de> References: <44E56A82.2030604@iam.uni-stuttgart.de> Message-ID: Thanks for the info Nils. Sounds like it was fixed post-1.0b1. Good news. And Trac seems to be letting me in again. Not sure what was wrong there. --bb On 8/18/06, Nils Wagner wrote: > > Bill Baxter wrote: > > If you do this: > > >>> numpy.linalg.eig(numpy.random.rand(3,3)) > > > > You'll (almost always) get a wrong answer back from numpy. Something > > like: > > > > (array([ 1.72167898, -0.07251007, -0.07251007]), > > array([[ 0.47908847, 0.72095163, 0.72095163], > > [ 0.56659142, -0.46403504, -0.46403504], > > [ 0.67040914, 0.01361572, 0.01361572]])) > > > > The return value should be complex (unless rand() just happens to > > return something symmetric). > > > > It really needs to either throw an exception, or preferably for this > > function, just go ahead and return something complex, like the > > numpy.dft functions do. > > On the other hand it, would be nice to stick with plain doubles if the > > output isn't complex, but I'd rather get the right output all the time > > than get the minimal type that will handle the output. 
> > > > This is with beta 1. > > > > Incidentally, I tried logging into the Trac here: > > http://projects.scipy.org/scipy/scipy > > to file a bug, but it wouldn't let me in under the account I've been > > using for a while now. Is the login system broken? Were passwords > > reset or something? > > > > > > --bb > > > > - > > AFAIK this problem is fixed. > > http://projects.scipy.org/scipy/numpy/ticket/215 > > I have no problem wrt the Trac system. > > Nils > > -discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Fri Aug 18 05:16:46 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Aug 2006 11:16:46 +0200 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <44E54814.7030803@ar.media.kyoto-u.ac.jp> References: <44E54814.7030803@ar.media.kyoto-u.ac.jp> Message-ID: <20060818091646.GR10593@mentat.za.net> On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote: > import numpy as N > from ctypes import cdll, POINTER, c_int, c_uint > > _hello = cdll.LoadLibrary('libhello.so') > > _hello.sum.restype = c_int > _hello.sum.artype = [POINTER(c_int), c_uint] > > def sum(data): > return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data)) > > n = 10 > data = N.arange(n) > > print data > print "sum(data) is " + str(sum(data)) > > > That works OK, but to avoid the platform dependency, I would like to use > load_library from numpy: I just replace the cdll.LoadLibrary by : > > _hello = N.ctypeslib.load_library('hello', '.') Shouldn't that be 'libhello'? 
Try _hello = N.ctypes_load_library('libhello','__file__') Cheers St?fan From fullung at gmail.com Fri Aug 18 06:31:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 18 Aug 2006 12:31:06 +0200 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: Message-ID: Hello all > > I decided to upgrade to 1.0b2 just to see what I get and now I get 7kB of > "possibly lost" memory, coming from PyObject_Malloc (in > /usr/lib/libpython2.4.so.1.0). This is a constant 7kB, however, and it > isn't getting any larger if I increase the loop iterations. Looks good > then. I don't really know the meaning of this "possibly lost" memory. http://projects.scipy.org/scipy/numpy/ticket/195 This leak is caused by add_docstring, but it's supposed to leak. I wonder if there's a way to register some kind of on-exit handler in Python so that this can also be cleaned up? Cheers, Albert From fullung at gmail.com Fri Aug 18 06:40:05 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 18 Aug 2006 12:40:05 +0200 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <44E54814.7030803@ar.media.kyoto-u.ac.jp> Message-ID: Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of David Cournapeau > Sent: 18 August 2006 06:55 > To: Discussion of Numerical Python > Subject: [Numpy-discussion] ctypes: how does load_library work ? > > > That works OK, but to avoid the platform dependency, I would like to use > load_library from numpy: I just replace the cdll.LoadLibrary by : > > _hello = N.ctypeslib.load_library('hello', '.') > > which does not work. The python interpreter returns a strange error > message, because it says hello.so.so is not found, and it is looking for > the library in the directory usr/$(PWD), which does not make sense to > me. Is it a bug, or am I just not understanding how to use the > load_library function ? 
load_library currently assumes that library names don't have a prefix. We might want to rethink this assumption on Linux and other Unixes. load_library's second argument is a filename or a directory name. If it's a directory, load_library looks for hello. in that directory. If it's a filename, load_library calls os.path.dirname to get a directory. The idea with this is that in a module you'll probably have one file that loads the library and sets up argtypes and restypes and here you'll do (in mylib.py): _mylib = numpy.ctypeslib.load_library('mylib_', __file__) and then the library will be installed in the same directory as mylib.py. Better suggestions for doing all this appreciated. ;-) Cheers, Albert From david at ar.media.kyoto-u.ac.jp Fri Aug 18 07:36:21 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 20:36:21 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: <20060818091646.GR10593@mentat.za.net> References: <44E54814.7030803@ar.media.kyoto-u.ac.jp> <20060818091646.GR10593@mentat.za.net> Message-ID: <44E5A635.3090403@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Fri, Aug 18, 2006 at 01:54:44PM +0900, David Cournapeau wrote: > >> import numpy as N >> from ctypes import cdll, POINTER, c_int, c_uint >> >> _hello = cdll.LoadLibrary('libhello.so') >> >> _hello.sum.restype = c_int >> _hello.sum.artype = [POINTER(c_int), c_uint] >> >> def sum(data): >> return _hello.sum(data.ctypes.data_as(POINTER(c_int)), len(data)) >> >> n = 10 >> data = N.arange(n) >> >> print data >> print "sum(data) is " + str(sum(data)) >> >> >> That works OK, but to avoid the platform dependency, I would like to use >> load_library from numpy: I just replace the cdll.LoadLibrary by : >> >> _hello = N.ctypeslib.load_library('hello', '.') >> > > Shouldn't that be 'libhello'? 
Try > > _hello = N.ctypes_load_library('libhello','__file__') > Well, the library name convention under unix, as far as I know, is 'lib'+ name + '.so' + 'version'. And if I put lib in front of hello, it then does not work under windows. David From david at ar.media.kyoto-u.ac.jp Fri Aug 18 07:42:22 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 18 Aug 2006 20:42:22 +0900 Subject: [Numpy-discussion] ctypes: how does load_library work ? In-Reply-To: References: Message-ID: <44E5A79E.5090402@ar.media.kyoto-u.ac.jp> Albert Strasheim wrote: > Hello all > > >> -----Original Message----- >> From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- >> discussion-bounces at lists.sourceforge.net] On Behalf Of David Cournapeau >> Sent: 18 August 2006 06:55 >> To: Discussion of Numerical Python >> Subject: [Numpy-discussion] ctypes: how does load_library work ? >> >> >> That works OK, but to avoid the platform dependency, I would like to use >> load_library from numpy: I just replace the cdll.LoadLibrary by : >> >> _hello = N.ctypeslib.load_library('hello', '.') >> >> which does not work. The python interpreter returns a strange error >> message, because it says hello.so.so is not found, and it is looking for >> the library in the directory usr/$(PWD), which does not make sense to >> me. Is it a bug, or am I just not understanding how to use the >> load_library function ? >> > > load_library currently assumes that library names don't have a prefix. We > might want to rethink this assumption on Linux and other Unixes. > I think it needs to be modified for linux and Solaris at least, where the prefix lib is put in the library name. When linking, you use -lm, and not -llibm. In dlopen, you use the full name (libm.so). After a quick look at ctypes reference doc, it looks like there are some function to search a library, maybe this can be used ? Anyway, this is kind of nickpicking, as ctypes is really a breeze to use. 
To be able to do the whole wrapping in pure python is great, thanks ! David From faltet at carabos.com Fri Aug 18 07:59:03 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 13:59:03 +0200 Subject: [Numpy-discussion] First impressions on migrating to NumPy Message-ID: <200608181359.03643.faltet@carabos.com> Hi, I'm starting to (slowly) replace numarray by NumPy at the core of PyTables, especially at those places where the speed of NumPy is *much* better, that is, in the creation of arrays (there are places in PyTables where this is critical, most especially in indexing) and in copying arrays. In both cases, NumPy performs between 8x and 40x faster than numarray and this is, well..., excellent :-) Also, the big unification between numerical homogeneous arrays, string homogeneous arrays (with unicode support added) and heterogeneous arrays (recarrays, with nested records support there also!) is greatly simplifying the code in PyTables, where there are many places where one has to distinguish between those different objects in numarray. Fortunately, this distinction is not necessary anymore in many of these places. Furthermore, I'm seeing that most of the corner cases where numarray does well (this was the main reason I was conservative about migrating anyway) are also very well resolved in NumPy (in some cases better; for one, NumPy has chosen NULL-terminated strings for internal representation, instead of the space padding in numarray that gave me lots of headaches). Of course, there are some glitches that I'll report appropriately, but overall, NumPy is behaving better than expected (and I already had *great* expectations). Well, I just wanted to report these experiences in case other people are pondering migrating to NumPy as well. But I also wanted to thank (once more) the NumPy crew for their excellent work, and especially Travis for his first-class work. Thanks! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From Norbert.Nemec.list at gmx.de Fri Aug 18 09:36:47 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Fri, 18 Aug 2006 15:36:47 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup Message-ID: <44E5C26F.6020609@gmx.de> Hi there, in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch fixes the issue. Greetings, Norbert PS: I would have preferred to submit this patch via the sourceforge bug-tracker, but that seems rather confusing: there are tabs "Numarray Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" seems to be used for Numeric. Why isn't NumPy handled via the Sourceforge page? -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: threading-without-smp-setup-bugfix.diff URL: From faltet at carabos.com Fri Aug 18 10:34:18 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 16:34:18 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <200608181634.19694.faltet@carabos.com> On Friday 18 August 2006 15:36, Norbert Nemec wrote: > PS: I would have preferred to submit this patch via the sourceforge > bug-tracker, but that seems rather confusing: there are tabs "Numarray > Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" > seems to be used for Numeric. Why isn't NumPy handled via the > Sourceforge page? Because it has its own development site at: http://projects.scipy.org/scipy/numpy/ Log your bug reports there. Sourceforge is mainly used to distribute tarballs and binary packages of public releases, that's all. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From christopher.e.kees at erdc.usace.army.mil Fri Aug 18 10:44:12 2006 From: christopher.e.kees at erdc.usace.army.mil (Chris Kees) Date: Fri, 18 Aug 2006 09:44:12 -0500 Subject: [Numpy-discussion] First impressions on migrating to NumPy In-Reply-To: <200608181359.03643.faltet@carabos.com> References: <200608181359.03643.faltet@carabos.com> Message-ID: <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> Can you provide some details about your approach to migrating to NumPy? Are you following some documentation on migration or do you have your own plan of attack? Chris On Aug 18, 2006, at 6:59 AM, Francesc Altet wrote: > Hi, > > I'm starting to (slowly) replace numarray by NumPy at the core of > PyTables, > especially at those places where the speed of NumPy is *much* > better, that is, > in the creation of arrays (there are places in PyTables where this is > critical, most especially in indexing) and in copying arrays. In > both cases, > NumPy performs between 8x and 40x faster than numarray and this is, > well..., excellent :-) > > Also, the big unification between numerical homogeneous arrays, > string > homogeneous arrays (with unicode support added) and heterogeneous > arrays > (recarrays, with nested records support there also!) is greatly > simplifying the code in PyTables, where there are many places where one has to > distinguish between those different objects in numarray. > Fortunately, this > distinction is not necessary anymore in many of these places. > > Furthermore, I'm seeing that most of the corner cases where > numarray does well > (this was the main reason I was conservative about migrating > anyway) are > also very well resolved in NumPy (in some cases better; for one, > NumPy has > chosen NULL-terminated strings for internal representation, instead > of the space > padding in numarray that gave me lots of headaches).
Of course, > there are > some glitches that I'll report appropriately, but overall, NumPy is > behaving > better than expected (and I already had *great* expectations). > > Well, I just wanted to report these experiences in case other > people are > pondering migrating to NumPy as well. But I also wanted to > thank (once > more) the NumPy crew for their excellent work, and especially Travis > for his > first-class work. > > Thanks! > > -- >> 0,0< Francesc Altet http://www.carabos.com/ > V V Cárabos Coop. V. Enjoy Data > "-" > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From stefan at sun.ac.za Fri Aug 18 10:45:03 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Aug 2006 16:45:03 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <20060818144503.GW10593@mentat.za.net> Hi Norbert On Fri, Aug 18, 2006 at 03:36:47PM +0200, Norbert Nemec wrote: > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch > fixes the issue. Please file a ticket so that we don't lose track of this.
Stéfan From faltet at carabos.com Fri Aug 18 11:07:51 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 18 Aug 2006 17:07:51 +0200 Subject: [Numpy-discussion] First impressions on migrating to NumPy In-Reply-To: <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> References: <200608181359.03643.faltet@carabos.com> <7A1915DC-2495-480C-9CE1-68D0A5C67FFA@erdc.usace.army.mil> Message-ID: <200608181707.52563.faltet@carabos.com> On Friday 18 August 2006 16:44, Chris Kees wrote: > Can you provide some details about your approach to migrating to > NumPy? Are you following some documentation on migration or do you > have your own plan of attack? Well, to tell the truth, neither of the two ;-). The truth is that I was trying to accelerate some parts of my software and realized that numarray was an important bottleneck. NumPy was already in an advanced beta stage and some small benchmarks convinced me that it would be the solution. So, I started porting one single C extension (PyTables has several), the simplest one, and checked that the results were correct (and confirmed that the new code was much faster!). After that, the second extension was converted, and I'm in the process of checking everything. Now, there remain 3 more extensions to migrate, but the important ones for me are done. So, no plans other than having a good motivation (and the need for speed was a very good one). However, I think that having a complete test suite checking every detail of your software was key. Also, having access to the excellent book by Travis was extremely helpful. Finally, having IPython open to check everything, look at online docstrings and do fast timings added the "cerise sur le gâteau". Luck! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From oliphant.travis at ieee.org Fri Aug 18 14:18:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:18:14 -0700 Subject: [Numpy-discussion] numpy.bool8 In-Reply-To: <44E45FA7.7080209@stsci.edu> References: <44E45FA7.7080209@stsci.edu> Message-ID: <44E60466.2060504@ieee.org> Christopher Hanley wrote: > What happened to numpy.bool8? I realize that bool_ is just as good. I > was just wondering what motivated the change? > > I think it was accidental... The numpy scalar tp_names were recently changed to be more consistent with Python and the bool8 construct probably disappeared because it was automatically generated. Thanks for the check. -Travis From oliphant.travis at ieee.org Fri Aug 18 14:21:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:21:05 -0700 Subject: [Numpy-discussion] convertcode.py In-Reply-To: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> References: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> Message-ID: <44E60511.40507@ieee.org> Chris Kees wrote: > Hi, > > I just ran convertcode.py on my code (from the latest svn source > of numpy) and it looks like it just changed the import statements to > > import numpy.oldnumeric as Numeric > > So it doesn't look like it's really helping me move over to the > new usage.
Is there a script that will converts code to use the > new numpy as it's intended to be used? > Not yet. The transition approach is to use the compatibility layer first by running oldnumeric.alter_code1.py and then running alter_code2.py which will take you from the compatibility layer to NumPy (but alter_code2 is not completed yet). The description of what these codes do is in the latest version of the second chapter of my book (which is part of the preview chapters that are available on the web). -Travis From oliphant.travis at ieee.org Fri Aug 18 14:23:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:23:45 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 In-Reply-To: References: Message-ID: <44E605B1.2060705@ieee.org> David Grant wrote: > I'm contemplating upgrading to 1.0b2. The main reason is that I am > experiencing a major memory leak and before I report a bug I think the > developers would appeciate if I was using the most recent version. Am > I correct in that the only major change that might actually break my > code is that the following functions: > > take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, > amin, prod, cumprod, mean, std, var > > now have axis=None as argument? Also the default return type is "float" instead of "int". I've highlighted the changes I think might break 0.9.8 code with the NOTE annotation on the page of release notes. > > BTW, how come alter_code2.py ( > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) > says in the docstring that it "converts functions that don't give > axis= keyword that have changed" but I don't see it actually doing > that anywhere in the code? Because it isn't done. The comments are a "this is what it should do". If you notice there is a warning on import (probably should be an error). 
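The axis=None default Travis mentions can be checked quickly in a current numpy (assuming numpy is installed; the historical pre-1.0 defaults differed, which is the whole point of the change):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# With no axis argument the reduction runs over the flattened array:
print(a.sum())        # 15
# An explicit axis reduces along that dimension only:
print(a.sum(axis=0))  # [3 5 7]
```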
-Travis From haase at msg.ucsf.edu Fri Aug 18 14:26:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 11:26:12 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize Message-ID: <200608181126.12599.haase@msg.ucsf.edu> Hi, array dtype descriptors have an attribute itemsize that gives the total number of bytes required for an item of that dtype. Scalar types, like numpy.int32, also have that attribute, but it returns "something else" - don't know what: >>> a.dtype.itemsize 4 >>> a.dtype.name 'float32' >>> N.int32.itemsize Furthermore there are *lots* more attributes on a scalar type, e.g. >>> N.int32.data >>> N.int32.argmax() Traceback (most recent call last): File "", line 1, in ? TypeError: descriptor 'argmax' of 'genericscalar' object needs an argument Are those useful? Thanks, Sebastian Haase From oliphant.travis at ieee.org Fri Aug 18 14:34:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:34:27 -0700 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E5C26F.6020609@gmx.de> References: <44E5C26F.6020609@gmx.de> Message-ID: <44E60833.2060100@ieee.org> Norbert Nemec wrote: > Hi there, > > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch > fixes the issue. > > Greetings, > Norbert > > PS: I would have preferred to submit this patch via the sourceforge > bug-tracker, but that seems rather confusing: there are tabs "Numarray > Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" > seems to be used for Numeric. Why isn't NumPy handled via the > Sourceforge page? > NumPy development happens on the SVN servers at scipy.org and bug-tracking is handled through the Trac system at http://projects.scipy.org/scipy/numpy We only use sourceforge for distribution. I need more description on why the logic is not right.
-Travis From oliphant.travis at ieee.org Fri Aug 18 14:38:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 11:38:17 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181126.12599.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> Message-ID: <44E60919.1000606@ieee.org> Sebastian Haase wrote: > Hi, > array dtype descriptors have an attribute itemsize that gives the total > number of bytes required for an item of that dtype. > > Scalar types, like numy.int32, also have that attribute, > but it returns "something else" - don't know what: > > > Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. > The scalar types are actual Python types (classes) whereas the dtype objects are instances. The attributes you are seeing of the typeobject are very useful when you have an instance of that type. With numpy.int32.itemsize you are doing the equivalent of numpy.dtype.itemsize -Travis From kortmann at ideaworks.com Fri Aug 18 15:32:17 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Fri, 18 Aug 2006 12:32:17 -0700 (PDT) Subject: [Numpy-discussion] 1.02b Message-ID: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> I realize it was just released, but is there going to be a windows release for 1.02b? 
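Travis's distinction in the thread above, that numpy scalar types are Python classes while dtype objects are instances, is why numpy.int32.itemsize returns "something else" rather than 4. A plain-Python analogue (no numpy needed, and only an analogue, not numpy's actual implementation) shows the same class-vs-instance behaviour:

```python
class ItemType(object):
    """Stand-in for a numpy scalar type such as int32."""
    def __init__(self):
        self._nbytes = 4

    @property
    def itemsize(self):
        return self._nbytes

# On the class itself, 'itemsize' is the descriptor object ("something else"):
print(ItemType.itemsize)
# Only an instance yields the actual value:
print(ItemType().itemsize)  # 4
```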
From haase at msg.ucsf.edu Fri Aug 18 16:16:15 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 13:16:15 -0700 Subject: [Numpy-discussion] =?iso-8859-1?q?attributes_of_scalar_types_-_e?= =?iso-8859-1?q?=2Eg=2E=09numpy=2Eint32=2Eitemsize?= In-Reply-To: <44E60919.1000606@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <44E60919.1000606@ieee.org> Message-ID: <200608181316.15166.haase@msg.ucsf.edu> On Friday 18 August 2006 11:38, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > array dtype descriptors have an attribute itemsize that gives the total > > number of bytes required for an item of that dtype. > > > > Scalar types, like numy.int32, also have that attribute, > > but it returns "something else" - don't know what: > > > > > > Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. > > The scalar types are actual Python types (classes) whereas the dtype > objects are instances. > > The attributes you are seeing of the typeobject are very useful when you > have an instance of that type. > > With numpy.int32.itemsize you are doing the equivalent of > numpy.dtype.itemsize but why then do I not get the result 4 ? -Sebastian From charlesr.harris at gmail.com Fri Aug 18 17:03:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 Aug 2006 15:03:35 -0600 Subject: [Numpy-discussion] convertcode.py In-Reply-To: <44E60511.40507@ieee.org> References: <9EC96922-E299-4996-BED4-262B0E3E0126@erdc.usace.army.mil> <44E60511.40507@ieee.org> Message-ID: Hi Travis, > The description of what these codes do is in the latest version of the > second chapter of my book (which is part of the preview chapters that > are available on the web). Speaking of which, is it possible for us early buyers to get updated copies? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Fri Aug 18 17:09:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 14:09:07 -0700 Subject: [Numpy-discussion] 1.02b In-Reply-To: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> References: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> Message-ID: <44E62C73.6070304@ieee.org> kortmann at ideaworks.com wrote: > I realize it was just released, but is there going to be a windows release > for 1.02b? > > There will be either be one of 1.0b3 or one of 1.0b2 released for windows by Monday. -Travis From davidgrant at gmail.com Fri Aug 18 17:40:37 2006 From: davidgrant at gmail.com (David Grant) Date: Fri, 18 Aug 2006 14:40:37 -0700 Subject: [Numpy-discussion] numpy 0.9.8->1.0b2 In-Reply-To: <44E605B1.2060705@ieee.org> References: <44E605B1.2060705@ieee.org> Message-ID: On 8/18/06, Travis Oliphant wrote: > David Grant wrote: > > I'm contemplating upgrading to 1.0b2. The main reason is that I am > > experiencing a major memory leak and before I report a bug I think the > > developers would appeciate if I was using the most recent version. Am > > I correct in that the only major change that might actually break my > > code is that the following functions: > > > > take, repeat, sum, product, sometrue, cumsum, cumproduct, ptp, amax, > > amin, prod, cumprod, mean, std, var > > > > now have axis=None as argument? > Also the default return type is "float" instead of "int". I've > highlighted the changes I think might break 0.9.8 code with the NOTE > annotation on the page of release notes. > > > > BTW, how come alter_code2.py ( > > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/oldnumeric/alter_code2.py?rev=HEAD) > > says in the docstring that it "converts functions that don't give > > axis= keyword that have changed" but I don't see it actually doing > > that anywhere in the code? > Because it isn't done. The comments are a "this is what it should do". 
> If you notice there is a warning on import (probably should be an error). Oh ok, so maybe a FIXME then... oh well, it's all a question of personal style, as long as you know what they mean. :-) I see the warning now...good idea. I see the "Important changes are denoted with a NOTE:" now in the release notes now. Finally realizing that I had a scipy wiki account, I added some more emphasis here for others. Thanks, David -- David Grant http://www.davidgrant.ca From Fernando.Perez at colorado.edu Fri Aug 18 17:54:13 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 18 Aug 2006 15:54:13 -0600 Subject: [Numpy-discussion] [Fwd: Re: Signal handling] Message-ID: <44E63705.3020804@colorado.edu> Hi all, here is the SAGE signal handling code, graciously donated by William Stein. I'd suggest putting (with any modifications to adapt it to numpy conventions) this into the actual numpy headers, so that not only all of our auto-generation tools (f2py, weave) can use it, but so that it also becomes trivial for end-users to user the same macros in their own code without doing anything additional. Regards, f -------- Original Message -------- Subject: Re: Signal handling Date: Fri, 18 Aug 2006 21:15:38 +0000 From: William Stein To: Fernando Perez References: <44E586D3.7010209 at colorado.edu> Here you are (see attached). Let me know if you have any trouble with gmail mangling the attachment. On 8/18/06, Fernando Perez wrote: > Hi William, > > could you please send me 'officially' an email with the interrupt.{c,h} files > and a notice of them being BSD licensed ? With that, I can then forward them > to the numpy list and work on their inclusion tomorrow. -- William Stein Associate Professor of Mathematics University of Washington -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: interrupt.c URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: interrupt.h URL: From fperez.net at gmail.com Fri Aug 18 17:58:34 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 18 Aug 2006 15:58:34 -0600 Subject: [Numpy-discussion] [Fwd: Re: Signal handling] In-Reply-To: <44E63705.3020804@colorado.edu> References: <44E63705.3020804@colorado.edu> Message-ID: On 8/18/06, Fernando Perez wrote: > here is the SAGE signal handling code, graciously donated by William Stein. Hit send too soon... I forgot to thank William for this code :) hopefully one of many things we'll be sharing between numpy/scipy and SAGE. Cheers, f From oliphant.travis at ieee.org Fri Aug 18 18:25:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 15:25:47 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181316.15166.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> <44E60919.1000606@ieee.org> <200608181316.15166.haase@msg.ucsf.edu> Message-ID: <44E63E6B.2090503@ieee.org> Sebastian Haase wrote: > On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> Hi, >>> array dtype descriptors have an attribute itemsize that gives the total >>> number of bytes required for an item of that dtype. >>> >>> Scalar types, like numy.int32, also have that attribute, >>> but it returns "something else" - don't know what: >>> >>> >>> Furthermore there are *lot's* of more attributes to a scalar dtype, e.g. >>> >> The scalar types are actual Python types (classes) whereas the dtype >> objects are instances. >> >> The attributes you are seeing of the typeobject are very useful when you >> have an instance of that type. >> >> With numpy.int32.itemsize you are doing the equivalent of >> numpy.dtype.itemsize >> > > but why then do I not get the result 4 ? > Because it's not a "class" attribute, it's an instance attribute. What does numpy.dtype.itemsize give you? 
-Travis From haase at msg.ucsf.edu Fri Aug 18 18:57:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 15:57:22 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <44E63E6B.2090503@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181316.15166.haase@msg.ucsf.edu> <44E63E6B.2090503@ieee.org> Message-ID: <200608181557.22912.haase@msg.ucsf.edu> On Friday 18 August 2006 15:25, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >> Sebastian Haase wrote: > >>> Hi, > >>> array dtype descriptors have an attribute itemsize that gives the > >>> total number of bytes required for an item of that dtype. > >>> > >>> Scalar types, like numy.int32, also have that attribute, > >>> but it returns "something else" - don't know what: > >>> > >>> > >>> Furthermore there are *lot's* of more attributes to a scalar dtype, > >>> e.g. > >> > >> The scalar types are actual Python types (classes) whereas the dtype > >> objects are instances. > >> > >> The attributes you are seeing of the typeobject are very useful when you > >> have an instance of that type. > >> > >> With numpy.int32.itemsize you are doing the equivalent of > >> numpy.dtype.itemsize > > > > but why then do I not get the result 4 ? > > Because it's not a "class" attribute, it's an instance attribute. > > What does numpy.dtype.itemsize give you? > I'm really sorry for being so dumb - but HOW can I get then the number of bytes needed by a given scalar type ? -S. From joris at ster.kuleuven.ac.be Fri Aug 18 18:07:17 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat, 19 Aug 2006 00:07:17 +0200 Subject: [Numpy-discussion] numpy installation Message-ID: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> Hi, I am correctly assuming that numpy needs the full lapack distribution, and not just the few lapack routines given by atlas? 
After installing numpy I still get the warning ImportError: /software/python-2.4.1/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: s_wsfe which seems to indicate that numpy is trying to use its lapack_lite version instead of the full lapack distribution. Defining [lapack] library_dirs = /software/lapack3.0/ lapack_libs = combinedlapack in my site.cfg does not help. It also always gives a warning that my lapack lib in my atlas directory is incomplete despite the fact that I specified the full lapack library. The complaint of incompleteness disappears when I overwrite the liblapack.a of atlas with the one of the full lapack distribution, but then I still have the ImportError when I try to import numpy in my python shell. Any pointers? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From luszczek at cs.utk.edu Fri Aug 18 19:48:43 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Fri, 18 Aug 2006 19:48:43 -0400 Subject: [Numpy-discussion] numpy installation In-Reply-To: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> Message-ID: <200608181948.43282.luszczek@cs.utk.edu> s_wsfe is not LAPACK's routine it's a routine from the g2c library. You have to link it in in addition to lapack_lite. Piotr On Friday 18 August 2006 18:07, joris at ster.kuleuven.ac.be wrote: > Hi, > > I am correctly assuming that numpy needs the full lapack > distribution, and not just the few lapack routines given by atlas? > After installing numpy I still get the warning > > ImportError: > /software/python-2.4.1/lib/python2.4/site-packages/numpy/linalg/lapac >k_lite.so: undefined symbol: s_wsfe > > which seems to indicate that numpy is trying to use its lapack_lite > version instead of the full lapack distribution. Defining > > [lapack] > library_dirs = /software/lapack3.0/ > lapack_libs = combinedlapack > > in my site.cfg does not help. 
It also always gives a warning that my > lapack lib in my atlas directory is incomplete despite the fact that > I specified the full lapack library. The complaint of incompleteness > disappears when I overwrite the liblapack.a of atlas with the one of > the full lapack distribution, but then I still have the ImportError > when I try to import numpy in my python shell. > > Any pointers? > > Cheers, > Joris From oliphant.travis at ieee.org Fri Aug 18 19:51:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 18 Aug 2006 16:51:35 -0700 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181557.22912.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181316.15166.haase@msg.ucsf.edu> <44E63E6B.2090503@ieee.org> <200608181557.22912.haase@msg.ucsf.edu> Message-ID: <44E65287.4020508@ieee.org> Sebastian Haase wrote: > On Friday 18 August 2006 15:25, Travis Oliphant wrote: > >> Sebastian Haase wrote: > >>> On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >>>> Sebastian Haase wrote: > >>>>> Hi, > >>>>> array dtype descriptors have an attribute itemsize that gives the > >>>>> total number of bytes required for an item of that dtype.
>>>>> >>>>> Scalar types, like numy.int32, also have that attribute, >>>>> but it returns "something else" - don't know what: >>>>> >>>>> >>>>> Furthermore there are *lot's* of more attributes to a scalar dtype, >>>>> e.g. >>>>> >>>> The scalar types are actual Python types (classes) whereas the dtype >>>> objects are instances. >>>> >>>> The attributes you are seeing of the typeobject are very useful when you >>>> have an instance of that type. >>>> >>>> With numpy.int32.itemsize you are doing the equivalent of >>>> numpy.dtype.itemsize >>>> >>> but why then do I not get the result 4 ? >>> >> Because it's not a "class" attribute, it's an instance attribute. >> >> What does numpy.dtype.itemsize give you? >> >> > I'm really sorry for being so dumb - but HOW can I get then the number of > bytes needed by a given scalar type ? > > Ah, the real question. Sorry for not catching it earlier. I've been in "make sure this isn't a bug mode" for a long time. If you have a scalar type you could create one and then check the itemsize: int32(0).itemsize Or you could look at the name and parse out how big it is. There is also a stored dictionary-like object that returns the number of bytes for any data-type recognized: numpy.nbytes[int32] -Travis From fperez.net at gmail.com Fri Aug 18 19:52:57 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 18 Aug 2006 17:52:57 -0600 Subject: [Numpy-discussion] Interesting memory leak In-Reply-To: References: Message-ID: > This leak is caused by add_docstring, but it's supposed to leak. I wonder if > there's a way to register some kind of on-exit handler in Python so that > this can also be cleaned up? import atexit atexit.register(your_cleanup_function) whose api is no args on input: def your_cleanup_function(): do_whatever... 
You could use here a little extension function which goes in and does the necessary free() calls on a pre-stored list of allocated pointers, if there's more than one (I don't really know what's going on here). Cheers, f From stefan at sun.ac.za Fri Aug 18 20:00:57 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 19 Aug 2006 02:00:57 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <20060818144503.GW10593@mentat.za.net> References: <44E5C26F.6020609@gmx.de> <20060818144503.GW10593@mentat.za.net> Message-ID: <20060819000057.GZ10593@mentat.za.net> On Fri, Aug 18, 2006 at 04:45:03PM +0200, Stefan van der Walt wrote: > Hi Norbert > > On Fri, Aug 18, 2006 at 03:36:47PM +0200, Norbert Nemec wrote: > > in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch > > fixes the issue. > > Please file a ticket so that we don't lose track of this. Urgh, please excuse me. It seems that I have lost the ability to read more than one paragraph. St?fan From haase at msg.ucsf.edu Fri Aug 18 20:05:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Aug 2006 17:05:21 -0700 Subject: [Numpy-discussion] =?iso-8859-1?q?attributes_of_scalar_types_-_e?= =?iso-8859-1?q?=2Eg=2E=09numpy=2Eint32=2Eitemsize?= In-Reply-To: <44E65287.4020508@ieee.org> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181557.22912.haase@msg.ucsf.edu> <44E65287.4020508@ieee.org> Message-ID: <200608181705.21240.haase@msg.ucsf.edu> On Friday 18 August 2006 16:51, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 18 August 2006 15:25, Travis Oliphant wrote: > >> Sebastian Haase wrote: > >>> On Friday 18 August 2006 11:38, Travis Oliphant wrote: > >>>> Sebastian Haase wrote: > >>>>> Hi, > >>>>> array dtype descriptors have an attribute itemsize that gives the > >>>>> total number of bytes required for an item of that dtype. 
> >>>>> > >>>>> Scalar types, like numy.int32, also have that attribute, > >>>>> but it returns "something else" - don't know what: > >>>>> > >>>>> > >>>>> Furthermore there are *lot's* of more attributes to a scalar dtype, > >>>>> e.g. > >>>> > >>>> The scalar types are actual Python types (classes) whereas the dtype > >>>> objects are instances. > >>>> > >>>> The attributes you are seeing of the typeobject are very useful when > >>>> you have an instance of that type. > >>>> > >>>> With numpy.int32.itemsize you are doing the equivalent of > >>>> numpy.dtype.itemsize > >>> > >>> but why then do I not get the result 4 ? > >> > >> Because it's not a "class" attribute, it's an instance attribute. > >> > >> What does numpy.dtype.itemsize give you? > > > > I'm really sorry for being so dumb - but HOW can I get then the number of > > bytes needed by a given scalar type ? > > Ah, the real question. Sorry for not catching it earlier. I've been in > "make sure this isn't a bug mode" for a long time. > > If you have a scalar type you could create one and then check the itemsize: > > int32(0).itemsize > > Or you could look at the name and parse out how big it is. > > There is also a stored dictionary-like object that returns the number of > bytes for any data-type recognized: > > numpy.nbytes[int32] Thanks, that seems to be a handy "dictionary-like object" Just for the record - in the meantime I found this: >>> N.dtype(N.int32).itemsize 4 Cheers, Sebastian From joris at ster.kuleuven.be Fri Aug 18 20:16:52 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Sat, 19 Aug 2006 02:16:52 +0200 Subject: [Numpy-discussion] numpy installation In-Reply-To: <200608181948.43282.luszczek@cs.utk.edu> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> <200608181948.43282.luszczek@cs.utk.edu> Message-ID: <200608190216.52391.joris@ster.kuleuven.be> Hi, [PL]: s_wsfe is not LAPACK's routine it's a routine from the g2c library. 
[PL]: You have to link it in in addition to lapack_lite. Thanks for the pointer. Sorry about my ignorance about these things. But is lapack_lite linked to numpy even if you specify the full lapack library? After some googling I learned that g2c is a lib which takes care that you can link fortran and C libraries (again my ignorance...). It's still not obvious for me, though, where/how I can make the install program do this linking. I have a /usr/lib/libg2c.a, so I am surprised it doesn't find it right away... Anybody experienced something similar, or other pointers? Ciao, Joris From charlesr.harris at gmail.com Fri Aug 18 21:35:43 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 18 Aug 2006 19:35:43 -0600 Subject: [Numpy-discussion] Whitespace Message-ID: Hi All, I've noticed a lot of trailing whitespace while browsing through the numpy subversion repository. So here is a perl script I pinched from the linux-kernel mailing list that does a good job of removing it. Chuck -------------- next part -------------- A non-text attachment was scrubbed... Name: cleanfile Type: application/octet-stream Size: 1122 bytes Desc: not available URL: From joris at ster.kuleuven.be Sat Aug 19 18:55:32 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Sun, 20 Aug 2006 00:55:32 +0200 Subject: [Numpy-discussion] speed degression Message-ID: <200608200055.32320.joris@ster.kuleuven.be> Hi, Some of my code is heavily using large complex arrays, and I noticed a speed degression in NumPy 1.0b2 with respect to Numarray. The following code snippet is an example that on my computer runs 10% faster in Numarray than in NumPy. >>> A = zeros(1000000, complex) >>> for k in range(1000): ... A *= zeros(1000000, complex) (I replaced 'complex' with 'Complex' in Numarray). Can anyone confirm this? 
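The snippet above can be timed in a self-contained way with the standard library's timeit (numarray is not assumed here, so only the numpy side is measured; the sizes are reduced so it finishes quickly):

```python
import timeit
import numpy as np

a = np.zeros(100000, complex)
b = np.zeros(100000, complex)

# In-place multiply, equivalent to `a *= b` in the snippet above.
elapsed = timeit.timeit(lambda: np.multiply(a, b, out=a), number=100)
print("100 in-place complex multiplies: %.3f s" % elapsed)
```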
Ciao, Joris From charlesr.harris at gmail.com Sat Aug 19 20:00:22 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 19 Aug 2006 18:00:22 -0600 Subject: [Numpy-discussion] speed degression In-Reply-To: <200608200055.32320.joris@ster.kuleuven.be> References: <200608200055.32320.joris@ster.kuleuven.be> Message-ID: Yes, On 8/19/06, Joris De Ridder wrote: > Hi, > > Some of my code is heavily using large complex arrays, and I noticed a speed > degression in NumPy 1.0b2 with respect to Numarray. The following code snippet > is an example that on my computer runs 10% faster in Numarray than in NumPy. > > >>> A = zeros(1000000, complex) > >>> for k in range(1000): > ... A *= zeros(1000000, complex) > > (I replaced 'complex' with 'Complex' in Numarray). Can anyone confirm this? I see this too. In [242]: t1 = timeit.Timer('a *= nx.zeros(1000000,"D")','import numarray as nx; a = nx.zeros(1000000,"D")') In [243]: t2 = timeit.Timer('a *= nx.zeros(1000000,"D")','import numpy as nx; a = nx.zeros(1000000,"D")') In [244]: t1.repeat(3,100) Out[244]: [5.184194803237915, 5.1135070323944092, 5.1053409576416016] In [245]: t2.repeat(3,100) Out[245]: [5.5170519351959229, 5.4989008903503418, 5.479154109954834] Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From djoefish at yahoo.com Sat Aug 19 20:43:44 2006 From: djoefish at yahoo.com (Daniel Fish) Date: Sat, 19 Aug 2006 17:43:44 -0700 Subject: [Numpy-discussion] install on Python 2.5 Message-ID: Any advice on installing numpy in Python 2.5 on WindowsXP? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tgrav at mac.com Sat Aug 19 20:54:12 2006 From: tgrav at mac.com (Tommy Grav) Date: Sat, 19 Aug 2006 20:54:12 -0400 Subject: [Numpy-discussion] 1.02b problems In-Reply-To: <44E62C73.6070304@ieee.org> References: <1356.12.216.231.149.1155929537.squirrel@webmail.ideaworks.com> <44E62C73.6070304@ieee.org> Message-ID: <58D304A5-7274-479C-AE89-E975B59F4B50@mac.com> I am trying to install numpy on my Apple Powerbook G4 running OS X Tiger (10.4.7). I am running ActivePython 2.4.3. Installing the numPy package seems to work fine but when I try to import it I get the following:

/Users/tgrav --> python
ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on
Python 2.4.3 (#1, Apr 3 2006, 18:07:18)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
 File "<stdin>", line 1, in ?
 File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ?
  import core
 File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/__init__.py", line 10, in ?
  from numeric import *
 File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/numeric.py", line 33, in ?
  CLIP = multiarray.CLIP
AttributeError: 'module' object has no attribute 'CLIP'
>>>

How can I remedy this problem? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simon at arrowtheory.com Sun Aug 20 14:32:24 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun, 20 Aug 2006 19:32:24 +0100 Subject: [Numpy-discussion] Patch against Image.py in the PIL In-Reply-To: <44B57AEF.3080300@ieee.org> References: <44B57AEF.3080300@ieee.org> Message-ID: <20060820193224.303481aa.simon@arrowtheory.com> On Wed, 12 Jul 2006 16:42:55 -0600 Travis Oliphant wrote: > > Attached is a patch that makes PIL Image objects both export and consume > the array interface. Cool ! I found that upon converting to/from a numpy array the image is upside-down. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From Norbert.Nemec.list at gmx.de Sun Aug 20 06:51:52 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Sun, 20 Aug 2006 12:51:52 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E60833.2060100@ieee.org> References: <44E5C26F.6020609@gmx.de> <44E60833.2060100@ieee.org> Message-ID: <44E83EC8.3020501@gmx.de> Travis Oliphant wrote: > Norbert Nemec wrote: > >> Hi there, >> >> in numpy-1.0b2 the logic in setup.py is slightly off. The attached patch >> fixes the issue. >> >> Greetings, >> Norbert >> >> PS: I would have preferred to submit this patch via the sourceforge >> bug-tracker, but that seems rather confusing: there are tabs "Numarray >> Patches" and "Numarray Bugs" but no "NumPy bugs" and the tab "Patches" >> seems to be used for Numeric. Why isn't NumPy handled via the >> Sourceforge page? >> >> > NumPy development happens on the SVN servers at scipy.org and > bug-tracking is handled through the Trac system at > > http://projects.scipy.org/scipy/numpy > > We only use sourceforge for distribution. > OK, sorry. I found this myself in the meantime. I even remember that I stumbled over this some time ago already. 
Problem is: I'm submitting bug-reports, fixes and small patches to so many different projects, that I start mixing up the details of the individual procedures. Furthermore: the TRAC tickets do not seem to allow attachment of patches. Did I miss something there? > I need more description on why the logic is not right. > The original code reads: ----------------------- [...snip...] if nosmp: moredefs = [('NPY_ALLOW_THREADS', '0')] else: moredefs = [] [...snip...] if moredefs: target_f = open(target,'a') for d in moredefs: if isinstance(d,str): target_f.write('#define %s\n' % (d)) else: target_f.write('#define %s %s\n' % (d[0],d[1])) if not nosmp: # default is to use WITH_THREAD target_f.write('#ifdef WITH_THREAD\n#define NPY_ALLOW_THREADS 1\n#else\n#define NPY_ALLOW_THREADS 0\n#endif\n') target_f.close() [...snip...] ---------------- That is: if not nosmp, then moredefs may be empty, in which case NPY_ALLOW_THREADS is not defined at all. My patch ensures that NPY_ALLOW_THREADS is defined in any case, either by putting it in moredefs, or by adding the special conditional define. The conditional "if moredefs" is not needed at all: the file needs to be opened in any case, to define NPY_ALLOW_THREADS one way or other. Greetings, Norbert From fullung at gmail.com Sun Aug 20 08:53:52 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 20 Aug 2006 14:53:52 +0200 Subject: [Numpy-discussion] bugfix-patch for numpy-1.0b2 setup In-Reply-To: <44E83EC8.3020501@gmx.de> Message-ID: Hello all > > Furthermore: the TRAC tickets do not seem to allow attachment of > patches. Did I miss something there? After submitting the initial report, you can attach files to the ticket. 
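Norbert's point can be isolated in a small sketch: NPY_ALLOW_THREADS must reach the generated header on every path, not only when moredefs happens to be non-empty. A simplified stand-alone version of the patched logic, writing to an in-memory buffer instead of the real config file:

```python
import io

def write_config(nosmp):
    """Simplified sketch of the patched setup.py logic."""
    target = io.StringIO()
    moredefs = [('NPY_ALLOW_THREADS', '0')] if nosmp else []
    # The defines are written unconditionally -- no `if moredefs:` guard,
    # so NPY_ALLOW_THREADS is defined one way or the other on every path.
    for d in moredefs:
        if isinstance(d, str):
            target.write('#define %s\n' % d)
        else:
            target.write('#define %s %s\n' % d)
    if not nosmp:
        # default is to use WITH_THREAD
        target.write('#ifdef WITH_THREAD\n'
                     '#define NPY_ALLOW_THREADS 1\n'
                     '#else\n'
                     '#define NPY_ALLOW_THREADS 0\n'
                     '#endif\n')
    return target.getvalue()
```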
Regards, Albert From drswalton at gmail.com Sun Aug 20 18:32:29 2006 From: drswalton at gmail.com (Stephen Walton) Date: Sun, 20 Aug 2006 15:32:29 -0700 Subject: [Numpy-discussion] numpy installation In-Reply-To: <200608190216.52391.joris@ster.kuleuven.be> References: <1155938837.44e63a15b8c43@webmail.ster.kuleuven.be> <200608181948.43282.luszczek@cs.utk.edu> <200608190216.52391.joris@ster.kuleuven.be> Message-ID: <693733870608201532n49f840c4jdf7fa7ca3efd2623@mail.gmail.com> On 8/18/06, Joris De Ridder wrote: > > > > Sorry about my ignorance about these things. But is lapack_lite linked > to numpy even if you specify the full lapack library? As I understand it, lapack_lite is built and used by numpy as a shared library with a subset of the LAPACK routines. After some googling I learned that g2c is a lib which takes care that you > can link fortran and C libraries (again my ignorance...). Which platform are you on? If you do python setup.py build >& spool grep lapack spool what output do you get? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From wbaxter at gmail.com Mon Aug 21 03:45:59 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 21 Aug 2006 16:45:59 +0900 Subject: [Numpy-discussion] linspace upper bound not met? Message-ID: I was porting some code over from matlab in which I relied on the upper bound of linspace to be met exactly. It turns out that it isn't always exactly met in numpy. In [390]: filter(lambda x: x[1]!=0.0, [ (i,1.0-numpy.linspace(0,1,i)[-1]) for i in range(2,200) ]) Out[390]: [(50, 1.1102230246251565e-016), (99, 1.1102230246251565e-016), (104, 1.1102230246251565e-016), (108, 1.1102230246251565e-016), (162, 1.1102230246251565e-016), (188, 1.1102230246251565e-016), (197, 1.1102230246251565e-016), (198, 1.1102230246251565e-016)] I know it's not a good idea to count on floating point equality in general, but still it doesn't seem too much to expect that the first and last values returned by linspace are exactly the values asked for if they both have exact floating point representations. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon Aug 21 11:31:14 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Aug 2006 11:31:14 -0400 Subject: [Numpy-discussion] linspace upper bound not met?
In-Reply-To: References: Message-ID: The definition of linspace is:

def linspace(start, stop, num=50, endpoint=True, retstep=False):
    """Return evenly spaced numbers.

    Return 'num' evenly spaced samples from 'start' to 'stop'. If
    'endpoint' is True, the last sample is 'stop'. If 'retstep' is
    True then return the step value used.
    """
    num = int(num)
    if num <= 0:
        return array([], float)
    if endpoint:
        if num == 1:
            return array([float(start)])
        step = (stop-start)/float((num-1))
    else:
        step = (stop-start)/float(num)
    y = _nx.arange(0, num) * step + start
    if retstep:
        return y, step
    else:
        return y

The simplest way to achieve this goal is to add right after the assignment to y two new lines:

    if endpoint:
        y[-1] = float(stop)

Cheers, Alan Isaac PS I'll take this opportunity to state again my opinion that in the degenerate case num=1 that if endpoint=True then linspace should return stop rather than start. (Otherwise endpoint is ignored. But I do not expect anyone to agree.) From wbaxter at gmail.com Mon Aug 21 12:27:20 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 Aug 2006 01:27:20 +0900 Subject: [Numpy-discussion] linspace upper bound not met? In-Reply-To: References: Message-ID: Out of curiosity I checked on what matlab does. It does explicitly set the last value to 'stop' to avoid the roundoff issue. In numpy terms, it does something like y = r_[start+r_[0:num-1]*(stop-start)/(num-1.0), stop] But for numpy it's probably more efficient to just do the 'y[-1] = stop' like you say. --bb On 8/22/06, Alan G Isaac wrote: > The definition of linspace is: > def linspace(start, stop, num=50, endpoint=True, retstep=False): > """Return evenly spaced numbers. > > Return 'num' evenly spaced samples from 'start' to 'stop'. If > 'endpoint' is True, the last sample is 'stop'. If 'retstep' is > True then return the step value used. 
> """ > num = int(num) > if num <= 0: > return array([], float) > if endpoint: > if num == 1: > return array([float(start)]) > step = (stop-start)/float((num-1)) > else: > step = (stop-start)/float(num) > y = _nx.arange(0, num) * step + start > if retstep: > return y, step > else: > return y > > The simplest way to achieve this goal is to add right after > the assignment to y two new lines: > if endpoint: > y[-1] = float(stop) > > Cheers, > Alan Isaac > > PS I'll take this opportunity to state again my opinion that > in the denerate case num=1 that if endpoint=True then > linspace should return stop rather than start. (Otherwise > endpoint is ignored. But I do not expect anyone to agree.) > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Mon Aug 21 14:55:15 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 21 Aug 2006 11:55:15 -0700 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple Message-ID: I was a bit surprised today to find that numpy.random.rand doesn't take in a tuple as input for the dimensions of the desired array. I am very used to using a tuple for zeros, ones. Also, wouldn't this mean that it would not be possible to add other non-keyword arguments to rand later? 
-- David Grant http://www.davidgrant.ca From oliphant at ee.byu.edu Mon Aug 21 15:02:05 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Aug 2006 13:02:05 -0600 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: <44EA032D.4040309@ee.byu.edu> David Grant wrote: >I was a bit surprised today to find that numpy.random.rand doesn't >take in a tuple as input for the dimensions of the desired array. I am >very used to using a tuple for zeros, ones. Also, wouldn't this mean >that it would not be possible to add other non-keyword arguments to >rand later? > > > numpy.random.rand?? Return an array of the given dimensions which is initialized to random numbers from a uniform distribution in the range [0,1). rand(d0, d1, ..., dn) -> random values Note: This is a convenience function. If you want an interface that takes a tuple as the first argument use numpy.random.random_sample(shape_tuple). From aisaac at american.edu Mon Aug 21 15:14:20 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Aug 2006 15:14:20 -0400 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: On Mon, 21 Aug 2006, David Grant apparently wrote: > I was a bit surprised today to find that numpy.random.rand > doesn't take in a tuple as input for the dimensions of the > desired array. I am very used to using a tuple for zeros, > ones. Also, wouldn't this mean that it would not be > possible to add other non-keyword arguments to rand later? You will find a long discussion of this in the archives. Cheers, Alan Isaac PS Thank you for improving the average predictive accuracy of economists. (You'll understand when you read the thread.) 
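The two calling conventions Travis contrasts, side by side (shape chosen arbitrarily):

```python
import numpy as np

# Convenience form: dimensions as separate arguments.
a = np.random.rand(2, 3)

# Tuple form, consistent with zeros() and ones().
b = np.random.random_sample((2, 3))
```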
From robert.kern at gmail.com Mon Aug 21 15:07:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Aug 2006 14:07:55 -0500 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: David Grant wrote: > I was a bit surprised today to find that numpy.random.rand doesn't > take in a tuple as input for the dimensions of the desired array. I am > very used to using a tuple for zeros, ones. Also, wouldn't this mean > that it would not be possible to add other non-keyword arguments to > rand later? Don't use rand(), then. Use random(). rand()'s sole purpose in life is to *not* take a tuple. If you like, you can read the archives on the several (long) discussions on this and why things are the way they are now. We finally achieved something resembling consensus, so please let's not resurrect this argument. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mithrandir42 at web.de Mon Aug 21 16:07:12 2006 From: mithrandir42 at web.de (N. Volbers) Date: Mon, 21 Aug 2006 22:07:12 +0200 Subject: [Numpy-discussion] error message when using insufficient dtype dict Message-ID: <44EA1270.6060805@web.de> Hello everyone, I had quite some trouble figuring out the _correct_ way to create heterogeneous arrays. What I wanted to do was something like the following: >>> numpy.array( [(0,0,0)], dtype={'names':['a','b','c'], 'formats':['f4','f4','f4']}) This works fine. Now, let's do something wrong, e.g. leave out the 'formats' specifier: >>> numpy.array( [(0,0,0)], dtype={'names':['a','b','c']}) Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.4/site-packages/numpy/core/_internal.py", line 53, in _usefields names, formats, offsets, titles = _makenames_list(adict) File "/usr/lib/python2.4/site-packages/numpy/core/_internal.py", line 21, in _makenames_list raise ValueError, "entry not a 2- or 3- tuple" ValueError: entry not a 2- or 3- tuple This error message was totally unclear to me. After reading a little on the scipy wiki I finally realized that (maybe) numpy internally converts the dict with the names and the formats to a list of 2-tuples of the form (name, format). Since no formats were given, these 2-tuples were invalid. I would suggest a check for the required dict keys and some meaningful error message like: "The dtype dictionary must at least contain the 'names' and the 'formats' items." Keep up the great work, Niklas. From davidgrant at gmail.com Mon Aug 21 19:26:10 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 21 Aug 2006 16:26:10 -0700 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: On 8/21/06, Robert Kern wrote: > > David Grant wrote: > > I was a bit surprised today to find that numpy.random.rand doesn't > > take in a tuple as input for the dimensions of the desired array. I am > > very used to using a tuple for zeros, ones. Also, wouldn't this mean > > that it would not be possible to add other non-keyword arguments to > > rand later? > > Don't use rand(), then. Use random(). rand()'s sole purpose in life is to > *not* > take a tuple. If you like, you can read the archives on the several (long) > discussions on this and why things are the way they are now. We finally > achieved > something resembling consensus, so please let's not resurrect this > argument. Thanks everyone. My only question now is why there is random_sample and random. My guess is that one is there for compatibility with older releases and so I'm not bothered by it. 
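Returning to the heterogeneous-dtype construction discussed a little earlier in the thread: the dictionary form needs both 'names' and 'formats', and the list-of-(name, format) tuples that numpy converts it to internally can also be written directly. A minimal working pair:

```python
import numpy as np

# Dictionary form: 'names' and 'formats' are both required.
a = np.array([(0, 0, 0)],
             dtype={'names': ['a', 'b', 'c'],
                    'formats': ['f4', 'f4', 'f4']})

# Equivalent explicit list of (name, format) tuples.
b = np.array([(0, 0, 0)], dtype=[('a', 'f4'), ('b', 'f4'), ('c', 'f4')])
```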
-- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Aug 21 19:38:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Aug 2006 18:38:05 -0500 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: David Grant wrote: > Thanks everyone. My only question now is why there is random_sample and > random. My guess is that one is there for compatibility with older > releases and so I'm not bothered by it. Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Mon Aug 21 19:48:09 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 Aug 2006 08:48:09 +0900 Subject: [Numpy-discussion] numpy.random.rand function doesn't take tuple In-Reply-To: References: Message-ID: If you like, here's a rand function that takes either a sequence or a tuple. I use this for interactive sessions. def rand(*shape): """ Return an array of the given dimensions which is initialized to random numbers from a uniform distribution in the range [0,1). rand(d0, d1, ..., dn) -> random values or rand((d0, d1, ..., dn)) -> random values """ if len(shape) == 0 or not hasattr(shape[0],'__getitem__'): return numpy.random.rand(*shape) else: if len(shape) != 1: raise TypeError('Argument should either be a tuple or an argument list') else: return numpy.random.rand(*shape[0]) On 8/22/06, David Grant wrote: > > > > On 8/21/06, Robert Kern wrote: > > > > David Grant wrote: > > > I was a bit surprised today to find that numpy.random.rand doesn't > > > take in a tuple as input for the dimensions of the desired array. I am > > > very used to using a tuple for zeros, ones. 
Also, wouldn't this mean > > > that it would not be possible to add other non-keyword arguments to > > > rand later? > > > > Don't use rand(), then. Use random(). rand()'s sole purpose in life is > > to *not* > > take a tuple. If you like, you can read the archives on the several > > (long) > > discussions on this and why things are the way they are now. We finally > > achieved > > something resembling consensus, so please let's not resurrect this > > argument. > > > > Thanks everyone. My only question now is why there is random_sample and > random. My guess is that one is there for compatibility with older releases > and so I'm not bothered by it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Mon Aug 21 21:09:43 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 21 Aug 2006 18:09:43 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array Message-ID: <200608211809.43642.haase@msg.ucsf.edu> Hi, We just spend some time debugging some numpy image analysis code where we finally noticed that our file was byte-swapped ;-). Even though we got much crazier numbers, the test below already shows one bug in the a.real.max() line. My numpy.__version__ is '1.0b3.dev3015' and this is run on pentium (little endian) Linux (both 64bit and 32bit version give same results): >>> a = N.arange(4, dtype='>c8') >>> a [ 0. +0.00000000e+00j 0. +1.00000000e+00j 0. +2.00000000e+00j 0. +3.00000000e+00j] >>> a.max() (3+0j) >>> a.real.max() 0.0 >>> a.imag.max() 4.60060298822e-41 >>> >>> a = N.arange(4, dtype='>> a.max() (3+0j) >>> a.real.max() 3.0 >>> a.imag.max() 0.0 >>> Can someone test this on a newer SVN version ? 
Thanks, Sebastian Haase From lists.steve at arachnedesign.net Mon Aug 21 22:08:29 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Mon, 21 Aug 2006 22:08:29 -0400 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> References: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: <8A9B3015-5136-430C-A48A-0BAC1EE254F8@arachnedesign.net> Hi Sebastian, > We just spend some time debugging some numpy image analysis code > where we finally noticed that our file was byte-swapped ;-). > Even though we got much crazier numbers, > the test below already shows one bug in the a.real.max() line. > My numpy.__version__ is '1.0b3.dev3015' and this is run on > pentium (little > endian) Linux (both 64bit and 32bit version give same results): I'm getting the same results you are. I just recompiled numpy to the latest svn (1.0b4.dev3050) and am running your example on intel (32 bit) Mac OS X.4.7. -steve From fullung at gmail.com Tue Aug 22 04:46:22 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 22 Aug 2006 10:46:22 +0200 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: Hello all > > > >>> a = N.arange(4, dtype='>c8') > >>> a.imag.max() > 4.60060298822e-41 Confirmed on Windows 32-bit with 1.0b4.dev3050. I created a ticket here: http://projects.scipy.org/scipy/numpy/ticket/265 Regards, Albert From misa-v-v at yahoo.co.jp Tue Aug 22 07:58:06 2006 From: misa-v-v at yahoo.co.jp (=?iso-2022-jp?B?bWlzYQ==?=) Date: Tue, 22 Aug 2006 11:58:06 -0000 Subject: [Numpy-discussion] (no subject) Message-ID: :?? INFORMATION ?????????????????????????: ?????????????????????? ???????????? http://love-match.bz/pc/07 :??????????????????????????????????: *????*:.?. .?.:*????*:.?..?:*????*:.?..?:**????* ??????????????????????????????????? ??? ???????????????????Love?Match? 
?----------------------------------------------------------------- ??????????????????????? ??????????????????????? ??????????????????????? ??????????????????????? ??????????????????????? ??????????????????????? ?----------------------------------------------------------------- ????????????????http://love-match.bz/pc/07 ??????????????????????????????????? ??? ?????????????????????? ?----------------------------------------------------------------- ???????????????????????????? ?----------------------------------------------------------------- ????????????????????????????? ??????????????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????????????????????????????? ?----------------------------------------------------------------- ???????????????????????????????? ????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????????????????? ?----------------------------------------------------------------- ???????????????????????? ?????????????????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????????????????????????? ?----------------------------------------------------------------- ??????????????????????????? ????????????????????????????????? ?http://love-match.bz/pc/07 ?----------------------------------------------------------------- ????????????????????????? ?----------------------------------------------------------------- ????????????????????????? ????????????????????????????????? ?http://love-match.bz/pc/07 ??????????????????????????????????? ??? ??500???????????????? ?----------------------------------------------------------------- ???????/???? ???????????????????? ????????????????????????????????? ???????????????????????????????? ?????????????????????????? ?????????????????????????????? ?[????] 
http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/?????? ?????????????????????????????????? ??????????????????????????????????? ?????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/????? ?????????????????????????????????? ???????????????????????????????? ?????????????????????????(^^) ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/???? ??????????????????????????????? ?????????????????????????????? ?????????????????????????????? ???????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ????????/??? ???????????????1??? ????????????????????????? ????????????????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????/??????? ????18?????????????????????????? ????????????????????????????? ????????????????????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???`????/??? ????????????????????? ?????????????????????? ?????????????? ?[????] http://love-match.bz/pc/07 ?----------------------------------------------------------------- ???????????????????? ?????????????????????????????????? ????????????? ??------------------------------------------------------------- ???????????????????????????????? ??[??????????]?http://love-match.bz/pc/?07 ??------------------------------------------------------------- ????????????????????? ??????????????????????????? ??????????????????? ??????????????????????????????? ??[??????????]?http://love-match.bz/pc/07 ?????????????????????????????????? ??????????3-6-4-533 ?????? 
From oliphant.travis at ieee.org Tue Aug 22 12:36:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 10:36:14 -0600 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608211809.43642.haase@msg.ucsf.edu> References: <200608211809.43642.haase@msg.ucsf.edu> Message-ID: <44EB327E.1040302@ieee.org> Sebastian Haase wrote: > Hi, > We just spend some time debugging some numpy image analysis code > where we finally noticed that our file was byte-swapped ;-). > Even though we got much crazier numbers, > the test below already shows one bug in the a.real.max() line. > My numpy.__version__ is '1.0b3.dev3015' and this is run on pentium (little > endian) Linux (both 64bit and 32bit version give same results): > > I just fixed two bugs with respect to this issue which were introduced at various stages of development 1) The real and imag attribute getting functions were not respecting the byte-order of the data-type object of the array on creation of the "floating-point" equivalent data-type --- this one was introduced on the change to have byteorder part of the data-type object itself. 2) The copyswapn function for complex arrays was not performing two sets of swaps. It was performing one large swap (which had the effect of moving the real part to the imaginary part and vice-versa). These bug-fixes will be in 1.0b4 -Travis From haase at msg.ucsf.edu Tue Aug 22 12:33:53 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 09:33:53 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: References: Message-ID: <200608220933.54066.haase@msg.ucsf.edu> Hi, probably related to this is that arr[2].real is read-only ... I noticed that you cannot assign to arr[2].real : >>> a[2].real =6 Traceback (most recent call last): File "", line 1, in ?
TypeError: attribute 'real' of 'genericscalar' objects is not writable >>> a.real[2] =6 >>> >>> a[2].real.flags CONTIGUOUS : True FORTRAN : True OWNDATA : True WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False >>> a.real[2].flags WRITEABLE : False >>> >>> a.real.flags CONTIGUOUS : False FORTRAN : False OWNDATA : False WRITEABLE : True >>> a[2].flags CONTIGUOUS : True FORTRAN : True OWNDATA : True WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False Is the "not writable" restriction necessary ? Thanks, Sebastian Haase On Tuesday 22 August 2006 01:46, Albert Strasheim wrote: > Hello all > > > > > > > >>> a = N.arange(4, dtype='>c8') > > >>> a.imag.max() > > > > 4.60060298822e-41 > > Confirmed on Windows 32-bit with 1.0b4.dev3050. > > I created a ticket here: > > http://projects.scipy.org/scipy/numpy/ticket/265 > > Regards, > > Albert > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Tue Aug 22 15:15:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:15:36 -0700 Subject: [Numpy-discussion] bug is arr.real for byteswapped array In-Reply-To: <200608220933.54066.haase@msg.ucsf.edu> References: <200608220933.54066.haase@msg.ucsf.edu> Message-ID: <44EB57D8.5000200@ieee.org> Sebastian Haase wrote: > Hi, > probably related to this is that > arr[2].real is read-only ... > > I noticed that you cannot assign > to arr[2].real : > No, that's unrelated. 
The problem is that arr[2] is a scalar and so it is immutable. When an array scalar is created you get a *copy* of the data. Setting it would not have the effect you imagine as the original data would go unchanged. The only exception to this is the array of type "void" which *does not* copy the data. -Travis From haase at msg.ucsf.edu Tue Aug 22 15:11:03 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 12:11:03 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux Message-ID: <200608221211.03343.haase@msg.ucsf.edu> Hi, I just ran into more problems with my SWIG typemaps. In the C api the current enum for NPY_INT is 5 NPY_LONG is 7 to match overloaded function I need to check these type values. On 64bit all works fine: my 32bit int function matches NPY_INT - which is "int" in C/C++ my 64bit int function matches NPY_LONG - which is "long" in C/C++ but on 32bit Linux the 32bit int function matches NPY_LONG there is no NPY_INT on 32bit that is: if I have a non overloaded C/C++ function that expects a C "int" - i.e. a 32bit int - I have write different function matching rules !!! REQUEST: Can a 32bit int array get the typenumber code NPY_INT on 32bit Linux !? Then it would work for both 32bit Linux and 64bit Linux the same ! (I don't know about 64bit windows - I have heard that both C int and C long are 64bit - so that is screwed in any case .... ) Thanks, Sebastian Haase From oliphant.travis at ieee.org Tue Aug 22 15:30:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:30:54 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221211.03343.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> Message-ID: <44EB5B6E.5020908@ieee.org> Sebastian Haase wrote: > Hi, > I just ran into more problems with my SWIG > typemaps. 
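[Editor's note: Travis's point above — that arr[2] returns an immutable array-scalar *copy* while arr.real is a writable *view* into the same buffer — can be sketched with a small example. This is an illustration against a modern numpy, not code from the thread; the exact exception type raised for the scalar assignment has varied across releases, so both plausible types are caught.]

```python
import numpy as np

a = np.arange(4, dtype=np.complex64)

# a[2] is an array scalar: a copy of the element, and immutable,
# so setting its attribute is rejected and would not change `a` anyway
try:
    a[2].real = 6
except (AttributeError, TypeError):
    pass

# a.real is a view into the original buffer, so assignment sticks
a.real[2] = 6
assert a[2] == 6 + 0j
```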
> In the C api the current enum for > NPY_INT is 5 > NPY_LONG is 7 > > to match overloaded function I need to check these type values. > > On 64bit all works fine: > my 32bit int function matches NPY_INT - which is "int" in C/C++ > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > > but on 32bit Linux > the 32bit int function matches NPY_LONG > there is no NPY_INT on 32bit > Yes there is. Both NPY_INT and NPY_LONG are always there. One matches the int and one matches the long. Perhaps you are confused about what the special defines NPY_INT32 match to? The behavior is that the 'long' type gets "first-dibs" then the 'longlong' type gets a crack. Finally, the 'int' type is chosen. The first one that matches the bit-type is used. > that is: if I have a non overloaded C/C++ function that expects a C "int" > - i.e. a 32bit int - I have write different function matching rules !!! > What you need to do is stop trying to match bit-widths and instead match c-types. That's why NPY_INT and NPY_LONG are both there. Let me know if you have further questions. I don't really understand what the issue is. -Travis From boyle5 at llnl.gov Tue Aug 22 15:38:00 2006 From: boyle5 at llnl.gov (James Boyle) Date: Tue, 22 Aug 2006 12:38:00 -0700 Subject: [Numpy-discussion] numpy/Numeric co-existence Message-ID: <25676ec40523d25046073f1c37ea49e3@llnl.gov> I have some codes which require a Numeric array and others which require a numpy array. I have no control over either code, and not the time to convert all to numpy if I did. The problem is this - say I have a routine that returns a numpy array as a result and I wish to do something to this array using a code that uses Numeric. Just passing the numpy array to the numeric code does not work. In my case the Numeric code thinks that the numpy float is a long int, this is not good. So what does one do in the interim? There are some legacy codes which will never be converted to numpy. 
I have seen discussion as to how to convert Numeric -> numpy, but not how the two can play together. I can appreciate the strong desire to eliminate having two systems, but the practical aspects of getting things done must also be considered. I am using numpy 1.0b1 and Numeric 23.7 . Thanks for any enlightenment - perhaps I am missing something obvious. --Jim From oliphant.travis at ieee.org Tue Aug 22 15:39:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 12:39:37 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221211.03343.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> Message-ID: <44EB5D79.90806@ieee.org> Sebastian Haase wrote: > Hi, > I just ran into more problems with my SWIG > typemaps. > In the C api the current enum for > NPY_INT is 5 > NPY_LONG is 7 > > to match overloaded function I need to check these type values. > > On 64bit all works fine: > my 32bit int function matches NPY_INT - which is "int" in C/C++ > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > As you noted below, this is not always the case. You can't assume that 64-bit means "long" Let me assume that you are trying to write functions for each of the "data-types". You can proceed in a couple of ways: 1) Use the basic c-types 2) Use "bit-width" types (npy_int32, npy_int64, etc...) The advantage of the former is that it avoids any confusion in terms of what kind of c-type it matches. This is really only important if you are trying to interface with external code that uses basic c-types. The advantage of the latter is that you don't have to write a redundant routine (i.e. on 32-bit linux the int and long routines should be identical machine code), but you will have to be careful in matching to a c-type should you need to call some external routine. 
The current system gives you as many choices as possible (you can either match external code using the c-types) or you can write to a particular bit-width. This is accomplished through comprehensive checks defined in the arrayobject.h file. -Travis From robert.kern at gmail.com Tue Aug 22 15:49:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 Aug 2006 14:49:17 -0500 Subject: [Numpy-discussion] numpy/Numeric co-existence In-Reply-To: <25676ec40523d25046073f1c37ea49e3@llnl.gov> References: <25676ec40523d25046073f1c37ea49e3@llnl.gov> Message-ID: James Boyle wrote: > I have some codes which require a Numeric array and others which > require a numpy array. > I have no control over either code, and not the time to convert all to > numpy if I did. > The problem is this - say I have a routine that returns a numpy array > as a result and I wish to do something to this array using a code that > uses Numeric. Just passing the numpy array to the numeric code does > not work. In my case the Numeric code thinks that the numpy float is a > long int, this is not good. So what does one do in the interim? There > are some legacy codes which will never be converted to numpy. > > I have seen discussion as to how to convert Numeric -> numpy, but not > how the two can play together. I can appreciate the strong desire to > eliminate having two systems, but the practical aspects of getting > things done must also be considered. > > I am using numpy 1.0b1 and Numeric 23.7 . Upgrade to Numeric 24.2 and use Numeric.asarray(numpy_array) and numpy.asarray(numeric_array) at the interfaces between your codes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kortmann at ideaworks.com Tue Aug 22 16:27:11 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 22 Aug 2006 13:27:11 -0700 (PDT) Subject: [Numpy-discussion] Version 1.0b3 In-Reply-To: References: Message-ID: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> Since no one has downloaded 1.0b3 yet, if someone wants to put up the windows version for python2.3 i would be more than happy to be the first person to download it :) From haase at msg.ucsf.edu Tue Aug 22 16:44:32 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 22 Aug 2006 13:44:32 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <44EB5B6E.5020908@ieee.org> References: <200608221211.03343.haase@msg.ucsf.edu> <44EB5B6E.5020908@ieee.org> Message-ID: <200608221344.33145.haase@msg.ucsf.edu> Thanks for the reply, see question below... On Tuesday 22 August 2006 12:30, Travis Oliphant wrote: > Sebastian Haase wrote: > > Hi, > > I just ran into more problems with my SWIG > > typemaps. > > In the C api the current enum for > > NPY_INT is 5 > > NPY_LONG is 7 > > > > to match overloaded function I need to check these type values. > > > > On 64bit all works fine: > > my 32bit int function matches NPY_INT - which is "int" in C/C++ > > my 64bit int function matches NPY_LONG - which is "long" in C/C++ > > > > but on 32bit Linux > > the 32bit int function matches NPY_LONG > > there is no NPY_INT on 32bit > > Yes there is. Both NPY_INT and NPY_LONG are always there. One matches > the int and one matches the long. > > Perhaps you are confused about what the special defines NPY_INT32 match to? > > The behavior is that the 'long' type gets "first-dibs" then the > 'longlong' type gets a crack. Finally, the 'int' type is chosen. The > first one that matches the bit-type is used. > This explains it - my specific function overloads only one of its two array arguments (i.e. 
allow many different types) - the second one must be a C "int". [(a 32bit int) - but SWIG matches the "C signature" ] But what is the type number of " > that is: if I have a non overloaded C/C++ function that expects a C "int" > > - i.e. a 32bit int - I have write different function matching rules !!! > > What you need to do is stop trying to match bit-widths and instead match > c-types. That's why NPY_INT and NPY_LONG are both there. If you are referring to use of the sizeof() operator - I'm not doing that. Thanks as always for your quick and careful replies. - Sebastian From oliphant.travis at ieee.org Tue Aug 22 20:34:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 22 Aug 2006 17:34:26 -0700 Subject: [Numpy-discussion] why is int32 a NPY_LONG on 32bitLinux & NPY_INT on 64bitLinux In-Reply-To: <200608221344.33145.haase@msg.ucsf.edu> References: <200608221211.03343.haase@msg.ucsf.edu> <44EB5B6E.5020908@ieee.org> <200608221344.33145.haase@msg.ucsf.edu> Message-ID: <44EBA292.8010806@ieee.org> Sebastian Haase wrote: > This explains it - my specific function overloads only one of its two array > arguments (i.e. allow many different types) - the second one must be a > C "int". > [(a 32bit int) - but SWIG matches the "C signature" ] > But what is the type number of " But on 32bitLinux I get NPY_LONG because of that rule. > > My SWIG typemaps want to "double check" that a C function expecting c-type > "int" gets a NPY_INT - (a "long" needs a "NPY_LONG") > Perhaps I can help you do what you want without making assumptions about the platform. I'll assume you are matching on an int* signature and want to "translate" that to an integer array of the correct bit-width. So, you have a PyArrayObject as input I'll call self Just check: (PyArray_ISSIGNED(self) && PyArray_ITEMSIZE(self)==SIZEOF_INT) For your type-map check. This will work on all platforms and allow signed integers of the right type. 
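[Editor's note: the C-side check Travis suggests has a straightforward Python-level analogue — match on signedness and item size rather than on a particular type number. A sketch, using the modern numpy spellings of the same c-type correspondence (np.intc for C int):]

```python
import numpy as np

a = np.zeros(5, dtype=np.int32)

# accept any signed integer whose width matches a C int,
# regardless of whether the array is typed as NPY_INT or NPY_LONG
assert a.dtype.kind == 'i'
assert a.dtype.itemsize == np.dtype(np.intc).itemsize

# the enum values quoted earlier in the thread are fixed:
# NPY_INT is 5 ('i', C int), NPY_LONG is 7 ('l', C long)
assert np.dtype('i').num == 5
assert np.dtype('l').num == 7
```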
> I don't know what the solution should be - but maybe the rule should be > changed based on the assumption that "int" in more common !? > That's not going to happen at this point. Besides in the Python world, the fact that Python integers are "long" means that the "long" is the more common 32-bit integer on 32-bit machines. -Travis From carlosjosepita at yahoo.com.ar Tue Aug 22 22:51:01 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Tue, 22 Aug 2006 23:51:01 -0300 (ART) Subject: [Numpy-discussion] Array pooling Message-ID: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Hi! I'm writting a real time sound synthesis framework where processing units are interconnected via numpy arrays. These buffers are all the same size and type, so it would be easy and convenient pooling them in order to avoid excesive creation/destruction of arrays (consider that thousands of them are acquired and released per second, but just a few dozens used at the same time). But first I would like to know if numpy implements some pooling mechanism by itself. Could you give me some insight on this? Also, is it possible to obtain an uninitialized array? I mean, sometimes I don't feel like wasting valuable cpu clocks filling arrays with zeros, ones or whatever. Thank you in advance. Regards, Carlos --------------------------------- Pregunt?. Respond?. Descubr?. Todo lo que quer?as saber, y lo que ni imaginabas, est? en Yahoo! Respuestas (Beta). Probalo ya! -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Wed Aug 23 08:00:56 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 23 Aug 2006 13:00:56 +0100 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823025101.30020.qmail@web50302.mail.yahoo.com> References: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Message-ID: <20060823130056.576e41cc.simon@arrowtheory.com> On Tue, 22 Aug 2006 23:51:01 -0300 (ART) Carlos Pita wrote: > Hi! 
I'm writting a real time sound synthesis framework where processing units are interconnected via numpy arrays. These buffers are all the same size and type, so it would be easy and convenient pooling them in order to avoid excesive creation/destruction of arrays (consider that thousands of them are acquired and released per second, but just a few dozens used at the same time). But first I would like to know if numpy implements some pooling mechanism by itself. I don't think so. > Could you give me some insight on this? Also, is it possible to obtain an uninitialized array? numpy.empty > I mean, sometimes I don't feel like wasting valuable cpu clocks filling arrays with zeros, ones or whatever. > Thank you in advance. > Regards, > Carlos Sounds like fun. Simon. > > > > > > --------------------------------- > Pregunt?. Respond?. Descubr?. > Todo lo que quer?as saber, y lo que ni imaginabas, > est? en Yahoo! Respuestas (Beta). > Probalo ya! -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From charlesr.harris at gmail.com Tue Aug 22 23:31:32 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 22 Aug 2006 21:31:32 -0600 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823025101.30020.qmail@web50302.mail.yahoo.com> References: <20060823025101.30020.qmail@web50302.mail.yahoo.com> Message-ID: On 8/22/06, Carlos Pita wrote: > > Hi! I'm writting a real time sound synthesis framework where processing > units are interconnected via numpy arrays. These buffers are all the same > size and type, so it would be easy and convenient pooling them in order to > avoid excesive creation/destruction of arrays (consider that thousands of > them are acquired and released per second, but just a few dozens used at the > same time). But first I would like to know if numpy implements some pooling > mechanism by itself. Could you give me some insight on this? 
Also, is it > possible to obtain an uninitialized array? I mean, sometimes I don't feel > like wasting valuable cpu clocks filling arrays with zeros, ones or > whatever. > Is there any reason to keep allocating arrays if you are just using them as data buffers? It seems you should be able to reuse them. If you wanted to be fancy you could keep them in a list, which would retain a reference and keep them from being garbage collected. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlosjosepita at yahoo.com.ar Wed Aug 23 00:11:03 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Wed, 23 Aug 2006 01:11:03 -0300 (ART) Subject: [Numpy-discussion] Array pooling In-Reply-To: Message-ID: <20060823041103.64388.qmail@web50302.mail.yahoo.com> One reason is to use operator syntax: buf1 = buf2 + buf3, instead of add(buf2,buf3, buf1). The other is to spare the final user (synth programmer) any buffer bookkeeping. My idea was to keep track of pooled buffers' reference counts, so that those currently unused would have a refcount of 1 and could be safely deleted (well, if pool policy variables allow it). But as buffers are acquired all the time, even a simple (pure-python) pooling policy implementation is pretty time consuming. In fact, I have benchmarked this against simply creating new zeros-arrays every time, and the non-pooling version just runs faster. That was when I thought that numpy could be doing some internal pooling by itself. Regards, Carlos Is there any reason to keep allocating arrays if you are just using them as data buffers? It seems you should be able to reuse them. If you wanted to be fancy you could keep them in a list, which would retain a reference and keep them from being garbage collected. --------------------------------- Pregunt?. Respond?. Descubr?. Todo lo que quer?as saber, y lo que ni imaginabas, est? en Yahoo! Respuestas (Beta). Probalo ya! 
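[Editor's note: the two ideas raised in this thread — uninitialized allocation via numpy.empty and reusing one preallocated buffer instead of pooling — can be combined as below. Sizes are arbitrary; this is a sketch of the pattern, not code from the thread.]

```python
import numpy as np

# allocate once, uninitialized: no cost for filling with zeros
buf = np.empty(1024, dtype=np.complex64)

a = np.zeros(1024, dtype=np.complex64)
b = np.ones(1024, dtype=np.complex64)

# write results into the existing buffer instead of allocating a new array
np.add(a, b, out=buf)
assert buf[0] == 1 + 0j
np.multiply(buf, b, out=buf)  # in-place reuse works too
assert buf[0] == 1 + 0j
```

Note this gives up the `buf1 = buf2 + buf3` operator syntax Carlos wants to keep, which is exactly the trade-off under discussion.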
-------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Aug 23 10:39:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 Aug 2006 08:39:44 -0600 Subject: [Numpy-discussion] Array pooling In-Reply-To: <20060823041103.64388.qmail@web50302.mail.yahoo.com> References: <20060823041103.64388.qmail@web50302.mail.yahoo.com> Message-ID: Hi Carlos, On 8/22/06, Carlos Pita wrote: > > One reason is to use operator syntax: buf1 = buf2 + buf3, instead of > add(buf2,buf3, buf1). The other is to spare the final user (synth > programmer) any buffer bookkeeping. > I see. My idea was to keep track of pooled buffers' reference counts, so that those > currently unused would have a refcount of 1 and could be safely deleted > (well, if pool policy variables allow it). But as buffers are acquired all > the time, even a simple (pure-python) pooling policy implementation is > pretty time consuming. In fact, I have benchmarked this against simply > creating new zeros-arrays every time, and the non-pooling version just runs > faster. That was when I thought that numpy could be doing some internal > pooling by itself. > I think the language libraries themselves must do some sort of pooling, at least the linux ones seem to. C++ programs do a lot of creation/destruction of structures on the heap and I have found the overhead noticeable but surprisingly small. Numpy arrays are a couple of layers of abstraction up, so maybe not quite as fast. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Wed Aug 23 14:45:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 11:45:29 -0700 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions Message-ID: <44ECA249.3030007@ieee.org> I'm working on some macros that will allow extensions to be "interruptable" (i.e. with Ctrl-C). 
The idea came from SAGE but the implementation is complicated by the possibility of threads and making sure to handle clean-up code correctly when the interrupt returns. I'd like to get this into 1.0 final. Anything needed will not require re-compilation of extension modules built for 1.0b2 however. This will be strictly "extra" and if an extension module doesn't use it there will be no problems. Step 1: Define the interface. Here are a couple of draft proposals. Please comment on them. 1) General purpose interface NPY_SIG_TRY { [code] } NPY_SIG_EXCEPT(signum) { [interrupt handling return] } NPY_SIG_ELSE [normal return] The idea of signum is to hold the signal actually caught. 2) Simpler interface NPY_SIG_TRY { [code] } NPY_SIG_EXCEPT_GOTO(label) [normal return] label: [interrupt handling return] C-extensions often use the notion of a label to handle failure code. If anybody has any thoughts on this, they would be greatly appreciated. Step 2: Implementation. I have the idea to have a single interrupt handler (defined globally in NumPy) that basically uses longjmp to return to the section of code corresponding to the thread that is handling the interrupt. I had thought to use a global variable containing a linked list of jmp_buf structures with a thread-id attached (PyThread_get_thread_ident()) so that the interrupt handler can search it to see if the thread has registered a return location. If it has not, then the interrupt handler will just return normally. In this way a thread that calls setjmpbuf will be sure to return to the correct place when it handles the interrupt. Concern: My thinking is that this mechanism should work whether or not the GIL is held so that we don't have to worry about whether or not the GIL is held except in the interrupt handling case (when Python exceptions are to be set). But, honestly, this gets very confusing.
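[Editor's note: for intuition only, the control flow of the proposed NPY_SIG_TRY / NPY_SIG_EXCEPT bracket resembles, at the Python level, installing a temporary handler around the long-running section and restoring it afterwards. This is an analogy, not the C implementation being discussed, which has to use setjmp/longjmp and per-thread jump buffers:]

```python
import signal

caught = []

def handler(signum, frame):
    # analogue of NPY_SIG_EXCEPT: record which signal arrived and bail out
    caught.append(signum)
    raise KeyboardInterrupt

old = signal.signal(signal.SIGINT, handler)  # analogue of NPY_SIG_TRY
try:
    total = sum(i * i for i in range(100_000))  # stand-in for the long loop
except KeyboardInterrupt:
    total = None  # [interrupt handling return]
finally:
    signal.signal(signal.SIGINT, old)  # always restore the previous handler

assert total == 333328333350000  # no Ctrl-C arrived, so the normal path ran
```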
The sigjmp / longjmp mechanism for handling interrupts is not recommended under windows (not sure about mingw), but there we could possibly use Microsoft's __try and __except extension to implement. Initially, it would be "un-implemented" on platforms where it didn't work. Any comments are greatly appreciated -Travis From paul_midgley2000 at yahoo.co.uk Wed Aug 23 15:12:42 2006 From: paul_midgley2000 at yahoo.co.uk (Paul Midgley) Date: Wed, 23 Aug 2006 19:12:42 +0000 (GMT) Subject: [Numpy-discussion] Newbie question In-Reply-To: Message-ID: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Hello I have been interested in using python for some time for carrying out calculations, but I have not been able to determine if it is possible to use it to print out a report at the end. What I want is to use it similar to Mathcad producing structured equations in line with the text, graphs etc. I can produce decent reports using MS Word or open office, but these will not do the calculations and the analysis work that can be done with python and similar languages. What I am trying to achieve is calculations in a template form where the raw data can be put into it and carries out the calculations and it can be printed out in the form of a report. Any help would be appreciated. Regards Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From john at nnytech.net Wed Aug 23 15:23:59 2006 From: john at nnytech.net (John Byrnes) Date: Wed, 23 Aug 2006 19:23:59 +0000 Subject: [Numpy-discussion] Newbie question In-Reply-To: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> References: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Message-ID: <200608231924.10768.john@nnytech.net> On Wednesday 23 August 2006 19:12, Paul Midgley wrote: > Hello > > I have been interested in using python for some time for carrying out > calculations, but I have not been able to determine if it is possible to > use it to print out a report at the end. What I want is to use it similar > to Mathcad producing structured equations in line with the text, graphs > etc. > > I can produced decent reports using MS Word or open office, but these will > not do the calculations and the anlysis work that can be done with python > and similar languages. > > What I am trying to achieve is calculations in a template form where the > raw data can be put into it and carries out the calculations and it can be > printed out in the form of a report. > You may be able to use GNU TeXmacs with the Python plugin. I've not tried this so YMMV. TeXmacs: http://www.texmacs.org/ Python Plugin: http://dkbza.org/tmPython.html Enjoy! John -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 191 bytes Desc: not available URL: From aisaac at american.edu Wed Aug 23 15:38:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 23 Aug 2006 15:38:51 -0400 Subject: [Numpy-discussion] Newbie question In-Reply-To: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> References: <20060823191242.37257.qmail@web25711.mail.ukl.yahoo.com> Message-ID: On Wed, 23 Aug 2006, (GMT) Paul Midgley apparently wrote: > I have been interested in using python for some time for > carrying out calculations, but I have not been able to > determine if it is possible to use it to print out > a report at the end. http://gael-varoquaux.info/computers/pyreport/ hth, Alan Isaac From oliphant.travis at ieee.org Wed Aug 23 15:59:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 12:59:29 -0700 Subject: [Numpy-discussion] speed degression In-Reply-To: References: <200608200055.32320.joris@ster.kuleuven.be> Message-ID: <44ECB3A1.5050304@ieee.org> Charles R Harris wrote: > Yes, > > On 8/19/06, Joris De Ridder > wrote: > > Hi, > > > > Some of my code is heavily using large complex arrays, and I noticed > a speed > > degression in NumPy 1.0b2 with respect to Numarray. The following > code snippet > > is an example that on my computer runs 10% faster in Numarray than > in NumPy. > > > > >>> A = zeros(1000000, complex) > > >>> for k in range(1000): > > ... A *= zeros(1000000, complex) > > > > (I replaced 'complex' with 'Complex' in Numarray). Can anyone > confirm this? > The multiply (and divide functions) for complex arrays were using the "generic interface" (probably because this is what Numeric did) which calls out to a function to compute each result. I just committed a switch to "in-line" the multiplication and division calls. The speed-up is about that 10%. Now, my numarray and NumPy versions of the test are very similar. 
-Travis From haase at msg.ucsf.edu Wed Aug 23 16:51:02 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 13:51:02 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() Message-ID: <200608231351.02236.haase@msg.ucsf.edu> Hi! numpy renamed the *function* abs to absolute. Most functions like mean, min, max, average, ... have an equivalent array *method*. Why is absolute left out ? I think it should be added . Furthermore, looking at some line of code that have multiple calls to absolute [ like f(absolute(a), absolute(b), absolute(c)) ] I think "some people" might prefer less typing and less reading, like f( a.abs(), b.abs(), c.abs() ). One could even consider not requiring the "function call" parenthesis '()' at all - but I don't know about further implications that might have. Thanks, Sebastian Haase PS: is there any performance hit in using the built-in abs function ? From cookedm at physics.mcmaster.ca Wed Aug 23 17:13:45 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 17:13:45 -0400 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231351.02236.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> Message-ID: <20060823171345.786680ad@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 13:51:02 -0700 Sebastian Haase wrote: > Hi! > numpy renamed the *function* abs to absolute. > Most functions like mean, min, max, average, ... > have an equivalent array *method*. > > Why is absolute left out ? > I think it should be added . We've got __abs__ :-) > Furthermore, looking at some line of code that have multiple calls to > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > I think "some people" might prefer less typing and less reading, > like f( a.abs(), b.abs(), c.abs() ). > One could even consider not requiring the "function call" parenthesis '()' > at all - but I don't know about further implications that might have. eh, no. 
things that return new arrays should be functions. (As opposed to views of existing arrays, like a.T) > PS: is there any performace hit in using the built-in abs function ? Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays have). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jdawe at eos.ubc.ca Wed Aug 23 17:27:29 2006 From: jdawe at eos.ubc.ca (Jordan Dawe) Date: Wed, 23 Aug 2006 14:27:29 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows Message-ID: <44ECC841.1040304@eos.ubc.ca> I just tried to compile numpy-1.0b3 under windows using mingw. I got this error: compile options: '-Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c' gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o In file included from numpy/core/src/multiarraymodule.c:64: numpy/core/src/arrayobject.c:6643: initializer element is not constant numpy/core/src/arrayobject.c:6643: (near initialization for `PyArray_Type.tp_free') numpy/core/src/arrayobject.c:10312: initializer element is not constant numpy/core/src/arrayobject.c:10312: (near initialization for `PyArrayMultiIter_Type.tp_free') numpy/core/src/arrayobject.c:11189: initializer element is not constant numpy/core/src/arrayobject.c:11189: (near initialization for `PyArrayDescr_Type.tp_hash') error: Command "gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include 
-Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o" failed with exit status 1 Any ideas? Jordan Dawe From svetosch at gmx.net Wed Aug 23 17:34:48 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 23 Aug 2006 23:34:48 +0200 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECC841.1040304@eos.ubc.ca> References: <44ECC841.1040304@eos.ubc.ca> Message-ID: <44ECC9F8.1050108@gmx.net> Jordan Dawe schrieb: > I just tried to compile numpy-1.0b3 under windows using mingw. I got > this error: ... > > Any ideas? > No, except that I ran into the same problem... Hooray, I'm not alone ;-) -sven From perry at stsci.edu Wed Aug 23 17:43:15 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 23 Aug 2006 17:43:15 -0400 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44ECA249.3030007@ieee.org> References: <44ECA249.3030007@ieee.org> Message-ID: I thought it might be useful to give a little more context on the problems involved in handling such interruptions. Basically, one doesn't want to exit out of places where data structures are incompletely set up, or memory isn't properly handled so that later references to these don't cause segfaults (or experience memory leaks). There may be more exotic cases but typically many extensions are as simple as: 1) Figure out what inputs one has and the mode of computation needed 2) allocate and setup output arrays 3) do computation, possibly lengthy, over arrays 4) free temporary arrays and other data structures 5) return results Typically, the interrupt handling is needed only for 3, the part that it may spend a very long time in. 1, 2, 4, and 5 are not worth interrupting, and the area that may cause the most trouble. I'd argue that many things could do with a very simple structure where section 3 is bracketed with macros. 
Something like: NPY_SIG_INTERRUPTABLE [long looping computational code that doesn't create or destroy objects] NPY_SIG_END_INTERRUPTABLE followed by the normal code to do 4 and 5. What happens during an interrupt is the computation code is exited and execution resumes right after the closing macro. Very often one doesn't care that the results in the arrays may be incomplete, or invalid numbers (presumably you know that since you just did control-C, but maybe I'm confused). Any reason that most cases couldn't be handled with something this simple? All cases can't be handled with this, but most should I think. Perry On Aug 23, 2006, at 2:45 PM, Travis Oliphant wrote: > > I'm working on some macros that will allow extensions to be > "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the > implementation is complicated by the possibility of threads and making > sure to handle clean-up code correctly when the interrupt returns. > > I'd like to get this in to 1.0 final. Anything needed will not > require > re-compilation of extension modules built for 1.0b2 however. This > will > be strictly "extra" and if an extension module doesn't use it there > will > be no problems. > > Step 1: > > Define the interface. Here are a couple of draft proposals. Please > comment on them. > > 1) General purpose interface > > NPY_SIG_TRY { > [code] > } > NPY_SIG_EXCEPT(signum) { > [interrupt handling return] > } > NPY_SIG_ELSE > [normal return] > > The idea of signum is to hold the signal actually caught. > > > 2) Simpler interface > > NPY_SIG_TRY { > [code] > } > NPY_SIG_EXCEPT_GOTO(label) > [normal return] > > label: > [interrupt handling return] > > > C-extensions often use the notion of a label to handle failure code. > > If anybody has any thoughts on this, they would be greatly > appreciated. > > > Step 2: > > Implementation. 
I have the idea to have a single interrupt handler > (defined globally in NumPy) that basically uses longjmp to return > to the > section of code corresponding to the thread that is handling the > interrupt. I had thought to use a global variable containing a linked > list of jmp_buf structures with a thread-id attached > (PyThread_get_thread_ident()) so that the interrupt handler can search > it to see if the thread has registered a return location. If it has > not, then the intterupt handler will just return normally. In > this way > a thread that calls setjmpbuf will be sure to return to the correct > place when it handles the interrupt. > > Concern: > > My thinking is that this mechanism should work whether or not the > GIL is > held so that we don't have to worry about whether or not the GIL is > held > except in the interrupt handling case (when Python exceptions are > to be > set). But, honestly, this gets very confusing. > > The sigjmp / longjmp mechanism for handling interrupts is not > recommended under windows (not sure about mingw), but there we could > possibly use Microsoft's __try and __except extension to implement. > Initially, it would be "un-implemented" on platforms where it > didn't work. > > Any comments are greatly appreciated > > -Travis > > > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel? 
> cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From frank at qfin.net Wed Aug 23 17:47:28 2006 From: frank at qfin.net (Frank Conradie) Date: Wed, 23 Aug 2006 14:47:28 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECC9F8.1050108@gmx.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> Message-ID: <44ECCCF0.3080206@qfin.net> Hi Sven and Jordan I wish to add my name to this list ;-) I just got the same error trying to compile for Python 2.3 with latest candidate mingw32, following the instructions at http://www.scipy.org/Installing_SciPy/Windows . Hopefully someone can shed some light on this error - what I've been able to find on the net explains something about C not allowing dynamic initializing of global variables, whereas C++ does...? - Frank Sven Schreiber wrote: > Jordan Dawe schrieb: > >> I just tried to compile numpy-1.0b3 under windows using mingw. I got >> this error: >> > ... > >> Any ideas? >> >> > > No, except that I ran into the same problem... Hooray, I'm not alone ;-) > -sven > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Wed Aug 23 18:13:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 15:13:57 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECCCF0.3080206@qfin.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> Message-ID: <44ECD325.2040204@ieee.org> Frank Conradie wrote: > Hi Sven and Jordan > > I wish to add my name to this list ;-) I just got the same error > trying to compile for Python 2.3 with latest candidate mingw32, > following the instructions at > http://www.scipy.org/Installing_SciPy/Windows . > > Hopefully someone can shed some light on this error - what I've been > able to find on the net explains something about C not allowing > dynamic initializing of global variables, whereas C++ does...? > Edit line 690 of ndarrayobject.h to read #define NPY_USE_PYMEM 0 Hopefully that should fix the error. -Travis From oliphant.travis at ieee.org Wed Aug 23 18:21:41 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 15:21:41 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECD325.2040204@ieee.org> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> Message-ID: <44ECD4F5.9000401@ieee.org> Travis Oliphant wrote: > Frank Conradie wrote: > >> Hi Sven and Jordan >> >> I wish to add my name to this list ;-) I just got the same error >> trying to compile for Python 2.3 with latest candidate mingw32, >> following the instructions at >> http://www.scipy.org/Installing_SciPy/Windows . >> >> Hopefully someone can shed some light on this error - what I've been >> able to find on the net explains something about C not allowing >> dynamic initializing of global variables, whereas C++ does...? >> >> > Edit line 690 of ndarrayobject.h to read > > #define NPY_USE_PYMEM 0 > > Hopefully that should fix the error. 
> You will also have to alter line 11189 so that _Py_HashPointer is replaced by 0 or NULL From wbaxter at gmail.com Wed Aug 23 19:12:31 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 24 Aug 2006 08:12:31 +0900 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <20060823171345.786680ad@arbutus.physics.mcmaster.ca> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: The thing that I find I keep forgetting is that abs() is a built-in, but other simple functions are not. So it's abs(foo), but numpy.floor(foo) and numpy.ceil(foo). And then there's round() which is a built-in but can't be used with arrays, so numpy.round_(foo). Seems like it would be more consistent to just add a numpy.abs() and numpy.round(). But I guess there's nothing numpy can do about it... you can't name a method the same as a built-in function, right? That's why we have numpy.round_() instead of numpy.round(), no? [...goes and checks] Oh, you *can* name a module function the same as a built-in. Hmm... so then why isn't numpy.round_() just numpy.round()? Is it just so "from numpy import *" won't hide the built-in? --bill On 8/24/06, David M. Cooke wrote: > > On Wed, 23 Aug 2006 13:51:02 -0700 > Sebastian Haase wrote: > > > Hi! > > numpy renamed the *function* abs to absolute. > > Most functions like mean, min, max, average, ... > > have an equivalent array *method*. > > > > Why is absolute left out ? > > I think it should be added . > > We've got __abs__ :-) > > > Furthermore, looking at some line of code that have multiple calls to > > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > > I think "some people" might prefer less typing and less reading, > > like f( a.abs(), b.abs(), c.abs() ). > > > One could even consider not requiring the "function call" parenthesis > '()' > > at all - but I don't know about further implications that might have. > > eh, no. 
things that return new arrays should be functions. (As opposed to > views of existing arrays, like a.T) > > > PS: is there any performance hit in using the built-in abs function ? > > Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays have). > > From haase at msg.ucsf.edu Wed Aug 23 19:22:52 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 16:22:52 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: <200608231622.52266.haase@msg.ucsf.edu> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > The thing that I find I keep forgetting is that abs() is a built-in, but > other simple functions are not. So it's abs(foo), but numpy.floor(foo) and > numpy.ceil(foo). And then there's round() which is a built-in but can't be > used with arrays, so numpy.round_(foo). Seems like it would be more > consistent to just add a numpy.abs() and numpy.round(). > > But I guess there's nothing numpy can do about it... you can't name a > method the same as a built-in function, right? That's why we have > numpy.round_() instead of numpy.round(), no? > [...goes and checks] > Oh, you *can* name a module function the same as a built-in. Hmm... so > then why isn't numpy.round_() just numpy.round()? Is it just so "from > numpy import *" won't hide the built-in? > That is my theory... Even though I try to advertise import numpy as N a) "N." is not *that* much extra typing b) it is much clearer to read code and see what is special from numpy vs. what is builtin c) (most important for me): I use PyShell/PyCrust and when I type the '.'
after 'N' I get a nice pop-up list reminding me of all the function in numy ;-) Regarding the original subject: a) "absolute" is impractically too much typing and b) I just thought some (module-) functions might be "forgotten" to be put in as (object-) methods ... !? Cheers, Sebastian > --bill > > On 8/24/06, David M. Cooke wrote: > > On Wed, 23 Aug 2006 13:51:02 -0700 > > > > Sebastian Haase wrote: > > > Hi! > > > numpy renamed the *function* abs to absolute. > > > Most functions like mean, min, max, average, ... > > > have an equivalent array *method*. > > > > > > Why is absolute left out ? > > > I think it should be added . > > > > We've got __abs__ :-) > > > > > Furthermore, looking at some line of code that have multiple calls to > > > absolute [ like f(absolute(a), absolute(b), absolute(c)) ] > > > I think "some people" might prefer less typing and less reading, > > > like f( a.abs(), b.abs(), c.abs() ). > > > > > > One could even consider not requiring the "function call" parenthesis > > > > '()' > > > > > at all - but I don't know about further implications that might have. > > > > eh, no. things that return new arrays should be functions. (As opposed to > > views of existing arrays, like a.T) > > > > > PS: is there any performace hit in using the built-in abs function ? > > > > Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays > > have). From cookedm at physics.mcmaster.ca Wed Aug 23 19:35:49 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 19:35:49 -0400 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44ECA249.3030007@ieee.org> References: <44ECA249.3030007@ieee.org> Message-ID: <20060823193549.70728721@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 11:45:29 -0700 Travis Oliphant wrote: > > I'm working on some macros that will allow extensions to be > "interruptable" (i.e. with Ctrl-C). 
The idea came from SAGE but the > implementation is complicated by the possibility of threads and making > sure to handle clean-up code correctly when the interrupt returns. > For writing clean-up code, here's some prior art on adding exceptions to C: http://www.ossp.org/pkg/lib/ex/ (BSD license) http://adomas.org/excc/ (GPL'd, so no good) http://ldeniau.web.cern.ch/ldeniau/html/exception/exception.html (no license given) The last one has functions that allow you to add pointers (and their deallocation functions) to a list so that they can be deallocated when an exception is thrown. (You don't necessarily need something like these libraries, but I thought I'd throw it in here, because it's along the same lines) > Step 2: > > Implementation. I have the idea to have a single interrupt handler > (defined globally in NumPy) that basically uses longjmp to return to the > section of code corresponding to the thread that is handling the > interrupt. I had thought to use a global variable containing a linked > list of jmp_buf structures with a thread-id attached > (PyThread_get_thread_ident()) so that the interrupt handler can search > it to see if the thread has registered a return location. If it has > not, then the intterupt handler will just return normally. In this way > a thread that calls setjmpbuf will be sure to return to the correct > place when it handles the interrupt. Signals and threads don't mix well at *all*. With POSIX semantics, synchronous signals (ones caused by the thread itself) should be sent to the handler for that thread. Asynchronous ones (like SIGINT for Ctrl-C) will be sent to an *arbitrary* thread. (Apple, for instance, doesn't make any guarantees on which thread gets it: http://developer.apple.com/qa/qa2001/qa1184.html) Best way I can see this is to have a SIGINT handler installed that sets a global variable, and check that every so often. 
It's such a good way that Python already does this -- Parser/intrcheck.c sets the handler, and you can use PyOS_InterruptOccurred() to check if one happened. So something like while (long running loop) { if (PyOS_InterruptOccurred()) goto error: ... useful stuff ... } error: This could be abstracted to a set of macros (with Perry's syntax): NPY_SIG_INTERRUPTABLE while (long loop) { NPY_CHECK_SIGINT; .. more stuff .. } NPY_SIG_END_INTERRUPTABLE where NPY_CHECK_SIGINT would do a longjmp(). Or come up with a good (fast) way to run stuff in another process :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed Aug 23 19:40:48 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Aug 2006 19:40:48 -0400 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231622.52266.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> Message-ID: <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> On Wed, 23 Aug 2006 16:22:52 -0700 Sebastian Haase wrote: > On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > > The thing that I find I keep forgetting is that abs() is a built-in, but > > other simple functions are not. So it's abs(foo), but numpy.floor(foo) > > and numpy.ceil(foo). And then there's round() which is a built-in but > > can't be used with arrays, so numpy.round_(foo). Seems like it would > > be more consistent to just add a numpy.abs() and numpy.round(). > > > > Regarding the original subject: > a) "absolute" is impractically too much typing and > b) I just thought some (module-) functions might be "forgotten" to be put > in as (object-) methods ... !? Four-line change, so I added a.abs() (three lines for array, one for MaskedArray). 
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Aug 23 19:46:15 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 23 Aug 2006 17:46:15 -0600 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> Message-ID: On 8/23/06, Bill Baxter wrote: > The thing that I find I keep forgetting is that abs() is a built-in, but > other simple functions are not. So it's abs(foo), but numpy.floor(foo) and > numpy.ceil(foo). And then there's round() which is a built-in but can't be > used with arrays, so numpy.round_(foo). Seems like it would be more > consistent to just add a numpy.abs() and numpy.round(). > > But I guess there's nothing numpy can do about it... you can't name a > method the same as a built-in function, right? That's why we have > numpy.round_() instead of numpy.round(), no? > [...goes and checks] > Oh, you *can* name a module function the same as a built-in. Hmm... so then > why isn't numpy.round_() just numpy.round()? Is it just so "from numpy > import *" won't hide the built-in? Technically numpy could simply have (illustrated with round, but works also with abs) round = round_ and simply NOT include round in the __all__ list. This would make numpy.round(x) work (clean syntax) while from numpy import * would not clobber the builtin round. That sounds like a decent solution to me. 
Cheers, f From oliphant.travis at ieee.org Wed Aug 23 21:37:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 18:37:28 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> Message-ID: <44ED02D8.6030401@ieee.org> David M. Cooke wrote: > On Wed, 23 Aug 2006 16:22:52 -0700 > Sebastian Haase wrote: > > >> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >> >>> The thing that I find I keep forgetting is that abs() is a built-in, but >>> other simple functions are not. So it's abs(foo), but numpy.floor(foo) >>> and numpy.ceil(foo). And then there's round() which is a built-in but >>> can't be used with arrays, so numpy.round_(foo). Seems like it would >>> be more consistent to just add a numpy.abs() and numpy.round(). >>> >>> >> Regarding the original subject: >> a) "absolute" is impractically too much typing and >> b) I just thought some (module-) functions might be "forgotten" to be put >> in as (object-) methods ... !? >> > > Four-line change, so I added a.abs() (three lines for array, one > for MaskedArray). > While I appreciate it's proactive nature, I don't like this change because it adds another "ufunc" as a method. Right now, I think conj is the only other method like that. Instead, I like better the idea of adding abs, round, max, and min to the "non-import-*" namespace of numpy. 
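The thread's claim that no `arr.abs()` method is needed because the builtin already dispatches to `__abs__` can be seen with a toy container (a hypothetical stand-in, not ndarray itself; for real arrays this hook is mapped to the `absolute` ufunc):

```python
class Vec:
    """Toy stand-in for ndarray, showing how the builtin abs()
    finds the type's __abs__ hook."""
    def __init__(self, data):
        self.data = list(data)

    def __abs__(self):
        # abs(v) lands here: one extra indirection, then elementwise work.
        return Vec(x if x >= 0 else -x for x in self.data)

v = Vec([-3, 4, -5])
print(abs(v).data)  # [3, 4, 5]
```

This is the "one more layer of indirection" Travis mentions later in the thread: the builtin looks up the hook once per call, which only matters for very small inputs.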
From haase at msg.ucsf.edu Wed Aug 23 22:02:13 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 19:02:13 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED02D8.6030401@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> Message-ID: <200608231902.13491.haase@msg.ucsf.edu> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: > David M. Cooke wrote: > > On Wed, 23 Aug 2006 16:22:52 -0700 > > > > Sebastian Haase wrote: > >> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: > >>> The thing that I find I keep forgetting is that abs() is a built-in, > >>> but other simple functions are not. So it's abs(foo), but > >>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which > >>> is a built-in but can't be used with arrays, so numpy.round_(foo). > >>> Seems like it would be more consistent to just add a numpy.abs() and > >>> numpy.round(). > >> > >> Regarding the original subject: > >> a) "absolute" is impractically too much typing and > >> b) I just thought some (module-) functions might be "forgotten" to be > >> put in as (object-) methods ... !? > > > > Four-line change, so I added a.abs() (three lines for array, one > > for MaskedArray). > > While I appreciate it's proactive nature, I don't like this change > because it adds another "ufunc" as a method. Right now, I think conj is > the only other method like that. > > Instead, I like better the idea of adding abs, round, max, and min to > the "non-import-*" namespace of numpy. > How does this compare with mean, min, max, average ? BTW: I think me choice is now settled on the builtin call: abs(arr) -- short and sweet. (As long as it is really supposed to *always* work and is not *slow* in any way !?!?!?!?) 
Cheers, Sebastian From oliphant.travis at ieee.org Wed Aug 23 22:12:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 Aug 2006 19:12:03 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <200608231902.13491.haase@msg.ucsf.edu> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <200608231902.13491.haase@msg.ucsf.edu> Message-ID: <44ED0AF3.2020601@ieee.org> Sebastian Haase wrote: > On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: > >> David M. Cooke wrote: >> >>> On Wed, 23 Aug 2006 16:22:52 -0700 >>> >>> Sebastian Haase wrote: >>> >>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >>>> >>>>> The thing that I find I keep forgetting is that abs() is a built-in, >>>>> but other simple functions are not. So it's abs(foo), but >>>>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which >>>>> is a built-in but can't be used with arrays, so numpy.round_(foo). >>>>> Seems like it would be more consistent to just add a numpy.abs() and >>>>> numpy.round(). >>>>> >>>> Regarding the original subject: >>>> a) "absolute" is impractically too much typing and >>>> b) I just thought some (module-) functions might be "forgotten" to be >>>> put in as (object-) methods ... !? >>>> >>> Four-line change, so I added a.abs() (three lines for array, one >>> for MaskedArray). >>> >> While I appreciate it's proactive nature, I don't like this change >> because it adds another "ufunc" as a method. Right now, I think conj is >> the only other method like that. >> >> Instead, I like better the idea of adding abs, round, max, and min to >> the "non-import-*" namespace of numpy. >> >> > How does this compare with > mean, min, max, average > ? > I'm not sure what this question is asking, so I'll answer what I think it is asking. The mean, min, max, and average functions are *not* ufuncs. They are methods of particular ufuncs. 
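Travis's distinction between ufuncs and ufunc *methods* can be illustrated with a toy model (hypothetical names, not numpy's implementation): the ufunc itself is an elementwise operation, and a reduction such as max or sum is a method carried by a particular ufunc, the way ndarray.max() is backed by numpy's maximum.reduce.

```python
from functools import reduce as _fold

class UFunc:
    """Toy model of a ufunc: an elementwise binary operation
    that also carries a reduce() method."""
    def __init__(self, op):
        self.op = op

    def __call__(self, a, b):
        # The ufunc proper: elementwise application.
        return [self.op(x, y) for x, y in zip(a, b)]

    def reduce(self, a):
        # The ufunc *method*: fold the operation along the sequence.
        return _fold(self.op, a)

maximum = UFunc(max)
add = UFunc(lambda x, y: x + y)

a = [3, 1, 4, 1, 5]
print(maximum(a, [2, 2, 2, 2, 2]))  # [3, 2, 4, 2, 5] -- elementwise
print(maximum.reduce(a))            # 5  -- the kind of method behind a.max()
print(add.reduce(a))                # 14 -- likewise behind a.sum()
```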
The abs() should not be slow (because it calls the __abs__ method which for arrays is mapped to the ufunc absolute). Thus, there is one more layer of indirection which will only matter for small arrays. -Travis From david at ar.media.kyoto-u.ac.jp Wed Aug 23 23:11:34 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 24 Aug 2006 12:11:34 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <20060823193549.70728721@arbutus.physics.mcmaster.ca> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> Message-ID: <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> David M. Cooke wrote: > On Wed, 23 Aug 2006 11:45:29 -0700 > Travis Oliphant wrote: > >> I'm working on some macros that will allow extensions to be >> "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the >> implementation is complicated by the possibility of threads and making >> sure to handle clean-up code correctly when the interrupt returns. >> > This is funny, I was just thinking about that yesterday. This is a major problem when writing C extensions in matlab (the manual says use the matlab allocator instead of malloc/new/whatever, but when you call a library, you cannot do that...). > > Best way I can see this is to have a SIGINT handler installed that sets a > global variable, and check that every so often. It's such a good way that > Python already does this -- Parser/intrcheck.c sets the handler, and you can > use PyOS_InterruptOccurred() to check if one happened. So something like This is the way I do it when writing extension under matlab. I am by no means knowledgeable about those kind of things, but this is the simplest solution I came up with so far. I would guess that because it uses one global variable, it should not matter which thread receives the signal ? > > while (long running loop) { > if (PyOS_InterruptOccurred()) goto error: > ... useful stuff ... 
> } > error: > > This could be abstracted to a set of macros (with Perry's syntax): > > NPY_SIG_INTERRUPTABLE > while (long loop) { > NPY_CHECK_SIGINT; > .. more stuff .. > } > NPY_SIG_END_INTERRUPTABLE > > where NPY_CHECK_SIGINT would do a longjmp(). Is there really a need for a longjmp? What I simply do in this case is check the global variable, and if its value has changed, go to the normal error handling. Let's say you already have good error handling in your function, as Travis described in his email: status = do_stuff(); if (status < 0) { goto cleanup; } Then, to handle sigint, you need a global variable got_sigint which is modified by the signal handler, and check its value (the exact type of this variable is platform specific; on linux, I am using volatile sig_atomic_t, as recommended by the GNU C doc):: /* status is 0 if everything is OK */ status = do_stuff(); if (status < 0) { goto cleanup; } sigprocmask (SIG_BLOCK, &block_sigint, NULL); if (got_sigint) { got_sigint = 0; goto cleanup; } sigprocmask (SIG_UNBLOCK, &block_sigint, NULL); So the error handling does not need to be modified, and no longjmp is needed? Or maybe I don't understand what you mean. I think the case proposed by Perry is too restrictive: it is really common to use external libraries where we do not know whether they allocate memory inside the processing, and there is a need to clean that up too. > > Or come up with a good (fast) way to run stuff in another process :-) > This sounds a bit overkill, and a pain to implement for different platforms? The checking of signals should be fast, but it has a cost (you have to use a branch) which prevents it from being called too often inside a loop, for example.
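The flag-checking pattern David describes can be transliterated to Python as a self-contained sketch (the C version would use a volatile sig_atomic_t; here signal.raise_signal stands in for a real Ctrl-C so the example can run unattended): the handler only records that the signal arrived, and the loop polls the flag at a coarse interval so the cost of the branch is amortized.

```python
import signal

got_sigint = False

def _handler(signum, frame):
    # Do no real work in the handler: just record that it happened.
    global got_sigint
    got_sigint = True

signal.signal(signal.SIGINT, _handler)

interrupted = False
total = 0
for i in range(200_000):
    # Poll the flag only every 4096 iterations: the branch is not
    # paid on every pass through the inner loop.
    if i % 4096 == 0 and got_sigint:
        interrupted = True
        break            # jump to the normal cleanup path; no longjmp
    total += i
    if i == 100_000:     # simulate a Ctrl-C arriving mid-computation
        signal.raise_signal(signal.SIGINT)

print(interrupted)  # True
```

After the break, ordinary cleanup code runs exactly as it would on any other error path, which is the point of the no-longjmp approach.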
David From haase at msg.ucsf.edu Thu Aug 24 00:22:32 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 21:22:32 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED0AF3.2020601@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <200608231902.13491.haase@msg.ucsf.edu> <44ED0AF3.2020601@ieee.org> Message-ID: <44ED2988.8020501@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote: >> >>> David M. Cooke wrote: >>> >>>> On Wed, 23 Aug 2006 16:22:52 -0700 >>>> >>>> Sebastian Haase wrote: >>>> >>>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote: >>>>> >>>>>> The thing that I find I keep forgetting is that abs() is a built-in, >>>>>> but other simple functions are not. So it's abs(foo), but >>>>>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which >>>>>> is a built-in but can't be used with arrays, so numpy.round_(foo). >>>>>> Seems like it would be more consistent to just add a numpy.abs() and >>>>>> numpy.round(). >>>>>> >>>>> Regarding the original subject: >>>>> a) "absolute" is impractically too much typing and >>>>> b) I just thought some (module-) functions might be "forgotten" to be >>>>> put in as (object-) methods ... !? >>>>> >>>> Four-line change, so I added a.abs() (three lines for array, one >>>> for MaskedArray). >>>> >>> While I appreciate it's proactive nature, I don't like this change >>> because it adds another "ufunc" as a method. Right now, I think conj is >>> the only other method like that. >>> >>> Instead, I like better the idea of adding abs, round, max, and min to >>> the "non-import-*" namespace of numpy. >>> >>> >> How does this compare with >> mean, min, max, average >> ? >> > > I'm not sure what this question is asking, so I'll answer what I think > it is asking. 
> > The mean, min, max, and average functions are *not* ufuncs. They are > methods of particular ufuncs. > Yes - that's what wanted to hear ! I'm just trying to bring in the "user's" point of view: Not thinking about how they are implemented under the hood: mean,min,max,average have a very similar "feeling" to them as "abs". I'm hoping this ("seeing things from the user p.o.v.") can stay like that for as long as possible ! Numpy should be focused on "scientist not programers". (This is just why I posted this comment about "arr.abs()" - but if there is a good reason to not have this for "simplicity reasons 'under the hood'" I can see that perfectly fine !) - Sebastian From wbaxter at gmail.com Thu Aug 24 00:41:50 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 24 Aug 2006 13:41:50 +0900 Subject: [Numpy-discussion] users point of view and ufuncs Message-ID: On 8/24/06, Sebastian Haase wrote: > > > I'm not sure what this question is asking, so I'll answer what I think > > it is asking. > > > > The mean, min, max, and average functions are *not* ufuncs. They are > > methods of particular ufuncs. > > > Yes - that's what wanted to hear ! I'm just trying to bring in the > "user's" point of view: Not thinking about how they are implemented > under the hood: mean,min,max,average have a very similar "feeling" to > them as "abs". While we're on the subject of the "user's" point of view, the term "ufunc" is not very new-user friendly, yet it gets slung around fairly often. I'm not sure what to do about it exactly, but maybe for starters it would be nice to add a concise definition of "ufunc" to the numpy glossary: http://www.scipy.org/Numpy_Glossary. Can anyone come up with such a definition? Here's my stab at it: ufunc: A function that operates element-wise on arrays. But I have a feeling there's more to it than that. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chanley at stsci.edu Thu Aug 24 08:57:12 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 24 Aug 2006 08:57:12 -0400 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris Message-ID: <44EDA228.20100@stsci.edu> Good Morning, Numpy revision 3056 will not build on either Red Hat Enterprise 3 or Solaris 8. The relevant syntax errors are below: For RHE3: --------- creating build/temp.linux-i686-2.4 creating build/temp.linux-i686-2.4/numpy creating build/temp.linux-i686-2.4/numpy/core creating build/temp.linux-i686-2.4/numpy/core/src compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c' gcc: numpy/core/src/multiarraymodule.c In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h:95: syntax error before "_NPY_SIGINT_BUF" numpy/core/include/numpy/npy_interrupt.h:95: warning: type defaults to `int' in declaration of `_NPY_SIGINT_BUF' numpy/core/include/numpy/npy_interrupt.h:95: warning: data definition has no type or storage class numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:100: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:100: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:100: for each function it appears in.) 
numpy/core/include/numpy/npy_interrupt.h:101: warning: implicit declaration of function `longjmp' numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6441: `SIGINT' undeclared (first use in this function) numpy/core/src/multiarraymodule.c:6441: warning: implicit declaration of function `setjmp' In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h:95: syntax error before "_NPY_SIGINT_BUF" numpy/core/include/numpy/npy_interrupt.h:95: warning: type defaults to `int' in declaration of `_NPY_SIGINT_BUF' numpy/core/include/numpy/npy_interrupt.h:95: warning: data definition has no type or storage class numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:100: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:100: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:100: for each function it appears in.) 
numpy/core/include/numpy/npy_interrupt.h:101: warning: implicit declaration of function `longjmp' numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6441: `SIGINT' undeclared (first use in this function) numpy/core/src/multiarraymodule.c:6441: warning: implicit declaration of function `setjmp' error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 For Solaris 8: -------------- creating build/temp.solaris-2.8-sun4u-2.4 creating build/temp.solaris-2.8-sun4u-2.4/numpy creating build/temp.solaris-2.8-sun4u-2.4/numpy/core creating build/temp.solaris-2.8-sun4u-2.4/numpy/core/src compile options: '-Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c' cc: numpy/core/src/multiarraymodule.c "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: jmp_buf "numpy/core/include/numpy/npy_interrupt.h", line 95: syntax error before or at: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 100: undefined symbol: SIG_IGN "numpy/core/include/numpy/npy_interrupt.h", line 100: warning: improper pointer/integer combination: arg #2 "numpy/core/src/scalartypes.inc.src", line 70: warning: statement not reached "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: 
warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arrayobject.c", line 4338: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, int) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 4444: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to unsigned long, pointer to unsigned long, int, int) returning int "numpy/core/src/arrayobject.c", line 4448: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to char, pointer to char, int, int) returning int "numpy/core/src/arrayobject.c", line 5313: warning: assignment type mismatch: pointer to function(pointer to struct PyArrayObject {int ob_refcnt, pointer to struct _typeobject {..} ob_type, pointer to char data, int nd, pointer to int dimensions, pointer to int strides, pointer to struct _object {..} base, pointer to struct {..} descr, int flags, pointer to struct _object {..} weakreflist}, pointer to struct _object {int ob_refcnt, pointer to struct _typeobject {..} ob_type}) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 7280: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, pointer to void, pointer to void) returning void "=" pointer to void "numpy/core/src/multiarraymodule.c", line 6441: undefined symbol: SIGINT cc: acomp failed for numpy/core/src/multiarraymodule.c "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: jmp_buf "numpy/core/include/numpy/npy_interrupt.h", line 95: syntax error before or at: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style 
declaration or incorrect type for: _NPY_SIGINT_BUF "numpy/core/include/numpy/npy_interrupt.h", line 100: undefined symbol: SIG_IGN "numpy/core/include/numpy/npy_interrupt.h", line 100: warning: improper pointer/integer combination: arg #2 "numpy/core/src/scalartypes.inc.src", line 70: warning: statement not reached "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic "numpy/core/src/arrayobject.c", line 4338: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, int) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 4444: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to unsigned long, pointer to unsigned long, int, int) returning int "numpy/core/src/arrayobject.c", line 4448: warning: argument #4 is incompatible with prototype: prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326 argument : pointer to function(pointer to char, pointer to char, int, int) returning int "numpy/core/src/arrayobject.c", line 5313: warning: assignment type mismatch: pointer to function(pointer to struct PyArrayObject {int ob_refcnt, pointer to struct _typeobject {..} ob_type, pointer to char data, int nd, pointer to int dimensions, pointer to int strides, pointer to struct _object {..} base, pointer to struct {..} descr, int flags, pointer to struct _object {..} weakreflist}, pointer to struct _object {int ob_refcnt, pointer to struct _typeobject {..} ob_type}) returning int "=" pointer to void "numpy/core/src/arrayobject.c", line 7280: warning: assignment type mismatch: pointer to function(pointer to void, pointer to void, int, pointer to 
void, pointer to void) returning void "=" pointer to void "numpy/core/src/multiarraymodule.c", line 6441: undefined symbol: SIGINT cc: acomp failed for numpy/core/src/multiarraymodule.c error: Command "/opt/SUNWspro-6u2/bin/cc -DNDEBUG -O -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.solaris-2.8-sun4u-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 2 Chris From ndarray at mac.com Thu Aug 24 09:27:46 2006 From: ndarray at mac.com (Sasha) Date: Thu, 24 Aug 2006 09:27:46 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Bill Baxter wrote: >[snip] it would be > nice to add a concise definition of "ufunc" to the numpy glossary: > http://www.scipy.org/Numpy_Glossary. > done > Can anyone come up with such a definition? I copied the definition from the old Numeric manual. > Here's my stab at it: > > ufunc: A function that operates element-wise on arrays. > This is not entirely correct. Ufuncs operate on anything that can be passed to asarray: arrays, python lists, tuples or scalars. From frank at qfin.net Thu Aug 24 11:36:20 2006 From: frank at qfin.net (Frank Conradie) Date: Thu, 24 Aug 2006 08:36:20 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44ECD4F5.9000401@ieee.org> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> <44ECD4F5.9000401@ieee.org> Message-ID: <44EDC774.6050603@qfin.net> Thanks Travis - that did the trick. Is this an issue specifically with mingw and Windows? 
- Frank Travis Oliphant wrote: > Travis Oliphant wrote: > >> Frank Conradie wrote: >> >> >>> Hi Sven and Jordan >>> >>> I wish to add my name to this list ;-) I just got the same error >>> trying to compile for Python 2.3 with latest candidate mingw32, >>> following the instructions at >>> http://www.scipy.org/Installing_SciPy/Windows . >>> >>> Hopefully someone can shed some light on this error - what I've been >>> able to find on the net explains something about C not allowing >>> dynamic initializing of global variables, whereas C++ does...? >>> >>> >>> >> Edit line 690 of ndarrayobject.h to read >> >> #define NPY_USE_PYMEM 0 >> >> Hopefully that should fix the error. >> >> > > You will also have to alter line 11189 so that > > _Py_HashPointer is replaced by 0 or NULL > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Thu Aug 24 12:22:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:22:29 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EDC774.6050603@qfin.net> References: <44ECC841.1040304@eos.ubc.ca> <44ECC9F8.1050108@gmx.net> <44ECCCF0.3080206@qfin.net> <44ECD325.2040204@ieee.org> <44ECD4F5.9000401@ieee.org> <44EDC774.6050603@qfin.net> Message-ID: <44EDD245.6020708@ieee.org> Frank Conradie wrote: > Thanks Travis - that did the trick.
Is this an issue specifically with > mingw and Windows? > Yes, I keep forgetting that Python functions are not necessarily defined at compile time on Windows. It may also be a problem with MSVC on Windows but I'm not sure. The real fix is now in SVN where these function pointers are initialized before calling PyType_Ready -Travis From oliphant.travis at ieee.org Thu Aug 24 12:24:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:24:04 -0600 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDA228.20100@stsci.edu> References: <44EDA228.20100@stsci.edu> Message-ID: <44EDD2A4.5090606@ieee.org> Christopher Hanley wrote: > Good Morning, > > Numpy revision 3056 will not build on either Red Hat Enterprise 3 or > Solaris 8. The relevant syntax errors are below: > > I'd like to see which platforms do not work with the npy_interrupt.h stuff. If you have a unique platform please try the latest SVN. There is a NPY_NO_SIGNAL define that will "turn off" support for interrupts which we can define on platforms that won't work. -Travis From dd55 at cornell.edu Thu Aug 24 12:36:21 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 24 Aug 2006 12:36:21 -0400 Subject: [Numpy-discussion] =?iso-8859-1?q?numpy_revision_3056_will_not_bu?= =?iso-8859-1?q?ild_on_RHE3_or=09Solaris?= In-Reply-To: <44EDD2A4.5090606@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> Message-ID: <200608241236.21573.dd55@cornell.edu> Hi Travis, On Thursday 24 August 2006 12:24, you wrote: > Christopher Hanley wrote: > > Good Morning, > > > > Numpy revision 3056 will not build on either Red Hat Enterprise 3 or > > Solaris 8. The relevant syntax errors are below: > > I'd like to see which platforms do not work with the npy_interrupt.h > stuff. If you have a unique platform please try the latest SVN. I am able to build on an amd64/gentoo with python 2.4.3 and gcc-4.1.1. 
I am not able to build on 32bit RHEL4: --------------------------------- In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:102: error: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:102: error: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:102: error: for each function it appears in.) numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6439: error: `SIGINT' undeclared (first use in this function) In file included from numpy/core/include/numpy/arrayobject.h:19, from numpy/core/src/multiarraymodule.c:25: numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler': numpy/core/include/numpy/npy_interrupt.h:102: error: `SIG_IGN' undeclared (first use in this function) numpy/core/include/numpy/npy_interrupt.h:102: error: (Each undeclared identifier is reported only once numpy/core/include/numpy/npy_interrupt.h:102: error: for each function it appears in.) 
numpy/core/src/multiarraymodule.c: In function `test_interrupt': numpy/core/src/multiarraymodule.c:6439: error: `SIGINT' undeclared (first use in this function) error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC -Ibuild/src.linux-i686-2.3/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.3/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.3 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.3/numpy/core/src/multiarraymodule.o" failed with exit status 1 From kortmann at ideaworks.com Thu Aug 24 12:50:55 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Thu, 24 Aug 2006 09:50:55 -0700 (PDT) Subject: [Numpy-discussion] numpy-1.0b3 under windows Message-ID: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> Sorry for my ignorance, but I have not ever heard of or used mingw32. I am also using python 2.3. Is there any way someone could possibly send me a brief walk through of how to install 1.0b3 on windows32? Also I am not sure that I know how to manipulate the code that you guys said that you have to so that it will work so if that is needed could you post a walk through of that also? From haase at msg.ucsf.edu Thu Aug 24 12:55:37 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 09:55:37 -0700 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> References: <1244.12.216.231.149.1156438255.squirrel@webmail.ideaworks.com> Message-ID: <200608240955.38031.haase@msg.ucsf.edu> On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: > Sorry for my ignorance, but I have not ever heard of or used mingw32. I > am also using python 2.3. http://en.wikipedia.org/wiki/Mingw explains in detail. > > Is there any way someone could possibly send me a brief walk through of > how to install 1.0b3 on windows32? 
do you know about the ("awesome") wiki website at scipy.org? try your luck at http://www.scipy.org/Build_for_Windows > > Also I am not sure that I know how to manipulate the code that you guys > said that you have to so that it will work so if that is needed could you > post a walk through of that also? > To my knowledge there is no need to "manipulate code" .... Maybe you should try getting pre-built versions first. Sebastian Haase From oliphant.travis at ieee.org Thu Aug 24 12:34:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 10:34:47 -0600 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDD2A4.5090606@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> Message-ID: <44EDD527.1040008@ieee.org> Travis Oliphant wrote: > Christopher Hanley wrote: > >> Good Morning, >> >> Numpy revision 3056 will not build on either Red Hat Enterprise 3 or >> Solaris 8. The relevant syntax errors are below: >> >> >> > I'd like to see which platforms do not work with the npy_interrupt.h > stuff. If you have a unique platform please try the latest SVN. > > There is a NPY_NO_SIGNAL define that will "turn off" support for > interrupts which we can define on platforms that won't work. > > In particular, if the signal handling works on your platform, then numpy.core.multiarray.test_interrupt() should be interruptible.
Otherwise, it will continue until the incrementing counter becomes negative which on my system takes about 10 seconds -Travis From chanley at stsci.edu Thu Aug 24 13:32:54 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 24 Aug 2006 13:32:54 -0400 Subject: [Numpy-discussion] numpy revision 3056 will not build on RHE3 or Solaris In-Reply-To: <44EDD527.1040008@ieee.org> References: <44EDA228.20100@stsci.edu> <44EDD2A4.5090606@ieee.org> <44EDD527.1040008@ieee.org> Message-ID: <44EDE2C6.40209@stsci.edu> Travis, Numpy version '1.0b4.dev3060' will now build on both a 32bit Red Hat Enterprise 3 machine as well as Solaris 8. Chris From haase at msg.ucsf.edu Thu Aug 24 14:01:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 11:01:20 -0700 Subject: [Numpy-discussion] should a flatiter object get a 'dtype' attribute ? Message-ID: <200608241101.20636.haase@msg.ucsf.edu> Hi, I suppose the answer is no . But converting more code to numpy I got this error AttributeError: 'numpy.flatiter' object has no attribute 'dtype' (I found that I did not need the .flat in the first place ) So I was just wondering if (or how much) a flatiter object should behave like an ndarray ? Also this is an opportunity to have some talk about the relative newcomer "flatiter generator objects" ... Thanks, - Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 15:07:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 13:07:44 -0600 Subject: [Numpy-discussion] should a flatiter object get a 'dtype' attribute ? In-Reply-To: <200608241101.20636.haase@msg.ucsf.edu> References: <200608241101.20636.haase@msg.ucsf.edu> Message-ID: <44EDF900.5070206@ee.byu.edu> Sebastian Haase wrote: >Hi, >I suppose the answer is no . 
>But converting more code to numpy I got this error >AttributeError: 'numpy.flatiter' object has no attribute 'dtype' >(I found that I did not need the .flat in the first place ) >So I was just wondering if (or how much) a flatiter object should behave like >an ndarray ? > > It's a good question. Right now, they act like an array when passed to functions, but don't have the same attributes and/or methods of an ndarray. I've not wanted to add them because I'm not sure how far thinking that a.flat is an actual array will go and so it's probably better not to try and hide the fact that it isn't an array object. I've slowly added a few things (like comparison operators), but the real-purpose of the object returned from .flat is for indexing using flat indexes into the array. a.flat[10] = 10 a.flat[30] Beyond that you should use .ravel() (only copies when necessary to create a contiguous chunk of data) and .flatten() (copies all the time). -Travis From oliphant at ee.byu.edu Thu Aug 24 15:18:01 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 13:18:01 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: References: Message-ID: <44EDFB69.8090608@ee.byu.edu> Albert Strasheim wrote: >Dialog pops up: > >--------------------------- >python.exe - Application Error >--------------------------- >The exception unknown software exception (0xc0000029) occurred in the >application at location 0x7c86d474. > > >Click on OK to terminate the program >Click on CANCEL to debug the program >--------------------------- >OK Cancel >--------------------------- > >In the Python console it prints: > >-2147483648 > >If you can give me some idea of what should be happening, I can take a look >at fixing it. > > When does the crash happen? Does it happen when you press Ctrl-C? What's supposed to be happening is that we are registering a handler for Ctrl-C that longjmps back to just after the code between NPY_SIGINT_ON and NPY_SIGINT_OFF. 
I'm not sure how to actually accomplish something like that under windows as I've heard mention that longjmp should not be used with signals under win32. The easy "fix" is to just define NPY_NO_SIGNAL in setup.py when on a platform that doesn't support using signals and longjmp (like apparently win32). If you could figure out what to do instead on windows that would be preferable. -Travis From kortmann at ideaworks.com Thu Aug 24 16:10:36 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Thu, 24 Aug 2006 13:10:36 -0700 (PDT) Subject: [Numpy-discussion] (no subject) Message-ID: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> >On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> Sorry for my ignorance, but I have not ever heard of or used mingw32. I >> am also using python 2.3. >http://en.wikipedia.org/wiki/Mingw explains in detail. >> >> Is there any way someone could possibly send me a brief walk through of >> how to install 1.0b3 on windows32? >do you know about the ("awesome" wiki website at scipy.org) >try your luck at >http://www.scipy.org/Build_for_Windows >> >> Also I am not sure that I know how to manipulate the code that you guys >> said that you have to so that it will work so if that is needed could you >> post a walk through of that also? >> >To my knowledge there is no need to "manipulate code" .... >Maybe you should try getting pre-built versions first. >Sebastian Haase Thank you for all of that. I followed the directions carefully. I created a numpy folder, checked out the svn via http://svn.scipy.org/svn/numpy/trunk changed to the numpy directory and typed python setup.py config --compiler=mingw32 build --compiler=mingw32 install and then reinstalled SciPy because it says to install SciPy after numpy. And then I received this after trying to run my program. Any ideas, anyone?
$HOME=C:\Documents and Settings\Administrator CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib loaded ttfcache file C:\Documents and Settings\Administrator\.matplotlib\ttffont .cache matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data backend WXAgg version 2.6.3.2 Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. abnormal program termination From oliphant at ee.byu.edu Thu Aug 24 16:17:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 14:17:44 -0600 Subject: [Numpy-discussion] (no subject) In-Reply-To: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> References: <1804.12.216.231.149.1156450236.squirrel@webmail.ideaworks.com> Message-ID: <44EE0968.1030904@ee.byu.edu> kortmann at ideaworks.com wrote: >>On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> >> >>>Sorry for my ignorance, but I have not ever heard of or used mingw32. I >>>am also using python 2.3. >>> >>> >>http://en.wikipedia.org/wiki/Mingw explains in detail. 
>> >> > >$HOME=C:\Documents and Settings\Administrator >CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib >loaded ttfcache file C:\Documents and >Settings\Administrator\.matplotlib\ttffont >.cache >matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data >backend WXAgg version 2.6.3.2 >Overwriting info= from scipy.misc.helpmod >(was <function info at 0x01F896F0> from numpy.lib.utils) >Overwriting who= from scipy.misc.common (was ><function who at 0x01F895F0> from numpy.lib.utils) >Overwriting source= from scipy.misc.helpmod >(was > from numpy.lib.utils) >RuntimeError: module compiled against version 1000000 of C-API but this >version >of numpy is 1000002 >Fatal Python error: numpy.core.multiarray failed to import... exiting. > > >abnormal program termination > > You have a module built against an older version of NumPy. What modules are being loaded? Perhaps it is matplotlib or SciPy -Travis From haase at msg.ucsf.edu Thu Aug 24 17:05:21 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 14:05:21 -0700 Subject: [Numpy-discussion] possible bug in C-API Message-ID: <200608241405.21834.haase@msg.ucsf.edu> Hi, I noticed in numpy/numarray/_capi.c: NA_NewAllFromBuffer() a) the original numarray function could create arrays of any (ndim) shape, while PyArray_FromBuffer() looks to me like the returned array is always 1D. b) in the code part npy_intp size = dtype->elsize; for ... size *= self->dimensions[i]; PyArray_FromBuffer(bufferObject, dtype, size, byteoffset); Is "size" here a multiple of the itemsize!?
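The question about the "size" argument can be checked from Python. The Python-level counterpart of PyArray_FromBuffer is numpy.frombuffer, and its count argument is measured in elements of the given dtype, not in bytes; the C-level argument appears to behave the same way, which would explain a crash when a byte count is passed instead. A sketch (not from the thread):

```python
import numpy as np

# 4 float64 elements == 32 bytes of raw data:
buf = np.arange(4, dtype=np.float64).tobytes()

# count is a number of *elements*, not bytes:
ok = np.frombuffer(buf, dtype=np.float64, count=4)  # elements 0.0 .. 3.0

# Passing the byte size (32) as count overruns the buffer:
overrun = False
try:
    np.frombuffer(buf, dtype=np.float64, count=32)
except ValueError:
    overrun = True
```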
I think I got a crash (in my code) that I fixed when I set size to (the equivalent of) N.prod(array.shape). Cheers, Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 18:38:43 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 16:38:43 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> Message-ID: <44EE2A73.2080406@ee.byu.edu> David Cournapeau wrote: >>>I'm working on some macros that will allow extensions to be >>>"interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the >>>implementation is complicated by the possibility of threads and making >>>sure to handle clean-up code correctly when the interrupt returns. >>> >>> >>> >This is funny, I was just thinking about that yesterday. This is a major >problem when writing C extensions in matlab (the manual says use the >matlab allocator instead of malloc/new/whatever, but when you call a >library, you cannot do that...). > > I'm glad many people are thinking about it. There is no reason we can't have a few ways to handle the situation. Currently in SVN, the simple NPY_SIGINT_ON [code] NPY_SIGINT_OFF approach is implemented (for platforms with sigsetjmp/siglongjmp). You can already use the approach suggested: if (PyOS_InterruptOccurred()) goto error to handle interrupts. The drawback of this approach is that the loop executes more slowly because a check for the interrupt occurs many times in the loop, which costs time. The advantage is that it may work with threads (I'm not clear on whether or not PyOS_InterruptOccurred can be called without the GIL, though).
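The trade-off described above, a per-iteration interrupt check that costs a little speed but needs no setjmp machinery, can be sketched at the Python level. This is a sketch with hypothetical names; the real extension-level check would call PyOS_InterruptOccurred() in C rather than the `interrupted` callable used here.

```python
def long_computation(n, interrupted=lambda: False):
    """Sum 0..n-1, checking for an interrupt request each iteration.

    Mirrors the C pattern:  if (PyOS_InterruptOccurred()) goto error;
    """
    total = 0
    for i in range(n):
        if interrupted():        # the per-iteration check that slows the loop
            return total, False  # bail out early with a partial result
        total += i
    return total, True

# Simulate Ctrl-C arriving after five iterations:
calls = {"n": 0}
def fake_interrupt():
    calls["n"] += 1
    return calls["n"] > 5

partial, finished = long_computation(100, fake_interrupt)
print(partial, finished)   # prints: 10 False
```

The uninterrupted call `long_computation(10)` runs to completion and returns the full sum with a True flag; the interrupted one returns whatever was accumulated before the request arrived.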
>I think the case proposed by Perry is too restrictive: it is really >common to use external libraries which we do not know whether they use >memory allocation inside the processing, and there is a need to clean >that up too. > > If nothing is known about memory allocation of the external library, then I don't see how it can be safely interrupted using any mechanism. What is available now is sufficient. I played far too long with how to handle threads, but was not able to come up with a solution, so for now I've punted. -Travis From hetland at tamu.edu Thu Aug 24 18:42:19 2006 From: hetland at tamu.edu (Rob Hetland) Date: Thu, 24 Aug 2006 17:42:19 -0500 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EDFB69.8090608@ee.byu.edu> References: <44EDFB69.8090608@ee.byu.edu> Message-ID: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> In compiling matplotlib and scipy, I get errors complaining about multiply defined symbols (see below). I tried to fix this with -multiply_defined suppress but this did not work. Is there a way to make this go away?
-Rob Scipy error: c++ -bundle -undefined dynamic_lookup build/temp.macosx-10.4-i386-2.4/ Lib/sandbox/delaunay/_delaunay.o build/temp.macosx-10.4-i386-2.4/Lib/ sandbox/delaunay/VoronoiDiagramGenerator.o build/temp.macosx-10.4- i386-2.4/Lib/sandbox/delaunay/delaunay_utils.o build/temp.macosx-10.4- i386-2.4/Lib/sandbox/delaunay/natneighbors.o -Lbuild/temp.macosx-10.4- i386-2.4 -o build/lib.macosx-10.4-i386-2.4/scipy/sandbox/delaunay/ _delaunay.so /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status error: Command "c++ -bundle -undefined dynamic_lookup build/ temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/_delaunay.o build/ temp.macosx-10.4-i386-2.4/Lib/sandbox/delaunay/ VoronoiDiagramGenerator.o build/temp.macosx-10.4-i386-2.4/Lib/sandbox/ delaunay/delaunay_utils.o build/temp.macosx-10.4-i386-2.4/Lib/sandbox/ delaunay/natneighbors.o -Lbuild/temp.macosx-10.4-i386-2.4 -o build/ lib.macosx-10.4-i386-2.4/scipy/sandbox/delaunay/_delaunay.so" failed with exit status 1 matplotlib error: c++ -bundle -undefined dynamic_lookup build/temp.macosx-10.4-i386-2.4/ agg23/src/agg_trans_affine.o build/temp.macosx-10.4-i386-2.4/agg23/ src/agg_path_storage.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_bezier_arc.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_curves.o build/temp.macosx-10.4-i386-2.4/agg23/src/ 
agg_vcgen_dash.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_vcgen_stroke.o build/temp.macosx-10.4-i386-2.4/agg23/src/ agg_rasterizer_scanline_aa.o build/temp.macosx-10.4-i386-2.4/agg23/ src/agg_image_filters.o build/temp.macosx-10.4-i386-2.4/src/_image.o build/temp.macosx-10.4-i386-2.4/src/ft2font.o build/temp.macosx-10.4- i386-2.4/src/mplutils.o build/temp.macosx-10.4-i386-2.4/CXX/ cxx_extensions.o build/temp.macosx-10.4-i386-2.4/CXX/cxxsupport.o build/temp.macosx-10.4-i386-2.4/CXX/IndirectPythonInterface.o build/ temp.macosx-10.4-i386-2.4/CXX/cxxextensions.o build/temp.macosx-10.4- i386-2.4/src/_ns_backend_agg.o -L/usr/local/lib -L/usr/lib -L/usr/ local/lib -L/usr/lib -lpng -lz -lstdc++ -lm -lfreetype -lz -lstdc++ - lm -o build/lib.macosx-10.4-i386-2.4/matplotlib/backends/ _ns_backend_agg.so /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/src/_image.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/src/_ns_backend_agg.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status /usr/bin/ld: multiple definitions of symbol __NPY_SIGINT_BUF build/temp.macosx-10.4-i386-2.4/src/_image.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) build/temp.macosx-10.4-i386-2.4/src/_ns_backend_agg.o definition of __NPY_SIGINT_BUF in section (__DATA,__common) collect2: ld returned 1 exit status error: Command "c++ -bundle -undefined dynamic_lookup build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_trans_affine.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_path_storage.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_bezier_arc.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_curves.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_vcgen_dash.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_vcgen_stroke.o build/ temp.macosx-10.4-i386-2.4/agg23/src/agg_rasterizer_scanline_aa.o build/temp.macosx-10.4-i386-2.4/agg23/src/agg_image_filters.o build/ 
temp.macosx-10.4-i386-2.4/src/_image.o build/temp.macosx-10.4- i386-2.4/src/ft2font.o build/temp.macosx-10.4-i386-2.4/src/mplutils.o build/temp.macosx-10.4-i386-2.4/CXX/cxx_extensions.o build/ temp.macosx-10.4-i386-2.4/CXX/cxxsupport.o build/temp.macosx-10.4- i386-2.4/CXX/IndirectPythonInterface.o build/temp.macosx-10.4- i386-2.4/CXX/cxxextensions.o build/temp.macosx-10.4-i386-2.4/src/ _ns_backend_agg.o -L/usr/local/lib -L/usr/lib -L/usr/local/lib -L/usr/ lib -lpng -lz -lstdc++ -lm -lfreetype -lz -lstdc++ -lm -o build/ lib.macosx-10.4-i386-2.4/matplotlib/backends/_ns_backend_agg.so" failed with exit status 1 On Aug 24, 2006, at 2:18 PM, Travis Oliphant wrote: > Albert Strasheim wrote: > >> Dialog pops up: >> >> --------------------------- >> python.exe - Application Error >> --------------------------- >> The exception unknown software exception (0xc0000029) occurred in the >> application at location 0x7c86d474. >> >> >> Click on OK to terminate the program >> Click on CANCEL to debug the program >> --------------------------- >> OK Cancel >> --------------------------- >> >> In the Python console it prints: >> >> -2147483648 >> >> If you can give me some idea of what should be happening, I can >> take a look >> at fixing it. >> >> > > When does the crash happen? Does it happen when you press Ctrl-C? > > What's supposed to be happening is that we are registering a > handler for > Ctrl-C that longjmps back to just after the code between NPY_SIGINT_ON > and NPY_SIGINT_OFF. > > I'm not sure how to actually accomplish something like that under > windows as I've heard mention that longjmp should not be used with > signals under win32. > > The easy "fix" is to just define NPY_NO_SIGNAL in setup.py when on a > platform that doesn't support using signals and longjmp (like > apparently > win32). > > If you could figure out what to do instead on windows that would be > preferable.
> > -Travis > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From oliphant at ee.byu.edu Thu Aug 24 18:52:25 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 16:52:25 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> Message-ID: <44EE2DA9.50908@ee.byu.edu> Rob Hetland wrote: >In compiling matplotlib and scipy, I get errors complaining about >multiply defined symbols (See below). I tried to fix this with - >multiply_defined suppress but this did not work. Is there a way to >make this go away? > > define NPY_NO_SIGNAL for now. -Travis From paul_midgley2000 at yahoo.co.uk Thu Aug 24 19:28:59 2006 From: paul_midgley2000 at yahoo.co.uk (Paul Midgley) Date: Thu, 24 Aug 2006 23:28:59 +0000 (GMT) Subject: [Numpy-discussion] Numpy-discussion Digest, Vol 3, Issue 61 In-Reply-To: Message-ID: <20060824232859.61786.qmail@web25710.mail.ukl.yahoo.com> Thanks for your help -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant at ee.byu.edu Thu Aug 24 19:39:45 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 17:39:45 -0600 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> Message-ID: <44EE38C1.8000804@ee.byu.edu> Rob Hetland wrote: >In compiling matplotlib and scipy, I get errors complaining about >multiply defined symbols (See below). I tried to fix this with - >multiply_defined suppress but this did not work. Is there a way to >make this go away? > > Can you try current SVN again, to see if it now works? -Travis From cookedm at physics.mcmaster.ca Thu Aug 24 19:40:55 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 24 Aug 2006 19:40:55 -0400 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE2A73.2080406@ee.byu.edu> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> Message-ID: <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> On Aug 24, 2006, at 18:38 , Travis Oliphant wrote: > > You can already use the approach suggested: > > if (PyOS_InterruptOccurred()) goto error > > to handle interrupts. The drawback of this approach is that the loop > executes more slowly because a check for the interrupt occurs many > times > in the loop which costs time. > > The advantage is that it may work with threads (I'm not clear on > whether > or not PyOS_InterruptOccurred can be called without the GIL, though). It should be; it's pure C code:

int
PyOS_InterruptOccurred(void)
{
        if (!interrupted)
                return 0;
        interrupted = 0;
        return 1;
}

(where interrupted is a static int). -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From haase at msg.ucsf.edu Thu Aug 24 20:09:48 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 17:09:48 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules Message-ID: <200608241709.48522.haase@msg.ucsf.edu> Hi, I get TypeError: array cannot be safely cast to required type when calling hstack() ( which calls concatenate() ) on two arrays being an int32 and a float32 respectively. I understand now that an int32 cannot be safely converted into a float32 but why does concatenate not automatically up(?) cast to float64 ?? Is this really required to be done *explicitly* every time ? ** In general it makes float32 cumbersome to use. ** ( Background: my large image data is float32 (float64 would require too much memory) and the hstack call happens inside scipy plt module when I try to get a 1d line profile and the "y_data" is hstack'ed with the x-axis values (int32)) ) Thanks, Sebastian Haase From oliphant at ee.byu.edu Thu Aug 24 20:11:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 18:11:09 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <88EB405A-22AB-4B7C-B009-B96288E45B7E@physics.mcmaster.ca> Message-ID: <44EE401D.3070006@ee.byu.edu> David M. Cooke wrote: >On Aug 24, 2006, at 18:38 , Travis Oliphant wrote: > > > >>You can already use the approach suggested: >> >>if (PyOS_InterruptOccurred()) goto error >> >>to handle interrupts. The drawback of this approach is that the loop >>executes more slowly because a check for the interrupt occurs many >>times >>in the loop which costs time.
>> >>The advantage is that it may work with threads (I'm not clear on >>whether >>or not PyOS_InterruptOccurred can be called without the GIL, though). >> >> > >It should be; it's pure C code:
>
>int
>PyOS_InterruptOccurred(void)
>{
>        if (!interrupted)
>                return 0;
>        interrupted = 0;
>        return 1;
>}
>
I tried to test this with threads using the following program and it doesn't seem to respond to interrupts.

import threading
import numpy.core.multiarray as ncm

class mythread(threading.Thread):
    def run(self):
        print "Starting thread", self.getName()
        ncm.test_interrupt(1)
        print "Ending thread", self.getName()

m1 = mythread()
m2 = mythread()
m1.start()
m2.start()

From oliphant at ee.byu.edu Thu Aug 24 20:28:19 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 18:28:19 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241709.48522.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> Message-ID: <44EE4423.2010909@ee.byu.edu> Sebastian Haase wrote: >Hi, >I get >TypeError: array cannot be safely cast to required type > >when calling hstack() ( which calls concatenate() ) >on two arrays being a int32 and a float32 respectively. > >I understand now that a int32 cannot be safely converted into a float32 >but why does concatenate not automatically >up(?) cast to float64 ?? > > Basically, NumPy is following Numeric's behavior of raising an error in this case of unsafe casting in concatenate. For functions that are not universal-function objects, mixed-type behavior works basically just like Numeric did (using the ordering of the types to determine which one to choose as the output). It could be argued that the ufunc-rules should be followed instead.
-Travis From wbaxter at gmail.com Thu Aug 24 20:39:50 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 25 Aug 2006 09:39:50 +0900 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Sasha wrote: > On 8/24/06, Bill Baxter wrote: > >[snip] it would be > > nice to add a concise definition of "ufunc" to the numpy glossary: > > http://www.scipy.org/Numpy_Glossary. > > > > done > > > Can anyone come up with such a definition? > > I copied the definition from the old Numeric manual. > > > Here's my stab at it: > > > > ufunc: A function that operates element-wise on arrays. > > > This is not entirely correct. Ufuncs operate on anything that can be > passed to asarray: arrays, python lists, tuples or scalars. Hey Sasha. Your definition may be more correct, but I have to confess I don't understand it. "Universal function. Universal functions follow similar rules for broadcasting, coercion and "element-wise operation"." What is "coercion"? (Who or what is being coerced to do what?) and what does it mean to "follow similar rules for ... coercion"? Similar to what? --bill From haase at msg.ucsf.edu Thu Aug 24 20:47:08 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 17:47:08 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <44EE4423.2010909@ee.byu.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> Message-ID: <200608241747.08195.haase@msg.ucsf.edu> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > Sebastian Haase wrote: > >Hi, > >I get > >TypeError: array cannot be safely cast to required type > > > >when calling hstack() ( which calls concatenate() ) > >on two arrays being a int32 and a float32 respectively. > > > >I understand now that a int32 cannot be safely converted into a float32 > >but why does concatenate not automatically > >up(?) cast to float64 ??
> > Basically, NumPy is following Numeric's behavior of raising an error in > this case of unsafe casting in concatenate. For functions that are not > universal-function objects, mixed-type behavior works basically just > like Numeric did (using the ordering of the types to determine which one > to choose as the output). > > It could be argued that the ufunc-rules should be followed instead. > > -Travis > Are you saying the ufunc-rules would convert "int32-float32" to float64 and hence make my code "just work" !? And why are there two sets of rules ? Are the Numeric rules used at many places ? Thanks, Sebastian Haase
????????????????????????? ?[????] http://love-match.bz/pc/?06 ?----------------------------------------------------------------- ???????/??????? ????18?????????????????????????? ????????????????????????????? ????????????????????????????? ?[????] http://love-match.bz/pc/?06 ?----------------------------------------------------------------- ???`????/??? ????????????????????? ?????????????????????? ?????????????? ?[????] http://love-match.bz/pc/?06 ?----------------------------------------------------------------- ???????????????????? ?????????????????????????????????? ????????????? ??------------------------------------------------------------- ???????????????????????????????? ??[??????????]?http://love-match.bz/pc/?06 ??------------------------------------------------------------- ????????????????????? ??????????????????????????? ??????????????????? ??????????????????????????????? ??[??????????]?http://love-match.bz/pc/?06 ?????????????????????????????????? ??????????3-6-4-533 ?????? 139-3668-7892 From simon at arrowtheory.com Fri Aug 25 07:42:19 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 25 Aug 2006 12:42:19 +0100 Subject: [Numpy-discussion] tensor dot ? Message-ID: <20060825124219.6581a608.simon@arrowtheory.com> >>> numpy.dot.__doc__ matrixproduct(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. Does numpy support summing over arbitrary dimensions, as in tensor calculus ? I could cook up something that uses transpose and dot, but it's reasonably tricky i think :) Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From david at ar.media.kyoto-u.ac.jp Thu Aug 24 23:11:26 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 12:11:26 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE2A73.2080406@ee.byu.edu> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> Message-ID: <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > I'm glad many people are thinking about it. There is no reason we > can't have a few ways to handle the situation. > > Currently in SVN, the simple > > NPY_SIGINT_ON > [code] > NPY_SIGINT_OFF > > approach is implemented (for platforms with sigsetjmp/siglongjmp). > > You can already use the approach suggested: > > if (PyOS_InterruptOccurred()) goto error > > to handle interrupts. The drawback of this approach is that the loop > executes more slowly because a check for the interrupt occurs many times > in the loop which costs time. > I am not sure whether there are other solutions... This is the way I saw signal handling done in common programs when I looked for a solution for my matlab extensions. > The advantage is that it may work with threads (I'm not clear on whether > or not PyOS_InterruptOccurred can be called without the GIL, though). > > >> I think the case proposed by Perry is too restrictive: it is really >> common to use external libraries which we do not know whether they use >> memory allocation inside the processing, and there is a need to clean >> that too. >> >> >> > > If nothing is known about memory allocation of the external library, > then I don't see how it can be safely interrupted using any mechanism. > If the library does nothing w.r.t signals, then you just have to clean all the things related to the library once you have caught a signal. This is no different than cleaning your own code.
Actually, cleaning libraries is the main reason why I implemented this signal scheme in matlab extensions, since they cannot use the matlab memory allocator, and because they live in the same memory space, calling several times the same extension can corrupt really quickly most of matlab memory space. Maybe there are some problems I am not aware of ? David From oliphant.travis at ieee.org Thu Aug 24 22:46:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 20:46:51 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241747.08195.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> Message-ID: <44EE649B.1020500@ieee.org> Sebastian Haase wrote: > On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > > Are you saying the ufunc-rules would convert "int32-float32" to float64 and > hence make my code "just work" !? > Yes. That's what I'm saying (but you would get float64 out --- but if you didn't want that then you would have to be specific). > And why are there two sets of rules ? > Because there are two modules (multiarray and umath) where the functionality is implemented. > Are the Numeric rules used at many places ? > Not that many. I did abstract the notion to a C-API: PyArray_ConvertToCommonType and implemented the scalars-don't-cause-upcasting part of the ufunc rules in that code. But, I followed the old-style Numeric coercion rules for the rest of it (because I was adapting Numeric). Right now, unless there are strong objections, I'm leaning to changing that so that the same coercion rules are used whenever a common type is needed. It would not be that difficult of a change. 
-Travis From ndarray at mac.com Thu Aug 24 23:10:24 2006 From: ndarray at mac.com (Sasha) Date: Thu, 24 Aug 2006 23:10:24 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: On 8/24/06, Bill Baxter wrote: [snip] > Hey Sasha. Your defnition may be more correct, but I have to confess > I don't understand it. > > "Universal function. Universal functions follow similar rules for > broadcasting, coercion and "element-wise operation"." > > What is "coercion"? (Who or what is being coerced to do what?) and > what does it mean to "follow similar rules for ... coercion"? Similar > to what? This is not my definition, I just rephrased the introductory paragraph from the ufunc section of the "Numerical Python" . Feel free to edit it so that it makes more sense. Please note that I originally intended the "Numpy Glossary" not as a place to learn new terms, but as a guide for those who know more than one meaning of the terms or more than one way to call something. (See the preamble.) This may explain why I did not include "ufunc" to begin with. (I remember deciding not to include "ufunc", but I don't remember the exact reason anymore.) I would welcome an effort to make the glossary more novice friendly, but not at the expense of oversimplifying things. BTW, do you think "Rank ... (2) number of orthogonal dimensions of a matrix" is clear? Considering that matrix is defined a "an array of rank 2"? Is "rank" in linear algebra sense common enough in numpy documentation to be included in the glossary? For comparison, here are a few alternative formulations of matrix rank definition: "The rank of a matrix or a linear map is the dimension of the image of the matrix or the linear map, corresponding to the number of linearly independent rows or columns of the matrix, or to the number of nonzero singular values of the map." 
"In linear algebra, the column rank (row rank respectively) of a matrix A with entries in some field is defined to be the maximal number of columns (rows respectively) of A which are linearly independent." From oliphant.travis at ieee.org Thu Aug 24 23:20:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 21:20:45 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> Message-ID: <44EE6C8D.5000208@ieee.org> David Cournapeau wrote: >>> >>> >> If nothing is known about memory allocation of the external library, >> then I don't see how it can be safely interrupted using any mechanism. >> >> > If the library does nothing w.r.t signals, then you just have to clean > all the things related to the library once > you caught a signal. This is no different than cleaning your own code. > Right, as long as you know what to do you are O.K. I was just thinking about a hypothetical situation where the library allocated some temporary memory that it was going to free at the end of the subroutine but then an interrupt jumped out back to your code before it could finish. In a case like this, you would have to use the "check if interrupt has occurred" approach before and after the library call. But, then that library call is not interruptable. I could also see wanting to be able to interrupt a library calculation when you know it isn't allocating memory. So, I like having both possibilities available. So far we haven't actually put anything in the numpy code itself. I'm leaning to putting PyOS_InterruptOccurred-style checks in a few places at some point down the road. 
-Travis From haase at msg.ucsf.edu Thu Aug 24 23:59:19 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 24 Aug 2006 20:59:19 -0700 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <44EE649B.1020500@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE649B.1020500@ieee.org> Message-ID: <44EE7597.7000908@msg.ucsf.edu> Travis Oliphant wrote: > Sebastian Haase wrote: >> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: >> >> Are you saying the ufunc-rules would convert "int32-float32" to float64 and >> hence make my code "just work" !? >> > Yes. That's what I'm saying (but you would get float64 out --- but if > you didn't want that then you would have to be specific). > >> And why are there two sets of rules ? >> > Because there are two modules (multiarray and umath) where the > functionality is implemented. > >> Are the Numeric rules used at many places ? >> > Not that many. I did abstract the notion to a C-API: > PyArray_ConvertToCommonType and implemented the > scalars-don't-cause-upcasting part of the ufunc rules in that code. > But, I followed the old-style Numeric coercion rules for the rest of it > (because I was adapting Numeric). > > Right now, unless there are strong objections, I'm leaning to changing > that so that the same coercion rules are used whenever a common type is > needed. If you mean keeping the ufunc rules (which seem more liberal, fix my problem ;-) and might make using float32 in general more painless) - I would be all for it ... simplifying is always good in the long term ... Cheers, Sebastian > > It would not be that difficult of a change. 
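The coercion rule Travis settles on here is the one that stuck: with a modern NumPy, the behavior Sebastian asked for can be checked with a short sketch (the array contents below are arbitrary placeholders):

```python
import numpy as np

a = np.zeros(3, dtype=np.int32)
b = np.zeros(3, dtype=np.float32)

# int32 values cannot all be represented exactly in float32 (only 24
# mantissa bits), so the common type chosen for the result is float64
c = np.hstack((a, b))
print(c.dtype)  # float64
```

This matches the ufunc mixed-type rule described above: int32 combined with float32 promotes to float64 instead of raising the old Numeric-style "cannot be safely cast" TypeError.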
From oliphant.travis at ieee.org Fri Aug 25 00:03:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 24 Aug 2006 22:03:10 -0600 Subject: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules In-Reply-To: <200608241747.08195.haase@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> Message-ID: <44EE767E.7000207@ieee.org> Sebastian Haase wrote: > On Thursday 24 August 2006 17:28, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> Hi, >>> I get >>> TypeError: array cannot be safely cast to required type >>> >>> when calling hstack() ( which calls concatenate() ) >>> on two arrays being a int32 and a float32 respectively. >>> >>> I understand now that a int32 cannot be safely converted into a float32 >>> but why does concatenate not automatically >>> up(?) cast to float64 ?? >>> >> Basically, NumPy is following Numeric's behavior of raising an error in >> this case of unsafe casting in concatenate. For functions that are not >> universal-function objects, mixed-type behavior works basically just >> like Numeric did (using the ordering of the types to determine which one >> to choose as the output). >> >> It could be argued that the ufunc-rules should be followed instead. >> >> -Travis >> >> > Are you saying the ufunc-rules would convert "int32-float32" to float64 and > hence make my code "just work" !? > This is now the behavior in SVN. Note that this is different from both Numeric (which gave an error) and numarray (which coerced to float32). But, it is consistent with how mixed-types are handled in calculations and is thus an easier rule to explain. Thanks for the testing. 
-Travis From david at ar.media.kyoto-u.ac.jp Fri Aug 25 00:39:23 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 13:39:23 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE6C8D.5000208@ieee.org> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> Message-ID: <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > > Right, as long as you know what to do you are O.K. I was just thinking > about a hypothetical situation where the library allocated some > temporary memory that it was going to free at the end of the subroutine > but then an interrupt jumped out back to your code before it could > finish. In a case like this, you would have to use the "check if > interrupt has occurred" approach before and after the library call. Indeed. By the way, I tried something for python.thread + signals. This is posix specific, and it works as expected on linux: - first, a C extension which implements the signal handling. It has a function called hello, which is the entry point of the C module, and calls the function process (which does random computation). It checks if it got a SIGINT signal, and returns -1 if caught. Returns 0 if no SIGINT called: - extension compiled into python module (I used boost python because I am too lazy to find how to do it in C :) ) - python script which creates several threads running the hello function. They run in parallel, and ctrl+C is correctly handled. 
I think this is signal specific, and this needs to be improved (this is just meant as a toy example):

import threading
import hello
import time

class mythread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        print "Starting thread", self.getName()
        st = 0
        while st == 0:
            st = hello.foo(self.getName())
            # sleep to force the python interpreter to run
            # other threads if available
            time.sleep(1)
        if st == -1:
            print self.getName() + " got signal"
        print "Ending thread", self.getName()

nthread = 5
t = [mythread() for i in range(nthread)]
[i.start() for i in t]

Then, you have something like:

Starting thread Thread-1
Thread-1 processing... done
clean called
Starting thread Thread-5
Thread-5 processing... done
clean called
Starting thread Thread-3
Thread-3 processing... done
clean called
Starting thread Thread-2
Thread-2 processing... done
hello.c:hello signal caught line 56 for thread Thread-2
clean called
Thread-1 processing... done
clean called
Starting thread Thread-4
Thread-4 processing... done
clean called
Thread-5 processing... done
clean called
Thread-3 processing... done
hello.c:hello signal caught line 56 for thread Thread-3
clean called
Thread-2 got signal
Ending thread Thread-2
Thread-1 processing... done
clean called
Thread-4 processing... done
clean called
Thread-5 processing... done
clean called
Thread-3 got signal
Ending thread Thread-3
Thread-1 processing... done
hello.c:hello signal caught line 56 for thread Thread-1
clean called
Thread-4 processing... done
clean called
Thread-5 processing... done
hello.c:hello signal caught line 56 for thread Thread-5
clean called
Thread-1 got signal
Ending thread Thread-1
Thread-4 processing... done
clean called
Thread-5 got signal
Ending thread Thread-5
Thread-4 processing... done
clean called
Thread-4 processing... done
clean called
Thread-4 processing...
done hello.c:hello signal caught line 56 for thread Thread-4 clean called Thread-4 got signal Ending thread Thread-4 (SIGINT are received when Ctrl+C on linux) You can find all sources here: http://www.ar.media.kyoto-u.ac.jp/members/david/numpysig/ Please note that I know almost nothing about all this stuff, I just naively implemented from the example of GNU C library, and it always worked for me on matlab on my machine. I do not know if this is portable, if this can work for other signals, etc... David From oliphant.travis at ieee.org Fri Aug 25 02:10:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 00:10:26 -0600 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> Message-ID: <44EE9452.1080007@ieee.org> David Cournapeau wrote: > Indeed. > > By the way, I tried something for python.thread + signals. This is posix > specific, and it works as expected on linux: > Am I right that this could this be accomplished simply by throwing away all the interrupt handling stuff in the code and checking for PyOS_InterruptOccurred() in the place where you check for the global variable that your signal handler uses? Your signal handler does essentially what Python's signal handler already does, if I'm not mistaken. 
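The pattern Travis alludes to — the handler only sets a flag, and the long-running code polls that flag at safe points — can be sketched in pure Python. This is an illustrative sketch, not NumPy's actual code: `long_computation` and the chunk counts are made up, and `signal.raise_signal` (used to simulate Ctrl+C deterministically) requires Python 3.8+.

```python
import signal

# The handler does the minimum possible: set a global flag.
# Long-running code polls the flag at points where no temporary
# state would be left inconsistent, then cleans up and bails out.

interrupted = False

def sigint_handler(signum, frame):
    global interrupted
    interrupted = True

def long_computation(n_chunks):
    """Hypothetical chunked computation that polls the interrupt flag."""
    done = 0
    for i in range(n_chunks):
        done += 1                           # one "chunk" of work
        if i == 2:                          # simulate Ctrl+C arriving here
            signal.raise_signal(signal.SIGINT)
        if interrupted:                     # safe point: OK to stop now
            break
    return done

old_handler = signal.signal(signal.SIGINT, sigint_handler)
try:
    chunks_done = long_computation(100)
finally:
    signal.signal(signal.SIGINT, old_handler)  # restore previous handler

print(chunks_done, interrupted)  # far fewer than 100 chunks, flag set
```

Because CPython delivers signals to the main thread between bytecodes, the handler runs promptly and the loop exits at the next safe point instead of being interrupted mid-chunk — which is exactly why checking `PyOS_InterruptOccurred()` at chosen points is safer than longjmp-style handling.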
-Travis From stefan at sun.ac.za Fri Aug 25 03:45:26 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 25 Aug 2006 09:45:26 +0200 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: Message-ID: <20060825074526.GC17119@mentat.za.net> On Thu, Aug 24, 2006 at 11:10:24PM -0400, Sasha wrote: > I would welcome an effort to make the glossary more novice friendly, > but not at the expense of oversimplifying things. > > BTW, do you think "Rank ... (2) number of orthogonal dimensions of a > matrix" is clear? Considering that matrix is defined a "an array of > rank 2"? Is "rank" in linear algebra sense common enough in numpy > documentation to be included in the glossary? > > For comparison, here are a few alternative formulations of matrix rank > definition: > > "The rank of a matrix or a linear map is the dimension of the image of > the matrix or the linear map, corresponding to the number of linearly > independent rows or columns of the matrix, or to the number of nonzero > singular values of the map." > > > "In linear algebra, the column rank (row rank respectively) of a > matrix A with entries in some field is defined to be the maximal > number of columns (rows respectively) of A which are linearly > independent." > I prefer the last definition. Introductory algebra courses teach the term "linearly independent" before "orthogonal" (IIRC). As for "linear map", it has other names, too, and doesn't (in my mind) clarify the definition of rank in this context. 
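The competing definitions of matrix rank quoted above (linearly independent rows/columns vs. nonzero singular values) agree numerically; a small sketch using NumPy routines that postdate this thread (`numpy.linalg.matrix_rank`), with a made-up example matrix:

```python
import numpy as np

# Second row is twice the first, so only two rows are
# linearly independent: the matrix rank is 2.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 0.]])

# Definition via nonzero singular values (up to a tolerance):
s = np.linalg.svd(A, compute_uv=False)
rank_from_svd = int(np.sum(s > 1e-10))

print(rank_from_svd)               # 2
print(np.linalg.matrix_rank(A))    # 2
```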
Regards Stéfan From david at ar.media.kyoto-u.ac.jp Fri Aug 25 06:12:57 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Aug 2006 19:12:57 +0900 Subject: [Numpy-discussion] Handling interrupts in NumPy extensions In-Reply-To: <44EE9452.1080007@ieee.org> References: <44ECA249.3030007@ieee.org> <20060823193549.70728721@arbutus.physics.mcmaster.ca> <44ED18E6.5060100@ar.media.kyoto-u.ac.jp> <44EE2A73.2080406@ee.byu.edu> <44EE6A5E.5050807@ar.media.kyoto-u.ac.jp> <44EE6C8D.5000208@ieee.org> <44EE7EFB.2080203@ar.media.kyoto-u.ac.jp> <44EE9452.1080007@ieee.org> Message-ID: <44EECD29.5070207@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > David Cournapeau wrote: >> Indeed. >> >> By the way, I tried something for python.thread + signals. This is posix >> specific, and it works as expected on linux: >> > Am I right that this could be accomplished simply by throwing away > all the interrupt handling stuff in the code and checking for > PyOS_InterruptOccurred() in the place where you check for the global > variable that your signal handler uses? Your signal handler does > essentially what Python's signal handler already does, if I'm not mistaken. I don't know how the python signal handler works, but I believe it should do more or less the same, indeed. The key idea is that it is important to mask other signals related to interrupting. To have a relatively clear view on this, if you have not seen it, you may take a look at the GNU C doc on signal handling: http://www.gnu.org/software/libc/manual/html_node/Defining-Handlers.html#Defining-Handlers

After having given it some thought, I am wondering about what exactly we are trying to do:

- the main problem is to be able to interrupt some function which may take a long time to compute, without corrupting the whole python process.
- for that, those functions need to be able to trap the usual signals corresponding to an interrupt (SIGINT, etc... on Unix, equivalents on windows).
There are two ways to handle a signal:

- check regularly some global (that is, global to the whole process) value, and change this value when a signal is trapped. That's the easier way, but this is not thread safe as I first thought (I will code an example if I have time).
- the signal handler jumps to another point of the program where cleaning is done: this is more complicated, and I am not sure we need the complication (I have never used this scheme, so I may just miss the point totally). I don't even want to think how it works in a multi-threading environment :)

Now, the threading issue came in, and I am not sure why we need to care: this is a problem if numpy is implemented in a multi-threaded way, but I don't believe it to be the case, right ? Another solution, which is used I think in more sophisticated programs, is having one thread with high priority, whose only job is to detect signals, and to mask all signals in all other threads. Again, this seems overkill (and highly non portable) ? And this should be the python interpreter's job, no ? Actually, as this is a generic problem for any python extension code, other really smart people should have thought about that... If I am interpreting correctly what is said here http://docs.python.org/lib/module-signal.html, I believe that what you suggest (using PyOS_InterruptOccurred() at some points) is what should be done: the python interpreter makes sure that the signal is sent to the main thread, that is, the thread where numpy is executed (that's my understanding of the way the python interpreter works, not a fact).

David

From faltet at carabos.com Fri Aug 25 06:11:25 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 25 Aug 2006 12:11:25 +0200 Subject: [Numpy-discussion] [ANN] PyTables 1.3.3 released Message-ID: <200608251211.25886.faltet@carabos.com>

===========================
 Announcing PyTables 1.3.3
===========================

I'm happy to announce a new minor release of PyTables.
In this one, we have focused on improving compatibility with the latest beta versions of NumPy (0.9.8, 1.0b2, 1.0b3 and higher), adding some improvements and the typical bunch of fixes (some of them are important, like the possibility of re-using the same nested class in the declaration of table records; see later). Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed.

Changes more in depth
=====================

Improvements:

- Added some workarounds on a couple of 'features' of recent versions of NumPy. Now, PyTables should work with a broad range of NumPy versions, ranging from 0.9.8 up to 1.0b3 (and hopefully beyond, but let's see).

- When a loop for appending to a table is not flushed before the node is unbound (and hence, becomes ``killed`` in PyTables slang), like in::

      import tables as T

      class Item(T.IsDescription):
          name = T.StringCol(length=16)
          vals = T.Float32Col(0.0)

      fileh = T.openFile("/tmp/test.h5", "w")
      table = fileh.createTable(fileh.root, 'table', Item)
      for i in range(100):
          table.row.append()
      #table.flush()  # uncomment this to prevent the warning
      table = None  # Unbinding the table node!

  a ``PerformanceWarning`` is issued telling the user that it is *much* recommended to flush the buffers of a table before unbinding it. Hopefully, this will also prevent other scary errors (like ``Illegal Instruction``, ``Malloc(): trying to call free() twice``, ``Bus Error`` or ``Segmentation fault``) that some people are seeing lately and which are most probably related to this issue.

Bug fixes:

- In situations where the same metaclass is used for declaring several columns in a table, like in::

      class Nested(IsDescription):
          uid = IntCol()
          data = FloatCol()

      class B_Candidate(IsDescription):
          nested1 = Nested()
          nested2 = Nested()

  they were sharing the same column metadata behind the scenes, introducing several inconsistencies in it. This has been fixed.
- More work on the different padding conventions between NumPy/numarray. Now, all trailing spaces in chararrays are stripped off during write/read operations. This means that when retrieving NumPy chararrays, spurious trailing spaces shouldn't appear anymore (not even in the context of recarrays). The drawback is that you will lose *all* the trailing spaces, whether you want them there or not. This is not a very comfortable situation to deal with, but hopefully, things will get better when NumPy is at the core of PyTables. In the meanwhile, I hope that the current behaviour is a minor evil in most situations. This closes ticket #13 (again).

- Solved a problem with conversions from numarray chararrays to numpy objects. Before, when saving numpy chararrays with a declared length of N, none of whose components reached such a length, the dtype of the numpy chararray retrieved was the maximum length of the component strings. This has been corrected.

- Fixed a minor glitch in the detection of signedness in IntAtom classes. Thanks to Norbert Nemec for reporting this one and providing the fix.

Known bugs:

- Using ``Row.update()`` in tables with some columns marked as indexed gives a ``NotImplemented`` error although it should not. This is fixed in SVN trunk and the functionality will be available in the 1.4.x series. Meanwhile, a workaround would be refraining from declaring columns as indexed and indexing them *after* the update process (with Col.createIndex() for example).

Deprecated features:

- None

Backward-incompatible changes:

- Please, see the ``RELEASE-NOTES.txt`` file.

Important note for Windows users
================================

If you are willing to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003.
It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP

Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP

What it is
==========

PyTables is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and the numarray package (but NumPy and Numeric are also supported) to achieve maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds.

Platforms
=========

This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please, contact us in case you are experiencing problems.
Resources
=========

Go to the PyTables web site for more details: http://www.pytables.org

About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/

About numarray: http://www.stsci.edu/resources/software_hardware/numarray

To know more about the company behind the PyTables development, see: http://www.carabos.com/

Acknowledgments
===============

Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge, who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables.

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

----

**Enjoy data!**

-- The PyTables Team

From svetosch at gmx.net Fri Aug 25 06:27:51 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 25 Aug 2006 12:27:51 +0200 Subject: [Numpy-discussion] Version 1.0b3 In-Reply-To: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> References: <1214.12.216.231.149.1156278431.squirrel@webmail.ideaworks.com> Message-ID: <44EED0A7.2000103@gmx.net> kortmann at ideaworks.com schrieb: > Since no one has downloaded 1.0b3 yet, if someone wants to put up the > windows version for python2.3 i would be more than happy to be the first > person to download it :) > I'm sorry, this is *not* for python 2.3, but I posted a build of current svn for python 2.4 under windows here (direct download link): http://www.wiwi.uni-frankfurt.de/profs/nautz/downloads/software/numpy-1.0b4.dev3068.win32-py2.4.exe I didn't do anything except checking out and compiling it, so I guess this is not optimized in any way. Maybe it's still useful for some people.
cheers, Sven From charlesr.harris at gmail.com Fri Aug 25 09:34:20 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Aug 2006 07:34:20 -0600 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: <20060825074526.GC17119@mentat.za.net> References: <20060825074526.GC17119@mentat.za.net> Message-ID: Hi, On 8/25/06, Stefan van der Walt wrote: > > On Thu, Aug 24, 2006 at 11:10:24PM -0400, Sasha wrote: > > I would welcome an effort to make the glossary more novice friendly, > > but not at the expense of oversimplifying things. > > > > BTW, do you think "Rank ... (2) number of orthogonal dimensions of a > > matrix" is clear? Considering that matrix is defined a "an array of > > rank 2"? Is "rank" in linear algebra sense common enough in numpy > > documentation to be included in the glossary? > > > > For comparison, here are a few alternative formulations of matrix rank > > definition: > > > > "The rank of a matrix or a linear map is the dimension of the image of > > the matrix or the linear map, corresponding to the number of linearly > > independent rows or columns of the matrix, or to the number of nonzero > > singular values of the map." > > > > > > "In linear algebra, the column rank (row rank respectively) of a > > matrix A with entries in some field is defined to be the maximal > > number of columns (rows respectively) of A which are linearly > > independent." > > > > I prefer the last definition. Introductory algebra courses teach the > term "linearly independent" before "orthogonal" (IIRC). As for > "linear map", it has other names, too, and doesn't (in my mind) > clarify the definition of rank in this context. Matrix rank has nothing to do with numpy rank. Numpy rank is simply the number of indices required to address an element of an ndarray. 
I always thought a better name for the Numpy rank would be dimensionality, but like everything else one gets used to the numpy jargon; it only needs to be defined someplace for what it is.

Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndarray at mac.com Fri Aug 25 09:48:54 2006 From: ndarray at mac.com (Sasha) Date: Fri, 25 Aug 2006 09:48:54 -0400 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: On 8/25/06, Charles R Harris wrote: > Matrix rank has nothing to do with numpy rank. Numpy rank is simply the > number of indices required to address an element of an ndarray. I always > thought a better name for the Numpy rank would be dimensionality, but like > everything else one gets used to the numpy jargon, it only needs to be > defined someplace for what it is. That's my point exactly. The rank(2) definition was added by Sebastian Haase, who advocates the use of the term "ndims" instead of "rank". I've discussed the use of "dimensionality" in the preamble. Note that ndims stands for the number of dimensions, not dimensionality. I don't want to remove rank(2) without hearing from Sebastian first, and I appreciate his effort to improve the glossary. Maybe we should add a "matrix rank" entry instead.
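The distinction being debated is easy to demonstrate with modern NumPy (`matrix_rank` was added after this thread; the example array is made up):

```python
import numpy as np

a = np.ones((3, 3))   # a "rank-2" array in the NumPy/tensor sense

# NumPy's "rank": the number of indices needed to address an element.
print(a.ndim)                        # 2

# Linear-algebra rank: number of linearly independent rows/columns.
# All rows of `a` are identical, so this is 1, not 2.
print(np.linalg.matrix_rank(a))      # 1
```

So the two "ranks" disagree even on this trivial example, which is why the glossary needs to say which one it means.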
From haase at msg.ucsf.edu Fri Aug 25 11:18:11 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 08:18:11 -0700 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EE767E.7000207@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> Message-ID: <44EF14B3.2030904@msg.ucsf.edu> was: Re: [Numpy-discussion] hstack(arr_Int32, arr_float32) fails because of casting rules Travis Oliphant wrote: > Sebastian Haase wrote: >> On Thursday 24 August 2006 17:28, Travis Oliphant wrote: >> >>> Sebastian Haase wrote: >>> >>>> Hi, >>>> I get >>>> TypeError: array cannot be safely cast to required type >>>> >>>> when calling hstack() ( which calls concatenate() ) >>>> on two arrays being a int32 and a float32 respectively. >>>> >>>> I understand now that a int32 cannot be safely converted into a float32 >>>> but why does concatenate not automatically >>>> up(?) cast to float64 ?? >>>> >>> Basically, NumPy is following Numeric's behavior of raising an error in >>> this case of unsafe casting in concatenate. For functions that are not >>> universal-function objects, mixed-type behavior works basically just >>> like Numeric did (using the ordering of the types to determine which one >>> to choose as the output). >>> >>> It could be argued that the ufunc-rules should be followed instead. >>> >>> -Travis >>> >>> >> Are you saying the ufunc-rules would convert "int32-float32" to float64 and >> hence make my code "just work" !? >> > > This is now the behavior in SVN. Note that this is different from both > Numeric (which gave an error) and numarray (which coerced to float32). > > But, it is consistent with how mixed-types are handled in calculations > and is thus an easier rule to explain. > > Thanks for the testing. 
> > -Travis After sleeping over this, I am contemplating the cases where one would use float32 in the first place. My case yesterday, where I only had a 1d line profile of my data, I was of course OK with coercion to float64. But if you are working with 3D image data (as in medicine) or large 2D images as in astronomy, I would assume the reason to use float32 is that computer memory is too tight to afford 64 bits per pixel. This is probably why numarray tried to keep float32. Float32 can handle a few more digits of precision than int16, but not as much as int32. But I find that I almost always have int32s only because it's the default, whereas I have float32 as a clear choice to save memory. How hard would it be to change the rules back to the numarray behavior ? Who would be negatively affected ? And who positively ? Thanks for the great work. Sebastian From haase at msg.ucsf.edu Fri Aug 25 11:34:25 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 08:34:25 -0700 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: <44EF1881.8020604@msg.ucsf.edu> Sasha wrote: > On 8/25/06, Charles R Harris wrote: >> Matrix rank has nothing to do with numpy rank. Numpy rank is simply the >> number of indices required to address an element of an ndarray. I always >> thought a better name for the Numpy rank would be dimensionality, but like >> everything else one gets used to the numpy jargon, it only needs to be >> defined someplace for what it is. > > That's my point exactly. The rank(2) definition was added by > Sebastian Haase who advocates the use of the term "ndims" instead of > "rank". I've discussed the use of "dimentionality' in the preamble. > Note that ndims stands for the number of dimensions, not > dimensionality. > > I don't want to remove rank(2) without hearing from Sebastian first > and I appreciate his effort to improve the glossary.
> Maybe we should add a "matrix rank" entry instead.

My phrasing is certainly suboptimal (I only remember the German wording - and even that only faintly - "linear independent" !?) But I put it in, remembering the discussion in "numpy" on *why* array.rank (numarray) was changed to array.ndim (numpy). I just thought this page might be a good place to 'discourage usage of badly-defined terms' or at least give the argument for "ndim". [ OK: it's not "badly" defined: but there are two separate camps on *what* it should mean --- ndim is clear.] BTW: Does the "matrix" class have an m.rank attribute !? Cheers, Sebastian. From hetland at tamu.edu Fri Aug 25 12:12:55 2006 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 25 Aug 2006 11:12:55 -0500 Subject: [Numpy-discussion] numpy-1.0b3 under windows In-Reply-To: <44EE38C1.8000804@ee.byu.edu> References: <44EDFB69.8090608@ee.byu.edu> <7F4A30E8-E00E-474D-A79A-BD2313BFE5A1@tamu.edu> <44EE38C1.8000804@ee.byu.edu> Message-ID: <03D72D27-4E5B-45AB-B749-77F1926F34B6@tamu.edu> Yes, it works now. Thanks, -Rob On Aug 24, 2006, at 6:39 PM, Travis Oliphant wrote: > Rob Hetland wrote: > >> In compiling matplotlib and scipy, I get errors complaining about >> multiply defined symbols (See below). I tried to fix this with - >> multiply_defined suppress but this did not work. Is there a way to >> make this go away? >> >> > Can you try current SVN again, to see if it now works? > > -Travis > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?
> cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From robert.kern at gmail.com Fri Aug 25 14:02:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Aug 2006 13:02:10 -0500 Subject: [Numpy-discussion] users point of view and ufuncs In-Reply-To: References: <20060825074526.GC17119@mentat.za.net> Message-ID: Charles R Harris wrote: > Matrix rank has nothing to do with numpy rank. Numpy rank is simply the > number of indices required to address an element of an ndarray. I always > thought a better name for the Numpy rank would be dimensionality, but > like everything else one gets used to the numpy jargon, it only needs to > be defined someplace for what it is. "numpy rank" derives from "tensor rank" rather than "matrix rank". It's not *wrong*, but as with many things in mathematics, the term is overloaded and can be confusing. "dimensionality" is no better. A "three-dimensional array" might be [1, 2, 3], not [[[1]]]. http://mathworld.wolfram.com/TensorRank.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Fri Aug 25 08:50:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 06:50:32 -0600 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EF14B3.2030904@msg.ucsf.edu> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> <44EF14B3.2030904@msg.ucsf.edu> Message-ID: <44EEF218.7070103@ieee.org> Sebastian Haase wrote: >> This is now the behavior in SVN. Note that this is different from both >> Numeric (which gave an error) and numarray (which coerced to float32). >> >> But, it is consistent with how mixed-types are handled in calculations >> and is thus an easier rule to explain. >> >> Thanks for the testing. >> >> -Travis >> > > How hard would it be to change the rules back to the numarray behavior ? > It wouldn't be hard, but I'm not so sure that's a good idea. I do see the logic behind that approach and it is worthy of some discussion. I'll give my current opinion: The reason I changed the behavior is to get consistency so there is one set of rules on mixed-type interaction to explain. You can always do what you want by force-casting your int32 arrays to float32. There will always be some people who don't like whichever behavior is selected, but we are trying to move NumPy in a direction of consistency with fewer exceptions to explain (although this is a guideline and not an absolute requirement). Mixed-type interaction is always somewhat ambiguous. Now there is a consistent rule for both universal functions and other functions (move to a precision where both can be safely cast to --- unless one is a scalar and then its precision is ignored). If you don't want that to happen, then be clear about what data-type should be used by casting yourself. 
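The array-with-array promotion rule Travis describes is the one NumPy still applies today, so it can be checked directly (scalar behavior was later revised under NEP 50, so only arrays are used here; the arrays themselves are made up):

```python
import numpy as np

a = np.zeros(3, dtype=np.int32)
b = np.zeros(3, dtype=np.float32)

# int32 values cannot all be represented exactly in float32, so the
# result moves up to the smallest type that can hold both: float64.
print((a + b).dtype)                            # float64
print(np.promote_types(np.int32, np.float32))   # float64

# concatenate/hstack follows the same rule for mixed inputs,
# which is exactly the consistency this thread settled on.
print(np.hstack([a, b]).dtype)                  # float64
```

Users who want Sebastian's memory-saving behavior cast explicitly, e.g. `a.astype(np.float32)` before combining.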
In this case, we should probably not try and guess about what users really want in mixed data-type situations. -Travis From kwgoodman at gmail.com Fri Aug 25 14:58:06 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 25 Aug 2006 11:58:06 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix Message-ID: How do I delete a row (or list of rows) from a matrix object? To remove the n'th row in octave I use x(n,:) = []. Or n could be a vector of rows to remove. In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the elements of x without changing the size of x. In numpy do I have to turn it around and construct a list of the rows I want to keep? From charlesr.harris at gmail.com Fri Aug 25 15:19:31 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Aug 2006 13:19:31 -0600 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: <44EEF218.7070103@ieee.org> References: <200608241709.48522.haase@msg.ucsf.edu> <44EE4423.2010909@ee.byu.edu> <200608241747.08195.haase@msg.ucsf.edu> <44EE767E.7000207@ieee.org> <44EF14B3.2030904@msg.ucsf.edu> <44EEF218.7070103@ieee.org> Message-ID: Hi, On 8/25/06, Travis Oliphant wrote: > > Sebastian Haase wrote: > >> This is now the behavior in SVN. Note that this is different from > both > >> Numeric (which gave an error) and numarray (which coerced to float32). > >> > >> But, it is consistent with how mixed-types are handled in calculations > >> and is thus an easier rule to explain. > >> > >> Thanks for the testing. > >> > >> -Travis > >> > > > > How hard would it be to change the rules back to the numarray behavior ? > > > It wouldn't be hard, but I'm not so sure that's a good idea. I do see > the logic behind that approach and it is worthy of some discussion. > I'll give my current opinion: > > The reason I changed the behavior is to get consistency so there is one > set of rules on mixed-type interaction to explain. 
You can always do > what you want by force-casting your int32 arrays to float32. There > will always be some people who don't like whichever behavior is > selected, but we are trying to move NumPy in a direction of consistency > with fewer exceptions to explain (although this is a guideline and not > an absolute requirement). > > Mixed-type interaction is always somewhat ambiguous. Now there is a > consistent rule for both universal functions and other functions (move > to a precision where both can be safely cast to --- unless one is a > scalar and then its precision is ignored). I think this is a good thing. It makes it easy to remember what the function will produce. The only oddity the user has to be aware of is that int32 has more precision than float32. Probably not obvious to a newbie, but a newbie will probably be using the double defaults anyway. Which is another good reason for making double the default type. If you don't want that to happen, then be clear about what data-type > should be used by casting yourself. In this case, we should probably > not try and guess about what users really want in mixed data-type > situations. I wonder if it would be reasonable to add the dtype keyword to hstack itself? Hmmm, what are the conventions for coercions to lesser precision? That could get messy indeed, maybe it is best to leave such things alone and let the programmer deal with it by rethinking the program. In the float case that would probably mean using a float32 array instead of an int32 array. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haase at msg.ucsf.edu Fri Aug 25 15:32:25 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 12:32:25 -0700 Subject: [Numpy-discussion] coercion rules for float32 in numpy are different from numarray In-Reply-To: References: <200608241709.48522.haase@msg.ucsf.edu> <44EEF218.7070103@ieee.org> Message-ID: <200608251232.25199.haase@msg.ucsf.edu> On Friday 25 August 2006 12:19, Charles R Harris wrote: > Hi, > > On 8/25/06, Travis Oliphant wrote: > > Sebastian Haase wrote: > > >> This is now the behavior in SVN. Note that this is different from > > > > both > > > > >> Numeric (which gave an error) and numarray (which coerced to float32). > > >> > > >> But, it is consistent with how mixed-types are handled in calculations > > >> and is thus an easier rule to explain. > > >> > > >> Thanks for the testing. > > >> > > >> -Travis > > > > > > How hard would it be to change the rules back to the numarray behavior > > > ? > > > > It wouldn't be hard, but I'm not so sure that's a good idea. I do see > > the logic behind that approach and it is worthy of some discussion. > > I'll give my current opinion: > > > > The reason I changed the behavior is to get consistency so there is one > > set of rules on mixed-type interaction to explain. You can always do > > what you want by force-casting your int32 arrays to float32. There > > will always be some people who don't like whichever behavior is > > selected, but we are trying to move NumPy in a direction of consistency > > with fewer exceptions to explain (although this is a guideline and not > > an absolute requirement). > > > > Mixed-type interaction is always somewhat ambiguous. Now there is a > > consistent rule for both universal functions and other functions (move > > to a precision where both can be safely cast to --- unless one is a > > scalar and then its precision is ignored). > > I think this is a good thing. It makes it easy to remember what the > function will produce. 
The only oddity the user has to be aware of is that > int32 has more precision than float32. Probably not obvious to a newbie, > but a newbie will probably be using the double defaults anyway. Which is > another good reason for making double the default type. Not true - a numpy-(or numeric-programming) newbie working in medical imaging or astronomy would still get float32 data to work with. He/She would do some operations on the data and be surprised that memory (or disk space) blows up. > > If you don't want that to happen, then be clear about what data-type > > > should be used by casting yourself. In this case, we should probably > > not try and guess about what users really want in mixed data-type > > situations. > > I wonder if it would be reasonable to add the dtype keyword to hstack > itself? Hmmm, what are the conventions for coercions to lesser precision? > That could get messy indeed, maybe it is best to leave such things alone > and let the programmer deal with it by rethinking the program. In the float > case that would probably mean using a float32 array instead of an int32 > array. > > Chuck I think my main argument is that float32 is a very common type in (large) data processing to save memory. But I don't know about how many exceptions like an extra "float32 rule" we can handle ... I would like to hear how the numarray (STScI) folks think about this. Who else works with data of the order of GBs !? - Sebastian From oliphant.travis at ieee.org Fri Aug 25 10:01:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:01:36 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: Message-ID: <44EF02C0.6050408@ieee.org> Keith Goodman wrote: > How do I delete a row (or list of rows) from a matrix object? > > To remove the n'th row in octave I use x(n,:) = []. Or n could be a > vector of rows to remove. 
> > In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the > elements of x without changing the size of x. > > In numpy do I have to turn it around and construct a list of the rows > I want to keep? > Basically, that is true for now. I think it would be worth implementing some kind of function for making this easier. One might think of using: del a[obj] But, the problem with both of those approaches is that once you start removing arbitrary rows (or n-1 dimensional sub-spaces) from an array you very likely will no longer have a chunk of memory that can be described using the n-dimensional array memory model. So, you would have to make memory copies. This could be done, of course, and the data area of "a" altered appropriately. But, such alteration of the memory would break any other objects that have a "view" of the memory area of "a." Right now, there is no way to track which objects have such "views", and therefore no good way to tell (other than the very conservative reference count) if it is safe to re-organize the memory of "a" in this way. So, "in-place" deletion of array objects would not be particularly useful, because it would only work for arrays with no additional reference counts (i.e. simple b=a assignment would increase the reference count and make it impossible to say del a[obj]). However, a function call that returned a new array object with the appropriate rows deleted (implemented by constructing a new array with the remaining rows) would seem to be a good idea. I'll place a prototype (named delete) to that effect into SVN soon. 
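The copy-based function described here — build a fresh array from the remaining rows, leaving "a" and any views of it untouched — can be sketched in a few lines (`delete_rows` is an illustrative name; the prototype named delete mentioned above is what grew into `numpy.delete`):

```python
import numpy as np

def delete_rows(a, rows, axis=0):
    """Return a copy of `a` with the given indices removed along `axis`."""
    keep = np.ones(a.shape[axis], dtype=bool)
    keep[rows] = False
    # compress() copies the kept sub-spaces into a new array,
    # so the original array and any views of it are left alone
    return np.compress(keep, a, axis=axis)

x = np.arange(25).reshape(5, 5)
y = delete_rows(x, [1, 3])
print(y.shape)          # (3, 5)
```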
-Travis From kortmann at ideaworks.com Fri Aug 25 16:32:33 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Fri, 25 Aug 2006 13:32:33 -0700 (PDT) Subject: [Numpy-discussion] 1.0b3 in windows Message-ID: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> From oliphant at ee.byu.edu Thu Aug 24 16:17:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Aug 2006 14:17:44 -0600 Subject: [Numpy-discussion] (no subject) Message-ID: <44EE0968.1030904@ee.byu.edu> kortmann at ideaworks.com wrote: >>On Thursday 24 August 2006 09:50, kortmann at ideaworks.com wrote: >> >> >>>Sorry for my ignorance, but I have not ever heard of or used mingw32. I >>>am also using python 2.3. >>> >>> >>http://en.wikipedia.org/wiki/Mingw explains in detail. >> >> > >$HOME=C:\Documents and Settings\Administrator >CONFIGDIR=C:\Documents and Settings\Administrator\.matplotlib >loaded ttfcache file C:\Documents and >Settings\Administrator\.matplotlib\ttffont >.cache >matplotlib data path c:\python23\lib\site-packages\matplotlib\mpl-data >backend WXAgg version 2.6.3.2 >Overwriting info= from scipy.misc.helpmod >(was ction info at 0x01F896F0> from numpy.lib.utils) >Overwriting who= from scipy.misc.common (was >on who at 0x01F895F0> from numpy.lib.utils) >Overwriting source= from scipy.misc.helpmod >(was > from numpy.lib.utils) >RuntimeError: module compiled against version 1000000 of C-API but this >version >of numpy is 1000002 >Fatal Python error: numpy.core.multiarray failed to import... exiting. > > >abnormal program termination > > You have a module built against an older version of NumPy. What modules are being loaded? Perhaps it is matplotlib or SciPy -Travis Travis I tried doing it again with removing scipy and my old version of numpy. I also have matplotlib installed. is there a special way that i have to go about installing this because of matplotlib? 
From oliphant.travis at ieee.org Fri Aug 25 10:38:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:38:59 -0600 Subject: [Numpy-discussion] 1.0b3 in windows In-Reply-To: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> References: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> Message-ID: <44EF0B83.6090904@ieee.org> kortmann at ideaworks.com wrote: > Message: 4 > Date: Thu, 24 Aug 2006 14:17:44 -0600 > From: Travis Oliphant > Subject: Re: [Numpy-discussion] (no subject) > To: Discussion of Numerical Python > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > kortmann at ideaworks.com wrote: > > > > You have a module built against an older version of NumPy. What modules > are being loaded? Perhaps it is matplotlib or SciPy > You need to re-build matplotlib. They should be producing a binary that is compatible with 1.0b2 (I'm being careful to make sure future releases are binary compatible with 1.0b2). Also, make sure that you remove the build directory under numpy if you have previously built a version of numpy prior to 1.0b2. -Travis From haase at msg.ucsf.edu Fri Aug 25 16:48:23 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 13:48:23 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF02C0.6050408@ieee.org> References: <44EF02C0.6050408@ieee.org> Message-ID: <200608251348.23730.haase@msg.ucsf.edu> On Friday 25 August 2006 07:01, Travis Oliphant wrote: > Keith Goodman wrote: > > How do I delete a row (or list of rows) from a matrix object? > > > > To remove the n'th row in octave I use x(n,:) = []. Or n could be a > > vector of rows to remove. > > > > In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the > > elements of x without changing the size of x. > > > > In numpy do I have to turn it around and construct a list of the rows > > I want to keep? 
> > Basically, that is true for now. > > I think it would be worth implementing some kind of function for making > this easier. > > One might think of using: > > del a[obj] > > But, the problem with both of those approaches is that once you start > removing arbitrary rows (or n-1 dimensional sub-spaces) from an array > you very likely will no longer have a chunk of memory that can be > described using the n-dimensional array memory model. > > So, you would have to make memory copies. This could be done, of > course, and the data area of "a" altered appropriately. But, such > alteration of the memory would break any other objects that have a > "view" of the memory area of "a." Right now, there is no way to track > which objects have such "views", and therefore no good way to tell > (other than the very conservative reference count) if it is safe to > re-organize the memory of "a" in this way. > > So, "in-place" deletion of array objects would not be particularly > useful, because it would only work for arrays with no additional > reference counts (i.e. simple b=a assignment would increase the > reference count and make it impossible to say del a[obj]). > > However, a function call that returned a new array object with the > appropriate rows deleted (implemented by constructing a new array with > the remaining rows) would seem to be a good idea. > > I'll place a prototype (named delete) to that effect into SVN soon. > > -Travis > Now of course: I often needed to "insert" a column, row or section, ... ? I made a quick and dirty implementation for that myself: def insert(arr, i, entry, axis=0): """returns new array with new element inserted at index i along axis if arr.ndim>1 and if entry is scalar it gets filled in (ref. 
broadcasting) note: (original) arr does not get affected """ if i > arr.shape[axis]: raise IndexError, "index i larger than arr size" shape = list(arr.shape) shape[axis] += 1 a= N.empty(dtype=arr.dtype, shape=shape) aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) aa[:i] = aarr[:i] aa[i+1:] = aarr[i:] aa[i] = entry return a but maybe there is a way to put it in numpy directly. - Sebastian From oliphant.travis at ieee.org Fri Aug 25 10:54:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 08:54:21 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608251348.23730.haase@msg.ucsf.edu> References: <44EF02C0.6050408@ieee.org> <200608251348.23730.haase@msg.ucsf.edu> Message-ID: <44EF0F1D.3060805@ieee.org> Sebastian Haase wrote: > On Friday 25 August 2006 07:01, Travis Oliphant wrote: > >> Keith Goodman wrote: >> >>> How do I delete a row (or list of rows) from a matrix object? >>> >>> To remove the n'th row in octave I use x(n,:) = []. Or n could be a >>> vector of rows to remove. >>> >>> In numpy 0.9.9.2813 x[[1,2],:] = [] changes the values of all the >>> elements of x without changing the size of x. >>> >>> In numpy do I have to turn it around and construct a list of the rows >>> I want to keep? >>> >> Basically, that is true for now. >> >> I think it would be worth implementing some kind of function for making >> this easier. >> >> One might think of using: >> >> del a[obj] >> >> But, the problem with both of those approaches is that once you start >> removing arbitrary rows (or n-1 dimensional sub-spaces) from an array >> you very likely will no longer have a chunk of memory that can be >> described using the n-dimensional array memory model. >> >> So, you would have to make memory copies. This could be done, of >> course, and the data area of "a" altered appropriately.
But, such >> alteration of the memory would break any other objects that have a >> "view" of the memory area of "a." Right now, there is no way to track >> which objects have such "views", and therefore no good way to tell >> (other than the very conservative reference count) if it is safe to >> re-organize the memory of "a" in this way. >> >> So, "in-place" deletion of array objects would not be particularly >> useful, because it would only work for arrays with no additional >> reference counts (i.e. simple b=a assignment would increase the >> reference count and make it impossible to say del a[obj]). >> >> However, a function call that returned a new array object with the >> appropriate rows deleted (implemented by constructing a new array with >> the remaining rows) would seem to be a good idea. >> >> I'll place a prototype (named delete) to that effect into SVN soon. >> >> -Travis >> >> > Now of course: I often needed to "insert" a column, row or section, ... ? > I made a quick and dirty implementation for that myself: > def insert(arr, i, entry, axis=0): > """returns new array with new element inserted at index i along axis > if arr.ndim>1 and if entry is scalar it gets filled in (ref. broadcasting) > > note: (original) arr does not get affected > """ > if i > arr.shape[axis]: > raise IndexError, "index i larger than arr size" > shape = list(arr.shape) > shape[axis] += 1 > a= N.empty(dtype=arr.dtype, shape=shape) > aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > aa[:i] = aarr[:i] > aa[i+1:] = aarr[i:] > aa[i] = entry > return a > Sure, it makes sense to parallel the delete function. 
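The same copy-and-fill insert can be written with slice objects instead of the transpose trick; a sketch against a recent NumPy (`insert_along` is an illustrative name — NumPy's own `insert` function covers this in the library itself):

```python
import numpy as np

def insert_along(arr, i, entry, axis=0):
    """Return a new array with `entry` inserted at index i along axis."""
    if i > arr.shape[axis]:
        raise IndexError("index i larger than arr size")
    shape = list(arr.shape)
    shape[axis] += 1
    out = np.empty(shape, dtype=arr.dtype)

    def sl(s):
        # build an index tuple that touches only `axis`
        idx = [slice(None)] * out.ndim
        idx[axis] = s
        return tuple(idx)

    out[sl(slice(None, i))] = arr[sl(slice(None, i))]
    out[sl(slice(i + 1, None))] = arr[sl(slice(i, None))]
    out[sl(i)] = entry          # a scalar entry broadcasts across the new slot
    return out

a = np.arange(6).reshape(2, 3)
print(insert_along(a, 1, 99, axis=1))   # a column of 99s inserted at index 1
```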
-Travis From oliphant.travis at ieee.org Fri Aug 25 11:01:58 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 09:01:58 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF0F1D.3060805@ieee.org> References: <44EF02C0.6050408@ieee.org> <200608251348.23730.haase@msg.ucsf.edu> <44EF0F1D.3060805@ieee.org> Message-ID: <44EF10E6.5080501@ieee.org> Travis Oliphant wrote: >> Now of course: I often needed to "insert" a column, row or section, ... ? >> I made a quick and dirty implementation for that myself: >> def insert(arr, i, entry, axis=0): >> """returns new array with new element inserted at index i along axis >> if arr.ndim>1 and if entry is scalar it gets filled in (ref. broadcasting) >> >> note: (original) arr does not get affected >> """ >> if i > arr.shape[axis]: >> raise IndexError, "index i larger than arr size" >> shape = list(arr.shape) >> shape[axis] += 1 >> a= N.empty(dtype=arr.dtype, shape=shape) >> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) >> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) >> aa[:i] = aarr[:i] >> aa[i+1:] = aarr[i:] >> aa[i] = entry >> return a >> >> > > Sure, it makes sense to parallel the delete function. > Although there is already an insert function present in numpy.... -Travis From haase at msg.ucsf.edu Fri Aug 25 17:47:20 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 14:47:20 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF10E6.5080501@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> Message-ID: <200608251447.20953.haase@msg.ucsf.edu> On Friday 25 August 2006 08:01, Travis Oliphant wrote: > Travis Oliphant wrote: > >> Now of course: I often needed to "insert" a column, row or section, ... > >> ?
I made a quick and dirty implementation for that myself: > >> def insert(arr, i, entry, axis=0): > >> """returns new array with new element inserted at index i along axis > >> if arr.ndim>1 and if entry is scalar it gets filled in (ref. > >> broadcasting) > >> > >> note: (original) arr does not get affected > >> """ > >> if i > arr.shape[axis]: > >> raise IndexError, "index i larger than arr size" > >> shape = list(arr.shape) > >> shape[axis] += 1 > >> a= N.empty(dtype=arr.dtype, shape=shape) > >> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > >> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > >> aa[:i] = aarr[:i] > >> aa[i+1:] = aarr[i:] > >> aa[i] = entry > >> return a > > > > Sure, it makes sense to parallel the delete function. > > Although there is already an insert function present in numpy.... > > -Travis Yeah - I saw that ... maybe one could introduce consistent namings like arr.copy_insert() arr.copy_delete() arr.copy_append() for the new ones. This emphasizes the fact that a copy is created ... (Append is also a function often asked for when people expect "list capabilities" - did I miss others ?) -Sebastian From oliphant.travis at ieee.org Fri Aug 25 19:16:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 17:16:09 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608251447.20953.haase@msg.ucsf.edu> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> Message-ID: <44EF84B9.5000909@ieee.org> Sebastian Haase wrote: > On Friday 25 August 2006 08:01, Travis Oliphant wrote: > >> Travis Oliphant wrote: > >>>> Now of course: I often needed to "insert" a column, row or section, > >>>> ... ?
I made a quick and dirty implementation for that myself: >>>> def insert(arr, i, entry, axis=0): >>>> """returns new array with new element inserted at index i along >>>> axis if arr.ndim>1 and if entry is scalar it gets filled in (ref. >>>> broadcasting) >>>> >>>> note: (original) arr does not get affected >>>> """ >>>> if i > arr.shape[axis]: >>>> raise IndexError, "index i larger than arr size" >>>> shape = list(arr.shape) >>>> shape[axis] += 1 >>>> a= N.empty(dtype=arr.dtype, shape=shape) >>>> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) >>>> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) >>>> aa[:i] = aarr[:i] >>>> aa[i+1:] = aarr[i:] >>>> aa[i] = entry >>>> return a >>> >>> Sure, it makes sense to parallel the delete function. >> >> Although there is already an insert function present in numpy.... >> >> -Travis > > > > Yeah - I saw that ... > > maybe one could introduce consistent namings like > > arr.copy_insert() > > arr.copy_delete() > > arr.copy_append() > I've come up with adding the functions (not methods at this point) deletefrom insertinto appendto (syntactic sugar for concatenate but with a separate argument for the array and the extra stuff) --- is this needed? These functions will operate along a particular axis (default is axis=0 to match concatenate). Comments? -Travis From haase at msg.ucsf.edu Fri Aug 25 19:24:47 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 25 Aug 2006 16:24:47 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: <200608251624.47782.haase@msg.ucsf.edu> On Friday 25 August 2006 16:16, Travis Oliphant wrote: > Sebastian Haase wrote: > > On Friday 25 August 2006 08:01, Travis Oliphant wrote: > >> Travis Oliphant wrote: > >>>> Now of course: I often needed to "insert" a column, row or section, > >>>> ... ?
I made a quick and dirty implementation for that myself: > >>>> def insert(arr, i, entry, axis=0): > >>>> """returns new array with new element inserted at index i along > >>>> axis if arr.ndim>1 and if entry is scalar it gets filled in (ref. > >>>> broadcasting) > >>>> > >>>> note: (original) arr does not get affected > >>>> """ > >>>> if i > arr.shape[axis]: > >>>> raise IndexError, "index i larger than arr size" > >>>> shape = list(arr.shape) > >>>> shape[axis] += 1 > >>>> a= N.empty(dtype=arr.dtype, shape=shape) > >>>> aa=N.transpose(a, [axis]+range(axis)+range(axis+1,a.ndim)) > >>>> aarr=N.transpose(arr, [axis]+range(axis)+range(axis+1,arr.ndim)) > >>>> aa[:i] = aarr[:i] > >>>> aa[i+1:] = aarr[i:] > >>>> aa[i] = entry > >>>> return a > >>> > >>> Sure, it makes sense to parallel the delete function. > >> > >> Although there is already and insert function present in numpy.... > >> > >> -Travis > > > > Yeah - I saw that ... > > maybe one could introduce consistent namings like > > arr.copy_insert() > > arr.copy_delete() > > arr.copy_append() > > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntatic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? not for me. -S. From kwgoodman at gmail.com Fri Aug 25 19:47:00 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 25 Aug 2006 16:47:00 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/25/06, Travis Oliphant wrote: > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntatic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? 
> > These functions will operate along a particular axis (default is axis=0 > to match concatenate). It is probably obvious to everyone except me: what is the syntax? If x is 5x5 and I want to delete rows 2 and 4 is it deletefrom(x, [1,3], axis=0)? From robert.kern at gmail.com Fri Aug 25 19:55:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Aug 2006 18:55:51 -0500 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: Travis Oliphant wrote: > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto > > appendto (syntactic sugar for concatenate but with a separate argument > for the array and the extra stuff) --- is this needed? > > These functions will operate along a particular axis (default is axis=0 > to match concatenate). > > Comments? I would drop appendto(). I also recommend leaving them as functions and not making methods from them. This will help prevent people from thinking that these modify the arrays in-place. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From oliphant.travis at ieee.org Fri Aug 25 20:04:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 25 Aug 2006 18:04:57 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: <44EF9029.7020807@ieee.org> Keith Goodman wrote: > On 8/25/06, Travis Oliphant wrote: > > >> I've come up with adding the functions (not methods at this point) >> >> deletefrom >> insertinto >> >> appendto (syntatic sugar for concatenate but with a separate argument >> for the array and the extra stuff) --- is this needed? >> >> These functions will operate along a particular axis (default is axis=0 >> to match concatenate). >> > > It is probably obvious to everyone except me: what is the syntax? > No, I'm sure it isn't obvious to anyone. Here's what I'm implementing (I'm using the default axis=None now which I like because it's consistent with everything else and it forces you to pick an axis for >1d arrays --- this also gives some purpose for the appendonto function) deletefrom(arr, obj, axis=None) where obj is either an integer, a slice object, or a sequence of integers indicating the rows to delete: > If x is 5x5 and I want to delete rows 2 and 4 is it deletfrom(x, [1,3], axis=0)? > Yes, if you are counting from 1. -Travis From torgil.svensson at gmail.com Fri Aug 25 20:22:34 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Sat, 26 Aug 2006 02:22:34 +0200 Subject: [Numpy-discussion] 1.0b3 in windows In-Reply-To: <44EF0B83.6090904@ieee.org> References: <2546.12.216.231.149.1156537953.squirrel@webmail.ideaworks.com> <44EF0B83.6090904@ieee.org> Message-ID: Not really recommended. But it might "work" with just running the script twice. I'm doing that with beta1 and the matplotlib that was current at the time of that release. Laziness i guess. 
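The prototype did land in NumPy under the name delete, with the obj/axis signature Travis describes above; Keith's 5x5 example, counting rows from zero:

```python
import numpy as np

x = np.arange(25).reshape(5, 5)

# remove the second and fourth rows (indices 1 and 3, zero-based)
y = np.delete(x, [1, 3], axis=0)
print(y.shape)              # (3, 5)

# obj can also be a single integer or a slice object
z = np.delete(x, slice(None, None, 2), axis=0)   # drop rows 0, 2, 4
print(z.shape)              # (2, 5)
```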
//Torgil On 8/25/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > Message: 4 > > Date: Thu, 24 Aug 2006 14:17:44 -0600 > > From: Travis Oliphant > > Subject: Re: [Numpy-discussion] (no subject) > > To: Discussion of Numerical Python > > > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > > kortmann at ideaworks.com wrote: > > > > > > > > You have a module built against an older version of NumPy. What modules > > are being loaded? Perhaps it is matplotlib or SciPy > > > > You need to re-build matplotlib. They should be producing a binary that > is compatible with 1.0b2 (I'm being careful to make sure future releases > are binary compatible with 1.0b2). > > Also, make sure that you remove the build directory under numpy if you > have previously built a version of numpy prior to 1.0b2. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From faltet at carabos.com Sat Aug 26 03:20:04 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 09:20:04 +0200 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF84B9.5000909@ieee.org> Message-ID: <200608260920.05184.faltet@carabos.com> Hi, On Saturday 26 August 2006 01:55, Robert Kern wrote: > Travis Oliphant wrote: > > I've come up with adding the functions (not methods at this point) > > > > deletefrom > > insertinto > > > > appendto (syntactic sugar for concatenate but with a separate argument > > for the array and the extra stuff) --- is this needed? > > > > These functions will operate along a particular axis (default is axis=0 > > to match concatenate). > > > > Comments? > > I would drop appendto(). I also recommend leaving them as functions and not > making methods from them. This will help prevent people from thinking that > these modify the arrays in-place. But there are already quite a few methods in NumPy that don't modify the array in-place (swapaxes, flatten, ravel or squeeze, but I guess many more). I'm personally an addict to encapsulate as much functionality as possible in methods (but perhaps I'm biased by an insane use of TAB in ipython console). Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From faltet at carabos.com Sat Aug 26 04:05:39 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 10:05:39 +0200 Subject: [Numpy-discussion] [RFE] Support for version 3 of array protocol in numarray Message-ID: <200608261005.42388.faltet@carabos.com> Hi, I've lately run into problems in numarray-->numpy conversions which are due to a lack of support of the array protocol version 3 on behalf of numarray. For more info on this issue see: http://projects.scipy.org/scipy/numpy/ticket/256 and http://projects.scipy.org/scipy/numpy/ticket/266 Question: is the numarray crew going to add this support anytime soon? If not, I'd advocate to retain support for version 2 in NumPy at least for some time (until numarray gets the support), although I don't know whether this will complicate things a lot in NumPy. I personally don't need this functionality as I've found a workaround for PyTables (i.e. using the numpy.ndarray factory in order to create the NumPy object directly from the numarray buffer), but I think this would be very useful in helping other users (end-users mainly) in the numarray-->NumPy transition. Thanks, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From oliphant.travis at ieee.org Sat Aug 26 04:34:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 26 Aug 2006 02:34:15 -0600 Subject: [Numpy-discussion] [RFE] Support for version 3 of array protocol in numarray In-Reply-To: <200608261005.42388.faltet@carabos.com> References: <200608261005.42388.faltet@carabos.com> Message-ID: <44F00787.1020500@ieee.org> Francesc Altet wrote: > Hi, > > I've lately run into problems in numarray-->numpy conversions which are due to > a lack of support of the array protocol version 3 on behalf of numarray.
For > more info on this issue see: > > http://projects.scipy.org/scipy/numpy/ticket/256 > > and > > http://projects.scipy.org/scipy/numpy/ticket/266 > > Question: is the numarray crew going to add this support anytime soon? If not, > I'd advocate to retain support for version 2 in NumPy at least for sometime > (until numarray gets the support), although I don't know whether this will > complicate things a lot in NumPy. > > I personally don't need this functionality as I've found a workaround for > PyTables (i.e. using the numpy.ndarray factory in order to create the NumPy > object directly from the numarray buffer), but I think this would be very > useful in helping other users (end-users mainly) in the numarray-->NumPy > transition. > Remember it's only the Python-side of version 2 of the protocol that is not supported. The C-side is still supported. Thus, it's only objects which don't export the C-side of the interface that are affected. In numarray that is the chararray and the recarray. Normal numarray arrays should work fine as the C-side of version 2 is still supported. I think the number of objects supporting the Python side of version 2 of the protocol is small enough that it is not worth the extra hassle (and attribute lookup time) in NumPy to support it. It would be a good thing if numarray supported version 3 of the protocol by adding the __array_interface__ attribute to support the Python side of version 3. -Travis From oliphant.travis at ieee.org Sat Aug 26 05:44:34 2006 From: oliphant.travis at ieee.org (Travis E. Oliphant) Date: Sat, 26 Aug 2006 03:44:34 -0600 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available Message-ID: <44F01802.8050505@ieee.org> The 4th beta release of NumPy 1.0 has just been made available. NumPy 1.0 represents the culmination of over 18 months of work to unify the Numeric and Numarray array packages into a single best-of-breed array package for Python. 
NumPy supports all the features of Numeric and Numarray with a healthy dose of its own improved features. It's time to start porting your applications to use NumPy as Numeric is no longer maintained and Numarray will only be maintained for a few more months. Porting is not difficult, especially using the compatibility layers numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in those packages. The full C-API of Numeric is supported as is the C-API of Numarray. More information is available at http://numpy.scipy.org NumPy Developers From numpy at mspacek.mm.st Sat Aug 26 06:06:42 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Sat, 26 Aug 2006 03:06:42 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array Message-ID: <44F01D32.9080103@mspacek.mm.st> Hello, I'm a bit ignorant of optimization in numpy. I have a movie with 65535 32x32 frames stored in a 3D array of uint8 with shape (65535, 32, 32). I load it from an open file f like this: >>> import numpy as np >>> data = np.fromfile(f, np.uint8, count=65535*32*32) >>> data = data.reshape(65535, 32, 32) I'm picking several thousand frames more or less randomly from throughout the movie and finding the mean frame over those frames: >>> meanframe = data[frameis].mean(axis=0) frameis is a 1D array of frame indices with no repeated values in it. If it has say 4000 indices in it, then the above line takes about 0.5 sec to complete on my system. I'm doing this for a large number of different frameis, some of which can have many more indices in them. All this takes many minutes to complete, so I'm looking for ways to speed it up. If I divide it into 2 steps: >>> temp = data[frameis] >>> meanframe = temp.mean(axis=0) and time it, I find the first step takes about 0.2 sec, and the second takes about 0.3 sec. So it's not just the mean() step, but also the indexing step that's taking some time.
If I flatten with ravel: >>> temp = data[frameis].ravel() >>> meanframe = temp.mean(axis=0) then the first step still takes about 0.2 sec, but the mean() step drops to about 0.1 sec. But of course, this is taking a flat average across all pixels in the movie, which isn't what I want to do. I have a feeling that the culprit is the non-contiguity of the movie frames being averaged, but I don't know how to proceed. Any ideas? Could reshaping the data somehow speed things up? Would weave.blitz or weave.inline or pyrex help? I'm running numpy 0.9.8 Thanks, Martin From oliphant.travis at ieee.org Sat Aug 26 06:26:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 26 Aug 2006 04:26:32 -0600 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F01D32.9080103@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> Message-ID: <44F021D8.5070002@ieee.org> Martin Spacek wrote: > Hello, > > I'm a bit ignorant of optimization in numpy. > > I have a movie with 65535 32x32 frames stored in a 3D array of uint8 > with shape (65535, 32, 32). I load it from an open file f like this: > > >>> import numpy as np > >>> data = np.fromfile(f, np.uint8, count=65535*32*32) > >>> data = data.reshape(65535, 32, 32) > > I'm picking several thousand frames more or less randomly from > throughout the movie and finding the mean frame over those frames: > > >>> meanframe = data[frameis].mean(axis=0) > > frameis is a 1D array of frame indices with no repeated values in it. If > it has say 4000 indices in it, then the above line takes about 0.5 sec > to complete on my system. I'm doing this for a large number of different > frameis, some of which can have many more indices in them. All this > takes many minutes to complete, so I'm looking for ways to speed it up.
> > If I divide it into 2 steps: > > >>> temp = data[frameis] > >>> meanframe = temp.mean(axis=0) > > and time it, I find the first step takes about 0.2 sec, and the second > takes about 0.3 sec. So it's not just the mean() step, but also the > indexing step that's taking some time. > If frameis is 1-D, then you should be able to use temp = data.take(frameis,axis=0) for the first step. This can be quite a bit faster (and is a big reason why take is still around). There are several reasons for this (one of which is that index checking is done over the entire list when using indexing). -Travis From wbaxter at gmail.com Sat Aug 26 07:42:32 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 26 Aug 2006 20:42:32 +0900 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <200608260920.05184.faltet@carabos.com> References: <44EF84B9.5000909@ieee.org> <200608260920.05184.faltet@carabos.com> Message-ID: On 8/26/06, Francesc Altet wrote: > > I'm personally an addict to encapsulate as much functionality as possible > in > methods (but perhaps I'm biased by an insane use of TAB in ipython > console). You can still get tab completion for functions: numpy. Even if it's your custom to "from numpy import *" you can still also do an "import numpy" or "import numpy as N". --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Sat Aug 26 08:13:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 26 Aug 2006 21:13:15 +0900 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: <44EF84B9.5000909@ieee.org> References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/26/06, Travis Oliphant wrote: > > > I've come up with adding the functions (not methods at this point) > > deletefrom > insertinto "delete" and "insert" really would be better. The current "insert" function seems inaptly named. 
What it does sounds more like "overlay" or "set_masked". ... or the existing "putmask" which I see does a similar thing. Actually there seems to be a little doc-bug there or something. numpy.insert claims it differs from putmask in that it only accepts a vector of values with the same number of vals as the number of non-zero entries in the mask, but a quick test reveals it's quite happy with a different number and cycles through them. In [31]: a = numpy.zeros((3,3)) In [32]: numpy.insert(a, [[0,1,0],[1,0,0],[1,0,0]], [4,5]) In [33]: a Out[33]: array([[ 0., 4., 0.], [ 5., 0., 0.], [ 4., 0., 0.]]) Anyway, in the end nothing has really been inserted, existing entries have just been replaced. So "insert" seems like a much better name for a function that actually puts in a new row or column. --bb From svetosch at gmx.net Sat Aug 26 08:13:30 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 26 Aug 2006 14:13:30 +0200 Subject: [Numpy-discussion] memory corruption bug Message-ID: <44F03AEA.7010403@gmx.net> Hi, I experienced this strange bug which caused a totally unrelated variable to be overwritten (no exception or error was raised, so it took me a while to rule out any errors of my own). The context where this is in is a method of a class (Vecm.getSW()), and the instance of Vecm is created within a different class (GG.__init__). Now, the affected variable is in the namespace of GG (it's GG.urate), and so I would think that anything local in Vecm.getSW() should not affect GG.urate, right? Originally I did: glx[lag:, :] -= temp But that caused the described problem. Then I tried: glx[lag:, :] = glx[lag:, :] - temp But the same problem remains. Then I worked around the slice assignment like this: temp4 = r_[zeros([lag, n_y]), temp] glx = glx - temp4 And everything is ok! However, when I alter the second line of this workaround to: glx -= temp4 The problem reappears!
So I'm not even sure whether this is one or two bugs... This is with yesterday's numpy svn on windows, but the same thing happens with an earlier svn (~b2) as well. If you need further info, please tell me how to provide it. Thanks, Sven From svetosch at gmx.net Sat Aug 26 08:20:10 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 26 Aug 2006 14:20:10 +0200 Subject: [Numpy-discussion] round() bug Message-ID: <44F03C7A.4060908@gmx.net> Hi, is this normal behavior?: >>> import numpy as n; print n.mat(0.075).round(2); print n.mat(0.575).round(2) [[ 0.08]] [[ 0.57]] Again, yesterday's svn on windows. cheers, Sven From nadavh at visionsense.com Sat Aug 26 09:45:39 2006 From: nadavh at visionsense.com (Nadav Horesh) Date: Sat, 26 Aug 2006 15:45:39 +0200 Subject: [Numpy-discussion] tensor dot ? Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8F051@exchange2k.envision.co.il> I once wrote a function "tensormultiply" which is a part of numarray (undocumented). You can borrow it from there. Nadav -----Original Message----- From: numpy-discussion-bounces at lists.sourceforge.net on behalf of Simon Burton Sent: Fri 25-Aug-06 14:42 To: numpy-discussion at lists.sourceforge.net Cc: Subject: [Numpy-discussion] tensor dot ? >>> numpy.dot.__doc__ matrixproduct(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. Does numpy support summing over arbitrary dimensions, as in tensor calculus ? I could cook up something that uses transpose and dot, but it's reasonably tricky i think :) Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com ------------------------------------------------------------------------- Using Tomcat but need to do more? Need to support web services, security? 
Get stuff done quickly with pre-integrated technology to make your job easier Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From wbaxter at gmail.com Sat Aug 26 08:52:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 26 Aug 2006 21:52:01 +0900 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: <44F03AEA.7010403@gmx.net> References: <44F03AEA.7010403@gmx.net> Message-ID: You're sure it's not just pass-by-reference semantics biting you? If you make an array and pass it to another class or function, by default they just get a reference to the same array. So e.g.: a = numpy.array([1,2,3]) some_class.set_array(a) a[1] = 10 Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. If you don't want that sharing then you need to make an explicit copy of a by calling a.copy(). Watch out for lists or dicts of arrays too. The Python idiom for copying a list, new_list = list_orig[:], won't copy the contents of elements that are arrays. If you want to be sure to make complete copies of complex data structures, there's the deepcopy method of the copy module: new_list = copy.deepcopy(list_orig). I found a bunch of these sorts of bugs in some code I ported over from Matlab last week. Matlab uses copy semantics for everything, so if you pass a matrix A to a function in Matlab you can always treat it as a fresh local copy inside the function. Not so with Python. I found that locating and fixing those bugs was the most difficult thing about porting Matlab code to Numpy (that, and the fact that some major toolkit or function you use in Matlab doesn't have an equivalent in Numpy... like eigs()).
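The reference-versus-copy behavior described above can be seen in a short session. This is a minimal sketch; the variable names are illustrative, not taken from the thread:

```python
import numpy as np

a = np.array([1, 2, 3])
b = a            # b is just another name for the same array data
b[1] = 10
print(a)         # the change made through b is visible through a

c = a.copy()     # an explicit copy owns its own data
c[0] = 99
print(a[0])      # still 1; modifying c does not touch a
```

The same applies to arrays passed into functions or stored on objects: only an explicit copy() (or copy.deepcopy for nested containers of arrays) breaks the link.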
--bb On 8/26/06, Sven Schreiber wrote: > > Hi, > I experienced this strange bug which caused a totally unrelated variable > to be overwritten (no exception or error was raised, so it took me while > to rule out any errors of my own). > > The context where this is in is a method of a class (Vecm.getSW()), and > the instance of Vecm is created within a different class (GG.__init__). > Now, the affected variable is in the namespace of GG (it's GG.urate), > and so I would think that anything local in Vecm.getSW() should not > affect GG.urate, right? > > Originally I did: > > glx[lag:, :] -= temp > > But that caused the described problem. Then I tried: > > glx[lag:, :] = glx[lag:, :] - temp > > But the same problem remains. Then I worked around the slice assignment > like this: > > temp4 = r_[zeros([lag, n_y]), temp] > glx = glx - temp4 > > And everything is ok! However, when I alter the second line of this > workaround to: > > glx -= temp4 > > The problem reappears! So I'm not even sure whether this is one or two > bugs... > > This is with yesterday's numpy svn on windows, but the same thing > happens with an earlier svn (~b2) as well. If you need further info, > please tell me how to provide it. > > Thanks, > Sven
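One way Sven's symptom could arise without any numpy bug: if glx is a view into a larger array, an in-place operation like glx -= temp writes through to the shared buffer, while glx = glx - temp rebinds the name to freshly allocated memory and leaves the original data alone. A hedged sketch of that distinction (the array names here are invented for illustration):

```python
import numpy as np

base = np.arange(6.0)
view = base[2:]       # a view: shares memory with base

view -= 10            # in-place: writes through to base's buffer
print(base)           # base is modified from index 2 on

base2 = np.arange(6.0)
view2 = base2[2:]
view2 = view2 - 10    # rebinding: a brand-new array; base2 is untouched
print(base2)
```

This matches the reported pattern: the subtraction that allocated a new array was harmless, while both in-place variants clobbered data elsewhere.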
From kwgoodman at gmail.com Sat Aug 26 10:05:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sat, 26 Aug 2006 07:05:16 -0700 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: On 8/26/06, Bill Baxter wrote: > On 8/26/06, Travis Oliphant wrote: > > > > > I've come up with adding the functions (not methods at this point) > > > > deletefrom > > insertinto > > > "delete" and "insert" really would be better. The current "insert" > function seems inaptly named. What it does sounds more like "overlay" or > "set_masked". I prefer delete and insert too. I guess it is OK that del and delete are similar (?) From svetosch at gmx.net Sat Aug 26 11:12:31 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 26 Aug 2006 17:12:31 +0200 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: <44F064DF.1090805@gmx.net> I appreciate your warnings, thanks. However, they don't seem to apply here, or why would my described workaround work at all in that case? Also, afaict, the affected variable is not even passed to the class where the problematic assignment happens. -sven Bill Baxter schrieb: > You're sure it's not just pass-by-reference semantics biting you? > If you make an array and pass it to another class or function, by > default they just get a reference to the same array. > so e.g.: > > a = numpy.array([1,2,3]) > some_class.set_array(a) > a[1] = 10 > > Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. > If you don't want that sharing then you need to make an explicit copy of > a by calling a.copy(). > Watch out for lists or dicts of arrays too. The Python idiom for > copying a list: new_list = list_orig[:], won't copy the contents of > elements that are arrays.
If you want to be sure to make complete copies > of complex data structures, there's the deepcopy method of the copy > module. new_list = copy.deepcopy(list_orig). > > I found a bunch of these sorts of bugs in some code I ported over from > Matlab last week. Matlab uses copy semantics for everything, so if you > pass a matrix A to a function in Matlab you can always treat it as a > fresh local copy inside the function. Not so with Python. I found that > locating and fixing those bugs was the most difficult thing about > porting Matlab code to Numpy (that and the lack of some major toolkit or > function you use in Matlab doesn't have an equivalent in Numpy... like > eigs()). > > --bb > > > > On 8/26/06, *Sven Schreiber* > wrote: > > Hi, > I experienced this strange bug which caused a totally unrelated variable > to be overwritten (no exception or error was raised, so it took me while > to rule out any errors of my own). > > The context where this is in is a method of a class ( Vecm.getSW()), and > the instance of Vecm is created within a different class (GG.__init__). > Now, the affected variable is in the namespace of GG (it's GG.urate), > and so I would think that anything local in Vecm.getSW () should not > affect GG.urate, right? > > Originally I did: > > glx[lag:, :] -= temp > > But that caused the described problem. Then I tried: > > glx[lag:, :] = glx[lag:, :] - temp > > But the same problem remains. Then I worked around the slice assignment > like this: > > temp4 = r_[zeros([lag, n_y]), temp] > glx = glx - temp4 > > And everything is ok! However, when I alter the second line of this > workaround to: > > glx -= temp4 > > The problem reappears! So I'm not even sure whether this is one or two > bugs... > > This is with yesterday's numpy svn on windows, but the same thing > happens with an earlier svn (~b2) as well. If you need further info, > please tell me how to provide it. 
> > Thanks, > Sven > From fullung at gmail.com Sat Aug 26 11:20:15 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 26 Aug 2006 17:20:15 +0200 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: <44F03AEA.7010403@gmx.net> Message-ID: A complete code snippet that reproduces the bug would be most helpful. If there is a memory corruption problem, it might show up if we run the problematic code under Valgrind. Regards, Albert > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Sven Schreiber > Sent: 26 August 2006 14:14 > To: numpy-discussion > Subject: [Numpy-discussion] memory corruption bug > > Hi, > I experienced this strange bug which caused a totally unrelated variable > to be overwritten (no exception or error was raised, so it took me while > to rule out any errors of my own).
From charlesr.harris at gmail.com Sat Aug 26 12:02:53 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:02:53 -0600 Subject: [Numpy-discussion] round() bug In-Reply-To: <44F03C7A.4060908@gmx.net> References: <44F03C7A.4060908@gmx.net> Message-ID: Hi, On 8/26/06, Sven Schreiber wrote: > > Hi, > > is this normal behavior?: > > >>> import numpy as n; print n.mat(0.075).round(2); print > n.mat(0.575).round(2) > [[ 0.08]] > [[ 0.57]] In [7]: (arange(100)*.5).round() Out[7]: array([ 0., 0., 1., 2., 2., 2., 3., 4., 4., 4., 5., 6., 6., 6., 7., 8., 8., 8., 9., 10., 10., 10., 11., 12., 12., 12., 13., 14., 14., 14., 15., 16., 16., 16., 17., 18., 18., 18., 19., 20., 20., 20., 21., 22., 22., 22., 23., 24., 24., 24., 25., 26., 26., 26., 27., 28., 28., 28., 29., 30., 30., 30., 31., 32., 32., 32., 33., 34., 34., 34., 35., 36., 36., 36., 37., 38., 38., 38., 39., 40., 40., 40., 41., 42., 42., 42., 43., 44., 44., 44., 45., 46., 46., 46., 47., 48., 48., 48., 49., 50.]) It looks like numpy does round to even. Knuth has a discussion of rounding that is worth reading, although he prefers round to odd. The basic idea is to avoid the systematic bias that comes from always rounding in one direction. Another thing to bear in mind is that floating point isn't always what it seems due to the conversion between decimal and binary representation: In [8]: print '%25.18f'%.075 0.074999999999999997 Throw in multiplication, different precisions in the internal computations of the fpu, rounding in the print routine, and other complications, and it is tough to know precisely what should happen.
For instance: In [15]: '%25.18f'%(mat(0.575)*100) Out[15]: ' 57.499999999999992895' In [16]: '%25.18f'%(around(mat(0.575)*100)) Out[16]: ' 57.000000000000000000' In [17]: '%25.18f'%(around(mat(0.575)*100)/100) Out[17]: ' 0.569999999999999951' And you can see that .575 after conversion to IEEE floating point and scaling was properly rounded down and showed up as .57 after the default print precision is taken into account. Python, on the other hand, always rounds up: In [12]: for i in range(10) : print '%25.18f'%round(i*.5) ....: 0.000000000000000000 1.000000000000000000 1.000000000000000000 2.000000000000000000 2.000000000000000000 3.000000000000000000 3.000000000000000000 4.000000000000000000 4.000000000000000000 5.000000000000000000 Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:22:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:22:33 -0600 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: Hi, On 8/26/06, Bill Baxter wrote: > > You're sure it's not just pass-by-reference semantics biting you? > If you make an array and pass it to another class or function, by default > they just get a reference to the same array. > so e.g.: > > a = numpy.array ([1,2,3]) > some_class.set_array(a) > a[1] = 10 > > Then both the local 'a' and the 'a' that some_class has are now [1,10,3]. > If you don't want that sharing then you need to make an explicit copy of a > by calling a.copy (). > Watch out for lists or dicts of arrays too. The python idom for copying > a list: new_list = list_orig[:], won't copy the contents of elements that > are array. If you want to be sure to make complete copies of complex data > structures, there's the deepcopy method of the copy module. new_list = > copy.deepcopy(list_orig). 
> > I found a bunch of these sorts of bugs in some code I ported over from > Matlab last week. Matlab uses copy semantics for everything, > Matlab does copy on write, so it maintains a reference until an element is modified, at which point it makes a copy. I believe it does this for efficiency and memory conservation, probably the latter because it doesn't seem to have garbage collection. I could be wrong about that, though. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:30:00 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:30:00 -0600 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <44EF0F1D.3060805@ieee.org> <44EF10E6.5080501@ieee.org> <200608251447.20953.haase@msg.ucsf.edu> <44EF84B9.5000909@ieee.org> Message-ID: Hi, On 8/26/06, Keith Goodman wrote: > > On 8/26/06, Bill Baxter wrote: > > On 8/26/06, Travis Oliphant wrote: > > > > > > > > I've come up with adding the functions (not methods at this point) > > > > > > deletefrom > > > insertinto > > > > > > "delete" and "insert" really would be better. The current "insert" > > function seems inaptly named. What it does sounds more like "overlay" > or > > "set_masked". > > I prefer delete and insert too. I guess it is OK that del and delete > are similar (?) Me too, although remove could be used instead of delete. Is there a problem besides compatibility with removing or changing the old insert? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 12:35:12 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 10:35:12 -0600 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: Hi, On 8/26/06, Albert Strasheim wrote: > > A complete code snippet that reproduces the bug would be most helpful. +1. 
I too suspect that what you have here is a reference/copy problem. The only thing that is local to the class is the reference (pointer), the data is global. Chuck From torgil.svensson at gmail.com Sat Aug 26 13:02:52 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Sat, 26 Aug 2006 19:02:52 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev Message-ID: Hi ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I first thought I had a performance issue but discovered that std() used lots of memory and therefore caused lots of swapping. I want to get an array where element i is the standard deviation of row i in the 2D array. Using valgrind on the std() function... $ valgrind --tool=massif python -c "from numpy import *; a=reshape(arange(100000*100),(100000,100)).std(axis=1)" ... showed me a peak of 200Mb memory while iterating line by line... $ valgrind --tool=massif python -c "from numpy import *; a=array([x.std() for x in reshape(arange(100000*100),(100000,100))])" ... got a peak of 40Mb memory. This seems unnecessary since we know before calculations what the output shape will be and should therefore be able to preallocate memory. My original problem was to get a moving average and a moving standard deviation (120k rows and N=1000). For average I guess convolve should perform well, but is there anything smart for std()? For now I use ... >>> moving_std=array([a[i:i+n].std() for i in range(len(a)-n)]) which seems to perform quite well.
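For the moving-statistics question above, one convolution-based alternative to the list comprehension is to convolve both the values and their squares and combine the results. This is a sketch, not code from the thread; the function name is invented, and the usual caveat applies that subtracting nearly equal quantities (mean of squares minus squared mean) can lose precision when the values are large relative to the window's spread:

```python
import numpy as np

def moving_mean_std(a, n):
    """Moving mean and (population) standard deviation over
    length-n windows, using two convolutions instead of a Python loop."""
    a = np.asarray(a, dtype=float)
    kernel = np.ones(n) / n
    mean = np.convolve(a, kernel, mode='valid')        # E[x] per window
    meansq = np.convolve(a * a, kernel, mode='valid')  # E[x^2] per window
    var = np.maximum(meansq - mean * mean, 0.0)        # clamp tiny negatives
    return mean, np.sqrt(var)

a = np.arange(20.0)
mean, std = moving_mean_std(a, 5)
```

With mode='valid' the result has len(a) - n + 1 entries and agrees with the per-window loop [a[i:i+n].std() for i in range(len(a) - n + 1)] up to floating-point error.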
BR, //Torgil From charlesr.harris at gmail.com Sat Aug 26 13:49:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 11:49:33 -0600 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: On 8/26/06, Torgil Svensson wrote: > > Hi > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > first thought I had a performance issue but discovered that std() used > lots of memory and therefore caused lots of swapping. > > I want to get an array where element i is the standard deviation of row > i in the 2D array. Using valgrind on the std() function... > > $ valgrind --tool=massif python -c "from numpy import *; > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > ... showed me a peak of 200Mb memory while iterating line by line... > > $ valgrind --tool=massif python -c "from numpy import *; > a=array([x.std() for x in reshape(arange(100000*100),(100000,100))])" > > ... got a peak of 40Mb memory. > > This seems unnecessary since we know before calculations what the > output shape will be and should therefore be able to preallocate > memory. > > > My original problem was to get a moving average and a moving standard > deviation (120k rows and N=1000). For average I guess convolve should > perform well, but is there anything smart for std()? For now I use ... Why not use convolve for the std also? You can't subtract the average first, but you could convolve the square of the vector and then use some variant of std = sqrt((convsqrs - n*avg**2)/(n-1)). There are possible precision problems but they may not matter for your application, especially if the moving window isn't really big. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim.hochberg at ieee.org Sat Aug 26 13:59:38 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sat, 26 Aug 2006 10:59:38 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F01D32.9080103@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> Message-ID: <44F08C0A.1070008@ieee.org> Martin Spacek wrote: > Hello, > > I'm a bit ignorant of optimization in numpy. > > I have a movie with 65535 32x32 frames stored in a 3D array of uint8 > with shape (65535, 32, 32). I load it from an open file f like this: > > >>> import numpy as np > >>> data = np.fromfile(f, np.uint8, count=65535*32*32) > >>> data = data.reshape(65535, 32, 32) > > I'm picking several thousand frames more or less randomly from > throughout the movie and finding the mean frame over those frames: > > >>> meanframe = data[frameis].mean(axis=0) > > frameis is a 1D array of frame indices with no repeated values in it. If > it has say 4000 indices in it, then the above line takes about 0.5 sec > to complete on my system. I'm doing this for a large number of different > frameis, some of which can have many more indices in them. All this > takes many minutes to complete, so I'm looking for ways to speed it up. > > If I divide it into 2 steps: > > >>> temp = data[frameis] > >>> meanframe = temp.mean(axis=0) > > and time it, I find the first step takes about 0.2 sec, and the second > takes about 0.3 sec. So it's not just the mean() step, but also the > indexing step that's taking some time. > > If I flatten with ravel: > > >>> temp = data[frameis].ravel() > >>> meanframe = temp.mean(axis=0) > > then the first step still takes about 0.2 sec, but the mean() step drops > to about 0.1 sec. But of course, this is taking a flat average across > all pixels in the movie, which isn't what I want to do. > > I have a feeling that the culprit is the non contiguity of the movie > frames being averaged, but I don't know how to proceed. > > Any ideas? 
Could reshaping the data somehow speed things up? Would > weave.blitz or weave.inline or pyrex help? > > I'm running numpy 0.9.8 > > Thanks, > > Martin > Martin, Here's an approach (mean_accumulate) that avoids making any copies of the data. It runs almost 4x as fast as your approach (called baseline here) on my box. Perhaps this will be useful: import numpy as np frames = 65535 samples = 4000 data = (256 * np.random.random((frames, 32, 32))).astype(np.uint8) indices = np.arange(frames) np.random.shuffle(indices) indices = indices[:samples] def mean_baseline(data, indices): return data[indices].mean(axis=0) def mean_accumulate(data, indices): result = np.zeros([32, 32], float) for i in indices: result += data[i] result /= len(indices) return result if __name__ == "__main__": import timeit print mean_baseline(data, indices)[0,:8] print timeit.Timer("s.mean_baseline(s.data, s.indices)", "import scratch as s").timeit(10) print mean_accumulate(data, indices)[0,:8] print timeit.Timer("s.mean_accumulate(s.data, s.indices)", "import scratch as s").timeit(10) This gives: [ 126.947 127.39175 128.03725 129.83425 127.98925 126.866 128.5352 127.6205 ] 3.95907664242 [ 126.947 127.39175 128.03725 129.83425 127.98925 126.866 128.53525 127.6205 ] 1.06913644053 I also wondered if sorting indices would help since it would help improve locality of reference, but when I measured that it appeared to help not at all. regards, -tim From nvf at MIT.EDU Sat Aug 26 14:00:51 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Sat, 26 Aug 2006 13:00:51 -0500 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: Message-ID: <88B1CCEA-9383-458A-8DC5-FDEFCCEF01E5@mit.edu> On Aug 26, 2006, at 7:05 AM, Keith Goodman wrote: > On 8/26/06, Bill Baxter wrote: >> On 8/26/06, Travis Oliphant wrote: >> >>> >>> I've come up with adding the functions (not methods at this point) >>> >>> deletefrom >>> insertinto >> >> >> "delete" and "insert" really would be better.
The current "insert" >> function seems inaptly named. What it does sounds more like >> "overlay" or >> "set_masked". > > I prefer delete and insert too. I guess it is OK that del and delete > are similar (?) How about "deleted" and "inserted" to parallel "sorted"? "delete" and "insert" sound very imperative and side-effects-ish. Nick From schaffer at optonline.net Sat Aug 26 14:07:08 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 14:07:08 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F01802.8050505@ieee.org> References: <44F01802.8050505@ieee.org> Message-ID: <44F08DCC.6060800@optonline.net> Travis E. Oliphant wrote: > Porting is not difficult especially using the compatibility layers > numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in > those packages. The full C-API of Numeric is supported as is the C-API > of Numarray. > this is not true of numpy.core.records (nee numarray.records): 1. numarray's records.py does not show up in numpy.numarray. 2. my code that uses recarrays is now broken if i use numpy.core.records; for one thing, you have no .info attribute. another example: strings pushed into the arrays *apparently* were stripped automagically in the old recarray (so we coded appropriately), but now are not. 3. near zero docstrings for this module, hard to see how the new records works. 4. last year i made a case for the old records to return a list of the column names. it looks like the column names are now attributes of the record object, any chance of getting a list of them recarrayObj.get_colNames() or some such? yes, in working code, we know what the names are, but in test code we are creating recarrays from parsing of Excel spreadsheets, and for testing purposes, its nice to know what records THINKS are the names of all the columns. 
Les Schaffer From robert.kern at gmail.com Sat Aug 26 15:28:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 14:28:20 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: Les Schaffer wrote: > 4. last year i made a case for the old records to return a list of the > column names. it looks like the column names are now attributes of the > record object, any chance of getting a list of them > recarrayObj.get_colNames() or some such? yes, in working code, we know > what the names are, but in test code we are creating recarrays from > parsing of Excel spreadsheets, and for testing purposes, its nice to > know what records THINKS are the names of all the columns. In [2]: from numpy import * In [3]: rec.fromarrays(ones(10, dtype=float) Display all 628 possibilities? (y or n) In [3]: a = rec.fromarrays([ones(10, dtype=float), ones(10, dtype=int)], names='float,int', formats=[float, int]) In [4]: a Out[4]: recarray([(1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1)], dtype=[('float', '>f8'), ('int', '>i4')]) In [6]: a.dtype.names Out[6]: ('float', 'int') -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Aug 26 15:29:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 14:29:39 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: Les Schaffer wrote: > 3. near zero docstrings for this module, hard to see how the new > records works. 
http://www.scipy.org/RecordArrays -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schaffer at optonline.net Sat Aug 26 15:50:29 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 15:50:29 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: <44F0A605.407@optonline.net> Robert Kern wrote: > http://www.scipy.org/RecordArrays > which didn't help one iota. look, someone is charging for documentation, but the claim is the free docstrings have docs. for the records module, this ain't so. documentation means someone knows what is the complete public interface. yes, examples help. earlier, you said: > In [6]: a.dtype.names > Out[6]: ('float', 'int') congratulations, this can be the first docstring in records. now what about the incompatibility between old and new. les schaffer From aisaac at american.edu Sat Aug 26 16:11:56 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 16:11:56 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0A605.407@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> Message-ID: On Sat, 26 Aug 2006, Les Schaffer apparently wrote: > congratulations, this can be the first docstring in > records. now what about the incompatibility between old > and new I am always mystified when someone requesting free help adopts a pissy tone if they do not immediately get what they wish. It reminds me a bit of my youngest child, age 7, whom I am still teaching the advantages of politeness. 
Cheers, Alan Isaac From schaffer at optonline.net Sat Aug 26 16:07:25 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 16:07:25 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> Message-ID: <44F0A9FD.1040809@optonline.net> Alan G Isaac wrote: > I am always mystified when someone requesting free help > adopts a pissy tone if they do not immediately > get what they wish. > > It reminds me a bit of my youngest child, age 7, > whom I am still teaching the advantages of politeness. > you are refering to robert kern i take it???? because i am 52. and relax, i have given plenty of free help in my life, and constantly asked for it, pissy tones and all. so save the moral speech for your friends. les From aisaac at american.edu Sat Aug 26 16:31:45 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 16:31:45 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0A9FD.1040809@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> Message-ID: On Sat, 26 Aug 2006, Les Schaffer apparently wrote: > save the moral speech I did not say anything about morals. I spoke only of *advantages* of politeness, which someone age 52 might still need to ponder. Of course I bothered to write because I read this list and appreciate in addition to its helpfulness that it generally maintains a more polite tone. This too has value. 
Cheers, Alan Isaac From schaffer at optonline.net Sat Aug 26 16:27:50 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sat, 26 Aug 2006 16:27:50 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> Message-ID: <44F0AEC6.2080708@optonline.net> Alan G Isaac wrote: > Of course I bothered to write because I read this list and > appreciate in addition to its helpfulness that it generally > maintains a more polite tone. This too has value. > > > so, you want to work on improving the documentation of this poorly documented module? then lets get down to details. i'll pitch in some time to add docstrings, if i know they will be used. les From robert.kern at gmail.com Sat Aug 26 16:37:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 15:37:43 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F0AEC6.2080708@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: Les Schaffer wrote: > i'll pitch in some > time to add docstrings, if i know they will be used. Of course they will. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From aisaac at american.edu Sat Aug 26 17:08:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 26 Aug 2006 17:08:27 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org><44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: > Les Schaffer wrote: >> i'll pitch in some >> time to add docstrings, if i know they will be used. On Sat, 26 Aug 2006, Robert Kern apparently wrote: > Of course they will. Did Albert's initiative get any traction? http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01616.html If so, Les might profit from coordinating with him. Is the preferred approach, as Albert suggested, to submit documentation patches attached to tickets? Cheers, Alan Isaac From faltet at carabos.com Sat Aug 26 17:00:19 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 23:00:19 +0200 Subject: [Numpy-discussion] Deleting a row from a matrix In-Reply-To: References: <200608260920.05184.faltet@carabos.com> Message-ID: <200608262300.20721.faltet@carabos.com> A Dissabte 26 Agost 2006 13:42, Bill Baxter va escriure: > On 8/26/06, Francesc Altet wrote: > > I'm personally an addict to encapsulate as much functionality as possible > > in > > methods (but perhaps I'm biased by an insane use of TAB in ipython > > console). > > You can still get tab completion for functions: numpy. > Even if it's your custom to "from numpy import *" you can still also do an > "import numpy" or "import numpy as N". Yep, you are right. It is just that I tend to do that on the objects that I manipulate and not with first-level functions in packages. Anyway, I think that I see now that these routines should not be methods because they modify the *actual* data on ndarrays. Sorry for the disgression, -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
??Enjoy Data "-" From faltet at carabos.com Sat Aug 26 17:22:00 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 26 Aug 2006 23:22:00 +0200 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F021D8.5070002@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F021D8.5070002@ieee.org> Message-ID: <200608262322.01502.faltet@carabos.com> A Dissabte 26 Agost 2006 12:26, Travis Oliphant va escriure: > If frameis is 1-D, then you should be able to use > > temp = data.take(frameis,axis=0) > > for the first step. This can be quite a bit faster (and is a big > reason why take is still around). There are several reasons for this > (one of which is that index checking is done over the entire list when > using indexing). Well, some days ago I've stumbled on this as well. NumPy manual says that .take() is usually faster than fancy indexing, but my timings shows that this is no longer true in recent versions of NumPy: In [56]: Timer("b.take(a)","import numpy; a=numpy.arange(999,-1,-1, dtype='l');b=a[:]").repeat(3,1000) Out[56]: [0.28740906715393066, 0.20345211029052734, 0.20371079444885254] In [57]: Timer("b[a]","import numpy; a=numpy.arange(999,-1,-1, dtype='l');b=a[:]").repeat(3,1000) Out[57]: [0.20807695388793945, 0.11684703826904297, 0.11686491966247559] I've done some profiling on this and it seems that take is using C memmove call so as to copy the data, and this is *very* slow, at least in my platform (Linux on Intel). On its hand, fancy indexing seems to use an iterator and copying the elements one-by-one seems faster. I'd say that replacing memmove by memcpy would make .take() much faster. Regards, -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
??Enjoy Data "-" From robert.kern at gmail.com Sat Aug 26 17:38:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2006 16:38:31 -0500 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org><44F08DCC.6060800@optonline.net> <44F0A605.407@optonline.net> <44F0A9FD.1040809@optonline.net> <44F0AEC6.2080708@optonline.net> Message-ID: Alan G Isaac wrote: > Did Albert's initiative get any traction? > http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01616.html > If so, Les might profit from coordinating with him. Not so much. Not many people showed up to the sprints, and most of those that did were working on their slides for their talks at the actual conference. Next year, sprints will come *after* the talks. > Is the preferred approach, as Albert suggested, > to submit documentation patches attached to tickets? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mattknox_ca at hotmail.com Sat Aug 26 18:07:29 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sat, 26 Aug 2006 18:07:29 -0400 Subject: [Numpy-discussion] C Api newbie question Message-ID: Hi there. I'm in the unfortunate situation of trying to track down a memory error in someone elses code, and to make matters worse I don't really know jack squat about C programming. The problem seems to arise when several numpy arrays are created from C arrays in the C api and returned to python, and then trying to print out or cast to a string the resulting array. 
I think the problem may be happening due to the following chunk of code: { PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, dbValues); PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp, ((PyArrayObject*)temp)->descr, DEFAULT_FLAGS | ENSURECOPY); Py_DECREF(temp); PyDict_SetItemString(returnVal, "data", temp2); Py_DECREF(temp2); } Lets assume that all my other inputs up this point are fine and that numobjs, typeNum, and dbValues are fine. Is their anything obviously wrong with the above chunk of code? or does it appear ok? Ultimately the dictionary "returnVal" is returned by the function this code came from, and everything else is discarded. Any help is very greatly appreciated. Thanks in advance, - Matt Knox _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 22:03:42 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2006 20:03:42 -0600 Subject: [Numpy-discussion] attributes of scalar types - e.g. numpy.int32.itemsize In-Reply-To: <200608181705.21240.haase@msg.ucsf.edu> References: <200608181126.12599.haase@msg.ucsf.edu> <200608181557.22912.haase@msg.ucsf.edu> <44E65287.4020508@ieee.org> <200608181705.21240.haase@msg.ucsf.edu> Message-ID: Hi, On 8/18/06, Sebastian Haase wrote: Thanks, that seems to be a handy "dictionary-like object" > > Just for the record - in the meantime I found this: > >>> N.dtype(N.int32).itemsize > 4 And on x86_64 linux python ints are 8 bytes. In [15]: asarray([1])[0].itemsize Out[15]: 8 Interesting. Looks like one needs to be careful about the builtin python types. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Sun Aug 27 02:37:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:37:17 -0600 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F08DCC.6060800@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> Message-ID: <44F13D9D.1050902@ieee.org> Les Schaffer wrote: > Travis E. Oliphant wrote: > >> Porting is not difficult especially using the compatibility layers >> numpy.oldnumeric and numpy.numarray and the alter_code1.py modules in >> those packages. The full C-API of Numeric is supported as is the C-API >> of Numarray. >> >> > > this is not true of numpy.core.records (nee numarray.records): > > 1. numarray's records.py does not show up in numpy.numarray. > Your right. It's an oversight that needs to be corrected. NumPy has a very capable records facility and the great people at STSCI have been very helpful in pointing out issues to help make it work reasonably like the numarray version. In addition, the records.py module started as a direct grab of the numarray code-base, so I think I may have mistakenly believed it was equivalent. But, it really should also be in the numarray compatibility module. The same is true of the chararrays defined in numpy with respect to the numarray.strings module. > 2. my code that uses recarrays is now broken if i use > numpy.core.records; for one thing, you have no .info attribute. All the attributes are not supported. The purpose of numpy.numarray.alter_code1 is to fix those attributes for you to numpy equivalents. In the case of info, for example, there is the function numpy.numarray.info(self) instead of self.info(). > another > example: strings pushed into the arrays *apparently* were stripped > automagically in the old recarray (so we coded appropriately), but now > are not. 
> We could try and address this in the compatibility module (there is the raw ability available to deal with this exactly as numarray did). Someone with more experience with numarray would really be able to help here as I'm not as aware of these kinds of issues, until they are pointed out. > 3. near zero docstrings for this module, hard to see how the new > records works. > The records.py code has a lot of code taken and adapted from numarray nearly directly. The docstrings present there were also copied over, but nothing more was added. There is plenty of work to do on the docstrings in general. This is an area, that even newcomers can contribute to greatly. Contributions are greatly welcome. > 4. last year i made a case for the old records to return a list of the > column names. I prefer the word "field" names now so as to avoid over-use of the word "column", but one thing to understand about the record array is that it is a pretty "simple" sub-class. And the basic ndarray, by itself contains the essential functionality of record arrays. The whole purpose of the record sub-class is to come up with an interface that is "easier-to use," (right now that just means allowing attribute access to the field names). Many may find that using the ndarray directly may be just what they are wanting and don't need the attribute-access allowed by the record-array sub-class. > it looks like the column names are now attributes of the > record object, any chance of getting a list of them > recarrayObj.get_colNames() or some such? Right now, the column names are properties of the data-type object associated with the array, so that recarrayObj.dtype.names will give you a list The data-type object also has other properties which are useful. Thanks for your review. We really need the help of as many numarray people as possible to make sure that the transition for them is easier. 
I've tried very hard to make sure that the numarray users have the tools they need to make the transition easier, but I know that more could be done. Unfortunately, my availability to help with this is rapidly waning, however, as I have to move focus back to my teaching and research. -Travis From oliphant.travis at ieee.org Sun Aug 27 02:45:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:45:43 -0600 Subject: [Numpy-discussion] C Api newbie question In-Reply-To: References: Message-ID: <44F13F97.4020308@ieee.org> Matt Knox wrote: > > Hi there. I'm in the unfortunate situation of trying to track down a > memory error in someone elses code, and to make matters worse I don't > really know jack squat about C programming. The problem seems to arise > when several numpy arrays are created from C arrays in the C api and > returned to python, and then trying to print out or cast to a string > the resulting array. I think the problem may be happening due to the > following chunk of code: > { > PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, > dbValues); > PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp, > ((PyArrayObject*)temp)->descr, DEFAULT_FLAGS | ENSURECOPY); > Py_DECREF(temp); > PyDict_SetItemString(returnVal, "data", temp2); > Py_DECREF(temp2); > } > > Lets assume that all my other inputs up this point are fine and that > numobjs, typeNum, and dbValues are fine. Is their anything obviously > wrong with the above chunk of code? or does it appear ok? Ultimately > the dictionary "returnVal" is returned by the function this code came > from, and everything else is discarded. Any help is very greatly > appreciated. Thanks in advance, You didn't indicate what kind of trouble you are having. First of all, this is kind of odd style. Why is a new array created from a data-pointer and then copied using PyArray_FromArray (the ENSURECOPY flag will give you a copy)?
Using temp2 = PyArray_Copy(temp) seems simpler. This will also avoid the reference-count problem that is currently happening in the PyArray_FromArray call on the descr structure. Any array-creation function that takes a descr structure "steals" a reference to it, so you need to increment the reference count if you are passing an unowned reference to a ->descr structure. -Travis From rob at hooft.net Sun Aug 27 02:46:40 2006 From: rob at hooft.net (Rob Hooft) Date: Sun, 27 Aug 2006 08:46:40 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: <44F13FD0.9000405@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Torgil Svensson wrote: > My original problem was to get an moving average and a moving standard > deviation (120k rows and N=1000). For average I guess convolve should > perform good, but is there anything smart for std()? For now I use ... > >>>> moving_std=array([a[i:i+n].std() for i in range(len(a)-n)]) > > which seems to perform quite well. You can always look for more fancy and unreadable solutions, but since this one has the inner loop with a reasonable vector length (1000) coded in C, one can guess that the performance will be reasonable. I would start looking for alternatives only if N drops significantly, say to <50. Rob - -- Rob W.W. 
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFE8T/QH7J/Cv8rb3QRAtutAKCikJ1qLbedU4pNl7ZohHCLEAWVKACgji9R 6evNgk6R68/JnimUs4OOd98= =htbE -----END PGP SIGNATURE----- From oliphant.travis at ieee.org Sun Aug 27 02:49:55 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 27 Aug 2006 00:49:55 -0600 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: References: Message-ID: <44F14093.7080001@ieee.org> Torgil Svensson wrote: > Hi > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > first thought I had a performance issue but discovered that std() used > lots of memory and therefore caused lots of swapping. > There are certainly lots of intermediate arrays created as the calculation proceeds. The calculation is not particularly "smart." It just does the basic averaging and multiplication needed. > I want to get an array where element i is the standard deviation of row > i in the 2D array. Using valgrind on the std() function... > > $ valgrind --tool=massif python -c "from numpy import *; > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > ... showed me a peak of 200Mb memory while iterating line by line... > > The C-code is basically a direct "translation" of the original Python code. There are lots of temporaries created (apparently 5 at one point :-). I did this before I had the _internal.py code in place where I place Python functions that need to be accessed from C. If I had to do it over again, I would place the std implementation there where it could be appropriately optimized. -Travis An HTML attachment was scrubbed...
URL: From numpy at mspacek.mm.st Sun Aug 27 08:05:21 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Sun, 27 Aug 2006 05:05:21 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F021D8.5070002@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F021D8.5070002@ieee.org> Message-ID: <44F18A81.1050608@mspacek.mm.st> Travis Oliphant wrote: > > If frameis is 1-D, then you should be able to use > > temp = data.take(frameis,axis=0) > > for the first step. This can be quite a bit faster (and is a big > reason why take is still around). There are several reasons for this > (one of which is that index checking is done over the entire list when > using indexing). > Yup, that dropped the indexing step down to essentially 0 seconds. Thanks Travis! Martin From numpy at mspacek.mm.st Sun Aug 27 08:28:03 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Sun, 27 Aug 2006 05:28:03 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F08C0A.1070008@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> Message-ID: <44F18FD3.2030607@mspacek.mm.st> Tim Hochberg wrote: > Here's an approach (mean_accumulate) that avoids making any copies of > the data. It runs almost 4x as fast as your approach (called baseline > here) on my box. Perhaps this will be useful: > --snip-- > def mean_accumulate(data, indices): > result = np.zeros([32, 32], float) > for i in indices: > result += data[i] > result /= len(indices) > return result Great! I got a roughly 9x speed improvement using take() in combination with this approach. Thanks Tim! 
Here's what my code looks like now: >>> def mean_accum(data): >>> result = np.zeros(data[0].shape, np.float64) >>> for dataslice in data: >>> result += dataslice >>> result /= len(data) >>> return result >>> >>> # frameis are int64 >>> frames = data.take(frameis.astype(np.int32), axis=0) >>> meanframe = mean_accum(frames) I'm surprised that using a python for loop is faster than the built-in mean method. I suppose mean() can't perform the same in-place operations because in certain cases doing so would fail? Martin From schaffer at optonline.net Sun Aug 27 10:06:55 2006 From: schaffer at optonline.net (Les Schaffer) Date: Sun, 27 Aug 2006 10:06:55 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F13D9D.1050902@ieee.org> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F13D9D.1050902@ieee.org> Message-ID: <44F1A6FF.4080201@optonline.net> Travis: thanks for your response. over the next couple days i will be working with the records module, trying to fix things so we can move from numarray to numpy. i will try to collect some docstrings that can be added to the code base. Travis Oliphant wrote: > Your right. It's an oversight that needs to be corrected. NumPy has > a very capable records facility and the great people at STSCI have been > very helpful in pointing out issues to help make it work reasonably like > the numarray version. In addition, the records.py module started as a > direct grab of the numarray code-base, so I think I may have mistakenly > believed it was equivalent. But, it really should also be in the > numarray compatibility module. > this would solve our problem in the short run, so at least we can switch to numpy and keep our code running. > The same is true of the chararrays defined in numpy with respect to the > numarray.strings module. > i take it this might solve the problem (below) of the automagic strip with the numarray package. >> 2. 
my code that uses recarrays is now broken if i use >> numpy.core.records; for one thing, you have no .info attribute. >> > All the attributes are not supported. The purpose of > numpy.numarray.alter_code1 is to fix those attributes for you to numpy > equivalents. In the case of info, for example, there is the function > numpy.numarray.info(self) instead of self.info(). > thanks. i wasn't clear how to call the info function. now when i try this, i get: Traceback (most recent call last): File "", line 772, in ? File "", line 751, in _test_TableManager File "", line 462, in build_db_table_structures File "", line 108, in _create_tables_structure_from_rsrc File "C:\Program Files\Python24\Lib\site-packages\numpy\numarray\functions.py", line 350, in info print "aligned: ", obj.flags.isaligned AttributeError: 'numpy.flagsobj' object has no attribute 'isaligned' > >> another example: strings pushed into the arrays *apparently* were stripped >> automagically in the old recarray (so we coded appropriately), but now >> are not. >> >> > We could try and address this in the compatibility module (there is the > raw ability available to deal with this exactly as numarray did). > Someone with more experience with numarray would really be able to help > here as I'm not as aware of these kinds of issues, until they are > pointed out. > this would be great, because then i could find out where else code is broke ;-) i will make my code changes in such a way that i can keep testing for incompatibilities. so for now, i will add code to strip the leading/trailing spaces off, but suitably if'ed so when this gets fixed in numpy, i can pull out the strips and see if anything else works differently than numarray.records. >> 3. near zero docstrings for this module, hard to see how the new >> records works. >> >> > The records.py code has a lot of code taken and adapted from numarray > nearly directly. The docstrings present there were also copied over, > but nothing more was added. 
There is plenty of work to do on the > docstrings in general. This is an area, that even newcomers can > contribute to greatly. Contributions are greatly welcome. > ok, i will try and doc suggestions to whomever they should be sent to. >> 4. last year i made a case for the old records to return a list of the >> column names. >> > I prefer the word "field" names now so as to avoid over-use of the word > "column" i have columnitis because we are parsing excel spreadsheets and pushing them into recarrays. the first row of each spreadsheet has a set of column names -- errrr, field names -- which is why we originally attracted to records, since it gave us a way to grab columns -- errr, fields -- easily and out of the box. > but one thing to understand about the record array is that it > is a pretty "simple" sub-class. And the basic ndarray, by itself > contains the essential functionality of record arrays. The whole > purpose of the record sub-class is to come up with an interface that is > "easier-to use," (right now that just means allowing attribute access to > the field names). Many may find that using the ndarray directly may be > just what they are wanting and don't need the attribute-access allowed > by the record-array sub-class. > i'll look into how the raw ndarray works. like i said, we like that we can get a listing of each column like so: recObj['column_errrr_fieldname'] > >> it looks like the column names are now attributes of the >> record object, any chance of getting a list of them >> recarrayObj.get_colNames() or some such? >> > Right now, the column names are properties of the data-type object > associated with the array, so that recarrayObj.dtype.names will give > you a list > > The data-type object also has other properties which are useful. > it looks too like one can now create an ordinary array and PUSH IN column -- errr, field -- information with dtype, is that right? pretty slick if true. 
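That is indeed how it works: a plain ndarray can carry field information directly in its dtype, with no recarray subclass involved. A minimal sketch (field names and data invented for illustration):

```python
import numpy as np

# A plain ndarray can carry field ("column") information directly in
# its dtype; no recarray subclass is required. Field names here are
# invented for illustration.
dt = np.dtype([("name", "S8"), ("value", np.float64)])
table = np.array([(b"alpha", 1.0), (b"beta", 2.0)], dtype=dt)

# Whole fields come out by name, and the dtype lists the names:
values = table["value"]
print(table.dtype.names)  # -> ('name', 'value')
print(values.sum())       # -> 3.0
```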
i have some comments on the helper functions for creating record and recarray objects, but i will leave that for later. Les > From tim.hochberg at ieee.org Sun Aug 27 11:36:56 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sun, 27 Aug 2006 08:36:56 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F18FD3.2030607@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> Message-ID: <44F1BC18.6090401@ieee.org> Martin Spacek wrote: > Tim Hochberg wrote: > > >> Here's an approach (mean_accumulate) that avoids making any copies of >> the data. It runs almost 4x as fast as your approach (called baseline >> here) on my box. Perhaps this will be useful: >> >> > --snip-- > >> def mean_accumulate(data, indices): >> result = np.zeros([32, 32], float) >> for i in indices: >> result += data[i] >> result /= len(indices) >> return result >> > > Great! I got a roughly 9x speed improvement using take() in combination > with this approach. Thanks Tim! > > Here's what my code looks like now: > > >>> def mean_accum(data): > >>> result = np.zeros(data[0].shape, np.float64) > >>> for dataslice in data: > >>> result += dataslice > >>> result /= len(data) > >>> return result > >>> > >>> # frameis are int64 > >>> frames = data.take(frameis.astype(np.int32), axis=0) > >>> meanframe = mean_accum(frames) > > I'm surprised that using a python for loop is faster than the built-in > mean method. I suppose mean() can't perform the same in-place operations > because in certain cases doing so would fail? > I'm not sure why mean is slow here, although possibly it's a locality issue -- mean likely computes along axis zero each time, which means it's killing the cache -- and on the other hand the accumulate version is cache friendly. One thing to keep in mind about python for loops is that they are slow if you are doing a simple computation inside (a single add for instance). 
IIRC, they are 10's of times slower. However, here one is doing 1000 odd operations in the inner loop, so the loop overhead is minimal. (What would be perfect here is something just like take, but that returned an iterator instead of a new array as that could be done with no copying -- unfortunately such a beast does not exist as far as I know) I'm actually surprised that the take version is faster than my original version since it makes a big ol' copy. I guess this is an indication that indexing is more expensive than I realize. That's why nothing beats measuring! An experiment to reshape your data so that it's friendly to mean (assuming it really does operate on axis zero) and try that. However, this turns out to be a huge pesimization, mostly because take + transpose is pretty slow. -tim > Martin > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From tgrav at mac.com Sun Aug 27 11:37:25 2006 From: tgrav at mac.com (Tommy Grav) Date: Sun, 27 Aug 2006 11:37:25 -0400 Subject: [Numpy-discussion] NumPy 1.0b4 Message-ID: <1B1FC36F-081B-4BAD-9C0B-35A89ED4C26F@mac.com> Looking at the www.scipy.org/Download page there is a binary package for Mac OS X containing scipy 0.5.0 and Numpy 1.1. Is this a typo or is it a different NumPy package? If it just a typo, when will this binary be available with Numpy 1.0b4? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. 
It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Sun Aug 27 15:00:41 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 12:00:41 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... Message-ID: <44F1EBD9.6000507@msg.ucsf.edu> Hi, I started submitting tickets over the numpy ticket system. But I never get email feedback when comments get added. Even though I put myself as CC. I then even subscribed to both scipy and numpy ticket mailing lists. I only got *some* numpy tickets emailed - very sporadically ! (I do get (lots of) email from the svn mailing list.) Do others see similar problems ? -Sebastian Haase From haase at msg.ucsf.edu Sun Aug 27 15:06:22 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 12:06:22 -0700 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 Message-ID: <44F1ED2E.3030402@msg.ucsf.edu> Hi, I submitted this as ticket #230 3 weeks ago. I apparently assigned it to "somebody" - was that a mistake? Just for reference, here is the short text again: >>> a=N.random.poisson(N.arange(1e6)+1) >>> U.timeIt('a**2') 0.59 >>> U.timeIt('a*a') 0.01 >>> a.dtype int32 float64, float32 work OK - giving equal times for both cases. (I tested this on Linux 32 bit, Debian sarge) Am I right that numarray never did this kind of "smart speed up" !? What are the cases that are sped up like this ? **2, **.5 , ... ??
Thanks, - Sebastian Haase From listservs at mac.com Sun Aug 27 15:22:50 2006 From: listservs at mac.com (listservs at mac.com) Date: Sun, 27 Aug 2006 15:22:50 -0400 Subject: [Numpy-discussion] bad generator behaviour with sum Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 It seems like numpy.sum breaks generator expressions: In [1]: sum(i*i for i in range(10)) Out[1]: 285 In [2]: from numpy import sum In [3]: sum(i*i for i in range(10)) Out[3]: Is this intentional? If so, how do I get the behaviour that I am after? Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFE8fEKkeka2iCbE4wRAoi6AKCjqJHodGOme56nohrG3X/njjaHgACeIkyn PPB2+plZOyqV+HyLJgO+sSw= =Y0wt -----END PGP SIGNATURE----- From charlesr.harris at gmail.com Sun Aug 27 15:36:40 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:36:40 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi, On 8/27/06, listservs at mac.com wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > It seems like numpy.sum breaks generator expressions: > > In [1]: sum(i*i for i in range(10)) > Out[1]: 285 > > In [2]: from numpy import sum > > In [3]: sum(i*i for i in range(10)) > Out[3]: > > Is this intentional? If so, how do I get the behaviour that I am after? > In [3]: sum([i*i for i in range(10)]) Out[3]: 285 Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
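[Editorial note: the behaviour Christopher reports, together with the workarounds that come up later in this thread, can be sketched as follows. This is a sketch, not part of the original exchange; it assumes a NumPy recent enough to provide fromiter.]

```python
import numpy as np

# The builtin sum happily consumes a generator expression.
assert sum(i * i for i in range(10)) == 285

# numpy.sum treats a generator as a single opaque Python object, so
# either materialize it as a list first...
assert np.sum([i * i for i in range(10)]) == 285

# ...or build the array directly from the iterator with fromiter,
# which skips the intermediate list but needs an explicit dtype.
a = np.fromiter((i * i for i in range(10)), dtype=int)
assert a.sum() == 285
```

fromiter needs the dtype up front because the array must be allocated before the iterator's contents are known.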
URL: From charlesr.harris at gmail.com Sun Aug 27 15:43:38 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:43:38 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi Christopher, On 8/27/06, Charles R Harris wrote: > > Hi, > > On 8/27/06, listservs at mac.com wrote: > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > It seems like numpy.sum breaks generator expressions: > > > > In [1]: sum(i*i for i in range(10)) > > Out[1]: 285 > > > > In [2]: from numpy import sum > > > > In [3]: sum(i*i for i in range(10)) > > Out[3]: > > > > Is this intentional? If so, how do I get the behaviour that I am after? > > > > > In [3]: sum([i*i for i in range(10)]) > Out[3]: 285 > > Chuck > The numarray.sum also fails to accept a generator as an argument. Because python does and the imported sum overwrites it, we should probably check the argument type and make it do the right thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sun Aug 27 15:55:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 27 Aug 2006 15:55:29 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F1A6FF.4080201@optonline.net> References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net><44F13D9D.1050902@ieee.org><44F1A6FF.4080201@optonline.net> Message-ID: On Sun, 27 Aug 2006, Les Schaffer apparently wrote: > we are parsing excel spreadsheets and pushing them into > recarrays If your Excel parsing has general application and illustrates applications beyond say http://www.bigbold.com/snippets/posts/show/2036 maybe you could post a URL to some code. 
Cheers, Alan Isaac From charlesr.harris at gmail.com Sun Aug 27 15:58:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Aug 2006 13:58:35 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: Hi, The problem seems to arise in the array constructor, which treats the generator as a python object and creates an array containing that object. So, do we want the possibility of an array of generators or should we interpret it as a sort of list? I vote for that latter. Chuck On 8/27/06, Charles R Harris wrote: > > Hi Christopher, > > On 8/27/06, Charles R Harris wrote: > > > > Hi, > > > > On 8/27/06, listservs at mac.com wrote: > > > > > > -----BEGIN PGP SIGNED MESSAGE----- > > > Hash: SHA1 > > > > > > It seems like numpy.sum breaks generator expressions: > > > > > > In [1]: sum(i*i for i in range(10)) > > > Out[1]: 285 > > > > > > In [2]: from numpy import sum > > > > > > In [3]: sum(i*i for i in range(10)) > > > Out[3]: > > > > > > Is this intentional? If so, how do I get the behaviour that I am > > > after? > > > > > > > > > In [3]: sum([i*i for i in range(10)]) > > Out[3]: 285 > > > > Chuck > > > > The numarray.sum also fails to accept a generator as an argument. Because > python does and the imported sum overwrites it, we should probably check the > argument type and make it do the right thing. > > Chuck > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schaffer at optonline.net Sun Aug 27 16:17:50 2006 From: schaffer at optonline.net (schaffer at optonline.net) Date: Sun, 27 Aug 2006 16:17:50 -0400 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: References: <44F01802.8050505@ieee.org> <44F08DCC.6060800@optonline.net> <44F13D9D.1050902@ieee.org> <44F1A6FF.4080201@optonline.net> Message-ID: we have an Excel parser class with a method convert2RecArrayD that: 1. 
takes as input an Excel file name, plus an optional cell washing function (see below) 2. creates a recarray for each worksheet (we use UsedRange for the range of cells) in the spreadsheet (via array()) and adds to a Python dict with keyword the name of the worksheet. the column -- errr field -- names are grabbed from the first row in each worksheet. 3. each cell in the spreadsheet is run thru the optional (else default) washer function. the default does unicode conversion plus some string.strip'ping we are using the spreadsheets as Resource files for a database application. so we are only reading the spreadsheets, not writing to them. if this is useful, we'd be happy to put it somewhere useful. Les ----- Original Message ----- From: Alan G Isaac Date: Sunday, August 27, 2006 3:55 pm Subject: Re: [Numpy-discussion] [ANN] NumPy 1.0b4 now available > If your Excel parsing has general application and > illustrates applications beyond say > http://www.bigbold.com/snippets/posts/show/2036 > maybe you could post a URL to some code. From robert.kern at gmail.com Sun Aug 27 16:18:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:18:42 -0500 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: <44F1EBD9.6000507@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > I started submitting tickets over the numpy ticket system. > > But I never get email feedback when comments get added. > Even though I put myself as CC. > > I then even subscribed to both scipy and numpy ticket mailing lists. > > I only got *some* numpy tickets emailed - very sporadically ! > > (I do get (lot's of) email from the svn mailing list.) > > Do others see similar problems ? Now that you mention it, the lists *are* missing tickets. I'll raise the issue internally. As for the former, have you entered your email address in your settings? 
http://projects.scipy.org/scipy/numpy/settings http://projects.scipy.org/scipy/scipy/settings -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Aug 27 16:27:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:27:15 -0500 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 In-Reply-To: <44F1ED2E.3030402@msg.ucsf.edu> References: <44F1ED2E.3030402@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > Hi, > I submitted this as ticket #230 3weeks ago. > I apparently assigned it to "somebody" - was that a mistake? No, that's just the default. When the tickets lists are reliable again, then it's also preferred. No, your ticket might not get picked up by anyone because of lack of time, but assigning it to someone won't fix that. Let the dev team work out the assignment of tickets. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Sun Aug 27 16:31:16 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 13:31:16 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: References: <44F1EBD9.6000507@msg.ucsf.edu> Message-ID: <44F20114.9030906@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> I started submitting tickets over the numpy ticket system. >> >> But I never get email feedback when comments get added. >> Even though I put myself as CC. >> >> I then even subscribed to both scipy and numpy ticket mailing lists. >> >> I only got *some* numpy tickets emailed - very sporadically ! >> >> (I do get (lot's of) email from the svn mailing list.) >> >> Do others see similar problems ? 
> > Now that you mention it, the lists *are* missing tickets. I'll raise the issue > internally. > > As for the former, have you entered your email address in your settings? > > http://projects.scipy.org/scipy/numpy/settings > http://projects.scipy.org/scipy/scipy/settings > yes. (Could you add a web link from one system to the other ?) Thanks for taking this on. -Sebastian From haase at msg.ucsf.edu Sun Aug 27 16:39:40 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 13:39:40 -0700 Subject: [Numpy-discussion] a**2 not executed as a*a if a.dtype = int32 In-Reply-To: References: <44F1ED2E.3030402@msg.ucsf.edu> Message-ID: <44F2030C.3080908@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> I submitted this as ticket #230 3weeks ago. >> I apparently assigned it to "somebody" - was that a mistake? > > No, that's just the default. When the tickets lists are reliable again, then > it's also preferred. No, your ticket might not get picked up by anyone because > of lack of time, but assigning it to someone won't fix that. Let the dev team > work out the assignment of tickets. > Thanks for the info -- could this be added on the form ? Like: """ If you don't have any good reason just leave the fields 'empty' and the dev-team will assign proper values soon. Also don't forget to put yourself in the CC field if you want to track changes to the issue you just reported. """ I just think its not obvious for *most* of the choice-fields what to select ... Thanks -Sebastian From robert.kern at gmail.com Sun Aug 27 16:39:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 15:39:48 -0500 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: <44F20114.9030906@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> Message-ID: Sebastian Haase wrote: > (Could you add a web link from one system to the other ?) 
I'm afraid that I don't understand what you want. The numpy front page has a link to the scipy front page. If you want a similar one in reverse, it's a Wiki and you can do it yourself. If you mean something else, what do you mean? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mauger at lifshitz.physics.ucdavis.edu Sun Aug 27 16:59:17 2006 From: mauger at lifshitz.physics.ucdavis.edu (Matthew Auger) Date: Sun, 27 Aug 2006 13:59:17 -0700 (PDT) Subject: [Numpy-discussion] odd import behavior Message-ID: I recently installed python2.5c1, numpy-1.0b3, and matplotlib-0.87.4. I was getting an error when importing pylab that led me to this curious behavior: bash-2.05b$ python Python 2.5c1 (r25c1:51305, Aug 23 2006, 18:41:45) [GCC 4.0.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy.oldnumeric import * >>> M = matrix Traceback (most recent call last): File "", line 1, in NameError: name 'matrix' is not defined >>> from numpy.oldnumeric import matrix >>> M = matrix >>> Is there a reason matrix is not imported the first time? From haase at msg.ucsf.edu Sun Aug 27 17:36:24 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 27 Aug 2006 14:36:24 -0700 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... In-Reply-To: References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> Message-ID: <44F21058.2040203@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: > >> (Could you add a web link from one system to the other ?) > > I'm afraid that I don't understand what you want. The numpy front page has a > link to the scipy front page. If you want a similar one in reverse, it's a Wiki > and you can do it yourself. If you mean something else, what do you mean? 
> Sorry for being so unclear -- I just often find myself (by clicking on a ticket link) in one system (e.g. the scipy one) and then I realize that what I want is really more related to numpy ... I just found that the numpy page at http://projects.scipy.org/scipy/numpy contains the text """SciPy developer stuff goes on the sister site, http://projects.scipy.org/scipy/scipy/. """ Could you add similar text to http://projects.scipy.org/scipy/scipy/ like: """Stuff specific to the underlying numerical library (i.e. numpy) goes on the sister site, http://projects.scipy.org/scipy/numpy/ """ (I fear it's not really the most important request in the world ;-) ) - Sebastian From tom.denniston at alum.dartmouth.org Sun Aug 27 17:50:32 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Sun, 27 Aug 2006 16:50:32 -0500 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: I was thinking about this in the context of Guido's comments at scipy 2006 that much of the language is moving away from lists toward iterators. He gave the keys of a dict as an example. Numpy treats iterators, generators, etc. as 0x0 PyObjects rather than lazy generators of n-dimensional data. I guess my question for Travis (or any others much more expert than I in numpy) is: is this intentional, or is it something that was never implemented because of the obvious subtleties of defining the correct semantics to make this work. Personally I find it no big deal to use array(list(iter)) in the 1d case and the list function combined with a list comprehension for the 2d case. I usually know how many dimensions I expect so I find this easy and I know about this peculiar behavior. I find, however, that this behavior is very surprising and confusing to the new user and I don't usually have a good justification for it to answer them.
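[Editorial note: the workaround Tom describes in the paragraph above, spelled out on throwaway data (a sketch):]

```python
import numpy as np

# 1d case: materialize the iterator with list() before handing it to array().
gen = (x * 2 for x in range(5))
a = np.array(list(gen))
assert a.shape == (5,)

# 2d case: an iterator of iterators needs each row materialized as well,
# e.g. the list function inside a list comprehension.
rows = ((i + j for j in range(3)) for i in range(2))
b = np.array([list(r) for r in rows])
assert b.shape == (2, 3)
```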
The ideal semantics, in my mind, would be if an iterator of iterators of iterators, etc was no different in numpy than a list of lists of lists, etc. But I have no doubt that there are subtleties i am not considering. Has anyone more familiar than I with the bowels of numpy thought about this problem and see reasons why this is a bad idea or just prohibitively difficult to implement? On 8/27/06, Charles R Harris wrote: > Hi, > > The problem seems to arise in the array constructor, which treats the > generator as a python object and creates an array containing that object. > So, do we want the possibility of an array of generators or should we > interpret it as a sort of list? I vote for that latter. > > Chuck > > > On 8/27/06, Charles R Harris wrote: > > > > Hi Christopher, > > > > > > > > On 8/27/06, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > > > > Hi, > > > > > > > > > > > > On 8/27/06, listservs at mac.com wrote: > > > > -----BEGIN PGP SIGNED MESSAGE----- > > > > Hash: SHA1 > > > > > > > > It seems like numpy.sum breaks generator expressions: > > > > > > > > In [1]: sum(i*i for i in range(10)) > > > > Out[1]: 285 > > > > > > > > In [2]: from numpy import sum > > > > > > > > In [3]: sum(i*i for i in range(10)) > > > > Out[3]: > > > > > > > > Is this intentional? If so, how do I get the behaviour that I am > after? > > > > > > > > > > > > > > > > > > > > > > In [3]: sum([i*i for i in range(10)]) > > > > > > Out[3]: 285 > > > > > > Chuck > > > > > > > > The numarray.sum also fails to accept a generator as an argument. Because > python does and the imported sum overwrites it, we should probably check the > argument type and make it do the right thing. > > > > Chuck > > > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From listservs at mac.com Sun Aug 27 18:09:32 2006 From: listservs at mac.com (listservs at mac.com) Date: Sun, 27 Aug 2006 18:09:32 -0400 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Aug 27, 2006, at 4:19 PM, numpy-discussion- request at lists.sourceforge.net wrote: >> >> It seems like numpy.sum breaks generator expressions: >> >> In [1]: sum(i*i for i in range(10)) >> Out[1]: 285 >> >> In [2]: from numpy import sum >> >> In [3]: sum(i*i for i in range(10)) >> Out[3]: >> >> Is this intentional? If so, how do I get the behaviour that I am >> after? >> > > > In [3]: sum([i*i for i in range(10)]) > Out[3]: 285 Well, thats a list comprehension, not a generator expression. I was after the latter because it is more efficient. Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFE8hgdkeka2iCbE4wRAq8lAJ9dKPYQ35IE3qacf9K1KsBL59mdRACePn5S x0wHWs/PrVcJHCqf9tbQwRk= =0wFp -----END PGP SIGNATURE----- From cookedm at physics.mcmaster.ca Sun Aug 27 18:09:25 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 27 Aug 2006 18:09:25 -0400 Subject: [Numpy-discussion] ticket system does not like me ! - seems broken ... 
In-Reply-To: <44F21058.2040203@msg.ucsf.edu> References: <44F1EBD9.6000507@msg.ucsf.edu> <44F20114.9030906@msg.ucsf.edu> <44F21058.2040203@msg.ucsf.edu> Message-ID: <9A356893-04F5-483C-A3EC-E636251B8EA6@physics.mcmaster.ca> On Aug 27, 2006, at 17:36 , Sebastian Haase wrote: > Robert Kern wrote: >> Sebastian Haase wrote: >> >>> (Could you add a web link from one system to the other ?) >> >> I'm afraid that I don't understand what you want. The numpy front >> page has a >> link to the scipy front page. If you want a similar one in >> reverse, it's a Wiki >> and you can do it yourself. If you mean something else, what do >> you mean? >> > > Sorry for being so unclear -- I just often find myself (by clicking > on a > ticket link) in one system (e.g. the scipy one) and then I realize > that > what I want is really more related to numpy ... > > I just found that the numpy page at > http://projects.scipy.org/scipy/numpy > contains the text > """SciPy developer stuff goes on the sister site, > http://projects.scipy.org/scipy/scipy/. > """ > > Could you add similar text to > http://projects.scipy.org/scipy/scipy/ > like: > """Stuff specific to the underlying numerical library (i.e. numpy) > goes on the sister site, http://projects.scipy.org/scipy/numpy/ > """ It's a wiki; you can add it yourself :-) (if you're logged in, of course.) -- |>|\/|< /------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Sun Aug 27 18:41:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Aug 2006 17:41:36 -0500 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> References: <62A1CF54-F888-4625-A71E-0E755DD871C3@mac.com> Message-ID: listservs at mac.com wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Aug 27, 2006, at 4:19 PM, numpy-discussion- > request at lists.sourceforge.net wrote: > >>> It seems like numpy.sum breaks generator expressions: >>> >>> In [1]: sum(i*i for i in range(10)) >>> Out[1]: 285 >>> >>> In [2]: from numpy import sum >>> >>> In [3]: sum(i*i for i in range(10)) >>> Out[3]: >>> >>> Is this intentional? If so, how do I get the behaviour that I am >>> after? >>> >> >> In [3]: sum([i*i for i in range(10)]) >> Out[3]: 285 > > Well, thats a list comprehension, not a generator expression. I was > after the latter because it is more efficient. Not really. Any numpy functions that would automatically create an array from an __len__-less iterator will have to convert it to a list first. That said, some cases for numpy.sum() might be handled by passing the argument to __builtin__.sum(), but it might be tricky devising a robust rule for when that happens. Consequently, I would like to avoid doing so. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tim.hochberg at ieee.org Sun Aug 27 19:03:03 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Sun, 27 Aug 2006 16:03:03 -0700 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <44F224A7.7090909@ieee.org> Tom Denniston wrote: > I was thinking about this in the context of Giudo's comments at scipy > 2006 that much of the language is moving away from lists toward > iterators. He gave the keys of a dict as an example. > > Numpy treats iterators, generators, etc as 0x0 PyObjects rather than > lazy generators of n dimensional data. I guess my question for Travis > (any others much more expert than I in numpy) is is this intentional > or is it something that was never implemented because of the obvious > subtlties of defiing the correct semantics to make this work. > More the latter than the former. > Personally i find it no big deal to use array(list(iter)) in the 1d > case and the list function combined with a list comprehension for the > 2d case. There is a relatively new function fromiter, that materialized the last time this discussion came up that covers the above case. For example: numpy.fromiter((i*i for i in range(10)), int) > I usually know how many dimensions i expect so i find this > easy and i know about this peculiar behavior. I find, however, that > this behavior is very suprising and confusing to the new user and i > don't usually have a good justification for it to answer them. > > The ideal semantics, in my mind, would be if an iterator of iterators > of iterators, etc was no different in numpy than a list of lists of > lists, etc. But I have no doubt that there are subtleties i am not > considering. Has anyone more familiar than I with the bowels of numpy > thought about this problem and see reasons why this is a bad idea or > just prohibitively difficult to implement? > There was some discussion about this several months ago and I even set out to implement it. 
I realized after not too long, however, that a complete solution, as you describe above, was going to be difficult and that I only really cared about the 1D case anyway, so I punted and implemented fromiter instead. As I recall, there are two issues that complicate the general case: 1. You need to specify the type or you gain no advantage over just instantiating the list. This is because you need to know the type before you allocate space for the array. Normally you do this by traversing the structure and looking at the contents. However for an iterable, you have to stash the results when you iterate over it looking for the type. This means that unless the array type is specified up front, you might as well just convert everything to lists as far as performance goes. 2. For 1D arrays you can get away without knowing the shape by doing successive overallocation of memory, similar to the way list and array.array work. This is what fromiter does. I suppose the same tactic would work for iterators of iterators, but the bookkeeping becomes quite daunting. Issue 1 is the real killer -- because of that a solution would either sometimes (mysteriously for the uninitiated) be really inefficient or one would be required to specify types for array(iterable). The latter is my preference, but I'm beginning to think it would actually be better to always have to specify types. It's tempting to take another stab at this, in Python this time, and see if I can get a Python-level solution working. However I don't have the time to try it right now. -tim > On 8/27/06, Charles R Harris wrote: > >> Hi, >> >> The problem seems to arise in the array constructor, which treats the >> generator as a python object and creates an array containing that object. >> So, do we want the possibility of an array of generators or should we >> interpret it as a sort of list? I vote for that latter.
>> >> Chuck >> >> >> On 8/27/06, Charles R Harris wrote: >> >>> Hi Christopher, >>> >>> >>> >>> On 8/27/06, Charles R Harris < charlesr.harris at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> >>>> >>>> On 8/27/06, listservs at mac.com wrote: >>>> >>>>> -----BEGIN PGP SIGNED MESSAGE----- >>>>> Hash: SHA1 >>>>> >>>>> It seems like numpy.sum breaks generator expressions: >>>>> >>>>> In [1]: sum(i*i for i in range(10)) >>>>> Out[1]: 285 >>>>> >>>>> In [2]: from numpy import sum >>>>> >>>>> In [3]: sum(i*i for i in range(10)) >>>>> Out[3]: >>>>> >>>>> Is this intentional? If so, how do I get the behaviour that I am >>>>> >> after? >> >>>> >>>> >>>> >>>> In [3]: sum([i*i for i in range(10)]) >>>> >>>> Out[3]: 285 >>>> >>>> Chuck >>>> >>> >>> The numarray.sum also fails to accept a generator as an argument. Because >>> >> python does and the imported sum overwrites it, we should probably check the >> argument type and make it do the right thing. >> >>> Chuck >>> >>> >>> >>> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From carlosjosepita at yahoo.com.ar Mon Aug 28 01:55:56 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Mon, 28 Aug 2006 02:55:56 -0300 (ART) Subject: [Numpy-discussion] Constant array Message-ID: <20060828055556.52095.qmail@web50306.mail.yahoo.com> Hi all! Is there a more efficient way of creating a constant K-valued array of size N than: zeros(N) + K ? Thank you in advance. Regards, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Aug 28 02:05:27 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 28 Aug 2006 00:05:27 -0600 Subject: [Numpy-discussion] Constant array In-Reply-To: <20060828055556.52095.qmail@web50306.mail.yahoo.com> References: <20060828055556.52095.qmail@web50306.mail.yahoo.com> Message-ID: Hi Carlos, On 8/27/06, Carlos Pita wrote: > > Hi all! > Is there a more efficient way of creating a constant K-valued array of > size N than: > zeros(N) + K > ? > Maybe something like this: In [12]: a = empty((3,3), dtype=int) In [13]: a.fill(11) In [14]: a Out[14]: array([[11, 11, 11], [11, 11, 11], [11, 11, 11]]) I haven't timed it, so don't know how fast it is. Looking at this makes me think fill should return the array so that one could do something like: a = empty((3,3), dtype=int).fill(10) Chuck -------------- next part -------------- An HTML attachment was scrubbed...
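[Editorial note: a small sketch of the two approaches Chuck compares above, and of why the chained one-liner he wishes for would need a change to NumPy -- ndarray.fill operates in place and returns None.]

```python
import numpy as np

N, K = 1000, 7

a = np.zeros(N) + K   # allocate zeroed memory, then a second pass adding K

b = np.empty(N)       # uninitialized allocation...
b.fill(K)             # ...then a single pass writing K everywhere

assert (a == b).all()

# fill() returns None, so the chained form does not work as things stand:
assert np.empty(N).fill(K) is None
```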
URL: From oliphant.travis at ieee.org Mon Aug 28 02:12:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 00:12:26 -0600 Subject: [Numpy-discussion] odd import behavior In-Reply-To: References: Message-ID: <44F2894A.8040902@ieee.org> Matthew Auger wrote: > I recently installed python2.5c1, numpy-1.0b3, and matplotlib-0.87.4. I > was getting an error when importing pylab that led me to this curious > behavior: > matplotlib-0.87.4 is *not* compatible with 1.0b2 and above. A new version needs to be released to work with NumPy 1.0 The SVN version of matplotlib works fine with NumPy 1.0 -Travis From oliphant.travis at ieee.org Mon Aug 28 02:17:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 00:17:59 -0600 Subject: [Numpy-discussion] bad generator behaviour with sum In-Reply-To: References: Message-ID: <44F28A97.6010102@ieee.org> Tom Denniston wrote: > I was thinking about this in the context of Giudo's comments at scipy > 2006 that much of the language is moving away from lists toward > iterators. He gave the keys of a dict as an example. > > Numpy treats iterators, generators, etc as 0x0 PyObjects rather than > lazy generators of n dimensional data. I guess my question for Travis > (any others much more expert than I in numpy) is is this intentional > or is it something that was never implemented because of the obvious > subtlties of defiing the correct semantics to make this work. > > It's not intentional, it's just that iterators came later and I did not try to figure out how to "do the right thing" in the array function. Thanks to Tim Hochberg, there is a separate fromiter function that creates arrays from iterators. > Personally i find it no big deal to use array(list(iter)) in the 1d > case and the list function combined with a list comprehension for the > 2d case. I usually know how many dimensions i expect so i find this > easy and i know about this peculiar behavior. 
I find, however, that > this behavior is very suprising and confusing to the new user and i > don't usually have a good justification for it to answer them. > The problem is that NumPy arrays need to know both how big they are and what data-type they are. With iterators you have to basically construct the whole thing before you can even interrogate that question. Iterators were not part of the language when Numeric (from which NumPy got it's code base) was created. > The ideal semantics, in my mind, would be if an iterator of iterators > of iterators, etc was no different in numpy than a list of lists of > lists, etc. But I have no doubt that there are subtleties i am not > considering. Has anyone more familiar than I with the bowels of numpy > thought about this problem and see reasons why this is a bad idea or > just prohibitively difficult to implement? > It's been discussed before and ideas have been considered. Right now, the fromiter function carries the load. Whether or not to bring that functionality into the array function itself has been met with hesitancy because of how bulky the array function already is. -Travis From numpy at mspacek.mm.st Mon Aug 28 03:01:57 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Mon, 28 Aug 2006 00:01:57 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F1BC18.6090401@ieee.org> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> <44F1BC18.6090401@ieee.org> Message-ID: <44F294E5.8020008@mspacek.mm.st> Tim Hochberg wrote: > I'm actually surprised that the take version is faster than my original > version since it makes a big ol' copy. I guess this is an indication > that indexing is more expensive than I realize. That's why nothing beats > measuring! Actually, your original version is just as fast as the take() version. Both are about 9X faster than numpy.mean() on my system. 
I prefer the take() version because you only have to pass a single argument to mean_accum() Martin From numpy at mspacek.mm.st Mon Aug 28 03:13:14 2006 From: numpy at mspacek.mm.st (Martin Spacek) Date: Mon, 28 Aug 2006 00:13:14 -0700 Subject: [Numpy-discussion] Optimizing mean(axis=0) on a 3D array In-Reply-To: <44F294E5.8020008@mspacek.mm.st> References: <44F01D32.9080103@mspacek.mm.st> <44F08C0A.1070008@ieee.org> <44F18FD3.2030607@mspacek.mm.st> <44F1BC18.6090401@ieee.org> <44F294E5.8020008@mspacek.mm.st> Message-ID: <44F2978A.1070509@mspacek.mm.st> Martin Spacek wrote: > > Actually, your original version is just as fast as the take() version. > Both are about 9X faster than numpy.mean() on my system. I prefer the > take() version because you only have to pass a single argument to > mean_accum() I forgot to mention that all my indices are, for now, sorted. I just tried shuffling them (as you did), but I still get the same 9x improvement in speed, so I don't know why you only get a 4x improvement on your system. Martin From svetosch at gmx.net Mon Aug 28 04:31:47 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Mon, 28 Aug 2006 10:31:47 +0200 Subject: [Numpy-discussion] memory corruption bug In-Reply-To: References: <44F03AEA.7010403@gmx.net> Message-ID: <44F2A9F3.3070606@gmx.net> Charles R Harris schrieb: > +1. I too suspect that what you have here is a reference/copy problem. > The only thing that is local to the class is the reference (pointer), > the data is global. > > Chuck Ok, so you guys were right, turns out that my problem was caused by the fact that a local assignment like x = y is also by reference only, which I wasn't really aware of. (Of course, it's explained in Travis' book...) So that behavior is different from standard python assignments, isn't it? Sorry for the noise. 
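[Editor's note: the reference-vs-copy behaviour discussed above can be shown in a minimal sketch, not from the thread: plain assignment binds a second name to the same array, while .copy() gives independent data.]

```python
import numpy as np

x = np.zeros(3)

y = x            # plain assignment: y is just another name for the same data
y[0] = 99.0
assert x[0] == 99.0   # the change is visible through x too -- no copy was made

z = x.copy()     # an explicit copy gives independent data
z[1] = 5.0
assert x[1] == 0.0    # x is unaffected
```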
-Sven

From wbaxter at gmail.com Mon Aug 28 05:17:35 2006
From: wbaxter at gmail.com (Bill Baxter)
Date: Mon, 28 Aug 2006 18:17:35 +0900
Subject: [Numpy-discussion] memory corruption bug
In-Reply-To: <44F2A9F3.3070606@gmx.net>
References: <44F03AEA.7010403@gmx.net> <44F2A9F3.3070606@gmx.net>
Message-ID: 

Nope, that's the way python works in general for any type other than
basic scalar types.

>>> a = [1,2,3,4]
>>> b = a
>>> b[1] = 99
>>> print a
[1, 99, 3, 4]
>>> print b
[1, 99, 3, 4]

Also the issue never comes up for types like tuples or strings because
they aren't mutable.

--bb

On 8/28/06, Sven Schreiber wrote:
> Charles R Harris schrieb:
> > +1. I too suspect that what you have here is a reference/copy problem.
> > The only thing that is local to the class is the reference (pointer),
> > the data is global.
> >
> > Chuck
>
> Ok, so you guys were right, turns out that my problem was caused by the
> fact that a local assignment like x = y is also by reference only, which
> I wasn't really aware of. (Of course, it's explained in Travis' book...)
> So that behavior is different from standard python assignments, isn't it?
>
> Sorry for the noise.
>
> -Sven

From mattknox_ca at hotmail.com Mon Aug 28 10:02:34 2006
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Mon, 28 Aug 2006 10:02:34 -0400
Subject: [Numpy-discussion] C Api newbie question
Message-ID: 

> Matt Knox wrote:
>> Hi there.
>> I'm in the unfortunate situation of trying to track down a
>> memory error in someone else's code, and to make matters worse I don't
>> really know jack squat about C programming. The problem seems to arise
>> when several numpy arrays are created from C arrays in the C api and
>> returned to python, and then trying to print out or cast to a string
>> the resulting array. I think the problem may be happening due to the
>> following chunk of code:
>>
>> {
>>     PyObject* temp = PyArray_SimpleNewFromData(1, &numobjs, typeNum, dbValues);
>>     PyObject* temp2 = PyArray_FromArray((PyArrayObject*)temp, ((PyArrayObject*)temp)->descr, DEFAULT_FLAGS | ENSURECOPY);
>>     Py_DECREF(temp);
>>     PyDict_SetItemString(returnVal, "data", temp2);
>>     Py_DECREF(temp2);
>> }
>>
>> Let's assume that all my other inputs up to this point are fine and that
>> numobjs, typeNum, and dbValues are fine. Is there anything obviously
>> wrong with the above chunk of code? or does it appear ok? Ultimately
>> the dictionary "returnVal" is returned by the function this code came
>> from, and everything else is discarded. Any help is very greatly
>> appreciated. Thanks in advance,

> You didn't indicate what kind of trouble you are having.
>
> First of all, this is kind of odd style. Why is a new array created
> from a data-pointer and then copied using PyArray_FromArray (the
> ENSURECOPY flag will give you a copy)? Using
>
>     temp2 = PyArray_Copy(temp)
>
> seems simpler. This will also avoid the reference-count problem that
> is currently happening in the PyArray_FromArray call on the descr
> structure. Any array-creation function that takes a descr structure
> "steals" a reference to it, so you need to increment the reference count
> if you are passing an unowned reference to a ->descr structure.
>
> -Travis

Sorry. Yeah, the problem was the interpreter crashing on exit, which after
your response definitely seems like it was a reference count issue. I
changed the PyArray_FromArray call to be PyArray_Copy and it seems to work
Thank you very much! Love the numpy stuff (when I can stay in the python world and not mess withthe C stuff :) ). Keep up the great work! - Matt _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From rex at nosyntax.com Mon Aug 28 10:36:38 2006 From: rex at nosyntax.com (rex) Date: Mon, 28 Aug 2006 07:36:38 -0700 Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32 Message-ID: <20060828143638.GB5139@x2.nosyntax.com> Numpy builds, but fails to run with the error message: > python Python 2.4.2 (#1, Apr 24 2006, 18:13:30) [GCC 4.1.0 (SUSE 10.1 Linux)] on linux2 >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ? import core File "/usr/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ? import multiarray ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode Build was without BLAS or LAPACK. Results were the same when Intel MKL was used. python setup.py install >& inst.log Running from numpy source directory. non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_3078 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE [...] running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. adding 'build/src.linux-i686-2.4/numpy/core/src' to include_dirs. 
numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/src/scalartypes .inc', 'build/src.linux-i686-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-i686-2.4/nu mpy/core/config.h', 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h' to sources. adding 'build/src.linux-i686-2.4/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/src/scalartypes .inc', 'build/src.linux-i686-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-i686-2.4/nu mpy/core/config.h', 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/config.h', 'bui ld/src.linux-i686-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-i686-2.4/numpy/core/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h' to sources. 
numpy.core - nothing done with h_files= ['build/src.linux-i686-2.4/numpy/core/config.h', 'bui ld/src.linux-i686-2.4/numpy/core/__multiarray_api.h', 'build/src.linux-i686-2.4/numpy/core/__ ufunc_api.h'] building extension "numpy.core._dotblas" sources building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources ### Warning: Using unoptimized lapack ### adding 'numpy/linalg/lapack_litemodule.c' to sources. adding 'numpy/linalg/zlapack_lite.c' to sources. adding 'numpy/linalg/dlapack_lite.c' to sources. adding 'numpy/linalg/blas_lite.c' to sources. adding 'numpy/linalg/dlamch.c' to sources. adding 'numpy/linalg/f2c_lite.c' to sources. building extension "numpy.random.mtrand" sources Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config ******************************************************************************************* C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c:7:2: error: #error No _WIN32 _configtest.c:7:2: error: #error No _WIN32 failure. removing: _configtest.c _configtest.o ******************************************************************************************* building data_files sources [...] 
changing mode of /usr/bin/f2py to 755 running install_data copying build/src.linux-i686-2.4/numpy/core/__multiarray_api.h -> /usr/lib/python2.4/site-pac kages/numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/multiarray_api.txt -> /usr/lib/python2.4/site-pac kages/numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/__ufunc_api.h -> /usr/lib/python2.4/site-packages /numpy/core/include/numpy copying build/src.linux-i686-2.4/numpy/core/ufunc_api.txt -> /usr/lib/python2.4/site-packages /numpy/core/include/numpy Any pointers would be much appreciated. This isn't the first time I've spent days trying to get SciPy built under SUSE... :( -rex From Chris.Barker at noaa.gov Mon Aug 28 13:48:11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 10:48:11 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44ED02D8.6030401@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> Message-ID: <44F32C5B.8010101@noaa.gov> Travis Oliphant wrote: > Instead, I like better the idea of adding abs, round, max, and min to > the "non-import-*" namespace of numpy. Another I'd like is the built-in data types. I always use: import numpy as N so then I do: a = zeros(shape, float) or a = zeros(shape, N.float_) but for non-built-in types, I can't do the former. The underscore is minor but why not just have: float = float in numpy.py? (and of course, the others) -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Mon Aug 28 15:34:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 13:34:03 -0600 Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32 In-Reply-To: <20060828143638.GB5139@x2.nosyntax.com> References: <20060828143638.GB5139@x2.nosyntax.com> Message-ID: <44F3452B.7030000@ieee.org> rex wrote: > Numpy builds, but fails to run with the error message: > > >> python >> > Python 2.4.2 (#1, Apr 24 2006, 18:13:30) > [GCC 4.1.0 (SUSE 10.1 Linux)] on linux2 > >>>> import numpy >>>> > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ? > import core > File "/usr/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ? > import multiarray > ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode > > > This error usually means that NumPy was built and linked against a Python build where unicode strings were 2-bytes per character but you are trying to import it on a Python build where unicode strings are 4-bytes per character. Perhaps you have changed your build of Python and did not remove the build directory of NumPy. Try rm -fr build in the numpy directory (where you run setup.py) and build again. 
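[Editor's note: a stdlib-only sketch, not from the thread, of telling narrow (UCS-2) and wide (UCS-4) Python builds apart; a numpy extension compiled against one kind will not import on the other. On Python 3.3 and later the value is always 0x10FFFF.]

```python
import sys

# Narrow (UCS-2) interpreter builds report 0xFFFF; wide (UCS-4) builds
# report 0x10FFFF.
width = "UCS-2" if sys.maxunicode == 0xFFFF else "UCS-4"
print(sys.maxunicode, width)
assert sys.maxunicode in (0xFFFF, 0x10FFFF)
```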
You can tell how many bytes-per-unicode character your system is built with by looking at the output of sys.maxunicode From oliphant.travis at ieee.org Mon Aug 28 15:36:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 13:36:24 -0600 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44F32C5B.8010101@noaa.gov> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <44F32C5B.8010101@noaa.gov> Message-ID: <44F345B8.6070705@ieee.org> Christopher Barker wrote: > Travis Oliphant wrote: > > >> Instead, I like better the idea of adding abs, round, max, and min to >> the "non-import-*" namespace of numpy. >> > > Another I'd like is the built-in data types. I always use: > > import numpy as N > > so then I do: > > a = zeros(shape, float) > or > a = zeros(shape, N.float_) > > but for non-built-in types, I can't do the former. > > The underscore is minor but why not just have: > > float = float > > in numpy.py? > > (and of course, the others) > I think I prefer to just add the float, bool, object, unicode, str names to the "non-imported" numpy name-space. -Travis From strawman at astraw.com Mon Aug 28 16:15:40 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 Aug 2006 13:15:40 -0700 Subject: [Numpy-discussion] Numeric/numpy incompatibility Message-ID: <44F34EEC.7060505@astraw.com> The following code indicates there is a problem adding a numpy scalar type to a Numeric array. Is this expected behavior or is there a bug somewhere? This bit me in the context of updating some of my code to numpy, while part of it still uses Numeric. 
import Numeric
import numpy

print 'Numeric.__version__', Numeric.__version__
print 'numpy.__version__', numpy.__version__

a = Numeric.zeros( (10,2), Numeric.Float )
b = numpy.float64(23.39)
a[0,1] = a[0,1] + b
assert a[0,1]==b

From rex at nosyntax.com Mon Aug 28 16:52:49 2006
From: rex at nosyntax.com (rex)
Date: Mon, 28 Aug 2006 13:52:49 -0700
Subject: [Numpy-discussion] numpy1.04b4: undefined symbol: PyUnicodeUCS2_FromUnicode. error No _WIN32
In-Reply-To: <44F3452B.7030000@ieee.org>
References: <20060828143638.GB5139@x2.nosyntax.com> <44F3452B.7030000@ieee.org>
Message-ID: <20060828205249.GF5139@x2.nosyntax.com>

Travis Oliphant [2006-08-28 12:42]:
> rex wrote:
> > ImportError: /usr/lib/python2.4/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode
>
> This error usually means that NumPy was built and linked against a
> Python build where unicode strings were 2-bytes per character but you
> are trying to import it on a Python build where unicode strings are
> 4-bytes per character. Perhaps you have changed your build of Python
> and did not remove the build directory of NumPy.
>
> Try
>
>     rm -fr build
>
> in the numpy directory (where you run setup.py) and build again.

Ah! THANK YOU!

Python 2.4.2 (#1, May 2 2006, 08:13:46)
[GCC 4.1.0 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 155 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 10 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 4 tests for numpy.ctypeslib Found 39 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 8 tests for numpy.core.records Found 26 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 481 tests in 1.956s OK Now on to doing it again with MKL... >From the numpy directory: rm -fr build cp site_mkl.cfg site.cfg where site_mkl.cfg is: ----------------------------------------------------------------------- [DEFAULT] library_dirs=/opt/intel/mkl/8.1/lib/32 include_dirs=/opt/intel/mkl/8.1/include [blas_opt] libraries=libmkl.so,libmkl_p3.so,libmkl_vml_p3.so,libmkl_ia32.a,libguide.so,libmkl_def.so #libraries=whatever_the_mkl_blas_lib_is,mkl_ia32,mkl,guide [lapack_opt] libraries=libmkl_lapack32.so,libmkl_lapack.a, #libraries=mkl_lapack,mkl_lapack32,mkl_ia32,mkl,guide ---------------------------------------------------------------------- python setup.py install >& inst.log Looks OK, so in another window: python Python 2.4.2 (#1, May 2 2006, 08:13:46) [GCC 4.1.0 (SUSE Linux)] on linux2 >>> import numpy Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 39, in ? import linalg File "/usr/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? 
from linalg import * File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 25, in ? from numpy.linalg import lapack_lite ImportError: libmkl_lapack32.so: cannot open shared object file: No such file or directory >>> Oops! ^d export INCLUDE=/opt/intel/mkl/8.1/include:$INCLUDE export LD_LIBRARY_PATH=/opt/intel/mkl/8.1/lib/32:$LD_LIBRARY_PATH python Python 2.4.2 (#1, May 2 2006, 08:13:46) [GCC 4.1.0 (SUSE Linux)] on linux2 >>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 155 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 10 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 4 tests for numpy.ctypeslib Found 39 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 8 tests for numpy.core.records Found 26 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 481 tests in 2.152s OK Now off to build SciPy. Thanks again! -rex From oliphant.travis at ieee.org Mon Aug 28 16:56:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 14:56:53 -0600 Subject: [Numpy-discussion] Numeric/numpy incompatibility In-Reply-To: <44F34EEC.7060505@astraw.com> References: <44F34EEC.7060505@astraw.com> Message-ID: <44F35895.3070501@ieee.org> Andrew Straw wrote: > The following code indicates there is a problem adding a numpy scalar > type to a Numeric array. Is this expected behavior or is there a bug > somewhere? 
> There was a bug in the __array_struct__ attribute of array flags wherein the NOTSWAPPED flag was not being set as it should be. This is fixed in SVN. -Travis From carlosjosepita at yahoo.com.ar Mon Aug 28 17:16:36 2006 From: carlosjosepita at yahoo.com.ar (Carlos Pita) Date: Mon, 28 Aug 2006 21:16:36 +0000 (GMT) Subject: [Numpy-discussion] weave using numeric or numpy? Message-ID: <20060828211636.55953.qmail@web50314.mail.yahoo.com> Hi all! I'm rewriting some swig-based extensions that implement intensive inner loops dealing with numeric/numpy arrays. The intention is to build these extensions by means of weave inline, ext_module, ext_function, etc. I'm not sure about how to point weave to my numpy instalation. By default it tries to include "Numeric/arrayobject.h" and fails if you hack things to get that resolved to numpy arrayobject.h (for example, it complaints that PyArray_SBYTE is undefined). Anyway, even if I managed myself to force weave to compile against numpy/arrayobject.h, I'd still not be sure about the "runtime" that will be chosen. I'm very confused at this point, no library flags are provided at compile/link time, so how is the runtime selected between numpy, Numeric (or even numarray)? Thank you in advance. Best regards, Carlos --------------------------------- Pregunt?. Respond?. Descubr?. Todo lo que quer?as saber, y lo que ni imaginabas, est? en Yahoo! Respuestas (Beta). Probalo ya! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kortmann at ideaworks.com Mon Aug 28 17:35:59 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Mon, 28 Aug 2006 14:35:59 -0700 (PDT) Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 Message-ID: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> On 8/25/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > Message: 4 > > Date: Thu, 24 Aug 2006 14:17:44 -0600 > > From: Travis Oliphant > > Subject: Re: [Numpy-discussion] (no subject) > > To: Discussion of Numerical Python > > > > Message-ID: <44EE0968.1030904 at ee.byu.edu> > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > > kortmann at ideaworks.com wrote: > > > > > > > > You have a module built against an older version of NumPy. What modules > > are being loaded? Perhaps it is matplotlib or SciPy > > > > You need to re-build matplotlib. They should be producing a binary that > is compatible with 1.0b2 (I'm being careful to make sure future releases > are binary compatible with 1.0b2). > > Also, make sure that you remove the build directory under numpy if you > have previously built a version of numpy prior to 1.0b2. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Travis I have recompiled everything. I removed sci py numpy and matplotlib. 
I installed the numpy 1.0b4 win32exe, and then installed scipy 0.5 and
then the latest matplotlib 0.87.4.

I received this error at first, which is a matplotlib error:

C:\Lameness>c:\python23\python templatewindow.py
Traceback (most recent call last):
  File "templatewindow.py", line 7, in ?
    import wxmpl
  File "c:\python23\lib\site-packages\wxmpl.py", line 25, in ?
    import matplotlib.numerix as Numeric
  File "C:\PYTHON23\Lib\site-packages\matplotlib\numerix\__init__.py", line 74, in ?
    Matrix = matrix
NameError: name 'matrix' is not defined

I then switched matplotlib to use numeric, and I receive this error once again:

Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils)
Overwriting who= from scipy.misc.common (was from numpy.lib.utils)
Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils)
RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002
Fatal Python error: numpy.core.multiarray failed to import... exiting.
abnormal program termination

I googled the error and also found this thread but have not found a solution:
http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg01700.html

Any help?
thanks -Kenny From oliphant.travis at ieee.org Mon Aug 28 17:51:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 15:51:53 -0600 Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 In-Reply-To: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> References: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> Message-ID: <44F36579.7070502@ieee.org> kortmann at ideaworks.com wrote: > On 8/25/06, Travis Oliphant wrote: > >> kortmann at ideaworks.com wrote: >> >>> Message: 4 >>> Date: Thu, 24 Aug 2006 14:17:44 -0600 >>> From: Travis Oliphant >>> Subject: Re: [Numpy-discussion] (no subject) >>> To: Discussion of Numerical Python >>> >>> Message-ID: <44EE0968.1030904 at ee.byu.edu> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>> >>> kortmann at ideaworks.com wrote: >>> >>> >>> >>> You have a module built against an older version of NumPy. What modules >>> are being loaded? Perhaps it is matplotlib or SciPy >>> >>> >> You need to re-build matplotlib. They should be producing a binary that >> is compatible with 1.0b2 (I'm being careful to make sure future releases >> are binary compatible with 1.0b2). >> >> Also, make sure that you remove the build directory under numpy if you >> have previously built a version of numpy prior to 1.0b2. >> You have to download the SVN version of matplotlib. The released version does not support 1.0b2 and above yet. 
-Travis From Chris.Barker at noaa.gov Mon Aug 28 19:11:46 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 16:11:46 -0700 Subject: [Numpy-discussion] request for new array method: arr.abs() In-Reply-To: <44F345B8.6070705@ieee.org> References: <200608231351.02236.haase@msg.ucsf.edu> <20060823171345.786680ad@arbutus.physics.mcmaster.ca> <200608231622.52266.haase@msg.ucsf.edu> <20060823194048.2073c0c7@arbutus.physics.mcmaster.ca> <44ED02D8.6030401@ieee.org> <44F32C5B.8010101@noaa.gov> <44F345B8.6070705@ieee.org> Message-ID: <44F37832.2020804@noaa.gov> Travis Oliphant wrote: > I think I prefer to just add the float, bool, object, unicode, str names > to the "non-imported" numpy > name-space. which mean you get it with: import numpy as N N.float but not with from numpy import * ? If that's what you mean, then I'm all for it! -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Mon Aug 28 19:25:44 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 28 Aug 2006 16:25:44 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? Message-ID: <44F37B78.2050009@noaa.gov> HI all, Robin Dunn has been working on adding better support for dumping data directly to wxPython from the num* packages. I've been talking to him about the new array interface, and he might well support it (particularly if one of us contributes code), but in the meantime, he's got a number of things working with python buffers. For instance: wx.Image.SetDataBuffer(dataBuffer) That sets the data for a wxImage to the buffer handed in. This isn't as nice as the array protocol, as it has no way of checking anything other than if the length of the buffer is correct, but it is a good way to maximize performance for this sort of thing. 
he's now working on adding methods for creating wx.Bitmaps directly from buffers. In the process of testing some of this, I discovered that numarray (which Robin is testing with) works fine, but numpy does not. I get:

  File "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", line 2814, in SetDataBuffer
    return _core_.Image_SetDataBuffer(*args, **kwargs)
TypeError: non-character array cannot be interpreted as character buffer

That's what I get when I pass in a numpy array; it works great with a numarray array. While I'm a great advocate of the new array protocol, it seems supporting the buffer protocol also would be a good idea. I've enclosed some simple test code. It works with numarray, but not numpy 1.0b4. Tested with Python 2.4.3, wxPython 2.6.3.0, Linux fedora core4. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- A non-text attachment was scrubbed... Name: ImageBuffer2.py Type: text/x-python Size: 793 bytes Desc: not available URL: From oliphant.travis at ieee.org Mon Aug 28 19:32:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 17:32:20 -0600 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37B78.2050009@noaa.gov> References: <44F37B78.2050009@noaa.gov> Message-ID: <44F37D04.10807@ieee.org> Christopher Barker wrote: > Hi all, > > File > "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", > line 2814, in SetDataBuffer > return _core_.Image_SetDataBuffer(*args, **kwargs) > TypeError: non-character array cannot be interpreted as character buffer > > That's what I get when I pass in a numpy array; it works great with a > numarray array. This error sounds like wx is using the *wrong* buffer protocol. Don't use bf_getcharbuffer as it is of uncertain utility.
It is slated for removal from Python 3000. It was meant to be used as a way to determine buffers that were supposed to contain characters (not arbitrary data). Just use bf_getreadbuffer and bf_getwritebuffer from tp_as_buffer. More support for the buffer protocol all the way around is a good idea. NumPy has always supported it very well (just make sure to use it correctly). FYI, I'm going to write a PEP to get the array protocol placed as an add-on to the buffer protocol for Python 2.6 -Travis From robert.kern at gmail.com Mon Aug 28 19:37:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Aug 2006 18:37:57 -0500 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37B78.2050009@noaa.gov> References: <44F37B78.2050009@noaa.gov> Message-ID: Christopher Barker wrote: > While I'm a great advocate of the new array protocol, it seems > supporting the buffer protocol also would be a good idea. I've enclosed > some simple test code. It works with numarray, but not numpy 1.0b4 Instead of I.SetDataBuffer(some_array) you can use I.SetDataBuffer(buffer(some_array)) and it seems to work on OS X with Python 2.4, numpy 1.0b2 and wxMac 2.6.3.3 . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From torgil.svensson at gmail.com Mon Aug 28 20:24:37 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 02:24:37 +0200 Subject: [Numpy-discussion] 1.0b4 problem continuted from 1.0b3 In-Reply-To: <44F36579.7070502@ieee.org> References: <1391.12.216.231.149.1156800959.squirrel@webmail.ideaworks.com> <44F36579.7070502@ieee.org> Message-ID: This is really a matplotlib problem. 
>From matplotlib users mailing-list archives: > From: Charlie Moad > Snapshot build for use with numpy-1.0b3 > 2006-08-23 06:11 > > Here is a snapshot of svn this morning for those wanting to work with the numpy beta. Both builds are for python2.4 and windows. > > exe: http://tinyurl.com/gf299 > egg: http://tinyurl.com/fbjmg > > -Charlie That exe-file worked for me. //Torgil On 8/28/06, Travis Oliphant wrote: > kortmann at ideaworks.com wrote: > > On 8/25/06, Travis Oliphant wrote: > > > >> kortmann at ideaworks.com wrote: > >> > >>> Message: 4 > >>> Date: Thu, 24 Aug 2006 14:17:44 -0600 > >>> From: Travis Oliphant > >>> Subject: Re: [Numpy-discussion] (no subject) > >>> To: Discussion of Numerical Python > >>> > >>> Message-ID: <44EE0968.1030904 at ee.byu.edu> > >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed > >>> > >>> kortmann at ideaworks.com wrote: > >>> > >>> > >>> > >>> You have a module built against an older version of NumPy. What modules > >>> are being loaded? Perhaps it is matplotlib or SciPy > >>> > >>> > >> You need to re-build matplotlib. They should be producing a binary that > >> is compatible with 1.0b2 (I'm being careful to make sure future releases > >> are binary compatible with 1.0b2). > >> > >> Also, make sure that you remove the build directory under numpy if you > >> have previously built a version of numpy prior to 1.0b2. > >> > > You have to download the SVN version of matplotlib. The released > version does not support 1.0b2 and above yet. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion >
From torgil.svensson at gmail.com Mon Aug 28 20:30:58 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 02:30:58 +0200 Subject: [Numpy-discussion] std(axis=1) memory footprint issues + moving avg / stddev In-Reply-To: <44F14093.7080001@ieee.org> References: <44F14093.7080001@ieee.org> Message-ID: > The C-code is basically a direct "translation" of the original Python > code. ... > If I had to do it over again, I would place the std implementation there where > it could be appropriately optimized. Isn't C-code a good place for optimizations? //Torgil On 8/27/06, Travis Oliphant wrote: > Torgil Svensson wrote: > > Hi > > > > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I > > first thought I had a performance issue but discovered that std() used > > lots of memory and therefore caused lots of swapping. > > > There are certainly lots of intermediate arrays created as the > calculation proceeds. The calculation is not particularly "smart." It > just does the basic averaging and multiplication needed. > > > I want to get an array where element i is the standard deviation of row > > i in the 2D array. Using valgrind on the std() function... > > > > $ valgrind --tool=massif python -c "from numpy import *; > > a=reshape(arange(100000*100),(100000,100)).std(axis=1)" > > > > ... showed me a peak of 200Mb memory while iterating line by line... > > > > > The C-code is basically a direct "translation" of the original Python > code. There are lots of temporaries created (apparently 5 at one point > :-).
I did this before I had the _internal.py code in place where I > place Python functions that need to be accessed from C. If I had to do > it over again, I would place the std implementation there where it could > be appropriately optimized. > > > > -Travis From oliphant.travis at ieee.org Mon Aug 28 23:03:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 21:03:29 -0600 Subject: [Numpy-discussion] tensor dot ? In-Reply-To: <20060825124219.6581a608.simon@arrowtheory.com> References: <20060825124219.6581a608.simon@arrowtheory.com> Message-ID: <44F3AE81.7010305@ieee.org> Simon Burton wrote: >>>> numpy.dot.__doc__ >>>> > matrixproduct(a,b) > Returns the dot product of a and b for arrays of floating point types. > Like the generic numpy equivalent the product sum is over > the last dimension of a and the second-to-last dimension of b. > NB: The first argument is not conjugated. > > Does numpy support summing over arbitrary dimensions, > as in tensor calculus ? > > I could cook up something that uses transpose and dot, but it's > reasonably tricky i think :) > I've just added tensordot to NumPy (adapted and enhanced from numarray). It allows you to sum over an arbitrary number of axes. It uses a 2-d dot-product internally as that is optimized if you have a fast blas installed.
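[Editor's note: the contraction described here can be sanity-checked end-to-end. The sketch below assumes a current NumPy, where `tensordot` has kept this axes convention; it compares the call against the explicit quadruple loop from the worked example that follows.]

```python
import numpy as np

# Shapes from the example: a is (3, 4, 5) and b is (4, 3, 2)
a = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)
b = np.arange(4 * 3 * 2, dtype=float).reshape(4, 3, 2)

# Sum a's axes (1, 0) against b's axes (0, 1); the leftover axes
# (a's last, then b's last) form the (5, 2) result.
c = np.tensordot(a, b, axes=([1, 0], [0, 1]))

# Reference: the explicit quadruple loop the post gives as equivalent.
ref = np.zeros((5, 2))
for i in range(5):
    for j in range(2):
        for k in range(3):
            for l in range(4):
                ref[i, j] += a[k, l, i] * b[l, k, j]

print(c.shape)              # (5, 2)
print(np.allclose(c, ref))  # True
```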
Example:

If a.shape is (3,4,5) and b.shape is (4,3,2)

Then

tensordot(a, b, axes=([1,0],[0,1]))

returns a (5,2) array which is equivalent to the code:

c = zeros((5,2))
for i in range(5):
    for j in range(2):
        for k in range(3):
            for l in range(4):
                c[i,j] += a[k,l,i]*b[l,k,j]

-Travis From wbaxter at gmail.com Mon Aug 28 23:55:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 29 Aug 2006 12:55:06 +0900 Subject: [Numpy-discussion] tensor dot ? In-Reply-To: <44F3AE81.7010305@ieee.org> References: <20060825124219.6581a608.simon@arrowtheory.com> <44F3AE81.7010305@ieee.org> Message-ID: On 8/29/06, Travis Oliphant wrote:
> Example:
>
> If a.shape is (3,4,5)
> and b.shape is (4,3,2)
>
> Then
>
> tensordot(a, b, axes=([1,0],[0,1]))
>
> returns a (5,2) array which is equivalent to the code:
>
> c = zeros((5,2))
> for i in range(5):
>     for j in range(2):
>         for k in range(3):
>             for l in range(4):
>                 c[i,j] += a[k,l,i]*b[l,k,j]

That's pretty cool. From there it shouldn't be too hard to make a wrapper that would allow you to write c_ji = a_kli * b_lkj (w/sum over k and l) like:

tensordot_ez(a,'kli', b,'lkj', out='ji')

or maybe with numexpr-like syntax:

tensor_expr('_ji = a_kli * b_lkj')

[pulling a and b out of the globals()/locals()]

Might be neat to be able to build a callable function for repeated reuse:

tprod = tensor_func('_ji = [0]_kli * [1]_lkj')  # [0] and [1] become parameters 0 and 1
c = tprod(a, b)

or to pass the output through a (potentially reused) array argument:

tprod1 = tensor_func('[0]_ji = [1]_kli * [2]_lkj')
tprod1(c, a, b)

--bb From pgmdevlist at gmail.com Tue Aug 29 01:25:25 2006 From: pgmdevlist at gmail.com (PGM) Date: Tue, 29 Aug 2006 01:25:25 -0400 Subject: [Numpy-discussion] A minor annoyance with MA Message-ID: <200608290125.25232.pgmdevlist@gmail.com> Folks, I keep running into the following problem since some recent update (I'm currently running 1.0b3, but the problem occurred roughly around 0.9.8): >>> import numpy.core.ma as MA >>>
x=MA.array([[1],[2]],mask=False) >>> x.sum(None) /usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, target, axis, dtype) 393 m.shape = (1,) 394 if m is nomask: --> 395 return masked_array (self.f.reduce (t, axis)) 396 else: 397 t = masked_array (t, m) TypeError: an integer is required #................................ Note that x.sum(0) and x.sum(1) work fine. I know some consensus seems to be lacking with MA, but still, I can't see why axis=None is not recognized. Corollary: with masked array, the default axis for sum is 0, when it's None for regular arrays. Is there a reason for this inconsistency ? Thanks a lot From robin at alldunn.com Tue Aug 29 02:09:35 2006 From: robin at alldunn.com (Robin Dunn) Date: Mon, 28 Aug 2006 23:09:35 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F37D04.10807@ieee.org> References: <44F37B78.2050009@noaa.gov> <44F37D04.10807@ieee.org> Message-ID: <44F3DA1F.4020007@alldunn.com> Travis Oliphant wrote: > Christopher Barker wrote: >> HI all, >> >> File >> "/usr/lib/python2.4/site-packages/wx-2.6-gtk2-unicode/wx/_core.py", >> line 2814, in SetDataBuffer >> return _core_.Image_SetDataBuffer(*args, **kwargs) >> TypeError: non-character array cannot be interpreted as character buffer >> >> If I try to pass in a numpy array, while it works great with a >> numarray array. > > This error sounds like wx is using the *wrong* buffer protocol. Don't > use bf_getcharbuffer as it is of uncertain utility. It is slated for > removal from Python 3000. It was meant to be used as a way to determine > buffers that were supposed to contain characters (not arbitrary data). > > Just use bf_getreadbuffer and bf_getwritebuffer from tp_as_buffer. I'm using PyArg_Parse($input, "t#", ...) to get the buffer pointer and size. Is there another format specifier to use for the buffer pointer using the other slots or do I need to drop down to a lower level API to get it? 
I didn't realize there was a distinction between buffer and character buffer. Another read of the PyArg_Parse docs with that new fact makes things a little more clear. Looking at the code I guess "s#" will do it, I guess I thought it would try to coerce the object to a PyString like some other APIs do, which I was trying to avoid, but it doesn't appear to do that, (only encoding a unicode object if that is passed.) I think I'll take a shot at using tp_as_buffer directly to avoid any confusion in the future and avoid the arg parse overhead... Any other suggestions? BTW Chris, try using buffer(RGB) and buffer(Alpha) in your sample, I expect that will work with the current code. -- Robin Dunn Software Craftsman http://wxPython.org Java give you jitters? Relax with wxPython! From bruce.who.hk at gmail.com Tue Aug 29 02:03:10 2006 From: bruce.who.hk at gmail.com (Bruce Who) Date: Tue, 29 Aug 2006 14:03:10 +0800 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available In-Reply-To: <44F341E4.7000003@ieee.org> References: <44F01802.8050505@ieee.org> <200608281448353906004@gmail.com> <44F341E4.7000003@ieee.org> Message-ID: Hi, Travis I can pack my scripts into an executable with py2exe, but errors occur once it runs:

No scipy-style subpackage 'random' found in D:\test\dist\numpy. Ignoring: No module named info
import core -> failed: No module named _internal
import lib -> failed: 'module' object has no attribute '_ARRAY_API'
import linalg -> failed: 'module' object has no attribute '_ARRAY_API'
import dft -> failed: 'module' object has no attribute '_ARRAY_API'
Traceback (most recent call last):
  File "main.py", line 9, in ?
  File "numpy\__init__.pyc", line 49, in ?
  File "numpy\add_newdocs.pyc", line 2, in ?
  File "numpy\lib\__init__.pyc", line 5, in ?
  File "numpy\lib\type_check.pyc", line 8, in ?
  File "numpy\core\__init__.pyc", line 6, in ?
  File "numpy\core\umath.pyc", line 12, in ?
  File "numpy\core\umath.pyc", line 10, in __load
AttributeError: 'module' object has no attribute '_ARRAY_API'

This is the main.py file:

#=======================================
# filename:main.py
import wx
import numpy

class myFrame(wx.Frame):
    def __init__(self, *args, **kwds):
        wx.Frame.__init__(self, *args, **kwds)
        ##------ your widgets
        ##------ put stuff into sizer
        self.sizer_ = wx.BoxSizer(wx.VERTICAL)
        ## self.sizer_.Add(your_ctrl, proportion = 1, flag = wx.EXPAND)
        ## apply sizer
        self.SetSizer(self.sizer_)
        self.SetAutoLayout(True)

def main():
    ## {{{
    app = wx.PySimpleApp(0)
    frame = myFrame(None, -1, title = '')
    frame.Show(True)
    app.SetTopWindow(frame)
    app.MainLoop()
    ## }}}

if __name__ == "__main__": main()

#=======================================
# filename:setup.py
import glob
import sys
from distutils.core import setup
import py2exe

includes = ["encodings", "encodings.*", ]
excludes = ["javax.comm"]
options = {
    "py2exe": {
        #"compressed": 1,
        #"optimize": 0,
        #"bundle_files":2,
        "skip_archive":1,
        "includes": includes,
        'excludes': excludes
    }
}
setup(
    version = "0.1",
    description = "",
    name = "test",
    options = options,
    windows = [
        {
            "script":"main.py",
        }
    ],
    #zipfile = None,
)

and I run this command to compile the scripts:

python setup.py py2exe

and all packages I use are: python2.4.3 numpy-0.98 py2exe-0.6.5 wxpython-2.6.3.2

I uninstalled Numeric before I compiled the scripts. If you google "numpy py2exe", you can easily find other guys who stumbled over the same issue: http://aspn.activestate.com/ASPN/Mail/Message/py2exe-users/3249182 http://www.nabble.com/matplotlib,-numpy-and-py2exe-t1901429.html I just hope this can be fixed in the next stable release of numpy. On 8/29/06, Travis Oliphant wrote: > bruce.who.hk wrote: > > Hi, Travis > > > > I just wonder if NumPy 1.0b4 can get along with py2exe? Just a few weeks ago I made an application in Python.
At first I used Numpy, it works OK, but I cannot pack it into a workable executable with py2exe and the XXX.log said that numpy cannot find some module. I found some hints in the py2exe wiki, but it still doesn't work. At last I tried Numeric instead and it worked OK. I just hope that you do not stop the maintenance of Numeric before you are sure that Numpy can work with py2exe. > > We've already stopped maintenance of Numeric nearly 1 year ago. If > NumPy doesn't work with py2exe then we need help figuring out why. The > beta-release period is the perfect time to fix that. I've never used > py2exe myself, but I seem to recall that some have been able to make it > work. > > The problem may just be listing the right set of modules to carry along > because you may not be able to get that with just the Python-side > imports. Post any errors you receive to > numpy-discussion at lists.sourceforge.net > > Thanks, > > > -Travis > > Bruce Who
From tcorcelle at yahoo.fr Tue Aug 29 06:01:32 2006 From: tcorcelle at yahoo.fr (tristan CORCELLE) Date: Tue, 29 Aug 2006 10:01:32 +0000 (GMT) Subject: [Numpy-discussion] Py2exe / numpy troubles Message-ID: <20060829100135.12485.qmail@web26511.mail.ukl.yahoo.com> Hello, I am having troubles with py2exe and numpy/matplotlib...
Configuration : Windows XP pro ActivePython 2.4.2.10 Scipy 0.4.9 Numpy 0.9.8 MatplotLib 0.87.1 Py2exe 0.6.5 WxPython 2.6 I am using the following setup.py file: #--------------------------------------------------------- from distutils.core import setup import py2exe from distutils.filelist import findall import os import matplotlib matplotlibdatadir = matplotlib.get_data_path() matplotlibdata = findall(matplotlibdatadir) matplotlibdata_files = [] for f in matplotlibdata: dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) matplotlibdata_files.append((os.path.split(dirname)[0], [f])) packages = ['matplotlib', 'pytz'] includes = [] excludes = [] dll_excludes = ['libgdk_pixbuf-2.0-0.dll', 'libgobject-2.0-0.dll', 'libgdk-win32-2.0-0.dll', 'wxmsw26uh_vc.dll'] opts = { 'py2exe': { 'packages' : packages, 'includes' : includes, 'excludes' : excludes, 'dll_excludes' : dll_excludes } } setup ( console=['test.py'], options = opts, data_files = matplotlibdata_files ) #----------------------------- EOF --------------------------- I compile the application by running ">setup.py py2exe" At the end of compilation phase, it is written : The following modules appear to be missing ['AppKit', 'FFT', 'Foundation', 'Image', 'LinearAlgebra', 'MA', 'MLab', 'Matrix', 'Numeric', 'PyObjCTools', 'P yQt4', 'Pyrex', 'Pyrex.Compiler', 'RandomArray', '_curses', '_ssl', 'backends.draw_if_interactive', 'backends. new_figure_manager', 'backends.pylab_setup', 'backends.show', 'cairo', 'cairo.gtk', 'fcompiler.FCompiler', 'fc ompiler.show_fcompilers', 'fltk', 'gd', 'gobject', 'gtk', 'lib.add_newdoc', 'matplotlib.enthought.pyface.actio n', 'mlab.amax', 'mlab.amin', 'numarray', 'numarray.convolve', 'numarray.fft', 'numarray.ieeespecial', 'numarr ay.linear_algebra', 'numarray.linear_algebra.mlab', 'numarray.ma', 'numarray.numeric', 'numarray.random_array' , 'numerix.ArrayType', 'numerix.Complex', 'numerix.Complex32', 'numerix.Complex64', 'numerix.Float', 'numerix. 
Float32', 'numerix.Float64', 'numerix.Int', 'numerix.Int16', 'numerix.Int32', 'numerix.Int8', 'numerix.NewAxis ', 'numerix.UInt16', 'numerix.UInt32', 'numerix.UInt8', 'numerix.absolute', 'numerix.add', 'numerix.all', 'num erix.allclose', 'numerix.alltrue', 'numerix.arange', 'numerix.arccos', 'numerix.arccosh', 'numerix.arcsin', 'n umerix.arcsinh', 'numerix.arctan', 'numerix.arctan2', 'numerix.arctanh', 'numerix.argmax', 'numerix.argmin', ' numerix.argsort', 'numerix.around', 'numerix.array', 'numerix.arrayrange', 'numerix.asarray', 'numerix.asum', 'numerix.bitwise_and', 'numerix.bitwise_or', 'numerix.bitwise_xor', 'numerix.ceil', 'numerix.choose', 'numerix .clip', 'numerix.compress', 'numerix.concatenate', 'numerix.conjugate', 'numerix.convolve', 'numerix.cos', 'nu merix.cosh', 'numerix.cross_correlate', 'numerix.cumproduct', 'numerix.cumsum', 'numerix.diagonal', 'numerix.d ivide', 'numerix.dot', 'numerix.equal', 'numerix.exp', 'numerix.fabs', 'numerix.fft.fft', 'numerix.fft.inverse _fft', 'numerix.floor', 'numerix.fmod', 'numerix.fromfunction', 'numerix.fromstring', 'numerix.greater', 'nume rix.greater_equal', 'numerix.hypot', 'numerix.identity', 'numerix.indices', 'numerix.innerproduct', 'numerix.i scontiguous', 'numerix.less', 'numerix.less_equal', 'numerix.log', 'numerix.log10', 'numerix.logical_and', 'nu merix.logical_not', 'numerix.logical_or', 'numerix.logical_xor', 'numerix.matrixmultiply', 'numerix.maximum', 'numerix.minimum', 'numerix.mlab.amax', 'numerix.mlab.amin', 'numerix.mlab.cov', 'numerix.mlab.diff', 'numerix .mlab.hanning', 'numerix.mlab.rand', 'numerix.mlab.std', 'numerix.mlab.svd', 'numerix.multiply', 'numerix.nega tive', 'numerix.newaxis', 'numerix.nonzero', 'numerix.not_equal', 'numerix.nx', 'numerix.ones', 'numerix.outer product', 'numerix.pi', 'numerix.power', 'numerix.product', 'numerix.put', 'numerix.putmask', 'numerix.rank', 'numerix.ravel', 'numerix.repeat', 'numerix.reshape', 'numerix.resize', 'numerix.searchsorted', 'numerix.shape ', 
'numerix.sin', 'numerix.sinh', 'numerix.size', 'numerix.sometrue', 'numerix.sort', 'numerix.sqrt', 'numerix .subtract', 'numerix.swapaxes', 'numerix.take', 'numerix.tan', 'numerix.tanh', 'numerix.trace', 'numerix.trans pose', 'numerix.typecode', 'numerix.typecodes', 'numerix.where', 'numerix.which', 'numerix.zeros', 'numpy.Comp lex', 'numpy.Complex32', 'numpy.Complex64', 'numpy.Float', 'numpy.Float32', 'numpy.Float64', 'numpy.Infinity', 'numpy.Int', 'numpy.Int16', 'numpy.Int32', 'numpy.Int8', 'numpy.UInt16', 'numpy.UInt32', 'numpy.UInt8', 'nump y.inf', 'numpy.infty', 'numpy.oldnumeric', 'objc', 'paint', 'pango', 'pre', 'pyemf', 'qt', 'setuptools', 'setu ptools.command', 'setuptools.command.egg_info', 'trait_sheet', 'matplotlib.numerix.Float', 'matplotlib.numerix .Float32', 'matplotlib.numerix.absolute', 'matplotlib.numerix.alltrue', 'matplotlib.numerix.asarray', 'matplot lib.numerix.ceil', 'matplotlib.numerix.equal', 'matplotlib.numerix.fromstring', 'matplotlib.numerix.indices', 'matplotlib.numerix.put', 'matplotlib.numerix.ravel', 'matplotlib.numerix.sqrt', 'matplotlib.numerix.take', 'm atplotlib.numerix.transpose', 'matplotlib.numerix.where', 'numpy.core.conjugate', 'numpy.core.equal', 'numpy.c ore.less', 'numpy.core.less_equal', 'numpy.dft.old', 'numpy.random.rand', 'numpy.random.randn'] 1) First Problem: numpy\core\_internal.pyc not included in Library.zip No scipy-style subpackage 'core' found in C:\WinCE\Traces\py2exe test\dist\library.zip\numpy. Ignoring: No module named _internal Traceback (most recent call last): File "profiler_ftt.py", line 15, in ? from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ File "matplotlib\backends\backend_wx.pyc", line 152, in ? File "matplotlib\backend_bases.pyc", line 10, in ? File "matplotlib\colors.pyc", line 33, in ? File "matplotlib\numerix\__init__.pyc", line 67, in ? File "numpy\__init__.pyc", line 35, in ? 
File "numpy\_import_tools.pyc", line 173, in __call__ File "numpy\_import_tools.pyc", line 68, in _init_info_modules File "", line 1, in ? File "numpy\lib\__init__.pyc", line 5, in ? File "numpy\lib\type_check.pyc", line 8, in ? File "numpy\core\__init__.pyc", line 6, in ? File "numpy\core\umath.pyc", line 12, in ? File "numpy\core\umath.pyc", line 10, in __load AttributeError: 'module' object has no attribute '_ARRAY_API' I resolved that issue by adding the file ...\Python24\Lib\site-packages\numpy\core\_internal.pyc in ...\test\dist\library.zip\numpy\core. Each time I compile that executable, I add the file by hand. Does anybody know how to automatically add that file? 2) Second problem: I don't know how to resolve that issue: Traceback (most recent call last): File "profiler_ftt.py", line 15, in ? from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ File "matplotlib\backends\backend_wx.pyc", line 152, in ? File "matplotlib\backend_bases.pyc", line 10, in ? File "matplotlib\colors.pyc", line 33, in ? File "matplotlib\numerix\__init__.pyc", line 67, in ? File "numpy\__init__.pyc", line 35, in ? File "numpy\_import_tools.pyc", line 173, in __call__ File "numpy\_import_tools.pyc", line 68, in _init_info_modules File "", line 1, in ? File "numpy\random\__init__.pyc", line 3, in ? File "numpy\random\mtrand.pyc", line 12, in ? File "numpy\random\mtrand.pyc", line 10, in __load File "numpy.pxi", line 32, in mtrand AttributeError: 'module' object has no attribute 'dtype' I don't find the file numpy.pxi in my file tree nor in \test\dist\library.zip. I browsed the web in the hope to find a solution but nothing. It seems that this issue is well known but no solution provided in mailing lists. What is that file "numpix.pxi"? Where to find it or how is it generated? How to resolve that execution issue? Thanks, Regards, Tristan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mattknox_ca at hotmail.com Tue Aug 29 08:59:10 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 29 Aug 2006 08:59:10 -0400 Subject: [Numpy-discussion] possible bug with numpy.object_ Message-ID: is the following behaviour expected? or is this a bug with numpy.object_ ? I'm using numpy 1.0b1

>>> print numpy.array([],numpy.float64).size
0

>>> print numpy.array([],numpy.object_).size
1

Should the size of an array initialized from an empty list not always be 1 ? or am I just crazy? Thanks, - Matt Knox _________________________________________________________________ Be one of the first to try Windows Live Mail. http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattknox_ca at hotmail.com Tue Aug 29 10:05:27 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 29 Aug 2006 10:05:27 -0400 Subject: [Numpy-discussion] possible bug with numpy.object_ Message-ID:

> is the following behaviour expected? or is this a bug with numpy.object_ ? I'm using numpy 1.0b1
>
> >>> print numpy.array([],numpy.float64).size
> 0
>
> >>> print numpy.array([],numpy.object_).size
> 1
>
> Should the size of an array initialized from an empty list not always be 1 ? or am I just crazy?
>
> Thanks,
>
> - Matt Knox

Correction... I meant: shouldn't it always be 0, not 1? From cssmwbs at gmail.com Tue Aug 29 11:15:21 2006 From: cssmwbs at gmail.com (W. Bryan Smith) Date: Tue, 29 Aug 2006 08:15:21 -0700 Subject: [Numpy-discussion] error in ctypes example from the numpy book?
Message-ID: <7c13686f0608290815i1078a347s18dbbd196dd429af@mail.gmail.com>

hi, i posted this to the forum, but it looks like the email list gets much
more traffic, so here goes. i am attempting to reproduce a portion of the
example on using ctypes from the current version of the numpy book (the
example can be found on pps 313-16). here is what i am trying to do:

import numpy
import interface
x = numpy.array(range(1,1))
y = numpy.ones_like(x)
z = interface.add(a,b)

prints the following error:

BEGIN ERROR>>
     26     b = N.require(b, dtype, requires)
     27     c = N.empty_like(a)
---> 28     func(a,b,c,a.size)
     29     return c
     30
ArgumentError: argument 1: exceptions.TypeError: Don't know how to convert
parameter 1
<>

/* Add arrays of contiguous data */
typedef struct {double real;} cdouble;
typedef struct {float real;} cfloat;
void dadd(double *a, double *b, double *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}
void sadd(float *a, float *b, float *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}
<>

__all__ = ['add']
import numpy as N
from ctypes import *
import os

_path = os.path.dirname('__file__')
lib = N.ctypeslib.ctypes_load_library('testAddInt', _path)
for name in ['sadd','dadd']:
    getattr(lib,name).restype=None

def select(dtype):
    if dtype.char in['?bBhHf']:
        return lib.sadd, single
    else:
        return lib.dadd, float
    return func, ntype

def add(a,b):
    requires = ['CONTIGUOUS','ALIGNED']
    a = N.asanyarray(a)
    func, dtype = select(a.dtype)
    a = N.require(a, dtype, requires)
    b = N.require(b, dtype, requires)
    c = N.empty_like(a)
    func(a,b,c,a.size)
    return c
<

From kwgoodman at gmail.com Tue Aug 29 11:43:34 2006
From: kwgoodman at gmail.com (Keith Goodman)
Date: Tue, 29 Aug 2006 08:43:34 -0700
Subject: [Numpy-discussion] Problem with randn
Message-ID: 

randn incorrectly returns random numbers only between 0 and 1 in numpy
1.0b1. random.randn works.
>> from numpy.matlib import *
>> randn(3,4)

matrix([[ 0.60856413,  0.35500732,  0.48089868,  0.7044022 ],
        [ 0.71098538,  0.8506885 ,  0.56154652,  0.4243273 ],
        [ 0.89655777,  0.92339559,  0.62247685,  0.70340003]])

>> randn(3,4)

matrix([[ 0.84349201,  0.55638171,  0.19052097,  0.0927636 ],
        [ 0.60144183,  0.3788309 ,  0.41451568,  0.61766302],
        [ 0.98992704,  0.94276652,  0.18569066,  0.69976656]])

>> randn(3,4)

matrix([[ 0.69003273,  0.07171546,  0.34549767,  0.20901683],
        [ 0.1333439 ,  0.4086678 ,  0.80960253,  0.86864547],
        [ 0.75329427,  0.6760677 ,  0.32496542,  0.99402779]])

>> random.randn(3,4)

array([[ 1.00107604,  0.41418557, -0.07923699,  0.19203247],
       [-0.29386593,  0.02343702, -0.42366834, -1.27978993],
       [ 0.25722357, -0.53765827,  0.50569238, -2.44592854]])

From oliphant.travis at ieee.org Tue Aug 29 12:49:58 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 29 Aug 2006 10:49:58 -0600
Subject: [Numpy-discussion] possible bug with numpy.object_
In-Reply-To: 
References: 
Message-ID: <44F47036.8040300@ieee.org>

Matt Knox wrote:
> is the following behaviour expected? or is this a bug with
> numpy.object_ ? I'm using numpy 1.0b1
>
> >>> print numpy.array([],numpy.float64).size
> 0
>
> >>> print numpy.array([],numpy.object_).size
> 1
>
> Should the size of an array initialized from an empty list not always
> be 1 ? or am I just crazy?
>
Not in this case. Explicitly creating an object array from any object
(even the empty-list object) gives you a 0-d array containing that
object. When you explicitly create an object array, a different section
of code handles it and gives this result.

This is a recent change, and I don't think this use-case was considered
as a backward incompatibility (which I believe it is). Perhaps we should
make it so array([],....) always returns an empty array. I'm not sure.

Comments?
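A short sketch of the distinction being discussed. Note that the exact
output below reflects later NumPy releases, in which array([], dtype=object)
was made consistent with the other dtypes and has size 0; the size-1 result
reported above was specific to 1.0b1:

```python
import numpy as np

# An empty list with a numeric dtype gives an empty (size-0) array.
a = np.array([], dtype=np.float64)
assert a.size == 0

# In modern NumPy an empty list with dtype=object is also size 0.
b = np.array([], dtype=object)
assert b.size == 0

# By contrast, a 0-d object array -- a single object "boxed" as a scalar
# array -- has size 1, even when the contained object is an empty list.
c = np.empty((), dtype=object)
c[()] = []
assert c.size == 1 and c.ndim == 0
```

The 1.0b1 behaviour corresponds to the third case: the empty list was being
treated as the single object to store, rather than as a sequence of elements.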
-Travis From Chris.Barker at noaa.gov Tue Aug 29 13:12:04 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 29 Aug 2006 10:12:04 -0700 Subject: [Numpy-discussion] Is numpy supposed to support the buffer protocol? In-Reply-To: <44F3DA1F.4020007@alldunn.com> References: <44F37B78.2050009@noaa.gov> <44F37D04.10807@ieee.org> <44F3DA1F.4020007@alldunn.com> Message-ID: <44F47564.4070208@noaa.gov> Robin Dunn wrote: > BTW Chris, try using buffer(RGB) and buffer(Alpha) in your sample, I > expect that will work with the current code. yup. that does work. I was concerned that it would make a copy, but it looks like it makes a new buffer object, but using the same data buffer, so that should be fine. Thanks for all this, -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From kortmann at ideaworks.com Tue Aug 29 13:18:44 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 10:18:44 -0700 (PDT) Subject: [Numpy-discussion] py2exe error Message-ID: <3345.12.216.231.149.1156871924.squirrel@webmail.ideaworks.com> >Hi, Travis >I can pack my scripts into an executable with py2exe, but errors occur >once it runs: >No scipy-style subpackage 'random' found in D:\test\dist\numpy. >Ignoring: No module named info >import core -> failed: No module named _internal >import lib -> failed: 'module' object has no attribute '_ARRAY_API' >import linalg -> failed: 'module' object has no attribute '_ARRAY_API' >import dft -> failed: 'module' object has no attribute '_ARRAY_API' >Traceback (most recent call last): > File "main.py", line 9, in ? > File "numpy\__init__.pyc", line 49, in ? >  > File "numpy\add_newdocs.pyc", line 2, in ? > gkDc > File "numpy\lib\__init__.pyc", line 5, in ? > > File "numpy\lib\type_check.pyc", line 8, in ? > > File "numpy\core\__init__.pyc", line 6, in ? 
> > File "numpy\core\umath.pyc", line 12, in ?
> > File "numpy\core\umath.pyc", line 10, in __load

I am cross referencing this from the py2exe mailing list. There seems to
have been a fix for this problem.

#---------------------------beginning of setup.py--------------------#
from distutils.core import setup
import py2exe
from distutils.filelist import findall
import os
import matplotlib

matplotlibdatadir = matplotlib.get_data_path()
matplotlibdata = findall(matplotlibdatadir)
matplotlibdata_files = []
for f in matplotlibdata:
    dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:])
    matplotlibdata_files.append((os.path.split(dirname)[0], [f]))

packages = ['matplotlib', 'pytz']
includes = []
excludes = []
dll_excludes = ['libgdk_pixbuf-2.0-0.dll', 'libgobject-2.0-0.dll',
                'libgdk-win32-2.0-0.dll', 'wxmsw26uh_vc.dll']

opts = { 'py2exe': { 'packages' : packages,
                     'includes' : includes,
                     'excludes' : excludes,
                     'dll_excludes' : dll_excludes } }

setup ( console=['test.py'],
        options = opts,
        data_files = matplotlibdata_files )
#--------------------------End of setup.py--------------#

>>1) First Problem: numpy\core\_internal.pyc not included in Library.zip
>>No scipy-style subpackage 'core' found in C:\WinCE\Traces\py2exe
>>test\dist\library.zip\numpy. Ignoring: No module named _internal
>>Traceback (most recent call last):
>>  File "profiler_ftt.py", line 15, in ?
>>    from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\
>>  File "matplotlib\backends\backend_wx.pyc", line 152, in ?
>>  File "matplotlib\backend_bases.pyc", line 10, in ?
>>  File "matplotlib\colors.pyc", line 33, in ?
>>  File "matplotlib\numerix\__init__.pyc", line 67, in ?
>>  File "numpy\__init__.pyc", line 35, in ?
>>  File "numpy\_import_tools.pyc", line 173, in __call__
>>  File "numpy\_import_tools.pyc", line 68, in _init_info_modules
>>  File "", line 1, in ?
>>  File "numpy\lib\__init__.pyc", line 5, in ?
>>  File "numpy\lib\type_check.pyc", line 8, in ?
>>  File "numpy\core\__init__.pyc", line 6, in ?
>>  File "numpy\core\umath.pyc", line 12, in ?
>>  File "numpy\core\umath.pyc", line 10, in __load
>>AttributeError: 'module' object has no attribute '_ARRAY_API'

>>I resolved that issue by adding the file
>>...\Python24\Lib\site-packages\numpy\core\_internal.pyc in
>>...\test\dist\library.zip\numpy\core.
>>Each time I compile that executable, I add the file by hand.
>>Does anybody know how to automatically add that file?

The setup.py above is from the person who wrote the instructions for this
fix. Also, here is my setup.py for reference, although mine is probably
incorrect since I am new to py2exe:

#------------------------setup.py------------------------#
from distutils.core import setup
import py2exe
from distutils.filelist import findall
import os
import matplotlib

matplotlibdatadir = matplotlib.get_data_path()
matplotlibdata = findall(matplotlibdatadir)
matplotlibdata_files = []
for f in matplotlibdata:
    dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:])
    matplotlibdata_files.append((os.path.split(dirname)[0], [f]))

setup(
    console=['templatewindow.py'],
    options={
        "py2exe": {
            "compressed": 1,
            "optimize": 2,
            "packages": ["encodings", "kinterbasdb", "pytz.zoneinfo.UTC",
                         "matplotlib.numerix", ],
            "dll_excludes": ["tcl84.dll", "tk84.dll"]
        }

mpldata = glob.glob(r'C:\Python24\share\matplotlib\*')
mpldata.append(r'C:\Python24\share\matplotlib\.matplotlibrc')

data_files = [("prog\\locale\\fr\\LC_MESSAGES", mylocaleFR),
              ("prog\\locale\\de\\LC_MESSAGES", mylocaleDE),
              ("prog\\locale\\en\\LC_MESSAGES", mylocaleEN),
              ...
              ("matplotlibdata", mpldata),
              ("prog\\amaradata", amaradata),
              ("prog\\amaradata\\Schemata", amaraschemata),
              ]
)
#-----------------------EOF-----------------#

I was receiving this same "AttributeError: 'module' object has no
attribute '_ARRAY_API'" error, and I did the same thing this person did:
unzipped the folder, put the _internal.pyc file in the numpy/core folder,
and then rezipped the folder. I am now receiving a wx error, but the numpy
_ARRAY_API error is gone. You may want to check this out and let us know
if it works for you also.

-Kenny

p.s. I tried sending this 4 times prior but believe it did not send
because it was a lot longer, so I shortened it; sorry if it posted 4 times.

From charlesr.harris at gmail.com Tue Aug 29 13:32:54 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 29 Aug 2006 11:32:54 -0600
Subject: [Numpy-discussion] Documentation
Message-ID: 

Hi All,

I've finished moving all the docstrings in arraymethods to add_newdocs.
Much of the documentation is still incomplete and needs nicer formatting,
so if you are so inclined, or even annoyed with some of the help messages,
feel free to fix things up.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kwgoodman at gmail.com Tue Aug 29 13:57:26 2006
From: kwgoodman at gmail.com (Keith Goodman)
Date: Tue, 29 Aug 2006 10:57:26 -0700
Subject: [Numpy-discussion] For loop tips
Message-ID: 

I have a very long list that contains many repeated elements. The
elements of the list can be either all numbers, or all strings, or all
dates [datetime.date].

I want to convert the list into a matrix where each unique element of
the list is assigned a consecutive integer starting from zero.

I've done it by brute force below. Any tips for making it faster? (5x
would make it useful; 10x would be a dream.)
>> list2index.test()
Numbers: 5.84955787659 seconds
Characters: 24.3192870617 seconds
Dates: 39.288228035 seconds


import datetime, time
from numpy import nan, asmatrix, ones

def list2index(L):

    # Find unique elements in list
    uL = dict.fromkeys(L).keys()

    # Convert list to matrix
    L = asmatrix(L).T

    # Initialize return matrix
    idx = nan * ones((L.size, 1))

    # Assign numbers to unique L values
    for i, uLi in enumerate(uL):
        idx[L == uLi,:] = i

def test():

    L = 5000*range(255)
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Numbers:', t2-t1, 'seconds'

    L = 5000*[chr(z) for z in range(255)]
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Characters:', t2-t1, 'seconds'

    d = datetime.date
    step = datetime.timedelta
    L = 5000*[d(2006,1,1)+step(z) for z in range(255)]
    t1 = time.time()
    idx = list2index(L)
    t2 = time.time()
    print 'Dates:', t2-t1, 'seconds'

From tim.hochberg at ieee.org Tue Aug 29 14:40:11 2006
From: tim.hochberg at ieee.org (Tim Hochberg)
Date: Tue, 29 Aug 2006 11:40:11 -0700
Subject: [Numpy-discussion] For loop tips
In-Reply-To: 
References: 
Message-ID: <44F48A0B.7020401@ieee.org>

Keith Goodman wrote:
> I have a very long list that contains many repeated elements. The
> elements of the list can be either all numbers, or all strings, or all
> dates [datetime.date].
>
> I want to convert the list into a matrix where each unique element of
> the list is assigned a consecutive integer starting from zero.
>
If what you want is that the first unique element gets zero, the second
one, I don't think the code below will work in general since the dict
does not preserve order. You might want to look at the results for the
character case to see what I mean. If you're looking for something else,
you'll need to elaborate a bit. Since list2index doesn't return
anything, it's not entirely clear what the answer consists of. Just idx?
Idx plus uL?
> I've done it by brute force below. Any tips for making it faster?
(5x
> would make it useful; 10x would be a dream.)
>
Assuming I understand what you're trying to do, this might help:

def list2index2(L):
    idx = ones([len(L)])
    map = {}
    for i, x in enumerate(L):
        index = map.get(x)
        if index is None:
            map[x] = index = len(map)
        idx[i] = index
    return idx

It's almost 10x faster for numbers and about 40x faster for characters
and dates. However it produces different results from list2index in the
second two cases. That may or may not be a good thing depending on what
you're really trying to do.

-tim

> >> list2index.test()
> Numbers: 5.84955787659 seconds
> Characters: 24.3192870617 seconds
> Dates: 39.288228035 seconds
>
> import datetime, time
> from numpy import nan, asmatrix, ones
>
> def list2index(L):
>
>     # Find unique elements in list
>     uL = dict.fromkeys(L).keys()
>
>     # Convert list to matrix
>     L = asmatrix(L).T
>
>     # Initialize return matrix
>     idx = nan * ones((L.size, 1))
>
>     # Assign numbers to unique L values
>     for i, uLi in enumerate(uL):
>         idx[L == uLi,:] = i
>
> def test():
>
>     L = 5000*range(255)
>     t1 = time.time()
>     idx = list2index(L)
>     t2 = time.time()
>     print 'Numbers:', t2-t1, 'seconds'
>
>     L = 5000*[chr(z) for z in range(255)]
>     t1 = time.time()
>     idx = list2index(L)
>     t2 = time.time()
>     print 'Characters:', t2-t1, 'seconds'
>
>     d = datetime.date
>     step = datetime.timedelta
>     L = 5000*[d(2006,1,1)+step(z) for z in range(255)]
>     t1 = time.time()
>     idx = list2index(L)
>     t2 = time.time()
>     print 'Dates:', t2-t1, 'seconds'
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From tim.hochberg at ieee.org Tue Aug 29 14:48:19 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 11:48:19 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: <44F48A0B.7020401@ieee.org> References: <44F48A0B.7020401@ieee.org> Message-ID: <44F48BF3.9090108@ieee.org> Tim Hochberg wrote: > Keith Goodman wrote: > >> I have a very long list that contains many repeated elements. The >> elements of the list can be either all numbers, or all strings, or all >> dates [datetime.date]. >> >> I want to convert the list into a matrix where each unique element of >> the list is assigned a consecutive integer starting from zero. >> >> > If what you want is that the first unique element get's zero, the second > one, I don't think the code below will work in general since the dict > does not preserve order. You might want to look at the results for the > character case to see what I mean. If you're looking for something else, > you'll need to elaborate a bit. Since list2index doesn't return > anything, it's not entirely clear what the answer consists of. Just idx? > Idx plus uL? > > >> I've done it by brute force below. Any tips for making it faster? (5x >> would make it useful; 10x would be a dream.) 
>> >> > Assuming I understand what you're trying to do, this might help: > > def list2index2(L): > idx = ones([len(L)]) > map = {} > for i, x in enumerate(L): > index = map.get(x) > if index is None: > map[x] = index = len(map) > idx[i] = index > return idx > > > It's almost 10x faster for numbers and about 40x faster for characters > and dates. However it produces different results from list2index in the > second two cases. That may or may not be a good thing depending on what > you're really trying to do. > Ugh! I fell victim to premature optimization disease. The following is both clearer and faster: Sigh. def list2index3(L): idx = ones([len(L)]) map = {} for i, x in enumerate(L): if x not in map: map[x] = len(map) idx[i] = map[x] return idx > -tim > > >> >> >>>> list2index.test() >>>> >>>> >> Numbers: 5.84955787659 seconds >> Characters: 24.3192870617 seconds >> Dates: 39.288228035 seconds >> >> >> import datetime, time >> from numpy import nan, asmatrix, ones >> >> def list2index(L): >> >> # Find unique elements in list >> uL = dict.fromkeys(L).keys() >> >> # Convert list to matrix >> L = asmatrix(L).T >> >> # Initialize return matrix >> idx = nan * ones((L.size, 1)) >> >> # Assign numbers to unique L values >> for i, uLi in enumerate(uL): >> idx[L == uLi,:] = i >> >> def test(): >> >> L = 5000*range(255) >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Numbers:', t2-t1, 'seconds' >> >> L = 5000*[chr(z) for z in range(255)] >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Characters:', t2-t1, 'seconds' >> >> d = datetime.date >> step = datetime.timedelta >> L = 5000*[d(2006,1,1)+step(z) for z in range(255)] >> t1 = time.time() >> idx = list2index(L) >> t2 = time.time() >> print 'Dates:', t2-t1, 'seconds' >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? 
>> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From oliphant.travis at ieee.org Tue Aug 29 14:57:30 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 29 Aug 2006 12:57:30 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend Message-ID: <44F48E1A.1020006@ieee.org> Hi all, Classes start for me next Tuesday, and I'm teaching a class for which I will be using NumPy / SciPy extensively. I need to have a release of these two (and hopefully matplotlib) that work with each other. Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend (probably Monday), and also get a release of SciPy out as well. At that point, I'll only be available for bug-fixes to 1.0. Therefore, the next release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). To facilitate that, after 1.0b5 there will be a feature-freeze (except for in the compatibility modules and the alter_code scripts which can still be modified to ease the transition burden). 
The 1.0rc1 release of NumPy will be mid September I suspect. Also, I recognize that the default-axis switch is a burden for those who have already transitioned code to use NumPy (for those just starting out it's not a big deal because of the compatibility layer). As a result, I've added a module called fix_default_axis whose converttree method will walk a hierarchy and change all .py files to fix the default axis problem in those files. This can be done in one of two ways (depending on the boolean argument import_change). If import_change is False a) Add and axis= keyword argument to any function whose default changed in 1.0b2 or 1.0b3, which does not already have the axis argument --- this method does not distinguish where the function came from and so can do the wrong thing with similarly named functions from other modules (.e.g. builtin sum and itertools.repeat). If import_change is True b) Change the location where the function is imported from numpy to numpy.oldnumeric where the default axis is the same as before. This approach looks for several flavors of the import statement and alters the import location for any function whose default axis argument changed --- this can get confused if you use from numpy import sum as mysum --- it will not replace that usage of sum. I used this script on the scipy tree in mode a) as a test (followed by a manual replacement of all? incorrect substitutions). I hope it helps. I know it's annoying to have such things change. But, it does make NumPy much more consistent with respect to the default axis argument. With a few exceptions (concatenate, diff, trapz, split, array_split), the rule is that you need to specify the axis if there is more than 1 dimension or it will ravel the input. 
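The default-axis rule described here can be seen directly in a short
sketch (written against current NumPy; the array values are made up for
illustration):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# With no axis argument, reductions operate over the raveled (flattened)
# array, returning a single scalar here.
assert np.sum(a) == 21

# An explicit axis reduces along that dimension only.
assert np.sum(a, axis=0).tolist() == [5, 7, 9]   # column sums
assert np.sum(a, axis=1).tolist() == [6, 15]     # row sums

# concatenate is one of the noted exceptions: its default axis is 0
# rather than None, so it stacks along the first dimension.
assert np.concatenate((a, a)).shape == (4, 3)
```

So for arrays with more than one dimension, leaving the axis unspecified
means "treat the array as flat", which is why the conversion scripts add
an explicit axis= keyword to old code.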
-Travis

From torgil.svensson at gmail.com Tue Aug 29 14:59:55 2006
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Tue, 29 Aug 2006 20:59:55 +0200
Subject: [Numpy-discussion] For loop tips
In-Reply-To: 
References: 
Message-ID: 

def list2index(L):
    idx=dict((y,x) for x,y in enumerate(set(L)))
    return asmatrix(fromiter((idx[x] for x in L),dtype=int))

# old
$ python test.py
Numbers: 29.4062280655 seconds
Characters: 84.6239070892 seconds
Dates: 117.560418844 seconds

# new
$ python test.py
Numbers: 1.79700994492 seconds
Characters: 1.6025249958 seconds
Dates: 1.7974088192 seconds

16, 52 and 100 times faster

//Torgil

On 8/29/06, Keith Goodman wrote:
> I have a very long list that contains many repeated elements. The
> elements of the list can be either all numbers, or all strings, or all
> dates [datetime.date].
>
> I want to convert the list into a matrix where each unique element of
> the list is assigned a consecutive integer starting from zero.
>
> I've done it by brute force below. Any tips for making it faster? (5x
> would make it useful; 10x would be a dream.)
> > >> list2index.test() > Numbers: 5.84955787659 seconds > Characters: 24.3192870617 seconds > Dates: 39.288228035 seconds > > > import datetime, time > from numpy import nan, asmatrix, ones > > def list2index(L): > > # Find unique elements in list > uL = dict.fromkeys(L).keys() > > # Convert list to matrix > L = asmatrix(L).T > > # Initialize return matrix > idx = nan * ones((L.size, 1)) > > # Assign numbers to unique L values > for i, uLi in enumerate(uL): > idx[L == uLi,:] = i > > def test(): > > L = 5000*range(255) > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Numbers:', t2-t1, 'seconds' > > L = 5000*[chr(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Characters:', t2-t1, 'seconds' > > d = datetime.date > step = datetime.timedelta > L = 5000*[d(2006,1,1)+step(z) for z in range(255)] > t1 = time.time() > idx = list2index(L) > t2 = time.time() > print 'Dates:', t2-t1, 'seconds' > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Tue Aug 29 15:13:54 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 15:13:54 -0400 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: Message-ID: You can get some speed up for numeric data: def list2index2(L): aL = asarray(L) eL = empty_like(L) for v,k in enumerate(set(L)): eL[aL == k] = v return numpy.asmatrix(eL).T fwiw, Alan Isaac From charlesr.harris at gmail.com Tue Aug 29 15:06:38 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 13:06:38 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F48E1A.1020006@ieee.org> References: <44F48E1A.1020006@ieee.org> Message-ID: Hi Travis, On 8/29/06, Travis Oliphant wrote: > > > Hi all, > > Classes start for me next Tuesday, and I'm teaching a class for which I > will be using NumPy / SciPy extensively. I need to have a release of > these two (and hopefully matplotlib) that work with each other. > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > (probably Monday), and also get a release of SciPy out as well. At that > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). > > To facilitate that, after 1.0b5 there will be a feature-freeze (except > for in the compatibility modules and the alter_code scripts which can > still be modified to ease the transition burden). Speaking of features, I wonder if more of the methods should return references. 
For instance, it might be nice to write something like: a.sort().searchsorted([...]) instead of making two statements out of it. The 1.0rc1 release of NumPy will be mid September I suspect. > > Also, I recognize that the default-axis switch is a burden for those who > have already transitioned code to use NumPy (for those just starting out > it's not a big deal because of the compatibility layer). I am curious as to why you made this switch. Not complaining, mind. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Aug 29 15:11:47 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 13:11:47 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: On 8/29/06, Charles R Harris wrote: > Speaking of features, I wonder if more of the methods should return > references. For instance, it might be nice to write something like: > > a.sort().searchsorted([...]) > > instead of making two statements out of it. +1 for more 'return self' at the end of methods which currently don't return anything (well, we get the default None), as long as it's sensible. I really like this 'message chaining' style of programming, and it annoys me that much of the python stdlib gratuitously prevents it by NOT returning self in places where it would be a perfectly sensible thing to do. I find it much cleaner to write x = foo.bar().baz(param).frob() than foo.bar() foo.baz(param) x = foo.frob() but perhaps others disagree. Cheers, f From rudolphv at gmail.com Tue Aug 29 15:15:30 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Tue, 29 Aug 2006 21:15:30 +0200 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> This definitely gets my vote as well (for what it's worth). R. 
On 8/29/06, Fernando Perez wrote:
> +1 for more 'return self' at the end of methods which currently don't
> return anything (well, we get the default None), as long as it's
> sensible. I really like this 'message chaining' style of programming,
> and it annoys me that much of the python stdlib gratuitously prevents
> it by NOT returning self in places where it would be a perfectly
> sensible thing to do.
>
> I find it much cleaner to write
>
> x = foo.bar().baz(param).frob()
>
> than
>
> foo.bar()
> foo.baz(param)
> x = foo.frob()
>
> but perhaps others disagree.
>
> Cheers,
>
> f

--
Rudolph van der Merwe
Karoo Array Telescope / Square Kilometer Array - http://www.ska.ac.za

From charlesr.harris at gmail.com Tue Aug 29 15:25:14 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 29 Aug 2006 13:25:14 -0600
Subject: [Numpy-discussion] Release of 1.0b5 this weekend
In-Reply-To: 
References: <44F48E1A.1020006@ieee.org> 
Message-ID: 

Hi Fernando,

On 8/29/06, Fernando Perez wrote:
>
> On 8/29/06, Charles R Harris wrote:
>
> > Speaking of features, I wonder if more of the methods should return
> > references. For instance, it might be nice to write something like:
> >
> > a.sort().searchsorted([...])
> >
> > instead of making two statements out of it.
>
> +1 for more 'return self' at the end of methods which currently don't
> return anything (well, we get the default None), as long as it's
> sensible. I really like this 'message chaining' style of programming,
> and it annoys me that much of the python stdlib gratuitously prevents
> it by NOT returning self in places where it would be a perfectly
> sensible thing to do.

My pet peeve example: a.reverse()

I would also like to see simple methods for the "+=" operator and such.
Then one could write

x = a.copy().add(10)

One could make a whole reverse polish translator out of such operations
and a few parentheses. I have in mind some sort of code optimizer.
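The trade-off being debated here mirrors Python's own split between
list.sort() (in place, returns None) and sorted() (returns a new object);
NumPy follows the same convention, as this sketch against current NumPy
shows:

```python
import numpy as np

a = np.array([3, 1, 2])

# The ndarray method mutates in place and returns None, so it cannot chain.
assert a.sort() is None
assert a.tolist() == [1, 2, 3]

# The free function returns a new array, so expressions can chain from it,
# e.g. the a.sort().searchsorted(...) idiom written as a function call.
b = np.array([3, 1, 2])
idx = np.sort(b).searchsorted([2])
assert int(idx[0]) == 1
assert b.tolist() == [3, 1, 2]  # b itself is untouched
```

Chaining therefore already works today via the copying functions; the
proposal is about also returning self from the mutating methods.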
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.hochberg at ieee.org Tue Aug 29 15:26:06 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 12:26:06 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> Message-ID: <44F494CE.1080008@ieee.org> -0.5 from me if what we're talking about here is having mutating methods return self rather than None. Chaining stuff is pretty, but having methods that mutate self and return self looks like a source of elusive bugs to me. -tim Rudolph van der Merwe wrote: > This definitely gets my vote as well (for what it's worth). > > R. > > On 8/29/06, Fernando Perez wrote: > >> +1 for more 'return self' at the end of methods which currently don't >> return anything (well, we get the default None), as long as it's >> sensible. I really like this 'message chaining' style of programming, >> and it annoys me that much of the python stdlib gratuitously prevents >> it by NOT returning self in places where it would be a perfectly >> sensible thing to do. >> >> I find it much cleaner to write >> >> x = foo.bar().baz(param).frob() >> >> than >> >> foo.bar() >> foo.baz(param) >> x = foo.frob() >> >> but perhaps others disagree. >> >> Cheers, >> >> f >> > > From kwgoodman at gmail.com Tue Aug 29 15:27:34 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 12:27:34 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: <44F48A0B.7020401@ieee.org> References: <44F48A0B.7020401@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > Keith Goodman wrote: > > I have a very long list that contains many repeated elements. The > > elements of the list can be either all numbers, or all strings, or all > > dates [datetime.date]. 
> > > > I want to convert the list into a matrix where each unique element of > > the list is assigned a consecutive integer starting from zero. > > > If what you want is that the first unique element get's zero, the second > one, I don't think the code below will work in general since the dict > does not preserve order. You might want to look at the results for the > character case to see what I mean. If you're looking for something else, > you'll need to elaborate a bit. Since list2index doesn't return > anything, it's not entirely clear what the answer consists of. Just idx? > Idx plus uL? The output I wanted (in my mind, but unfortunately not in my previous email) is idx and uL where uL[0] corresponds to the zeros in idx, uL[1] corresponds to the ones in idx. etc. I'd also like the uL's to be ordered (now I see that characters and dates aren't ordered, ooops, thanks for telling me about that). Or optionally ordered by a second list input which if present would be used instead of the unique values of L. Thank you all for the huge improvements to my code. I'll learn a lot studying all of them. From charlesr.harris at gmail.com Tue Aug 29 15:36:33 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 13:36:33 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: Hi, On 8/29/06, Tim Hochberg wrote: > > > -0.5 from me if what we're talking about here is having mutating methods > return self rather than None. Chaining stuff is pretty, but having > methods that mutate self and return self looks like a source of elusive > bugs to me. > > -tim But how is that any worse than the current mutating operators? I think the operating principal is that methods generally work in place, functions make copies. The exceptions to this rule need to be noted. 
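[For reference, the list2index behavior Keith asks for in the For-loop-tips thread above — uL sorted, with uL[0] matching the zeros in idx — can be sketched without the numpy pieces. The solution posted later in the thread wraps the index sequence in asmatrix(fromiter(..., dtype=int)); a plain list stands in for that here to keep the sketch self-contained:]

```python
def list2index(L):
    """Map each element of L to an integer code.

    Returns (uL, idx): uL is the sorted unique elements of L, and idx[i]
    is the position of L[i] in uL, so uL[0] corresponds to the zeros in
    idx, uL[1] to the ones, and so on.  (The thread's version converts
    idx with numpy's asmatrix(fromiter(..., dtype=int)); a plain list is
    used here so the sketch has no dependencies.)
    """
    uL = sorted(set(L))
    code = dict((y, x) for x, y in enumerate(uL))
    return uL, [code[x] for x in L]

uL, idx = list2index(['b', 'a', 'c', 'a', 'b'])
# uL == ['a', 'b', 'c'], idx == [1, 0, 2, 0, 1]
```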
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Aug 29 15:49:25 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 15:49:25 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: On Tue, 29 Aug 2006, Tim Hochberg apparently wrote: > -0.5 from me if what we're talking about here is having > mutating methods return self rather than None. Chaining > stuff is pretty, but having methods that mutate self and > return self looks like a source of elusive bugs to me. I believe this reasoning was the basis of sort (method, returns None) and sorted (function, returns new object) in Python. I believe that was a long and divisive discussion ... Cheers, Alan Isaac From torgil.svensson at gmail.com Tue Aug 29 15:44:11 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 21:44:11 +0200 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: something like this? def list2index(L): uL=sorted(set(L)) idx=dict((y,x) for x,y in enumerate(uL)) return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Tim Hochberg wrote: > > Keith Goodman wrote: > > > I have a very long list that contains many repeated elements. The > > > elements of the list can be either all numbers, or all strings, or all > > > dates [datetime.date]. > > > > > > I want to convert the list into a matrix where each unique element of > > > the list is assigned a consecutive integer starting from zero. > > > > > If what you want is that the first unique element get's zero, the second > > one, I don't think the code below will work in general since the dict > > does not preserve order. 
You might want to look at the results for the > > character case to see what I mean. If you're looking for something else, > > you'll need to elaborate a bit. Since list2index doesn't return > > anything, it's not entirely clear what the answer consists of. Just idx? > > Idx plus uL? > > The output I wanted (in my mind, but unfortunately not in my previous > email) is idx and uL where uL[0] corresponds to the zeros in idx, > uL[1] corresponds to the ones in idx. etc. > > I'd also like the uL's to be ordered (now I see that characters and > dates aren't ordered, ooops, thanks for telling me about that). Or > optionally ordered by a second list input which if present would be > used instead of the unique values of L. > > Thank you all for the huge improvements to my code. I'll learn a lot > studying all of them. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at ieee.org Tue Aug 29 16:00:50 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 13:00:50 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: <44F49CF2.5020505@ieee.org> Charles R Harris wrote: > Hi, > > On 8/29/06, *Tim Hochberg* > wrote: > > > -0.5 from me if what we're talking about here is having mutating > methods > return self rather than None. 
Chaining stuff is pretty, but having > methods that mutate self and return self looks like a source of > elusive > bugs to me. > > -tim > > > But how is that any worse than the current mutating operators? I think > the operating principal is that methods generally work in place, > functions make copies. The exceptions to this rule need to be noted. Is that really the case? I was more under the impression that there wasn't much rhyme nor reason to this. Let's do a quick dir(somearray) and see what we get (I'll strip out the __XXX__ names): 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view' Hmmm. Without taking too much time to go through these one at a time, I'm pretty certain that they do not in general mutate things in place. Probably at least half return, or can return new arrays, sometimes with references to the original data, but new shapes, sometimes with completely new data. In fact, other than sort, I'm not sure which of these does mutate in place. -tim From kwgoodman at gmail.com Tue Aug 29 16:02:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 13:02:10 -0700 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: On 8/29/06, Torgil Svensson wrote: > something like this? 
> > def list2index(L): > uL=sorted(set(L)) > idx=dict((y,x) for x,y in enumerate(uL)) > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) Wow. That's amazing. Thank you. From charlesr.harris at gmail.com Tue Aug 29 16:17:29 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 14:17:29 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F49CF2.5020505@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > > Charles R Harris wrote: > > Hi, > > > > On 8/29/06, *Tim Hochberg* > > wrote: > > > > > > -0.5 from me if what we're talking about here is having mutating > > methods > > return self rather than None. Chaining stuff is pretty, but having > > methods that mutate self and return self looks like a source of > > elusive > > bugs to me. > > > > -tim > > > > > > But how is that any worse than the current mutating operators? I think > > the operating principal is that methods generally work in place, > > functions make copies. The exceptions to this rule need to be noted. > Is that really the case? I was more under the impression that there > wasn't much rhyme nor reason to this. 
Let's do a quick dir(somearray) > and see what we get (I'll strip out the __XXX__ names): > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', 'item', > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', 'repeat', > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', 'swapaxes', > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', > 'view' There are certainly many methods where inplace operations make no sense. But for such things as conjugate and clip I think it should be preferred. Think of them as analogs of the "+=" operators that allow memory efficient inplace operations. At the moment there are too few such operators, IMHO, and that makes it hard to write memory efficient code when you want to do so. If you need a copy, the functional form should be the preferred way to go and can easily be implement by constructions like a.copy().sort(). Hmmm. Without taking too much time to go through these one at a time, > I'm pretty certain that they do not in general mutate things in place. > Probably at least half return, or can return new arrays, sometimes with > references to the original data, but new shapes, sometimes with > completely new data. In fact, other than sort, I'm not sure which of > these does mutate in place. > > -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Tue Aug 29 16:36:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 14:36:09 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <44F4A539.3090702@ee.byu.edu> Charles R Harris wrote: > > The 1.0rc1 release of NumPy will be mid September I suspect. > > Also, I recognize that the default-axis switch is a burden for > those who > have already transitioned code to use NumPy (for those just > starting out > it's not a big deal because of the compatibility layer). > > > I am curious as to why you made this switch. Not complaining, mind. New-comers to NumPy asked why there were different conventions on the methods and the functions for the axis argument. The only reason was backward compatibility. Because we had already created a compatibility layer for code transitioning, that seemed like a weak reason to keep the current behavior. The problem is it left early NumPy adopters (including me :-) ) in a bit of a bind, when it comes to code (like SciPy) that had already been converted. Arguments like Fernando's: "it's better to have a bit of pain now, then regrets later" also were convincing. -Travis From oliphant at ee.byu.edu Tue Aug 29 16:43:14 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 14:43:14 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F494CE.1080008@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> Message-ID: <44F4A6E2.1070002@ee.byu.edu> Tim Hochberg wrote: >-0.5 from me if what we're talking about here is having mutating methods >return self rather than None. Chaining stuff is pretty, but having >methods that mutate self and return self looks like a source of elusive >bugs to me. 
> > I'm generally +0 on this idea (it seems like the clarity in writing comes largely for interactive users), and don't see much difficulty in separating the constructs. On the other hand, I don't see much problem in returning a reference to self either. I guess you are worried about the situation where you write b = a.sort() and think you have a new array, but in fact have a new reference to the already-altered 'a'? Hmm.. So, how is this different from the fact that b = a[1:10:3] already returns a reference to 'a' (I suppose in the fact that it actually returns a new object just one that happens to share the same data with a). However, I suppose that other methods don't return a reference to an already-altered object, do they. Tim's argument has moved me from +0 to -0 -Travis From Chris.Barker at noaa.gov Tue Aug 29 16:49:20 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 29 Aug 2006 13:49:20 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <44F4A850.3030903@noaa.gov> Fernando Perez wrote: > more 'return self' at the end of methods which currently don't > return anything (well, we get the default None), as long as it's > sensible. +1 Though I'm a bit hesitant: if it's really consistent that methods that alter the object in place NEVER return themselves, then there is something to be said for that. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at ieee.org Tue Aug 29 17:03:39 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 14:03:39 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> Message-ID: <44F4ABAB.3090508@ieee.org> Charles R Harris wrote: > > > On 8/29/06, *Tim Hochberg* > wrote: > > Charles R Harris wrote: > > Hi, > > > > On 8/29/06, *Tim Hochberg* > > >> > wrote: > > > > > > -0.5 from me if what we're talking about here is having mutating > > methods > > return self rather than None. Chaining stuff is pretty, but > having > > methods that mutate self and return self looks like a source of > > elusive > > bugs to me. > > > > -tim > > > > > > But how is that any worse than the current mutating operators? I > think > > the operating principal is that methods generally work in place, > > functions make copies. The exceptions to this rule need to be noted. > Is that really the case? I was more under the impression that there > wasn't much rhyme nor reason to this. 
Let's do a quick dir(somearray) > and see what we get (I'll strip out the __XXX__ names): > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', > 'item', > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', > 'repeat', > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', > 'swapaxes', > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', > 'var', 'view' > > > There are certainly many methods where inplace operations make no > sense. But for such things as conjugate and clip I think it should be > preferred. Think of them as analogs of the "+=" operators that allow > memory efficient inplace operations. At the moment there are too few > such operators, IMHO, and that makes it hard to write memory efficient > code when you want to do so. If you need a copy, the functional form > should be the preferred way to go and can easily be implement by > constructions like a.copy().sort(). So let's make this clear; what you are proposing is more than just returning self for more operations. You are proposing changing the meaning of the existing methods to operate in place rather than return new objects. It seems awfully late in the day to be considering this being that we're on the edge of 1.0 and this could break any existing numpy code that is out there. Just for grins let's look at the operations that could potentially benefit from being done in place. 
I think they are: byteswap clip conjugate round sort Of these, clip, conjugate and round support an 'out' argument like that supported by ufunces; byteswap has a boolean argument telling it whether to perform operations in place; and sort always operates in place. Noting that the ufunc-like methods (max, argmax, etc) appear to support the 'out' argument as well although it's not documented for most of them, it looks to me as if the two odd methods are byteswap and sort. The method situation could be made more consistent by swapping the boolean inplace flag in byteswapped with another 'out' argument and also having sort not operate in place by default, but also supply an out argument there. Thus: b = a.sort() # Returns a copy a.sort(out=a) # Sorts a in place a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() in this case since we don't want to rewrite the sort routines) On the whole I think that this would be an improvement, but it may be too late in the day to actually implement it since 1.0 is coming up. There would still be a few methods (fill, put, etc) that modify the array in place and return None, but I haven't heard any complaints about those. -tim > > Hmmm. Without taking too much time to go through these one at a time, > I'm pretty certain that they do not in general mutate things in place. > Probably at least half return, or can return new arrays, sometimes > with > references to the original data, but new shapes, sometimes with > completely new data. In fact, other than sort, I'm not sure which of > these does mutate in place. > > -tim > > > Chuck > > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From kortmann at ideaworks.com Tue Aug 29 17:16:12 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 14:16:12 -0700 (PDT) Subject: [Numpy-discussion] Release of 1.0b5 this weekend Message-ID: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> >I find it much cleaner to write >x = foo.bar().baz(param).frob() >than >foo.bar() >foo.baz(param) >x = foo.frob() >but perhaps others disagree. Both of these look "clean" but i do not think that moving 3 lines to one line makes code "cleaner" They both do the same thing and if someone that does not know what .bar() .baz(param) and .frob() are IMO the second version that takes place on three lines would be easier to understand. >I'm generally +0 on this idea (it seems like the clarity in writing >comes largely for interactive users), and don't see much difficulty in >separating the constructs. On the other hand, I don't see much problem >in returning a reference to self either. >I guess you are worried about the situation where you write >b = a.sort() >and think you have a new array, but in fact have a new reference to the >already-altered 'a'? >Hmm.. So, how is this different from the fact that >b = a[1:10:3] already returns a reference to 'a' >(I suppose in the fact that it actually returns a new object just one >that happens to share the same data with a). >However, I suppose that other methods don't return a reference to an >already-altered object, do they. 
>Tim's argument has moved me from +0 to -0 >-Travis I couldn't agree more with you and Tim on this. I would rather have code that works all the time and will not possibly confuse people later, like the example of >b = a.sort() >and think you have a new array, but in fact have a new reference to the >already-altered 'a'? a lot of people have problems grasping this "memory management" type of programming...or at least in my C class half of the kids dropped out because they couldn't keep track of b = a.sort() meaning that b was actually just referencing a and if you changed b then a was changed also. But then again who on this list has problems remembering things like that anyways right?... ~Kenny From cookedm at physics.mcmaster.ca Tue Aug 29 17:19:50 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 Aug 2006 17:19:50 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> On Tue, 29 Aug 2006 14:03:39 -0700 Tim Hochberg wrote: > Of these, clip, conjugate and round support an 'out' argument like that > supported by ufunces; byteswap has a boolean argument telling it > whether to perform operations in place; and sort always operates in > place. Noting that the ufunc-like methods (max, argmax, etc) appear to > support the 'out' argument as well although it's not documented for most > of them, it looks to me as if the two odd methods are byteswap and sort. > The method situation could be made more consistent by swapping the > boolean inplace flag in byteswapped with another 'out' argument and also > having sort not operate in place by default, but also supply an out > argument there. 
Thus: > > b = a.sort() # Returns a copy > a.sort(out=a) # Sorts a in place > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > in this case since we don't want to rewrite the sort routines) Ugh. That's completely different semantics from sort() on lists, so I think it would be a source of bugs (at least, it would mean keeping two different ideas of .sort() in my head). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From charlesr.harris at gmail.com Tue Aug 29 17:20:24 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 15:20:24 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: Hi Tim, On 8/29/06, Tim Hochberg wrote: > > Charles R Harris wrote: > > > > > > On 8/29/06, *Tim Hochberg* > > wrote: > > > > Charles R Harris wrote: > > > Hi, > > > > > > On 8/29/06, *Tim Hochberg* > > > > >> > > wrote: > > > > > > > > > -0.5 from me if what we're talking about here is having > mutating > > > methods > > > return self rather than None. Chaining stuff is pretty, but > > having > > > methods that mutate self and return self looks like a source > of > > > elusive > > > bugs to me. > > > > > > -tim > > > > > > > > > But how is that any worse than the current mutating operators? I > > think > > > the operating principal is that methods generally work in place, > > > functions make copies. The exceptions to this rule need to be > noted. > > Is that really the case? I was more under the impression that there > > wasn't much rhyme nor reason to this. 
Let's do a quick > dir(somearray) > > and see what we get (I'll strip out the __XXX__ names): > > > > 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'base', > > 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', > 'copy', > > 'ctypes', 'cumprod', 'cumsum', 'data', 'diagonal', 'dtype', 'dump', > > 'dumps', 'fill', 'flags', 'flat', 'flatten', 'getfield', 'imag', > > 'item', > > 'itemsize', 'max', 'mean', 'min', 'nbytes', 'ndim', 'newbyteorder', > > 'nonzero', 'prod', 'ptp', 'put', 'putmask', 'ravel', 'real', > > 'repeat', > > 'reshape', 'resize', 'round', 'searchsorted', 'setfield', > 'setflags', > > 'shape', 'size', 'sort', 'squeeze', 'std', 'strides', 'sum', > > 'swapaxes', > > 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', > > 'var', 'view' > > > > > > There are certainly many methods where inplace operations make no > > sense. But for such things as conjugate and clip I think it should be > > preferred. Think of them as analogs of the "+=" operators that allow > > memory efficient inplace operations. At the moment there are too few > > such operators, IMHO, and that makes it hard to write memory efficient > > code when you want to do so. If you need a copy, the functional form > > should be the preferred way to go and can easily be implement by > > constructions like a.copy().sort(). > So let's make this clear; what you are proposing is more that just > returning self for more operations. You are proposing changing the > meaning of the existing methods to operate in place rather than return > new objects. It seems awfully late in the day to be considering this > being that we're on the edge of 1.0 and this would could break any > existing numpy code that is out there. > > Just for grins let's look at the operations that could potentially > benefit from being done in place. 
I think they are: > byteswap > clip > conjugate > round > sort > > Of these, clip, conjugate and round support an 'out' argument like that > supported by ufunces; byteswap has a boolean argument telling it > whether to perform operations in place; and sort always operates in > place. Noting that the ufunc-like methods (max, argmax, etc) appear to > support the 'out' argument as well although it's not documented for most > of them, it looks to me as if the two odd methods are byteswap and sort. > The method situation could be made more consistent by swapping the > boolean inplace flag in byteswapped with another 'out' argument and also > having sort not operate in place by default, but also supply an out > argument there. Thus: > > b = a.sort() # Returns a copy > a.sort(out=a) # Sorts a in place > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > in this case since we don't want to rewrite the sort routines) > > On the whole I think that this would be an improvement, but it may be > too late in the day to actually implement it since 1.0 is coming up. > There would still be a few methods (fill, put, etc) that modify the > array in place and return None, but I haven't heard any complaints about > those. That sounds like a good idea. One could keep the present behaviour in most cases by supplying a default value, although the out keyword might need a None value to indicate "copy" and a 'Self' value that means in place, or something like that, and then have all reasonable methods return values. That way the change would be transparent. The changes to the sort method would all be upper level, the low level sorting routines would remain unchanged. Methods are new, so code that needs to be changed is code specifically written for Numpy and now is the time to make these sort of decisions. -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cookedm at physics.mcmaster.ca Tue Aug 29 17:21:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 Aug 2006 17:21:40 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <20060829172140.29db40dd@arbutus.physics.mcmaster.ca> On Tue, 29 Aug 2006 13:25:14 -0600 "Charles R Harris" wrote: > Hi Fernando, > > On 8/29/06, Fernando Perez wrote: > > > > On 8/29/06, Charles R Harris wrote: > > > > > Speaking of features, I wonder if more of the methods should return > > > references. For instance, it might be nice to write something like: > > > > > > a.sort().searchsorted([...]) > > > > > > instead of making two statements out of it. > > > > +1 for more 'return self' at the end of methods which currently don't > > return anything (well, we get the default None), as long as it's > > sensible. I really like this 'message chaining' style of programming, > > and it annoys me that much of the python stdlib gratuitously prevents > > it by NOT returning self in places where it would be a perfectly > > sensible thing to do. -1, for the same reasons l.sort() doesn't (for a list l). For lists, the reason .sort() returns None is because it makes it clear it's a mutation. Returning self would make it look like it was doing a copy. > My pet peeve example: a.reverse() > > I would also like to see simple methods for "+=" operator and such. Then one > could write > > x = a.copy().add(10) There are: x = a.copy().__add__(10) or, for +=: x.__iadd__(10) > One could make a whole reverse polish translator out of such operations and > a few parenthesis. I have in mind some sort of code optimizer. It wouldn't be anymore efficient than the other way. For a code optimizer, you'll either have to parse the python code or use special objects (much like numexpr does), and then you might as well use the operators. 
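[David's `__add__`/`__iadd__` point can be checked with builtin lists, whose `__iadd__` is already a mutating method that returns the mutated object — that return value is exactly what `x += y` rebinds `x` to:]

```python
a = [1, 2]

# __add__ returns a new object and leaves a untouched:
b = a.__add__([3])
assert b == [1, 2, 3] and a == [1, 2]

# __iadd__ mutates in place and returns the same object; the
# interpreter rebinds the name with this result on "a += [3]":
c = a.__iadd__([3])
assert c is a and a == [1, 2, 3]
```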
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Tue Aug 29 17:25:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 15:25:08 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> Message-ID: On 8/29/06, David M. Cooke wrote: > On Tue, 29 Aug 2006 14:03:39 -0700 > Tim Hochberg wrote: > > b = a.sort() # Returns a copy > > a.sort(out=a) # Sorts a in place > > a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() > > in this case since we don't want to rewrite the sort routines) > > Ugh. That's completely different semantics from sort() on lists, so I think > it would be a source of bugs (at least, it would mean keeping two different > ideas of .sort() in my head). Agreed. Except where very well justified (such as slicing returning views for memory reasons), let's keep numpy arrays similar to native lists in their behavior... Special cases aren't special enough to break the rules. 
and all that :) Cheers, f From charlesr.harris at gmail.com Tue Aug 29 17:32:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 15:32:25 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> References: <2369.12.216.231.149.1156886172.squirrel@webmail.ideaworks.com> Message-ID: Hi, On 8/29/06, kortmann at ideaworks.com wrote: > > >I find it much cleaner to write > > >x = foo.bar().baz(param).frob() > > >than > > >foo.bar() > >foo.baz(param) > >x = foo.frob() > > >but perhaps others disagree. > > Both of these look "clean" but i do not think that moving 3 lines to one > line makes code "cleaner" They both do the same thing and if someone that > does not know what .bar() .baz(param) and .frob() are IMO the second > version that takes place on three lines would be easier to understand. > > > > >I'm generally +0 on this idea (it seems like the clarity in writing > >comes largely for interactive users), and don't see much difficulty in > >separating the constructs. On the other hand, I don't see much problem > >in returning a reference to self either. > > >I guess you are worried about the situation where you write > > >b = a.sort() > > >and think you have a new array, but in fact have a new reference to the > >already-altered 'a'? > > >Hmm.. So, how is this different from the fact that > > >b = a[1:10:3] already returns a reference to 'a' > > >(I suppose in the fact that it actually returns a new object just one > >that happens to share the same data with a). > > >However, I suppose that other methods don't return a reference to an > >already-altered object, do they. > > >Tim's argument has moved me from +0 to -0 > > >-Travis > > > I couldn't agree more with you and Tim on this. 
I would rather have code > that works all the time and will not possibly confuse people later, like > the example of > > >b = a.sort() > >and think you have a new array, but in fact have a new reference to the > >already-altered 'a'? > > alot of people have problems grasping this "memory management" type of > programming...or at least in my C class half of the kids dropped out > because the couldnt keep track of > > b = a.sort() meaning that b was actually just referencing a and if you > changed b then a was changed also. Maybe they should start with assembly (or mix ;) instead of C? In any case, references are pointer wrappers and pointers seem to be the biggest bugaboo in C. Maybe everyone should start with Fortran where most everything was a reference. I say "was" because the last Fortran I used was F77 and I have no idea what the current situation is. I suppose the in/out specs make a difference. But then again who on this list has problems remembering things like that > anyways right?... > ~Kenny Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Aug 29 17:46:57 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 29 Aug 2006 17:46:57 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4ABAB.3090508@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> Message-ID: On Tue, 29 Aug 2006, Tim Hochberg apparently wrote: > b = a.sort() # Returns a copy Given the extant Python vocabulary, this seems like a bad idea to me. (Better to call it 'sorted' in this case.) 
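For reference, the behaviour being defended here mirrors Python's `list.sort()` / `sorted()` pair: the `ndarray.sort()` method operates in place and returns `None`, while the free function `numpy.sort()` is the "sorted" counterpart that returns a new array (a minimal sketch using current NumPy):

```python
import numpy as np

a = np.array([3, 1, 2])
result = a.sort()                   # in place, like list.sort()
assert result is None
assert np.array_equal(a, [1, 2, 3])

b = np.array([3, 1, 2])
c = np.sort(b)                      # the "sorted" counterpart: returns a copy
assert np.array_equal(c, [1, 2, 3])
assert np.array_equal(b, [3, 1, 2])  # b is unchanged
```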
fwiw, Alan Isaac From torgil.svensson at gmail.com Tue Aug 29 17:43:48 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Tue, 29 Aug 2006 23:43:48 +0200 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips Message-ID: > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) Is it possible for fromiter to take an optional shape (or count) argument in addition to the dtype argument? If both is given it could preallocate memory and we only have to iterate over L once. //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Torgil Svensson wrote: > > something like this? > > > > def list2index(L): > > uL=sorted(set(L)) > > idx=dict((y,x) for x,y in enumerate(uL)) > > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > > Wow. That's amazing. Thank you. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at ieee.org Tue Aug 29 17:49:26 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 29 Aug 2006 14:49:26 -0700 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> Message-ID: <44F4B666.3070901@ieee.org> David M. 
Cooke wrote: > On Tue, 29 Aug 2006 14:03:39 -0700 > Tim Hochberg wrote: > > >> Of these, clip, conjugate and round support an 'out' argument like that >> supported by ufunces; byteswap has a boolean argument telling it >> whether to perform operations in place; and sort always operates in >> place. Noting that the ufunc-like methods (max, argmax, etc) appear to >> support the 'out' argument as well although it's not documented for most >> of them, it looks to me as if the two odd methods are byteswap and sort. >> The method situation could be made more consistent by swapping the >> boolean inplace flag in byteswapped with another 'out' argument and also >> having sort not operate in place by default, but also supply an out >> argument there. Thus: >> >> b = a.sort() # Returns a copy >> a.sort(out=a) # Sorts a in place >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort() >> in this case since we don't want to rewrite the sort routines) >> > > Ugh. That's completely different semantics from sort() on lists, so I think > it would be a source of bugs (at least, it would mean keeping two different > ideas of .sort() in my head). > Thinking about it a bit more, I'd leave sort alone (returning None and all).. I was (over)reacting to changing to sort to return self, which makes the set of methods both less consistent within itself, less consistent with python and more error prone IMO, which seems the worst possibility. For the moment at least I do stand by the suggestion of changing byteswap to match the rest of the methods, as that would remove one outlier in the set methods. 
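On Torgil's earlier question about giving `fromiter` a shape or count argument: `fromiter` does accept an optional `count`, which lets it preallocate the output instead of growing the buffer, so the `list2index` generator only has to be consumed once. A sketch (sample data is my own):

```python
import numpy as np

L = ['b', 'a', 'c', 'a', 'b']
uL = sorted(set(L))
idx = dict((y, x) for x, y in enumerate(uL))  # label -> index into uL

# count=len(L) tells fromiter how much memory to preallocate up front.
out = np.fromiter((idx[x] for x in L), dtype=int, count=len(L))
assert np.array_equal(out, [1, 0, 2, 0, 1])
```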
-tim From tcorcelle at yahoo.fr Tue Aug 29 17:57:59 2006 From: tcorcelle at yahoo.fr (tristan CORCELLE) Date: Tue, 29 Aug 2006 21:57:59 +0000 (GMT) Subject: [Numpy-discussion] Py2exe / numpy troubles Message-ID: <20060829215759.67527.qmail@web26509.mail.ukl.yahoo.com> > >1) First Problem: numpy\core\_internal.pyc not included in Library.zip > >C:\Lameness\dist>templatewindow.exe > Traceback (most recent call last): > File "templatewindow.py", line 7, in ? > File "wxmpl.pyc", line 25, in ? > File "matplotlib\numerix\__init__.pyc", line 60, in ? > File "Numeric.pyc", line 91, in ? > File "numpy\__init__.pyc", line 35, in ? > File "numpy\core\__init__.pyc", line 6, in ? > File "numpy\core\umath.pyc", line 12, in ? > File "numpy\core\umath.pyc", line 10, in __load > AttributeError: 'module' object has no attribute '_ARRAY_API' > > > > >I resolved that issue by adding the file > >...\Python24\Lib\site-packages\numpy\core\_internal.pyc in > >...\test\dist\library.zip\numpy\core. > >Each time I compile that executable, I add the file by hand. > >Does anybody know how to automatically add that file? > > although mine was in \python23 respectively :) > > thanks for this fix > now i have this problem > > C:\Lameness\dist>templatewindow.exe > Traceback (most recent call last): > File "c:\python23\lib\site-packages\py2exe\boot_common.py", line 92, in ? > import linecache > ImportError: No module named linecache > Traceback (most recent call last): > File "templatewindow.py", line 1, in ? 
> ImportError: No module named wx > > C:\Lameness\dist> > > > current setup.py = > > ######################################################## > from distutils.filelist import findall > import os > import matplotlib > matplotlibdatadir = matplotlib.get_data_path() > matplotlibdata = findall(matplotlibdatadir) > matplotlibdata_files = [] > for f in matplotlibdata: > dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) > matplotlibdata_files.append((os.path.split(dirname)[0], [f])) > > > packages = ['matplotlib', 'pytz'] > includes = [] > excludes = [] > dll_excludes = ['libgdk_pixbuf-2.0-0.dll', > 'libgobject-2.0-0.dll', > 'libgdk-win32-2.0-0.dll', > 'wxmsw26uh_vc.dll'] > > > opts = { 'py2exe': { 'packages' : packages, > 'includes' : includes, > 'excludes' : excludes, > 'dll_excludes' : dll_excludes > } > } > > setup ( console=['templatewindow.py'], > options = opts, > data_files = matplotlibdata_files > ) > ########################################################## > > anyone seen this problem before? > > first line of template window = import wx > My Configuration : Windows XP pro, ActivePython 2.4.2.10, Scipy 0.4.9, Numpy 0.9.8, MatplotLib 0.87.1, Py2exe 0.6.5, WxPython 2.6 ---- 1) Be very careful on how you generate the file "...\dist\library.zip".I don't know why, but the zip file generated by hand doesn't work. Check its size! Specific zip format? Specific options to generate it? I didn't check source files to know how library.zip is generated.My method is the following one: - Extract the ...\test\dist\library.zip file in ...\test\dist\library - Add the file ...\Python24\Lib\site-packages\numpy\core\_internal.pyc in ...\test\dist\library\numpy\core. - Use Winzip to Add the ...\test\dist\library\numpy directory to the ...\dist\library.zip fileI know, it is not really beautiful but it seems to work. It is a temporary solution for debug. 
I am new in Python so my style is not really "academic" ---- 2) If you use my setup.py file, one more time, be careful cause of the wx specific dll: wxmsw26uh_vc.dllI don't know why, but Py2Exe doesn't find it. I remove that dll from the compilation phase and I copy it by hand in ...\test\dist directory.An idea may be the modification of setup.py file to indicate the path of that dll or something like that.DOES ANYONE HAVE THE SOLUTION? ---- 3)I am still blocked on my second issue > >2) Second problem: I don't know how to resolve that issue:> > > >Traceback (most recent call last):> > File "profiler_ftt.py", line 15, in ?> > from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\> > File "matplotlib\backends\backend_wx.pyc", line 152, in ?> > File "matplotlib\backend_bases.pyc", line 10, in ?> > File "matplotlib\colors.pyc", line 33, in ?> > File "matplotlib\numerix\__init__.pyc", line 67, in ?> > File "numpy\__init__.pyc", line 35, in ?> > File "numpy\_import_tools.pyc", line 173, in __call__> > File "numpy\_import_tools.pyc", line 68, in _init_info_modules> > File "", line 1, in ?> > File "numpy\random\__init__.pyc", line 3, in ?> > File "numpy\random\mtrand.pyc", line 12, in ?> > File "numpy\random\mtrand.pyc", line 10, in __load> > File "numpy.pxi", line 32, in mtrand> >AttributeError: 'module' object has no attribute 'dtype'> > > >I don't find the file numpy.pxi in my file tree nor in \test\dist\library.zip.> >I browsed the web in the hope to find a solution but nothing.> >It seems that this issue is well known but no solution provided in mailing lists.> > > >What is that file "numpix.pxi"? Where to find it or how is it generated?> >How to resolve that execution issue? Regards,Tristan -------------- next part -------------- An HTML attachment was scrubbed... 
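One hedged alternative to patching `library.zip` by hand: py2exe's `packages` option forces whole-package inclusion, which should pull in lazily-imported submodules such as `numpy.core._internal`. A sketch of the relevant part of `setup.py` (untested against this exact configuration; whether it also cures the `numpy.pxi` / `mtrand` error is not confirmed):

```python
# setup.py sketch: bundle numpy (and matplotlib/pytz) wholesale so that
# submodules imported at runtime are not missed by py2exe's dependency scan.
from distutils.core import setup
import py2exe  # noqa -- imported for its side effect of registering the command

opts = {
    'py2exe': {
        # listing 'numpy' here copies the entire package into library.zip
        'packages': ['numpy', 'matplotlib', 'pytz'],
        'dll_excludes': ['libgdk_pixbuf-2.0-0.dll',
                         'libgobject-2.0-0.dll',
                         'libgdk-win32-2.0-0.dll'],
    }
}

setup(console=['templatewindow.py'], options=opts)
```

Note that `wxmsw26uh_vc.dll` is deliberately left out of `dll_excludes` above, since excluding it appears to be what broke the `import wx` step.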
URL: From kortmann at ideaworks.com Tue Aug 29 18:16:41 2006 From: kortmann at ideaworks.com (kortmann at ideaworks.com) Date: Tue, 29 Aug 2006 15:16:41 -0700 (PDT) Subject: [Numpy-discussion] Py2exe / numpy troubles Message-ID: <2588.12.216.231.149.1156889801.squirrel@webmail.ideaworks.com> My Configuration : Windows XP pro, ActivePython 2.4.2.10, Scipy 0.4.9, Numpy 0.9.8, MatplotLib 0.87.1, Py2exe 0.6.5, WxPython 2.6 ---- 1) Be very careful on how you generate the file "...\dist\library.zip".I don't know why, but the zip file generated by hand doesn't work. Check its size! Specific zip format? Specific options to generate it? I didn't check source files to know how library.zip is generated.My method is the following one: - Extract the ...\test\dist\library.zip file in ...\test\dist\library - Add the file ...\Python24\Lib\site-packages\numpy\core\_internal.pyc in ...\test\dist\library\numpy\core. - Use Winzip to Add the ...\test\dist\library\numpy directory to the ...\dist\library.zip file I know, it is not really beautiful but it seems to work. It is a temporary solution for debug. I am new in Python so my style is not really "academic" ---- 2) If you use my setup.py file, one more time, be careful cause of the wx specific dll: wxmsw26uh_vc.dllI don't know why, but Py2Exe doesn't find it. I remove that dll from the compilation phase and I copy it by hand in ...\test\dist directory.An idea may be the modification of setup.py file to indicate the path of that dll or something like that.DOES ANYONE HAVE THE SOLUTION? ---- 3)I am still blocked on my second issue > >2) Second problem: I don't know how to resolve that issue: > > > >Traceback (most recent call last):> > File "profiler_ftt.py", line 15, in ? > > from matplotlib.backends.backend_wx import Toolbar, FigureCanvasWx,\ > > File "matplotlib\backends\backend_wx.pyc", line 152, in ? > > File "matplotlib\backend_bases.pyc", line 10, in ? > > File "matplotlib\colors.pyc", line 33, in ? 
> > File "matplotlib\numerix\__init__.pyc", line 67, in ? > > File "numpy\__init__.pyc", line 35, in ? > > File "numpy\_import_tools.pyc", line 173, in __call__ > > File "numpy\_import_tools.pyc", line 68, in _init_info_modules > > File "", line 1, in ? > > File "numpy\random\__init__.pyc", line 3, in ? > > File "numpy\random\mtrand.pyc", line 12, in ? > > File "numpy\random\mtrand.pyc", line 10, in __load > > File "numpy.pxi", line 32, in mtrand > > AttributeError: 'module' object has no attribute 'dtype' > > > >I don't find the file numpy.pxi in my file tree nor in \test\dist\library.zip. > >I browsed the web in the hope to find a solution but nothing. > >It seems that this issue is well known but no solution provided in mailing lists. > > > >What is that file "numpix.pxi"? Where to find it or how is it generated? > >How to resolve that execution issue? Regards,Tristan could you post your setup file please? i can look at it i may not be much help but some is better than none From pfdubois at gmail.com Tue Aug 29 18:20:44 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Tue, 29 Aug 2006 15:20:44 -0700 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <200608290125.25232.pgmdevlist@gmail.com> References: <200608290125.25232.pgmdevlist@gmail.com> Message-ID: Whatever the current state of the implementation, the original intention was that ma be, where it makes sense, a "drop-in" replacement for numpy arrays. Being retired I don't read this list all that carefully but I did see some subjects concerning axis defaults (about the 98th time we have had that discussion I suppose) and perhaps ma and numpy got out of sync, even if they were in sync to begin with. For sum, x.sum() should be the sum of the entire array, no? And that implies a default of None, doesn't it? So a default of zero or one would be wrong. Oh well, back to my nap. 
On 28 Aug 2006 22:26:54 -0700, PGM wrote: > > Folks, > I keep running into the following problem since some recent update (I'm > currently running 1.0b3, but the problem occurred roughly around 0.9.8): > > >>> import numpy.core.ma as MA > >>> x=MA.array([[1],[2]],mask=False) > >>> x.sum(None) > /usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, > target, > axis, dtype) > 393 m.shape = (1,) > 394 if m is nomask: > --> 395 return masked_array (self.f.reduce (t, axis)) > 396 else: > 397 t = masked_array (t, m) > > TypeError: an integer is required > #................................ > > Note that x.sum(0) and x.sum(1) work fine. I know some consensus seems to > be > lacking with MA, but still, I can't see why axis=None is not recognized. > > Corollary: with masked array, the default axis for sum is 0, when it's > None > for regular arrays. Is there a reason for this inconsistency ? > > Thanks a lot > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Tue Aug 29 18:36:55 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 16:36:55 -0600 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <200608290125.25232.pgmdevlist@gmail.com> References: <200608290125.25232.pgmdevlist@gmail.com> Message-ID: <44F4C187.10102@ee.byu.edu> PGM wrote: >Folks, >I keep running into the following problem since some recent update (I'm >currently running 1.0b3, but the problem occurred roughly around 0.9.8): > > > >>>>import numpy.core.ma as MA >>>>x=MA.array([[1],[2]],mask=False) >>>>x.sum(None) >>>> >>>> >/usr/lib64/python2.4/site-packages/numpy/core/ma.py in reduce(self, target, >axis, dtype) > 393 m.shape = (1,) > 394 if m is nomask: >--> 395 return masked_array (self.f.reduce (t, axis)) > 396 else: > 397 t = masked_array (t, m) > >TypeError: an integer is required >#................................ > > This bug has hopefully been fixed (in SVN). Please let us know if it still persists. -Travis From charlesr.harris at gmail.com Tue Aug 29 18:42:23 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 16:42:23 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F4B666.3070901@ieee.org> References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> <44F4B666.3070901@ieee.org> Message-ID: On 8/29/06, Tim Hochberg wrote: > > David M. Cooke wrote: > > On Tue, 29 Aug 2006 14:03:39 -0700 > > Tim Hochberg wrote: > > > > > >> Of these, clip, conjugate and round support an 'out' argument like > that > >> supported by ufunces; byteswap has a boolean argument telling it > >> whether to perform operations in place; and sort always operates in > >> place. 
Noting that the ufunc-like methods (max, argmax, etc) appear to > >> support the 'out' argument as well although it's not documented for > most > >> of them, it looks to me as if the two odd methods are byteswap and > sort. > >> The method situation could be made more consistent by swapping the > >> boolean inplace flag in byteswapped with another 'out' argument and > also > >> having sort not operate in place by default, but also supply an out > >> argument there. Thus: > >> > >> b = a.sort() # Returns a copy > >> a.sort(out=a) # Sorts a in place > >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = a.sort > () > >> in this case since we don't want to rewrite the sort routines) > >> > > > > Ugh. That's completely different semantics from sort() on lists, so I > think > > it would be a source of bugs (at least, it would mean keeping two > different > > ideas of .sort() in my head). > > > Thinking about it a bit more, I'd leave sort alone (returning None and > all).. I was (over)reacting to changing to sort to return self, which > makes the set of methods both less consistent within itself, less > consistent with python and more error prone IMO, which seems the worst > possibility. Here is Guido on sort: I'd like to explain once more why I'm so adamant that *sort*() shouldn't *return* 'self'. This comes from a coding style (popular in various other languages, I believe especially Lisp revels in it) where a series of side effects on a single object can be chained like this: x.compress().chop(y).*sort*(z) which would be the same as x.compress() x.chop(y) x.*sort*(z) I find the chaining form a threat to readability; it requires that the reader must be intimately familiar with each of the methods. 
The second form makes it clear that each of these calls acts on the same object, and so even if you don't know the class and its methods very well, you can understand that the second and third call are applied to x (and that all calls are made for their side-effects), and not to something else. I'd like to reserve chaining for operations that *return* new values, like string processing operations: y = x.rstrip("\n").split(":").lower() There are a few standard library modules that encourage chaining of side-effect calls (pstat comes to mind). There shouldn't be any new ones; pstat slipped through my filter when it was weak. So it seems you are correct in light of the Python philosophy. For those operators that allow specification of out I would still like to see a special value that means inplace, I think it would make the code clearer. Of course, merely having the out flag violates Guido's intent. The idea seems to be that we want some way to avoid allocating new memory. So maybe byteswap should be inplace and return None, while a copyto method could be added. Then one would do a.copyto(b) b.byteswap() instead of b = a.byteswap() -tim Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Tue Aug 29 18:46:45 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 29 Aug 2006 15:46:45 -0700 Subject: [Numpy-discussion] stumped numpy user seeks help Message-ID: <44F4C3D5.80600@jpl.nasa.gov> My head is about to explode. I have an M by N array of floats. Associated with the columns are character labels ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates are contiguous I want to replace the 2 'b' columns with the sum of the 2 columns. Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. The resulting array still has M rows but less than N columns. Anyone? Could be any harder than Sudoku. 
Mathew From kwgoodman at gmail.com Tue Aug 29 19:09:34 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Aug 2006 16:09:34 -0700 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: On 8/29/06, Mathew Yeates wrote: > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. Make a cumsum of the array. Find the index of the last 'a', last 'b', etc and make the reduced array from that. Then take the diff of the columns. I know that's vague, but so is my understanding of python/numpy. Or even more vague: make a function that does what you want. From charlesr.harris at gmail.com Tue Aug 29 19:17:36 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 17:17:36 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> <97670e910608291215md4a75d4hb7255aa131e2868a@mail.gmail.com> <44F494CE.1080008@ieee.org> <44F49CF2.5020505@ieee.org> <44F4ABAB.3090508@ieee.org> <20060829171950.62c0199e@arbutus.physics.mcmaster.ca> <44F4B666.3070901@ieee.org> Message-ID: On 8/29/06, Charles R Harris wrote: > > On 8/29/06, Tim Hochberg wrote: > > > David M. Cooke wrote: > > > On Tue, 29 Aug 2006 14:03:39 -0700 > > > Tim Hochberg wrote: > > > > > > > > >> Of these, clip, conjugate and round support an 'out' argument like > > that > > >> supported by ufunces; byteswap has a boolean argument telling it > > >> whether to perform operations in place; and sort always operates in > > >> place. 
Noting that the ufunc-like methods (max, argmax, etc) appear > > to > > >> support the 'out' argument as well although it's not documented for > > most > > >> of them, it looks to me as if the two odd methods are byteswap and > > sort. > > >> The method situation could be made more consistent by swapping the > > >> boolean inplace flag in byteswapped with another 'out' argument and > > also > > >> having sort not operate in place by default, but also supply an out > > >> argument there. Thus: > > >> > > >> b = a.sort() # Returns a copy > > >> a.sort(out=a) # Sorts a in place > > >> a.sort(out=c) # Sorts a into c (probably just equivalent to c = > > a.sort() > > >> in this case since we don't want to rewrite the sort routines) > > >> > > > > > > Ugh. That's completely different semantics from sort() on lists, so I > > think > > > it would be a source of bugs (at least, it would mean keeping two > > different > > > ideas of .sort() in my head). > > > > > Thinking about it a bit more, I'd leave sort alone (returning None and > > all).. I was (over)reacting to changing to sort to return self, which > > makes the set of methods both less consistent within itself, less > > consistent with python and more error prone IMO, which seems the worst > > possibility. > > > Here is Guido on sort: > > I'd like to explain once more why I'm so adamant that * > sort*() shouldn't > *return* 'self'. > > This comes from a coding style (popular in various other languages, I > believe especially Lisp revels in it) where a series of side effects > > on a single object can be chained like this: > > x.compress().chop(y).*sort*(z) > > which would be the same as > > x.compress() > x.chop > (y) > x.*sort*(z) > > I find the chaining form a threat to readability; it requires that the > reader must be intimately familiar with each of the methods. 
The > > second form makes it clear that each of these calls acts on the same > object, and so even if you don't know the class and its methods very > well, you can understand that the second and third call are applied to > > x (and that all calls are made for their side-effects), and not to > something else. > > I'd like to reserve chaining for operations that *return* new values, > > like string processing operations: > > y = x.rstrip("\n").split(":").lower() > > There are a few standard library modules that encourage chaining of > side-effect calls (pstat comes to mind). There shouldn't be any new > > ones; pstat slipped through my filter when it was weak. > > So it seems you are correct in light of the Python philosophy. For those > operators that allow specification of out I would still like to see a > special value that means inplace, I think it would make the code clearer. Of > course, merely having the out flag violates Guido's intent. The idea seems > to be that we want some way to avoid allocating new memory. So maybe > byteswap should be inplace and return None, while a copyto method could be > added. Then one would do > > a.copyto(b) > b.byteswap() > > instead of > > b = a.byteswap() > > To expand on this a bit. Guidos philosophy, combined with a desire for memory efficiency, means that methods like byteswap and clip, which use the same memory, should operate inplace and return None. Thus, instead of b = a.clip(...) use b = a.copy() b.clip(...) Hey, it's a risc machine. If we did this, then functions could always return copies: b = clip(a,...) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgmdevlist at gmail.com Tue Aug 29 19:22:05 2006 From: pgmdevlist at gmail.com (PGM) Date: Tue, 29 Aug 2006 19:22:05 -0400 Subject: [Numpy-discussion] A minor annoyance with MA In-Reply-To: <44F4C187.10102@ee.byu.edu> References: <200608290125.25232.pgmdevlist@gmail.com> <44F4C187.10102@ee.byu.edu> Message-ID: <200608291922.05664.pgmdevlist@gmail.com> Travis, > This bug has hopefully been fixed (in SVN). Please let us know if it > still persists. It seems to work quite fine with the latest version of ma. Thanks a lot ! P. From fperez.net at gmail.com Tue Aug 29 19:24:52 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 29 Aug 2006 17:24:52 -0600 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: <44F48E1A.1020006@ieee.org> References: <44F48E1A.1020006@ieee.org> Message-ID: On 8/29/06, Travis Oliphant wrote: > > Hi all, > > Classes start for me next Tuesday, and I'm teaching a class for which I > will be using NumPy / SciPy extensively. I need to have a release of > these two (and hopefully matplotlib) that work with each other. > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > (probably Monday), and also get a release of SciPy out as well. At that > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). What's the status of these 'overwriting' messages? planck[/tmp]> python -c 'import scipy;scipy.test()' Overwriting info= from scipy.misc (was from numpy.lib.utils) Overwriting fft= from scipy.fftpack.basic (was from /home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc) ... I was under the impression you'd decided to quiet them out, but they seem to be making a comeback. 
Cheers, f From charlesr.harris at gmail.com Tue Aug 29 19:26:23 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 17:26:23 -0600 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: On 8/29/06, Keith Goodman wrote: > > On 8/29/06, Mathew Yeates wrote: > > > I have an M by N array of floats. Associated with the columns are > > character labels > > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > > are contiguous > > > > I want to replace the 2 'b' columns with the sum of the 2 columns. > > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. > > Make a cumsum of the array. Find the index of the last 'a', last 'b', > etc and make the reduced array from that. Then take the diff of the > columns. > > I know that's vague, but so is my understanding of python/numpy. > > Or even more vague: make a function that does what you want. Or you could use searchsorted on the labels to get a sequence of ranges. What you have is a sort of binning applied to columns instead of values in a vector. Or, if the overhead isn't to much, use a dictionary of with (keys: array) entries. Index thru the columns adding keys, when the key is new insert a column copy, when it is already present add the new column to the old one. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
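A sketch of Mathew's column-collapsing problem along the lines Keith and Chuck suggest: since the labels are sorted, each run of equal labels is contiguous, and `np.add.reduceat` sums each run of columns in one call (sample data is my own):

```python
import numpy as np

A = np.arange(12.0).reshape(2, 6)          # M=2 rows, N=6 columns
labels = ['a', 'b', 'b', 'c', 'e', 'e']    # sorted, duplicates contiguous

# index of the first column in each run of equal labels
starts = [i for i, lab in enumerate(labels) if i == 0 or lab != labels[i - 1]]

# reduceat sums A[:, starts[k]:starts[k+1]] for each k, along axis 1
reduced = np.add.reduceat(A, starts, axis=1)
assert reduced.shape == (2, 4)             # one column per distinct label
assert np.array_equal(reduced[0], [0.0, 3.0, 3.0, 9.0])
assert np.array_equal(reduced[1], [6.0, 15.0, 9.0, 21.0])
```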
URL: From rkanwar at geol.sc.edu Tue Aug 29 19:57:45 2006 From: rkanwar at geol.sc.edu (Rahul Kanwar) Date: Tue, 29 Aug 2006 19:57:45 -0400 Subject: [Numpy-discussion] array indexing problem Message-ID: <1156895865.5499.5.camel@hydro.geol.sc.edu> Hello, I am trying to extract a column from a 2D array here is what is have done: -------------------------------------------- In [3]: a = array([[1,2,3],[1,2,3]]) In [4]: a Out[4]: array([[1, 2, 3], [1, 2, 3]]) In [5]: a[:, 1] Out[5]: array([2, 2]) In [6]: a[:, 1:2] Out[6]: array([[2], [2]]) -------------------------------------------- when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am i doing something wrong here or is there some reason for this behavior ? regards, Rahul From wbaxter at gmail.com Tue Aug 29 20:02:24 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 30 Aug 2006 09:02:24 +0900 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: That's just the way it works in numpy. Slices return arrays of lower rank. If you want arrays that behave like they do in linear algebra you can use 'matrix' instead. Check out the Numpy for Matlab users page for more info on array vs. matrix. http://www.scipy.org/NumPy_for_Matlab_Users --bb On 8/30/06, Rahul Kanwar wrote: > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. 
Am > i doing something wrong here or is there some reason for this behavior ? > > regards, > Rahul > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From rahul.kanwar at gmail.com Tue Aug 29 20:05:04 2006 From: rahul.kanwar at gmail.com (Rahul Kanwar) Date: Tue, 29 Aug 2006 20:05:04 -0400 Subject: [Numpy-discussion] array indexing problem Message-ID: <63dec5bf0608291705l793865cag4dc59884a1542f92@mail.gmail.com> Hello, I am trying to extract a column from a 2D array here is what is have done: -------------------------------------------- In [3]: a = array([[1,2,3],[1,2,3]]) In [4]: a Out[4]: array([[1, 2, 3], [1, 2, 3]]) In [5]: a[:, 1] Out[5]: array([2, 2]) In [6]: a[:, 1:2] Out[6]: array([[2], [2]]) -------------------------------------------- when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am i doing something wrong here or is there some reason for this behavior ? 
regards, Rahul From charlesr.harris at gmail.com Tue Aug 29 20:11:13 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 29 Aug 2006 18:11:13 -0600 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: On 8/29/06, Rahul Kanwar wrote: > > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am > i doing something wrong here or is there some reason for this behavior ? The behaviour is expected. a[:,1] is returned with one less dimension, just as for a one dimensional array b[1] is zero dimensional (a scalar). For instance In [65]: int64(2).shape Out[65]: () You can get what you expect using matrices: In [67]: a = mat(arange(6).reshape(2,3)) In [68]: a[:,1] Out[68]: matrix([[1], [4]]) But generally it is best to just use arrays and get used to the conventions. regards, > Rahul Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
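A minimal sketch of the convention Charles describes: an integer index removes an axis, while a length-1 slice keeps it.

```python
import numpy as np

a = np.array([[1, 2, 3], [1, 2, 3]])

print(a[:, 1].shape)    # (2,)   integer index: the column axis is gone
print(a[:, 1:2].shape)  # (2, 1) slice: the column axis survives with length 1

# If a column vector is wanted from plain arrays, reinsert the dropped axis
col = a[:, 1][:, np.newaxis]
print(col.shape)        # (2, 1)
```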
URL: From robert.kern at gmail.com Tue Aug 29 20:13:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Aug 2006 19:13:39 -0500 Subject: [Numpy-discussion] array indexing problem In-Reply-To: <1156895865.5499.5.camel@hydro.geol.sc.edu> References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: Rahul Kanwar wrote: > Hello, > > I am trying to extract a column from a 2D array here is what is have > done: > > -------------------------------------------- > In [3]: a = array([[1,2,3],[1,2,3]]) > > In [4]: a > Out[4]: > array([[1, 2, 3], > [1, 2, 3]]) > > In [5]: a[:, 1] > Out[5]: array([2, 2]) > > In [6]: a[:, 1:2] > Out[6]: > array([[2], > [2]]) > -------------------------------------------- > > when i use a[:, 1] i get a 1x2 array where as when i use a[:, 1:2] i get > a 2x1 array. The intuitive behavior of a[:, 1] should be a 2x1 array. Am > i doing something wrong here or is there some reason for this behavior ? Indexing reduces the rank of the array. Slicing does not. In the first instance, you do not get a 1x2 array; you get an array with shape (2,). This choice dates from the earliest days of Numeric. It ends up being quite useful in most contexts. However, it is somewhat less so when you want to treat these arrays as matrices and row and column vectors. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ngjdql at dekleineknip.com Tue Aug 29 21:53:36 2006 From: ngjdql at dekleineknip.com (Stanislaus Farrell) Date: Tue, 29 Aug 2006 20:53:36 -0500 Subject: [Numpy-discussion] profusion supremely Message-ID: <001101c6cbd8$3aa14635$9057efc9@bltbp> 
From rw679aq02 at sneakemail.com Wed Aug 30 01:47:02 2006 From: rw679aq02 at sneakemail.com (rw679aq02 at sneakemail.com) Date: Tue, 29 Aug 2006 22:47:02 -0700 Subject: [Numpy-discussion] Irregular arrays Message-ID: <1156916822.16010.269732980@webmail.messagingengine.com> Many problems are best solved with irregular array structures. These are aggregations not having a rectangular shape. To motivate, here's one example, http://lambda-the-ultimate.org/files/HammingNumbersDeclarative.7z - from http://lambda-the-ultimate.org/node/608#comment-5746 Irregularity here changes an O(N^3) solution to O(N). (The file format is a 7zip archive with a MathReader file inside, readable in Windows or Unix with free software.) These cases also arise in simulations where physical geometry determines array shape. Here memory consumption is the minimization goal that makes irregularity desirable. The access function will return NaN or zero for out-of-bounds requests. There is no need to consume memory storing NaNs and zeros. Please advise how much support numpy/Scipy has for these structures, if any, including future plans. If support exists, could you kindly supply a Scipy declaration matching the first example. Thank you very much. 
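One way to hand-roll the kind of packed irregular storage asked about here is a flat contiguous buffer plus per-row offsets. The layout and the get helper below are purely illustrative, not an existing numpy or scipy feature:

```python
import numpy as np

# Three ragged rows of lengths 3, 1, 2 packed into one contiguous buffer
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
offsets = np.array([0, 3, 4, 6])  # row i occupies data[offsets[i]:offsets[i+1]]

def get(row, col):
    """Element access returning NaN for out-of-bounds columns, as requested."""
    start, stop = offsets[row], offsets[row + 1]
    if not 0 <= col < stop - start:
        return np.nan
    return data[start + col]

print(get(0, 2))  # 3.0
print(get(1, 1))  # nan: row 1 holds a single element
```

No memory is spent on the missing entries; only the offsets array is added, and each in-bounds element is stored exactly once.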
From oliphant.travis at ieee.org Wed Aug 30 02:58:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Aug 2006 00:58:53 -0600 Subject: [Numpy-discussion] Irregular arrays In-Reply-To: <1156916822.16010.269732980@webmail.messagingengine.com> References: <1156916822.16010.269732980@webmail.messagingengine.com> Message-ID: <44F5372D.40206@ieee.org> rw679aq02 at sneakemail.com wrote: > Many problems are best solved with irregular array structures. These > are aggregations not having a rectangular shape. To motivate, here's > one example, > > http://lambda-the-ultimate.org/files/HammingNumbersDeclarative.7z > - from http://lambda-the-ultimate.org/node/608#comment-5746 > > Irregularity here changes an O(N^3) solution to O(N). (The file format > is a 7zip archive with a MathReader file inside, readable in Windows or > Unix with free software.) > > These cases also arise in simulations where physical geometry determines > array shape. Here memory consumption is the minimization goal that > makes irregularity desirable. The access function will return NaN or > zero for out-of-bounds requests. There is no need to consume memory > storing NaNs and zeros > > Please advise how much support numpy/Scipy has for these structures, if > any, including future plans. If support exists, could you kindly supply > a Scipy declaration matching the first example. > SciPy has sparse matrix support (scipy.sparse) with several storage formats You can also construct irregular arrays using arrays of objects or just lists of lists. 
-Travis From bruce.who.hk at gmail.com Wed Aug 30 04:06:01 2006 From: bruce.who.hk at gmail.com (bruce.who.hk) Date: Wed, 30 Aug 2006 16:06:01 +0800 Subject: [Numpy-discussion] [ANN] NumPy 1.0b4 now available References: <44F01802.8050505@ieee.org> <200608281448353906004@gmail.com> <44F341E4.7000003@ieee.org> Message-ID: <200608301605580156650@gmail.com> Hi, Travis I tried numpy1.0b4 and add this to setup.py includes = ["numpy.core._internal"] then it works! And all scripts can be packed into a single executables with "bundle_files":2, "skip_archive":0, zipfile = None, --skip_archive option is not needed now. ------------------------------------------------------------- >I suspect you need to force-include the numpy/core/_internal.py file by >specifying it in your setup.py file as explained on the py2exe site. >That module is only imported by the multiarraymodule.c file which I >suspect py2exe can't automatically discern. > >In 1.0 we removed the package-loader issues which are probably giving >the scipy-style subpackage errors. So, very likely you might be O.K. >with the beta releases of 1.0 as long as you tell py2exe about >numpy/core/_internal.py so that it gets included in the distribution. > >Please post any successes. > >Best, > >-Travis > >-- >http://mail.python.org/mailman/listinfo/python-list ------------------ bruce.who.hk 2006-08-30 From rw679aq02 at sneakemail.com Wed Aug 30 04:21:05 2006 From: rw679aq02 at sneakemail.com (rw679aq02 at sneakemail.com) Date: Wed, 30 Aug 2006 01:21:05 -0700 Subject: [Numpy-discussion] Irregular arrays In-Reply-To: <1156916822.16010.269732980@webmail.messagingengine.com> References: <1156916822.16010.269732980@webmail.messagingengine.com> Message-ID: <1156926065.27232.269739888@webmail.messagingengine.com> Travis, A sparse matrix is a different animal serving a different purpose, i.e., solution of linear systems. Those storage formats are geared for that application: upper diagonal, block diagonal, stripwise, etc. 
To be more specific: here tight numerical arrays are presumably discussed. Python and other languages could define an "irregular list of irregular lists" or "aggregation of objects" configuration. Probably Lisp would be better for that. But it is not my driving interest. My interest is packed storage minimizing memory consumption and access time, with bonus points for integration with numerical recipes and element-wise operations. Again, actual demonstration would be appreciated. I selected an example with minimal deviation from a regular array to simplify things. The shape is essentially a cube with a planar cut across one corner. The Mathematica code shows it is very easy to define in that language. (I am not sure whether it is tightly packed but it shows O(N) performance graphs.) From svetosch at gmx.net Wed Aug 30 05:57:52 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 30 Aug 2006 11:57:52 +0200 Subject: [Numpy-discussion] array indexing problem In-Reply-To: References: <1156895865.5499.5.camel@hydro.geol.sc.edu> Message-ID: <44F56120.5040404@gmx.net> Charles R Harris schrieb: > You can get what you expect using matrices: > ... > But generally it is best to just use arrays and get used to the conventions. > Well, there are different views on this subject, and I'm happy that the numpy crew is really trying (and good at it) to make array *and* matrix users happy. So please let us coexist peacefully. -sven From landriu at discovery.saclay.cea.fr Wed Aug 30 06:28:40 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Wed, 30 Aug 2006 12:28:40 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package Message-ID: <200608301029.k7UATQ4v013493@discovery.saclay.cea.fr> Hello, is it necessary to install numarray separately to use numpy ? Indeed, after numpy installation, when I try to use it in the code, I get the same error as below : .../... 
Python 2.4.1 (#1, May 13 2005, 13:45:18) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? from util import * File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? from numpy import geterr ImportError: No module named numpy >>> Thanks for your answer, Cheers, David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From a.h.jaffe at gmail.com Wed Aug 30 07:04:22 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 30 Aug 2006 12:04:22 +0100 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? Message-ID: Hi all, the current implementation of fftfreq (which is meant to return the appropriate frequencies for an FFT) does the following: k = range(0,(n-1)/2+1)+range(-(n/2),0) return array(k,'d')/(n*d) I have tried this with very long (2**24) arrays, and it is ridiculously slow. Should this instead use arange (or linspace?) and concatenate rather than converting the above list? This seems to result in acceptable performance, but we could also perhaps even pre-allocate the space. The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to produce lots of duplicated frequencies, contrary to the actual output of rfft: def rfftfreq(n,d=1.0): """ rfftfreq(n, d=1.0) -> f DFT sample frequencies (for usage with rfft,irfft). 
The returned float array contains the frequency bins in cycles/unit (with zero at the start) given a window length n and a sample spacing d: f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd **** None of these should be doubled, right? """ assert isinstance(n,int) return array(range(1,n+1),dtype=int)/2/float(n*d) Thanks, Andrew From a.h.jaffe at gmail.com Wed Aug 30 07:17:51 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 30 Aug 2006 12:17:51 +0100 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? In-Reply-To: References: Message-ID: [copied to the scipy list since rfftfreq is only in scipy] Andrew Jaffe wrote: > Hi all, > > the current implementation of fftfreq (which is meant to return the > appropriate frequencies for an FFT) does the following: > > k = range(0,(n-1)/2+1)+range(-(n/2),0) > return array(k,'d')/(n*d) > > I have tried this with very long (2**24) arrays, and it is ridiculously > slow. Should this instead use arange (or linspace?) and concatenate > rather than converting the above list? This seems to result in > acceptable performance, but we could also perhaps even pre-allocate the > space. > > The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to > produce lots of duplicated frequencies, contrary to the actual output of > rfft: > > def rfftfreq(n,d=1.0): > """ rfftfreq(n, d=1.0) -> f > > DFT sample frequencies (for usage with rfft,irfft). > > The returned float array contains the frequency bins in > cycles/unit (with zero at the start) given a window length n and a > sample spacing d: > > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd > > **** None of these should be doubled, right? 
> > """ > assert isinstance(n,int) > return array(range(1,n+1),dtype=int)/2/float(n*d) > > Thanks, > > Andrew > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 From stefan at sun.ac.za Wed Aug 30 08:04:16 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 14:04:16 +0200 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F47036.8040300@ieee.org> References: <44F47036.8040300@ieee.org> Message-ID: <20060830120415.GQ23074@mentat.za.net> On Tue, Aug 29, 2006 at 10:49:58AM -0600, Travis Oliphant wrote: > Matt Knox wrote: > > is the following behaviour expected? or is this a bug with > > numpy.object_ ? I'm using numpy 1.0b1 > > > > >>> print numpy.array([],numpy.float64).size > > 0 > > > > >>> print numpy.array([],numpy.object_).size > > 1 > > > > Should the size of an array initialized from an empty list not always > > be 1 ? or am I just crazy? > > > Not in this case. Explictly creating an object array from any object > (even the empty-list object) gives you a 0-d array containing that > object. When you explicitly create an object array a different section > of code handles it and gives this result. This is a recent change, and > I don't think this use-case was considered as a backward incompatibility > (which I believe it is). Perhaps we should make it so array([],....) > always returns an empty array. I'm not sure. Comments? 
The current behaviour makes sense, but is maybe not consistent: N.array([],dtype=object).size == 1 N.array([[],[]],dtype=object).size == 2 Regards Stéfan From svetosch at gmx.net Wed Aug 30 08:31:50 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 30 Aug 2006 14:31:50 +0200 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: <44F58536.7030806@gmx.net> Mathew Yeates schrieb: > My head is about to explode. > > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. > > The resulting array still has M rows but less than N columns. Anyone? > Could be any harder than Sudoku. > Hi, I don't have time for this ;-) , but I learnt something useful along the way... import numpy as n m = n.ones([2,6]) a = ['b', 'c', 'c', 'd', 'd', 'd'] startindices = set([a.index(x) for x in a]) out = n.empty([m.shape[0], 0]) for i in startindices: temp = n.mat(m[:, i : i + a.count(a[i])]).sum(axis = 1) out = n.hstack([out, temp]) print out Not sure if axis = 1 is needed, but until the defaults have settled a bit it can't hurt. You need python 2.4 for the built-in set, and out will be a numpy matrix; use asarray if you don't like that. 
But here it's really nice to work with matrices, because otherwise .sum() will give you a 1-d array sometimes, and that will suddenly look like a row to hstack (instead of a nice column vector) and wouldn't work -- that's why matrices are so great and everybody should be using them ;-) hth, sven From landriu at discovery.saclay.cea.fr Wed Aug 30 08:51:51 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Wed, 30 Aug 2006 14:51:51 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] Message-ID: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Hello, I come back to my question : how to use numarray with the numpy installation ? After some update in the system there is another error message : >> AttributeError: 'module' object has no attribute 'NewAxis' It seems, from advice of the system manager, that a kind of alias failed to execute the right action. Thanks in advance for your answer, Cheers, David Landriu ------------- Begin Forwarded Message ------------- >Date: Wed, 30 Aug 2006 14:14:27 +0200 (MEST) >To: LANDRIU David SAp >Subject: Re: Use of numarray from numpy package [# INC NO 24609] >From: User Support >Error-to: Jean-Rene Rouet >X-CEA-Source: externe >X-CEA-DebugSpam: 7% >X-CEA-Spam-Report: No antispam rules were triggered by this message >X-CEA-Spam-Hits: __HAS_MSGID 0, __MIME_TEXT_ONLY 0, __SANE_MSGID 0, __STOCK_CRUFT 0 >MIME-Version: 1.0 >Content-Transfer-Encoding: 8bit >X-Spam-Checker-Version: SpamAssassin 2.63 (2004-01-11) on discovery >X-Spam-Status: No, hits=0.1 required=4.0 tests=AWL autolearn=no version=2.63 >X-Spam-Level: > > >Reply from User-Support to 
your question: >------------------------------------------ > >Hello again >Please try now > >WW Here is what I get now: {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy {ccali22}~(0)> {ccali22}~(0)> {ccali22}~(0)>python Python 2.3.5 (#2, Oct 17 2005, 17:20:02) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'NewAxis' >>> ############################################## ############################################## Hello, is it necessary to install numarray separately to use numpy ? Indeed, after numpy installation, when I try to use it in the code, I get the same error as below : .../... Python 2.4.1 (#1, May 13 2005, 13:45:18) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numarray import * Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? from util import * File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? 
from numpy import geterr ImportError: No module named numpy >>> Thanks for your answer, Cheers, David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From joris at ster.kuleuven.be Wed Aug 30 09:42:54 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Wed, 30 Aug 2006 15:42:54 +0200 Subject: [Numpy-discussion] Use of numarray from numpy package In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <200608301542.54416.joris@ster.kuleuven.be> Hi David, Numeric, numarray and numpy are three different packages that can live independently, but that can also coexist if you like. If you're new to these packages, you should stick to numpy, as the other ones are getting phased out. It's difficult to see what's going wrong without having seen how you installed it. I see that you tried >>> from numarray import * Perhaps a stupid question, but you did import numpy with >>> from numpy import * didn't you? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From cyray at jnjcr.jnj.com Wed Aug 30 09:47:11 2006 From: cyray at jnjcr.jnj.com (Ellil Rendon) Date: Wed, 30 Aug 2006 06:47:11 -0700 Subject: [Numpy-discussion] eoRXwu Message-ID: <000001c6cc3a$cb010910$52c0a8c0@wbaa> 
URL: From tim.hochberg at ieee.org Wed Aug 30 10:33:25 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 30 Aug 2006 07:33:25 -0700 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: References: Message-ID: <44F5A1B5.7090409@ieee.org> Torgil Svensson wrote: >> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >> > > Is it possible for fromiter to take an optional shape (or count) > argument in addition to the dtype argument? Yes. fromiter(iterable, dtype, count) works. > If both is given it could > preallocate memory and we only have to iterate over L once. > Regardless, L is only iterated over once. In general you can't rewind iterators, so that's a requirement. This is accomplished by doing successive overallocation similar to the way appending to a list is handled. By specifying the count up front you save a bunch of reallocs, but no iteration. -tim > //Torgil > > On 8/29/06, Keith Goodman wrote: > >> On 8/29/06, Torgil Svensson wrote: >> >>> something like this? >>> >>> def list2index(L): >>> uL=sorted(set(L)) >>> idx=dict((y,x) for x,y in enumerate(uL)) >>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>> >> Wow. That's amazing. Thank you. >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From perry at stsci.edu Wed Aug 30 10:43:26 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 30 Aug 2006 10:43:26 -0400 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <56849779-37DC-444D-B260-14CBFDAEE201@stsci.edu> On Aug 30, 2006, at 8:51 AM, LANDRIU David SAp wrote: > Hello, > > I come back to my question : how to use numarray > with the numpy installation ? > If you are using both at the same time, one thing you don't want to do is from numpy import * from numarray import * You can do that with one or the other but not both. Are you doing that? Perry Greenfield From stefan at sun.ac.za Wed Aug 30 10:51:52 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 16:51:52 +0200 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F4C3D5.80600@jpl.nasa.gov> References: <44F4C3D5.80600@jpl.nasa.gov> Message-ID: <20060830145152.GT23074@mentat.za.net> On Tue, Aug 29, 2006 at 03:46:45PM -0700, Mathew Yeates wrote: > My head is about to explode. > > I have an M by N array of floats. Associated with the columns are > character labels > ['a','b','b','c','d','e','e','e'] note: already sorted so duplicates > are contiguous > > I want to replace the 2 'b' columns with the sum of the 2 columns. > Similarly, replace the 3 'e' columns with the sum of the 3 'e' columns. 
> > The resulting array still has M rows but less than N columns. Anyone? > Could be any harder than Sudoku. I attach one possible solution (allowing for the same column name occurring in different places, i.e. ['a','b','b','a']). I'd be glad for any suggestions on how to clean up the code. Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: arsum.py Type: text/x-python Size: 572 bytes Desc: not available URL: From fperez.net at gmail.com Wed Aug 30 11:11:43 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 09:11:43 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <20060830120415.GQ23074@mentat.za.net> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> Message-ID: On 8/30/06, Stefan van der Walt wrote: > The current behaviour makes sense, but is maybe not consistent: > > N.array([],dtype=object).size == 1 > N.array([[],[]],dtype=object).size == 2 Yes, including one more term in this check: In [5]: N.array([],dtype=object).size Out[5]: 1 In [6]: N.array([[]],dtype=object).size Out[6]: 1 In [7]: N.array([[],[]],dtype=object).size Out[7]: 2 Intuitively, I'd have expected the answers to be 0,1,2, instead of 1,1,2. Cheers, f From kwgoodman at gmail.com Wed Aug 30 11:53:45 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 30 Aug 2006 08:53:45 -0700 Subject: [Numpy-discussion] amd64 support Message-ID: I plan to build an amd64 box and run debian etch. Are there any big, 64-bit, show-stopping problems in numpy? Any minor annoyances? 
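One quick sanity check relevant to Keith's 64-bit question: on a healthy amd64 build, numpy's pointer-sized index type should be 8 bytes wide. A small sketch (the printed values depend on the platform):

```python
import numpy as np

# intp is the integer type numpy uses for indexing and shapes;
# it tracks the platform pointer size (8 bytes on amd64, 4 on 32-bit)
print(np.dtype(np.intp).itemsize)

# The default integer dtype follows the platform's C long and may differ
print(np.dtype(np.int_).itemsize)
```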
From strawman at astraw.com Wed Aug 30 12:13:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 30 Aug 2006 09:13:16 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> Message-ID: <44F5B91C.5090202@astraw.com> LANDRIU David SAp wrote: > Hello, > > I come back to my question : how to use numarray > with the numpy installation ? > > {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy > Here's where you went wrong. You want: setenv PYTHONPATH /usr/local/lib/python2.3/site-packages > {ccali22}~(0)>python > Python 2.3.5 (#2, Oct 17 2005, 17:20:02) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> from numarray import * >>>> > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ? > from util import * > File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? > from numpy import geterr > ImportError: No module named numpy > Note that you're actually importing a numarray within numpy's directory structure. That's because of your PYTHONPATH. numpy ships numpy.numarray to provide backwards compatibility. To use it, you must do "import numpy.numarray as numarray" Cheers! Andrew From stefan at sun.ac.za Wed Aug 30 12:41:49 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 18:41:49 +0200 Subject: [Numpy-discussion] fftfreq very slow; rfftfreq incorrect? 
In-Reply-To: References: Message-ID: <20060830164149.GV23074@mentat.za.net> On Wed, Aug 30, 2006 at 12:04:22PM +0100, Andrew Jaffe wrote: > the current implementation of fftfreq (which is meant to return the > appropriate frequencies for an FFT) does the following: > > k = range(0,(n-1)/2+1)+range(-(n/2),0) > return array(k,'d')/(n*d) > > I have tried this with very long (2**24) arrays, and it is ridiculously > slow. Should this instead use arange (or linspace?) and concatenate > rather than converting the above list? This seems to result in > acceptable performance, but we could also perhaps even pre-allocate the > space. Please try the attached benchmark. > The numpy.fft.rfftfreq seems just plain incorrect to me. It seems to > produce lots of duplicated frequencies, contrary to the actual output of > rfft: > > def rfftfreq(n,d=1.0): > """ rfftfreq(n, d=1.0) -> f > > DFT sample frequencies (for usage with rfft,irfft). > > The returned float array contains the frequency bins in > cycles/unit (with zero at the start) given a window length n and a > sample spacing d: > > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2]/(d*n) if n is even > f = [0,1,1,2,2,...,n/2-1,n/2-1,n/2,n/2]/(d*n) if n is odd > > **** None of these should be doubled, right? > > """ > assert isinstance(n,int) > return array(range(1,n+1),dtype=int)/2/float(n*d) Please produce a code snippet to demonstrate the problem. We can then fix the bug and use your code as a unit test. Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed...
Name: fftfreq_bench.py Type: text/x-python Size: 2201 bytes Desc: not available URL: From lfriedri at imtek.de Wed Aug 30 12:39:43 2006 From: lfriedri at imtek.de (Lars Friedrich) Date: Wed, 30 Aug 2006 18:39:43 +0200 Subject: [Numpy-discussion] upcast In-Reply-To: References: Message-ID: <1156955983.6572.13.camel@localhost> Hello, I would like to discuss the following code:

#***start***
import numpy as N

a = N.array((200), dtype = N.uint8)
print (a * 100) / 100

b = N.array((200, 200), dtype = N.uint8)
print (b * 100) / 100
#***stop***

The first print statement will print "200" because the uint8-value is cast "upwards", I suppose. The second statement prints "[0 0]". I suppose this is due to overflows during the calculation. How can I tell numpy to do the upcast also in the second case, returning "[200 200]"? I am interested in the fastest solution regarding execution time. In my application I would like to store the result in a Numeric.UInt8 array. Thanks for every comment Lars From Chris.Barker at noaa.gov Wed Aug 30 13:18:49 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 30 Aug 2006 10:18:49 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <44F5B91C.5090202@astraw.com> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> <44F5B91C.5090202@astraw.com> Message-ID: <44F5C879.3040404@noaa.gov> Andrew Straw wrote: >> {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy >> > Here's where you went wrong. You want: > > setenv PYTHONPATH /usr/local/lib/python2.3/site-packages Which you shouldn't need at all. site-packages should be on sys.path by default. -Chris -- Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ghalib at sent.com Wed Aug 30 13:20:22 2006 From: ghalib at sent.com (Ghalib Suleiman) Date: Wed, 30 Aug 2006 13:20:22 -0400 Subject: [Numpy-discussion] Interfacing with PIL? Message-ID: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> I'm somewhat new to both libraries...is there any way to create a 2D array of pixel values from an image object from the Python Image Library? I'd like to do some arithmetic on the values. From a.u.r.e.l.i.a.n at gmx.net Wed Aug 30 14:10:59 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 30 Aug 2006 20:10:59 +0200 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> Message-ID: <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> Am Mittwoch, 30. August 2006 19:20 schrieb Ghalib Suleiman: > I'm somewhat new to both libraries...is there any way to create a 2D > array of pixel values from an image object from the Python Image > Library? I'd like to do some arithmetic on the values. Yes. To transport the data: >>> import numpy >>> image = >>> arr = numpy.fromstring(image.tostring(), dtype=numpy.uint8) (alternately use dtype=numpy.uint32 if you want RGBA packed in one number). arr will be a 1d array with length (height * width * b(ytes)pp). Use reshape to get it into a reasonable form. HTH, Johannes From tim.hochberg at ieee.org Wed Aug 30 14:16:58 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed, 30 Aug 2006 11:16:58 -0700 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <44F5D61A.4080503@ieee.org> Johannes Loehnert wrote: > Am Mittwoch, 30. 
August 2006 19:20 schrieb Ghalib Suleiman: > >> I'm somewhat new to both libraries...is there any way to create a 2D >> array of pixel values from an image object from the Python Image >> Library? I'd like to do some arithmetic on the values. >> > > Yes. > > To transport the data: > >>>> import numpy >>>> image = >>>> arr = numpy.fromstring(image.tostring(), dtype=numpy.uint8) >>>> > > (alternately use dtype=numpy.uint32 if you want RGBA packed in one number). > > arr will be a 1d array with length (height * width * b(ytes)pp). Use reshape > to get it into a reasonable form. > On a related note, does anyone have a good recipe for converting a PIL image to a wxPython image? The last time I tried this, the best I could come up with was: stream = cStringIO.StringIO() img.save(stream, "png") # img is PIL Image stream.seek(0) image = wx.ImageFromStream(stream) # image is a wxPython Image -tim From Chris.Barker at noaa.gov Wed Aug 30 15:15:15 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 30 Aug 2006 12:15:15 -0700 Subject: [Numpy-discussion] Interfacing with PIL? In-Reply-To: <44F5D61A.4080503@ieee.org> References: <2569935D-20F9-42D3-B79E-BAB68818F4B3@sent.com> <200608302010.59845.a.u.r.e.l.i.a.n@gmx.net> <44F5D61A.4080503@ieee.org> Message-ID: <44F5E3C3.5030300@noaa.gov> Tim Hochberg wrote: > Johannes Loehnert wrote: >>> I'm somewhat new to both libraries...is there any way to create a 2D >>> array of pixel values from an image object from the Python Image >>> Library? I'd like to do some arithmetic on the values. the latest version of PIL (maybe not released yet) supports the array interface, so you may be able to do something like: A = numpy.asarray(PIL_image) see the PIL page: http://effbot.org/zone/pil-changes-116.htm where it says: Changes from release 1.1.5 to 1.1.6 Added "fromarray" function, which takes an object implementing the NumPy array interface and creates a PIL Image from it. (from Travis Oliphant). 
Added NumPy array interface support (__array_interface__) to the Image class (based on code by Travis Oliphant). This allows you to easily convert between PIL image memories and NumPy arrays: import numpy, Image i = Image.open('lena.jpg') a = numpy.asarray(i) # a is readonly i = Image.fromarray(a) > On a related note, does anyone have a good recipe for converting a PIL > image to a wxPython image? Does a PIL image support the buffer protocol? There will be a: wx.ImageFromBuffer() soon, and there is now: wx.Image.SetDataBuffer() if not, I think this will work: I = wx.EmptyImage(width, height) DataString = PIL_image.tostring() I.SetDataBuffer(DataString) This will only work if the PIL image is a 24-bit RGB image, of course. Just make sure to keep DataString around, so that the data buffer doesn't get deleted. wx.ImageFromBuffer() will do that for you, but it's not available until 2.7 comes out. Ideally, both PIL and wx will support the array interface, and we can just do: I = wx.ImageFromArray(PIL_Image) and not get any data copying as well. Also, Robin has just added some methods to directly manipulate wxBitmaps, so you can use a numpy array as the data buffer for a wx.Bitmap. This can help prevent a lot of data copies. see a test here: http://cvs.wxwidgets.org/viewcvs.cgi/wxWidgets/wxPython/demo/RawBitmapAccess.py?rev=1.3&content-type=text/vnd.viewcvs-markup -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From thilo.wehrmann at gmx.net Wed Aug 30 17:14:21 2006 From: thilo.wehrmann at gmx.net (Thilo Wehrmann) Date: Wed, 30 Aug 2006 23:14:21 +0200 Subject: [Numpy-discussion] (no subject) Message-ID: <20060830211421.193280@gmx.net> Hi, currently I'm trying to compile the latest numpy version (1.0b4) under an SGI IRIX 6.5 environment. I'm using the gcc 3.4.6 compiler and python 2.4.3 (self compiled).
During the compilation of numpy.core I get a nasty error message: ... copying build/src.irix64-6.5-2.4/numpy/__config__.py -> build/lib.irix64-6.5-2.4/numpy copying build/src.irix64-6.5-2.4/numpy/distutils/__config__.py -> build/lib.irix64-6.5-2.4/numpy/distutils running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize MipsFCompiler customize MipsFCompiler customize MipsFCompiler using build_ext building 'numpy.core.umath' extension compiling C sources C compiler: gcc -fno-strict-aliasing -DNDEBUG -D_FILE_OFFSET_BITS=64 -DHAVE_LARGEFILE_SUPPORT -fmessage-length=0 -Wall -O2 compile options: '-Ibuild/src.irix64-6.5-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.irix64-6.5-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.4 -c' gcc: build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.c numpy/core/src/umathmodule.c.src: In function `nc_sqrtf': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `hypotf' numpy/core/src/umathmodule.c.src: In function `nc_sqrtl': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `fabsl' ... ... lots of math functions ... ... numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpl' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpl' In file included from numpy/core/src/umathmodule.c.src:2011: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c: At top level: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: `acosl' undeclared here (not in a function) build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: (near initialization for `arccos_data[2]') ... ... 
lots of math functions ... ... build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: (near initialization for `tanh_data[2]') numpy/core/include/numpy/ufuncobject.h:328: warning: 'generate_overflow_error' defined but not used numpy/core/src/umathmodule.c.src: In function `nc_sqrtf': numpy/core/src/umathmodule.c.src:602: warning: implicit declaration of function `hypotf' ... ... lots of math functions ... ... numpy/core/src/umathmodule.c.src: In function `FLOAT_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpf' numpy/core/src/umathmodule.c.src: In function `FLOAT_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpf' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_frexp': numpy/core/src/umathmodule.c.src:1940: warning: implicit declaration of function `frexpl' numpy/core/src/umathmodule.c.src: In function `LONGDOUBLE_ldexp': numpy/core/src/umathmodule.c.src:1957: warning: implicit declaration of function `ldexpl' In file included from numpy/core/src/umathmodule.c.src:2011: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c: At top level: build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: `acosl' undeclared here (not in a function) build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:15: error: initializer element is not constant ... ... lots of math functions ... ... 
build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: initializer element is not constant build/src.irix64-6.5-2.4/numpy/core/__umath_generated.c:192: error: (near initialization for `tanh_data[2]') numpy/core/include/numpy/ufuncobject.h:328: warning: 'generate_overflow_error' defined but not used error: Command "gcc -fno-strict-aliasing -DNDEBUG -D_FILE_OFFSET_BITS=64 -DHAVE_LARGEFILE_SUPPORT -fmessage-length=0 -Wall -O2 -Ibuild/src.irix64-6.5-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.irix64-6.5-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.4 -c build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.c -o build/temp.irix64-6.5-2.4/build/src.irix64-6.5-2.4/numpy/core/src/umathmodule.o" failed with exit status 1 Can somebody explain to me what's going wrong? It seems some header files are missing. thanks, thilo -- The GMX SmartSurfer helps you save up to 70% of your online costs! Ideal for modem and ISDN: http://www.gmx.net/de/go/smartsurfer From wbaxter at gmail.com Wed Aug 30 17:18:34 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 31 Aug 2006 06:18:34 +0900 Subject: [Numpy-discussion] stumped numpy user seeks help In-Reply-To: <44F58536.7030806@gmx.net> References: <44F4C3D5.80600@jpl.nasa.gov> <44F58536.7030806@gmx.net> Message-ID: On 8/30/06, Sven Schreiber wrote: > Mathew Yeates wrote: > will be a numpy matrix, use if you don't like that. But here > it's really nice to work with matrices, because otherwise .sum() will > give you a 1-d array sometimes, and that will suddenly look like a row > to (instead of a nice column vector) and wouldn't work -- > that's why matrices are so great and everybody should be using them ;-) column_stack would work perfectly in place of hstack there if it only didn't have the silly behavior of transposing arguments that already are 2-d.
As a reminder, here's the replacement implementation of column_stack I proposed on July 21:

def column_stack(tup):
    def transpose_1d(array):
        if array.ndim < 2:
            return _nx.transpose(atleast_2d(array))
        else:
            return array
    arrays = map(transpose_1d, map(atleast_1d, tup))
    return _nx.concatenate(arrays, 1)

This was in a big ticket I submitted about overhauling r_,c_,etc, which was largely ignored. Maybe I should resubmit this by itself... --bb From fperez.net at gmail.com Wed Aug 30 17:57:16 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 15:57:16 -0600 Subject: [Numpy-discussion] Changing Fatal error into ImportError? Message-ID: Hi all, this was mentioned in the past, but I think it fell through the cracks: Python 2.3.4 (#1, Mar 10 2006, 06:12:09) [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import mwadap Overwriting info= from scipy.misc (was from numpy.lib.utils) RuntimeError: module compiled against version 90909 of C-API but this version of numpy is 1000002 Fatal Python error: numpy.core.multiarray failed to import... exiting. I really think that this should raise ImportError, but NOT kill the python interpreter. If this happens in the middle of a long-running interactive session, you'll lose all of your current state/work, where a simple ImportError would have been enough to tell you that this particular module needed recompilation. FatalError should be reserved for situations where the internal state of the Python VM itself can not realistically be expected to be sane (corruption, complete memory exhaustion for even internal allocations, etc.) But killing the user's session for a failed import is a bit much, IMHO. Cheers, f From robert.kern at gmail.com Wed Aug 30 18:11:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Aug 2006 17:11:21 -0500 Subject: [Numpy-discussion] Changing Fatal error into ImportError?
In-Reply-To: References: Message-ID: Fernando Perez wrote: > Hi all, > > this was mentioned in the past, but I think it fell through the cracks: > > Python 2.3.4 (#1, Mar 10 2006, 06:12:09) > [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import mwadap > Overwriting info= from scipy.misc (was > from numpy.lib.utils) > RuntimeError: module compiled against version 90909 of C-API but this > version of numpy is 1000002 > Fatal Python error: numpy.core.multiarray failed to import... exiting. > > I really think that this should raise ImportError, but NOT kill the > python interpreter. If this happens in the middle of a long-running > interactive session, you'll lose all of your current state/work, where > a simple ImportError would have been enough to tell you that this > particular module needed recompilation. > > FatalError should be reserved for situations where the internal state > of the Python VM itself can not realistically be expected to be sane > (corruption, complete memory exhaustion for even internal allocations, > etc.) But killing the user's session for a failed import is a bit > much, IMHO. I don't see where we're calling Py_FatalError. The problem might be in Python or mwadap. Indeed, import_array() raises a PyExc_ImportError. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Wed Aug 30 18:36:19 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 30 Aug 2006 16:36:19 -0600 Subject: [Numpy-discussion] Changing Fatal error into ImportError? In-Reply-To: References: Message-ID: On 8/30/06, Robert Kern wrote: > I don't see where we're calling Py_FatalError. The problem might be in Python or > mwadap. Indeed, import_array() raises a PyExc_ImportError. 
Sorry for the noise: it looks like this was already fixed: http://projects.scipy.org/scipy/numpy/changeset/3044 since the code causing problems had been built /before/ 3044, we got the FatalError. But with modules built post-3044, it's all good (I artificially hacked the number to force the error): In [1]: import mwadap Overwriting info= from scipy.misc (was from numpy.lib.utils) --------------------------------------------------------------------------- exceptions.RuntimeError Traceback (most recent call last) RuntimeError: module compiled against version 1000001 of C-API but this version of numpy is 1000002 --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/fperez/research/code/mwadap-merge/mwadap/test/ /home/fperez/usr/lib/python2.3/site-packages/mwadap/__init__.py 9 glob,loc = globals(),locals() 10 for name in __all__: ---> 11 __import__(name,glob,loc,[]) 12 13 # Namespace cleanup /home/fperez/usr/lib/python2.3/site-packages/mwadap/Operator.py 18 19 # Our own packages ---> 20 import mwrep 21 from mwadap import mwqmfl, utils, Function, flinalg 22 ImportError: numpy.core.multiarray failed to import In [2]: Cheers, f From charlesr.harris at gmail.com Wed Aug 30 19:12:14 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 30 Aug 2006 17:12:14 -0600 Subject: [Numpy-discussion] upcast In-Reply-To: <1156955983.6572.13.camel@localhost> References: <1156955983.6572.13.camel@localhost> Message-ID: On 8/30/06, Lars Friedrich wrote: > > Hello, > > I would like to discuss the following code: > > #***start*** > import numpy as N > > a = N.array((200), dtype = N.uint8) > print (a * 100) / 100 This is actually a scalar, i.e., a zero dimensional array. N.uint8(200) would give you the same thing, because (200) is a number, not a tuple like (200,). 
In any case In [44]:a = array([200], dtype=uint8) In [45]:a*100 Out[45]:array([32], dtype=uint8) In [46]:uint8(100)*100 Out[46]:10000 i.e. , the array arithmetic is carried out in mod 256 because Numpy keeps the array type when multiplying by scalars. On the other hand, when multiplying a *scalar* by a number, the lower precision scalars are upconverted in the conventional way. Numpy makes the choices it does for space efficiency. If you want to work in uint8 you don't have to recast every time you multiply by a small integer. I suppose one could demand using uint8(1) instead of 1, but the latter is more convenient. Integers can be tricky once the ordinary precision is exceeded and modular arithmetic takes over, it just happens more easily for uint8 than for uint32. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Aug 30 19:24:34 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 30 Aug 2006 17:24:34 -0600 Subject: [Numpy-discussion] upcast In-Reply-To: <1156955983.6572.13.camel@localhost> References: <1156955983.6572.13.camel@localhost> Message-ID: On 8/30/06, Lars Friedrich wrote: > > Hello, > > I would like to discuss the following code: > > #***start*** > import numpy as N > > a = N.array((200), dtype = N.uint8) > print (a * 100) / 100 > > b = N.array((200, 200), dtype = N.uint8) > print (b * 100) / 100 > #***stop*** > > The first print statement will print "200" because the uint8-value is > cast "upwards", I suppose. The second statement prints "[0 0]". I > suppose this is due to overflows during the calculation. > > How can I tell numpy to do the upcast also in the second case, returning > "[200 200]"? I am interested in the fastest solution regarding execution > time. In my application I would like to store the result in an > Numeric.UInt8-array. 
> > Thanks for every comment To answer the original question, you need to use a higher precision array or explicitly cast it to higher precision. In [49]:(a.astype(int)*100)/100 Out[49]:array([200]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Thu Aug 31 01:02:35 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 30 Aug 2006 22:02:35 -0700 Subject: [Numpy-discussion] amd64 support In-Reply-To: References: Message-ID: <44F66D6B.5030506@msg.ucsf.edu> Keith Goodman wrote: > I plan to build an amd64 box and run debian etch. Are there any big, > 64-bit, show-stopping problems in numpy? Any minor annoyances? > I am not aware of any - it works fine for us on 32-bit and 64-bit with debian sarge and etch. -Sebastian Haase From haase at msg.ucsf.edu Thu Aug 31 01:11:05 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 30 Aug 2006 22:11:05 -0700 Subject: [Numpy-discussion] Use of numarray from numpy package [# INC NO 24609] In-Reply-To: <44F5B91C.5090202@astraw.com> References: <200608301252.k7UCqao8019664@discovery.saclay.cea.fr> <44F5B91C.5090202@astraw.com> Message-ID: <44F66F69.1010305@msg.ucsf.edu> Andrew Straw wrote: > LANDRIU David SAp wrote: >> Hello, >> >> I come back to my question : how to use numarray >> with the numpy installation ? >> >> {ccali22}~(0)>setenv PYTHONPATH /usr/local/lib/python2.3/site-packages/numpy >> > Here's where you went wrong. You want: > > setenv PYTHONPATH /usr/local/lib/python2.3/site-packages > >> {ccali22}~(0)>python >> Python 2.3.5 (#2, Oct 17 2005, 17:20:02) >> [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>>>> from numarray import * >>>>> >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 1, in ?
>> from util import * >> File "/usr/local/lib/python2.3/site-packages/numpy/numarray/util.py", line 2, in ? >> from numpy import geterr >> ImportError: No module named numpy >> > > Note that you're actually importing a numarray within numpy's directory > structure. That's because of your PYTHONPATH. numpy ships numpy.numarray > to provide backwards compatibility. To use it, you must do "import > numpy.numarray as numarray" > Just to explain -- there is only a numarray directory inside numpy to provide some special treatment for people making the transition from numarray to numpy - meaning: they can do something like from numpy import numarray and get a "numpy(!) version" that behaves more like numarray than the straight numpy ... Similar for "from numpy import oldnumeric as Numeric" (for people coming from Numeric) Yes - it is actually confusing, but that's the baggage when there are 2 (now 3) numerical python packages in human history. The future will be much brighter - forget all of the above, and just use import numpy (I like "import numpy as N" for less typing - others prefer even "from numpy import *" ) Hope that helps, - Sebastian Haase From lfriedri at imtek.de Thu Aug 31 01:25:40 2006 From: lfriedri at imtek.de (Lars Friedrich) Date: Thu, 31 Aug 2006 07:25:40 +0200 Subject: [Numpy-discussion] upcast In-Reply-To: References: <1156955983.6572.13.camel@localhost> Message-ID: <1157001940.6670.4.camel@gdur.breisach> > To answer the original question, you need to use a higher precision > array or explicitly cast it to higher precision. > > In [49]:(a.astype(int)*100)/100 > Out[49]:array([200]) Thank you. This is what I wanted to know.
Lars From torgil.svensson at gmail.com Thu Aug 31 02:15:36 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 08:15:36 +0200 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 Message-ID: I'm using windows datetimes (100nano-seconds since 0001,1,1) as time in a numpy array and was hit by this behaviour. >>> numpy.__version__ '1.0b4' >>> a=numpy.array([632925394330000000L],numpy.uint64) >>> t=a[0] >>> t 632925394330000000L >>> type(t) >>> t+1 6.3292539433e+017 >>> type(t+1) >>> t==(t+1) True I was trying to set t larger than any time in an array. Is there any reason for the scalar to upcast in this case? //Torgil From landriu at discovery.saclay.cea.fr Thu Aug 31 06:19:45 2006 From: landriu at discovery.saclay.cea.fr (LANDRIU David SAp) Date: Thu, 31 Aug 2006 12:19:45 +0200 (MEST) Subject: [Numpy-discussion] Use of numarray from numpy package Message-ID: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> Hello, I learned you answered me, but I did not get your message : can you send it to me again ? Thanks , David Landriu -------------------------------------------------------------------- David Landriu DAPNIA/SAp CEA SACLAY (France) Phone : (33|0)169088785 Fax : (33|0)169086577 --------------------------------------------------------------------- From lists.steve at arachnedesign.net Thu Aug 31 09:23:51 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Thu, 31 Aug 2006 09:23:51 -0400 Subject: [Numpy-discussion] Use of numarray from numpy package In-Reply-To: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> References: <200608311020.k7VAKWr5009000@discovery.saclay.cea.fr> Message-ID: <32ED73BE-DF47-4C4E-B6A7-3A79D72D0B25@arachnedesign.net> On Aug 31, 2006, at 6:19 AM, LANDRIU David SAp wrote: > I learned you answered me, but I did not get > your message : can you send it to me again ? Does this help? http://sourceforge.net/mailarchive/forum.php? 
thread_id=30384097&forum_id=4890 -steve From oliphant.travis at ieee.org Thu Aug 31 09:40:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 31 Aug 2006 07:40:28 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> Message-ID: <44F6E6CC.70206@ieee.org> Fernando Perez wrote: > On 8/30/06, Stefan van der Walt wrote: > > >> The current behaviour makes sense, but is maybe not consistent: >> >> N.array([],dtype=object).size == 1 >> N.array([[],[]],dtype=object).size == 2 >> > > Yes, including one more term in this check: > > In [5]: N.array([],dtype=object).size > Out[5]: 1 > > In [6]: N.array([[]],dtype=object).size > Out[6]: 1 > > In [7]: N.array([[],[]],dtype=object).size > Out[7]: 2 > > Intuitively, I'd have expected the answers to be 0,1,2, instead of 1,1,2. > > What about N.array(3).size N.array([3]).size N.array([3,3]).size Essentially, the [] is being treated as an object when you explicitly ask for an object array in exactly the same way as 3 is being treated as a number in the default case. It's just that '[' ']' is "also" being used as the dimension delimiter and thus the confusion. It is consistent. It's a corner case, and I have no problem fixing the special-case code running when dtype=object so that array([], dtype=object) returns an empty array, if that is the consensus. -Travis > Cheers, > > f > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From oliphant.travis at ieee.org Thu Aug 31 09:45:46 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 31 Aug 2006 07:45:46 -0600 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 In-Reply-To: References: Message-ID: <44F6E80A.90508@ieee.org> Torgil Svensson wrote: > I'm using windows datetimes (100nano-seconds since 0001,1,1) as time > in a numpy array and was hit by this behaviour. > > >>>> numpy.__version__ >>>> > '1.0b4' > >>>> a=numpy.array([632925394330000000L],numpy.uint64) >>>> t=a[0] >>>> t >>>> > 632925394330000000L > >>>> type(t) >>>> > > >>>> t+1 >>>> > 6.3292539433e+017 > >>>> type(t+1) >>>> > > >>>> t==(t+1) >>>> > True > > I was trying to set t larger than any time in an array. Is there any > reason for the scalar to upcast in this case? > Yes, because you are adding a signed scalar to an unsigned scalar and a float64 is the only thing that can handle it (well actually it should be the long double scalar but we've made a special case here because long doubles are not that common). Add an unsigned scalar t+numpy.uint64(1) to get what you want. -Travis > //Torgil > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tom.denniston at alum.dartmouth.org Thu Aug 31 09:47:31 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 08:47:31 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 Message-ID: In version 0.9.6 one used to be able to do this: In [4]: numpy.__version__ Out[4]: '0.9.6' In [6]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], dtype=object).shape Out[6]: (2, 3) In beta 1 numpy.PyObject no longer exists. I was trying to get the same behavior with dtype=object but it doesn't work: In [7]: numpy.__version__ Out[7]: '1.0b1' In [8]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], dtype=object).shape Out[8]: (2,) Is this an intentional change? From jonathan.taylor at utoronto.ca Thu Aug 31 10:19:19 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 31 Aug 2006 10:19:19 -0400 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 Message-ID: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> When trying to install 1.0b4 I had trouble getting it to detect my installed atlas. This was because the shipped site.cfg had: [atlas] library_dirs = /usr/lib/atlas/3dnow/ atlas_libs = lapack, blas but I had to change 3dnow to sse2 due to my current state of pentiumness. In any case it should probably look in all the possible locations instead of just AMD's location. Cheers. Jon.
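For anyone hitting the same detection failure, the workaround Jon describes amounts to editing the site.cfg shipped with the source so that numpy.distutils looks in the directory matching your CPU's ATLAS build. A hedged sketch (the sse2 path below is an example; point it at wherever your ATLAS libraries actually live):

```ini
[atlas]
# The shipped default pointed at AMD's 3dnow build; on a Pentium-class
# CPU with an SSE2 ATLAS installed, use the sse2 directory instead.
library_dirs = /usr/lib/atlas/sse2/
atlas_libs = lapack, blas
```

Ideally the detection code would search all the common ATLAS locations rather than a single hard-coded one, as Jon suggests.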
From dd55 at cornell.edu Thu Aug 31 09:57:44 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 31 Aug 2006 09:57:44 -0400 Subject: [Numpy-discussion] Release of 1.0b5 this weekend In-Reply-To: References: <44F48E1A.1020006@ieee.org> Message-ID: <200608310957.44947.dd55@cornell.edu> On Tuesday 29 August 2006 19:24, Fernando Perez wrote: > On 8/29/06, Travis Oliphant wrote: > > Hi all, > > > > Classes start for me next Tuesday, and I'm teaching a class for which I > > will be using NumPy / SciPy extensively. I need to have a release of > > these two (and hopefully matplotlib) that work with each other. > > > > Therefore, I'm going to make a 1.0b5 release of NumPy over the weekend > > (probably Monday), and also get a release of SciPy out as well. At that > > point, I'll only be available for bug-fixes to 1.0. Therefore, the next > > release after 1.0b5 I would like to be 1.0rc1 (release-candidate 1). > > What's the status of these 'overwriting' messages? > > planck[/tmp]> python -c 'import scipy;scipy.test()' > Overwriting info= from scipy.misc (was > from numpy.lib.utils) > Overwriting fft= from scipy.fftpack.basic > (was '/home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc' >> from > /home/fperez/tmp/local/lib/python2.3/site-packages/numpy/fft/__init__.pyc) > ... > > I was under the impression you'd decided to quiet them out, but they > seem to be making a comeback. Will these messages be included in NumPy-1.0? From Christophe.Blondeau at onera.fr Thu Aug 31 10:15:47 2006 From: Christophe.Blondeau at onera.fr (Christophe-Blondeau) Date: Thu, 31 Aug 2006 16:15:47 +0200 Subject: [Numpy-discussion] numpy/f2py module import segfault on HP-UX11.11 Message-ID: <44F6EF13.6030905@onera.fr> An HTML attachment was scrubbed... 
URL: From torgil.svensson at gmail.com Thu Aug 31 10:57:27 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 16:57:27 +0200 Subject: [Numpy-discussion] Unwanted upcast from uint64 to float64 In-Reply-To: <44F6E80A.90508@ieee.org> References: <44F6E80A.90508@ieee.org> Message-ID: > Yes, because you are adding a signed scalar to an unsigned scalar and a > float64 is the only thing that can handle it > > t+numpy.uint64(1) Thanks, this makes sense. This is a good thing to keep in the back of my head. //Torgil On 8/31/06, Travis Oliphant wrote: > Torgil Svensson wrote: > > I'm using windows datetimes (100nano-seconds since 0001,1,1) as time > > in a numpy array and was hit by this behaviour. > > > > > >>>> numpy.__version__ > >>>> > > '1.0b4' > > > >>>> a=numpy.array([632925394330000000L],numpy.uint64) > >>>> t=a[0] > >>>> t > >>>> > > 632925394330000000L > > > >>>> type(t) > >>>> > > > > > >>>> t+1 > >>>> > > 6.3292539433e+017 > > > >>>> type(t+1) > >>>> > > > > > >>>> t==(t+1) > >>>> > > True > > > > I was trying to set t larger than any time in an array. Is there any > > reason for the scalar to upcast in this case? > > > Yes, because you are adding a signed scalar to an unsigned scalar and a > float64 is the only thing that can handle it (well actually it should be > the long double scalar but we've made a special case here because long > doubles are not that common). Add an unsigned scalar > > t+numpy.uint64(1) > > to get what you want. > > -Travis > > > > //Torgil > > 
From fperez.net at gmail.com Thu Aug 31 11:08:36 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 31 Aug 2006 09:08:36 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F6E6CC.70206@ieee.org> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: On 8/31/06, Travis Oliphant wrote: > What about > > N.array(3).size > > N.array([3]).size > > N.array([3,3]).size > > Essentially, the [] is being treated as an object when you explicitly > ask for an object array in exactly the same way as 3 is being treated as > a number in the default case. It's just that '[' ']' is "also" being > used as the dimension delimiter and thus the confusion. > > It is consistent. It's a corner case, and I have no problem fixing the > special-case code running when dtype=object so that array([], > dtype=object) returns an empty array, if that is the consensus. 
I wasn't really complaining: these are corner cases I've never seen in real use, so I'm not really sure how critical it is to worry about them. Though I could see code which does automatic size/shape checks tripping on some of them. The shape tuples shed a bit of light on what's going on for the surprised (like myself): In [8]: N.array(3).shape Out[8]: () In [9]: N.array([3]).shape Out[9]: (1,) In [10]: N.array([3,3]).shape Out[10]: (2,) In [11]: N.array([]).shape Out[11]: (0,) In [12]: N.array([[]]).shape Out[12]: (1, 0) In [13]: N.array([[],[]]).shape Out[13]: (2, 0) I won't really vote for any changes one way or another, as far as I'm concerned it's one of those 'learn the library' things. I do realize that the near-ambiguity between '[]' as an empty object and '[]' as the syntactic delimiter for a container makes this case a bit of a gotcha. I guess my only remaining question is: what is the difference between outputs #8 and #11 above? Is an empty shape tuple == array scalar, while a (0,) shape indicates a one-dimensional array with no elements? If this interpretation is correct, what is the usage of the latter kind of object, given how it can't even be indexed? In [15]: N.array([])[0] --------------------------------------------------------------------------- exceptions.IndexError Traceback (most recent call last) /home/fperez/research/code/mjmdim/pycode/ IndexError: index out of bounds And is this really expected? In [18]: N.array([]).any() Out[18]: False In [19]: N.array([]).all() Out[19]: True It's a bit funny to have an array for which 'no elements are true' (any==false), yet 'all are true' (all==true), isn't it? 
Regards, f From charlesr.harris at gmail.com Thu Aug 31 11:33:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 09:33:25 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: On 8/31/06, Fernando Perez wrote: > > On 8/31/06, Travis Oliphant wrote: > > > What about > > > > N.array(3).size > > > > N.array([3]).size > > > > N.array([3,3]).size > > > > Essentially, the [] is being treated as an object when you explicitly > > ask for an object array in exactly the same way as 3 is being treated as > > a number in the default case. It's just that '[' ']' is "also" being > > used as the dimension delimiter and thus the confusion. > > > > It is consistent. It's a corner case, and I have no problem fixing the > > special-case code running when dtype=object so that array([], > > dtype=object) returns an empty array, if that is the consensus. > > I wasn't really complaining: these are corner cases I've never seen in > real use, so I'm not really sure how critical it is to worry about > them. Though I could see code which does automatic size/shape checks > tripping on some of them. The shape tuples shed a bit of light on > what's going on for the surprised (like myself): > > In [8]: N.array(3).shape > Out[8]: () > > In [9]: N.array([3]).shape > Out[9]: (1,) > > In [10]: N.array([3,3]).shape > Out[10]: (2,) > > In [11]: N.array([]).shape > Out[11]: (0,) > > In [12]: N.array([[]]).shape > Out[12]: (1, 0) > > In [13]: N.array([[],[]]).shape > Out[13]: (2, 0) > > > I won't really vote for any changes one way or another, as far as I'm > concerned it's one of those 'learn the library' things. I do realize > that the near-ambiguity between '[]' as an empty object and '[]' as > the syntactic delimiter for a container makes this case a bit of a > gotcha. 
> > I guess my only remaining question is: what is the difference between > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > while a (0,) shape indicates a one-dimensional array with no elements? > If this interpretation is correct, what is the usage of the latter > kind of object, given how it can't even be indexed? > > In [15]: N.array([])[0] > > --------------------------------------------------------------------------- > exceptions.IndexError Traceback (most > recent call last) > > /home/fperez/research/code/mjmdim/pycode/ > > IndexError: index out of bounds > > > And is this really expected? > > In [18]: N.array([]).any() > Out[18]: False This could be interpreted as : exists x, x element of array, s.t. x is true. In [19]: N.array([]).all() > Out[19]: True Seems right: for all x, x element of array, x is true. It's a bit funny to have an array for which 'no elements are true' > (any==false), yet 'all are true' (all==true), isn't it? Fun with empty sets! The question is, is a zero dimensional array an empty container or does it contain its value. The numpy choice of treating zero dimensional arrays as both empty containers and scalar values makes the determination a bit ambiguous although it is consistent with the indexing convention. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From humufr at yahoo.fr Thu Aug 31 11:43:59 2006 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Thu, 31 Aug 2006 11:43:59 -0400 Subject: [Numpy-discussion] numpy and dtype Message-ID: <200608311143.59711.humufr@yahoo.fr> Hi, sorry to bother you with probably plenty of stupid question but I would like to clarify my mind with dtype. I have a problem to view a recarray, I'm not sure but I suspect a bug or at least a problem I have an array with some data, the array is very big but I have no problem with numpy. 
In [44]: data_end Out[44]: array([[ 2.66000000e+02, 5.16300000e+04, 1.00000000e+00, ..., -1.04130435e+00, 1.47304565e+02, 4.27402449e+00], [ 2.66000000e+02, 5.16300000e+04, 2.00000000e+00, ..., -6.52190626e-01, 1.64214981e+02, 1.58334379e+01], [ 2.66000000e+02, 5.16300000e+04, 4.00000000e+00, ..., -7.65136838e-01, 1.33340195e+02, 9.84033298e+00], ..., [ 9.78000000e+02, 5.24310000e+04, 6.32000000e+02, ..., 3.06083832e+01, 6.71210251e+01, 1.18813887e+01], [ 9.78000000e+02, 5.24310000e+04, 6.36000000e+02, ..., 3.05993423e+01, 1.10403000e+02, 5.81539488e+00], [ 9.78000000e+02, 5.24310000e+04, 6.40000000e+02, ..., 3.05382938e+01, 1.26916304e+01, 3.25683937e+01]]) In [45]: data_end.shape Out[45]: (567486, 7) In [46]: data_end.dtype Out[46]: dtype('i2','>i4','>i2','>f4','>f4','>f4','>f4']}) In [49]: b = numpy.rec.fromarrays(data_end.transpose(),type_descr) In [50]: b[:1] Out[50]: recarray([ (266, 51630, 1, 146.71420288085938, -1.041304349899292, 147.3045654296875, 4.274024486541748)], dtype=[('PLATEID', '>i2'), ('MJD', '>i4'), ('FIBERID', '>i2'), ('RA', '>f4'), ('DEC', '>f4'), ('V_DISP', '>f4'), ('V_DISP_ERR', '>f4')]) In [51]: b[1] Out[51]: (266, 51630, 2, 146.74412536621094, -0.65219062566757202, 164.21498107910156, 15.833437919616699) but I obtain an error when I want to print the recarray b (it's working for smallest array): In [54]: b[:10] Out[54]: recarray([ (266, 51630, 1, 146.71420288085938, -1.041304349899292, 147.3045654296875, 4.274024486541748), (266, 51630, 2, 146.74412536621094, -0.65219062566757202, 164.21498107910156, 15.833437919616699), (266, 51630, 4, 146.62857055664062, -0.76513683795928955, 133.34019470214844, 9.8403329849243164), (266, 51630, 6, 146.63166809082031, -0.98827779293060303, 146.91035461425781, 30.08709716796875), (266, 51630, 7, 146.91944885253906, -0.99049174785614014, 152.96893310546875, 12.429832458496094), (266, 51630, 9, 146.76339721679688, -0.81043314933776855, 347.72918701171875, 41.387767791748047), (266, 51630, 10, 
146.62281799316406, -0.9513852596282959, 162.53567504882812, 24.676788330078125), (266, 51630, 11, 146.93409729003906, -0.67040395736694336, 266.56011962890625, 10.875675201416016), (266, 51630, 12, 146.96389770507812, -0.54500257968902588, 92.040328979492188, 18.999214172363281), (266, 51630, 13, 146.9635009765625, -0.75935173034667969, 72.828048706054688, 13.028598785400391)], dtype=[('PLATEID', '>i2'), ('MJD', '>i4'), ('FIBERID', '>i2'), ('RA', '>f4'), ('DEC', '>f4'), ('V_DISP', '>f4'), ('V_DISP_ERR', '>f4')]) So I would like to know if it's normal. And another question is it possile to do, in theory, something like: b = numpy.array(data_end,dtype=type_descr) or all individual array element must have the same dtype? To replace the context, I have a big fits table, I want to use only some columns from the table so I did: table = pyfits.getdata('gal_info_dr4_v5_1b.fit') #pyfits can't read, at least now the gzip file #the file is a fits table file so we look in the pyfits doc to read it! 
fields = ['PLATEID', 'MJD', 'FIBERID', 'RA', 'DEC','V_DISP','V_DISP_ERR'] type_descr = numpy.dtype({'names':fields,'formats': [' /home/gruel/usr/lib/python2.4/site-packages/IPython/Prompts.py in __call__(self, arg) 514 515 # and now call a possibly user-defined print mechanism --> 516 manipulated_val = self.display(arg) 517 518 # user display hooks can change the variable to be stored in /home/gruel/usr/lib/python2.4/site-packages/IPython/Prompts.py in _display(self, arg) 538 """ 539 --> 540 return self.shell.hooks.result_display(arg) 541 542 # Assign the default display method: /home/gruel/usr/lib/python2.4/site-packages/IPython/hooks.py in __call__(self, *args, **kw) 132 #print "prio",prio,"cmd",cmd #dbg 133 try: --> 134 ret = cmd(*args, **kw) 135 return ret 136 except ipapi.TryNext, exc: /home/gruel/usr/lib/python2.4/site-packages/IPython/hooks.py in result_display(self, arg) 153 154 if self.rc.pprint: --> 155 out = pformat(arg) 156 if '\n' in out: 157 # So that multi-line strings line up with the left column of /usr/lib/python2.4/pprint.py in pformat(self, object) 108 def pformat(self, object): 109 sio = _StringIO() --> 110 self._format(object, sio, 0, 0, {}, 0) 111 return sio.getvalue() 112 /usr/lib/python2.4/pprint.py in _format(self, object, stream, indent, allowance, context, level) 126 self._readable = False 127 return --> 128 rep = self._repr(object, context, level - 1) 129 typ = _type(object) 130 sepLines = _len(rep) > (self._width - 1 - indent - allowance) /usr/lib/python2.4/pprint.py in _repr(self, object, context, level) 192 def _repr(self, object, context, level): 193 repr, readable, recursive = self.format(object, context.copy(), --> 194 self._depth, level) 195 if not readable: 196 self._readable = False /usr/lib/python2.4/pprint.py in format(self, object, context, maxlevels, level) 204 and whether the object represents a recursive construct. 
205 """ --> 206 return _safe_repr(object, context, maxlevels, level) 207 208 /usr/lib/python2.4/pprint.py in _safe_repr(object, context, maxlevels, level) 289 return format % _commajoin(components), readable, recursive 290 --> 291 rep = repr(object) 292 return rep, (rep and not rep.startswith('<')), False 293 /home/gruel/usr/lib/python2.4/site-packages/numpy/core/numeric.py in array_repr(arr, max_line_width, precision, suppress_small) 389 if arr.size > 0 or arr.shape==(0,): 390 lst = array2string(arr, max_line_width, precision, suppress_small, --> 391 ', ', "array(") 392 else: # show zero-length shape unless it is (0,) 393 lst = "[], shape=%s" % (repr(arr.shape),) /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in array2string(a, max_line_width, precision, suppress_small, separator, prefix, style) 202 else: 203 lst = _array2string(a, max_line_width, precision, suppress_small, --> 204 separator, prefix) 205 return lst 206 /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in _array2string(a, max_line_width, precision, suppress_small, separator, prefix) 137 if a.size > _summaryThreshold: 138 summary_insert = "..., " --> 139 data = _leading_trailing(a) 140 else: 141 summary_insert = "" /home/gruel/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py in _leading_trailing(a) 108 if a.ndim == 1: 109 if len(a) > 2*_summaryEdgeItems: --> 110 b = _gen.concatenate((a[:_summaryEdgeItems], 111 a[-_summaryEdgeItems:])) 112 else: TypeError: expected a readable buffer object Out[53]: From charlesr.harris at gmail.com Thu Aug 31 11:44:14 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 09:44:14 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > In version 0.9.6 one used to be able to do this: > > In [4]: numpy.__version__ > Out[4]: '0.9.6' > > > In [6]: numpy.array([numpy.array([4,5,6]), 
numpy.array([1,2,3])], > dtype=object).shape > Out[6]: (2, 3) > > > In beta 1 numpy.PyObject no longer exists. I was trying to get the > same behavior with dtype=object but it doesn't work: > > In [7]: numpy.__version__ > Out[7]: '1.0b1' > > In [8]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], > dtype=object).shape > Out[8]: (2,) The latter looks more correct, in that it produces an array of objects. To get the previous behaviour, there is the function vstack: In [6]: a = array([1,2,3]) In [7]: b = array([4,5,6]) In [8]: vstack([a,b]) Out[8]: array([[1, 2, 3], [4, 5, 6]]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 11:59:36 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 10:59:36 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: For this simple example yes, but one of the nice things about the array constructors is that they know that lists, tuples, and arrays are just sequences and any combination of them is valid numpy input. So for instance a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a 3d array. A list of 1d arrays yields a 2d array. This was the case consistently across all dtypes. Now it is the case across all of them except for dtype=object, which has this unusual behavior. 
I agree that vstack is a better choice when you know you have a list of arrays but it is less useful when you don't know and you can't force a type in the vstack function so there is no longer an equivalent to the dtype=object behavior: In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], dtype=object) Out[7]: array([[1, 2, 3], [4, 5, 6]], dtype=object) In [8]: numpy.vstack([numpy.array([1,2,3]), numpy.array([4,5,6])], dtype=object) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) TypeError: vstack() got an unexpected keyword argument 'dtype' On 8/31/06, Charles R Harris wrote: > On 8/31/06, Tom Denniston > wrote: > > > In version 0.9.6 one used to be able to do this: > > > > In [4]: numpy.__version__ > > Out[4]: '0.9.6' > > > > > > In [6]: numpy.array([numpy.array([4,5,6]), numpy.array([1,2,3])], > > dtype=object).shape > > Out[6]: (2, 3) > > > > > > In beta 1 numpy.PyObject no longer exists. I was trying to get the > > same behavior with dtype=object but it doesn't work: > > > > In [7]: numpy.__version__ > > Out[7]: '1.0b1' > > > > In [8]: numpy.array([numpy.array ([4,5,6]), numpy.array([1,2,3])], > > dtype=object).shape > > Out[8]: (2,) > > > The latter looks more correct, in that is produces an array of objects. To > get the previous behaviour there is the function vstack: > > In [6]: a = array([1,2,3]) > > In [7]: b = array([4,5,6]) > > In [8]: vstack([a,b]) > Out[8]: > array([[1, 2, 3], > [4, 5, 6]]) > > Chuck > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Aug 31 12:24:35 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 10:24:35 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > For this simple example yes, but if one of the nice things about the array > constructors is that they know that lists, tuple and arrays are just > sequences and any combination of them is valid numpy input. So for instance > a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a > 3d array. A list of 1d arrays yields 2d array. This was the case > consistently across all dtypes. Now it is the case across all of them > except for the dtype=object which has this unusual behavior. I agree that > vstack is a better choice when you know you have a list of arrays but it is > less useful when you don't know and you can't force a type in the vstack > function so there is no longer an equivalent to the dtype=object behavior: > > In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], > dtype=object) > Out[7]: > array([[1, 2, 3], > [4, 5, 6]], dtype=object) > What are you trying to do? 
If you want integers: In [32]: a = array([array([1,2,3]), array([4,5,6])], dtype=int) In [33]: a.shape Out[33]: (2, 3) If you want objects, you have them: In [30]: a = array([array([1,2,3]), array([4,5,6])], dtype=object) In [31]: a.shape Out[31]: (2,) i.e, a is an array containing two array objects. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Aug 31 12:36:08 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 09:36:08 -0700 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 In-Reply-To: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> References: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> Message-ID: <44F70FF8.6090801@noaa.gov> Jonathan Taylor wrote: > When trying to install 1.0b4 I had trouble getting it to detect my > installed atlas. This was because the shipped site.cfg had; > > [atlas] > library_dirs = /usr/lib/atlas/3dnow/ > atlas_libs = lapack, blas > > but I had to change 3dnow to sse2 due to my current state of > pentiumness. In any case it should probabally look in all the > possible locations instead of just AMD's location. "All possible locations" is pretty much impossible. There really isn't any choice but for individuals to customize site.cfg for their setup. that's why it's called "site".cfg. I would like to see a pretty good collection of examples, most of them commented out, in there, however. i.e.: ## for AMD atlas: #library_dirs = /usr/lib/atlas/3dnow/ #atlas_libs = lapack, blas ## for Fedora Core 4 sse2 atlas: #library_dirs = /usr/lib/sse2/ #atlas_libs = lapack, blas etc, etc. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Aug 31 12:46:06 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 09:46:06 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> Message-ID: <44F7124E.7010702@noaa.gov> Fernando Perez wrote: > In [8]: N.array(3).shape > Out[8]: () > In [11]: N.array([]).shape > Out[11]: (0,) > I guess my only remaining question is: what is the difference between > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > while a (0,) shape indicates a one-dimensional array with no elements? > If this interpretation is correct, what is the usage of the latter > kind of object, given how it can't even be indexed? It can be iterated over (with zero iterations): >>> a = N.array([]) >>> for i in a: ... print i ... whereas the scalar cannot: >>> b = N.array(3) >>> b array(3) >>> for i in b: ... print i ... Traceback (most recent call last): File "", line 1, in ? TypeError: iteration over a scalar (0-dim array) Of course the scalar isn't empty, so it's different in that way too. Can there be an empty scalar? It doesn't look like it. In fact, this looks like it may be a bug: >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a () 1 6 That's what I'd expect, but what if you start with a (0,) array: >>> a = N.array([]).sum(); a.shape; a.size; a () 1 0 where did that zero come from? >>> N.__version__ '1.0b4' -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Thu Aug 31 12:51:01 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 10:51:01 -0600 Subject: [Numpy-discussion] BLAS not found in numpy 1.0b4 In-Reply-To: <44F70FF8.6090801@noaa.gov> References: <463e11f90608310719m314360e3ue6be8ea6a5fe18fc@mail.gmail.com> <44F70FF8.6090801@noaa.gov> Message-ID: On 8/31/06, Christopher Barker wrote: > > Jonathan Taylor wrote: > > When trying to install 1.0b4 I had trouble getting it to detect my > > installed atlas. This was because the shipped site.cfg had; > > > > [atlas] > > library_dirs = /usr/lib/atlas/3dnow/ > > atlas_libs = lapack, blas > > > > but I had to change 3dnow to sse2 due to my current state of > > pentiumness. In any case it should probabally look in all the > > possible locations instead of just AMD's location. > > "All possible locations" is pretty much impossible. There really isn't > any choice but for individuals to customize site.cfg for their setup. > that's why it's called "site".cfg. > > I would like to see a pretty good collection of examples, most of them > commented out, in there, however. i.e.: I need this on fc5 x86_64 [atlas] library_dirs = /usr/lib64/atlas atlas_libs = lapack, blas, cblas, atlas I think this should be automatic. Apart from debian, the /usr/lib64 directory is pretty much standard for 64bit linux distros. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tim.hochberg at ieee.org Thu Aug 31 12:57:25 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Thu, 31 Aug 2006 09:57:25 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F7124E.7010702@noaa.gov> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> Message-ID: <44F714F5.9050305@ieee.org> Christopher Barker wrote: > Fernando Perez wrote: > >> In [8]: N.array(3).shape >> Out[8]: () >> > > >> In [11]: N.array([]).shape >> Out[11]: (0,) >> > > >> I guess my only remaining question is: what is the difference between >> outputs #8 and #11 above? Is an empty shape tuple == array scalar, >> while a (0,) shape indicates a one-dimensional array with no elements? >> If this interpretation is correct, what is the usage of the latter >> kind of object, given how it can't even be indexed? >> > > It can be iterated over (with zero iterations): > > >>> a = N.array([]) > >>> for i in a: > ... print i > ... > > whereas the scalar can not: > > >>> b = N.array(3) > >>> b > array(3) > >>> for i in b: > ... print i > ... > Traceback (most recent call last): > File "", line 1, in ? > TypeError: iteration over a scalar (0-dim array) > > Of course the scalar isn't empty, so ti's different in that way too. Can > there be an empty scalar? It doesn't look like it. In fact, this looks > like it may be a bug: > >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a > () > 1 > 6 > > That's what I'd expect, but what if you start with a (0,) array: > >>> a = N.array([]).sum(); a.shape; a.size; a > () > 1 > 0 > > where did that zero come from? > More or less from: >>> numpy.add.identity 0 All the ufuncs have an identity function that they use as a starting point for reduce and accumulate. Sum doesn't appear to actually have one, but since it's more or less the same as add.reduce it's probably good that it has the same behavior. 
Note that this also matches the behavior of Python's built-in sum, although there the identity is called 'start'. -tim > >>> N.__version__ > '1.0b4' > > -Chris > > > > From tom.denniston at alum.dartmouth.org Thu Aug 31 13:00:06 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 12:00:06 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: But I have heterogeneous arrays that have numbers and strings and NoneType, etc. Take for instance: In [11]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object) Out[11]: array([[1, A, None], [2, 2, Some string]], dtype=object) In [12]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object).shape Out[12]: (2, 3) Works fine in Numeric and pre-beta numpy, but in beta numpy versions I get: In [6]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object) Out[6]: array([[1 A None], [2 2 Some string]], dtype=object) In [7]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])], dtype=object).shape Out[7]: (2,) But a list of lists still gives: In [9]: numpy.array([[1,'A', None], [2,2,'Some string']], dtype=object).shape Out[9]: (2, 3) And if you omit the dtype and use a list of arrays then you get a string array with shape (2, 3): In [11]: numpy.array([numpy.array([1,'A', None]), numpy.array([2,2,'Some string'])]).shape Out[11]: (2, 3) This new behavior strikes me as inconsistent. One of the things I love about numpy is the ufuncs, array constructors, etc don't care about whether you pass a combination of lists, arrays, tuples, etc. They just know what you _mean_. And what you _mean_ by a list of lists, a tuple of arrays, a list of arrays, etc. is very consistent across constructors and functions. 
I think making an exception for dtype=object introduces a lot of inconsistencies and it isn't clear to me what is gained. Does anyone commonly use the array interface in a manner where this new behavior is actually favorable? I may be overlooking a common use case or something like that. On 8/31/06, Charles R Harris wrote: > > > > On 8/31/06, Tom Denniston > wrote: > > > > For this simple example yes, but if one of the nice things about the array > constructors is that they know that lists, tuple and arrays are just > sequences and any combination of them is valid numpy input. So for instance > a list of tuples yields a 2d array. A list of tuples of 1d arrays yields a > 3d array. A list of 1d arrays yields 2d array. This was the case > consistently across all dtypes. Now it is the case across all of them > except for the dtype=object which has this unusual behavior. I agree that > vstack is a better choice when you know you have a list of arrays but it is > less useful when you don't know and you can't force a type in the vstack > function so there is no longer an equivalent to the dtype=object behavior: > > > > In [7]: numpy.array([numpy.array([1,2,3]), numpy.array([4,5,6])], > dtype=object) > > Out[7]: > > array([[1, 2, 3], > > [4, 5, 6]], dtype=object) > > > What are you trying to do? If you want integers: > > In [32]: a = array([array([1,2,3]), array([4,5,6])], dtype=int) > > In [33]: a.shape > Out[33]: (2, 3) > > > If you want objects, you have them: > > In [30]: a = array([array([1,2,3]), array([4,5,6])], dtype=object) > > In [31]: a.shape > Out[31]: (2,) > > i.e, a is an array containing two array objects. > > Chuck > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From charlesr.harris at gmail.com Thu Aug 31 13:26:15 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:26:15 -0600 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F7124E.7010702@noaa.gov> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> Message-ID: On 8/31/06, Christopher Barker wrote: > > Fernando Perez wrote: > > In [8]: N.array(3).shape > > Out[8]: () > > > In [11]: N.array([]).shape > > Out[11]: (0,) > > > I guess my only remaining question is: what is the difference between > > outputs #8 and #11 above? Is an empty shape tuple == array scalar, > > while a (0,) shape indicates a one-dimensional array with no elements? > > If this interpretation is correct, what is the usage of the latter > > kind of object, given how it can't even be indexed? > > It can be iterated over (with zero iterations): > > >>> a = N.array([]) > >>> for i in a: > ... print i > ... > > whereas the scalar can not: > > >>> b = N.array(3) > >>> b > array(3) > >>> for i in b: > ... print i > ... > Traceback (most recent call last): > File "", line 1, in ? > TypeError: iteration over a scalar (0-dim array) > > Of course the scalar isn't empty, so ti's different in that way too. Can > there be an empty scalar? It doesn't look like it. 
In fact, this looks > like it may be a bug: > >>> a = N.array([1,2,3]).sum(); a.shape; a.size; a > () > 1 > 6 > > That's what I'd expect, but what if you start with a (0,) array: > >>> a = N.array([]).sum(); a.shape; a.size; a > () > 1 > 0 > > where did that zero come from? I think that is correct, sums over empty sets are conventionally set to zero because they are conceived of as adding all the values in the set to zero. Typically this would be implemented as sum = 0 for i in set : sum += i; Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Aug 31 13:36:16 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:36:16 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Tom Denniston wrote: > > But i have hetergenious arrays that have numbers and strings and NoneType, > etc. > > Take for instance: > > In [11]: numpy.array([numpy.array([1,'A', None]), > numpy.array([2,2,'Some string'])], dtype=object) > Out[11]: > array([[1, A, None], > [2, 2, Some string]], dtype=object) > > In [12]: numpy.array([numpy.array([1,'A', None]), > numpy.array([2,2,'Some string'])], dtype=object).shape > Out[12]: (2, 3) > > Works fine in Numeric and pre beta numpy but in beta numpy versions i get: I think you want: In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some string'],dtype=object)]) In [60]: a.shape Out[60]: (2, 3) Which makes good sense to me. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
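Picking up the empty-array question from earlier in the thread: the () versus (0,) distinction and the empty-sum identity can both be checked directly. A sketch; exact reprs vary across versions, but the behavior below should hold in any modern NumPy:

```python
import numpy as np

scalar = np.array(3)    # shape (): a 0-d array
empty = np.array([])    # shape (0,): a 1-d array with no elements
assert scalar.shape == () and empty.shape == (0,)

# The empty array iterates zero times; the 0-d array refuses iteration.
assert [x for x in empty] == []
try:
    iter(scalar)
    raised = False
except TypeError:
    raised = True
assert raised

# Summing no elements returns add's identity, 0 -- Chuck's point about
# sums over empty sets starting from zero.
assert empty.sum() == 0.0
```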
URL: From charlesr.harris at gmail.com Thu Aug 31 13:57:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 11:57:44 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: On 8/31/06, Charles R Harris wrote: > > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. > > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > Which makes good sense to me. > OK, I changed my mind. I think you are right and this is a bug. Using svn revision 3098 I get In [2]: a = array([1,'A', None]) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/charris/ TypeError: expected a readable buffer object Which is different than you get with beta 1 in any case. I think array should cast the objects in the list to the first common dtype, object in this case. So the previous should be shorthand for: In [3]: a = array([1,'A', None], dtype=object) In [4]: a.shape Out[4]: (3,) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
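Chuck's expected shorthand can be checked directly: with the dtype given explicitly, the mixed list does come out as a length-3 object array in modern NumPy releases.

```python
import numpy as np

# A heterogeneous list with an explicit object dtype: each element is
# kept as-is, and the result is a flat (3,) array.
a = np.array([1, 'A', None], dtype=object)

assert a.shape == (3,)
assert a[0] == 1 and a[1] == 'A' and a[2] is None
```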
URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 14:08:17 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 13:08:17 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: Yes one can take a toy example and hack it to work but I don't necessarily have control over the input as to whether it is a list of object arrays, list of 1d heterogenous arrays, etc. Before I didn't need to worry about the input because numpy understood that a list of 1d arrays is a 2d piece of data. Now it understands this for all dtypes except object. My question was is this new set of semantics preferable to the old. I think your example kind of proves my point. Does it really make any sense for the following two ways of specifying an array give such different results? They strike me as _meaning_ the same thing. Doesn't it seem inconsistent to you? In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some string'],dtype=object)], dtype=object).shape Out[13]: (2,) and In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some string'],dtype=object)]).shape Out[14]: (2, 3) So my question is what is the _advantage_ of the new semantics? The two examples above used to give the same results. In what cases is it preferable for them to give different results? How does it make life simpler? On 8/31/06, Charles R Harris wrote: > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. 
> > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > > Which makes good sense to me. > > Chuck > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.denniston at alum.dartmouth.org Thu Aug 31 14:11:22 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 13:11:22 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: wrote the last email before reading your a = array([1,'A', None]) comment. I definately agree with you on that. On 8/31/06, Tom Denniston wrote: > > Yes one can take a toy example and hack it to work but I don't > necessarily have control over the input as to whether it is a list of object > arrays, list of 1d heterogenous arrays, etc. 
Before I didn't need to worry > about the input because numpy understood that a list of 1d arrays is a > 2d piece of data. Now it understands this for all dtypes except object. My > question was is this new set of semantics preferable to the old. > > I think your example kind of proves my point. Does it really make any > sense for the following two ways of specifying an array give such different > results? They strike me as _meaning_ the same thing. Doesn't it seem > inconsistent to you? > > > In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some > string'],dtype=object)], dtype=object).shape > Out[13]: (2,) > > and > > In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some > string'],dtype=object)]).shape > Out[14]: (2, 3) > So my question is what is the _advantage_ of the new semantics? The two > examples above used to give the same results. In what cases is it > preferable for them to give different results? How does it make life > simpler? > > > On 8/31/06, Charles R Harris wrote: > > > On 8/31/06, Tom Denniston wrote: > > > But i have hetergenious arrays that have numbers and strings and > > NoneType, etc. > > > > Take for instance: > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object) > > Out[11]: > > array([[1, A, None], > > [2, 2, Some string]], dtype=object) > > > > In [12]: numpy.array([ numpy.array([1,'A', None]), > > numpy.array([2,2,'Some string'])], dtype=object).shape > > Out[12]: (2, 3) > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > get: > > > I think you want: > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > string'],dtype=object)]) > > In [60]: a.shape > Out[60]: (2, 3) > > > Which makes good sense to me. > > Chuck > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From secchi at sssup.it Thu Aug 31 14:13:29 2006 From: secchi at sssup.it (Angelo Secchi) Date: Thu, 31 Aug 2006 20:13:29 +0200 Subject: [Numpy-discussion] Strange exp Message-ID: <20060831201329.49946c4e.secchi@sssup.it> Hi, I have the following script import fileinput import string from math import * from scipy import * from rpy import * import Numeric import shelve import sys def dpolya1(n,N,b,a): a=float(a) b=float(b) L=784 probs=((special.gammaln(N+1)+special.gammaln(L*(a/b))+special.gammaln((a/b)+n)+special.gammaln((a/b)*(L-1)+N-n))-(special.gammaln(L*(a/b)+N)+special.gammaln(a/b)+special.gammaln(n+1)+special.gammaln(N-n+1)+special.gammaln(L*(a/b)-(a/b))))#) return probs and I observe the following "strange" (for me of course) behaviour >>> dpolya1(1,2,0.5,0.4) -5.9741312822170585 >>> type(dpolya1(1,2,0.5,0.4)) >>> exp(dpolya1(1,2,0.5,0.4)) Traceback (most recent call last): File "", line 1, in ? AttributeError: 'numpy.ndarray' object has no attribute 'exp' I do not understand what's wrong. Any help? Thanks Angelo From torgil.svensson at gmail.com Thu Aug 31 14:21:50 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 20:21:50 +0200 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: <44F5A1B5.7090409@ieee.org> References: <44F5A1B5.7090409@ieee.org> Message-ID: > Yes. fromiter(iterable, dtype, count) works. Oh. Thanks. I probably had too old documentation to see this (15 June). 
If it's not updated since I'll give Travis a rest about this, at least until 1.0 is released :) > Regardless, L is only iterated over once. How can this be true? If no size is given, mustn't numpy either loop over L twice or build an internal representation on which it'll iterate or copy in chunks? I just found out that this works >>> import numpy,itertools >>> rec_dt=numpy.dtype(">i4,S10,f8") >>> rec_iter=itertools.cycle([(1,'s',4.0),(5,'y',190.0),(2,'h',-8)]) >>> numpy.fromiter(rec_iter,rec_dt,10).view(recarray) recarray([(1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0)], dtype=[('f0', '>i4'), ('f1', '|S10'), ('f2', '<f8')]) but what's wrong with this? >>> d2_dt=numpy.dtype("4f8") >>> d2_iter=itertools.cycle([(1.0,numpy.nan,-1e10,14.0)]) >>> numpy.fromiter(d2_iter,d2_dt,10) Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: a float is required >>> numpy.__version__ '1.0b4' //Torgil On 8/30/06, Tim Hochberg wrote: > Torgil Svensson wrote: > >> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > >> > > > > Is it possible for fromiter to take an optional shape (or count) > > argument in addition to the dtype argument? > Yes. fromiter(iterable, dtype, count) works. > > > If both is given it could > > preallocate memory and we only have to iterate over L once. > > > Regardless, L is only iterated over once. In general you can't rewind > iterators, so that's a requirement. This is accomplished by doing > successive overallocation similar to the way appending to a list is > handled. By specifying the count up front you save a bunch of reallocs, > but no iteration. > > -tim > > > > > //Torgil > > > > On 8/29/06, Keith Goodman wrote: > > > >> On 8/29/06, Torgil Svensson wrote: > >> > >>> something like this?
> >>> > >>> def list2index(L): > >>> uL=sorted(set(L)) > >>> idx=dict((y,x) for x,y in enumerate(uL)) > >>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > >>> > >> Wow. That's amazing. Thank you. > >> > >> ------------------------------------------------------------------------- > >> Using Tomcat but need to do more? Need to support web services, security? > >> Get stuff done quickly with pre-integrated technology to make your job easier > >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > >> _______________________________________________ > >> Numpy-discussion mailing list > >> Numpy-discussion at lists.sourceforge.net > >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > >> > >> > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From torgil.svensson at gmail.com Thu Aug 31 14:25:12 2006 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Thu, 31 Aug 2006 20:25:12 +0200 Subject: [Numpy-discussion] For loop tips In-Reply-To: References: <44F48A0B.7020401@ieee.org> Message-ID: def list2index(L): uL=sorted(set(L)) idx=dict((y,x) for x,y in enumerate(uL)) return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int,count=len(L))) adding the count will save you a little more time, and temporary memory [see related thread]. //Torgil On 8/29/06, Keith Goodman wrote: > On 8/29/06, Torgil Svensson wrote: > > something like this? > > > > def list2index(L): > > uL=sorted(set(L)) > > idx=dict((y,x) for x,y in enumerate(uL)) > > return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) > > Wow. That's amazing. Thank you. > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From charlesr.harris at gmail.com Thu Aug 31 14:35:25 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 31 Aug 2006 12:35:25 -0600 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: I submitted a ticket for this. On 8/31/06, Tom Denniston wrote: > > wrote the last email before reading your a = array([1,'A', None]) > comment. I definately agree with you on that. > > > On 8/31/06, Tom Denniston wrote: > > > > Yes one can take a toy example and hack it to work but I don't > > necessarily have control over the input as to whether it is a list of object > > arrays, list of 1d heterogenous arrays, etc. Before I didn't need to worry > > about the input because numpy understood that a list of 1d arrays is a > > 2d piece of data. Now it understands this for all dtypes except object. My > > question was is this new set of semantics preferable to the old. > > > > I think your example kind of proves my point. Does it really make any > > sense for the following two ways of specifying an array give such different > > results? They strike me as _meaning_ the same thing. Doesn't it seem > > inconsistent to you? > > > > > > In [13]: array([array([1,'A', None], dtype=object),array([2,2,'Some > > string'],dtype=object)], dtype=object).shape > > Out[13]: (2,) > > > > and > > > > In [14]: array([array([1,'A', None], dtype=object),array([2,2,'Some > > string'],dtype=object)]).shape > > Out[14]: (2, 3) > > So my question is what is the _advantage_ of the new semantics? 
The two > > examples above used to give the same results. In what cases is it > > preferable for them to give different results? How does it make life > > simpler? > > > > > > On 8/31/06, Charles R Harris wrote: > > > > > On 8/31/06, Tom Denniston wrote: > > > > > But i have hetergenious arrays that have numbers and strings and > > > NoneType, etc. > > > > > > Take for instance: > > > > > > In [11]: numpy.array([numpy.array([1,'A', None]), > > > numpy.array([2,2,'Some string'])], dtype=object) > > > Out[11]: > > > array([[1, A, None], > > > [2, 2, Some string]], dtype=object) > > > > > > In [12]: numpy.array([ numpy.array([1,'A', None]), > > > numpy.array([2,2,'Some string'])], dtype=object).shape > > > Out[12]: (2, 3) > > > > > > Works fine in Numeric and pre beta numpy but in beta numpy versions i > > > get: > > > > > > I think you want: > > > > In [59]: a = array([array([1,'A', None],dtype=object),array([2,2,'Some > > string'],dtype=object)]) > > > > In [60]: a.shape > > Out[60]: (2, 3) > > > > > > Which makes good sense to me. > > > > Chuck > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, > > security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache > > Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Aug 31 14:35:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 Aug 2006 13:35:11 -0500 Subject: [Numpy-discussion] Strange exp In-Reply-To: <20060831201329.49946c4e.secchi@sssup.it> References: <20060831201329.49946c4e.secchi@sssup.it> Message-ID: Angelo Secchi wrote: > Hi, > I have the following script > > import fileinput > import string > from math import * > from scipy import * > from rpy import * > import Numeric > import shelve > import sys > > def dpolya1(n,N,b,a): > a=float(a) > b=float(b) > L=784 > probs=((special.gammaln(N+1)+special.gammaln(L*(a/b))+special.gammaln((a/b)+n)+special.gammaln((a/b)*(L-1)+N-n))-(special.gammaln(L*(a/b)+N)+special.gammaln(a/b)+special.gammaln(n+1)+special.gammaln(N-n+1)+special.gammaln(L*(a/b)-(a/b))))#) > return probs > > and I observe the following "strange" (for me of course) behaviour > >>>> dpolya1(1,2,0.5,0.4) > -5.9741312822170585 >>>> type(dpolya1(1,2,0.5,0.4)) > <type 'numpy.ndarray'> >>>> exp(dpolya1(1,2,0.5,0.4)) > Traceback (most recent call last): > File "<stdin>", line 1, in ? > AttributeError: 'numpy.ndarray' object has no attribute 'exp' > > I do not understand what's wrong. Any help? Probably rpy (which still uses Numeric, right?) is exposing Numeric's exp() implementation and overriding the one that you got from scipy (which is numpy's, I presume). When Numeric's exp() is confronted with an object that it doesn't recognize, it looks for a .exp() method to call.
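Robert's diagnosis suggests the fix: import `exp` from exactly one known place instead of relying on overlapping star imports. Below is a stdlib-only sketch of Angelo's function in which I have substituted `math.lgamma` for `special.gammaln` (the two agree for positive scalar arguments), so nothing outside the standard library can shadow `exp`:

```python
from math import lgamma, exp

def dpolya1(n, N, b, a):
    # Same log-space computation as Angelo's script, but with every
    # name imported explicitly so no later star import can rebind exp().
    a = float(a)
    b = float(b)
    L = 784
    r = a / b
    return ((lgamma(N + 1) + lgamma(L * r) + lgamma(r + n)
             + lgamma(r * (L - 1) + N - n))
            - (lgamma(L * r + N) + lgamma(r) + lgamma(n + 1)
               + lgamma(N - n + 1) + lgamma(L * r - r)))

logp = dpolya1(1, 2, 0.5, 0.4)   # -5.9741..., as in the original post
p = exp(logp)                    # plain float in, plain float out
```

Keeping probabilities in log space until the very last step, as the original did, is also what keeps the huge gamma terms from overflowing.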
If you want to avoid this situation in the future, don't use the "from foo import *" form. It makes debugging problems like this difficult. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.hochberg at ieee.org Thu Aug 31 14:43:22 2006 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Thu, 31 Aug 2006 11:43:22 -0700 Subject: [Numpy-discussion] fromiter shape argument -- was Re: For loop tips In-Reply-To: References: <44F5A1B5.7090409@ieee.org> Message-ID: <44F72DCA.9050700@ieee.org> Torgil Svensson wrote: >> Yes. fromiter(iterable, dtype, count) works. >> > > Oh. Thanks. I probably had too old documentation to see this (15 > June). If it's not updated since I'll give Travis a rest about this, > at least until 1.0 is released :) > Actually I just knew 'cause I wrote it. I don't see a docstring for fromiter, although I though I wrote one. Maybe I just forgot? >> Regardless, L is only iterated over once. >> > > How can this be true? If no size is given, mustn't numpy either loop > over L twice or build an internal representation on which it'll > iterate or copy in chunks? > Well, it can't in general loop over L twice since the only method that L is guaranteed to have is next(); that's the extent of the iterator protocol. What it does is allocate an initial chunk of memory (the size of which I forget -- I did some tuning) and start filling it up. Once it's full, it does a realloc, which expands the existing chunk or memory, if possible, or returns a new, larger, chunk of memory with the data copied into it. Then we iterate on L some more until we fill up the new larger chunk, in which case we go get another one, etc. This is exactly how list.append works, although in that case the chunk of data is acutally a chunk of pointers to objects. 
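Tim's successive-overallocation scheme can be sketched in pure Python (toy initial capacity and growth factor; NumPy's actual constants differ), which also makes it clear that the iterator is consumed exactly once:

```python
def from_iter_sketch(iterable):
    """Pure-Python sketch of fromiter's growth strategy: fill a buffer,
    and when it is full, grow it ("realloc") and keep filling."""
    cap = 4                      # initial capacity (illustrative only)
    buf = [None] * cap
    n = 0
    for item in iterable:        # single pass over the iterator
        if n == cap:
            cap = cap * 2 + 1    # overallocate, like list.append
            new = [None] * cap
            new[:n] = buf[:n]    # copy into the larger block
            buf = new
        buf[n] = item
        n += 1
    return buf[:n]               # trim the unused tail

assert from_iter_sketch(iter(range(10))) == list(range(10))
```

Passing `count` up front corresponds to allocating `cap` at the final size immediately, which skips every grow-and-copy step but, as Tim says, saves no iteration.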
-tim > > I just found out that this works > >>>> import numpy,itertools >>>> rec_dt=numpy.dtype(">i4,S10,f8") >>>> rec_iter=itertools.cycle([(1,'s',4.0),(5,'y',190.0),(2,'h',-8)]) >>>> numpy.fromiter(rec_iter,rec_dt,10).view(recarray) >>>> > recarray([(1, 's', 4.0), (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), > (5, 'y', 190.0), (2, 'h', -8.0), (1, 's', 4.0), (5, 'y', 190.0), > (2, 'h', -8.0), (1, 's', 4.0)], > dtype=[('f0', '>i4'), ('f1', '|S10'), ('f2', ' > but what's wrong with this? > > >>>> d2_dt=numpy.dtype("4f8") >>>> d2_iter=itertools.cycle([(1.0,numpy.nan,-1e10,14.0)]) >>>> numpy.fromiter(d2_iter,d2_dt,10) >>>> > Traceback (most recent call last): > File "", line 1, in ? > TypeError: a float is required > >>>> numpy.__version__ >>>> > '1.0b4' > > //Torgil > > > > On 8/30/06, Tim Hochberg wrote: > >> Torgil Svensson wrote: >> >>>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>>> >>>> >>> Is it possible for fromiter to take an optional shape (or count) >>> argument in addition to the dtype argument? >>> >> Yes. fromiter(iterable, dtype, count) works. >> >> >>> If both is given it could >>> preallocate memory and we only have to iterate over L once. >>> >>> >> Regardless, L is only iterated over once. In general you can't rewind >> iterators, so that's a requirement. This is accomplished by doing >> successive overallocation similar to the way appending to a list is >> handled. By specifying the count up front you save a bunch of reallocs, >> but no iteration. >> >> -tim >> >> >> >> >>> //Torgil >>> >>> On 8/29/06, Keith Goodman wrote: >>> >>> >>>> On 8/29/06, Torgil Svensson wrote: >>>> >>>> >>>>> something like this? >>>>> >>>>> def list2index(L): >>>>> uL=sorted(set(L)) >>>>> idx=dict((y,x) for x,y in enumerate(uL)) >>>>> return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int)) >>>>> >>>>> >>>> Wow. That's amazing. Thank you. 
>>>> >>>> ------------------------------------------------------------------------- >>>> Using Tomcat but need to do more? Need to support web services, security? >>>> Get stuff done quickly with pre-integrated technology to make your job easier >>>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >>>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >>>> _______________________________________________ >>>> Numpy-discussion mailing list >>>> Numpy-discussion at lists.sourceforge.net >>>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>>> >>>> >>>> >>> ------------------------------------------------------------------------- >>> Using Tomcat but need to do more? Need to support web services, security? >>> Get stuff done quickly with pre-integrated technology to make your job easier >>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at lists.sourceforge.net >>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>> >>> >>> >>> >> >> ------------------------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From Chris.Barker at noaa.gov Thu Aug 31 14:51:33 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 11:51:33 -0700 Subject: [Numpy-discussion] possible bug with numpy.object_ In-Reply-To: <44F714F5.9050305@ieee.org> References: <44F47036.8040300@ieee.org> <20060830120415.GQ23074@mentat.za.net> <44F6E6CC.70206@ieee.org> <44F7124E.7010702@noaa.gov> <44F714F5.9050305@ieee.org> Message-ID: <44F72FB5.2070300@noaa.gov> Tim Hochberg wrote: >> That's what I'd expect, but what if you start with a (0,) array: >> >>> a = N.array([]).sum(); a.shape; a.size; a >> () >> 1 >> 0 >> >> where did that zero come from? >> > More or less from: > > >>> numpy.add.identity > 0 I'm not totally sure, but I think I'd rather it raise an exception. However, if it's not going to, then 0 is really the only reasonable answer. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Aug 31 15:08:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 31 Aug 2006 12:08:51 -0700 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: References: Message-ID: <44F733C3.7000307@noaa.gov> Tom Denniston wrote: > So my question is what is the _advantage_ of the new semantics? 
what if the list don't have the same length, and therefor can not be made into an array, now you get a weird result: >>>N.array([N.array([1,'A',None],dtype=object),N.array([2,2,'Somestring',5],dtype=object)]).shape () Now you get an Object scalar. but: >>>N.array([N.array([1,'A',None],dtype=object),N.array([2,2,'Somestring',5],dtype=object)],dtype=object).shape (2,) Now you get a length 2 array, just like before: far more consistent. With the old semantics, if you test your code with arrays of different lengths, you'll get one thing, but if they then happen to be the same length in some production use, the whole thing breaks -- this is a Bad Idea. Object arrays are just plain weird, there is nothing you can do that will satisfy every need. I think it's best for the array constructor to not try to guess at what the hierarchy of sequences you *meant* to use. You can (and probably should) always be explicit with: >>> A = N.empty((2,), dtype=object) >>> A array([None, None], dtype=object) >>> A[:] = [N.array([1,'A', None], dtype=object),N.array([2,2,'Somestring',5],dtype=object)] >>> A array([[1 A None], [2 2 Somestring 5]], dtype=object) -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tom.denniston at alum.dartmouth.org Thu Aug 31 15:29:15 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 31 Aug 2006 14:29:15 -0500 Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1 In-Reply-To: <44F733C3.7000307@noaa.gov> References: <44F733C3.7000307@noaa.gov> Message-ID: I would think one would want to throw an error when the data has inconsistent dimensions. 
This is what numpy does for other dtypes:

In [10]: numpy.array(([1,2,3], [4,5,6]))
Out[10]:
array([[1, 2, 3],
       [4, 5, 6]])

In [11]: numpy.array(([1,3], [4,5,6]))
---------------------------------------------------------------------------
exceptions.TypeError        Traceback (most recent call last)

TypeError: an integer is required

On 8/31/06, Christopher Barker wrote:
>
> Tom Denniston wrote:
> > So my question is what is the _advantage_ of the new semantics?
>
> what if the lists don't have the same length, and therefore cannot be
> made into an array? Then you get a weird result:
>
> >>> N.array([N.array([1,'A',None],dtype=object),N.array
> ([2,2,'Somestring',5],dtype=object)]).shape
> ()
>
> Now you get an object scalar.
>
> But:
> >>> N.array([N.array([1,'A',None],dtype=object),N.array
> ([2,2,'Somestring',5],dtype=object)],dtype=object).shape
> (2,)
>
> Now you get a length-2 array, just like before: far more consistent.
> With the old semantics, if you test your code with arrays of different
> lengths you'll get one thing, but if they then happen to be the same
> length in some production use, the whole thing breaks -- this is a Bad
> Idea.
>
> Object arrays are just plain weird; there is nothing you can do that
> will satisfy every need. I think it's best for the array constructor
> not to try to guess at the hierarchy of sequences you *meant* to use.
> You can (and probably should) always be explicit with:
>
> >>> A = N.empty((2,), dtype=object)
> >>> A
> array([None, None], dtype=object)
> >>> A[:] = [N.array([1,'A', None],
> dtype=object),N.array([2,2,'Somestring',5],dtype=object)]
> >>> A
> array([[1 A None], [2 2 Somestring 5]], dtype=object)
>
> -Chris
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> NOAA/OR&R/HAZMAT           (206) 526-6959   voice
> 7600 Sand Point Way NE     (206) 526-6329   fax
> Seattle, WA  98115         (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Chris.Barker at noaa.gov  Thu Aug 31 15:51:07 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu, 31 Aug 2006 12:51:07 -0700
Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1
In-Reply-To: 
References: <44F733C3.7000307@noaa.gov>
Message-ID: <44F73DAB.3020100@noaa.gov>

Tom Denniston wrote:
> I would think one would want to throw an error when the data has
> inconsistent dimensions.

But it doesn't have inconsistent dimensions - they are perfectly
consistent with a (2,) array of objects. How is the code to know what
you intended?

With numeric types, it is unambiguous to march down through the
sequences until you get a number. As a sequence is an object, there is
no way to unambiguously do this automatically.

Perhaps the way to solve this is for the array constructor to take a
"shape" or "rank" argument, so you could specify what you intend. But
that's really just syntactic sugar to avoid calling numpy.empty()
first.

Perhaps a numpy.object_array() constructor would be useful, although as
I think about it, even specifying a shape or rank would not be
unambiguous!

This is a useful discussion.
If we ever get an nd-array into the standard
lib, I suspect that object arrays will get heavy use -- better to clean
up the semantics now.

Perhaps a Wiki page on building object arrays is called for.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception

Chris.Barker at noaa.gov

From charlesr.harris at gmail.com  Thu Aug 31 15:59:40 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 31 Aug 2006 13:59:40 -0600
Subject: [Numpy-discussion] dtype=object behavior change from 0.9.6 to beta 1
In-Reply-To: <44F73DAB.3020100@noaa.gov>
References: <44F733C3.7000307@noaa.gov> <44F73DAB.3020100@noaa.gov>
Message-ID: 

On 8/31/06, Christopher Barker wrote:
>
> Tom Denniston wrote:
> > I would think one would want to throw an error when the data has
> > inconsistent dimensions.
>
> But it doesn't have inconsistent dimensions - they are perfectly
> consistent with a (2,) array of objects. How is the code to know what
> you intended?

Same as it produces a float array from array([1,2,3.0]). Array is a
complicated function for precisely these sorts of reasons, but the
convenience makes it worthwhile. So, if a list contains something that
can only be interpreted as an object, dtype should be set to object.

> With numeric types, it is unambiguous to march down through the
> sequences until you get a number. As a sequence is an object, there is
> no way to unambiguously do this automatically.
>
> Perhaps the way to solve this is for the array constructor to take a
> "shape" or "rank" argument, so you could specify what you intend. But
> that's really just syntactic sugar to avoid calling numpy.empty()
> first.
>
> Perhaps a numpy.object_array() constructor would be useful, although as
> I think about it, even specifying a shape or rank would not be
> unambiguous!
>
> This is a useful discussion.
> If we ever get an nd-array into the standard
> lib, I suspect that object arrays will get heavy use -- better to clean
> up the semantics now.
>
> Perhaps a Wiki page on building object arrays is called for.
>
> -Chris

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cookedm at physics.mcmaster.ca  Thu Aug 31 19:11:01 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Thu, 31 Aug 2006 19:11:01 -0400
Subject: [Numpy-discussion] amd64 support
In-Reply-To: 
References: 
Message-ID: <23DA5221-A8C4-4B67-B404-953F3CBC3C69@physics.mcmaster.ca>

On Aug 30, 2006, at 11:53 , Keith Goodman wrote:

> I plan to build an amd64 box and run debian etch. Are there any big,
> 64-bit, show-stopping problems in numpy? Any minor annoyances?

Shouldn't be; I regularly build and test it on an amd64 box running
Debian unstable, and I know several others use amd64 boxes too.

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
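[Editorial note appended to this archive: the behaviors debated in the dtype=object thread above can be sketched against a current NumPy release. Exact exception types and inference rules below are those of modern NumPy, not the 2006 beta under discussion -- in particular, ragged non-object input raised TypeError then but raises ValueError now.]

```python
import numpy as np

# Dtype inference, as Charles describes: the most general element wins.
print(np.array([1, 2, 3.0]).dtype)  # float64
print(np.array([1, None]).dtype)    # object -- None can only be an object

# Explicit construction, as Chris recommends: allocate an object array
# first, then fill it, so NumPy never has to guess the nesting you meant.
a = np.empty(2, dtype=object)
a[0] = np.array([1, 'A', None], dtype=object)
a[1] = np.array([2, 2, 'Somestring', 5], dtype=object)
print(a.shape)  # (2,)

# For non-object dtypes, ragged input is rejected, as Tom expects
# (recent NumPy raises ValueError here).
try:
    np.array([[1, 3], [4, 5, 6]])
except ValueError:
    print("ragged input rejected")
```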