From nadavh at visionsense.com Thu Jun 1 07:18:03 2006
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu Jun 1 07:18:03 2006
Subject: [Numpy-discussion] Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy
Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il>

I recently upgraded to gcc 4.1.1. When I tried to compile scipy from today's svn repository it halts with the following message:

Traceback (most recent call last):
  File "setup.py", line 50, in ?
    setup_package()
  File "setup.py", line 42, in setup_package
    configuration=configuration )
  File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in setup
    return old_setup(**new_attr)
  File "/usr/lib/python2.4/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run
    self.run_command(cmd_name)
  File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run
    self.build_extensions()
  File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions
    self.build_extension(ext)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension
    link = self.fcompiler.link_shared_object
AttributeError: 'NoneType' object has no attribute 'link_shared_object'

----

The output of gfortran --version:

GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)
Copyright (C) 2006 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran under the terms of the
GNU General Public License. For more information about these matters,
see the file named COPYING

I also have the old g77 compiler installed (g77-3.4.6). Is there a way to force numpy/scipy to use it?
Nadav

From robert.kern at gmail.com Thu Jun 1 09:48:04 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu Jun 1 09:48:04 2006
Subject: [Numpy-discussion] Re: Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy
In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il>
References: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il>
Message-ID: 

Nadav Horesh wrote:
> I recently upgraded to gcc 4.1.1. When I tried to compile scipy from today's svn repository it halts with the following message:
>
> Traceback (most recent call last):
>   File "setup.py", line 50, in ?
>     setup_package()
>   File "setup.py", line 42, in setup_package
>     configuration=configuration )
>   File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in setup
>     return old_setup(**new_attr)
>   File "/usr/lib/python2.4/distutils/core.py", line 149, in setup
>     dist.run_commands()
>   File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands
>     self.run_command(cmd)
>   File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
>     cmd_obj.run()
>   File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run
>     self.run_command(cmd_name)
>   File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command
>     self.distribution.run_command(command)
>   File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
>     cmd_obj.run()
>   File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run
>     self.build_extensions()
>   File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions
>     self.build_extension(ext)
>   File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension
>     link = self.fcompiler.link_shared_object
> AttributeError: 'NoneType' object has no attribute 'link_shared_object'
>
> ----
>
> The output of gfortran --version:
>
> GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)

Hmm.
The usual suspect (not finding the version) doesn't seem to be the problem here.

>>> from numpy.distutils.ccompiler import simple_version_match
>>> m = simple_version_match(start='GNU Fortran 95')
>>> m(None, 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)')
'4.1.1'

> I have also the old g77 compiler installed (g77-3.4.6). Is there a way to force numpy/scipy to use it?

Sure.

python setup.py config_fc --fcompiler=gnu build_src build_clib build_ext build

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From Chris.Barker at noaa.gov Thu Jun 1 09:55:07 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu Jun 1 09:55:07 2006
Subject: [Numpy-discussion] Suggestions for NumPy
In-Reply-To: 
References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> <447DCD79.3000808@noaa.gov>
Message-ID: <447F1BBD.7030905@noaa.gov>

Fernando Perez wrote:
>> 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page
> Well, ipython is not scipy either, and yet its homepage is
> ipython.scipy.org. I think it's simply a matter of convenience that
> the Enthought hosting infrastructure is so much more pleasant to use
> than SF

Pardon me for being a lazy idiot. numeric.scipy.org is a fine place for it. I was reacting to a post a while back that suggested pointing people searching for numpy to the main scipy page, which I did not think was a good idea. Objection withdrawn.

>> Can you even build it with gcc 4 yet?
> I built it on a recent ubuntu not too long ago, without any glitches.
> I can check again tonight on a fresh Dapper with up-to-date SVN if
> you want.

Well, I need FC4 (and soon 5) as well as OS-X, so I'll try again when I get the chance.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT        (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115       (206) 526-6317 main reception
Chris.Barker at noaa.gov

From Chris.Barker at noaa.gov Thu Jun 1 11:33:02 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu Jun 1 11:33:02 2006
Subject: [Numpy-discussion] What am I missing about concatenate?
Message-ID: <447F32A6.1090903@noaa.gov>

I want to take two (2,) arrays and put them together into one (2,2) array. I thought one of these would work:

>>> N.concatenate(((1,2),(3,4)),0)
array([1, 2, 3, 4])
>>> N.concatenate(((1,2),(3,4)),1)
array([1, 2, 3, 4])

Is this the best I can do?

>>> N.concatenate(((1,2),(3,4))).reshape(2,2)
array([[1, 2],
       [3, 4]])

Is it because the arrays I'm putting together are rank-1?

>>> N.__version__
'0.9.6'

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT        (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115       (206) 526-6317 main reception
Chris.Barker at noaa.gov

From robert.kern at gmail.com Thu Jun 1 11:43:00 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu Jun 1 11:43:00 2006
Subject: [Numpy-discussion] Re: What am I missing about concatenate?
In-Reply-To: <447F32A6.1090903@noaa.gov>
References: <447F32A6.1090903@noaa.gov>
Message-ID: 

Christopher Barker wrote:
> I want to take two (2,) arrays and put them together into one (2,2)
> array. I thought one of these would work:
>
> >>> N.concatenate(((1,2),(3,4)),0)
> array([1, 2, 3, 4])
> >>> N.concatenate(((1,2),(3,4)),1)
> array([1, 2, 3, 4])
>
> Is this the best I can do?
>
> >>> N.concatenate(((1,2),(3,4))).reshape(2,2)
> array([[1, 2],
>        [3, 4]])
>
> Is it because the arrays I'm putting together are rank-1?

Yes. Look at vstack() (and also its friends hstack(), dstack() and column_stack() for completeness).
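[Editor's sketch of the stacking functions named above, written against a modern `import numpy as np`; the thread's 2006-era code spells these `N.vstack` etc., but the behavior is assumed unchanged.]

```python
import numpy as np

a = (1, 2)
b = (3, 4)

# vstack adds a leading axis to each 1-d input and then concatenates,
# so two (2,) inputs become one (2, 2) array -- what Chris is after.
stacked = np.vstack((a, b))
print(stacked)
# [[1 2]
#  [3 4]]

# hstack joins 1-d inputs end to end, which is why plain concatenate
# gave the flat (4,) result shown in the question.
print(np.hstack((a, b)))
# [1 2 3 4]
```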
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexandre.fayolle at logilab.fr Thu Jun 1 11:45:02 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Thu Jun 1 11:45:02 2006 Subject: [Numpy-discussion] What am I missing about concatenate? In-Reply-To: <447F32A6.1090903@noaa.gov> References: <447F32A6.1090903@noaa.gov> Message-ID: <20060601184736.GC26776@crater.logilab.fr> On Thu, Jun 01, 2006 at 11:32:06AM -0700, Christopher Barker wrote: > I want to take two (2,) arrays and put them together into one (2,2) > array. I thought one of these would work: > > >>> N.concatenate(((1,2),(3,4)),0) > array([1, 2, 3, 4]) > >>> N.concatenate(((1,2),(3,4)),1) > array([1, 2, 3, 4]) > > Is this the best I can do? > > >>> N.concatenate(((1,2),(3,4))).reshape(2,2) > array([[1, 2], > [3, 4]]) > > Is it because the arrays I'm putting together are rank-1? concatenate is not meant to do that. Try putting your arrays in a list and building an array from that list. a1 = array([1,2]) a2 = array([3,4]) print array([a1, a2]) /bin/bash: q: command not found -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations D?veloppement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From aisaac at american.edu Thu Jun 1 11:45:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu Jun 1 11:45:05 2006 Subject: [Numpy-discussion] What am I missing about concatenate? 
In-Reply-To: <447F32A6.1090903@noaa.gov>
References: <447F32A6.1090903@noaa.gov>
Message-ID: 

On Thu, 01 Jun 2006, Christopher Barker apparently wrote:
> Is this the best I can do?
> >>> N.concatenate(((1,2),(3,4))).reshape(2,2)
> array([[1, 2],
>        [3, 4]])

>>> import numpy as N
>>> N.vstack([(1,2),(3,4)])
array([[1, 2],
       [3, 4]])

hth,
Alan Isaac

From tim.hochberg at cox.net Thu Jun 1 11:47:07 2006
From: tim.hochberg at cox.net (Tim Hochberg)
Date: Thu Jun 1 11:47:07 2006
Subject: [Numpy-discussion] What am I missing about concatenate?
In-Reply-To: <447F32A6.1090903@noaa.gov>
References: <447F32A6.1090903@noaa.gov>
Message-ID: <447F3593.8020208@cox.net>

Christopher Barker wrote:
> I want to take two (2,) arrays and put them together into one (2,2)
> array. I thought one of these would work:
>
> >>> N.concatenate(((1,2),(3,4)),0)
> array([1, 2, 3, 4])
> >>> N.concatenate(((1,2),(3,4)),1)
> array([1, 2, 3, 4])
>
> Is this the best I can do?
>
> >>> N.concatenate(((1,2),(3,4))).reshape(2,2)
> array([[1, 2],
>        [3, 4]])
>
> Is it because the arrays I'm putting together are rank-1?

Yes. You need to add a dimension somehow. There are (at least) two ways to do this. If you are using real arrays, use newaxis:

>>> a
array([0, 1, 2])
>>> b
array([3, 4, 5])
>>> concatenate([a[newaxis], b[newaxis]], 0)
array([[0, 1, 2],
       [3, 4, 5]])

Alternatively, if you don't know that 'a' and 'b' are arrays or you just hate newaxis, wrap the arrays in [] to give them an extra dimension. This tends to look nicer, but I suspect has poorer performance than above (haven't timed it though):

>>> concatenate([[a], [b]], 0)
array([[0, 1, 2],
       [3, 4, 5]])

-tim

> >>> N.__version__
> '0.9.6'
>
> -Chris

From Chris.Barker at noaa.gov Thu Jun 1 12:06:02 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu Jun 1 12:06:02 2006
Subject: [Numpy-discussion] How do I use numpy to do this?
Message-ID: <447F3A57.2080206@noaa.gov>

I'm trying to get the (x,y) coords for all the points in a grid, bound by xmin, xmax, ymin, ymax.

This list comprehension does it fine:

Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)]

But I can't think at the moment how to do it with numpy. Any ideas?

Thanks,

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT        (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115       (206) 526-6317 main reception
Chris.Barker at noaa.gov

From Chris.Barker at noaa.gov Thu Jun 1 12:14:01 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu Jun 1 12:14:01 2006
Subject: [Numpy-discussion] Re: What am I missing about concatenate?
In-Reply-To: 
References: <447F32A6.1090903@noaa.gov>
Message-ID: <447F3C62.5020105@noaa.gov>

Thanks all,

Robert Kern wrote:
> Look at vstack() (and also its friends hstack(), dstack() and column_stack() for
> completeness).

I like this, but need to keep Numeric/numarray compatibility for the moment -- I think; I've just sent out a query to my users.

Tim Hochberg wrote:
> If you are using real arrays, use newaxis:
>
> >>> a
> array([0, 1, 2])
> >>> b
> array([3, 4, 5])
> >>> concatenate([a[newaxis], b[newaxis]], 0)
> array([[0, 1, 2],
>        [3, 4, 5]])

I like this, but again, not in Numeric -- I really need to dump that as soon as I can!

> hate newaxis, wrap the arrays in [] to give them an extra dimension.
> This tends to look nicer, but I suspect has poorer performance than
> above (haven't timed it though):
>
> >>> concatenate([[a], [b]], 0)
> array([[0, 1, 2],
>        [3, 4, 5]])

Lovely, much cleaner. By the way, wouldn't wrapping in a tuple be slightly better performance-wise? (I know, probably negligible, but I always feel that I should use a tuple when I don't need mutability.)

-thanks, -chris

-- 
Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Thu Jun 1 12:21:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu Jun 1 12:21:02 2006 Subject: [Numpy-discussion] Re: How do I use numpy to do this? In-Reply-To: <447F3A57.2080206@noaa.gov> References: <447F3A57.2080206@noaa.gov> Message-ID: Christopher Barker wrote: > > I'm trying to get the (x,y) coords for all the points in a grid, bound > by xmin, xmax, ymin, ymax. > > This list comprehension does it fine: > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > But I can't think at the moment how to do it with numpy. Any ideas? In [4]: x, y = mgrid[0:10, 5:15] In [5]: x Out[5]: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) In [6]: y Out[6]: array([[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]) In [8]: points = column_stack((x.ravel(), y.ravel())) In [9]: points Out[9]: array([[ 0, 5], [ 0, 6], [ 0, 7], [ 0, 8], [ 0, 9], [ 0, 10], ... -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ndarray at mac.com Thu Jun 1 12:27:02 2006 From: ndarray at mac.com (Sasha) Date: Thu Jun 1 12:27:02 2006 Subject: [Numpy-discussion] Re: How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> Message-ID: >>> mgrid[0:10, 5:15].reshape(2,100).transpose() array([[ 0, 5], [ 0, 6], [ 0, 7], [ 0, 8], ...]) On 6/1/06, Robert Kern wrote: > Christopher Barker wrote: > > > > I'm trying to get the (x,y) coords for all the points in a grid, bound > > by xmin, xmax, ymin, ymax. > > > > This list comprehension does it fine: > > > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > > > But I can't think at the moment how to do it with numpy. Any ideas? > > In [4]: x, y = mgrid[0:10, 5:15] > > In [5]: x > Out[5]: > array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], > [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], > [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], > [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], > [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], > [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], > [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], > [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], > [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], > [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) > > In [6]: y > Out[6]: > array([[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]) > > In [8]: points = column_stack((x.ravel(), y.ravel())) > > In [9]: points > Out[9]: > array([[ 0, 5], > [ 0, 6], > [ 0, 7], > [ 0, 8], > [ 0, 9], > [ 0, 10], > ... > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Thu Jun 1 12:59:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu Jun 1 12:59:02 2006 Subject: [Numpy-discussion] Re: What am I missing about concatenate? In-Reply-To: <447F3C62.5020105@noaa.gov> References: <447F32A6.1090903@noaa.gov> <447F3C62.5020105@noaa.gov> Message-ID: <447F4671.1070707@cox.net> Christopher Barker wrote: > Thanks all, > > > Robert Kern wrote: > >> Look at vstack() (and also its friends hstack(), dstack() and >> column_stack() for >> completeness). > > > I like this, but need to keep Numeric/numarray compatibility for the > moment -- I think, I've just sent out a query to my users. > > > > Tim Hochberg wrote: > >> If you are using real arrays, use newaxis: >> >> >>> a >> array([0, 1, 2]) >> >>> b >> array([3, 4, 5]) >> >>> concatenate([a[newaxis], b[newaxis]], 0) >> array([[0, 1, 2], >> [3, 4, 5]]) > > > I like this, but again, not in Numeric -- I really need to dump that > as soon as I can! In Numeric, you can use NewAxis instead for the same effect. > >> hate newaxis, wrap the arrays in [] to give them an extra dimension. >> This tends to look nicer, but I suspect has poorer performance than >> above (haven't timed it though): >> >> >>> concatenate([[a], [b]], 0) >> array([[0, 1, 2], >> [3, 4, 5]]) > > > Lovely. much cleaner. 
> > By the way, wouldn't wrapping in a tuple be slightly better
> > performance-wise (I know, probably negligible, but I always feel that
> > I should use a tuple when I don't need mutability)

I doubt it would make a significant difference, and the square brackets are much easier to read IMO. Your mileage may vary.

-tim

From cwmoad at gmail.com Thu Jun 1 13:09:00 2006
From: cwmoad at gmail.com (Charlie Moad)
Date: Thu Jun 1 13:09:00 2006
Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: <447F3A57.2080206@noaa.gov>
References: <447F3A57.2080206@noaa.gov>
Message-ID: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>

Here's my crack at it.

pts = mgrid[minx:maxx,miny:maxy].transpose()
pts.reshape(pts.size/2, 2)
#pts is good to go

On 6/1/06, Christopher Barker wrote:
> I'm trying to get the (x,y) coords for all the points in a grid, bound
> by xmin, xmax, ymin, ymax.
>
> This list comprehension does it fine:
>
> Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)]
>
> But I can't think at the moment how to do it with numpy. Any ideas?
>
> Thanks,
>
> -Chris
From robert.kern at gmail.com Thu Jun 1 13:14:06 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu Jun 1 13:14:06 2006
Subject: [Numpy-discussion] Re: How do I use numpy to do this?
In-Reply-To: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>
References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>
Message-ID: 

Charlie Moad wrote:
> Here's my crack at it.
>
> pts = mgrid[minx:maxx,miny:maxy].transpose()
> pts.reshape(pts.size/2, 2)
> #pts is good to go

Well, if we're going for terseness:

points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose()

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From oliphant.travis at ieee.org Thu Jun 1 13:21:02 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu Jun 1 13:21:02 2006
Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list?
In-Reply-To: 
References: <447D051E.9000709@ieee.org>
Message-ID: <447F4BF9.7060101@ieee.org>

Berthold Höllmann wrote:
> Travis Oliphant writes:
>
>> 2) Will you transition within the next 6 months? (if you answered No to #1)
>
> Unlikely
>
>> 3) Please, explain your reason(s) for not making the switch. (if you
>> answered No to #2)
>
> Lack of resources (Numeric is used in hand coded extensions; are
> arrays of type PyObject supported in NumPy, they were not in numarray)

Yes, NumPy is actually quite similar to Numeric.
Most C-extensions are easily ported simply by replacing #include Numeric/arrayobject.h with #include numpy/arrayobject.h (and making sure you get the right location for the headers). -Travis From perry at stsci.edu Thu Jun 1 13:43:03 2006 From: perry at stsci.edu (Perry Greenfield) Date: Thu Jun 1 13:43:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: <447F4BF9.7060101@ieee.org> References: <447D051E.9000709@ieee.org> <447F4BF9.7060101@ieee.org> Message-ID: <69b842594e9ecc8ef8dfebe953ea3af4@stsci.edu> Just to clarify the issue with regard to numarray since one person brought it up. When we (STScI) are finished getting all our software running under numpy--and we are well more than halfway there--we will start drawing down support for numarray. It won't suddenly stop, but less and less effort will go into it and eventually none. That transition time (starts when we can run all our software on numpy and stops when we no longer support numarray at all) will probably be on the order of 6 months, but note that for much of that time, the support will likely be limited to dealing with major bugs only or support for new versions of major platforms. We will note the start and stop points of this transition on the numpy and scipy lists of course. After that, any support for it will have to come from elsewhere. (Message: if you use numarray, you should be planning now to make the transition if 6 months isn't enough time) Perry From oliphant.travis at ieee.org Thu Jun 1 17:54:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Jun 2006 15:54:35 -0600 Subject: [Numpy-discussion] Free SciPy 2006 porting service Message-ID: <447F621B.1010603@ieee.org> I will be available during the SciPy 2006 conference to help port open-source applications to NumPy for no charge. (I'm always available for porting commercial code for a reasonable fee). Others who want to assist will be welcome. 
Conference attendees will get first priority, but others who want to email their request can do so. Offer will be on a first come, first serve basis but I will reserve the liberty to rearrange the order to serve as many projects as possible. I'll place a note on the Wiki Coding Sprint page to this effect. -Travis O. From Chris.Barker at noaa.gov Thu Jun 1 17:41:36 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 01 Jun 2006 14:41:36 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Message-ID: <447F5F10.1010305@noaa.gov> > Charlie Moad wrote: >> pts = mgrid[minx:maxx,miny:maxy].transpose() >> pts.reshape(pts.size/2, 2) Thanks everyone -- yet another reason to dump support for the older num* packages. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tom.denniston at alum.dartmouth.org Thu Jun 1 13:27:54 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 1 Jun 2006 12:27:54 -0500 Subject: [Numpy-discussion] lexsort Message-ID: This function is really useful but it seems to only take tuples not ndarrays. This seems kinda strange. Does one have to convert the ndarray into a tuple to use it? This seems extremely inefficient. Is there an efficient way to argsort a 2d array based upon multiple columns if lexsort is not the correct way to do this? The only way I have found to do this is to construct a list of tuples and sort them using python's list sort. This is inefficient and convoluted so I was hoping lexsort would provide a simple solution. 
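[Editor's note: the list-of-tuples workaround Tom describes can be sketched as below, with a modern `import numpy as np` and hypothetical sample data. It is correct, but it round-trips every row through Python objects, which is exactly the inefficiency he is complaining about.]

```python
import numpy as np

a = np.array([[3, 5],
              [1, 4],
              [2, 6],
              [2, 4]])

# Sort row indices by each row viewed as a Python tuple
# (lexicographic: the first column is the primary key here).
order = sorted(range(len(a)), key=lambda i: tuple(a[i]))
by_rows = a[np.array(order)]
print(by_rows)
# [[1 4]
#  [2 4]
#  [2 6]
#  [3 5]]
```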
--Tom

From Chris.Barker at noaa.gov Thu Jun 1 18:13:28 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu, 01 Jun 2006 15:13:28 -0700
Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: 
References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>
Message-ID: <447F6688.1030504@noaa.gov>

Robert Kern wrote:
> points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose()

As I need Numeric and numarray compatibility at this point, it seems the best I could come up with is below. I'm guessing the list comprehension may well be faster!

-Chris

#!/usr/bin/env python

#import numpy as N
#import Numeric as N
import numarray as N

Spacing = 2.0
minx = 0
maxx = 5
miny = 20
maxy = 22

print "minx", minx
print "miny", miny
print "maxx", maxx
print "maxy", maxy

## # The nifty, terse, numpy way
## points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose()

## The Numeric and numarray way:
x = N.arange(minx, maxx+Spacing, Spacing)  # making sure to get the last point
y = N.arange(miny, maxy+Spacing, Spacing)  # an extra is OK
points = N.zeros((len(y), len(x), 2), N.Float)
x.shape = (1,-1)
y.shape = (-1,1)
points[:,:,0] += x
points[:,:,1] += y
points.shape = (-1,2)
print points

-- 
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT        (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115       (206) 526-6317 main reception
Chris.Barker at noaa.gov

From cwmoad at gmail.com Thu Jun 1 15:23:27 2006
From: cwmoad at gmail.com (Charlie Moad)
Date: Thu, 1 Jun 2006 15:23:27 -0400
Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>
References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com>
Message-ID: <6382066a0606011223j7584ee5cvaf27d22c38e35ad7@mail.gmail.com>

That reshape should be "resize". Sorry.

> Here's my crack at it.
> > pts = mgrid[minx:maxx,miny:maxy].transpose()
> > pts.reshape(pts.size/2, 2)
> > #pts is good to go
>
> On 6/1/06, Christopher Barker wrote:
> > I'm trying to get the (x,y) coords for all the points in a grid, bound
> > by xmin, xmax, ymin, ymax.
> >
> > This list comprehension does it fine:
> >
> > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)]
> >
> > But I can't think at the moment how to do it with numpy. Any ideas?
> >
> > Thanks,
> >
> > -Chris

From robert.kern at gmail.com Thu Jun 1 20:16:40 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 01 Jun 2006 19:16:40 -0500
Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: <447F6688.1030504@noaa.gov>
References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov>
Message-ID: 

Christopher Barker wrote:
> Robert Kern wrote:
>> points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose()
>
> As I need Numeric and numarray compatibility at this point, it seems the
> best I could come up with is below.

Ah. It might help if you said that up front.
(Untested, but what I usually did in the bad old days before I used scipy):

x = arange(minx, maxx+step, step) # oy.
y = arange(miny, maxy+step, step)

nx = len(x)
ny = len(y)

x = repeat(x, ny)
y = concatenate([y] * nx)
points = transpose([x, y])

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From tom.denniston at alum.dartmouth.org Thu Jun 1 20:50:30 2006
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Thu, 1 Jun 2006 19:50:30 -0500
Subject: [Numpy-discussion] lexsort
In-Reply-To: <447F78F3.3060303@ieee.org>
References: <447F78F3.3060303@ieee.org>
Message-ID:

This is great! Many thanks Travis. I can't wait for the next release!

--Tom

On 6/1/06, Travis Oliphant wrote:
> Tom Denniston wrote:
> > This function is really useful but it seems to only take tuples not
> > ndarrays. This seems kinda strange. Does one have to convert the
> > ndarray into a tuple to use it? This seems extremely inefficient. Is
> > there an efficient way to argsort a 2d array based upon multiple
> > columns if lexsort is not the correct way to do this? The only way I
> > have found to do this is to construct a list of tuples and sort them
> > using python's list sort. This is inefficient and convoluted so I was
> > hoping lexsort would provide a simple solution.
> >
> I've just changed lexsort to accept any sequence object as keys. This
> means that it can now be used to sort a 2d array (of the same data-type)
> based on multiple rows. The sorting will be so that the last row is
> sorted with any repeats sorted by the second-to-last row and remaining
> repeats sorted by the third-to-last row and so forth...
>
> The return value is an array of indices.
For the 2d example you can use > > ind = lexsort(a) > sorted = a[:,ind] # or a.take(ind,axis=-1) > > > Example: > > >>> a = array([[1,3,2,2,3,3],[4,5,4,6,4,3]]) > >>> ind = lexsort(a) > >>> sorted = a.take(ind,axis=-1) > >>> sorted > array([[3, 1, 2, 3, 3, 2], > [3, 4, 4, 4, 5, 6]]) > >>> a > array([[1, 3, 2, 2, 3, 3], > [4, 5, 4, 6, 4, 3]]) > > > > -Travis > > > From oliphant.travis at ieee.org Thu Jun 1 19:32:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Jun 2006 17:32:03 -0600 Subject: [Numpy-discussion] lexsort In-Reply-To: References: Message-ID: <447F78F3.3060303@ieee.org> Tom Denniston wrote: > This function is really useful but it seems to only take tuples not > ndarrays. This seems kinda strange. Does one have to convert the > ndarray into a tuple to use it? This seems extremely inefficient. Is > there an efficient way to argsort a 2d array based upon multiple > columns if lexsort is not the correct way to do this? The only way I > have found to do this is to construct a list of tuples and sort them > using python's list sort. This is inefficient and convoluted so I was > hoping lexsort would provide a simple solution. > I've just changed lexsort to accept any sequence object as keys. This means that it can now be used to sort a 2d array (of the same data-type) based on multiple rows. The sorting will be so that the last row is sorted with any repeats sorted by the second-to-last row and remaining repeats sorted by the third-to-last row and so forth... The return value is an array of indices. 
For the 2d example you can use

ind = lexsort(a)
sorted = a[:,ind] # or a.take(ind,axis=-1)

Example:

>>> a = array([[1,3,2,2,3,3],[4,5,4,6,4,3]])
>>> ind = lexsort(a)
>>> sorted = a.take(ind,axis=-1)
>>> sorted
array([[3, 1, 2, 3, 3, 2],
       [3, 4, 4, 4, 5, 6]])
>>> a
array([[1, 3, 2, 2, 3, 3],
       [4, 5, 4, 6, 4, 3]])

-Travis

From charlesr.harris at gmail.com Fri Jun 2 01:05:13 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 1 Jun 2006 23:05:13 -0600
Subject: [Numpy-discussion] lexsort
In-Reply-To:
References:
Message-ID:

Tom,

The list -- nee tuple, thanks Travis -- is the list of key sequences and each key sequence can be a column in a matrix. So for instance if you wanted to sort on a few columns of a matrix, say columns 2, 1, and 0, in that order, and then rearrange the rows so the columns were ordered, you would do something like:

>>> a = randint(0,2,(7,4))
>>> a
array([[0, 0, 0, 1],
       [0, 0, 1, 0],
       [1, 0, 0, 1],
       [0, 1, 0, 1],
       [1, 1, 1, 0],
       [0, 1, 1, 1],
       [0, 1, 0, 1]])
>>> ind = lexsort((a[:,2],a[:,1],a[:,0]))
>>> sorted = a[ind]
>>> sorted
array([[0, 0, 0, 1],
       [0, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 0]])

Note that the last key defines the major order.

Chuck

On 6/1/06, Tom Denniston wrote:
>
> This function is really useful but it seems to only take tuples not
> ndarrays. This seems kinda strange. Does one have to convert the
> ndarray into a tuple to use it? This seems extremely inefficient. Is
> there an efficient way to argsort a 2d array based upon multiple
> columns if lexsort is not the correct way to do this? The only way I
> have found to do this is to construct a list of tuples and sort them
> using python's list sort. This is inefficient and convoluted so I was
> hoping lexsort would provide a simple solution.
> > --Tom > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at hooft.net Fri Jun 2 01:31:27 2006 From: rob at hooft.net (Rob Hooft) Date: Fri, 02 Jun 2006 07:31:27 +0200 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <447F6688.1030504@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: <447FCD2F.5060207@hooft.net> Christopher Barker wrote: > x = N.arange(minx, maxx+Spacing, Spacing) # makeing sure to get the last > point > y = N.arange(miny, maxy+Spacing, Spacing) # an extra is OK > points = N.zeros((len(y), len(x), 2), N.Float) > x.shape = (1,-1) > y.shape = (-1,1) > points[:,:,0] += x > points[:,:,1] += y > points.shape = (-1,2) > > print points How about something like: >>> k=Numeric.repeat(range(0,5+1),Numeric.ones(6)*7) >>> l=Numeric.resize(range(0,6+1),[42]) >>> zone=Numeric.concatenate((k[:,Numeric.NewAxis],l[:,Numeric.NewAxis]),axis=1) >>> zone array([[0, 0], [0, 1], [0, 2], ... [5, 4], [5, 5], [5, 6]]) This is the same speed as Robert Kern's solution for large arrays, a bit slower for small arrays. Both are a little faster than yours. Rob -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: timer.py
Type: text/x-python
Size: 1244 bytes
Desc: not available
URL:

From joris at ster.kuleuven.be Fri Jun 2 04:27:45 2006
From: joris at ster.kuleuven.be (Joris De Ridder)
Date: Fri, 2 Jun 2006 10:27:45 +0200
Subject: [Numpy-discussion] Suggestions for NumPy
In-Reply-To: <447F1BBD.7030905@noaa.gov>
References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov>
Message-ID: <200606021027.45392.joris@ster.kuleuven.be>

[CB]: I was reacting to a post a while back that suggested pointing people
[CB]: searching for numpy to the main scipy page, which I did not think was a
[CB]: good idea.

That would be my post :o) The reasons why I suggested this are:
1) www.scipy.org is at the moment the most informative site on numpy
2) of all sites, www.scipy.org currently looks the most professional
3) a wiki-style site where everyone can contribute is really great
4) I like information to be centralized. Having to check pointers, docs and cookbooks on two different sites is inefficient
5) two different sites inevitably imply some duplication of the work

Just as you, I am not (yet) a scipy user; I only have numpy installed at the moment. The principal reason is the same as the one you mentioned. But for me this is an extra motivation to merge scipy.org and numpy.org:
6) merging scipy.org and numpy.org will hopefully lead to a larger SciPy community, and this in turn hopefully leads to user-friendly installation procedures.

Cheers,
Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From r.demaria at tiscali.it Fri Jun 2 07:54:36 2006
From: r.demaria at tiscali.it (r.demaria at tiscali.it)
Date: Fri, 2 Jun 2006 13:54:36 +0200 (CEST)
Subject: [Numpy-discussion] Free SciPy 2006 porting service
Message-ID: <21591493.1149249276445.JavaMail.root@ps5>

Hi, maybe this is not what you meant, but presently I'm looking for a sparse eigenvalue solver. As far as I've understood, the ARPACK bindings are still missing.
This library is one of the most used, so I think it would be very useful to have it integrated in numpy.

Riccardo

From jonas at mwl.mit.edu Fri Jun 2 08:58:50 2006
From: jonas at mwl.mit.edu (Eric Jonas)
Date: Fri, 02 Jun 2006 08:58:50 -0400
Subject: [Numpy-discussion] numpy vs numeric benchmarks
Message-ID: <1149253130.27604.29.camel@localhost.localdomain>

Hello! I've been using Numeric for a while, and the recent list traffic prompted me to finally migrate all my old code. On a whim, we were benchmarking numpy vs Numeric and have been led to the conclusion that numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy but 300 ms in Numeric. Now, of course, I don't believe this, but I can't figure out what we're doing wrong; I'm not the only person who has looked at this code, so can anyone tell me what we're doing wrong? We run both benchmarks twice to try and mitigate any start-up and cache effects. This is with debian-amd64's packaged Numeric 24.2-2 and a locally built numpy-0.9.8.
#!/usr/bin/python
import time
import numpy
import random
import Numeric

def numpytest():
    N = 1000
    x = numpy.zeros((N,N),'f')
    y = numpy.zeros((N,N),'f')
    for i in range(N):
        for j in range(N):
            x[i, j] = random.random()
            y[i, j] = random.random()
    t1 = time.clock()
    z = numpy.matrixmultiply(x, y)
    t2 = time.clock()
    print (((t2 - t1)*1000))

def numerictest():
    N = 1000
    x = Numeric.zeros((N,N),'f')
    y = Numeric.zeros((N,N),'f')
    for i in range(N):
        for j in range(N):
            x[i, j] = random.random()
            y[i, j] = random.random()
    t1 = time.clock()
    z = Numeric.matrixmultiply(x, y)
    t2 = time.clock()
    print (((t2 - t1)*1000))

numerictest()
numpytest()
numpytest()
numerictest()

On our hardware a call to numerictest() takes 340 ms and a numpytest takes around 13 sec (!). Any advice on what we're doing wrong would be very helpful.

...Eric

From joris at ster.kuleuven.be Fri Jun 2 09:27:15 2006
From: joris at ster.kuleuven.be (Joris De Ridder)
Date: Fri, 2 Jun 2006 15:27:15 +0200
Subject: [Numpy-discussion] numpy vs numeric benchmarks
In-Reply-To: <1149253130.27604.29.camel@localhost.localdomain>
References: <1149253130.27604.29.camel@localhost.localdomain>
Message-ID: <200606021527.15947.joris@ster.kuleuven.be>

On Friday 02 June 2006 14:58, Eric Jonas wrote:
[EJ]: Hello! I've been using numeric for a while, and the recent list traffic
[EJ]: prompted me to finally migrate all my old code. On a whim, we were
[EJ]: benchmarking numpy vs numeric and have been lead to the conclusion that
[EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy
[EJ]: but 300 ms in numeric.

You mean the other way around?

I also tested numpy vs numarray, and numarray seems to be roughly 3 times faster than numpy for your particular testcase.

J.
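The benchmark quoted above fills both matrices with a Python double loop and times a single call with time.clock; a tighter version is sketched here, under the assumption of a modern NumPy (vectorized initialization and the standard-library timeit module replace the 2006-era pattern, and np.dot replaces the long-removed matrixmultiply):

```python
# Sketch (not from the thread): a tighter matmul benchmark, assuming a
# modern NumPy. Arrays are filled with one vectorized call instead of a
# Python double loop, and timeit averages several runs.
import timeit
import numpy as np

N = 200  # kept small so the sketch runs quickly; the thread used N = 1000
x = np.random.rand(N, N).astype(np.float32)
y = np.random.rand(N, N).astype(np.float32)

# np.dot dispatches to an optimized BLAS routine when one is available,
# which is where the 300 ms vs 16 s gap discussed in the thread comes from.
elapsed = timeit.timeit(lambda: np.dot(x, y), number=10)
z = np.dot(x, y)
print("10 matmuls of %dx%d took %.4f s" % (N, N, elapsed))
```

Timing one call in isolation, as the original script does, also measures first-call overhead; repeating the measurement with timeit gives a steadier number.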
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From jonas at mwl.mit.edu Fri Jun 2 09:34:25 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 02 Jun 2006 09:34:25 -0400 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <200606021527.15947.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <200606021527.15947.joris@ster.kuleuven.be> Message-ID: <1149255266.27604.32.camel@localhost.localdomain> I meant "numeric is slower than numpy", that is, modern numpy (0.9.8) appears to lose out majorly to numeric. This doesn't make much sense, so I figured there was something wrong with my benchmark, or my numpy install, and wanted to check if others had seen this sort of behavior. ...Eric On Fri, 2006-06-02 at 15:27 +0200, Joris De Ridder wrote: > > On Friday 02 June 2006 14:58, Eric Jonas wrote: > [EJ]: Hello! I've been using numeric for a while, and the recent list traffic > [EJ]: prompted me to finally migrate all my old code. On a whim, we were > [EJ]: benchmarking numpy vs numeric and have been lead to the conclusion that > [EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > [EJ]: but 300 ms in numeric. > > You mean the other way around? > > I also tested numpy vs numarray, and numarray seems to be roughly 3 times > faster than numpy for your particular testcase. > > J. 
> > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From filip at ftv.pl Fri Jun 2 09:48:23 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Fri, 2 Jun 2006 15:48:23 +0200 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1149253130.27604.29.camel@localhost.localdomain> References: <1149253130.27604.29.camel@localhost.localdomain> Message-ID: <1231363019.20060602154823@gmail.com> Hi, It seems that in Numeric the matrixmultiply is alias for dot function, which "uses the BLAS optimized routines where possible", as the help() says. In NumPy (0.9.6, not upgraded yet to 0.9.8), the matrixmultiply is a function of numpy.core.multiarray, while dot refers to numpy.core._dotblas. On my system the timings and results with numpy.dot are quite similar to that with Numeric.matrixmultiply. So the next question is what's the difference between matrixmultiply and dot in NumPy? Filip > Hello! I've been using numeric for a while, and the recent list traffic > prompted me to finally migrate all my old code. On a whim, we were > benchmarking numpy vs numeric and have been lead to the conclusion that > numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > but 300 ms in numeric. > Now, of course, I don't believe this, but I can't figure out what we're > doing wrong; I'm not the only person who has looked at this code, so can > anyone tell me what we're doing wrong? 
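Filip's question above comes down to this: matrixmultiply and dot compute the same product, and only dot is swapped for the BLAS-backed implementation. A small sketch, assuming a modern NumPy (where matrixmultiply has since been removed entirely), checking BLAS-backed dot against a naive triple loop:

```python
# Sketch (not from the thread), assuming a modern NumPy: BLAS-backed
# np.dot and a naive triple-loop matrix multiply differ only in speed,
# not in result.
import numpy as np

def naive_matmul(a, b):
    # The O(n*m*k) schoolbook multiply: what an unoptimized path
    # effectively computes.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.random((20, 30))
b = rng.random((30, 10))
print(np.allclose(np.dot(a, b), naive_matmul(a, b)))  # prints: True
```

The gap in the thread's timings is therefore purely about which implementation the name is bound to, not about what is computed.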
From gnurser at googlemail.com Fri Jun 2 10:16:57 2006 From: gnurser at googlemail.com (George Nurser) Date: Fri, 2 Jun 2006 15:16:57 +0100 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1231363019.20060602154823@gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s. while a sans-BLAS Numeric.matrixmultiply takes 12s. The first 100 results from numpy.dot and numpy.matrixmultiply are identical .... Use dot;) --George. On 02/06/06, Filip Wasilewski wrote: > Hi, > > It seems that in Numeric the matrixmultiply is alias for dot function, > which "uses the BLAS optimized routines where possible", as the help() > says. > > In NumPy (0.9.6, not upgraded yet to 0.9.8), the matrixmultiply is a > function of numpy.core.multiarray, while dot refers to > numpy.core._dotblas. > > On my system the timings and results with numpy.dot are quite similar > to that with Numeric.matrixmultiply. > > So the next question is what's the difference between matrixmultiply and > dot in NumPy? > > Filip > > > > Hello! I've been using numeric for a while, and the recent list traffic > > prompted me to finally migrate all my old code. On a whim, we were > > benchmarking numpy vs numeric and have been lead to the conclusion that > > numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > > but 300 ms in numeric. > > > Now, of course, I don't believe this, but I can't figure out what we're > > doing wrong; I'm not the only person who has looked at this code, so can > > anyone tell me what we're doing wrong? 
> > > > > > > _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From rays at blue-cove.com Fri Jun 2 10:27:27 2006
From: rays at blue-cove.com (RayS)
Date: Fri, 02 Jun 2006 07:27:27 -0700
Subject: [Numpy-discussion] numpy vs numeric benchmarks
In-Reply-To:
References:
Message-ID: <6.2.3.4.2.20060602072155.02bc4a30@blue-cove.com>

Favorable: numpy creates arrays much faster, and fft seems a tad faster -- a useful metric, I think, for O-scope and ADC apps. I get

0.0039054614015815738
0.0019759541205486885
0.023268623246481726
0.0023570392204637913

from the below on a PIII 600...

from time import *

n = 4096
r = range(n)

#numpy
import numpy
arr = numpy.array
# array creation
t0 = clock()
for i in r: a = arr(r)
(clock()-t0)/float(n)

#fft of n
fftn = numpy.fft
t0 = clock()
for i in r: f = fftn(a)
(clock()-t0)/float(n)

#Numeric
import Numeric
arr = Numeric.array
# array creation
t0 = clock()
for i in r: a = arr(r)
(clock()-t0)/float(n)

#fft of n
from FFT import *
t0 = clock()
for i in r: f = fft(a)
(clock()-t0)/float(n)

From svetosch at gmx.net Fri Jun 2 11:38:46 2006
From: svetosch at gmx.net (Sven Schreiber)
Date: Fri, 02 Jun 2006 17:38:46 +0200
Subject: [Numpy-discussion] rand argument question
Message-ID: <44805B86.4080001@gmx.net>

Hi all, this may be a stupid question, but why doesn't rand accept a shape tuple as argument? I find the difference between the argument types of rand and (for example) zeros somewhat confusing. (See below for illustration.) Can anybody offer an intuition/explanation? (This is still on 0.9.6 because of matplotlib compatibility.)

Thanks much,
Sven

>>> import numpy as n
>>> n.rand((3,2))
Traceback (most recent call last):
File "", line 1, in ?
File "mtrand.pyx", line 433, in mtrand.RandomState.rand File "mtrand.pyx", line 361, in mtrand.RandomState.random_sample File "mtrand.pyx", line 131, in mtrand.cont0_array TypeError: an integer is required >>> n.zeros((3,2)) array([[0, 0], [0, 0], [0, 0]]) >>> n.zeros(3,2) Traceback (most recent call last): File "", line 1, in ? TypeError: data type not understood >>> n.rand(3,2) array([[ 0.27017528, 0.98280906], [ 0.58592731, 0.63706962], [ 0.74705193, 0.65980377]]) >>> From robert.kern at gmail.com Fri Jun 2 12:09:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:09:02 -0500 Subject: [Numpy-discussion] Free SciPy 2006 porting service In-Reply-To: <21591493.1149249276445.JavaMail.root@ps5> References: <21591493.1149249276445.JavaMail.root@ps5> Message-ID: r.demaria at tiscali.it wrote: > Hi, > > maybe is not what you meant, but presently I'm looking for a sparse > eigenvalue solver. As far as I've understood the ARPACK bindings are > still missing. This library is one of the most used, so I think it > would be very useful to have integrated in numpy. No, that isn't what he meant. He wants to help projects that are currently using Numeric and numarray convert to numpy. In any case, ARPACK certainly won't go into numpy. It might go into scipy if you are willing to contribute wrappers for it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 2 12:16:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:16:31 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: <44805B86.4080001@gmx.net> References: <44805B86.4080001@gmx.net> Message-ID: Sven Schreiber wrote: > Hi all, > this may be a stupid question, but why doesn't rand accept a shape tuple > as argument? 
I find the difference between the argument types of rand > and (for example) zeros somewhat confusing. (See below for > illustration.) Can anybody offer an intuition/explanation? rand() is a convenience function. It's only purpose is to offer this convenient API. If you want a function that takes tuples, use numpy.random.random(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 2 12:16:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:16:46 -0500 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1231363019.20060602154823@gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: Filip Wasilewski wrote: > So the next question is what's the difference between matrixmultiply and > dot in NumPy? matrixmultiply is a deprecated compatibility name. Always use dot. dot will get replaced with the optimized dotblas implementation when an optimized BLAS is available. matrixmultiply will not (probably not intentionally, but I'm happy with the current situation). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Jun 2 12:57:18 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 02 Jun 2006 09:57:18 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: <44806DEE.5080908@noaa.gov> Robert Kern wrote: >> As I need Numeric and numarray compatibility at this point, it seems the > Ah. 
It might help if you said that up front.

Yes, it would, but that would mean accepting that I need to keep backward compatibility -- I'm still hoping!

> x = arange(minx, maxx+step, step) # oy.
> y = arange(miny, maxy+step, step)
>
> nx = len(x)
> ny = len(y)
>
> x = repeat(x, ny)
> y = concatenate([y] * nx)
> points = transpose([x, y])

Somehow I never think to use repeat. And why use repeat for x and concatenate for y?

Rob Hooft wrote:
> How about something like:
>
> >>> k=Numeric.repeat(range(0,5+1),Numeric.ones(6)*7)
> >>> l=Numeric.resize(range(0,6+1),[42])
> >>> zone=Numeric.concatenate((k[:,Numeric.NewAxis],l[:,Numeric.NewAxis]),axis=1)
>
> This is the same speed as Robert Kern's solution for large arrays, a bit
> slower for small arrays. Both are a little faster than yours.

Did you time them? And yours only handles integers. This is my timing:

For small arrays:
Using numpy:
  The Numpy way took: 0.020000 seconds
  My way took: 0.010000 seconds
  Robert's way took: 0.020000 seconds
Using Numeric:
  My way took: 0.010000 seconds
  Robert's way took: 0.020000 seconds
Using numarray:
  My way took: 0.070000 seconds
  Robert's way took: 0.120000 seconds
Number of X: 4
Number of Y: 3

So my way was faster with all three packages for small arrays.

For Medium arrays (the size I'm most likely to be using):
Using numpy:
  The Numpy way took: 0.120000 seconds
  My way took: 0.040000 seconds
  Robert's way took: 0.030000 seconds
Using Numeric:
  My way took: 0.040000 seconds
  Robert's way took: 0.030000 seconds
Using numarray:
  My way took: 0.090000 seconds
  Robert's way took: 1.070000 seconds
Number of X: 21
Number of Y: 41

Now we're getting close, with mine faster with numarray, but Robert's faster with Numeric and numpy.
For Large arrays (still not very big, but as big as I'm likely to need):
Using numpy:
  The Numpy way took: 4.200000 seconds
  My way took: 0.660000 seconds
  Robert's way took: 0.340000 seconds
Using Numeric:
  My way took: 0.590000 seconds
  Robert's way took: 0.500000 seconds
Using numarray:
  My way took: 0.390000 seconds
  Robert's way took: 20.340000 seconds
Number of X: 201
Number of Y: 241

So Robert's way is still faster with Numeric and numpy, but much slower with numarray. As it's close with numpy and Numeric, but mine is much faster with numarray, I think I'll stick with mine.

I note that while the numpy way, using mgrid(), is nice and clean to write, it is slower across the board. Perhaps mgrid() could use some optimization.

This is exactly why I had suggested that one thing I wanted for numpy was an as-easy-to-use-as-possible C/C++ API. It would be nice to be able to write as many as possible of these kinds of utility functions in C as we could.

In case anyone is interested, I'm using this to draw a grid of dots on the screen for my wxPython FloatCanvas. Every time the image is changed or moved or zoomed, I need to re-calculate the points before drawing them, so it's nice to have it fast. I've enclosed my test code.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

-------------- next part --------------
A non-text attachment was scrubbed...
Name: junk.py Type: text/x-python Size: 1915 bytes Desc: not available URL: From oliphant.travis at ieee.org Fri Jun 2 13:07:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:07:27 -0600 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: <4480704F.2070504@ieee.org> Robert Kern wrote: > Filip Wasilewski wrote: > > >> So the next question is what's the difference between matrixmultiply and >> dot in NumPy? >> > > matrixmultiply is a deprecated compatibility name. Always use dot. dot will get > replaced with the optimized dotblas implementation when an optimized BLAS is > available. matrixmultiply will not (probably not intentionally, but I'm happy > with the current situation). > It's true that matrixmultiply has been deprecated for some time (at least 8 years...) The basic dot function gets over-written with a BLAS-optimized version but the matrixmultiply does not get changed. So replace matrixmultiply with dot. It wasn't an intentional thing, but perhaps it will finally encourage people to always use dot. -Travis From oliphant.travis at ieee.org Fri Jun 2 13:08:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:08:32 -0600 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <200606021527.15947.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <200606021527.15947.joris@ster.kuleuven.be> Message-ID: <44807090.5000207@ieee.org> Joris De Ridder wrote: > On Friday 02 June 2006 14:58, Eric Jonas wrote: > [EJ]: Hello! I've been using numeric for a while, and the recent list traffic > [EJ]: prompted me to finally migrate all my old code. 
On a whim, we were > [EJ]: benchmarking numpy vs numeric and have been lead to the conclusion that > [EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > [EJ]: but 300 ms in numeric. > > You mean the other way around? > > I also tested numpy vs numarray, and numarray seems to be roughly 3 times > faster than numpy for your particular testcase. > Please post your test cases. We are trying to remove any slowness, but need testers to do it. -Travis From joris at ster.kuleuven.be Fri Jun 2 13:09:01 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 2 Jun 2006 19:09:01 +0200 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> Message-ID: <200606021909.01239.joris@ster.kuleuven.be> Just to be sure, what exactly is affected when one uses the slower algorithms when neither BLAS or LAPACK is installed? For sure it will affect almost every function in numpy.linalg, as they use LAPACK_lite. And I guess that in numpy.core the dot() function uses the lite numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that are affected? Joris On Friday 02 June 2006 16:16, George Nurser wrote: [GN]: Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s. [GN]: [GN]: while a sans-BLAS Numeric.matrixmultiply takes 12s. [GN]: [GN]: The first 100 results from numpy.dot and numpy.matrixmultiply are identical .... [GN]: [GN]: Use dot;) [GN]: [GN]: --George. 
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From oliphant.travis at ieee.org Fri Jun 2 13:19:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:19:05 -0600 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <200606021909.01239.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> Message-ID: <44807309.8010500@ieee.org> Joris De Ridder wrote: > Just to be sure, what exactly is affected when one uses the slower > algorithms when neither BLAS or LAPACK is installed? For sure it > will affect almost every function in numpy.linalg, as they use > LAPACK_lite. And I guess that in numpy.core the dot() function > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other > numpy functions that are affected? > convolve could also be affected (the basic internal _dot function gets replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the only function that uses dot internally. In the future we hope to be optimizing ufuncs as well. -Travis From faltet at carabos.com Fri Jun 2 13:18:56 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 2 Jun 2006 19:18:56 +0200 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <4480704F.2070504@ieee.org> References: <1149253130.27604.29.camel@localhost.localdomain> <4480704F.2070504@ieee.org> Message-ID: <200606021918.57134.faltet@carabos.com> A Divendres 02 Juny 2006 19:07, Travis Oliphant va escriure: > Robert Kern wrote: > > Filip Wasilewski wrote: > >> So the next question is what's the difference between matrixmultiply and > >> dot in NumPy? > > > > matrixmultiply is a deprecated compatibility name. Always use dot. dot > > will get replaced with the optimized dotblas implementation when an > > optimized BLAS is available. 
matrixmultiply will not (probably not
> > intentionally, but I'm happy with the current situation).
>
> It's true that matrixmultiply has been deprecated for some time (at
> least 8 years...) The basic dot function gets over-written with a
> BLAS-optimized version but the matrixmultiply does not get changed. So
> replace matrixmultiply with dot. It wasn't an intentional thing, but
> perhaps it will finally encourage people to always use dot.

So, why not issue a DeprecationWarning when matrixmultiply is used?

--
>0,0<   Francesc Altet     http://www.carabos.com/
V V     Cárabos Coop. V.   Enjoy Data
 "-"

From jonas at mwl.mit.edu Fri Jun 2 13:28:07 2006
From: jonas at mwl.mit.edu (Eric Jonas)
Date: Fri, 02 Jun 2006 13:28:07 -0400
Subject: [Numpy-discussion] Numpy, BLAS & LAPACK
In-Reply-To: <44807309.8010500@ieee.org>
References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> <44807309.8010500@ieee.org>
Message-ID: <1149269287.27604.38.camel@localhost.localdomain>

Is there some way, either within numpy or at build-time, to verify you're using BLAS/LAPACK? Is there one we should be using?

...Eric

On Fri, 2006-06-02 at 11:19 -0600, Travis Oliphant wrote:
> Joris De Ridder wrote:
> > Just to be sure, what exactly is affected when one uses the slower
> > algorithms when neither BLAS or LAPACK is installed? For sure it
> > will affect almost every function in numpy.linalg, as they use
> > LAPACK_lite. And I guess that in numpy.core the dot() function
> > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other
> > numpy functions that are affected?
> >
> convolve could also be affected (the basic internal _dot function gets
> replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the
> only function that uses dot internally.
>
> In the future we hope to be optimizing ufuncs as well.
> > -Travis > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Fri Jun 2 13:31:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:31:09 -0600 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <1149269287.27604.38.camel@localhost.localdomain> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> <44807309.8010500@ieee.org> <1149269287.27604.38.camel@localhost.localdomain> Message-ID: <448075DD.30804@ieee.org> Eric Jonas wrote: > Is there some way, either within numpy or at build-time, to verify > you're using BLAS/LAPACK? Is there one we should be using? > > Check to see if the id of numpy.dot is the same as numpy.core.multiarray.dot -Travis From aisaac at american.edu Fri Jun 2 13:41:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 13:41:27 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: <44805B86.4080001@gmx.net> References: <44805B86.4080001@gmx.net> Message-ID: On Fri, 02 Jun 2006, Sven Schreiber apparently wrote: > why doesn't rand accept a shape tuple as argument? I find > the difference between the argument types of rand and (for > example) zeros somewhat confusing. ... Can anybody offer > an intuition/explanation? Backward compatibility, I believe. You are not alone in finding this odd and inconsistent. I am hoping for a change by 1.0, but I am not very hopeful. Robert always points out that if you want the consistent interface, you can always import functions from the 'random' module. I have never been able to understand this as a response to the point you are making.
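The identity check Travis suggests can be sketched as follows, assuming the 2006-era module layout in which numpy.core.multiarray.dot is the plain C implementation and an optimized _dotblas version replaces numpy.dot at import time when an accelerated BLAS is found; later releases reorganized these modules, so the result is only meaningful for that era:

```python
import numpy as np

# In 2006-era numpy, numpy.core.multiarray.dot was the plain C dot and
# an optimized _dotblas version replaced numpy.dot at import time when
# an accelerated BLAS was found.  Later numpy reorganized these modules
# (and folded BLAS into multiarray itself), so guard the lookup and
# treat the answer as era-specific.
try:
    plain_dot = np.core.multiarray.dot
except AttributeError:
    plain_dot = None  # module layout has changed in this numpy

if plain_dot is not None:
    print("numpy.dot is the plain multiarray dot:", np.dot is plain_dot)
```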
I take it the core argument goes something like this: - rand and randn are convenience functions * if you do not find them convenient, don't use them - they are in wide use, so it is too late to change them - testing the first argument to see whether it is a tuple or an int is so aesthetically objectionable that its ugliness outweighs the benefits users might get from access to a more consistent interface This is one place where I believe a forward looking (i.e., think about new users) vision would force a small change in these *convenience* functions that will have payoffs both in ease of use and in eliminating this recurrent question from discussion lists. Cheers, Alan Isaac From jonathan.taylor at stanford.edu Fri Jun 2 14:08:25 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 02 Jun 2006 11:08:25 -0700 Subject: [Numpy-discussion] searchsorted Message-ID: <44807E99.6060105@stanford.edu> I was wondering if there was an easy way to get searchsorted to be "right-continuous" instead of "left-continuous". By continuity, I am talking about the continuity of the function "count" below...

>>> import numpy as N
>>>
>>> x = N.arange(20)
>>> x.searchsorted(9)
9
>>> import numpy as N
>>>
>>> x = N.arange(20)
>>>
>>> def count(u):
...     return x.searchsorted(u)
...
>>> count(9)
9
>>> count(9.01)
10
>>>

Thanks, Jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305
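The right-continuous variant being asked for is what later numpy releases expose through searchsorted's side keyword; a short sketch (the keyword is not in the 0.9.x numpy of this discussion, so this assumes a later release):

```python
import numpy as np

x = np.arange(20)

# Default ("left") insertion point comes before equal entries:
assert x.searchsorted(9) == 9

# side='right' gives the right-continuous count discussed here:
assert x.searchsorted(9, side='right') == 10

# A value just above 9 lands at 10 either way:
assert x.searchsorted(9.01) == 10
assert x.searchsorted(9.01, side='right') == 10
```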
From robert.kern at gmail.com Fri Jun 2 14:35:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 13:35:39 -0500 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <44806DEE.5080908@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> <44806DEE.5080908@noaa.gov> Message-ID: Christopher Barker wrote: > Robert Kern wrote: >> x = repeat(x, ny) >> y = concatenate([y] * nx) >> points = transpose([x, y]) > > Somehow I never think to use repeat. And why use repeat for x and > concatenate for y? I guess you could use repeat() on y[newaxis] and then flatten it. y = repeat(y[newaxis], nx).ravel() > Using numpy > The Numpy way took: 0.020000 seconds > My way took: 0.010000 seconds > Robert's way took: 0.020000 seconds > Using Numeric > My way took: 0.010000 seconds > Robert's way took: 0.020000 seconds > Using numarray > My way took: 0.070000 seconds > Robert's way took: 0.120000 seconds > Number of X: 4 > Number of Y: 3 Those timings look real funny. I presume you are using a UNIX and time.clock(). Don't do that. It's a very poor timer on UNIX. Use time.time() on UNIX and time.clock() on Windows. Even better, please use timeit.py instead. Tim Peters did a lot of work to make timeit.py do the right thing. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Fri Jun 2 14:50:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 13:50:56 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Sven Schreiber apparently wrote: > >>why doesn't rand accept a shape tuple as argument? I find >>the difference between the argument types of rand and (for >>example) zeros somewhat confusing. ... Can anybody offer >>an intuition/explanation? > > Backward compatability, I believe. You are not alone in > finding this odd and inconsistent. I am hoping for a change > by 1.0, but I am not very hopeful. > > Robert always points out that if you want the consistent > interface, you can always import functions from the 'random' > module. I have never been able to understand this as > a response to the point you are making. > > I take it the core argument goes something like this: > - rand and randn are convenience functions > * if you do not find them convenient, don't use them > - they are in wide use, so it is too late to change them > - testing the first argument to see whether it is a tuple or > an int so aesthetically objectionable that its ugliness > outweighs the benefits users might get from access to > a more consistent interface My argument does not include the last two points. - They are in wide use because they are convenient and useful. - Changing rand() and randn() to accept a tuple like random.random() and random.standard_normal() does not improve anything. Instead, it adds confusion for users who are reading code and seeing the same function being called in two different ways. - Users who want to see numpy *only* expose a single calling scheme for top-level functions should instead ask for rand() and randn() to be removed from the top numpy namespace. * Backwards compatibility might prevent this. 
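For reference, the two calling conventions under discussion can be put side by side; a sketch using the names as they exist in numpy.random (the tuple-taking functions are the ones Robert refers to via the 'random' module):

```python
import numpy as np

# Convenience functions: shape passed as separate integer arguments.
a = np.random.rand(2, 3)
b = np.random.randn(2, 3)

# The consistent, tuple-taking interface: shape passed as one tuple.
c = np.random.random((2, 3))
d = np.random.standard_normal((2, 3))

assert a.shape == b.shape == c.shape == d.shape == (2, 3)
```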
> This is one place where I believe a forward looking (i.e., > think about new users) vision would force a small change in > these *convenience* functions that will have payoffs both in > ease of use and in eliminating this recurrent question from > discussion lists. *Changing* the API of rand() and randn() doesn't solve any problem. *Removing* them might. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Fri Jun 2 15:34:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 15:34:08 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: On Fri, 02 Jun 2006, Robert Kern apparently wrote: > Changing the API of rand() and randn() doesn't solve any > problem. Removing them might. I think this is too blunt an argument. For example, use of the old interface might issue a deprecation warning. This would make it very likely that all new code uses the new interface. I would also be fine with demoting these to the Numeric compatibility module, although I find that to be the inferior choice (since it means a loss of convenience). Unless one of these changes is made, new users will **forever** be asking this same question. And either way, making the sacrifices needed for greater consistency seems like a good idea *before* 1.0. Cheers, Alan From cookedm at physics.mcmaster.ca Fri Jun 2 15:46:57 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 2 Jun 2006 15:46:57 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <200606021027.45392.joris@ster.kuleuven.be> References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov> <200606021027.45392.joris@ster.kuleuven.be> Message-ID: <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> On Fri, 2 Jun 2006 10:27:45 +0200 Joris De Ridder wrote: > [CB]: I was reacting to a post a while back that suggested > pointing people [CB]: searching for numpy to the main scipy page, > which I did not think was a [CB]: good idea. > > That would be my post :o) > > The reasons why I suggested this are > > 1) www.scipy.org is at the moment the most informative site on numpy > 2) of all sites www.scipy.org looks currently most professional > 3) a wiki-style site where everyone can contribute is really great > 4) I like information to be centralized. Having to check pointers, > docs and cookbooks on two different sites is inefficient > 5) Two different sites inevitably implies some duplication of the work > > Just as you, I am not (yet) a scipy user, I only have numpy installed > at the moment. The principal reason is the same as the one you > mentioned. But for me this is an extra motivation to merge scipy.org > and numpy.org: > > 6) merging scipy.org and numpy.org will hopefully lead to a larger > SciPy community and this in turn hopefully leads to user-friendly > installation procedures. My only concern with this is that numpy is positioned for a wider audience: everybody who needs arrays, and the extra speed that numpy gives, but doesn't need what scipy gives. So merging the two could lead to confusion on what provides what, and what you need to do which. For instance, I don't want potential numpy users to be directed to scipy.org, and be turned off with all the extra stuff it seems to have (that scipy, not numpy, provides). But I think this can be handled if we approach scipy.org as serving both purposes.
But I think this is the best option, considering how much crossover there is. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri Jun 2 15:56:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 2 Jun 2006 15:56:32 -0400 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <200606021909.01239.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> Message-ID: <20060602155632.010b1dc5@arbutus.physics.mcmaster.ca> On Fri, 2 Jun 2006 19:09:01 +0200 Joris De Ridder wrote: > Just to be sure, what exactly is affected when one uses the slower > algorithms when neither BLAS or LAPACK is installed? For sure it > will affect almost every function in numpy.linalg, as they use > LAPACK_lite. And I guess that in numpy.core the dot() function > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other > numpy functions that are affected? Using a better default dgemm for matrix multiplication when an optimized BLAS isn't available has been on my to-do list for a while. I think it can be sped up by a large amount on a generic machine by using blocking of the matrices. Personally, I perceive no difference between my g77-compiled LAPACK, and the gcc-compiled f2c'd routines in lapack_lite, if an optimized BLAS is used. And lapack_lite has fewer bugs than the version of LAPACK available off of netlib.org, as I used the latest patches I could scrounge up (mostly from Debian). -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Fri Jun 2 15:56:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 14:56:46 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Robert Kern apparently wrote: > >>Changing the API of rand() and randn() doesn't solve any >>problem. Removing them might. > > I think this is too blunt an argument. For example, > use of the old interface might issue a deprecation warning. > This would make it very likely that all new code use the new > interface. My point is that there is no need to change rand() and randn() to the "new" interface. The "new" interface is already there: random.random() and random.standard_normal(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Fri Jun 2 16:19:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 16:19:51 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: >> On Fri, 02 Jun 2006, Robert Kern apparently wrote: >>> Changing the API of rand() and randn() doesn't solve any >>> problem. Removing them might. > Alan G Isaac wrote: >> I think this is too blunt an argument. For example, >> use of the old interface might issue a deprecation warning. >> This would make it very likely that all new code use the new >> interface. On Fri, 02 Jun 2006, Robert Kern apparently wrote: > My point is that there is no need to change rand() and randn() to the "new" > interface. The "new" interface is already there: random.random() and > random.standard_normal(). Yes of course; that has always been your point. 
In an earlier post, I indicated that this is your usual response. What your point does not address: the question about rand and randn keeps cropping up on this list. My point is: numpy should take a step so that this question goes away, rather than maintain the status quo and see it crop up continually. (I.e., its recurrence should be understood to signal a problem.) Cheers, Alan PS I'll shut up about this now. From robert.kern at gmail.com Fri Jun 2 16:42:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 15:42:31 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Robert Kern apparently wrote: > >>My point is that there is no need to change rand() and randn() to the "new" >>interface. The "new" interface is already there: random.random() and >>random.standard_normal(). > > Yes of course; that has always been your point. > In an earlier post, I indicated that this is your usual response. > > What your point does not address: > the question about rand and randn keeps cropping up on this list. > > My point is: > numpy should take a step so that this question goes away, > rather than maintain the status quo and see it crop up continually. > (I.e., its recurrence should be understood to signal a problem.) I'll check in a change to the docstring later today. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob at hooft.net Fri Jun 2 17:06:26 2006 From: rob at hooft.net (Rob Hooft) Date: Fri, 02 Jun 2006 23:06:26 +0200 Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: <44806DEE.5080908@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: <4480A852.5030509@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Christopher Barker wrote: | Did you time them? And yours only handles integers. Yes I did, check the attachment of my previous message for a python module to time the three, with completely different results from yours (I'm using Numeric). The attachment also contains a floatified version of my demonstration. Rob - -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEgKhRH7J/Cv8rb3QRAlk1AJ4vyt1F1Lr54sGMjHkp1hGXzcowJwCeMD5O CqkaDTpKOdDrAy7+v3Py7kw= =jnqb -----END PGP SIGNATURE----- From Chris.Barker at noaa.gov Fri Jun 2 18:09:27 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 02 Jun 2006 15:09:27 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <4480A852.5030509@hooft.net> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> <44806DEE.5080908@noaa.gov> <4480A852.5030509@hooft.net> Message-ID: <4480B717.4050000@noaa.gov> Rob Hooft wrote: > Christopher Barker wrote: > | Did you time them? And yours only handles integers. > > Yes I did, check the attachment of my previous message for a python > module to time the three, Sorry about that, I didn't notice that. > with completely different results from yours > (I'm using Numeric). I ran it and got similar results to mine. Frankly, for the size problems I'm dealing with, they are all about the same, except for under Numarray, where mine is fastest, yours second, and Robert's third -- by a wide margin!
Another reason I'm glad numpy is built on the Numeric code:

Using numarray
My way took: 0.394555 seconds
Robert's way took: 20.590545 seconds
Rob's way took: 4.802346 seconds
Number of X: 201
Number of Y: 241

Using Numeric
My way took: 0.593319 seconds
Robert's way took: 0.523235 seconds
Rob's way took: 0.579756 seconds

Robert's way has a pretty decent edge under numpy:

Using numpy
My way took: 0.686741 seconds
Robert's way took: 0.357887 seconds
Rob's way took: 0.796977 seconds

And I'm using time(), rather than clock() now, though it didn't really change anything. I suppose I should figure out timeit.py Thanks for all your help on this, -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Fri Jun 2 18:28:25 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 16:28:25 -0600 Subject: [Numpy-discussion] Updates to NumPy Message-ID: <4480BB89.5060101@ieee.org> I've been busy with NumPy and it has resulted in some C-API changes. So, after checking out a new SVN version of NumPy you will need to re-build extension modules (It stinks for me too --- SciPy takes a while to build). The API changes have made it possible to allow user-defined data-types to optionally participate in the coercion and casting infrastructure. Previously, casting was limited to built-in data-types. Now, there is a mechanism for users to define casting to and from their own data-type (and whether or not it can be done safely and whether or not a particular kind of user-defined scalar can be cast --- remember a scalar mixed with an array has a different set of casting rules). This should make user-defined data-types much more useful, but the facility needs to be tested. Does anybody have a data-type they want to add to try out the new system.
The restriction on adding another data-type is that it must have a fixed element size (a variable-precision float for example would have to use a pointer to the actual structure as the "data-type"). -Travis From joris at ster.kuleuven.ac.be Fri Jun 2 19:03:41 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat, 3 Jun 2006 01:03:41 +0200 Subject: [Numpy-discussion] Suggestions for NumPy Message-ID: <1149289421.4480c3cde8e2e@webmail.ster.kuleuven.be> [DC]: My only concern with this is numpy is positioned for a wider audience: [DC]: everybody who needs arrays, and the extra speed that numpy gives, but [DC]: doesn't need what scipy gives. So merging the two could lead to [DC]: confusion on what provides what, and what you need to do which. I completely agree with this. SciPy and NumPy on one site, yes, but not so interwoven that it gets confusing or even plain useless for NumPy-only users. J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From tim.hochberg at cox.net Fri Jun 2 23:15:33 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 02 Jun 2006 20:15:33 -0700 Subject: [Numpy-discussion] fromiter Message-ID: <4480FED5.6010300@cox.net> Some time ago some people, myself included, were making some noise about having 'array' iterate over iterable objects producing ndarrays in a manner analogous to the way sequences are treated. I finally got around to looking at it seriously and came to the following three conclusions: 1. All I really care about is the 1D case where dtype is specified. This case should be relatively easy to implement so that it's fast. Most other cases are not likely to be particularly faster than converting the iterators to lists at the Python level and then passing those lists to array. 2. 'array' already has plenty of special cases. I'm reluctant to add more. 3. Adding this to 'array' would be non-trivial.
The more cases we tried to make fast, the more likely that some of the paths would be buggy. Regardless of how we did it though, some cases would be much slower than others, which would probably be surprising. So, with that in mind, I retreated a little and implemented the simplest thing that did the stuff that I cared about: fromiter(iterable, dtype, count) => ndarray of type dtype and length count This is essentially the same interface as fromstring except that the values of dtype and count are always required. Some primitive benchmarking indicates that 'fromiter(generator, dtype, count)' is about twice as fast as 'array(list(generator))' for medium to large arrays. When producing very large arrays, the advantage of fromiter is larger, presumably because 'list(generator)' causes things to start swapping. Anyway I'm about to bail out of town till the middle of next week, so it'll be a while till I can get it clean enough to submit in some form or another. Plenty of time for people to think of why it's a terrible idea ;-) -tim From charlesr.harris at gmail.com Fri Jun 2 23:30:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 2 Jun 2006 21:30:05 -0600 Subject: [Numpy-discussion] searchsorted In-Reply-To: <44807E99.6060105@stanford.edu> References: <44807E99.6060105@stanford.edu> Message-ID: Jonathan, I had a patch for this that applied to numarray way back when. If folks feel there is a need, I could probably try to get it running on numpy. Bit of a learning curve (for me), though. Chuck On 6/2/06, Jonathan Taylor wrote: > > I was wondering if there was an easy way to get searchsorted to be > "right-continuous" instead of "left-continuous". > > By continuity, I am talking about the continuity of the function "count" > below... > > >>> import numpy as N > >>> > >>> x = N.arange(20) > >>> x.searchsorted(9) > 9 > >>> import numpy as N > >>> > >>> x = N.arange(20) > >>> > >>> def count(u): > ... return x.searchsorted(u) > ...
> >>> count(9) > 9 > >>> count(9.01) > 10 > >>> > > Thanks, > > Jonathan > > -- > ------------------------------------------------------------------------ > I'm part of the Team in Training: please support our efforts for the > Leukemia and Lymphoma Society! > > http://www.active.com/donate/tntsvmb/tntsvmbJTaylor > > GO TEAM !!! > > ------------------------------------------------------------------------ > Jonathan Taylor Tel: 650.723.9230 > Dept. of Statistics Fax: 650.725.8977 > Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo > 390 Serra Mall > Stanford, CA 94305 > > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Sat Jun 3 03:25:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 03 Jun 2006 01:25:42 -0600 Subject: [Numpy-discussion] fromiter In-Reply-To: <4480FED5.6010300@cox.net> References: <4480FED5.6010300@cox.net> Message-ID: <44813976.2010808@ieee.org> Tim Hochberg wrote: > Some time ago some people, myself including, were making some noise > about having 'array' iterate over iterable object producing ndarrays in > a manner analogous to they way sequences are treated. I finally got > around to looking at it seriously and once I came to the following three > conclusions: > > 1. All I really care about is the 1D case where dtype is specified. > This case should be relatively easy to implement so that it's > fast. Most other cases are not likely to be particularly faster > than converting the iterators to lists at the Python level and > then passing those lists to array. > 2. 'array' already has plenty of special cases. I'm reluctant to add > more. > 3. Adding this to 'array' would be non-trivial. 
The more cases we > tried to make fast, the more likely that some of the paths would > be buggy. Regardless of how we did it though, some cases would be > much slower than other, which would probably be suprising. > Good job. I just added a function called fromiter for this very purpose. Right now, it's just a stub that calls list(obj) first and then array. Your code would be a perfect fit for it. I think count could be optional, though, to handle cases where the count can be determined from the object. We'll look forward to your check-in. -Travis From svetosch at gmx.net Sat Jun 3 05:52:57 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 03 Jun 2006 11:52:57 +0200 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: <44815BF9.3060504@gmx.net> Robert Kern schrieb: > > My point is that there is no need to change rand() and randn() to the "new" > interface. The "new" interface is already there: random.random() and > random.standard_normal(). > Ok thanks for the responses and sorry for not searching the archives about this. I tend to share Alan's point of view, but I also understand that it may be too late now to change the way rand is called. -Sven From jonathan.taylor at utoronto.ca Fri Jun 2 18:04:32 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Fri, 2 Jun 2006 18:04:32 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov> <200606021027.45392.joris@ster.kuleuven.be> <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> Message-ID: <463e11f90606021504h742e92e4t5ff418d1e29e426@mail.gmail.com> My suggestion would be to have both numpy.org and scipy.org be the exact same page, but make it extremely clear that there are two different projects on the front page. Cheers. Jon. On 6/2/06, David M.
Cooke wrote: > On Fri, 2 Jun 2006 10:27:45 +0200 > Joris De Ridder wrote: > > [CB]: I was reacting to a post a while back that suggested > > pointing people [CB]: searching for numpy to the main scipy page, > > which I did not think was a [CB]: good idea. > > > > That would be my post :o) > > > > The reasons why I suggested this are > > > > 1) www.scipy.org is at the moment the most informative site on numpy > > 2) of all sites www.scipy.org looks currently most professional > > 3) a wiki-style site where everyone can contribute is really great > > 4) I like information to be centralized. Having to check pointers, > > docs and cookbooks on two different sites is inefficient > > 5) Two different sites inevitably implies some duplication of the work > > > > Just as you, I am not (yet) a scipy user, I only have numpy installed > > at the moment. The principal reason is the same as the one you > > mentioned. But for me this is an extra motivation to merge scipy.org > > and numpy.org: > > > > 6) merging scipy.org and numpy.org will hopefully lead to a larger > > SciPy community and this in turn hopefully leads to user-friendly > > installation procedures. > > My only concern with this is numpy is positioned for a wider audience: > everybody who needs arrays, and the extra speed that numpy gives, but > doesn't need what scipy gives. So merging the two could lead to > confusion on what provides what, and what you need to do which. > For instance, I don't want potential numpy users to be directed to > scipy.org, and be turned off with all the extra stuff it seems to have > (that scipy, not numpy, provides). But I think this can be handled if > we approach scipy.org as serving both purposes. > > But I think is this the best option, considering how much crossover > there is. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. 
Cooke > http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Sat Jun 3 10:29:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 03 Jun 2006 07:29:04 -0700 Subject: [Numpy-discussion] fromiter In-Reply-To: <44813976.2010808@ieee.org> References: <4480FED5.6010300@cox.net> <44813976.2010808@ieee.org> Message-ID: <44819CB0.1020609@cox.net> Travis Oliphant wrote: >Tim Hochberg wrote: > > >>Some time ago some people, myself including, were making some noise >>about having 'array' iterate over iterable object producing ndarrays in >>a manner analogous to they way sequences are treated. I finally got >>around to looking at it seriously and once I came to the following three >>conclusions: >> >> 1. All I really care about is the 1D case where dtype is specified. >> This case should be relatively easy to implement so that it's >> fast. Most other cases are not likely to be particularly faster >> than converting the iterators to lists at the Python level and >> then passing those lists to array. >> 2. 'array' already has plenty of special cases. I'm reluctant to add >> more. >> 3. Adding this to 'array' would be non-trivial. The more cases we >> tried to make fast, the more likely that some of the paths would >> be buggy. Regardless of how we did it though, some cases would be >> much slower than other, which would probably be suprising. >> >> >> > >Good job. I just added a called fromiter for this very purpose. Right >now, it's just a stub that calls list(obj) first and then array. Your >code would be a perfect fit for it. I think count could be optional, >though, to handle cases where the count can be determined from the object. > > I'll look at that when I get back. 
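The fromiter interface discussed in this thread can be exercised in the form numpy eventually shipped, where count did end up optional (count=-1 means consume the whole iterable); a sketch:

```python
import numpy as np

# Explicit count: the fast path, no intermediate list is built.
squares = np.fromiter((i * i for i in range(10)), dtype=np.int64, count=10)
assert squares.shape == (10,) and squares[3] == 9

# count omitted: the iterable is consumed to exhaustion instead.
evens = np.fromiter((i for i in range(20) if i % 2 == 0), dtype=np.int64)
assert evens.size == 10 and evens[-1] == 18
```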
There are two ways to approach this: one is to only allow count to be
optional in those cases where the original object supports either
__len__ or __length_hint__. The advantage there is that it's easy, and
there's no chance of locking up the interpreter by passing an unbounded
generator. The other way is to figure out the length based on the
generator itself. The "natural" way to do this is to steal stuff from
array.array. However, that doesn't export a C-level interface that I can
tell (everything is declared static), so you'd be going through the
interpreter, which would potentially be slow. I guess another approach
would be to hijack PyArray_Resize and steal the resizing pattern from
array.array. I'm not sure how well that would work, though. I'll look
into it...

-tim

>We'll look forward to your check-in.
>
>-Travis

From svetosch at gmx.net Sat Jun 3 10:43:07 2006
From: svetosch at gmx.net (Sven Schreiber)
Date: Sat, 03 Jun 2006 16:43:07 +0200
Subject: [Numpy-discussion] remaining matrix-non-preserving functions
Message-ID: <44819FFB.3050507@gmx.net>

Hi all,

I just discovered that the diff function returns a numpy array even for
matrix inputs. Since I'm a card-carrying matrix fanatic, I hope that
behavior qualifies as a bug. Then I went through some (most?) other
functions/methods for which IMO it's best to return matrices if the
input is also a matrix type. I found that the following functions share
the problem of diff (see below for illustrations):

vstack and hstack (although I always use r_ and c_, and they work fine
with matrices)
outer
msort

Should I open new tickets? (Or has this already been fixed since 0.9.8,
which I used because this time building the svn version failed for me?)
Cheers,
Sven

>>> n.__version__
'0.9.8'
>>> a
matrix([[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]])
>>> b
matrix([[0, 0, 0],
        [0, 0, 0]])
>>> n.diff(a)
array([[-1,  0],
       [ 1, -1],
       [ 0,  1]])
>>> n.outer(a,b)
array([[0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]])
>>> n.msort(a)
array([[0, 0, 0],
       [0, 0, 0],
       [1, 1, 1]])
>>> n.vstack([a,b])
array([[1, 0, 0],
       [0, 1, 0],
       [0, 0, 1],
       [0, 0, 0],
       [0, 0, 0]])
>>> n.hstack([a,b.T])
array([[1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0]])
>>>

From aisaac0 at verizon.net Sat Jun 3 19:52:54 2006
From: aisaac0 at verizon.net (David Isaac)
Date: Sat, 03 Jun 2006 19:52:54 -0400
Subject: [Numpy-discussion] numpy bug
Message-ID: <005701c68769$7310bb30$2f01a8c0@JACKSONVILLE>

"Boris Borcic" wrote in message news:447f3338$1_7 at news.bluewin.ch...
> after a while trying to find the legal manner to file numpy bug
> reports, since it's a simple one, I thought maybe a first step is to
> describe the bug here. Then maybe someone will direct me to the right
> channel.
>
> So, numpy appears not to correctly compute bitwise_and.reduce and
> bitwise_or.reduce: instead of reducing over the complete axis, these
> methods only take the extremities into account. Illustration:
>
> >>> from numpy import *
> >>> bitwise_or.reduce(array([8,256,32,8]))
> 8
> >>> import numpy
> >>> numpy.__version__
> '0.9.8'
>
> Platform: Win XP SP2, Python 2.4.2

Most bug reports start on the numpy list, I believe. (See above.)

Cheers,
Alan Isaac
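A userland stopgap for the matrix-dropping functions Sven lists above is a thin wrapper that casts results back with asmatrix when any input was a matrix. A minimal sketch (the name `matrix_safe` is mine, not a numpy API; it only inspects positional arguments, so sequence-taking functions like vstack would need a variant):

```python
import numpy as np

def matrix_safe(func):
    """Wrap a numpy function so that matrix input yields matrix output."""
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        # If any positional argument was a matrix, cast the result back.
        if any(isinstance(a, np.matrix) for a in args):
            return np.asmatrix(result)
        return result
    return wrapper

# e.g. a matrix-preserving diff:
diff_m = matrix_safe(np.diff)
```

With this, `diff_m(n.asmatrix(a))` returns a matrix, while plain ndarray inputs pass through unchanged.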
From schofield at ftw.at Sun Jun 4 13:02:17 2006
From: schofield at ftw.at (Ed Schofield)
Date: Sun, 4 Jun 2006 19:02:17 +0200
Subject: [Numpy-discussion] Removing deprecated names
Message-ID: <5CE6D3C7-2478-49D3-97C3-623484D8CB66@ftw.at>

Hi all,

I've created four patches to remove deprecated names from the numpy.core
and numpy namespaces by default. The motivation for this is to provide a
clear separation, for both new users and users migrating from Numeric,
between those names that are deprecated and those that are recommended.

The first patch cleans up NumPy to avoid the use of deprecated names
internally:
http://projects.scipy.org/scipy/numpy/ticket/137

The second patch separates the Numeric-like function interfaces, which
Travis has said he doesn't want to deprecate, from the other names in
oldnumeric.py, which include the capitalized type names, arrayrange,
matrixmultiply, outerproduct, NewAxis, and a few others:
http://projects.scipy.org/scipy/numpy/ticket/138

The third patch removes the deprecated names from the numpy.core and
numpy namespaces and adds a compatibility function, numpy.Numeric(),
that imports the deprecated interfaces into the namespace as before:
http://projects.scipy.org/scipy/numpy/ticket/139

The fourth patch (also in ticket #139) is a script that adds the line
"numpy.Numeric()" to the appropriate place in all Python files in the
specified directory. I've tested this on the SciPy source tree, which
still uses the old Numeric interfaces in many places. After running the
script, SciPy runs all its 1518 unit tests without errors.

These patches make a fairly small difference to the size of NumPy's
default namespace:

>>> import numpy
>>> len(dir(numpy))
438
>>> numpy.Numeric()
>>> len(dir(numpy))
484

They do, however, help to support Python principle #13 ...
-- Ed

From charlesr.harris at gmail.com Sun Jun 4 14:36:17 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 4 Jun 2006 12:36:17 -0600
Subject: [Numpy-discussion] Random number generators.
Message-ID:

Hi All, but mostly Robert.

I've been fooling around timing random number generators and noticed
that on an Athlon64 with 64-bit binaries the MWC8222 rng is about 2.5x
as fast as the MT19937 generator. On my machine (1.8 GHz) I get

MWC8222:
  long        2.58e+08
  float       1.20e+08
  double      1.34e+08
  full double 1.02e+08
MT19937:
  long        9.07e+07
  float       6.33e+07
  double      6.71e+07
  full double 3.81e+07

numbers/sec, where the time includes accumulating the sums. This also
impacts the generation of normally distributed numbers:

MWC8222:
  nums/sec: 1.12e+08
  average : 1.91e-05
  sigma   : 1.00e+00
MT19937:
  nums/sec: 5.41e+07
  average : -9.73e-05
  sigma   : 1.00e+00

The times for 32-bit binaries are roughly the same. For generating large
arrays of random numbers on 64-bit architectures it looks like MWC8222
is a winner. So, the question is, is there a good way to make the rng
selectable?

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From st at sigmasquared.net Sun Jun 4 16:21:08 2006
From: st at sigmasquared.net (Stephan Tolksdorf)
Date: Sun, 04 Jun 2006 22:21:08 +0200
Subject: [Numpy-discussion] Random number generators.
In-Reply-To:
References:
Message-ID: <448340B4.5050509@sigmasquared.net>

> MWC8222:
>   nums/sec: 1.12e+08
> MT19937:
>   nums/sec: 5.41e+07
> The times for 32-bit binaries are roughly the same. For generating
> large arrays of random numbers on 64-bit architectures it looks like
> MWC8222 is a winner. So, the question is, is there a good way to make
> the rng selectable?

Although there are in general good reasons for having more than one
random number generator available (and testing one's code with more than
one generator), performance shouldn't be the deciding concern when
selecting one.
The most important characteristics of a random number generator are its
distributional properties, e.g. how "uniform" and "random" its generated
numbers are. There's hardly any generator which is faster than the
Mersenne Twister _and_ has better equi-distribution. Actually, the MT is
so fast that on modern processors the contribution of the uniform number
generator to most non-trivial simulation code is negligible. See
www.iro.umontreal.ca/~lecuyer/ for good (mathematical) surveys on this
topic.

If you really need that last inch of performance, you should seriously
think about outsourcing your inner simulation loop to C(++). And by the
way, there's a good chance that making the rng selectable would have a
negative performance impact on random number generation (at least if the
generation is done through the same interface and the current
implementation is sufficiently optimized).

Regards,
Stephan

From charlesr.harris at gmail.com Sun Jun 4 16:41:07 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 4 Jun 2006 14:41:07 -0600
Subject: [Numpy-discussion] Random number generators.
In-Reply-To: <448340B4.5050509@sigmasquared.net>
References: <448340B4.5050509@sigmasquared.net>
Message-ID:

Stephan,

MWC8222 has good distribution properties; it comes from George Marsaglia
and passes all the tests in the Diehard suite. It was also used, among
others, by Jurgen Doornik in his investigation of the ziggurat method
for random normals, and he didn't turn up any anomalies. Now, I rather
like the theory behind MT19937, based as it is on an irreducible
polynomial over Z_2 discovered by brute-force search, but it is not the
be-all and end-all of rngs. And yes, I do like to generate hundreds of
millions of random numbers/sec, and yes, I do do it in C++ and use
boost/python as an interface, but that doesn't mean numpy can't use a
speed-up now and then. In particular, the ziggurat method for generating
normals is also significantly faster than the polar method in numpy.
Put them together and on X86_64 I think you will get close to a factor of ten improvement in speed. That isn't to be sniffed at, especially if you are simulating noisy images and such. On 6/4/06, Stephan Tolksdorf wrote: > > > > MWC8222: > > > > nums/sec: 1.12e+08 > > > > MT19937: > > > > nums/sec: 5.41e+07 > > The times for 32 bit binaries is roughly the same. For generating large > > arrays of random numbers on 64 bit architectures it looks like MWC8222 > > is a winner. So, the question is, is there a good way to make the rng > > selectable? > > Although there are in general good reasons for having more than one > random number generator available (and testing one's code with more than > one generator), performance shouldn't be the deciding concern for > selecting one. The most important characteristic of a random number > generator are its distributional properties, e.g. how "uniform" and > "random" its generated numbers are. There's hardly any generator which > is faster than the Mersenne Twister _and_ has a better > equi-distribution. Actually, the MT is so fast that on modern processors > the contribution of the uniform number generator to most non-trivial > simulation code is negligible. See www.iro.umontreal.ca/~lecuyer/ for > good (mathematical) surveys on this topic. > > If you really need that last inch of performance, you should seriously > think about outsourcing your inner simulation loop to C(++). And by the > way, there's a good chance that making the rng selectable has a negative > performance impact on random number generation (at least if the > generation is done through the same interface and the current > implementation is sufficiently optimized). > > Regards, > Stephan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Jun 4 18:04:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 04 Jun 2006 17:04:13 -0500 Subject: [Numpy-discussion] Random number generators. 
In-Reply-To: References: Message-ID: Charles R Harris wrote: > For generating large > arrays of random numbers on 64 bit architectures it looks like MWC8222 > is a winner. So, the question is, is there a good way to make the rng > selectable? Sure! All of the distributions ultimately depend on the uniform generators (rk_random, rk_double, etc.). It would be possible to alter the rk_state struct to store data for multiple generators (probably through a union) and store function pointers to the uniform generators. The public API rk_random, rk_double, etc. would be modified to call the function pointers to the private API functions depending on the actual generator chosen. At the Pyrex level, some modifications would need to be made to the RandomState constructor (or we would need to make alternate constructors) and the seeding methods. Nothing too bad. I don't think it would be worthwhile to change the numpy.random.* functions that alias the methods on the default RandomState object. Code that needs customizable PRNGs should be taking a RandomState object instead of relying on the function-alike aliases. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Jun 4 18:07:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 04 Jun 2006 17:07:34 -0500 Subject: [Numpy-discussion] Random number generators. In-Reply-To: References: Message-ID: Robert Kern wrote: > Charles R Harris wrote: > >>For generating large >>arrays of random numbers on 64 bit architectures it looks like MWC8222 >>is a winner. So, the question is, is there a good way to make the rng >>selectable? > > Sure! I should also add that I have no time to do any of this, but I'll be happy to answer questions and make suggestions if you would like to tackle this. 
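Robert Kern's scheme lives in C inside randomkit (a union of per-generator state blocks in rk_state, plus stored function pointers for the uniform generators), but the dispatch idea can be sketched in Python. All names here are illustrative stand-ins, not randomkit's actual definitions, and the "generators" are trivial placeholders:

```python
# Stand-in "uniform generators": each reads and updates its own state block.
def mt_next(state):
    state["pos"] += 1
    return state["key"][(state["pos"] - 1) % len(state["key"])]

def mwc_next(state):
    state["i"] += 1
    return state["q"][(state["i"] - 1) % len(state["q"])]

class RKState:
    """Analogue of the C rk_state struct: per-generator state plus a
    'function pointer' to the active uniform generator."""
    def __init__(self, next_func, state):
        self.next = next_func
        self.state = state

def rk_random(s):
    # The public API keeps one signature; the chosen generator does the work.
    return s.next(s.state)

s = RKState(mt_next, {"key": [42, 1, 2], "pos": 0})   # select "MT19937"
print(rk_random(s))   # 42
s = RKState(mwc_next, {"q": [7, 8], "i": 0})          # select "MWC8222"
print(rk_random(s))   # 7
```

The point is that callers of rk_random never change when a generator is added; in the C version the per-generator blocks would share storage through the union and `next` would be a real function pointer.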
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From charlesr.harris at gmail.com Sun Jun 4 18:37:53 2006
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 4 Jun 2006 16:37:53 -0600
Subject: [Numpy-discussion] Random number generators.
In-Reply-To:
References:
Message-ID:

On 6/4/06, Robert Kern wrote:
> Charles R Harris wrote:
> > For generating large arrays of random numbers on 64 bit
> > architectures it looks like MWC8222 is a winner. So, the question
> > is, is there a good way to make the rng selectable?
>
> Sure! All of the distributions ultimately depend on the uniform
> generators (rk_random, rk_double, etc.). It would be possible to alter
> the rk_state struct to store data for multiple generators (probably
> through a union) and store function pointers to the uniform
> generators. The public API rk_random, rk_double, etc. would be
> modified to call the function pointers to the private API functions
> depending on the actual generator chosen.
>
> At the Pyrex level, some modifications would need to be made to the
> RandomState constructor (or we would need to make alternate
> constructors) and the seeding methods.

Heh, I borrowed some seeding methods from numpy, but put them in their
own file with the interfaces

void fillFromPool(uint32_t *state, size_t size);
void fillFromSeed(uint32_t *state, size_t size, uint32_t seed);
void fillFromVect(uint32_t *state, size_t size,
                  const std::vector<uint32_t> & seed);

so that I could use them more generally. I left out the method using the
system time because, well, everything I am interested in runs on linux
or windows. Boost has a good include file, boost/cstdint.hpp, that deals
with all the issues of defining integer types on different platforms. I
didn't use it, though, just the stdint.h file ;)

> Nothing too bad.
> I don't think it would be worthwhile to change the numpy.random.*
> functions that alias the methods on the default RandomState object.
> Code that needs customizable PRNGs should be taking a RandomState
> object instead of relying on the function-alike aliases.

I'll take a look, though like you I am pretty busy these days.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simon at arrowtheory.com Mon Jun 5 03:52:17 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Mon, 5 Jun 2006 08:52:17 +0100
Subject: [Numpy-discussion] numexpr: where function
Message-ID: <20060605085217.4506427b.simon@arrowtheory.com>

Is it possible to use the where function in numexpr? I see some code
there for it, but not sure how to use it.

While I'm asking, it seems numexpr only does pointwise operations ATM,
i.e. there is no .sum?

Simon.

-- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia
Ph. 61 02 6249 6940 http://arrowtheory.com

From cookedm at physics.mcmaster.ca Sun Jun 4 20:23:18 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Sun, 4 Jun 2006 20:23:18 -0400
Subject: [Numpy-discussion] numexpr: where function
In-Reply-To: <20060605085217.4506427b.simon@arrowtheory.com>
References: <20060605085217.4506427b.simon@arrowtheory.com>
Message-ID: <20060605002318.GA12516@arbutus.physics.mcmaster.ca>

On Mon, Jun 05, 2006 at 08:52:17AM +0100, Simon Burton wrote:
> Is it possible to use the where function in numexpr?
> I see some code there for it, but not sure how to use it.

Yes; 'where(expression, a, b)' will return an element from 'a' when
'expression' is non-zero (true), and the corresponding element from 'b'
when it's 0 (false).

> While I'm asking, it seems numexpr only does pointwise
> operations ATM, ie there is no .sum ?

Adding reducing functions is on the list of things to do. I don't have
much time for it now, unfortunately.
-- |>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
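Cooke's description of where() matches numpy.where's select semantics, which numexpr's version mirrors. A quick sketch using numpy only (the equivalent numexpr call is shown in a comment for anyone with numexpr installed):

```python
import numpy as np

a = np.array([10, -3, 7, -1])
b = np.zeros(4, dtype=int)

# where(cond, a, b): take from 'a' where cond is non-zero (true), else from 'b'
result = np.where(a > 0, a, b)
print(result)  # [10  0  7  0]

# The numexpr equivalent would be (assuming numexpr is installed):
#   import numexpr as ne
#   result = ne.evaluate("where(a > 0, a, b)")
```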
From N.Gorsic at vipnet.hr Mon Jun 5 10:59:49 2006
From: N.Gorsic at vipnet.hr (Neven Gorsic)
Date: Mon, 5 Jun 2006 16:59:49 +0200
Subject: [Numpy-discussion] Py2exe programs with NumPy
Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C013112E9@MAIL02.win.vipnet.hr>

I made a Python program using the NumPy extension and the program works
fine. So far I have had no problems compiling Python programs with the
py2exe module, but now, at the end of compilation, I get these error
messages:

The following modules appear to be missing
['Pyrex', 'Pyrex.Compiler', '_curses', 'fcompiler.FCompiler',
 'lib.add_newdoc', 'pre', 'pylab', 'setuptools', 'setuptools.command',
 'setuptools.command.egg_info', 'win32api', 'win32con', 'win32pdh',
 'numpy.core.equal', 'numpy.core.less', 'numpy.core.less_equal']

Upon starting the exe file I get another message:

C:\Python24\dist>test
No scipy-style subpackage 'testing' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'core' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'lib' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'linalg' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'dft' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'random' found in C:\Python24\dist\library.zip\numpy. Ignoring.
No scipy-style subpackage 'f2py' found in C:\Python24\dist\library.zip\numpy. Ignoring.
Traceback (most recent call last):
  File "test.py", line 228, in ?
  File "zipextimporter.pyc", line 78, in load_module
  File "numpy\__init__.pyc", line 44, in ?
  File "numpy\_import_tools.pyc", line 320, in get_pkgdocs
  File "numpy\_import_tools.pyc", line 283, in _format_titles
ValueError: max() arg is an empty sequence

Can you please tell me what is wrong?

PS: I have no previous experience compiling Python programs which
include numpy modules. I use py2exe in the basic way: type "python
setup.py py2exe" from the command line, and setup.py has only 3 lines:

from distutils.core import setup
import py2exe
setup(console=["Programi\\test.py"])

From cookedm at physics.mcmaster.ca Mon Jun 5 17:10:23 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Mon, 5 Jun 2006 17:10:23 -0400
Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow?
Message-ID: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca>

I just ran into the fact that the power function for integer types isn't
handled in scalarmath yet. I'm going to add it, but I'm wondering what
people want when power overflows the integer type?

Taking the concrete example of a = uint8(3), b = uint8(10), then should
a ** b return

1) the maximum integer for the type (255 here)
2) 0
3) upcast to the largest type that will hold it (but what if it's
   larger than our largest type? Return a Python long?)
4) do the power using "long" like Python does, then downcast it to the
   type (that would return 169 for the above example)
5) something else?

I'm leaning towards #3; if you do a ** 10, you get the right answer
(59049 as an int64scalar), because 'a' is upcast to int64scalar, since
'10', a Python int, is converted to that type. Otherwise, I would choose
#1.

-- |>|\/|<
/----------------------------------------------------------------------\
|David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
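The arithmetic behind the options above can be checked with plain Python integers (this only illustrates what each choice would return; numpy's eventual behavior was still being decided in this thread):

```python
a, b = 3, 10
full = a ** b            # Python's unbounded int: the exact value
print(full)              # 59049 -- far too big for uint8 (max 255)

option1 = min(full, 255)   # 1) saturate at the type's maximum -> 255
option2 = 0                # 2) just return 0 on overflow
# 3) upcast: keep 'full' in a wider type (59049 fits easily in int64)
option4 = full % 2**8      # 4) compute wide, then downcast to uint8 -> 169
print(option1, option4)  # 255 169
```

Note that option 4 reproduces the 169 Cooke mentions: 59049 mod 256 is 169.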
From N.Gorsic at vipnet.hr Tue Jun 6 04:19:31 2006
From: N.Gorsic at vipnet.hr (Neven Gorsic)
Date: Tue, 6 Jun 2006 10:19:31 +0200
Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperato ! :)
Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david.douard at logilab.fr Tue Jun 6 04:44:20 2006
From: david.douard at logilab.fr (David Douard)
Date: Tue, 6 Jun 2006 10:44:20 +0200
Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow?
In-Reply-To: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca>
References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca>
Message-ID: <20060606084419.GC1046@logilab.fr>

On Mon, Jun 05, 2006 at 05:10:23PM -0400, David M. Cooke wrote:
> I just ran into the fact that the power function for integer types
> isn't handled in scalarmath yet. I'm going to add it, but I'm wondering
> what people want when power overflows the integer type?
>
> Taking the concrete example of a = uint8(3), b = uint8(10), then should
> a ** b return
>
> 1) the maximum integer for the type (255 here)
> 2) 0
> 3) upcast to the largest type that will hold it (but what if it's
>    larger than our largest type? Return a Python long?)
> 4) do the power using "long" like Python does, then downcast it to the
>    type (that would return 169 for the above example)
> 5) something else?
>
> I'm leaning towards #3; if you do a ** 10, you get the right
> answer (59049 as an int64scalar), because 'a' is upcasted to
> int64scalar, since '10', a Python int, is converted to that type.
> Otherwise, I would choose #1.

I agree, #1 seems the better solution to me.

BTW, I'm quite new on this list, and I don't know if this has already
been discussed (I guess it has): why is uint_n arithmetic done in the
Z/(2**n)Z field (not sure about the maths correctness here)? I mean:

>>> a = numpy.uint8(10)
>>> a*a
100
>>> a*a*a  # I'd like to have 255 here
232
>>> 1000%256
232

It would really be a nice feature to be able (by means of a numpy flag
or so) to have bound-limited uint operations (especially when doing
image processing...).
David

-- David Douard LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations
Développement logiciel sur mesure : http://www.logilab.fr/services
Informatique scientifique : http://www.logilab.fr/science

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 189 bytes
Desc: Digital signature
URL:

From simon at arrowtheory.com Tue Jun 6 13:50:58 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Tue, 6 Jun 2006 18:50:58 +0100
Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperato ! :)
In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr>
References: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr>
Message-ID: <20060606185058.027a4c1c.simon@arrowtheory.com>

On Tue, 6 Jun 2006 10:19:31 +0200 "Neven Gorsic" wrote:
>

Try pyInstaller.

Simon.

-- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia
Ph. 61 02 6249 6940 http://arrowtheory.com
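The bound-limited ("saturating") uint8 arithmetic David Douard asks about above is not a numpy flag, but it can be emulated by computing in a wider type and clipping. A minimal sketch (the helper name `sat_mul_u8` is mine):

```python
import numpy as np

def sat_mul_u8(x, y):
    """Multiply uint8 values, clipping at 255 instead of wrapping mod 256."""
    # uint16 is wide enough: 255 * 255 = 65025 < 65536, so no overflow here.
    wide = x.astype(np.uint16) * y.astype(np.uint16)
    return np.clip(wide, 0, 255).astype(np.uint8)

a = np.uint8(10)
# Wrapping (modular) behavior: 10*10*10 = 1000 -> 1000 % 256 == 232.
# Saturating behavior: the second multiply clips to 255.
print(sat_mul_u8(sat_mul_u8(a, a), a))  # 255
```

The same trick works elementwise on whole uint8 image arrays, which is the image-processing case Douard mentions.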
From N.Gorsic at vipnet.hr Tue Jun 6 10:24:33 2006 From: N.Gorsic at vipnet.hr (Neven Gorsic) Date: Tue, 6 Jun 2006 16:24:33 +0200 Subject: [Numpy-discussion] How to get executable file from Python with NumPy import? Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C01311407@MAIL02.win.vipnet.hr> Py2exe doesn't work! At the end of compilation I get the message: The following modules appear to be missing: ['Pyrex', 'Pyrex.Compiler', '_curses', 'fcompiler.FCompiler', 'lib.add_newdoc', 'pre', 'pylab', 'setuptools', 'setuptools.command', 'setuptools.command.egg_info', 'win32api', 'win32con', 'win32pdh', 'numpy.core.equal', 'numpy.core.less', 'numpy.core.less_equal'] Neven -------------- next part -------------- An HTML attachment was scrubbed... URL: From khinsen at cea.fr Tue Jun 6 12:22:31 2006 From: khinsen at cea.fr (Konrad Hinsen) Date: Tue, 6 Jun 2006 18:22:31 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On May 31, 2006, at 4:53, Travis Oliphant wrote: > Please help the developers by responding to a few questions.
> > 1) Have you transitioned or started to transition to NumPy (i.e. > import numpy)? No. > 2) Will you transition within the next 6 months? (if you answered > No to #1) I would like to, but I am not sure I will find the time. I am not in a hurry either, as Numeric continues to work fine. > 3) Please, explain your reason(s) for not making the switch. (if > you answered No to #2) Lack of time. Some of the changes from Numeric are subtle and require a careful analysis of the code, and then careful testing. For big applications, that's a lot of work. There are also modules (I am thinking of RNG) that have been replaced by something completely different that needs to be evaluated first. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Laboratoire Léon Brillouin, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France Tel.: +33-1 69 08 79 25 Fax: +33-1 69 08 82 61 E-Mail: konrad.hinsen at cea.fr --------------------------------------------------------------------- From khinsen at cea.fr Tue Jun 6 12:27:05 2006 From: khinsen at cea.fr (Konrad Hinsen) Date: Tue, 6 Jun 2006 18:27:05 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> References: <447D051E.9000709@ieee.org> <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> Message-ID: <30F56ED3-2CCE-4442-9775-E368B3C58FA9@cea.fr> On May 31, 2006, at 8:13, Jens Jørgen Mortensen wrote: > Yes. Only problem is that ASE relies on Konrad Hinsen's > Scientific.IO.NetCDF module which is still a Numeric thing. I saw > recently that this module has been converted to numpy and put in > SciPy/sandbox. What is the future of this module? Martin Wiechert recently sent me his adaptation to Numpy. I integrated his patches checking for nothing else but that it doesn't break the Numeric interface. I then checked that it compiles and runs the demo script correctly.
I am happy to send this version to anyone who wants to test-drive it. Personally I cannot really test it as all my application code that is based on it requires Numeric. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Laboratoire Léon Brillouin, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France Tel.: +33-1 69 08 79 25 Fax: +33-1 69 08 82 61 E-Mail: konrad.hinsen at cea.fr --------------------------------------------------------------------- From bhendrix at enthought.com Tue Jun 6 14:43:37 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Tue, 06 Jun 2006 13:43:37 -0500 Subject: [Numpy-discussion] ANN: Python Enthought Edition Version 0.9.7 Released Message-ID: <4485CCD9.7050907@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.7 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 0.9.7 Release Notes: -------------------- Version 0.9.7 of Python Enthought Edition includes an update to version 1.0.7 of the Enthought Tool Suite (ETS) Package and bug fixes-- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.7.html About Python Enthought Edition: ------------------------------- Python 2.3.5, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com From travis at enthought.com Tue Jun 6 14:05:43 2006 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 06 Jun 2006 13:05:43 -0500 Subject: [Numpy-discussion] array of tuples Message-ID: <4485C3F7.503@enthought.com> I'd like to construct an array of tuples and I'm not sure how (without looping).
Is there a quick way to do this with dtype? I've tried: >>> import numpy >>> x = [(1,2,3),(4,5,6)] >>> numpy.array(x) array([[1, 2, 3], [4, 5, 6]]) >>> numpy.array(x, dtype='p') array([[1, 2, 3], [4, 5, 6]]) >>> numpy.array(x, dtype='O') array([[1, 2, 3], [4, 5, 6]], dtype=object) Thanks, Travis From cookedm at physics.mcmaster.ca Tue Jun 6 16:02:49 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 16:02:49 -0400 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? In-Reply-To: <20060606084419.GC1046@logilab.fr> References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> <20060606084419.GC1046@logilab.fr> Message-ID: <20060606160249.0688320d@arbutus.physics.mcmaster.ca> On Tue, 6 Jun 2006 10:44:20 +0200 David Douard wrote: > On Mon, Jun 05, 2006 at 05:10:23PM -0400, David M. Cooke wrote: > > I just ran into the fact that the power function for integer types > > isn't handled in scalarmath yet. I'm going to add it, but I'm > > wondering what people want when power overflows the integer type? > > > > Taking the concrete example of a = uint8(3), b = uint8(10), then > > should a ** b return > > > > 1) the maximum integer for the type (255 here) > > 2) 0 > > 3) upcast to the largest type that will hold it (but what if it's > > larger than our largest type? Return a Python long?) > > 4) do the power using "long" like Python does, then downcast it to > > the type (that would return 169 for the above example) > > 5) something else? > > > > I'm leaning towards #3; if you do a ** 10, you get the right > > answer (59049 as an int64scalar), because 'a' is upcasted to > > int64scalar, since '10', a Python int, is converted to that type. > > Otherwise, I would choose #1. > > I agree, #1 seems the better solution for me. 
> > BTW, I'm quite new on this list, and I don't know if this has already > been discussed (I guess it has): why is uint_n arithmetic done > in the ring Z/(2**n)Z (not sure about the maths correctness here)? > I mean: > >>> a = numpy.uint8(10) > >>> a*a > 100 > >>> a*a*a # I'd like to have 255 here > 232 > >>> 1000%256 > 232 > History, and efficiency. Detecting integer overflow in C portably requires doing a division afterwards, or splitting the multiplication up into parts that won't overflow, so you can see if the sum would. Both of those options are pretty slow compared with multiplication. Now, mind you, our scalar types *do* check for overflow: they use a larger integer type for the result (or by splitting it up for the largest type). So you can check for overflow by setting the overflow handler: >>> seterr(over='raise') {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} >>> int16(32000) * int16(3) Traceback (most recent call last): File "<stdin>", line 1, in ? FloatingPointError: overflow encountered in short_scalars Note that the integer array types don't check, though (huh, maybe they should). It's easy enough to use the multiply routine for the power, so you'll get overflow checking for free. > It would be a really nice feature to be able (by means of a numpy > flag or so) to have bound-limited uint operations (especially when > doing image processing...). If you want to supply a patch ... :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Tue Jun 6 16:21:56 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 13:21:56 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485C3F7.503@enthought.com> References: <4485C3F7.503@enthought.com> Message-ID: <4485E3E4.4000402@noaa.gov> Travis N.
Vaught wrote: > I'd like to construct an array of tuples and I'm not sure how (without > looping). Is this what you want? >>> import numpy as N >>> a = N.empty((2,),dtype=object) >>> a[:] = [(1,2,3),(4,5,6)] >>> a array([(1, 2, 3), (4, 5, 6)], dtype=object) >>> a.shape (2,) By the way, I notice that the object dtype is not in the numpy namespace. While this makes sense, as it's part of python, I keep getting confused because I do need to use numpy-specific dtypes for other things. I never use import *, so it might be a good idea to put the standard object dtypes in the numpy namespace too. Or maybe not, just thinking out loud. Note: PyObject is there, but isn't that a deprecated Numeric name? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From stefan at sun.ac.za Tue Jun 6 17:01:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 6 Jun 2006 23:01:14 +0200 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485C3F7.503@enthought.com> References: <4485C3F7.503@enthought.com> Message-ID: <20060606210114.GA3756@mentat.za.net> On Tue, Jun 06, 2006 at 01:05:43PM -0500, Travis N. Vaught wrote: > looping). Is there a quick way to do this with dtype?
> > I've tried: > > >>> import numpy > >>> x = [(1,2,3),(4,5,6)] > >>> numpy.array(x) > array([[1, 2, 3], > [4, 5, 6]]) > >>> numpy.array(x, dtype='p') > array([[1, 2, 3], > [4, 5, 6]]) > >>> numpy.array(x, dtype='O') > array([[1, 2, 3], > [4, 5, 6]], dtype=object) It works if you pre-allocate the array: In [18]: x = [(1,2),(3,4)] In [19]: z = N.empty(len(x),dtype='O') In [20]: z[:] = x In [21]: z Out[21]: array([(1, 2), (3, 4)], dtype=object) Regards Stéfan From chanley at stsci.edu Tue Jun 6 17:03:10 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jun 2006 17:03:10 -0400 Subject: [Numpy-discussion] byte swap in place Message-ID: <4485ED8E.4020708@stsci.edu> Hi, Is there a way to byte swap a ndarray in place? The "byteswap" method I have found on an ndarray object currently returns a new array. Example: In [16]: a = n.array([1,2,3,4,5]) In [17]: a Out[17]: array([1, 2, 3, 4, 5]) In [18]: b = a.byteswap() In [19]: b Out[19]: array([16777216, 33554432, 50331648, 67108864, 83886080]) In [20]: b[0] = 0 In [21]: b Out[21]: array([ 0, 33554432, 50331648, 67108864, 83886080]) In [22]: a.dtype Out[22]: dtype('<i4') From cookedm at physics.mcmaster.ca Tue Jun 6 17:07:05 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 17:07:05 -0400 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485E3E4.4000402@noaa.gov> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> Message-ID: <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> On Tue, 06 Jun 2006 13:21:56 -0700 Christopher Barker wrote: > > > Travis N. Vaught wrote: > > I'd like to construct an array of tuples and I'm not sure how (without > > looping). > > Is this what you want? > > >>> import numpy as N > >>> a = N.empty((2,),dtype=object) > >>> a[:] = [(1,2,3),(4,5,6)] > >>> a > array([(1, 2, 3), (4, 5, 6)], dtype=object) > >>> a.shape > (2,) > > By the way, I notice that the object dtype is not in the numpy > namespace. While this makes sense, as it's part of python, I keep > getting confused because I do need to use numpy-specific dtypes for > other things. I never use import *, so it might be a good idea to put > the standard object dtypes in the numpy namespace too.
Or maybe not, > just thinking out loud. None of the Python types are (int, float, etc.). For one reason, various Python checkers complain about overwriting a builtin type, and plus, I think it's messy and a potential for bugs. numpy takes those as convenience types, and converts them to the appropriate dtype. If you want the dtype used, it's spelled with an appended _. So in this case you'd want dtype=N.object_. N.object0 works too. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Tue Jun 6 17:15:14 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 14:15:14 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> Message-ID: <4485F062.8010001@noaa.gov> David M. Cooke wrote: > If you want the dtype > used, it's spelled with an appended _. > > So in this case you'd want dtype=N.object_. N.object0 works too. That will work, thanks. But what does object0 mean? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Tue Jun 6 17:33:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Jun 2006 16:33:38 -0500 Subject: [Numpy-discussion] byte swap in place In-Reply-To: <4485ED8E.4020708@stsci.edu> References: <4485ED8E.4020708@stsci.edu> Message-ID: Christopher Hanley wrote: > Hi, > > Is there a way to byte swap a ndarray in place? The "byteswap" method I > have found on an ndarray object currently returns a new array. Depends. 
Do you want the actual bytes to swap, or are you content with getting a view that pretends the bytes are swapped? If the latter: >>> a = arange(5) >>> a.dtype dtype('>i4') >>> a.dtype = dtype('>> a array([ 0, 16777216, 33554432, 50331648, 67108864]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Tue Jun 6 17:35:25 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 14:35:25 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <20060606210114.GA3756@mentat.za.net> References: <4485C3F7.503@enthought.com> <20060606210114.GA3756@mentat.za.net> Message-ID: <4485F51D.9030305@noaa.gov> Stefan van der Walt wrote: > In [19]: z = N.empty(len(x),dtype='O') Which brings up: What is the "preferred" way to refer to types? String typecode or object? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Tue Jun 6 17:37:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Jun 2006 16:37:10 -0500 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485F51D.9030305@noaa.gov> References: <4485C3F7.503@enthought.com> <20060606210114.GA3756@mentat.za.net> <4485F51D.9030305@noaa.gov> Message-ID: Christopher Barker wrote: > Stefan van der Walt wrote: > >>In [19]: z = N.empty(len(x),dtype='O') > > Which brings up: > > What is the "preferred" way to refer to types? String typecode or object? Object! The string typecodes are for backwards compatibility only. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From cookedm at physics.mcmaster.ca Tue Jun 6 18:02:37 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 18:02:37 -0400 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485F062.8010001@noaa.gov> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> <4485F062.8010001@noaa.gov> Message-ID: <20060606180237.7f2707d5@arbutus.physics.mcmaster.ca> On Tue, 06 Jun 2006 14:15:14 -0700 Christopher Barker wrote: > David M. Cooke wrote: > > If you want the dtype > > used, it's spelled with an appended _. > > > > So in this case you'd want dtype=N.object_. N.object0 works too. > > That will work, thanks. But what does object0 mean? I think it's "type object, default size". It's a holdover from Numeric. int0, for instance, is the same as int_ (= int64 on my 64-bit box, for instance). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
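Collecting the thread's recipe into one runnable snippet — a sketch in modern NumPy spelling, where `dtype=object` (or `np.object_`) plays the role of the `N.object_` / `'O'` spellings used above:

```python
import numpy as np

# An array *of tuples* (not a 2-D array): pre-allocate an object
# array, then slice-assign the list of tuples into it.
x = [(1, 2, 3), (4, 5, 6)]
a = np.empty(len(x), dtype=object)  # dtype=np.object_ is equivalent
a[:] = x

assert a.shape == (2,)              # one element per tuple
assert a[0] == (1, 2, 3)            # the elements really are tuples
assert isinstance(a[0], tuple)
```

Passing the list straight to `np.array(x)` instead coerces it to a 2-D integer array, which is exactly the behaviour the original question ran into.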
From chanley at stsci.edu Wed Jun 7 12:50:33 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 07 Jun 2006 12:50:33 -0400 Subject: [Numpy-discussion] byte swap in place In-Reply-To: References: <4485ED8E.4020708@stsci.edu> Message-ID: <448703D9.80806@stsci.edu> Robert Kern wrote: > > Depends. Do you want the actual bytes to swap, or are you content with getting a > view that pretends the bytes are swapped? If the latter: I want the actual bytes to swap.
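For the record, `ndarray.byteswap` later grew an in-place flag — a sketch assuming a reasonably current NumPy, where the keyword is spelled `inplace`:

```python
import numpy as np

a = np.arange(5, dtype='<i4')
addr = a.ctypes.data                # address of the underlying buffer
a.byteswap(inplace=True)            # swaps the actual bytes; no new buffer
b = a.view('>i4')                   # relabel the byte order to read values back

assert b.ctypes.data == addr        # same memory throughout
assert list(b) == [0, 1, 2, 3, 4]   # values intact, bytes swapped in place
```

Without the flag, `a.byteswap()` returns a byte-swapped copy, which is the behaviour the thread starts from.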
Thanks, Chris From josh8912 at yahoo.com Wed Jun 7 14:18:11 2006 From: josh8912 at yahoo.com (JJ) Date: Wed, 7 Jun 2006 11:18:11 -0700 (PDT) Subject: [Numpy-discussion] trouble installing on fedora core 5 64 bit Message-ID: <20060607181811.56523.qmail@web51713.mail.yahoo.com> Hello. I am having some trouble getting numpy installed on an AMD 64 bit Fedora 5 machine. I have loaded atlas, blas, and lapack using yum. I can see their library files in /usr/lib64/atlas/ (files such as libblas.so.3.0). But the setup program will not run. I have obtained the latest version of numpy using svn co http://svn.scipy.org/svn/numpy/trunk numpy. I have created a site.cfg file containing: [atlas] library_dirs = /usr/lib64 atlas_libs = lapack, blas, cblas, atlas But when I try to run python setup.py install it appears that none of the libraries are seen. I get the following error messages and output. Can anyone offer help? Thanks. [root at fedora-newamd numpy]# python setup.py install Running from numpy source directory.
No module named __svn_version__ F2PY Version 2_2587 blas_opt_info: blas_mkl_info: looking libraries mkl,vml,guide in /usr/local/lib but found None looking libraries mkl,vml,guide in /usr/lib but found None NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None NOT AVAILABLE atlas_blas_info: looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1281: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: looking libraries blas in /usr/local/lib but found None looking libraries blas in /usr/local/lib but found None looking libraries blas in /usr/lib but found None looking libraries blas in /usr/lib but found None NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1290: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From jh at oobleck.astro.cornell.edu Wed Jun 7 16:52:33 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 7 Jun 2006 16:52:33 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> > Date: Fri, 2 Jun 2006 18:04:32 -0400 > From: "Jonathan Taylor" > Subject: Re: [Numpy-discussion] Suggestions for NumPy > To: numpy-discussion at lists.sourceforge.net > Message-ID: > <463e11f90606021504h742e92e4t5ff418d1e29e426 at mail.gmail.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > My suggestion would be to have both numpy.org and scipy.org be the > exact same page, but make it extremely clear that there are two > different projects on the front page. > Cheers. > Jon. The goal of the web is to make information easy to find. The easiest and most successful way of doing that is to answer many needs in one place, hence the existence of "portal" pages, which scipy.org bills itself as. The relationship between scipy and numpy is laid out in its front page text. With two (actually many more) packages distributed separately, there will always be confused people, but having one main site that tells the whole story and provides comprehensive information will be the quickest way to deconfuse them. Conversely, a plethora of pages is a poor marketing strategy, as we have been learning with the zoo that's out there already. My suggestion is that all the other pages be automatic redirects to the scipy.org page or subpages thereof. 
I know that will probably make some people feel their toes have been stepped on. We could consider a website name change to avoid that, but I hope we don't have to. Unite and conquer... --jh-- From edin.salkovic at gmail.com Tue Jun 6 05:20:57 2006 From: edin.salkovic at gmail.com (Edin Salković) Date: Tue, 6 Jun 2006 11:20:57 +0200 Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperate! :) In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> Message-ID: <63eb7fa90606060220v5c7848c7t4b96c47ca44ff5d@mail.gmail.com> Also see these links, if you haven't already done so: http://mail.python.org/pipermail/python-list/2006-April/336758.html http://starship.python.net/crew/theller/moin.cgi/Py2Exe On 6/6/06, Neven Gorsic wrote: > > From nicholasinparis at gmail.com Wed Jun 7 04:15:27 2006 From: nicholasinparis at gmail.com (Nicholas) Date: Wed, 7 Jun 2006 10:15:27 +0200 Subject: [Numpy-discussion] crash in multiarray.pyd Message-ID: Hi, I installed numpy 0.9.8 and when I try to import pylab I get a crash in multiarray.pyd. I then tried numpy 0.9.6; this cured the pylab import, but now I cannot import scipy without crashing (again multiarray.pyd). I have tried complete reinstalls on 2 machines now with the same behaviour, so I don't believe it is some system-dependent gremlin. Any suggestions? XP, Python 2.4.3, Matplotlib 0.87.2, Scipy 0.4.9 Nicholas -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetosch at gmx.net Wed Jun 7 17:56:08 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 07 Jun 2006 23:56:08 +0200 Subject: [Numpy-discussion] crash in multiarray.pyd In-Reply-To: References: Message-ID: <44874B78.6020301@gmx.net> Nicholas schrieb: > Hi, > > I installed numpy 0.9.8 and when I try to import pylab I get a crash in
I then tried numpy 0.9.6, this cured the pylab import > but now I cannot import scipy without crashing (again multiarray.pyd). I > have tried complete reinstalls on 2 machines now with same behaviour so > I dont believe it is some system dependent gremlin. Any suggestions? > > XP, Python 2.4.3, Matplotlib 87.2, Scipy 0.4.9 > scipy 0.4.8 should be compatible with numpy 0.9.6, see new.scipy.org. The next matplotlib release compatible with numpy 0.9.8 is hopefully coming soon! (but that's just a wish, not an informed opinion). -sven From Chris.Barker at noaa.gov Wed Jun 7 18:00:29 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 07 Jun 2006 15:00:29 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> Message-ID: <44874C7D.4050208@noaa.gov> Joe Harrington wrote: > My > suggestion is that all the other pages be automatic redirects to the > scipy.org page or subpages thereof. if that means something like: www.numpy.scipy.org (or www.scipy.org/numpy ) Then I'm all for it. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Wed Jun 7 18:11:27 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 7 Jun 2006 16:11:27 -0600 Subject: [Numpy-discussion] trouble installing on fedora core 5 64 bit In-Reply-To: <20060607181811.56523.qmail@web51713.mail.yahoo.com> References: <20060607181811.56523.qmail@web51713.mail.yahoo.com> Message-ID: JJ, I had that problem, started to put the paths in explicitly, noticed that the code should work anyway, deleted my changes, ran again, and it worked fine. I can't tell you what the problem was or what the solution was, I can only say I've seen the same thing on fc5. 
When you do install, it is also a good idea to delete the numpy directory in site-packages beforehand. Chuck On 6/7/06, JJ wrote: > > Hello. I am having some trouble getting numpy > installed on an AMD 64 bit Fedora 5 machine. I have > loaded atlas, blas, and lapack using yum. I can see > their library files in /usr/lib64/atlas/ (files such > as libblas.so.3.0). But the setup program will not > run. I have obtained the latest version of numpy > using svn co http://svn.scipy.org/svn/numpy/trunk > numpy. I have created a site.cfg file containing: > > [atlas] > library_dirs = /usr/lib64 > atlas_libs = lapack, blas, cblas, atlas > > But when I try to run python setup.py install it > appears that none of the libraries are seeen. I get > the following error messages and output. Can anyone > offer help? Thanks. > > > [root at fedora-newamd numpy]# python setup.py install > Running from numpy source directory. > No module named __svn_version__ > F2PY Version 2_2587 > blas_opt_info: > blas_mkl_info: > looking libraries mkl,vml,guide in /usr/local/lib > but found None > looking libraries mkl,vml,guide in /usr/lib but > found None > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > NOT AVAILABLE > > atlas_blas_info: > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > 
/usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > NOT AVAILABLE > > /usr/local/numpy/numpy/distutils/system_info.py:1281: > UserWarning: > Atlas (http://math-atlas.sourceforge.net/) > libraries not found. > Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [atlas]) or > by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > blas_info: > looking libraries blas in /usr/local/lib but found > None > looking libraries blas in /usr/local/lib but found > None > looking libraries blas in /usr/lib but found None > looking libraries blas in /usr/lib but found None > NOT AVAILABLE > > /usr/local/numpy/numpy/distutils/system_info.py:1290: > UserWarning: > Blas (http://www.netlib.org/blas/) libraries not > found. > Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [blas]) or > by setting > the BLAS environment variable. > warnings.warn(BlasNotFoundError.__doc__) > blas_src_info: > NOT AVAILABLE > > > > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
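A note on the site.cfg quoted above: JJ reports the ATLAS libraries living in /usr/lib64/atlas/, while his [atlas] section points library_dirs at /usr/lib64 only. A sketch that lists both directories (the section and option names follow numpy.distutils conventions; the paths are taken from the message and may need adjusting):

```ini
# Hypothetical site.cfg for the setup described above.
# numpy.distutils separates multiple directories with os.pathsep (':' on Linux).
[atlas]
library_dirs = /usr/lib64/atlas:/usr/lib64
atlas_libs = lapack, blas, cblas, atlas
```

As Chuck notes, detection on FC5 can end up working even without this, so treat it as one thing to try rather than a known fix.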
URL: From fperez.net at gmail.com Wed Jun 7 18:22:27 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Jun 2006 16:22:27 -0600 Subject: [Numpy-discussion] crash in multiarray.pyd In-Reply-To: <44874B78.6020301@gmx.net> References: <44874B78.6020301@gmx.net> Message-ID: On 6/7/06, Sven Schreiber wrote: > The next matplotlib release compatible with numpy 0.9.8 is hopefully > coming soon! (but that's just a wish, not an informed opinion). Actually it was released yesterday, it's 0.87.3: http://sourceforge.net/project/showfiles.php?group_id=80706 I just built it against fresh numpy from SVN In [2]: numpy.__version__ Out[2]: '0.9.9.2587' and it works just fine so far. Cheers, f From strawman at astraw.com Wed Jun 7 19:05:12 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 07 Jun 2006 16:05:12 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <44874C7D.4050208@noaa.gov> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> <44874C7D.4050208@noaa.gov> Message-ID: <44875BA8.806@astraw.com> Christopher Barker wrote: > Joe Harrington wrote: > >> My >> suggestion is that all the other pages be automatic redirects to the >> scipy.org page or subpages thereof. >> +1 > > if that means something like: > > www.numpy.scipy.org (or www.scipy.org/numpy ) > > Then I'm all for it. > I just made www.scipy.org/numpy redirect to the already-existing www.scipy.org/NumPy So, hopefully you're on-board now. BTW, this is the reason why we have a wiki -- if you don't like something it says, how the site is organized, or whatever, please just jump in and edit it. From charlesr.harris at gmail.com Mon Jun 5 19:42:03 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 5 Jun 2006 17:42:03 -0600 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? 
In-Reply-To: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> Message-ID: You could use the C approach and use modular arithmetic where the product simply wraps around. The Python approach would be nice if feasible, but what are you going to do for integers larger than the largest numpy data type? So I vote for modular arithmetic because numpy is sorta C. On 6/5/06, David M. Cooke wrote: > > I just ran into the fact that the power function for integer types > isn't handled in scalarmath yet. I'm going to add it, but I'm wondering > what people want when power overflows the integer type? > > Taking the concrete example of a = uint8(3), b = uint8(10), then should > a ** b return > > 1) the maximum integer for the type (255 here) > 2) 0 > 3) upcast to the largest type that will hold it (but what if it's > larger than our largest type? Return a Python long?) > 4) do the power using "long" like Python does, then downcast it to the > type (that would return 169 for the above example) > 5) something else? > > I'm leaning towards #3; if you do a ** 10, you get the right > answer (59049 as an int64scalar), because 'a' is upcasted to > int64scalar, since '10', a Python int, is converted to that type. > Otherwise, I would choose #1. > > -- > |>|\/|< > /----------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
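For concreteness, options 1, 3 and 4 above can be sketched in plain Python for the uint8(3) ** uint8(10) example (this only illustrates the arithmetic of each proposal; numpy's scalarmath would implement it in C):

```python
# Plain-Python illustration of the proposed uint8 overflow behaviours.
base, exp, UMAX = 3, 10, 255      # uint8 holds 0..255

exact = base ** exp               # Python ints never overflow: 59049

saturate = min(exact, UMAX)       # option 1: clamp to the type's maximum -> 255
wrap = exact % (UMAX + 1)         # option 4: exact power, then downcast -> 169
upcast = exact                    # option 3: return a wider type holding 59049

print(saturate, wrap, upcast)     # 255 169 59049
```

The 169 here matches the value David quotes for option 4: 59049 mod 256.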
URL: From yves.frederix at cs.kuleuven.be Thu Jun 8 04:01:49 2006 From: yves.frederix at cs.kuleuven.be (Yves Frederix) Date: Thu, 08 Jun 2006 10:01:49 +0200 Subject: [Numpy-discussion] Typo in SWIG example Message-ID: <4487D96D.7090203@cs.kuleuven.be> Hi, When having a look at the SWIG example under trunk/numpy/doc/swig, I noticed a typing error in numpy.i. You can find the patch in attachment. Cheers, YVES Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.i.patch Type: text/x-patch Size: 510 bytes Desc: not available URL: From strawman at astraw.com Thu Jun 8 04:54:40 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 01:54:40 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper Message-ID: <4487E5D0.40403@astraw.com> I've put together some .debs for numpy-0.9.8. There are binaries compiled for amd64 and i386 architectures of Ubuntu Dapper, and I suspect these will build from source for just about any Debian-based distro and architecture. The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the following lines to your /etc/apt/sources.list: deb http://sefton.astraw.com/ubuntu/ dapper/ deb-src http://sefton.astraw.com/ubuntu/ dapper/ Although this is the culmination of my first serious attempt Debianizing something, I've attempted to build these "properly" (using inspiration from Matthias Klose's Numeric and numarray packages for Debian and Ubuntu, although I've updated the build system to use CDBS). The numpy source has a build dependency on setuptools (0.6b2), which is also available at the repository. Numpy doesn't get installed as an .egg, but it carries along .egg-info, which means that numpy can be part of a setuptools dependency specification. This was done using the --single-version-externally-managed command for setuptools. 
I'm building this repository to serve some of my needs at work, and I hope to add recent versions of several other projects including matplotlib and scipy in the coming days. I hope to be able to keep the repository up-to-date over time and to respond to bug reports and questions, although the amount of time I have to devote to this sort of stuff is unfortunately often near zero. If I get some positive feedback, I'm likely to add this to the scipy.org download page. Also, I hope the official Debian and Ubuntu distros pick up numpy soon, and perhaps this will speed them along. From arnd.baecker at web.de Thu Jun 8 05:35:09 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Jun 2006 11:35:09 +0200 (CEST) Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: Hi Andrew, first thanks a lot for your effort - I am certain it will be very much appreciated! On Thu, 8 Jun 2006, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > > The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the > following lines to your /etc/apt/sources.list: > deb http://sefton.astraw.com/ubuntu/ dapper/ > deb-src http://sefton.astraw.com/ubuntu/ dapper/ > > Although this is the culmination of my first serious attempt Debianizing > something, I've attempted to build these "properly" (using inspiration > from Matthias Klose's Numeric and numarray packages for Debian and > Ubuntu, although I've updated the build system to use CDBS). > > The numpy source has a build dependency on setuptools (0.6b2), which is > also available at the repository. 
> also available at the repository.
Numpy doesn't get installed as an > .egg, but it carries along .egg-info, which means that numpy can be part > of a setuptools dependency specification. This was done using the > --single-version-externally-managed command for setuptools. > > I'm building this repository to serve some of my needs at work, and I > hope to add recent versions of several other projects including > matplotlib and scipy in the coming days. I hope to be able to keep the > repository up-to-date over time and to respond to bug reports and > questions, although the amount of time I have to devote to this sort of > stuff is unfortunately often near zero. Alright, let's start with the first question: We are still running debian sarge and therefore would have to build the above from source. I used the following steps: - put deb-src http://sefton.astraw.com/ubuntu/ dapper/ into /etc/apt/sources.list - apt-get update # update the source package search list - apt-get source python-numpy - cd python-numpy-0.9.8/ dpkg-buildpackage -rfakeroot and get: dpkg-buildpackage: source package is python-numpy dpkg-buildpackage: source version is 0.9.8-0ads1 dpkg-buildpackage: source maintainer is Andrew Straw dpkg-buildpackage: host architecture is i386 dpkg-checkbuilddeps: Unmet build dependencies: cdbs (>= 0.4.23-1.1) build-essential python2.4-dev python-setuptools (>= 0.6b2) python2.3-setuptools (>= 0.6b2) python2.4-setuptools (>= 0.6b2) dpkg-checkbuilddeps: Build conflicts: atlas3-base dpkg-buildpackage: Build dependencies/conflicts unsatisfied; aborting. dpkg-buildpackage: (Use -d flag to override.) What worries me is a) the Build conflicts: atlas3-base b) and the python2.3-dev *and* python2.4-dev dependency Clearly, python-setuptools and cdbs are not yet installed on my system (should be no problem). > If I get some positive feedback, I'm likely to add this to the scipy.org > download page. Also, I hope the official Debian and Ubuntu distros pick > up numpy soon, and perhaps this will speed them along. 
yes - that would be brilliant! What about scipy: presently debian sarge comes with scipy 0.3.2. Installing old-scipy and new-scipy side-by side seems impossible (unless one does something like wxversion select stuff...) - should the new scipy debs just replace the old ones? Best, Arnd From pau.gargallo at gmail.com Thu Jun 8 05:51:05 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 8 Jun 2006 11:51:05 +0200 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> On 6/8/06, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > > The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the > following lines to your /etc/apt/sources.list: > deb http://sefton.astraw.com/ubuntu/ dapper/ > deb-src http://sefton.astraw.com/ubuntu/ dapper/ > > Although this is the culmination of my first serious attempt Debianizing > something, I've attempted to build these "properly" (using inspiration > from Matthias Klose's Numeric and numarray packages for Debian and > Ubuntu, although I've updated the build system to use CDBS). > > The numpy source has a build dependency on setuptools (0.6b2), which is > also available at the repository. Numpy doesn't get installed as an > .egg, but it carries along .egg-info, which means that numpy can be part > of a setuptools dependency specification. This was done using the > --single-version-externally-managed command for setuptools. > > I'm building this repository to serve some of my needs at work, and I > hope to add recent versions of several other projects including > matplotlib and scipy in the coming days. 
I hope to be able to keep the > repository up-to-date over time and to respond to bug reports and > questions, although the amount of time I have to devote to this sort of > stuff is unfortunately often near zero. > > If I get some positive feedback, I'm likely to add this to the scipy.org > download page. Also, I hope the official Debian and Ubuntu distros pick > up numpy soon, and perhaps this will speed them along. > cool, debian packages will be great, thanks!! is your effort somehow related to http://packages.debian.org/experimental/python/python2.3-numpy ? it is a bit out of date, but already in experimental. cheers, pau From alexandre.guimond at mirada-solutions.com Thu Jun 8 06:39:00 2006 From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond) Date: Thu, 8 Jun 2006 11:39:00 +0100 Subject: [Numpy-discussion] ndarray of matrices Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> Hi all. I work mainly with "volume" (3d) images, and numpy.ndarray answers most of my needs (addition of images, etc.). The problem I'm faced with now is that I have images of matrices and vectors and would like that when I do image_of_matrices * image_of_vector it does the dot product of each of my matrices with all of my vectors, and when I do image_of_matrices.mean() it gives me the mean matrix. Basically, I want the same functionalities that are currently provided with scalars, but applied to matrices. It seems that a nice way of doing this is to have an ndarray of numpy.matrix, but this isn't supported it seems. Can anyone recommend a good way of implementing this? I'm new with the numpy thing and I'm not sure if subclassing ndarray is a good idea since I'll have to overload all the operators and I don't believe this will result in a very fast implementation, but I might be mistaken. Another possibility may be to create a new dtype for numpy.matrix, but I don't know if this is possible. Anyone have recommendations? Thx for any help. Alex. 
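One way to get this behaviour without subclassing is to keep plain ndarrays with the matrix and vector stored in the trailing axes and let broadcasting do the per-voxel products. A sketch written against a modern numpy (the array names are made up for illustration, and the tuple-of-axes form of mean() postdates this thread):

```python
import numpy as np

# Assumed layout: one 3x3 matrix and one 3-vector per voxel of a
# (nz, ny, nx) volume.
mats = np.arange(2 * 3 * 3, dtype=float).reshape(2, 1, 1, 3, 3)
vecs = np.ones((2, 1, 1, 3))

# Per-voxel matrix-vector product: broadcast the vector against each
# matrix row, then contract over the last axis.
prods = (mats * vecs[..., np.newaxis, :]).sum(axis=-1)   # shape (2, 1, 1, 3)

# "Mean matrix": average over the spatial axes only.
mean_mat = mats.mean(axis=(0, 1, 2))                     # shape (3, 3)

# Sanity check against an explicit dot product at one voxel.
assert np.allclose(prods[0, 0, 0], np.dot(mats[0, 0, 0], vecs[0, 0, 0]))
```

This keeps everything vectorized in C rather than looping over Python-level matrix objects.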
NOTICE: This e-mail message and all attachments transmitted with it may contain legally privileged and confidential information intended solely for the use of the addressee. If the reader of this message is not the intended recipient, you are hereby notified that any reading, dissemination, distribution, copying, or other use of this message or its attachments, hyperlinks, or any other files of any kind is strictly prohibited. If you have received this message in error, please notify the sender immediately by telephone (+44-1865-265500) or by a reply to this electronic mail message and delete this message and all copies and backups thereof. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pau.gargallo at gmail.com Thu Jun 8 08:42:47 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 8 Jun 2006 14:42:47 +0200 Subject: [Numpy-discussion] ndarray of matrices In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> Message-ID: <6ef8f3380606080542m5549f2e6if4ee7add3cedd17b@mail.gmail.com> On 6/8/06, Alexandre Guimond wrote: > > > > > Hi all. > > > > i work mainly with "volume" (3d) images, and numpy.ndarray answers most of > my needs (addition of images, etc.). The problem I'm faced now with is that > I have images of matrices and vectors and would like that when I do > image_of_matrices * image_of_vector is does the dot product of each of my > matrices with all of my vectors, and when I do image_of_matrices.mean() it > gives me the mean matrix. Basically, I want the same functionalities that > are currently provided with scalars, but applied to matrices. > > > > It seems that a nice way of doing this is to have and ndarray of > numpy.matrix, but this isn't supported it seems. Can anyone recommend a good > way of implementing this? 
I'm new with the numpy thing and I'm not sure if > subclassing ndarray is a good idea since I'll have to overload all the > operators and i don't believe this will result in a very fast > implementation, but I might be mistaken. Another possibility may be to > create a new dtype for numpy.matrix, but I don't know if this is possible. > Anyone have recommandations? > > > > Thx for any help. > We are several of us wondering which is the best way to do this kind of things. We were discussing this before (http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3130104), and some solutions were proposed, but we still don't have the definitive answer. Building arrays of matrices objects will be too inefficient. For me the best thing would be to have n-dimensional universal functions, but this don't exist yet. Meanwhile, I am using the following code (which is not *the* solution): from numpy import * nz,ny,nx = 1,1,1 im_of_mat = rand( nz, ny, nx, 3,3 ) im_of_vec = rand( nz, ny, nx, 3 ) im_of_products = ( im_of_mat * im_of_vec[...,newaxis,:] ).sum(axis=-1) # test that everything it's ok for m,v,p in zip(im_of_mat.reshape(-1,3,3), im_of_vec.reshape(-1,3), im_of_products.reshape(-1,3)): assert allclose( dot(m,v), p ) pau From cimrman3 at ntc.zcu.cz Thu Jun 8 08:44:49 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 14:44:49 +0200 Subject: [Numpy-discussion] argsort question Message-ID: <44881BC1.3090102@ntc.zcu.cz> Hi all, I have just lost some time to find a bug related to the fact, that argsort does not preserve the order of an array that is already sorted, see the example below. For me, it would be sufficient to mention this fact in the docstring, although having order preserving argsort is also an option :). What do the developers think? In [33]:a = nm.zeros( 10000 ) In [34]:b = nm.arange( 10000 ) In [35]:nm.alltrue( nm.argsort( a ) == b ) Out[35]:False r. 
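The effect Robert describes comes from the default quicksort, which gives no ordering guarantee among equal elements. A stable kind makes argsort of an all-equal array the identity permutation — a sketch against a modern numpy, where the keyword is spelled out as 'mergesort' (newer versions also accept kind='stable'):

```python
import numpy as np

a = np.zeros(10000)
b = np.arange(10000)

# A stable sort preserves the original order of equal elements, so
# argsort of a constant array is exactly 0, 1, 2, ...
idx = np.argsort(a, kind='mergesort')
assert (idx == b).all()
```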
From oliphant.travis at ieee.org Thu Jun 8 11:15:38 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Jun 2006 09:15:38 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: <44883F1A.9010403@ieee.org> Robert Cimrman wrote: > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? > > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > You want a "stable" sorting algorithm like the "mergesort". Use the argsort method with the mergesort kind option: a.argsort(kind='merge') -Travis From cimrman3 at ntc.zcu.cz Thu Jun 8 11:38:30 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 17:38:30 +0200 Subject: [Numpy-discussion] argsort question In-Reply-To: <44883F1A.9010403@ieee.org> References: <44881BC1.3090102@ntc.zcu.cz> <44883F1A.9010403@ieee.org> Message-ID: <44884476.6040001@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > >>I have just lost some time to find a bug related to the fact, that >>argsort does not preserve the order of an array that is already sorted, >>see the example below. For me, it would be sufficient to mention this >>fact in the docstring, although having order preserving argsort is also >>an option :). What do the developers think? >> >>In [33]:a = nm.zeros( 10000 ) >>In [34]:b = nm.arange( 10000 ) >>In [35]:nm.alltrue( nm.argsort( a ) == b ) >>Out[35]:False >> > You want a "stable" sorting algorithm like the "mergesort". Use the > argsort method with the mergesort kind option: > > a.argsort(kind='merge') Thank you, Travis. 
Now I see that the function argsort in oldnumeric.py has a different docstring than the array method argsort, which mentions the 'kind' keyword argument. Is the argsort function going to be deprecated? If not, is it possible to synchronize the docstrings? Also a note (in the docs) on which algorithm is stable would be handy. regards, r. From charlesr.harris at gmail.com Thu Jun 8 11:34:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jun 2006 09:34:05 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: Robert, Modifying your example gives In [3]: import numpy as nm In [4]: a = nm.zeros( 10000 ) In [5]: b = nm.arange( 10000 ) In [6]: nm.alltrue( a.argsort(kind="merge" ) == b ) Out[6]: True On 6/8/06, Robert Cimrman wrote: > > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? > > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > r. > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cimrman3 at ntc.zcu.cz Thu Jun 8 11:42:22 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 17:42:22 +0200 Subject: [Numpy-discussion] argsort question In-Reply-To: References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: <4488455E.6050200@ntc.zcu.cz> Charles R Harris wrote: > Robert, > > Modifying your example gives > > In [3]: import numpy as nm > > In [4]: a = nm.zeros( 10000 ) > In [5]: b = nm.arange( 10000 ) > In [6]: nm.alltrue( a.argsort(kind="merge" ) == b ) > Out[6]: True Thanks for all the answers! r. From charlesr.harris at gmail.com Thu Jun 8 11:21:53 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jun 2006 09:21:53 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: Robert, Argsort doesn't preserve order by default because quicksort is not a stable sort. Try using the kind="merge" option and see what happens. Or try lexsort, which is targeted at just this sort of sort and uses merge sort. See the documentation here. http://scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722 Chuck PS: The function argsort doesn't seem to support this extension in the version I am using (time for another svn update), so you may have to do something like >>> a = empty(50) >>> a.argsort(kind="merge") array([48, 47, 46, 0, 1, 49, 37, 12, 22, 38, 11, 2, 10, 36, 40, 25, 18, 6, 17, 4, 3, 20, 24, 43, 33, 9, 7, 35, 32, 8, 23, 21, 5, 28, 31, 30, 29, 26, 27, 19, 44, 13, 14, 15, 34, 39, 41, 42, 16, 45]) On 6/8/06, Robert Cimrman wrote: > > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? 
> > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > r. > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Thu Jun 8 13:20:05 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 10:20:05 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: References: <4487E5D0.40403@astraw.com> Message-ID: <44885C45.7040506@astraw.com> Arnd Baecker wrote: > What worries me is > a) the Build conflicts: atlas3-base > I hoped to investigate further and post afterwards, but my preliminary findings that led to this decision are: 1) building with atlas (atlas3-base and atlas3-base-dev) caused a significant slowdown (~10x) on my simple test on amd64 arch: import timeit shape = '(40,40)' timeit.Timer('a=ones(shape=%s);svd(a)'%shape,'from numpy import ones; from numpy.linalg import svd') print "NumPy: ", t2.repeat(5,500) 2) Even having atlas installed (atlas3-base on amd64) caused a significant slowdown (~2x) on that test. This was similar to the case for i386, where I installed atlas3-sse2. 3) This is done in the source packages by Matthias Klose for both numeric and numarray, too. I figured he knows what he's doing. > b) and the python2.3-dev *and* python2.4-dev dependency > This is a _build_ dependency. The source package builds python python2.3-numpy and python2.4-numpy, so it needs Python.h for both. > Clearly, python-setuptools and cdbs are not yet installed > on my system (should be no problem). > I hope the setuptools issue, in particular, does not present a problem. 
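As an aside, the timing snippet at the top of Andrew's message appears to have lost the line binding the Timer object: t2 is used but never assigned. A runnable reconstruction (Python 3 print syntax, smaller repeat counts for illustration) might read:

```python
import timeit

# Reconstructed benchmark: time the svd of a 40x40 array of ones.
# The Timer must be bound to a name before repeat() can be called on it.
shape = '(40,40)'
t2 = timeit.Timer('a = ones(shape=%s); svd(a)' % shape,
                  'from numpy import ones; from numpy.linalg import svd')
print("NumPy:", t2.repeat(3, 100))   # three timings of 100 calls each
```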
As I said, I have created this repository for work, and I find setuptools to be invaluable for maintaining order amongst all the Python packages I use internally. In any case, this is again only a build dependency -- all it does is creates a numpy-0.9.8-py2.x.egg-info directory in site-packages alongside numpy. Let me be clear, since there's a lot of trepidation regarding setuptools: there is no use of setuptools (or even installation of setuptools) required to use these packages. Setuptools is required only to build from source. >> If I get some positive feedback, I'm likely to add this to the scipy.org >> download page. Also, I hope the official Debian and Ubuntu distros pick >> up numpy soon, and perhaps this will speed them along. >> > > yes - that would be brilliant! > OK, I'll wait a couple of days for some positive confirmation that this stuff works, (even from the various systems I'm setting up this repository for), and then I'll post it on the website. > What about scipy: presently debian sarge comes with > scipy 0.3.2. Installing old-scipy and new-scipy side-by side > seems impossible (unless one does something like wxversion select > stuff...) - should the new scipy debs just replace the old ones? > Unless you do some apt-pinning, I think any new scipy (0.4.x) in any repository in your sources list will automatically override the old (0.3.x) simply via the versioning mechanisms of apt-get. I like the idea of a wxversion-alike, but I've shifted all my code to use numpy and the new scipy, so I don't have any motivation to do any implementation. 
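For readers who, unlike Andrew, do want to hold back the packaged 0.3.x scipy, apt pinning is the mechanism he alludes to. A hypothetical /etc/apt/preferences entry (the package name and version pattern here are illustrative, not taken from an actual sarge system):

```
# /etc/apt/preferences -- keep the distro's 0.3.x scipy in place even
# when a third-party repository offers a newer version.
Package: python-scipy
Pin: version 0.3.*
Pin-Priority: 1001
```

A priority above 1000 holds the pinned version even if it would otherwise be upgraded.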
From strawman at astraw.com Thu Jun 8 13:33:58 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 10:33:58 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> References: <4487E5D0.40403@astraw.com> <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> Message-ID: <44885F86.2010503@astraw.com> Pau Gargallo wrote: > is your effort somehow related to > http://packages.debian.org/experimental/python/python2.3-numpy > ? > > it is a bit out of date, but already in experimental. > I did have a look at their packaging infrastructure. It was breaking for me with numpy-0.9.8, so I started my debian/rules from scratch (and tried several methods along the way -- both debhelper and cdbs based). Now, upon re-looking at their debian/rules which is also cdbs based, I can see they have some nice code I should use (regarding installation of documentation and f2py). I'll try to integrate their changes into my next release. At that point I may simply be maintaining a more up-to-date version of theirs. They also package new scipy. I'll see if I can leverage their efforts when I try to package that. 
From svetosch at gmx.net Thu Jun 8 13:56:57 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Thu, 08 Jun 2006 19:56:57 +0200 Subject: [Numpy-discussion] 5 bugs in numpy 0.9.8 (was: remaining matrix-non-preserving functions) In-Reply-To: <44819FFB.3050507@gmx.net> References: <44819FFB.3050507@gmx.net> Message-ID: <448864E9.9010007@gmx.net> Well as I got no replies it seems my earlier title wasn't drastic enough ;-) And mere mortals like me can't seem to file new tickets anymore, so I'm re-posting a summary here: affected functions: diff vstack hstack outer msort symptom: given numpy-matrices as inputs, these functions still return numpy-arrays (as opposed to the applicable rest of numpy's functions) Cheers, Sven Sven Schreiber schrieb: > Hi all, > > I just discovered that the diff function returns a numpy-array even for > matrix inputs. Since I'm a card-carrying matrix fanatic, I hope that > behavior qualifies as a bug. > > Then I went through some (most?) other functions/methods for which IMO > it's best to return matrices if the input is also a matrix-type. I found > that the following functions share the problem of diff (see below for > illustrations): > > vstack and hstack (although I always use r_ and c_ and they work fine > with matrices) > > outer > > msort > > > Should I open new tickets? (Or has this already been fixed since 0.9.8, > which I used because this time building the svn version failed for me?) 
> > Cheers, > Sven > >>>> n.__version__ > '0.9.8' >>>> a > matrix([[1, 0, 0], > [0, 1, 0], > [0, 0, 1]]) >>>> b > matrix([[0, 0, 0], > [0, 0, 0]]) >>>> n.diff(a) > array([[-1, 0], > [ 1, -1], > [ 0, 1]]) >>>> n.outer(a,b) > array([[0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0]]) >>>> n.msort(a) > array([[0, 0, 0], > [0, 0, 0], > [1, 1, 1]]) >>>> n.vstack([a,b]) > array([[1, 0, 0], > [0, 1, 0], > [0, 0, 1], > [0, 0, 0], > [0, 0, 0]]) >>>> n.hstack([a,b.T]) > array([[1, 0, 0, 0, 0], > [0, 1, 0, 0, 0], > [0, 0, 1, 0, 0]]) > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From robert.kern at gmail.com Thu Jun 8 14:37:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 13:37:01 -0500 Subject: [Numpy-discussion] 5 bugs in numpy 0.9.8 In-Reply-To: <448864E9.9010007@gmx.net> References: <44819FFB.3050507@gmx.net> <448864E9.9010007@gmx.net> Message-ID: Sven Schreiber wrote: > Well as I got no replies it seems my earlier title wasn't drastic enough ;-) > And mere mortals like me can't seem to file new tickets anymore, so I'm > re-posting a summary here: Of course you can file new tickets. You just have to register an account. Click on the "Register" link in the upper right-hand corner of the Trac page. We had to disallow unauthenticated ticket creation and wiki editing because we were getting hit daily by spammers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
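[Ed. note: until these functions are fixed, a thin wrapper can restore the matrix type. A minimal sketch -- the `diff_m` name is made up here, and the same pattern applies to vstack, hstack, outer, and msort:]

```python
import numpy as np

def diff_m(a, *args, **kwargs):
    """Like np.diff, but returns a matrix when given a matrix."""
    result = np.diff(np.asarray(a), *args, **kwargs)
    return np.asmatrix(result) if isinstance(a, np.matrix) else result

a = np.matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 1]])
print(type(diff_m(a)).__name__)  # matrix
```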
-- Umberto Eco From hetland at tamu.edu Thu Jun 8 15:31:31 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu, 8 Jun 2006 14:31:31 -0500 Subject: [Numpy-discussion] eig hangs Message-ID: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that lapack_lite would take over. For the moment, I am not concerned about speed -- I just want something that will work with small matrices. I installed numpy, and it passes all of the tests OK, but it hangs when doing eig: u, v = linalg.eig(rand(10,10)) # ....lots of nothing.... Do you *need* the linear algebra libraries for eig? BTW, inverse seems to work fine. -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From cookedm at physics.mcmaster.ca Thu Jun 8 16:23:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 16:23:26 -0400 Subject: [Numpy-discussion] eig hangs In-Reply-To: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> Message-ID: <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 14:31:31 -0500 Robert Hetland wrote: > > I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that > lapack_lite would take over. For the moment, I am not concerned > about speed -- I just want something that will work with small > matrices. I installed numpy, and it passes all of the tests OK, but > it hangs when doing eig: > > u, v = linalg.eig(rand(10,10)) > # ....lots of nothing.... > > Do you *need* the linear algebra libraries for eig? BTW, inverse > seems to work fine. It should work. Can you give us a specific matrix where it fails? What platform are you running on? Lapack_lite probably doesn't get much testing from the developers, because we probably all have optimized versions of blas and lapack.
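[Ed. note: a self-contained check of eig on a fixed random matrix, so a hang or a wrong answer is reproducible rather than depending on rand(); modern spelling with numpy.linalg:]

```python
import numpy as np

rng = np.random.RandomState(42)   # fixed seed: the same matrix every run
A = rng.rand(10, 10)
w, v = np.linalg.eig(A)

# eig returns eigenvalues w and column eigenvectors v with A @ v = v * w;
# a tiny residual confirms the decomposition actually worked
residual = np.abs(A @ v - v * w).max()
print("max residual:", residual)
```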
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Thu Jun 8 16:26:57 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 14:26:57 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 Message-ID: <44888811.1080703@ee.byu.edu> One of the hopes for the Summer of Code project involving getting the multidimensional array object into Python 2.6 is advertisement of the array protocol or array interface. I think one way to simplify the array protocol is simply to have only one attribute that is consulted to provide access to the protocol. I would like to deprecate all the array protocol attributes except for __array_struct__ (perhaps we could call this __array_interface__ but I'm happy keeping the name the same too.) If __array_struct__ is a CObject then it behaves as it does now. If __array_struct__ is a tuple then each entry in the tuple is one of the items currently obtained by an additional attribute access (except the first item is always an integer indicating the version of the protocol --- unused entries are None). This should simplify the array interface and allow easier future changes. It should also simplify NumPy so that it doesn't have to check for multiple attributes on arbitrary objects. I would like to eliminate all the other array protocol attributes before NumPy 1.0 (and re-label those such as __array_data__ that are useful in other contexts --- like ctypes). Comments?
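[Ed. note: a sketch of what a pure-Python consumer of the proposed dual-form attribute might look like. `unpack_array_struct` and the `Demo` class are hypothetical, not part of NumPy; the version number and slot layout are illustrative only:]

```python
def unpack_array_struct(obj):
    """Hypothetical consumer of the proposed dual-form __array_struct__."""
    s = obj.__array_struct__
    if isinstance(s, tuple):
        # Tuple form: first item is the protocol version; unused slots are None.
        version, rest = s[0], s[1:]
        return version, rest
    # Otherwise an opaque CObject wrapping the C-level struct,
    # meant to be consumed from C code, not Python.
    return None, s

class Demo:
    # Tuple form: version 3, a shape-like entry, one unused slot
    __array_struct__ = (3, (2, 2), None)

version, rest = unpack_array_struct(Demo())
print(version, rest)  # 3 ((2, 2), None)
```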
-Travis From arnd.baecker at web.de Thu Jun 8 16:28:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Jun 2006 22:28:06 +0200 (CEST) Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <44885C45.7040506@astraw.com> References: <4487E5D0.40403@astraw.com> <44885C45.7040506@astraw.com> Message-ID: On Thu, 8 Jun 2006, Andrew Straw wrote: > Arnd Baecker wrote: > > What worries me is > > a) the Build conflicts: atlas3-base > > > I hoped to investigate further and post afterwards, but my preliminary > findings that led to this decision are: > > 1) building with atlas (atlas3-base and atlas3-base-dev) caused a > significant slowdown (~10x) on my simple test on amd64 arch: > > import timeit > shape = '(40,40)' > t = timeit.Timer('a=ones(shape=%s);svd(a)'%shape,'from numpy import ones; > from numpy.linalg import svd') > print "NumPy: ", t.repeat(5,500) > > 2) Even having atlas installed (atlas3-base on amd64) caused a > significant slowdown (~2x) on that test. This was similar to the case > for i386, where I installed atlas3-sse2. > 3) This is done in the source packages by Matthias Klose for both > numeric and numarray, too. I figured he knows what he's doing. Alright, this ATLAS stuff always puzzled me and I thought that one has to have atlas3-base, atlas3-base-dev and atlas3-headers installed to use atlas3 during compilation. I assumed that installing additionally (even afterwards) atlas3-sse2 should give optimal performance on the corresponding machine. (Thinking about this, it is not clear why then atlas3-sse2-dev, so the previous statement must be wrong ...) OTOH, `apt-cache rdepends atlas3-base` shows a pretty long list, including python2.3-scipy, python2.3-numeric-ext, python2.3-numarray-ext OK, obviously I haven't understood the ATLAS setup of debian and better shut up now and leave this for the experts ....
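[Ed. note: a self-contained version of the timing test quoted above -- the same 40x40 svd, with smaller repeat counts so it finishes quickly:]

```python
import timeit

shape = '(40, 40)'
# Statement builds the array and runs svd; setup does the imports once
t = timeit.Timer('a = ones(shape=%s); svd(a)' % shape,
                 'from numpy import ones; from numpy.linalg import svd')
times = t.repeat(3, 50)   # 3 runs of 50 svd calls each
print("NumPy:", min(times))
```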
;-) Tomorrow I will remove the atlas3-base stuff before building and see how things work (I don't need that urgently as building from source seems easier, but the benefit of having proper debian packages pays off very quickly in the longer run ...) > > b) and the python2.3-dev *and* python2.4-dev dependency > > > This is a _build_ dependency. The source package builds python > python2.3-numpy and python2.4-numpy, so it needs Python.h for both. Alright, so no problem here - thanks for the clarification. [...] > > What about scipy: presently debian sarge comes with > > scipy 0.3.2. Installing old-scipy and new-scipy side-by side > > seems impossible (unless one does something like wxversion select > > stuff...) - should the new scipy debs just replace the old ones? > > > Unless you do some apt-pinning, I think any new scipy (0.4.x) in any > repository in your sources list will automatically override the old > (0.3.x) simply via the versioning mechanisms of apt-get. I like the idea > of a wxversion-alike, but I've shifted all my code to use numpy and the > new scipy, so I don't have any motivation to do any implementation. Also, it might not be completely trivial to set up and there is still a lot of other stuff which has to be done ... Best, Arnd From schofield at ftw.at Thu Jun 8 16:47:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 8 Jun 2006 22:47:15 +0200 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <06313EA6-9E1B-4BD3-9719-19F334FA746B@ftw.at> On 08/06/2006, at 10:54 AM, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > ... Great! 
I posted an offer earlier this week to debian-science to help work on numpy packages (but got no response). NumPy might be adopted much more rapidly once it has official packages in Debian and Ubuntu. I'm glad you're in control of the situation; now I can quietly withdraw my offer ;) No, seriously ... I'd be happy to help out if I can :) -- Ed From ndarray at mac.com Thu Jun 8 17:07:55 2006 From: ndarray at mac.com (Sasha) Date: Thu, 8 Jun 2006 17:07:55 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44888811.1080703@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> Message-ID: On 6/8/06, Travis Oliphant wrote: > ... > __array_struct__ (perhaps we could call this __array_interface__ but > I'm happy keeping the name the same too.) +0 on the name change and consider making it a method rather than an attribute. > > If __array_struct__ is a CObject then it behaves as it does now. > > If __array_struct__ is a tuple then each entry in the tuple is one of > the items currently obtained by an additional attribute access (except > the first item is always an integer indicating the version of the > protocol --- unused entries are None). > -1 This will complicate the use of the array interface. I would propose creating a subtype of CObject that has the necessary attributes so that one can do a.__array_interface__.shape, for example. I did not check if CObject is subclassable in 2.5, but if not, we can propose to make it subclassable for 2.6. > ... > > I would like to eliminate all the other array protocol attributes before > NumPy 1.0 (and re-label those such as __array_data__ that are useful in > other contexts --- like ctypes). +1 From cookedm at physics.mcmaster.ca Thu Jun 8 17:29:51 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu, 8 Jun 2006 17:29:51 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> Message-ID: <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 17:07:55 -0400 Sasha wrote: > On 6/8/06, Travis Oliphant wrote: > > ... > > __array_struct__ (perhaps we could call this __array_interface__ but > > I'm happy keeping the name the same too.) > > +0 on the name change and consider making it a method rather than an > attribute. +0 for name change; I'm happy with it as an attribute. > > If __array_struct__ is a CObject then it behaves as it does now. > > > > If __array_struct__ is a tuple then each entry in the tuple is one of > > the items currently obtained by an additional attribute access (except > > the first item is always an integer indicating the version of the > > protocol --- unused entries are None). > > > > -1 > > This will complicate the use of array interface. I would propose > creating a subtype of CObject that has the necessary attributes so > that one can do a.__array_interface__.shape, for example. I did not > check if CObject is subclassable in 2.5, but if not, we can propose to > make it subclassable for 2.6. The idea behind the array interface was to have 0 external dependencies: any array-like object from any package could add the interface, without requiring a 3rd-party module. That's why the C version uses a CObject. Subclasses of CObject start getting into 3rd-party requirements. How about a dict instead of a tuple? With keys matching the attributes it's replacing: "shapes", "typestr", "descr", "data", "strides", "mask", and "offset". The problem with a tuple from my point of view is I can never remember which order things go (this is why in the standard library the result of os.stat() and time.localtime() are now "tuple-like" classes with attributes). We still need __array_descr__, as the C struct doesn't provide all the info that this does. 
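[Ed. note: the dict form proposed here is essentially what became the standard __array_interface__ (with singular "shape"). Today's NumPy arrays expose it directly, which makes the shape of the proposal easy to see:]

```python
import numpy as np

a = np.zeros((2, 3))
iface = a.__array_interface__   # the dict-based protocol, version 3

# Keys name each piece of the old per-attribute protocol
print(sorted(iface))            # ['data', 'descr', 'shape', 'strides', 'typestr', 'version']
print(iface['shape'])           # (2, 3)
```

Note that `strides` is None for C-contiguous data, and `data` is a (pointer, read-only flag) pair.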
> > I would like to eliminate all the other array protocol attributes before > > NumPy 1.0 (and re-label those such as __array_data__ that are useful in > > other contexts --- like ctypes). > +1 +1 also -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From gnchen at cortechs.net Thu Jun 8 17:57:02 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 8 Jun 2006 14:57:02 -0700 Subject: [Numpy-discussion] Intel OSX test failure Message-ID: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> Hi! I just got an MacBook Pro and tried to install numpy+scipy on that. I successfully installed ipython+matplotlib+python 2.4 through darwinports. Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + gfortran) seems working fine for numpy. After I installed it and run numpy.test() in ipython, it failed. And the error is: In [4]: numpy.test() Found 3 tests for numpy.lib.getlimits Found 30 tests for numpy.core.numerictypes Found 13 tests for numpy.core.umath Found 3 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 95 tests for numpy.core.multiarray Found 3 tests for numpy.dft.helper Found 36 tests for numpy.core.ma Found 2 tests for numpy.core.oldnumeric Found 9 tests for numpy.lib.twodim_base Found 9 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 35 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 6 tests for numpy.core.records Found 19 tests for numpy.core.numeric Found 5 tests for numpy.distutils.misc_util Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ..............................................F......................... ........................................................................ 
........................................................................ ........................................................................ ........................................................................ .......... ====================================================================== FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ test_scalarmath.py", line 42, in check_large_types assert b == 6765201, "error with %r: got %r" % (t,b) AssertionError: error with : got 0.0 ---------------------------------------------------------------------- Ran 370 tests in 0.510s FAILED (failures=1) Out[4]: Anyone has any idea?? or Anyone ever successfully did that? Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net From tim.hochberg at cox.net Thu Jun 8 17:57:29 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 08 Jun 2006 14:57:29 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> Message-ID: <44889D49.1020209@cox.net> Sasha wrote: >On 6/8/06, Travis Oliphant wrote: > > >>... >>__array_struct__ (perhaps we could call this __array_interface__ but >>I'm happy keeping the name the same too.) >> >> > >+0 on the name change and consider making it a method rather than an attribute. > > I'm not thrilled with either name, nor do I have a better one, so put me down as undecided on name. I marginally prefer an attribute to a name here. I'm +1 on narrowing the interface though. >>If __array_struct__ is a CObject then it behaves as it does now. 
>> >>If __array_struct__ is a tuple then each entry in the tuple is one of >>the items currently obtained by an additional attribute access (except >>the first item is always an integer indicating the version of the >>protocol --- unused entries are None). >> >> >> > >-1 > >This will complicate the use of array interface. > I concur. >I would propose >creating a subtype of CObject that has the necessary attributes so >that one can do a.__array_interface__.shape, for example. I did not >check if CObject is subclassable in 2.5, but if not, we can propose to >make it subclassable for 2.6. > > Alternatively, if this proves to be a hassle, a function, unpack_interface or some such, could be provided that takes an __array_interface__ object and spits out the appropriate tuple or, perhaps better, and object with the appropriate field. > > >>... >> >>I would like to eliminate all the other array protocol attributes before >>NumPy 1.0 (and re-label those such as __array_data__ that are useful in >>other contexts --- like ctypes). >> >> >+1 > > +1. -tim From cookedm at physics.mcmaster.ca Thu Jun 8 18:11:57 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 18:11:57 -0400 Subject: [Numpy-discussion] Intel OSX test failure In-Reply-To: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> References: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> Message-ID: <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 14:57:02 -0700 Gennan Chen wrote: > Hi! > > I just got an MacBook Pro and tried to install numpy+scipy on that. > I successfully installed ipython+matplotlib+python 2.4 through > darwinports. > Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + > gfortran) seems working fine for numpy. After I installed it and run > numpy.test() in ipython, it failed. 
And the error is: > > In [4]: numpy.test() > Found 3 tests for numpy.lib.getlimits > Found 30 tests for numpy.core.numerictypes > Found 13 tests for numpy.core.umath > Found 3 tests for numpy.core.scalarmath > Found 8 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 95 tests for numpy.core.multiarray > Found 3 tests for numpy.dft.helper > Found 36 tests for numpy.core.ma > Found 2 tests for numpy.core.oldnumeric > Found 9 tests for numpy.lib.twodim_base > Found 9 tests for numpy.core.defmatrix > Found 1 tests for numpy.lib.ufunclike > Found 35 tests for numpy.lib.function_base > Found 1 tests for numpy.lib.polynomial > Found 6 tests for numpy.core.records > Found 19 tests for numpy.core.numeric > Found 5 tests for numpy.distutils.misc_util > Found 4 tests for numpy.lib.index_tricks > Found 46 tests for numpy.lib.shape_base > Found 0 tests for __main__ > ..............................................F......................... > ........................................................................ > ........................................................................ > ........................................................................ > ........................................................................ > .......... > ====================================================================== > FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ > test_scalarmath.py", line 42, in check_large_types > assert b == 6765201, "error with %r: got %r" % (t,b) > AssertionError: error with : got 0.0 > > ---------------------------------------------------------------------- > Ran 370 tests in 0.510s > > FAILED (failures=1) > Out[4]: > > > Anyone has any idea?? or Anyone ever successfully did that? 
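[Ed. note: the failing assertion checks a large scalar power -- the magic constant 6765201 is 51**4. A standalone sketch of that check, assuming (as the traceback suggests) the test exercises powers of NumPy scalar types:]

```python
import numpy as np

assert 51 ** 4 == 6765201   # the constant in the failing test

# The same power computed through NumPy scalar types should agree;
# on the failing MacBook build one of these came back as 0.0
for t in (np.float32, np.float64, np.int64):
    b = t(51) ** 4
    assert b == 6765201, "error with %r: got %r" % (t, b)
print("scalar powers ok")
```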
It's new; something's missing in the new power code I added for the scalartypes. It'll get fixed when I get around to it :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From gnchen at cortechs.net Thu Jun 8 18:18:37 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 8 Jun 2006 15:18:37 -0700 Subject: [Numpy-discussion] Intel OSX test failure In-Reply-To: <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> References: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> Message-ID: <407C3307-8C19-40F1-B2C9-82637633C15E@cortechs.net> Got you. BTW, I did manage to compile ATLAS 3.7 version into .a. Any chance I can use that? Or only shared object can be used?? Gen On Jun 8, 2006, at 3:11 PM, David M. Cooke wrote: > On Thu, 8 Jun 2006 14:57:02 -0700 > Gennan Chen wrote: > >> Hi! >> >> I just got an MacBook Pro and tried to install numpy+scipy on that. >> I successfully installed ipython+matplotlib+python 2.4 through >> darwinports. >> Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + >> gfortran) seems working fine for numpy. After I installed it and run >> numpy.test() in ipython, it failed. 
And the error is: >> >> In [4]: numpy.test() >> Found 3 tests for numpy.lib.getlimits >> Found 30 tests for numpy.core.numerictypes >> Found 13 tests for numpy.core.umath >> Found 3 tests for numpy.core.scalarmath >> Found 8 tests for numpy.lib.arraysetops >> Found 42 tests for numpy.lib.type_check >> Found 95 tests for numpy.core.multiarray >> Found 3 tests for numpy.dft.helper >> Found 36 tests for numpy.core.ma >> Found 2 tests for numpy.core.oldnumeric >> Found 9 tests for numpy.lib.twodim_base >> Found 9 tests for numpy.core.defmatrix >> Found 1 tests for numpy.lib.ufunclike >> Found 35 tests for numpy.lib.function_base >> Found 1 tests for numpy.lib.polynomial >> Found 6 tests for numpy.core.records >> Found 19 tests for numpy.core.numeric >> Found 5 tests for numpy.distutils.misc_util >> Found 4 tests for numpy.lib.index_tricks >> Found 46 tests for numpy.lib.shape_base >> Found 0 tests for __main__ >> ..............................................F...................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> .......... >> ===================================================================== >> = >> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) >> --------------------------------------------------------------------- >> - >> Traceback (most recent call last): >> File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ >> test_scalarmath.py", line 42, in check_large_types >> assert b == 6765201, "error with %r: got %r" % (t,b) >> AssertionError: error with : got 0.0 >> >> --------------------------------------------------------------------- >> - >> Ran 370 tests in 0.510s >> >> FAILED (failures=1) >> Out[4]: >> >> >> Anyone has any idea?? 
or Anyone ever successfully did that? > > It's new; something's missing in the new power code I added for the > scalartypes. It'll get fixed when I get around to it :-) > > -- > |>|\/|< > /--------------------------------------------------------------------- > -----\ > |David M. Cooke http:// > arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From strawman at astraw.com Thu Jun 8 18:19:19 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 15:19:19 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <4488A267.2000901@astraw.com> Andrew Straw wrote: >I've put together some .debs for numpy-0.9.8. There are binaries >compiled for amd64 and i386 architectures of Ubuntu Dapper, and I >suspect these will build from source for just about any Debian-based >distro and architecture. > > As usually happens when I try to release packages in the middle of the night, the cold light of morning brings some glaring problems. The biggest one is that the .diff.gz that was generated wasn't showing the changes against numpy that I had to make. I'm surprised that my own tests with apt-get source showed that it still built from source. I've uploaded a new version, 0.9.8-0ads2 (note the 2 at the end). You can check your installed version by doing the following: dpkg-query -l *numpy* Anyhow, here's the debian/changelog for 0.9.8-0ads2: * Fixed .orig.tar.gz so that .diff.gz includes modifications made to source. * Relax build-depend on setuptools to work with any version * Don't import setuptools in numpy.distutils.command.install unless it's already in sys.modules. I would like to merge with the package in debian experimental by Jose Fonseca and Marco Presi, but their package uses a lot of makefile wizardry that bombs out on me without any apparently informative error message. 
(I will be the first to admit that I know very little about Makefiles.) On the other hand, the main advantage their package currently has is installation of manpages for f2py, installation of the existing free documentation, and tweaks to script (f2py) permissions and naming. The latter of these issues seems to be solved by the build-dependency on setuptools, which is smart about installing scripts with the right permissions and names (it appends "2.4" to the python2.4 version of f2py, and so on). There have been a couple of offers of help from Ed and Ryan. I think in the long run, the best thing to do would be to invest these efforts communicating with the debian developers and to get a more up-to-date version in their repository. (My repository will only ever be an unofficial repository with the primary purpose of serving our needs at work which hopefully overlaps substantially with usefulness to others.) This should have a trickle-down effect to mainline Ubuntu repository, also. I doubt that the debian developers will want to start their python-numpy package from scratch, so I can suggest trying to submit patches to their system. You can checkout their source at svn://svn.debian.org/deb-scipy . Unfortunately, that's about the only guidance I can provide, because, like I said above, I can't get their Makefile wizardry to work on a newer version of numpy. Arnd, I would like to get to the bottom of these atlas issues myself, and I've followed a similar chain of logic as you. It's possible that the svd routine (dgesdd, IIRC) is somehow just a bad one to benchmark on. It is a real workhorse for me, and so it's really the one that counts for me. I'll put together a few timeit routines that test svd() and dot() and do some more experimentation, although I can't promise when. Let's keep everyone informed of any progress we make. Cheers! 
Andrew From oliphant at ee.byu.edu Thu Jun 8 18:22:47 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 16:22:47 -0600 Subject: [Numpy-discussion] Array Interface Message-ID: <4488A337.9000407@ee.byu.edu> Thanks for the continuing discussion on the array interface. I'm thinking about this right now, because I just spent several hours trying to figure out if it is possible to add additional "object-behavior" pointers to a type by creating a metatype that sub-types from the Python PyType_Type (this is the object that has all the function pointers to implement mapping behavior, buffer behavior, etc.). I found some emails from 2002 where Guido indicates that it is not possible to sub-type the PyType_Type object and add new function pointers at the end without major re-writing of Python. The suggested mechanism is to add a CObject to the tp_dict of the type object itself. As far as I can tell, this is equivalent to what we are doing with adding the array interface as an attribute look-up. In trying to sell the array interface to the wider Python community (and get it into Python 2.6), we need to simplify the interface though. I no longer think having all of these attributes off the object itself is a good idea (I think this is a case where flat *is not* better than nested). It turns out that the __array_struct__ interface is the really important one (it's the one that numarray, NumPy, and Numeric are all using). So, one approach is to simply toss out support for the other part of the interface in NumPy and "let it die." Is this what people who oppose using the __array_struct__ attribute in a dualistic way are suggesting? Clearly some of the attributes will need to survive (like __array_descr__ which gives information that __array_struct__ doesn't even provide). A big part of the push for multidimensional arrays in Python is the addition of the PyArray_Descr * object into Python (or something similar).
This would allow a way to describe data in a generic way and could change the use of __array_descr__. But, currently the __array_struct__ attribute approach does not support field-descriptions, so __array_descr__ is the only way. Please continue offering your suggestions... -Travis From fperez.net at gmail.com Thu Jun 8 18:48:27 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 16:48:27 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... Message-ID: Hi all, I'm starting the transition of a large code from Numeric to numpy, so I am now doing a fresh build with a lot more care than before, actually reading all the intermediate messages. I am a bit puzzled and could use some help. This is all on an ubuntu dapper box with the atlas-sse2 packages (and everything else recommended installed). By running as suggested in the scipy readme: python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py I get the following message at some point: ==================================== atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) looking libraries f77blas,cblas,atlas in /usr/local/lib but found None looking libraries f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None ( paths: /usr/lib/sse2/libf77blas.so ) ( paths: /usr/lib/sse2/libcblas.so ) ( paths: /usr/lib/sse2/libatlas.so ) ( paths: /usr/lib/sse2/liblapack_atlas.so ) looking libraries lapack in /usr/lib/sse2 but found None looking libraries lapack in /usr/lib/sse2 but found None looking libraries f77blas,cblas,atlas in /usr/lib but 
found None looking libraries f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None system_info.atlas_info ( include_dirs = /usr/local/include:/usr/include ) ( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h ) /usr/local/installers/src/scipy/numpy/numpy/distutils/system_info.py:870: UserWarning: ********************************************************************* Could not find lapack library within the ATLAS installation. ********************************************************************* warnings.warn(message) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_WITHOUT_LAPACK', None)] ==================================== What I find very puzzling here is that later on, the following goes by: lapack_atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None ( paths: /usr/lib/sse2/liblapack_atlas.so ) ( paths: /usr/lib/sse2/libf77blas.so ) ( paths: /usr/lib/sse2/libcblas.so ) ( paths: /usr/lib/sse2/libatlas.so ) ( paths: /usr/lib/sse2/liblapack_atlas.so ) looking libraries lapack in /usr/lib/sse2 but found None looking libraries 
lapack in /usr/lib/sse2 but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None system_info.lapack_atlas_info ( include_dirs = /usr/local/include:/usr/include ) ( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h ) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) FOUND: libraries = ['lapack_atlas', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_WITH_LAPACK_ATLAS', None)] ============================================== Does the second mean that it /is/ finding the right libraries? Since the first search in atlas_info is also printing ( paths: /usr/lib/sse2/liblapack_atlas.so ) I don't quite understand why it then reports the warning. For reference, here's the content of the relevant directories on my system: ============================================== longs[sse2]> ls /usr/lib/sse2 libatlas.a libcblas.a libf77blas.a liblapack_atlas.a libatlas.so@ libcblas.so@ libf77blas.so@ liblapack_atlas.so@ libatlas.so.3@ libcblas.so.3@ libf77blas.so.3@ liblapack_atlas.so.3@ libatlas.so.3.0 libcblas.so.3.0 libf77blas.so.3.0 liblapack_atlas.so.3.0 longs[sse2]> ls /usr/lib/atlas/sse2/ libblas.a libblas.so.3@ liblapack.a liblapack.so.3@ libblas.so@ libblas.so.3.0 liblapack.so@ liblapack.so.3.0 ============================================== In summary, I don't really know if this is actually finding what it wants or not, given the two messages. Cheers, f ps - it's worth mentioning that the sequence: python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py gets itself into a nasty recursion where it fires the interactive session 3 times in a row. 
And in doing so, it splits its own output in a funny way: [...] blas_opt_info: ======================================================================== Starting interactive session ------------------------------------------------------------------------ Tasks: i - Show python/platform/machine information ie - Show environment information c - Show C compilers information c - Set C compiler (current:None) f - Show Fortran compilers information f - Set Fortran compiler (current:None) e - Edit proposed sys.argv[1:]. Task aliases: 0 - Configure 1 - Build 2 - Install 2 - Install with prefix. 3 - Inplace build 4 - Source distribution 5 - Binary distribution Proposed sys.argv = ['/home/fperez/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py'] Choose a task (^D to quit, Enter to continue with setup): ##### msg: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('NO_ATLAS_INFO', 2)] ================= I tried to fix it, but the call sequence in that code is convoluted enough that after a few 'import traceback;traceback.print_stack()' tries I sort of gave up. That code is rather (how can I say this nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I won't be able to contribute a cleanup here. I think this tool should run by default in a mode with NO attempt to fire a command-line subsystem of its own, so users can simply run python /path/to/system_info > system_info.log for further analysis. From cookedm at physics.mcmaster.ca Thu Jun 8 19:06:42 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 19:06:42 -0400 Subject: [Numpy-discussion] Build questions, atlas, lapack... 
In-Reply-To: References: Message-ID: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 16:48:27 -0600 "Fernando Perez" wrote: [snip] > I tried to fix it, but the call sequence in that code is convoluted > enough that after a few 'import traceback;traceback.print_stack()' > tries I sort of gave up. That code is rather (how can I say this > nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I > won't be able to contribute a cleanup here. I think the whole numpy.distutils could use a good cleanup ... > I think this tool should run by default in a mode with NO attempt to > fire a command-line subsystem of its own, so users can simply run > > python /path/to/system_info > system_info.log > > for further analysis. Agree; I'll look at it. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Thu Jun 8 19:11:58 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 17:11:58 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> References: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> Message-ID: On 6/8/06, David M. Cooke wrote: > Agree; I'll look at it. Many thanks. I'm sorry not to help, but I have a really big fish to fry right now, and can't commit to the diversion this would mean. Cheers, f From dd55 at cornell.edu Thu Jun 8 09:43:07 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 8 Jun 2006 09:43:07 -0400 Subject: [Numpy-discussion] Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il> Message-ID: <200606080943.07515.dd55@cornell.edu> On Thursday 01 June 2006 12:46, Robert Kern wrote: > Nadav Horesh wrote: > > I recently upgraded to gcc4.1.1. 
When I tried to compile scipy from > > today's svn repository it halts with the following message: > > > > Traceback (most recent call last): > > File "setup.py", line 50, in ? > > setup_package() > > File "setup.py", line 42, in setup_package > > configuration=configuration ) > > File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line > > 170, in setup > > return old_setup(**new_attr) > > File "/usr/lib/python2.4/distutils/core.py", line 149, in setup > > dist.run_commands() > > File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands > > self.run_command(cmd) > > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run > > self.run_command(cmd_name) > > File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command > > self.distribution.run_command(command) > > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File > > "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > > line 109, in run > > self.build_extensions() > > File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in > > build_e xtensions > > self.build_extension(ext) > > File > > "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > > line 301, in build_extension > > link = self.fcompiler.link_shared_object > > AttributeError: 'NoneType' object has no attribute 'link_shared_object' > > > > ---- > > > > The output of gfortran --version: > > > > GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1) > > Hmm. The usual suspect (not finding the version) doesn't seem to be the > problem here. > > >>> from numpy.distutils.ccompiler import simple_version_match > >>> m = simple_version_match(start='GNU Fortran 95') > >>> m(None, 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)') > > '4.1.1' > > > I have also the old g77 compiler installed (g77-3.4.6). 
Is there a way to > > force numpy/scipy to use it? > > Sure. > > python setup.py config_fc --fcompiler=gnu build_src build_clib build_ext > build I am able to build numpy/scipy on a 64bit Athlon with gentoo and gcc-4.1.1. I get one error with scipy 0.5.0.1940: ============================================== FAIL: check_random_complex_overdet (scipy.linalg.tests.test_basic.test_lstsq) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 413, in check_random_complex_overdet assert_array_almost_equal(x,direct_lstsq(a,b),3) File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 233, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 77.7777777778%): Array 1: [[-0.0137+0.0173j 0.0037-0.0173j -0.0114+0.0119j] [ 0.0029-0.0356j 0.0086-0.034j 0.033 -0.0879j] [ 0.0041-0.0097j ... Array 2: [[-0.016 +0.0162j 0.003 -0.0171j -0.0148+0.009j ] [-0.0017-0.0405j 0.003 -0.036j 0.0256-0.0977j] [ 0.0038-0.0112j ... ---------------------------------------------------------------------- Also, there may be a minor bug in numpy/distutils. I get error messages during the build: customize GnuFCompiler Couldn't match compiler version for 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)\nCopyright (C) 2006 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License. 
\nFor more information about these matters, see the file named COPYING\n' customize CompaqFCompiler customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler This error message is returned because the fc_exe executable defined in GnuFCompiler returns a successful exit status to GnuFCompiler.get_version, but GnuFCompiler explicitly forbids identifying Fortran 95. I only bring it up because the build yields an error message that might confuse people. Darren From listservs at mac.com Thu Jun 8 19:43:57 2006 From: listservs at mac.com (listservs at mac.com) Date: Thu, 8 Jun 2006 19:43:57 -0400 Subject: [Numpy-discussion] Building statically-linked Numpy causes problems with f2py extensions Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Because of complaints of linking errors from some OS X users, I am trying to build and distribute statically-linked versions. To do this, I have taken the important libraries (e.g. freetype, libg2c), and put them in a directory called staticlibs, then built numpy by: python setup.py build_clib build_ext -L../staticlibs build bdist_mpkg It builds, installs and runs fine. 
However, when I go to build and run f2py extensions, I now get the
following (from my PyMC code):

/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py
     37     _randint = random.randint
     38     rexponential = random.exponential
- ---> 39     from flib import categor as _categorical
        global flib = undefined
        global categor = undefined
        global as = undefined
        _categorical = undefined
     40     from flib import rcat as rcategorical
     41     from flib import binomial as _binomial

ImportError: Loaded module does not contain symbol _initflib

Here, flib is the f2py extension that is built in the PyMC setup file
according to:

from numpy.distutils.core import setup, Extension

flib = Extension(name='PyMC.flib', sources=['PyMC/flib.f'])

version = "1.0"

distrib = setup(
    version=version,
    author="Chris Fonnesbeck",
    author_email="fonnesbeck at mac.com",
    description="Version %s of PyMC" % version,
    license="Academic Free License",
    name="PyMC",
    url="pymc.sourceforge.net",
    packages=["PyMC"],
    ext_modules=[flib],
)

This worked fine before my attempts to statically link numpy. Any ideas
regarding a solution?

Thanks,
Chris
- --
Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com +
Contact me on AOL IM using email address

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.3 (Darwin)

iD8DBQFEiLY+keka2iCbE4wRAi1/AJ90K7LIkF7Y+ti65cVxLB1KCA+MNgCggj2p
I1jzals7IoBeYX0cWfmlbcI=
=bY3a
-----END PGP SIGNATURE-----

From jdc at uwo.ca Thu Jun 8 21:23:11 2006
From: jdc at uwo.ca (Dan Christensen)
Date: Thu, 08 Jun 2006 21:23:11 -0400
Subject: [Numpy-discussion] Build questions, atlas, lapack...
In-Reply-To: 
References: 
Message-ID: <878xo75dhc.fsf@uwo.ca>

I don't know if it's related, but I've found on my Debian system that
whenever I want to compile something that uses the atlas library, I
need to put -L/usr/lib/sse2 on the gcc line, even though everything
seems to indicate that the linker has been told to look there already.
It could be that Ubuntu has a similar issue, and that it is affecting
your build.

Dan

From fperez.net at gmail.com Thu Jun 8 21:39:44 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 8 Jun 2006 19:39:44 -0600
Subject: [Numpy-discussion] Build questions, atlas, lapack...
In-Reply-To: <878xo75dhc.fsf@uwo.ca>
References: <878xo75dhc.fsf@uwo.ca>
Message-ID: 

On 6/8/06, Dan Christensen wrote:
> I don't know if it's related, but I've found on my Debian system that
> whenever I want to compile something that uses the atlas library, I
> need to put -L/usr/lib/sse2 on the gcc line, even though everything
> seems to indicate that the linker has been told to look there already.
> It could be that Ubuntu has a similar issue, and that it is affecting
> your build.

mmh, given how green I am in the ubuntu world, you may well be right.
But my original question came before any linking happens, since I was
just posting the messages from numpy's system_info, which doesn't
attempt to link anything, it just does a static filesystem analysis.
So perhaps there is more than one issue here.

I'm just trying to clarify, from the given messages (which I found a
bit confusing), whether all the atlas/sse2 stuff is actually being
picked up or not, at least as far as numpy thinks it is.

Cheers,

f

From simon at arrowtheory.com Thu Jun 8 22:09:19 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Fri, 9 Jun 2006 12:09:19 +1000
Subject: [Numpy-discussion] Build questions, atlas, lapack...
In-Reply-To: 
References: 
Message-ID: <20060609120919.6c50d6f1.simon@arrowtheory.com>

On Thu, 8 Jun 2006 16:48:27 -0600
"Fernando Perez" wrote:

> In summary, I don't really know if this is actually finding what it
> wants or not, given the two messages.

I just went through this on debian sarge, which is similar.

I put this in site.cfg:

[atlas]
library_dirs = /usr/lib/atlas/
atlas_libs = lapack, blas

Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2.
$ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4 Python 2.4.3 (#4, Jun 5 2006, 19:07:06) [GCC 3.4.1 (Debian 3.4.1-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> [1]+ Stopped env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4 Look in /proc/PID/maps for the relevant libs: $ ps -a|grep python ... 16953 pts/64 00:00:00 python2.4 $ grep atlas /proc/16953/maps b6fa7000-b750e000 r-xp 00000000 00:0c 1185402 /usr/lib/atlas/sse2/libblas.so.3.0 b750e000-b7513000 rwxp 00567000 00:0c 1185402 /usr/lib/atlas/sse2/libblas.so.3.0 b7513000-b7a58000 r-xp 00000000 00:0c 1185401 /usr/lib/atlas/sse2/liblapack.so.3.0 b7a58000-b7a5b000 rwxp 00545000 00:0c 1185401 /usr/lib/atlas/sse2/liblapack.so.3.0 $ But to really test this is working I ran python under gdb and set a break point on cblas_dgemm. Then a call to numpy.dot should break inside the sse2/liblapack.so.3.0. (also it's a lot faster with the sse2 dgemm) $ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4 GNU gdb 6.1-debian Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-linux"...Using host libthread_db library "/lib/tls/libthread_db.so.1". (gdb) break cblas_dgemm Function "cblas_dgemm" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Breakpoint 1 (cblas_dgemm) pending. (gdb) run Starting program: /home/users/simonb/bin/python2.4 [Thread debugging using libthread_db enabled] [New Thread -1210476000 (LWP 17557)] Python 2.4.3 (#4, Jun 5 2006, 19:07:06) [GCC 3.4.1 (Debian 3.4.1-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
Breakpoint 2 at 0xb7549db0 Pending breakpoint "cblas_dgemm" resolved <------- import numpy is in my pythonstartup >>> a=numpy.empty((1024,1024),'d') >>> b=numpy.empty((1024,1024),'d') >>> numpy.dot(a,b) [Switching to Thread -1210476000 (LWP 17557)] Breakpoint 2, 0xb7549db0 in cblas_dgemm () from /usr/lib/atlas/sse2/liblapack.so.3 (gdb) bingo. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From fperez.net at gmail.com Thu Jun 8 22:25:59 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 20:25:59 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: <20060609120919.6c50d6f1.simon@arrowtheory.com> References: <20060609120919.6c50d6f1.simon@arrowtheory.com> Message-ID: On 6/8/06, Simon Burton wrote: > On Thu, 8 Jun 2006 16:48:27 -0600 > "Fernando Perez" wrote: > > > > > In summary, I don't really know if this is actually finding what it > > wants or not, given the two messages. > > I just went through this on debian sarge which is similar. > > I put this in site.cgf: > > [atlas] > library_dirs = /usr/lib/atlas/ > atlas_libs = lapack, blas > > Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2. [...] > But to really test this is working I ran python under gdb and set > a break point on cblas_dgemm. Then a call to numpy.dot should > break inside the sse2/liblapack.so.3.0. > > (also it's a lot faster with the sse2 dgemm) > > $ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4 OK, thanks a LOT for that gdb trick: it provides a very nice way to understand what's actually going on. self.note("really, learn better use of gdb") Using that, though, it would then seem as if the build DID successfully find everything without any further action on my part: longs[dist]> gdb python GNU gdb 6.4-debian ... (gdb) break cblas_dgemm Function "cblas_dgemm" not defined. Make breakpoint pending on future shared library load? 
(y or [n]) y Breakpoint 1 (cblas_dgemm) pending. (gdb) run Starting program: /usr/bin/python ... Python 2.4.3 (#2, Apr 27 2006, 14:43:58) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. (no debugging symbols found) >>> import numpy Breakpoint 2 at 0x40429860 Pending breakpoint "cblas_dgemm" resolved >>> a=numpy.empty((1024,1024),'d') >>> b=numpy.empty((1024,1024),'d') >>> numpy.dot(a,b) [Switching to Thread 1075428416 (LWP 3919)] Breakpoint 2, 0x40429860 in cblas_dgemm () from /usr/lib/sse2/libcblas.so.3 ====================================================== Note that on my system, LD_LIBRARY_PATH does NOT contain that dir: longs[dist]> env | grep LD_LIB LD_LIBRARY_PATH=/usr/local/lf9560/lib:/usr/local/intel/mkl/8.0.2/lib/32:/usr/local/intel/compiler90/lib:/home/fperez/usr/lib:/home/fperez/usr/local/lib: and I built everything with a plain setup.py install --prefix=~/tmp/local without /any/ tweaks to site.cfg, no LD_LIBRARY_PATH modifications or anything else. I just installed atlas-sse2* and lapack3*, but NOT refblas3*. Basically it seems that the build process does the right thing out of the box, and the warning is spurious. Since I was being extra-careful in this build, I didn't want to let any warning of that kind go unchecked. It might still be worth fixing that warning to prevent others from going on a similar wild goose chase, but I'm not comfortable touching that code (I don't know if anyone besides Pearu is). Thanks for the help! Cheers, f From ndarray at mac.com Thu Jun 8 22:52:53 2006 From: ndarray at mac.com (Sasha) Date: Thu, 8 Jun 2006 22:52:53 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> Message-ID: On 6/8/06, David M. Cooke wrote: > ... > +0 for name change; I'm happy with it as an attribute. 
My rule of thumb for choosing between an attribute and a method is
that attribute access should not create new objects. In addition, to
me __array_interface__ feels like a generalization of the __array__
method, so I personally expected it to be a method the first time I
tried to use it.

> ...
> The idea behind the array interface was to have 0 external dependencies: any
> array-like object from any package could add the interface, without requiring
> a 3rd-party module. That's why the C version uses a CObject. Subclasses of
> CObject start getting into 3rd-party requirements.

Not necessarily. Different packages don't need to share the subclass,
but subclassing CObject is probably a bad idea for the reasons I will
explain below.

> How about a dict instead of a tuple? With keys matching the attributes it's
> replacing: "shapes", "typestr", "descr", "data", "strides", "mask", and
> "offset". The problem with a tuple from my point of view is I can never
> remember which order things go (this is why in the standard library the
> result of os.stat() and time.localtime() are now "tuple-like" classes with
> attributes).

My problem with __array_struct__ returning either a tuple or a CObject
is that the array protocol should really provide both. CObject is
useless for interoperability at the python level and a tuple (or dict)
is inefficient at the C level. Thus a good array-like object should
really provide both __array_struct__ for use by C modules and
__array_tuple__ (or whatever) for use by python modules. On the other
hand, making both required attributes/methods will put an extra burden
on package writers. Moreover, a pure python implementation of an
array-like object will not be able to provide __array_struct__ at all.
One possible solution would be an array protocol metaclass that adds
__array_struct__ to a class with __array_tuple__ and __array_tuple__
to a class with __array_struct__ (yet another argument to make both
methods).
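[Editorial illustration, not part of the original thread: a minimal
pure-Python sketch of the dict-style interface being discussed. The
class name, key set, and values are hypothetical -- the exact key
spellings were still under debate at this point in the discussion.]

```python
import array

class VectorLike(object):
    """Hypothetical pure-Python array-like object exposing a
    dict-style interface with keys along the lines David suggests
    ('shape', 'typestr', 'data', ...)."""

    def __init__(self, values):
        # Store the data in a stdlib buffer of native C doubles.
        self._buf = array.array('d', values)

    @property
    def __array_interface__(self):
        # buffer_info() gives (address, item count) for the buffer.
        ptr, length = self._buf.buffer_info()
        return {
            'version': 3,
            'shape': (length,),      # one-dimensional
            'typestr': '<f8',        # assumes a little-endian host
            'data': (ptr, False),    # (address as int, read-only flag)
        }

v = VectorLike([1.0, 2.0, 3.0])
iface = v.__array_interface__
print(sorted(iface))   # ['data', 'shape', 'typestr', 'version']
print(iface['shape'])  # (3,)
```

Note that this object never touches a CObject: everything in the dict
is a plain Python int, str, or tuple, which is exactly why a
dict/tuple form is the only one a pure-Python implementation can offer.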
> We still need __array_descr__, as the C struct doesn't provide all the info
> that this does.

What do you have in mind?

From fperez.net at gmail.com Fri Jun 9 01:28:04 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 8 Jun 2006 23:28:04 -0600
Subject: [Numpy-discussion] Getting rid of annoying weave nag
In-Reply-To: 
References: 
Message-ID: 

Hi all,

the following warning about strict-prototypes in weave drives me crazy:

longs[~]> python wbuild.py

cc1plus: warning: command line option "-Wstrict-prototypes" is valid
for Ada/C/ObjC but not for C++

since I use weave on auto-generated code, I get it lots of times and I
find spurious warnings to be very distracting.

Anyone object to this patch against current numpy SVN to get rid of
this thing?  (tracking where the hell that thing was coming from was
all kinds of fun)

Index: ccompiler.py
===================================================================
--- ccompiler.py        (revision 2588)
+++ ccompiler.py        (working copy)
@@ -191,6 +191,19 @@
         log.info('customize %s' % (self.__class__.__name__))
         customize_compiler(self)
         if need_cxx:
+            # In general, distutils uses -Wstrict-prototypes, but this option is
+            # not valid for C++ code, only for C.  Remove it if it's there to
+            # avoid a spurious warning on every compilation.  All the default
+            # options used by distutils can be extracted with:
+
+            # from distutils import sysconfig
+            # sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS',
+            #                           'CCSHARED', 'LDSHARED', 'SO')
+            try:
+                self.compiler_so.remove('-Wstrict-prototypes')
+            except ValueError:
+                pass
+
             if hasattr(self,'compiler') and self.compiler[0].find('gcc')>=0:
                 if sys.version[:3]>='2.3':
                     if not self.compiler_cxx:

### EOF

Cheers,

f

From cookedm at physics.mcmaster.ca Fri Jun 9 04:01:52 2006
From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 9 Jun 2006 04:01:52 -0400 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: References: Message-ID: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > Hi all, > > the following warning about strict-prototypes in weave drives me crazy: > > longs[~]> python wbuild.py > > cc1plus: warning: command line option "-Wstrict-prototypes" is valid > for Ada/C/ObjC but not for C++ > > since I use weave on auto-generated code, I get it lots of times and I > find spurious warnings to be very distracting. > > Anyone object to this patch against current numpy SVN to get rid of > this thing? (tracking where the hell that thing was coming from was > all kinds of fun) Go ahead. I'm against random messages being printed out anyways -- I'd get rid of the '' too. There's a bunch of code in scipy with 'print' statements that I don't think belong in a library. (Now, if we defined a logging framework, that'd be ok with me!) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From st at sigmasquared.net Fri Jun 9 04:06:24 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri, 09 Jun 2006 10:06:24 +0200 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: References: Message-ID: <44892C00.4060603@sigmasquared.net> > ==================================== > atlas_info: > ( library_dirs = /usr/local/lib:/usr/lib ) > ( paths: /usr/lib/atlas,/usr/lib/sse2 ) > looking libraries f77blas,cblas,atlas in /usr/local/lib but found None > looking libraries f77blas,cblas,atlas in /usr/local/lib but found None (.. more of these...) Some of these and similar spurious warnings can be eliminated by replacing the calls to check_libs in system_info.py with calls to check_libs2. 
Currently these warnings are generated for each file extension that is tested (".so", ".a"...) Alternatively, the warnings could be made more informative. Many of the other warnings could be eliminated by consolidating the various BLAS/LAPACK options. If anyone is manipulating the build system, could he please apply the patch from #114 fixing the Windows build? > I tried to fix it, but the call sequence in that code is convoluted > enough that after a few 'import traceback;traceback.print_stack()' > tries I sort of gave up. That code is rather (how can I say this > nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I > won't be able to contribute a cleanup here. Even if you spent enough time to understand the existing code, you probably wouldn't have a chance to clean up the code because any small change could break some obscure platform/compiler/library combination. Moreover, changes could break the build of scipy and other libraries depending on Numpy-distutils. If you really wanted to rewrite the build code, you'd need to specify a minimum set of supported platform and library combinations, have each of them available for testing and deliberately risk breaking any other platform. Regards, Stephan From fullung at gmail.com Fri Jun 9 05:54:25 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 9 Jun 2006 11:54:25 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44888811.1080703@ee.byu.edu> Message-ID: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Travis Oliphant > Sent: 08 June 2006 22:27 > To: numpy-discussion > Subject: [Numpy-discussion] Array Protocol change for Python 2.6 > > ... 
>
> I would like to eliminate all the other array protocol attributes before
> NumPy 1.0 (and re-label those such as __array_data__ that are useful in
> other contexts --- like ctypes).

Just out of curiosity:

In [1]: x = N.array([])

In [2]: x.__array_data__
Out[2]: ('0x01C23EE0', False)

Is there a reason why the __array_data__ tuple stores the address as a hex
string? I would guess that this representation of the address isn't the
most useful one for most applications.

Regards,

Albert

From fullung at gmail.com Fri Jun 9 06:02:56 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Fri, 9 Jun 2006 12:02:56 +0200
Subject: [Numpy-discussion] Building shared libraries with numpy.distutils
Message-ID: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za>

Hello all

For my Summer of Code project, I'm adding Support Vector Machine code to
SciPy. Underneath, I'm currently using libsvm. Thus far, I've been
compiling libsvm as a shared library (DLL on Windows) using SCons and
doing the wrapping with ctypes.

Now, I would like to integrate my code into the SciPy build.
Unfortunately, it doesn't seem as if numpy.distutils or distutils proper
knows about building shared libraries. Building shared libraries across
multiple platforms is tricky, to say the least, so I don't know if
implementing this functionality again is something worth doing. The
alternative -- never using shared libraries -- doesn't seem very
appealing either.

Is anybody building shared libraries? Any code or comments?
Regards,

Albert

From faltet at carabos.com Fri Jun 9 06:06:00 2006
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 9 Jun 2006 12:06:00 +0200
Subject: [Numpy-discussion] Array Protocol change for Python 2.6
In-Reply-To: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za>
References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za>
Message-ID: <200606091206.00322.faltet@carabos.com>

On Friday 09 June 2006 11:54, Albert Strasheim wrote:
> Just out of curiosity:
>
> In [1]: x = N.array([])
>
> In [2]: x.__array_data__
> Out[2]: ('0x01C23EE0', False)
>
> Is there a reason why the __array_data__ tuple stores the address as a hex
> string? I would guess that this representation of the address isn't the
> most useful one for most applications.

Good point. I hit this before and forgot to send a message about it. I
agree that an integer would be better. Although, now that I think about
it, I suppose the issue would be the difference in representation of
longs on 32-bit and 64-bit platforms, isn't it?

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V V     Cárabos Coop. V.   Enjoy Data
 "-"

From tim.hochberg at cox.net Fri Jun 9 12:04:09 2006
From: tim.hochberg at cox.net (Tim Hochberg)
Date: Fri, 09 Jun 2006 09:04:09 -0700
Subject: [Numpy-discussion] Array Protocol change for Python 2.6
In-Reply-To: 
References: <44888811.1080703@ee.byu.edu>
 <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca>
Message-ID: <44899BF9.9000002@cox.net>

Sasha wrote:
> On 6/8/06, David M. Cooke wrote:
>> ...
>> +0 for name change; I'm happy with it as an attribute.
>
> My rule of thumb for choosing between an attribute and a method is
> that attribute access should not create new objects.

Conceptually at least, couldn't there be a single __array_interface__
object associated with a given array? In that sense, it doesn't really
feel like creating a new object.
> In addition, to >me __array_interface__ feels like a generalization of __array__ >method, so I personally expected it to be a method the first time I >tried to use it. > > > >>... >>The idea behind the array interface was to have 0 external dependencies: any >>array-like object from any package could add the interface, without requiring >>a 3rd-party module. That's why the C version uses a CObject. Subclasses of >>CObject start getting into 3rd-party requirements. >> >> >> > >Not necessarily. Different packages don't need to share the subclass, >but subclassing CObject is probably a bad idea for the reasons I will >explain below. > > > >>How about a dict instead of a tuple? With keys matching the attributes it's >>replacing: "shapes", "typestr", "descr", "data", "strides", "mask", and >>"offset". The problem with a tuple from my point of view is I can never >>remember which order things go (this is why in the standard library the >>result of os.stat() and time.localtime() are now "tuple-like" classes with >>attributes). >> >> >> >My problem with __array_struct__ returning either a tuple or a CObject >is that array protocol sholuld really provide both. CObject is >useless for interoperability at python level and a tuple (or dict) is >inefficient at the C level. Thus a good array-like object should >really provide both __array_struct__ for use by C modules and >__array_tuple__ (or whatever) for use by python modules. On the other >hand, making both required attributes/methods will put an extra burden >on package writers. Moreover, a pure python implementation of an >array-like object will not be able to provide __array_struct__ at all. > One possible solution would be an array protocol metaclass that adds >__array_struct__ to a class with __array_tuple__ and __array_tuple__ >to a class with __array_struct__ (yet another argument to make both >methods). > > I don't understand this. 
I don't see how bringing in a metaclass is going to help a pure python type provide a sensible __array_struct__. That seems like a hopeless task. Shouldn't pure python implementations just provide __array__? A single attribute seems pretty appealing to me, I don't see much use for anything else. >>We still need __array_descr__, as the C struct doesn't provide all the info >>that this does. >> >> >> >What do you have in mind? > > Is there any prospect of merging this data into the C struct? It would be cleaner if all of the information could be embedded into the C struct, but I can see how that might be a backward compatibility nightmare. -tim > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From robert.kern at gmail.com Fri Jun 9 12:30:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Jun 2006 11:30:20 -0500 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <200606091206.00322.faltet@carabos.com> References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: Francesc Altet wrote: > A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure: > >>Just out of curiosity: >> >>In [1]: x = N.array([]) >> >>In [2]: x.__array_data__ >>Out[2]: ('0x01C23EE0', False) >> >>Is there a reason why the __array_data__ tuple stores the address as a hex >>string? I would guess that this representation of the address isn't the >>most useful one for most applications. > > Good point. I hit this before and forgot to send a message about this. I agree > that an integer would be better. Although, now that I think about this, I > suppose that the issue would be the difference in representation of longs on > 32-bit and 64-bit platforms, isn't it? Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's signedness.
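Robert's LLP64 point is easy to check from Python with ctypes (the sizes are platform-dependent; on Win64 the two widths differ):

```python
import ctypes

# Compare the width of a C long with the width of a pointer. On LP64
# Unix both are typically 8 bytes; on LLP64 Win64, long is 4 bytes but
# void* is 8, so a C long cannot hold every pointer value there.
long_size = ctypes.sizeof(ctypes.c_long)
ptr_size = ctypes.sizeof(ctypes.c_void_p)
print('sizeof(long) =', long_size, '; sizeof(void*) =', ptr_size)
```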
Please don't use Python ints to encode pointers. Holding arbitrary pointers is the job of CObjects. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Fri Jun 9 12:50:16 2006 From: ndarray at mac.com (Sasha) Date: Fri, 9 Jun 2006 12:50:16 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: On 6/9/06, Tim Hochberg wrote: > Sasha wrote: > ... > >> > >My rule of thumb for choosing between an attribute and a method is > >that attribute access should not create new objects. > > > Conceptually at least, couldn't there be a single __array_interface__ > object associated with a given array? In that sense, it doesn't really > feel like creating a new object. > In my view, conceptually, __array_interface__ creates an adaptor to the array-like object. What are the advantages of it being an attribute? It is never settable, so the most common advantage of packing get/set methods in a single attribute can be ruled out. Saving the typing of '()' cannot be taken seriously when the name contains a pair of double underscores :-). There was a similar issue discussed on the python-3000 mailing list with respect to the __hash__ method. > .... > >> > >My problem with __array_struct__ returning either a tuple or a CObject > >is that the array protocol should really provide both. CObject is > >useless for interoperability at the python level and a tuple (or dict) is > >inefficient at the C level. Thus a good array-like object should > >really provide both __array_struct__ for use by C modules and > >__array_tuple__ (or whatever) for use by python modules.
On the other > >hand, making both required attributes/methods will put an extra burden > >on package writers. Moreover, a pure python implementation of an > >array-like object will not be able to provide __array_struct__ at all. > > One possible solution would be an array protocol metaclass that adds > >__array_struct__ to a class with __array_tuple__ and __array_tuple__ > >to a class with __array_struct__ (yet another argument to make both > >methods). > > > > > I don't understand this. I don't see how bringing in a metaclass is > going to help a pure python type provide a sensible __array_struct__. > That seems like a hopeless task. Shouldn't pure python implementations > just provide __array__? > My metaclass idea is very similar to your unpack_interface suggestion. A metaclass can automatically add def __array_tuple__(self): return unpack_interface(self.__array_interface__()) or def __array_interface__(self): return pack_interface(self.__array_tuple__()) to a class that implements only one of the two required methods. > A single attribute seems pretty appealing to me, I don't see much use > for anything else. I don't mind just having __array_struct__ that must return a CObject. My main objection was against a method/attribute that may return either a CObject or something else. That felt like shifting the burden from the package writer to the package user. From ndarray at mac.com Fri Jun 9 12:53:19 2006 From: ndarray at mac.com (Sasha) Date: Fri, 9 Jun 2006 12:53:19 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: On 6/9/06, Tim Hochberg wrote: > Shouldn't pure python implementations > just provide __array__? > You cannot implement __array__ without importing numpy.
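Sasha's metaclass idea above can be sketched concretely. The pack/unpack helpers and the toy class below are hypothetical stand-ins (and use today's Python 3 metaclass syntax rather than 2006-era Python 2); the point is only that a metaclass can derive whichever of the two protocol methods a class omits:

```python
# Hypothetical helpers standing in for real (un)packing of the C-level
# struct; here they just convert between a tuple and a list.
def unpack_interface(struct):
    return tuple(struct)

def pack_interface(tup):
    return list(tup)

class ArrayProtocolMeta(type):
    """Fill in __array_tuple__ or __array_interface__ when one is missing."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        has_iface = '__array_interface__' in ns
        has_tuple = '__array_tuple__' in ns
        if has_iface and not has_tuple:
            cls.__array_tuple__ = (
                lambda self: unpack_interface(self.__array_interface__()))
        elif has_tuple and not has_iface:
            cls.__array_interface__ = (
                lambda self: pack_interface(self.__array_tuple__()))

class Toy(metaclass=ArrayProtocolMeta):
    # Implements only one of the two methods; the metaclass adds the other.
    def __array_tuple__(self):
        return ((3, 2), '<f8')

t = Toy()
print(t.__array_interface__())   # derived automatically from __array_tuple__
```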
From oliphant at ee.byu.edu Fri Jun 9 13:50:00 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 11:50:00 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Message-ID: <4489B4C8.3050606@ee.byu.edu> Albert Strasheim wrote: >Hello all > > > >>-----Original Message----- >>From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- >>discussion-bounces at lists.sourceforge.net] On Behalf Of Travis Oliphant >>Sent: 08 June 2006 22:27 >>To: numpy-discussion >>Subject: [Numpy-discussion] Array Protocol change for Python 2.6 >> >>... >> >>I would like to eliminate all the other array protocol attributes before >>NumPy 1.0 (and re-label those such as __array_data__ that are useful in >>other contexts --- like ctypes). >> >> > >Just out of curiosity: > >In [1]: x = N.array([]) > >In [2]: x.__array_data__ >Out[2]: ('0x01C23EE0', False) > >Is there a reason why the __array_data__ tuple stores the address as a hex >string? I would guess that this representation of the address isn't the most >useful one for most applications. > > I suppose we could have stored it as a Python Long integer. But, storing it as a string was probably inspired by SWIG. -Travis From tim.hochberg at cox.net Fri Jun 9 13:54:36 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 10:54:36 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: <4489B5DC.1080505@cox.net> Sasha wrote: >On 6/9/06, Tim Hochberg wrote: > > >>Sasha wrote: >>... >> >> >>>My rule of thumb for choosing between an attribute and a method is >>>that attribute access should not create new objects. 
>>> >>> >>> >>Conceptually at least, couldn't there be a single __array_interface__ >>object associated with a given array? In that sense, it doesn't really >>feel like creating a new object. >> >> >> >In my view, conceptually, __array_interface__ creates an adaptor to the >array-like object. What are the advantages of it being an attribute? >It is never settable, so the most common advantage of packing get/set >methods in a single attribute can be ruled out. Saving the typing of >'()' cannot be taken seriously when the name contains a pair of >double underscores :-). > >There was a similar issue discussed on the python-3000 mailing list >with respect to the __hash__ method. > > Isn't __array_interface__ always O(1)? By the criteria in that thread, that would make it a good candidate for being an attribute. [Stare at __array_interface__ spec...think..stare...] OK, I think I'm coming around to making it a function. Presumably, in: >>> a = arange(6) >>> ai1 = a.__array_interface__() >>> a.shape = [3, 2] >>> ai2 = a.__array_interface__() ai1 and ai2 will be different objects, pointing to structs with different shape and stride attributes. So, in that sense it's not conceptually constant and should be a function. What happens if I then delete or resize a? Hmmm. It looks like that's probably OK since the CObject grabs a reference to a. FWIW, at this point, I marginally prefer array_struct to array_interface. > > >>.... >> >> >>>My problem with __array_struct__ returning either a tuple or a CObject >>>is that the array protocol should really provide both. CObject is >>>useless for interoperability at the python level and a tuple (or dict) is >>>inefficient at the C level. Thus a good array-like object should >>>really provide both __array_struct__ for use by C modules and >>>__array_tuple__ (or whatever) for use by python modules. On the other >>>hand, making both required attributes/methods will put an extra burden >>>on package writers.
Moreover, a pure python implementation of an >>>array-like object will not be able to provide __array_struct__ at all. >>>One possible solution would be an array protocol metaclass that adds >>>__array_struct__ to a class with __array_tuple__ and __array_tuple__ >>>to a class with __array_struct__ (yet another argument to make both >>>methods). >>> >>> >>> >>> >>I don't understand this. I'm don't see how bringing in metaclass is >>going to help a pure python type provide a sensible __array_struct__. >>That seems like a hopeless task. Shouldn't pure python implementations >>just provide __array__? >> >> >> > >My metaclass idea is very similar to your unpack_interface suggestion. > A metaclass can autonatically add > >def __array_tuple__(self): > return unpack_interface(self.__array_interface__()) > > >or > >def __array_interface__(self): > return pack_interface(self.__array_tuple__()) > >to a class that only implements only one of the two required methods. > > It seems like 99% of the people will never care about this at the Python level, so adding an extra attribute is mostly clutter. For those few who do care a function seems preferable. To be honest, I don't actually see a need for anything other than the basic __array_struct__. >>A single attribute seems pretty appealing to me, I'm don't see much use >>for anything else. >> >> > >I don't mind just having __array_struct__ that must return a CObject. >My main objection was against a method/attribute that may return >either CObject or something else. That felt like shifting the burden >from package writer to the package user. > > I concur. 
> >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From oliphant at ee.byu.edu Fri Jun 9 14:08:51 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 12:08:51 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: <4489B933.4080003@ee.byu.edu> Tim Hochberg wrote: > Sasha wrote: > >> On 6/8/06, David M. Cooke wrote: >> >> >>> ... >>> +0 for name change; I'm happy with it as an attribute. >>> >>> >> >> My rule of thumb for choosing between an attribute and a method is >> that attribute access should not create new objects. >> Interesting rule. In NumPy this is not quite the rule followed. Basically, attributes are used when getting or setting intrinsic "properties" of the array. Attributes are used for properties that are important in defining what an array *is*. The flags attribute, for example, is an important intrinsic property of the array but it returns a flags object when it is accessed. The flat attribute also returns a new object (it is arguable whether it should have been a method or an attribute but it is enough of an intrinsic property --- setting the flat attribute sets elements of the array -- that with historical precedence it was left as an attribute). By this measure, the array interface should be an attribute. >>> >> >> My problem with __array_struct__ returning either a tuple or a CObject >> is that the array protocol should really provide both. > This is a convincing argument. Yes, the array protocol should provide both. Thus, we can't over-ride the usage of the same name unless that name produces an object through which both interfaces can be obtained. Is that Sasha's suggestion?
> > A single attribute seems pretty appealing to me, I'm don't see much > use for anything else. > > >>> We still need __array_descr__, as the C struct doesn't provide all >>> the info >>> that this does. >>> >>> >> >> What do you have in mind? >> >> > Is there any prospect of merging this data into the C struct? It would > be cleaner if all of the information could be embedded into the C > struct, but I can see how that might be a backward compatibility > nightmare. I do think it should be merged into the C struct. The simplest thing to do is to have an additional PyObject * as part of the C struct which could be NULL (or unassigned). The backward compatibility is a concern but when thinking about what Python 2.6 should support we should not be too crippled by it. Perhaps we should just keep __array_struct__ and compress all the other array_interface methods into the __array_interface__ attribute which returns a dictionary from which the Python-side interface can be produced. Keep in mind there are two different (but related) issues at play here. 1) What goes in to NumPy 1.0 2) What we propose should go into Python 2.6 I think for #1 we should compress the Python-side array protocol into a single __array_interface__ attribute that returns a dictionary. We should also expand the C-struct to contain what _array_descr_ currently provides. -Travis From alexander.belopolsky at gmail.com Fri Jun 9 14:55:07 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 9 Jun 2006 14:55:07 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: On 6/9/06, Travis Oliphant wrote: > ... In NumPy this is not quite the rule followed. > Bascially attributes are used when getting or setting intrinsinc > "properties" of the array. 
Attributes are used for properties that are > important in defining what an array *is*. The flags attribute, for > example, is an important intrinsic property of the array but it returns > a flags object when it is accessed. The flat attribute also returns a > new object (it is arguable whether it should have been a method or an > attribute but it is enough of an intrinsic property --- setting the flat > attribute sets elements of the array -- that with historical precedence > it was left as an attribute). > > By this measure, the array interface should be an attribute. > The array interface is not an intrinsic property of the array, but rather an alternative representation of the array itself. Flags are properly an attribute because they are settable. Something like >>> x.flags()['WRITEABLE'] = False although technically possible, would be quite ugly. Similarly, the shape attribute, although it fails my rule of thumb by creating a new object, >>> x.shape is x.shape False is justifiably an attribute because otherwise two methods, get_shape and set_shape, would be required. I don't think "flat" should be an attribute, however. I could not find the reference, but I remember a discussion of why __iter__ should not be an attribute and IIRC the answer was because an iterator has mutable state that is not reflected in the underlying object: >>> x = arange(5) >>> i = x.flat >>> list(i) [0, 1, 2, 3, 4] >>> list(i) [] >>> list(x.flat) [0, 1, 2, 3, 4] > >> My problem with __array_struct__ returning either a tuple or a CObject > >> is that the array protocol should really provide both. > > > This is a convincing argument. Yes, the array protocol should provide > both. Thus, we can't over-ride the usage of the same name unless that > name produces an object through which both interfaces can be obtained. > > Is that Sasha's suggestion? > It was, but I quickly retracted it in favor of a mechanism to unpack the CObject.
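The statefulness argument in the x.flat example above is just the general Python iterator contract, and can be reproduced without numpy:

```python
# Pure-Python analogue of the x.flat example: an iterator carries
# mutable state, so consuming it once leaves it exhausted, while the
# underlying object is untouched.
x = list(range(5))
i = iter(x)
assert list(i) == [0, 1, 2, 3, 4]
assert list(i) == []                      # same iterator, now exhausted
assert list(iter(x)) == [0, 1, 2, 3, 4]   # a fresh iterator starts over
```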
FWIW, I am also now -0 on the name change from __array_struct__ to __array_interface__ if what it provides is just a struct wrapped in a CObject. From strawman at astraw.com Fri Jun 9 15:26:33 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 12:26:33 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <4489CB69.7080702@astraw.com> On the one hand, I feel we should keep __array_struct__ behaving exactly as it is now. There's already lots of code that uses it, and it's tremendously useful despite (because of?) its simplicity. For these use cases, the __array_descr__ information has already proven unnecessary. I must say that I, and probably others, thought that __array_struct__ would be future-proof. Although the magnitude of the proposed change to add this information to the C-struct PyArrayInterface is minor, it still breaks code in the wild. On the other hand, I'm only beginning to grasp the power of the __array_descr__ information.
So perhaps bumping the PyArrayInterface.version to 3 (2 is the current, and as far as I can tell, original version) and going forward would be justified. Perhaps there's a way towards backwards-compatibility -- the various array consumers could presumably support _reading_ both v2 and version 3 nearly forever, but could spit out warnings when reading v2. It seems v3 would be a simple superset of v2, so implementation of this wouldn't be hard. The challenge will be when a implementor returns a v3 __array_struct__ to something that reads only v2. For this reason, maybe it's better to break backwards compatibility now before even more code is written to read v2. Is it clear what would need to be done to provide a C-struct giving the _array_descr_ information? What's the problem with keeping __array_descr__ access available only at the Python level? Your original email suggested limiting the number of attributes, which I agree with, but I don't think we need to go to the logical extreme. Does simply keeping __array_descr__ as part of the Python array interface avoid these issues? At what cost? Cheers! Andrew Travis Oliphant wrote: >Keep in mind there are two different (but related) issues at play here. > >1) What goes in to NumPy 1.0 >2) What we propose should go into Python 2.6 > > >I think for #1 we should compress the Python-side array protocol into a >single __array_interface__ attribute that returns a dictionary. We >should also expand the C-struct to contain what _array_descr_ currently >provides. 
> > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From tim.hochberg at cox.net Fri Jun 9 15:52:38 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 12:52:38 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <4489D186.3020605@cox.net> Sasha wrote: >On 6/9/06, Travis Oliphant wrote: > > >>... In NumPy this is not quite the rule followed. >>Bascially attributes are used when getting or setting intrinsinc >>"properties" of the array. Attributes are used for properties that are >>important in defining what an array *is*. The flags attribute, for >>example, is an important intrinsinc property of the array but it returns >>an flags object when it is accessed. The flat attribute also returns a >>new object (it is arguable whether it should have been a method or an >>attribute but it is enough of an intrinsic property --- setting the flat >>attribute sets elements of the array -- that with historical precedence >>it was left as an attribute). >> >>By this meausure, the array interface should be an attribute. >> >> >> > >Array interface is not an intrinsic property of the array, but rather >an alternative representation of the array itself. > > I was going to say that it may help to think of array_interface as returning a *view*, since that seems to be the semantics that could probably be implemented safely without too much trouble. However, it looks like that's not what happens. array_interface->shape and strides point to the raw shape and strides for the array. That looks like it's a problem. 
Isn't: >>> ai = a.__array_interface__ >>> a.shape = newshape going to result in ai having stale pointers to shape and strides that no longer exist? Potentially resulting in a segfault? It seems the safe approach is to give array_interface its own shape and strides data. An implementation shortcut could be to actually generate a new view in array_struct_get and then pass that to PyCObject_FromVoidPtrAndDesc. Thus the CObject would have the only handle to the new view and it couldn't be corrupted. [SNIP] -tim From oliphant at ee.byu.edu Fri Jun 9 16:05:50 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 14:05:50 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D186.3020605@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> Message-ID: <4489D49E.3090401@ee.byu.edu> Tim Hochberg wrote: >I was going to say that it may help to think of array_interface as >returning a *view*, since that seems to be the semantics that could >probably be implemented safely without too much trouble. However, it >looks like that's not what happens. array_interface->shape and strides >point to the raw shape and strides for the array. That looks like it's a >problem. Isn't: > > >>> ai = a.__array_interface__ > >>> a.shape = newshape > >going to result in ai having stale pointers to shape and strides that >no longer exist? > This is an implementation detail. I'm still trying to gather some kind of consensus on what to actually do here. There is no such __array_interface__ attribute at this point.
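Tim's stale-pointer worry can be mimicked in pure Python: if the interface object merely aliases the array's live shape buffer, a later reshape changes (or, at the C level, frees) data the consumer still points at, whereas snapshotting a copy is safe. The classes and functions below are toy stand-ins, not numpy internals:

```python
# Toy model of the hazard: 'alias' keeps a reference to the live
# (mutable) shape buffer, 'snapshot' copies it the way a view would.
class FakeArray:
    def __init__(self, shape):
        self.shape = list(shape)   # stands in for the C-level shape array

def alias_interface(a):
    return {'shape': a.shape}          # points at the live buffer

def snapshot_interface(a):
    return {'shape': tuple(a.shape)}   # private copy

a = FakeArray((2, 3))
alias = alias_interface(a)
snap = snapshot_interface(a)
a.shape[:] = [3, 2]                    # analogue of a.shape = newshape
assert alias['shape'] == [3, 2]        # changed under the consumer's feet
assert snap['shape'] == (2, 3)         # the snapshot is unaffected
```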
-Travis From strawman at astraw.com Fri Jun 9 16:51:57 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 13:51:57 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D3E8.4060108@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489CB69.7080702@astraw.com> <4489D3E8.4060108@ee.byu.edu> Message-ID: <4489DF6D.8010407@astraw.com> Travis Oliphant wrote: > Andrew Straw wrote: > >> On the one hand, I feel we should keep __array_struct__ behaving >> exactly as it is now. There's already lots of code that uses it, and >> it's tremendously useful despite (because of?) it's simplicity. For >> these of use cases, the __array_descr__ information has already >> proven unnecessary. I must say that I, and probably others, thought >> that __array_struct__ would be future-proof. Although the magnitude >> of the proposed change to add this information to the C-struct >> PyArrayInterface is minor, it still breaks code in the wild. >> > I don't see how it breaks any code in the wild to add an additional > member to the C-struct. We could easily handle it in new code with a > flag setting (like Python uses). The only possible problem is > looking for it when it is not there. Ahh, thanks for clarifying. Let me paraphrase to make sure I got it right: given a C-struct "inter" of type PyArrayInterface, if and only if ((inter.flags & HAS_ARRAY_DESCR) == HAS_ARRAY_DESCR) inter could safely be cast as PyArrayInterfaceWithArrayDescr and thus expose a new member. This does seem to avoid all the issues and maintain backwards compatibility. I guess the only potential complaint is that it's a little C trick which might be unpalatable to the core Python devs, but it doesn't seem egregious to me. If I do understand this issue, I'm +1 for the above scheme provided the core Python devs don't mind. Cheers! 
Andrew From cookedm at physics.mcmaster.ca Fri Jun 9 17:04:09 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 17:04:09 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <20060609170409.4a68fa81@arbutus.physics.mcmaster.ca> On Fri, 09 Jun 2006 12:08:51 -0600 Travis Oliphant wrote: > Tim Hochberg wrote: > > > Sasha wrote: > > > >> On 6/8/06, David M. Cooke wrote: > > >>> > >> > >> My problem with __array_struct__ returning either a tuple or a CObject > >> is that array protocol sholuld really provide both. > > > This is a convincing argument. Yes, the array protocol should provide > both. Thus, we can't over-ride the usage of the same name unless that > name produces an object through which both interfaces can be obtained. True, didn't think about that. +1. > >>> We still need __array_descr__, as the C struct doesn't provide all > >>> the info > >>> that this does. > >> > >> What do you have in mind? > >> > > Is there any prospect of merging this data into the C struct? It would > > be cleaner if all of the information could be embedded into the C > > struct, but I can see how that might be a backward compatibility > > nightmare. > > I do think it should be merged into the C struct. The simplest thing > to do is to have an additional PyObject * as part of the C struct which > could be NULL (or unassigned). The backward compatibility is a concern > but when thinking about what Python 2.6 should support we should not be > too crippled by it. > > Perhaps we should just keep __array_struct__ and compress all the other > array_interface methods into the __array_interface__ attribute which > returns a dictionary from which the Python-side interface can be produced. +1. 
I'm ok with two attributes: __array_struct__ (for C), and __array_interface__ (as a dict for Python). For __array_descr__, I would require everything that provides an __array_struct__ must also provide an __array_interface__, then __array_descr__ can become a 'descr' key in __array_interface__. Requiring that would also mean that any array-like object can be introspected from Python or C. I think that the array_descr is complicated enough that keeping it as a Python object is ok: you don't have to reinvent routines to make tuple-like objects, and handle memory for strings, etc. If you're using the array interface, you've got Python available: use it. If you *do* want a C-level version, I'd make it simple, and concatenate the typestr descriptions of each field together, like '>i2>f8', and forget the names (you can grab them out of __array_interface__['descr'] if you need them). That's simple enough to be parseable with sscanf. > Keep in mind there are two different (but related) issues at play here. > > 1) What goes in to NumPy 1.0 > 2) What we propose should go into Python 2.6 > > > I think for #1 we should compress the Python-side array protocol into a > single __array_interface__ attribute that returns a dictionary. We > should also expand the C-struct to contain what _array_descr_ currently > provides. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Fri Jun 9 17:08:32 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 14:08:32 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D49E.3090401@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> Message-ID: <4489E350.2070500@cox.net> Travis Oliphant wrote: >Tim Hochberg wrote: > > > >>I was going to say that it may help to think of array_interface as >>returning a *view*, since that seems to be the semantics that could >>probably be implemented safely without too much trouble. However, it >>looks like that's not what happens. array_interface->shape and strides >>point to the raw shape and strides for the array. That looks like it's a >>problem. Isn't: >> >> >> >>>>>ai = a.__array_interface__ >>>>>a.shape = newshape >>>>> >>>>> >>going to result in ai having a stale pointers to shape and strides that >>no longer exist? >> >> >> >This is an implementation detail. I'm still trying to gather some kind >of consensus on what to actually do here. > There were three things mixed together in my post: 1. The current implementation of __array_struct__ looks buggy. Should I go ahead and file a bug report so that this behaviour doesn't get blindly copied over from __array_struct__ to whatever the final doohickey is called, or is that going to be totally rewritten in any case? 2. Whether __array_struct__ or __array_interface__ or whatever it gets called returns something that's kind of like a view (has its own copies of shape and strides, mainly) versus an alias for the original array (somehow tries to track the original array's shape and strides) is a semantic difference, not an implementation detail.
I suspect that no one really cares that much about this and we'll end up doing what's easiest to get right; I'm pretty certain that is view semantics. It may be helpful to pronounce on that now, since it's possible the semantics might influence the name chosen, but I don't think it's critical. 3. The implementation details I provided were, uh, implementation details. -tim > There is no such >__array_interface__ attribute at this point. > > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From fperez.net at gmail.com Fri Jun 9 17:19:14 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Jun 2006 15:19:14 -0600 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> Message-ID: On 6/9/06, David M. Cooke wrote: > On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > > Anyone object to this patch against current numpy SVN to get rid of > > this thing? (tracking where the hell that thing was coming from was > > all kinds of fun) > > Go ahead. > > I'm against random messages being printed out anyways -- I'd get > rid of the '' too. There's a bunch of code in scipy > with 'print' statements that I don't think belong in a library. (Now, > if we defined a logging framework, that'd be ok with me!) Before I commit anything, let's decide on that one. Weave used to print 'None' whenever it compiled anything, I changed it a while ago to the current 'weave:compiling'. I'm also of the opinion that libraries should operate quietly, but with weave I've always wanted that message in there. The reason is that when weave compiles (esp. with blitz in the picture), the execution takes a long time.
The same function goes from milliseconds to 30 seconds of run time depending on whether compilation is happening or not. This difference is so dramatic that I think a message is justified (absent a proper logging framework). It's helpful to know that the time is going into C++ compilation, and not your code hanging for 30 seconds. Opinions? f From cookedm at physics.mcmaster.ca Fri Jun 9 17:45:28 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 17:45:28 -0400 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> Message-ID: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> On Fri, 9 Jun 2006 15:19:14 -0600 "Fernando Perez" wrote: > On 6/9/06, David M. Cooke wrote: > > On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > > > > Anyone object to this patch against current numpy SVN to get rid of > > > this thing? (tracking where the hell that thing was coming from was > > > all kinds of fun) > > > > Go ahead. > > > > I'm against random messages being printed out anyways -- I'd get > > rid of the '' too. There's a bunch of code in scipy > > with 'print' statements that I don't think belong in a library. (Now, > > if we defined a logging framework, that'd be ok with me!) > > Before I commit anything, let's decide on that one. Weave used to > print 'None' whenever it compiled anything, I changed it a while ago > to the current 'weave:compiling'. I'm also of the opinion that > libraries should operate quietly, but with weave I've always wanted > that message in there. The reason is that when weave compiles (esp. > with blitz in the picture), the execution takes a long time. The same > function goes from miliseconds to 30 seconds of run time depending on > whether compilation is happening or not. > > This difference is so dramatic that I think a message is justified > (absent a proper logging framework).
It's helpful to know that the > time is going into c++ compilation, and not your code hanging for 30 > seconds. Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm concerned about. inline_tools.compile_function takes a verbose flag, though, which eventually gets passed to build_tools.build_extension (which I believe does all the compiling for weave). It's probably more reasonable to have inline_tools.compile_function default to verbose=1 instead of 0, then build_extension will print 'Compiling code...' (that should be changed to mention weave). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Fri Jun 9 17:55:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 14:55:53 -0700 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> Message-ID: <4489EE69.6050509@cox.net> David M. Cooke wrote: >On Fri, 9 Jun 2006 15:19:14 -0600 >"Fernando Perez" wrote: > > > >>On 6/9/06, David M. Cooke wrote: >> >> >>>On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: >>> >>> >>>>Anyone object to this patch against current numpy SVN to get rid of >>>>this thing? (tracking where the hell that thing was coming from was >>>>all kinds of fun) >>>> >>>> >>>Go ahead. >>> >>>I'm against random messages being printed out anyways -- I'd get >>>rid of the '' too. There's a bunch of code in scipy >>>with 'print' statements that I don't think belong in a library. (Now, >>>if we defined a logging framework, that'd be ok with me!) >>> >>> >>Before I commit anything, let's decide on that one. 
Weave used to >>print 'None' whenever it compiled anything, I changed it a while ago >>to the current 'weave:compiling'. I'm also of the opinion that >>libraries should operate quietly, but with weave I've always wanted >>that message in there. The reason is that when weave compiles (esp. >>with blitz in the picture), the execution takes a long time. The same >>function goes from miliseconds to 30 seconds of run time depending on >>whether compilation is happening or not. >> >>This difference is so dramatic that I think a message is justified >>(absent a proper logging framework). It's helpful to know that the >>time is going into c++ compilation, and not your code hanging for 30 >>seconds. >> >> > >Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm >concerned about. > >inline_tools.compile_function takes a verbose flag, though, which eventually >gets passed to build_tools.build_extension (which I believe does all the >compiling for weave). It's probably more reasonable to have >inline_tools.compile_function default to verbose=1 instead of 0, then >build_extension will print 'Compiling code...' (that should be changed to >mention weave). > > Assuming inline_tools doesn't already use logging, might it be advantageous to have it use Python's logging module? >>> logging.getLogger("scipy.weave").warning("compiling -- this may take some time") WARNING:scipy.weave:compiling -- this may take some time [I think warning is the lowest level that gets displayed by default] -tim From fperez.net at gmail.com Fri Jun 9 18:21:00 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Jun 2006 16:21:00 -0600 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> Message-ID: On 6/9/06, David M. 
Cooke wrote: > > This difference is so dramatic that I think a message is justified > > (absent a proper logging framework). It's helpful to know that the > > time is going into c++ compilation, and not your code hanging for 30 > > seconds. > > Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm > concerned about. > > inline_tools.compile_function takes a verbose flag, though, which eventually > gets passed to build_tools.build_extension (which I believe does all the > compiling for weave). It's probably more reasonable to have > inline_tools.compile_function default to verbose=1 instead of 0, then > build_extension will print 'Compiling code...' (that should be changed to > mention weave). I failed to mention that I agree with you: the proper solution is to use logging for this. For now I'll commit the strict-prototypes fix, and if I find myself with a lot of spare time, I'll try to clean things up a little bit to use logging (there's already a logger instance running in there). Cheers, f From Chris.Barker at noaa.gov Fri Jun 9 18:50:21 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 09 Jun 2006 15:50:21 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <44875BA8.806@astraw.com> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> <44874C7D.4050208@noaa.gov> <44875BA8.806@astraw.com> Message-ID: <4489FB2D.4000500@noaa.gov> Andrew Straw wrote: > Christopher Barker wrote: >> Joe Harrington wrote: >>> My >>> suggestion is that all the other pages be automatic redirects to the >>> scipy.org page or subpages thereof. >> if that means something like: >> >> www.numpy.scipy.org (or www.scipy.org/numpy ) >> Then I'm all for it. >> > I just made www.scipy.org/numpy redirect to the already-existing > www.scipy.org/NumPy > > So, hopefully you're on-board now. 
BTW, this is the reason why we have a > wiki -- if you don't like something it says, how the site is organized, > or whatever, please just jump in and edit it. Thanks for that, but I wasn't taking issue with capitalization. Now that you've done that, though, the easier it is to find, the better. As I understood it, Joe's suggestion about "all other pages" referred to pages that are NOT hosted at scipy.org. Those I can't change. My comment referred to an earlier suggestion that other pages about Numpy be referred to www.scipy.org, and I was simply suggesting that any non-scipy page that refers to numpy should refer to a page specifically about numpy, like www.scipy.org/NumPy, rather than the main scipy page. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at cox.net Fri Jun 9 18:49:12 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 15:49:12 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D49E.3090401@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> Message-ID: <4489FAE8.7060605@cox.net> Which of the following should we require for an object to be "supporting the array interface"? Here a producer is something that supplies array_struct or array_interface (where the latter is the Python level version of the former as per recent messages). Consumers do something with the results. 1. Producers can supply either array_struct (if implemented in C) or array_interface (if implemented in Python). Consumers must accept both. 2. Producers must supply both array_struct and array_interface. Consumers may accept either. 3.
Producers must supply both array_struct and array_interface. Consumers must accept both as well. A possibly related point: array_interface['data'] should be required to be a buffer object; a 2-tuple of address/read-only should not be allowed, as that's a simple way to crash the interpreter. I see some reasonable arguments for either 1 or 2. 3 seems like excess work. -tim From strawman at astraw.com Fri Jun 9 19:03:32 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 16:03:32 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489FAE8.7060605@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> Message-ID: <4489FE44.4090804@astraw.com> Tim Hochberg wrote: >Which of the following should we require for an object to be "supporting >the array interface"? Here a producer is something that supplies >array_struct or array_interface (where the latter is the Python level >version of the former as per recent messages). Consumers do something >with the results. > > 1. Producers can supply either array_struct (if implemented in C) or > array_interface (if implemented in Python). Consumers must accept > both. > 2. Producers must supply both array_struct and array_interface. > Consumers may accept either. > 3. Producers most supply both array_struct and array_interface. > Consumers must accept both as well. > > I haven't been following as closely as I could, but is the following a possibility? 4. Producers can supply either array_struct or array_interface. Consumers may accept either.
The intermediate is a small, standalone (does not depend on NumPy) extension module that does automatic translation if necessary by providing 2 functions: as_array_struct() (which returns a CObject) and as_array_interface() (which returns a tuple/dict/whatever). From cookedm at physics.mcmaster.ca Fri Jun 9 19:30:57 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 19:30:57 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489FE44.4090804@astraw.com> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> Message-ID: <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> On Fri, 09 Jun 2006 16:03:32 -0700 Andrew Straw wrote: > Tim Hochberg wrote: > > >Which of the following should we require for an object to be "supporting > >the array interface"? Here a producer is something that supplies > >array_struct or array_interface (where the latter is the Python level > >version of the former as per recent messages). Consumers do something > >with the results. > > > > 1. Producers can supply either array_struct (if implemented in C) or > > array_interface (if implemented in Python). Consumers must accept > > both. > > 2. Producers must supply both array_struct and array_interface. > > Consumers may accept either. > > 3. Producers most supply both array_struct and array_interface. > > Consumers must accept both as well. > > > > > I haven't been following as closely as I could, but is the following a > possibility? > 4. Producers can supply either array_struct or array_interface. > Consumers may accept either.
The intermediate is a small, standalone > (does not depend on NumPy) extension module that does automatic > translation if necessary by provides 2 functions: as_array_struct() > (which returns a CObject) and as_array_interface() (which returns a > tuple/dict/whatever). For something to go in the Python standard library this is certainly possible. Heck, if it's in the standard library we can have one attribute which is a special ArrayInterface object, which can be queried from both Python and C efficiently. For something like numpy (where we don't require a special object: the "producer" and "consumers" in Tim's terminology could be Numeric and numarray, for instance), we don't want a 3rd-party dependency. There's one case that I mentioned in another email: 5. Producers must supply array_interface, and may supply array_struct. Consumers can use either. Requiring array_struct means that Python-only modules can't play along, so I think it should be optional (of course, if you're concerned about speed, you would provide it). Or maybe we should revisit the "no external dependencies" rule. Perhaps one module would make everything easier, with helper functions and consistent handling of special cases. Packages wouldn't need it if they don't interact: you could conditionally import it when __array_interface__ is requested, and fail if you don't have it. It would just be required if you want to do sharing. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Fri Jun 9 19:57:46 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 17:57:46 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: <448A0AFA.1090700@ee.byu.edu> Thanks for your response to the questionnaire.
>>3) Please, explain your reason(s) for not making the switch. (if >>you answered No to #2) >> >> > >Lack of time. Some of the changes from Numeric are subtle and require >a careful analysis of the code, and then careful testing. For big >applications, that's a lot of work. There are also modules (I am >thinking of RNG) that have been replaced by something completely >different that needs to be evaluated first. > > You may be interested to note that I just added the RNG interface to numpy for backwards compatibility. It can be accessed and used by replacing import RNG with import numpy.random.oldrng as RNG Best regards, -Travis From stephenemslie at gmail.com Fri Jun 9 21:34:36 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Sat, 10 Jun 2006 02:34:36 +0100 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel Message-ID: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> I'm just starting with numpy (via scipy) and I'm wanting to perform adaptive thresholding (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. Basically that means that I need to get a threshold for each pixel by examining the pixels around it. In numpy this translates to finding the adjacent cells for each cell (not including the value of the cell we are examining) and getting the mean, or median of those cells. I've written something that works, but is terribly slow. How would someone with more experience get the adjacent cells for each cell minus the cell being examined?
Thanks Stephen Emslie From robert.kern at gmail.com Fri Jun 9 22:12:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Jun 2006 21:12:02 -0500 Subject: [Numpy-discussion] Building shared libraries with numpy.distutils In-Reply-To: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za> References: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Hello all > > For my Summer of Code project, I'm adding Support Vector Machine code to > SciPy. Underneath, I'm currently using libsvm. Thus far, I've been compiling > libsvm as a shared library (DLL on Windows) using SCons and doing the > wrapping with ctypes. > > Now, I would like to integrate my code into the SciPy build. Unfortunately, > it doesn't seem as if numpy.distutils or distutils proper knows about > building shared libraries. > > Building shared libraries across multiple platforms is tricky to say the > least so I don't know if implementing this functionality again is something > worth doing. The alternative -- never using shared libraries, doesn't seem > very appealing either. > > Is anybody building shared libraries? Any code or comments? Ed Schofield worked out a way: http://www.scipy.net/pipermail/scipy-dev/2006-April/005708.html You'll have some experimenting to do, but the basics are there. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tim.hochberg at cox.net Fri Jun 9 23:58:50 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 20:58:50 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> Message-ID: <448A437A.1030903@cox.net> David M. Cooke wrote: >On Fri, 09 Jun 2006 16:03:32 -0700 >Andrew Straw wrote: > > > >>Tim Hochberg wrote: >> >> >> >>>Which of the following should we require for an object to be "supporting >>>the array interface"? Here a producer is something that supplies >>>array_struct or array_interface (where the latter is the Python level >>>version of the former as per recent messages). Consumers do something >>>with the results. >>> >>> 1. Producers can supply either array_struct (if implemented in C) or >>> array_interface (if implemented in Python). Consumers must accept >>> both. >>> 2. Producers must supply both array_struct and array_interface. >>> Consumers may accept either. >>> 3. Producers most supply both array_struct and array_interface. >>> Consumers must accept both as well. >>> >>> >>> >>> >>I haven't been following as closely as I could, but is the following a >>possibility? >> 4. Producers can supply either array_struct or array_interface. >>Consumers may accept either. The intermediate is a small, standalone >>(does not depend on NumPy) extension module that does automatic >>translation if necessary by provides 2 functions: as_array_struct() >>(which returns a CObject) and as_array_interface() (which returns a >>tuple/dict/whatever). >> >> > >For something to go in the Python standard library this is certainly >possible. 
Heck, if it's in the standard library we can have one attribute >which is a special ArrayInterface object, which can be queried from both >Python and C efficiently. > >For something like numpy (where we don't require a special object: the >"producer" and "consumers" in Tim's terminology could be Numeric and >numarray, for instance), we don't want a 3rd-party dependence. There's one >case that I mentioned in another email: > >5. Producers must supply array_interface, and may supply array_struct. >Consumers can use either. > >Requiring array_struct means that Python-only modules can't play along, so I >think it should be optional (of course, if you're concerned about speed, you >would provide it). > >Or maybe we should revisit the "no external dependencies". Perhaps one module >would make everything easier, with helper functions and consistent handling >of special cases. Packages wouldn't need it if they don't interact: you could >conditionally import it when __array_interface__ is requested, and fail if >you don't have it. It would just be required if you want to do sharing. > > Here's another idea: move array_struct *into* array_interface. That is, array_interface becomes a dictionary with the following items:

shape : sequence specifying the shape
typestr : the typestring
descr : you get the idea
strides : ...
shape : ...
mask : ...
offset : ...
data : a buffer object
struct : the array_struct or None.

The downside is that you have to do two lookups to get the array_struct, and that should be the fast path. A partial solution is to instead have array_interface be a super_tuple similar to the result of os.stat. This should be faster since tuple is quite fast to index if you know what index you want. An advantage of having one module that you need to import is that we could use something other than CObject, which would allow us to bulletproof the array interface at the python level.
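Tim's os.stat-style super tuple could be approximated at the Python level like this; the class name and field set are assumptions for illustration only:

```python
from collections import namedtuple

# A stat_result-like "super tuple": indexes as fast as a plain tuple,
# but the fields are also readable by name. Names are illustrative.
ArrayInterface = namedtuple(
    'ArrayInterface',
    ['shape', 'typestr', 'descr', 'strides', 'data', 'struct'])

ai = ArrayInterface(shape=(2, 3), typestr='<f8', descr=[('', '<f8')],
                    strides=None, data=None, struct=None)
```

Positional access like ai[0] keeps the fast path fast, while ai.typestr stays readable for casual consumers.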
One nit with using a CObject is that I can pass an object that doesn't refer to a PyArrayInterface with unpleasant results. -tim From filip at ftv.pl Sat Jun 10 04:13:42 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Sat, 10 Jun 2006 10:13:42 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID: <44144430.20060610101342@gmail.com> Hi, > I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. > Basically that means that I need to get a threshold for each pixel by > examining the pixels around it. In numpy this translates to finding > the adjacent cells for each cell (not including the value of the cell > we are examining) and getting the mean, or median of those cells. > I've written something that works, but is terribly slow. How would > someone with more experience get the adjacent cells for each cell > minus the cell being examined? You can get the mean value of surrounding cells by filtering.

import numpy
from scipy import signal

im = numpy.ones((10,10), dtype='d') * range(10)
fi = numpy.ones((3,3), dtype='d') / 8
fi[1,1] = 0
print fi
#[[ 0.125 0.125 0.125]
# [ 0.125 0. 0.125]
# [ 0.125 0.125 0.125]]
signal.convolve2d(im, fi, mode='same', boundary='symm')  # or correlate2d in this case

Also check help(signal.convolve2d) for information on various parameters this function takes.
cheers, fw From a.u.r.e.l.i.a.n at gmx.net Sat Jun 10 04:19:43 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Sat, 10 Jun 2006 10:19:43 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID: <448A809F.7080009@gmx.net> Hi, > I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. > Basically that means that I need to get a threshold for each pixel by > examining the pixels around it. In numpy this translates to finding > the adjacent cells for each cell (not including the value of the cell > we are examining) and getting the mean, or median of those cells. > > I've written something that works, but is terribly slow. How would > someone with more experience get the adjacent cells for each cell > minus the cell being examined? Regarding the mean value, you can take a look at scipy.signal.convolve2d. If you convolve with an array like this:

[[0.125 0.125 0.125]
 [0.125 0.0   0.125]
 [0.125 0.125 0.125]]

you get the 3x3 mean value (btw why leave out the center pixel?). For the median, I can not think of any good method right now. Also another method springs to my mind (just subtract the top row and add a new bottom row to the averaging window), but I have no idea how to do this in an efficient way. Generally, always try to find a way to process the whole array as one. If you perform anything on an array elementwise, it will be dead slow.
Best regards, Johannes From aisaac at american.edu Sat Jun 10 09:48:11 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 10 Jun 2006 09:48:11 -0400 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID: On Sat, 10 Jun 2006, stephen emslie apparently wrote: > I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. The ability to define a function on a neighborhood, where the neighborhood is defined by relative coordinates, is useful in other places too. (E.g., agent based modeling. Here the output should be a new array of the same dimension with each element replaced by the value of the function on the neighborhood.) I am also interested in learning how people handle this.
Cheers, Alan Isaac From alex.liberzon at gmail.com Sat Jun 10 13:19:15 2006 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Sat, 10 Jun 2006 19:19:15 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel Message-ID: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com> Not sure, but my Google desktop search of "medfilt" (the name of the Matlab function) brought me to: info_signal.py - N-dimensional order filter. medfilt - N-dimensional median filter If it's true, then it is the 2D median filter. Regarding the neighbouring cells, I found the iterator on 2D ranges on the O'Reilly Cookbook by Simon Wittber very useful for my PyPIV (Particle Image Velocimetry, which works by correlation of 2D blocks of two successive images): http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971

def blocks(size, box=(1,1)):
    """
    Iterate over a 2D range in 2D increments.
    Returns a 4 element tuple of top left and bottom right coordinates.
    """
    box = list(box)
    pos = [0,0]
    yield tuple(pos + box)
    while True:
        if pos[0] >= size[0]-box[0]:
            pos[0] = 0
            pos[1] += box[1]
            if pos[1] >= size[1]:
                raise StopIteration
        else:
            pos[0] += box[0]
        topleft = pos
        bottomright = [min(x[1]+x[0],x[2]) for x in zip(pos,box,size)]
        yield tuple(topleft + bottomright)

if __name__ == "__main__":
    for c in blocks((100,100),(99,10)):
        print c
    for c in blocks((10,10)):
        print c

HIH, Alex From stephenemslie at gmail.com Sat Jun 10 15:33:25 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Sat, 10 Jun 2006 20:33:25 +0100 Subject: [Numpy-discussion] adaptive thresholding: get adjacent cells for each pixel In-Reply-To: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com> References: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com> Message-ID: <51f97e530606101233r6a1f2e6bo700240b4c99ea86b@mail.gmail.com> Thanks for all the help! Convolving looks like a great way to do this, and I think that mean will be just fine for my purposes.
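For reference, the centre-excluding neighbourhood mean suggested in this thread can also be written with plain NumPy slice shifts, which avoids the scipy dependency; this is a sketch covering interior pixels only:

```python
import numpy as np

def neighbor_mean(im):
    # Mean of the 8 surrounding cells for every interior pixel,
    # accumulated with shifted slices instead of a per-pixel loop.
    h, w = im.shape[0] - 2, im.shape[1] - 2
    acc = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:
                continue  # leave out the centre cell
            acc += im[dy:dy + h, dx:dx + w]
    return acc / 8.0

im = np.arange(25, dtype=float).reshape(5, 5)
m = neighbor_mean(im)  # m[i, j] is the neighbour mean around im[i+1, j+1]
```

The adaptive threshold itself is then just a comparison, e.g. im[1:-1, 1:-1] > m.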
That iterator also looks fantastic and is actually the sort of thing that I was looking for at first. I haven't tried it yet though. Any idea how fast it would be? Stephen On 6/10/06, Alex Liberzon wrote:
>
> Not sure, but my Google desktop search of "medfilt" (the name of
> Matlab function) brought me to:
>
> info_signal.py - N-dimensional order filter. medfilt -N-dimensional
> median filter
>
> If it's true, then it is the 2D median filter.
>
> Regarding the neighbouring cells, I found the iterator on 2D ranges on
> the O'Reilly Cookbook by Simon Wittber very useful for my PyPIV
> (Particle Image Velocimetry, which works by correlation of 2D blocks
> of two successive images):
>
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971
>
> def blocks(size, box=(1,1)):
>     """
>     Iterate over a 2D range in 2D increments.
>     Returns a 4 element tuple of top left and bottom right coordinates.
>     """
>     box = list(box)
>     pos = [0,0]
>     yield tuple(pos + box)
>     while True:
>         if pos[0] >= size[0]-box[0]:
>             pos[0] = 0
>             pos[1] += box[1]
>             if pos[1] >= size[1]:
>                 raise StopIteration
>         else:
>             pos[0] += box[0]
>         topleft = pos
>         bottomright = [min(x[1]+x[0],x[2]) for x in zip(pos,box,size)]
>         yield tuple(topleft + bottomright)
>
> if __name__ == "__main__":
>     for c in blocks((100,100),(99,10)):
>         print c
>     for c in blocks((10,10)):
>         print c
>
> HIH,
> Alex
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.hochberg at cox.net Sat Jun 10 16:18:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 10 Jun 2006 13:18:05 -0700 Subject: [Numpy-discussion] fromiter Message-ID: <448B28FD.7040309@cox.net> I finally got around to cleaning up and checking in fromiter. As Travis suggested, this version does not require that you specify count.
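Both calling styles can be exercised like this (a small sketch; the timing figures quoted in the thread are Tim's, not reproduced here):

```python
import numpy as np

# With count, fromiter allocates the whole result up front.
a = np.fromiter((i * i for i in range(10)), dtype=float, count=10)

# Without count, the array is grown as the iterator is consumed,
# then trimmed back to its final size.
b = np.fromiter((i * i for i in range(10)), dtype=float)
```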
From the docstring:

    fromiter(...)
    fromiter(iterable, dtype, count=-1) returns a new 1d array
    initialized from iterable. If count is nonnegative, the new array
    will have count elements, otherwise its size is determined by the
    generator.

If count is specified, it allocates the full array ahead of time. If it is not, it periodically reallocates space for the array, allocating 50% extra space each time and reallocating back to the final size at the end (to give realloc a chance to reclaim any extra space). Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as "array(list(iterable),dtype=dtype)". Omitting count slows things down by about 15%; still much faster than using "array(list(...))". It also is going to chew up more memory than if you include count, at least temporarily, but should still typically use much less than the "array(list(...))" approach. -tim From strawman at astraw.com Sat Jun 10 17:23:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 10 Jun 2006 14:23:16 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <448A437A.1030903@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> <448A437A.1030903@cox.net> Message-ID: <448B3844.3060101@astraw.com> OK, here's another (semi-crazy) idea: __array_struct__ is the interface. ctypes lets us use it in "pure" Python. We provide a "reference implementation" so that newbies don't get segfaults. From cookedm at physics.mcmaster.ca Sat Jun 10 17:42:03 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Sat, 10 Jun 2006 17:42:03 -0400 Subject: [Numpy-discussion] fromiter In-Reply-To: <448B28FD.7040309@cox.net> References: <448B28FD.7040309@cox.net> Message-ID: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote: > > I finally got around to cleaning up and checking in fromiter. As Travis > suggested, this version does not require that you specify count. From > the docstring: > > fromiter(...) > fromiter(iterable, dtype, count=-1) returns a new 1d array > initialized from iterable. If count is nonegative, the new array > will have count elements, otherwise it's size is determined by the > generator. > > If count is specified, it allocates the full array ahead of time. If it > is not, it periodically reallocates space for the array, allocating 50% > extra space each time and reallocating back to the final size at the end > (to give realloc a chance to reclaim any extra space). > > Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as > "array(list(iterable),dtype=dtype)". Omitting count slows things down by > about 15%; still much faster than using "array(list(...))". It also is > going to chew up more memory than if you include count, at least > temporarily, but still should typically use much less than the > "array(list(...))" approach. Can this be integrated into array() so that array(iterable, dtype=dtype) does the expected thing? Can you try to find the length of the iterable, with PySequence_Size() on the original object? This gets a bit iffy, as that might not be correct (but it could be used as a hint). What about iterables that return, say, tuples? Maybe add a shape argument, so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements from iterable that can be turned into arrays of shape (3,)? That could replace count, too. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Sat Jun 10 18:05:18 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Jun 2006 17:05:18 -0500 Subject: [Numpy-discussion] fromiter In-Reply-To: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> References: <448B28FD.7040309@cox.net> <20060610214203.GA24355@arbutus.physics.mcmaster.ca> Message-ID: David M. Cooke wrote: > Can this be integrated into array() so that array(iterable, dtype=dtype) > does the expected thing? That was rejected early on because array() is so incredibly overloaded as it is. http://article.gmane.org/gmane.comp.python.numeric.general/5756 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josh8912 at yahoo.com Sat Jun 10 18:15:07 2006 From: josh8912 at yahoo.com (JJ) Date: Sat, 10 Jun 2006 15:15:07 -0700 (PDT) Subject: [Numpy-discussion] speed of numpy vs matlab on dot product Message-ID: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Hello. I am a new user to scipy, thinking about crossing over from Matlab. I have a new AMD 64 machine and just installed Fedora 5 and scipy. It is a dual-boot machine with Windows XP. I did a small test to compare the speed of Matlab (in 32-bit Windows, Matlab student v14) to the speed of scipy (in Fedora, 64-bit). I generated two random matrices of 10,000 by 2,000 elements and then took their dot product. The scipy code was:

python
import numpy
import scipy
a = scipy.random.normal(0,1,[10000,2000])
b = scipy.random.normal(0,1,[10000,2000])
c = scipy.dot(a,scipy.transpose(b))

I timed the last line of the code and compared it to the equivalent code in Matlab. The results were that Matlab took 3.3 minutes and scipy took 11.5 minutes. That's about a factor of three.
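A scaled-down version of the benchmark; it also times dot against a contiguous copy of the transposed operand, since with an unoptimized fallback BLAS the large-stride argument can dominate the cost (with a properly linked ATLAS the two timings should be close):

```python
import time
import numpy as np

# Much smaller than the original 10,000 x 2,000 so it finishes quickly.
a = np.random.normal(0, 1, (500, 200))
b = np.random.normal(0, 1, (500, 200))

t0 = time.time()
c1 = np.dot(a, b.T)                        # strided view of the transpose
t1 = time.time()
c2 = np.dot(a, np.ascontiguousarray(b.T))  # fresh contiguous copy
t2 = time.time()

print("view: %.4fs  copy: %.4fs" % (t1 - t0, t2 - t1))
```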
I am surprised by the difference and am wondering if there is anything I can do to speed up scipy. I installed scipy, blas, atlas, numpy and lapack from source, just as the instructions on the scipy web site suggested (or as close to the instructions as I could). The only thing odd was that when installing numpy, I received messages that the atlas libraries could not be found. However, it did locate the lapack libraries. I don't know why it could not find the atlas libraries, as I told it exactly where to find them. It did not give the message that it was using the slower default libraries. I also tried compiling after an export ATLAS = statement, but that did not make a difference. Wherever I could, I compiled it specifically for the 64-bit machine. I used the current gcc compiler. The ATLAS notes suggested that the speed problems with the 2.9+ compilers had been fixed. Any ideas on where to look for a speedup? If the problem is that it could not locate the atlas libraries, how might I ensure that numpy finds them? I can recompile and send along the results if it would help. Thanks. John PS. I first sent this to the scipy mailing list, but it didn't seem to make it there. __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From tim.hochberg at cox.net Sat Jun 10 18:28:55 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 10 Jun 2006 15:28:55 -0700 Subject: [Numpy-discussion] fromiter In-Reply-To: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> References: <448B28FD.7040309@cox.net> <20060610214203.GA24355@arbutus.physics.mcmaster.ca> Message-ID: <448B47A7.30308@cox.net> David M. Cooke wrote:
>On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote:
>
>>I finally got around to cleaning up and checking in fromiter. As Travis
>>suggested, this version does not require that you specify count. From
>>the docstring:
>>
>> fromiter(...)
>> fromiter(iterable, dtype, count=-1) returns a new 1d array
>> initialized from iterable. If count is nonegative, the new array
>> will have count elements, otherwise it's size is determined by the
>> generator.
>>
>>If count is specified, it allocates the full array ahead of time. If it
>>is not, it periodically reallocates space for the array, allocating 50%
>>extra space each time and reallocating back to the final size at the end
>>(to give realloc a chance to reclaim any extra space).
>>
>>Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as
>>"array(list(iterable),dtype=dtype)". Omitting count slows things down by
>>about 15%; still much faster than using "array(list(...))". It also is
>>going to chew up more memory than if you include count, at least
>>temporarily, but still should typically use much less than the
>>"array(list(...))" approach.
>>
>
>Can this be integrated into array() so that array(iterable, dtype=dtype)
>does the expected thing?

It gets a little sticky since the expected thing is probably that array([iterable, iterable, iterable], dtype=dtype) work and produce an array of shape [3, N]. That looks like it would be hard to do efficiently.

>Can you try to find the length of the iterable, with PySequence_Size() on
>the original object? This gets a bit iffy, as that might not be correct
>(but it could be used as a hint).

The way the code is set up, a hint could be made use of with little additional complexity. Allegedly, some objects in 2.5 will grow __length_hint__, which could be made use of as well. I'm not very motivated to mess with this at the moment though, as the benefit is relatively small.

>What about iterables that return, say, tuples? Maybe add a shape argument,
>so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements
>from iterable that can be turned into arrays of shape (3,)? That could
>replace count, too.
>
>

I expect that this would double (or more) the complexity of the current code (which is nice and simple at present). I'm inclined to leave it as it is and advocate solutions of this type:

>>> import numpy
>>> tupleiter = ((x, x+1, x+2) for x in range(10)) # Just for example
>>> def flatten(x):
...     for y in x:
...         for z in y:
...             yield z
>>> numpy.fromiter(flatten(tupleiter), int).reshape(-1, 3)
array([[ 0,  1,  2],
       [ 1,  2,  3],
       [ 2,  3,  4],
       [ 3,  4,  5],
       [ 4,  5,  6],
       [ 5,  6,  7],
       [ 6,  7,  8],
       [ 7,  8,  9],
       [ 8,  9, 10],
       [ 9, 10, 11]])

[As a side note, I'm quite surprised that there isn't a way to flatten stuff already in itertools, but if there is, I can't find it]. -tim From robert.kern at gmail.com Sat Jun 10 18:31:49 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Jun 2006 17:31:49 -0500 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: JJ wrote: > Any ideas on where to look for a speedup? If the > problem is that it could not locate the atlas > ibraries, how might I assure that numpy finds the > atlas libraries. I can recompile and send along the > results if it would help. Run ldd(1) on the file lapack_lite.so . It should show you what dynamic libraries it is linked against. > PS. I first sent this to the scipy mailing list, but > it didnt seem to make it there. That's okay. This is actually the right place. All of the functions you used are numpy functions, not scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From charlesr.harris at gmail.com Sun Jun 11 00:47:28 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 10 Jun 2006 22:47:28 -0600 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: Hmm, I just tried this and it took so long on my machine (Athlon64, fc5_x86_64), that I ctrl-c'd out of it. Running ldd on lapack_lite.so shows libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaaace2000) libc.so.6 => /lib64/libc.so.6 (0x00002aaaaadfa000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) So apparently the Atlas library present in /usr/lib64/atlas was not linked in. I built numpy from the svn repository two days ago. I expect JJ's version is linked with atlas 'cause mine sure didn't run in 11 seconds. Chuck On 6/10/06, Robert Kern wrote: > > JJ wrote: > > Any ideas on where to look for a speedup? If the > > problem is that it could not locate the atlas > > ibraries, how might I assure that numpy finds the > > atlas libraries. I can recompile and send along the > > results if it would help. > > Run ldd(1) on the file lapack_lite.so . It should show you what dynamic > libraries it is linked against. > > > PS. I first sent this to the scipy mailing list, but > > it didnt seem to make it there. > > That's okay. This is actually the right place. All of the functions you > used are > numpy functions, not scipy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rob at hooft.net Sun Jun 11 04:31:26 2006 From: rob at hooft.net (Rob Hooft) Date: Sun, 11 Jun 2006 10:31:26 +0200 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: <448BD4DE.4020002@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 JJ wrote:
> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))
Hi, My experience with the old Numeric tells me that the first thing I would try to speed this up is to copy the transposed b into a fresh array. It might be that the memory access in dot is very inefficient due to the transposed (and hence large-stride) array. Of course I may be completely wrong. Rob - -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEi9TdH7J/Cv8rb3QRAgXYAJ9EcJtfUeX3H0ZWf22AapOvC3dgTwCgtF5r QW6si4kqTjCvifCfTc/ShC0= =uuUY -----END PGP SIGNATURE----- From pjssilva at ime.usp.br Sun Jun 11 19:03:18 2006 From: pjssilva at ime.usp.br (Paulo Jose da Silva e Silva) Date: Sun, 11 Jun 2006 20:03:18 -0300 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: <1150066998.31143.5.camel@localhost.localdomain> On Sat, 2006-06-10 at 15:15 -0700, JJ wrote:
> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))
Interestingly enough, I may have found "the reason". I am using only numpy (as I don't have scipy compiled and it is not necessary for the code above).
The problem is probably memory consumption. Let me explain. After creating a, ipython reports 160 MB of memory usage. After creating b, 330 MB. But when I run the last line, the memory footprint jumps to 1.2 GB! That is almost four times the original memory consumption. On my computer the result is swapping, and the calculation would take forever. Why is the memory usage getting so high? Paulo Obs: As a side note, if you decrease the matrix sizes (to 2000x2000, for example), numpy and matlab spend basically the same time. If the transpose imposes some penalty for numpy, it imposes the same penalty for matlab (version 6.5, R13). From nwagner at iam.uni-stuttgart.de Mon Jun 12 03:02:54 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 12 Jun 2006 09:02:54 +0200 Subject: [Numpy-discussion] ImportError: cannot import name inverse_fft Message-ID: <448D119E.9090709@iam.uni-stuttgart.de>

matplotlib data path /usr/lib64/python2.4/site-packages/matplotlib/mpl-data
$HOME=/home/nwagner
loaded rc file /home/nwagner/matplotlibrc
matplotlib version 0.87.3
verbose.level helpful
interactive is False
platform is linux2
numerix numpy 0.9.9.2603
Traceback (most recent call last):
  File "cascade.py", line 3, in ?
    from pylab import plot, show, xlim, ylim, subplot, xlabel, ylabel, title, legend,savefig,clf,scatter
  File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ?
    from matplotlib.pylab import *
  File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line 198, in ?
    import mlab #so I can override hist, psd, etc...
  File "/usr/lib64/python2.4/site-packages/matplotlib/mlab.py", line 74, in ?
    from numerix.fft import fft, inverse_fft
ImportError: cannot import name inverse_fft

From olivetti at itc.it Mon Jun 12 04:07:02 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Mon, 12 Jun 2006 10:07:02 +0200 Subject: [Numpy-discussion] [OT] scipy-user not working?
Message-ID: <448D20A6.4040303@itc.it> Hi, I've tried to send a message twice to scipy-user since friday without success (messages don't come back to me but I don't receive any message from scipy-user too and they don't appear in archives). Note that since friday there are no new messages from that list. Is scipy-user working? TIA Emanuele From bblais at bryant.edu Mon Jun 12 08:56:51 2006 From: bblais at bryant.edu (Brian Blais) Date: Mon, 12 Jun 2006 08:56:51 -0400 Subject: [Numpy-discussion] scipy.io.loadmat can't handle structs from octave Message-ID: <448D6493.8050909@bryant.edu> Hello, I am trying to load some .mat files in python, that were saved with octave. I get some weird things with strings, and structs fail altogether. Am I doing something wrong? Python 2.4, Scipy '0.4.9.1906', numpy 0.9.8, octave 2.1.71, running Linux. thanks, Brian Blais here is what I tried: Numbers are ok: ========OCTAVE========== >> a=rand(4) a = 0.617860 0.884195 0.032998 0.217922 0.207970 0.753992 0.333966 0.905661 0.048432 0.290895 0.353919 0.958442 0.697213 0.616851 0.426595 0.371364 >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [13]:d=io.loadmat('pythonfile.mat') In [14]:d Out[14]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:23:54 UTC', '__version__': '1.0', 'a': array([[ 0.61785957, 0.88419484, 0.03299807, 0.21792207], [ 0.20796989, 0.75399171, 0.33396634, 0.90566095], [ 0.04843219, 0.29089527, 0.35391921, 0.95844178], [ 0.69721313, 0.61685075, 0.42659485, 0.37136358]])} Strings are weird (turns to all 1's) ========OCTAVE========== >> a='hello' a = hello >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [15]:d=io.loadmat('pythonfile.mat') In [16]:d Out[16]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:13 UTC', '__version__': '1.0', 'a': '11111'} Cell arrays are fine (except for strings): ========OCTAVE========== >> a={5 [1,2,3] 'this'} a = { [1,1] = 5 [1,2] = 1 2 3 [1,3] 
= this } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [17]:d=io.loadmat('pythonfile.mat') In [18]:d Out[18]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:51 UTC', '__version__': '1.0', 'a': array([5.0, [ 1. 2. 3.], 1111], dtype=object)} Structs crash: ========OCTAVE========== >> clear a >> a.hello=5 a = { hello = 5 } >> a.this=[1,2,3] a = { hello = 5 this = 1 2 3 } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [19]:d=io.loadmat('pythonfile.mat') --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/bblais/octave/work/mouse/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 751 if not (0 in test_vals): # MATLAB version 5 format 752 fid.rewind() --> 753 thisdict = _loadv5(fid,basename) 754 if dict is not None: 755 dict.update(thisdict) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename) 688 try: 689 var = var + 1 --> 690 el, varname = _get_element(fid) 691 if varname is None: 692 varname = '%s_%04d' % (basename,var) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid) 676 677 # handle miMatrix type --> 678 el, name = _parse_mimatrix(fid,numbytes) 679 return el, name 680 /usr/lib/python2.4/site-packages/scipy/io/mio.py in _parse_mimatrix(fid, bytes) 597 result[i].__dict__[element] = val 598 result = squeeze(transpose(reshape(result,tupdims))) --> 599 if rank(result)==0: result = result.item() 600 601 # object is like a structure with but with a class name AttributeError: mat_struct instance has no attribute 'item' -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais From a.u.r.e.l.i.a.n at gmx.net Mon Jun 12 10:03:06 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon, 12 Jun 2006 16:03:06 +0200 Subject: [Numpy-discussion] [OT] scipy-user not working? 
In-Reply-To: <448D20A6.4040303@itc.it> References: <448D20A6.4040303@itc.it> Message-ID: <200606121603.06328.a.u.r.e.l.i.a.n@gmx.net> > I've tried to send a message twice to scipy-user since friday without > success (messages don't come back to me but I don't receive any message > from scipy-user too and they don't appear in archives). > Note that since friday there are no new messages from that list. > > Is scipy-user working? Hm, scipy-dev seems to be offline as well. Johannes From hetland at tamu.edu Thu Jun 8 16:42:04 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu, 8 Jun 2006 15:42:04 -0500 Subject: [Numpy-discussion] eig hangs In-Reply-To: <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> Message-ID: <5764DB7F-1C87-4798-88E6-55F0CC612D01@tamu.edu> On Jun 8, 2006, at 3:23 PM, David M. Cooke wrote: > > Lapack_lite probably doesn't get much testing from the developers, because we > probably all have optimized versions of blas and lapack. This is precisely my suspicion... I tried a variety of random, square matrices (like rand(10, 10), rand(100, 100), etc.), and none work. And it just hangs forever, so there is really no output to debug. It is the most recent svn version of numpy (which BTW, works on my Mac, with AltiVec there..) -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL:
From oliphant at ee.byu.edu Mon Jun 12 16:17:48 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 12 Jun 2006 14:17:48 -0600 Subject: [Numpy-discussion] eig hangs In-Reply-To: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> Message-ID: <448DCBEC.7010407@ee.byu.edu> Robert Hetland wrote: >I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that >lapack_lite would take over. For the moment, I am not concerned >about speed -- I just want something that will work with small >matricies. I installed numpy, and it passes all of the tests OK, but >it hangs when doing eig: > >u, v = linalg.eig(rand(10,10)) ># ....lots of nothing.... > >Do you *need* the linear algebra libraries for eig? BTW, inverse >seems to work fine. > >-Rob > > > From ticket #5 >Greg Landrum pointed out that it may be a gcc 4.0 related >problem and proposed a workaround -- to add the option '-ffloat-store' to CFLAGS. Works for me ! > > > Are you using gcc 4.0? -Travis From haase at msg.ucsf.edu Mon Jun 12 17:32:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 12 Jun 2006 14:32:12 -0700 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) Message-ID: <200606121432.12896.haase@msg.ucsf.edu> Hi, I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to do a non linear minimization. It uses the "old" Numeric module. But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down to >>> Numeric.exp(-760.) Traceback (most recent call last): File "", line 1, in ? OverflowError: math range error From numarray I'm used to getting this: >>> na.exp(-760) 0.0 Mostly I'm confused because my code worked before I upgraded to version 24.2. Thanks for any hints on how I could revive my code...
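The difference can be demonstrated side by side with numpy and Python's math module (a sketch using numpy's configurable error handling; old Numeric itself raises OverflowError in both directions, as in the traceback above, and is omitted here):

```python
import math
import numpy as np

# numpy treats underflow as a quiet 0.0 and, by default, maps
# overflow to inf instead of raising.
with np.errstate(over='ignore', under='ignore'):
    tiny = np.exp(-760.0)   # underflow
    huge = np.exp(760.0)    # overflow

# The math module also underflows to 0.0, but raises on overflow.
small = math.exp(-760.0)
try:
    math.exp(760.0)
    overflowed = False
except OverflowError:
    overflowed = True
```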
-Sebastian Haase From ndarray at mac.com Mon Jun 12 18:15:15 2006 From: ndarray at mac.com (Sasha) Date: Mon, 12 Jun 2006 18:15:15 -0400 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: <200606121432.12896.haase@msg.ucsf.edu> References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: I don't know about numarray, but the difference between Numeric and the Python math module stems from the fact that the math module ignores errno set by the C library and only checks for infinity. Numeric relies on errno exclusively; numpy ignores errors by default:

>>> import numpy,math,Numeric
>>> numpy.exp(-760)
0.0
>>> math.exp(-760)
0.0
>>> Numeric.exp(-760)
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: math range error
>>> numpy.exp(760)
inf
>>> math.exp(760)
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: math range error
>>> Numeric.exp(760)
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: math range error

I would say it's a bug in Numeric, so you are out of luck. Unfortunately, even MA.exp(-760) does not work, but this is easy to fix:

>>> exp = MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-100,100))
>>> exp(-760).filled()
0

You would need to replace -100,100 with the bounds appropriate for your system. On 6/12/06, Sebastian Haase wrote: > Hi, > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to > do a non linear minimization. It uses the "old" Numeric module. > But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down > to > >>> Numeric.exp(-760.) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > > >From numarray I'm used to getting this: > >>> na.exp(-760) > 0.0 > > Mostly I'm confused because my code worked before I upgraded to version 24.2. > > Thanks for any hints on how I could revive my code...
> -Sebastian Haase > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Mon Jun 12 18:19:19 2006 From: ndarray at mac.com (Sasha) Date: Mon, 12 Jun 2006 18:19:19 -0400 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: BTW, here is the relevant explanation from mathmodule.c: /* ANSI C generally requires libm functions to set ERANGE * on overflow, but also generally *allows* them to set * ERANGE on underflow too. There's no consistency about * the latter across platforms. * Alas, C99 never requires that errno be set. * Here we suppress the underflow errors (libm functions * should return a zero on underflow, and +- HUGE_VAL on * overflow, so testing the result for zero suffices to * distinguish the cases). */ On 6/12/06, Sasha wrote: > I don't know about numarray, but the difference between Numeric and > python math module stems from the fact that the math module ignores > errno set by C library and only checks for infinity. Numeric relies > on errno exclusively, numpy ignores errors by default: > > >>> import numpy,math,Numeric > >>> numpy.exp(-760) > 0.0 > >>> math.exp(-760) > 0.0 > >>> Numeric.exp(-760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > >>> numpy.exp(760) > inf > >>> math.exp(760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > >>> Numeric.exp(760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > > I would say it's a bug in Numeric, so you are out of luck. 
> > Unfortunalely, even MA.exp(-760) does not work, but this is easy to fix: > > >>> exp = MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-100,100)) > >>> exp(-760).filled() > 0 > > You would need to replace -100,100 with the bounds appropriate for your system. > > > > > On 6/12/06, Sebastian Haase wrote: > > Hi, > > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to > > do a non linear minimization. It uses the "old" Numeric module. > > But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down > > to > > >>> Numeric.exp(-760.) > > Traceback (most recent call last): > > File "", line 1, in ? > > OverflowError: math range error > > > > >From numarray I'm used to getting this: > > >>> na.exp(-760) > > 0.0 > > > > Mostly I'm confused because my code worked before I upgraded to version 24.2. > > > > Thanks for any hints on how I could revive my code... > > -Sebastian Haase > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From elcorto at gmx.net Mon Jun 12 18:19:54 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 00:19:54 +0200 Subject: [Numpy-discussion] svn build fails Message-ID: <448DE88A.7010308@gmx.net> The latest svn build fails. ==================================================================================== elcorto at ramrod:~/install/python/scipy/svn$ make build cd numpy; python setup.py build Running from numpy source directory. 
non-existing path in 'numpy/distutils': 'site.cfg' No module named __svn_version__ F2PY Version 2_2607 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not find in /usr/local/lib libraries mkl,vml,guide not find in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not find in /usr/local/lib libraries ptf77blas,ptcblas,atlas not find in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not find in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not find in /usr/local/lib libraries f77blas,cblas,atlas not find in /usr/lib/atlas FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib -lf77blas -lcblas -latlas -o _configtest _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 84, in ? 
setup_package() File "setup.py", line 77, in setup_package configuration=configuration ) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 140, in setup config = configuration() File "setup.py", line 43, in configuration config.add_subpackage('numpy') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./numpy/setup.py", line 9, in configuration config.add_subpackage('core') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 207, in configuration blas_info = get_info('blas_opt',0) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 397, in get_info self.calc_info() File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info atlas_version = get_atlas_version(**version_info) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1085, in get_atlas_version library_dirs=config.get('library_dirs', []), File 
"/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", line 121, in get_output return exitcode, output UnboundLocalError: local variable 'exitcode' referenced before assignment ==================================================================================== I removed the old /build dir and even did a complete fresh checkout but it still fails to build. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Mon Jun 12 18:29:47 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:29:47 -0400 Subject: [Numpy-discussion] svn build fails In-Reply-To: <448DE88A.7010308@gmx.net> References: <448DE88A.7010308@gmx.net> Message-ID: <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 00:19:54 +0200 Steve Schmerler wrote: > The latest svn build fails. > > [snip] > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1224, in calc_info > atlas_version = get_atlas_version(**version_info) > File > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1085, in get_atlas_version > library_dirs=config.get('library_dirs', []), > File > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", > line 121, in get_output > return exitcode, output > UnboundLocalError: local variable 'exitcode' referenced before assignment > ==================================================================================== > > I removed the old /build dir and even did a complete fresh checkout but > it still fails to build. > > cheers, > steve > Sorry about that; I noticed and fixed it last night, but forgot to check it in. It should work now. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Mon Jun 12 18:33:44 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:33:44 -0400 Subject: [Numpy-discussion] ImportError: cannot import name inverse_fft In-Reply-To: <448D119E.9090709@iam.uni-stuttgart.de> References: <448D119E.9090709@iam.uni-stuttgart.de> Message-ID: <20060612183344.6d345a1f@arbutus.physics.mcmaster.ca> On Mon, 12 Jun 2006 09:02:54 +0200 Nils Wagner wrote: > matplotlib data path /usr/lib64/python2.4/site-packages/matplotlib/mpl-data > $HOME=/home/nwagner > loaded rc file /home/nwagner/matplotlibrc > matplotlib version 0.87.3 > verbose.level helpful > interactive is False > platform is linux2 > numerix numpy 0.9.9.2603 > Traceback (most recent call last): > File "cascade.py", line 3, in ? > from pylab import plot, show, xlim, ylim, subplot, xlabel, ylabel, > title, legend,savefig,clf,scatter > File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ? > from matplotlib.pylab import * > File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line > 198, in ? > import mlab #so I can override hist, psd, etc... > File "/usr/lib64/python2.4/site-packages/matplotlib/mlab.py", line 74, > in ? > from numerix.fft import fft, inverse_fft > ImportError: cannot import name inverse_fft It's a bug in matplotlib: it should use ifft for numpy. We cleaned up the namespace a while back to not have two names for things. (Admittedly, I'm not sure why we went with the short names instead of the self-descriptive long ones. It's in the archives somewhere.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From elcorto at gmx.net Mon Jun 12 18:42:06 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 00:42:06 +0200 Subject: [Numpy-discussion] svn build fails In-Reply-To: <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> Message-ID: <448DEDBE.4050100@gmx.net> David M. Cooke wrote: > > Sorry about that; I noticed and fixed it last night, but forgot to check it > in. It should work now. > Thanks for the fast answer. Now there's another one .... :) [...] /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib -lf77blas -lcblas -latlas -o _configtest _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 84, in ? 
setup_package() File "setup.py", line 77, in setup_package configuration=configuration ) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 140, in setup config = configuration() File "setup.py", line 43, in configuration config.add_subpackage('numpy') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./numpy/setup.py", line 9, in configuration config.add_subpackage('core') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 207, in configuration blas_info = get_info('blas_opt',0) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 397, in get_info self.calc_info() File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info atlas_version = get_atlas_version(**version_info) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1097, in get_atlas_version log.info('Command: %s',' '.join(cmd)) NameError: global name 'cmd' is not defined -- Random number generation is the art 
of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Mon Jun 12 18:56:43 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:56:43 -0400 Subject: [Numpy-discussion] svn build fails In-Reply-To: <448DEDBE.4050100@gmx.net> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> <448DEDBE.4050100@gmx.net> Message-ID: <20060612185643.215e4358@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 00:42:06 +0200 Steve Schmerler wrote: > David M. Cooke wrote: > > > > > Sorry about that; I noticed and fixed it last night, but forgot to check > > it in. It should work now. > > > [...] > Thanks for the fast answer. > Now there's another one .... :) > > > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1097, in get_atlas_version > log.info('Command: %s',' '.join(cmd)) > NameError: global name 'cmd' is not defined Hmm, I had that one too :-) [Then I went and did some cutting up of system_info, which is why I just haven't checked the fixes in]. Should work *now* :D -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From hetland at tamu.edu Mon Jun 12 19:03:36 2006 From: hetland at tamu.edu (Robert Hetland) Date: Mon, 12 Jun 2006 18:03:36 -0500 Subject: [Numpy-discussion] eig hangs In-Reply-To: <448DCBEC.7010407@ee.byu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> <448DCBEC.7010407@ee.byu.edu> Message-ID: <8EB299FE-4A0C-4C97-9E8C-721EA2776A32@tamu.edu> On Jun 12, 2006, at 3:17 PM, Travis Oliphant wrote: > Robert Hetland wrote: > >> I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that >> lapack_lite would take over. For the moment, I am not concerned >> about speed -- I just want something that will work with small >> matrices.
I installed numpy, and it passes all of the tests OK, but >> it hangs when doing eig: >> >> u, v = linalg.eig(rand(10,10)) >> # ....lots of nothing.... >> >> Do you *need* the linear algebra libraries for eig? BTW, inverse >> seems to work fine. >> >> -Rob >> > From ticket #5 > >> Greg Landrum pointed out that it may be a gcc 4.0 related >> problem and proposed a workaround -- to add the option '-ffloat-store' to CFLAGS. Works for me ! >> > Are you using gcc 4.0? Well, gcc 4.1, I had forgotten to check that. The install is on a relatively new version of Fedora, FC5. (all the older redhats I have use gcc3..). $ uname -a Linux ---.----.--- 2.6.15-1.2054_FC5smp #1 SMP Tue Mar 14 16:05:46 EST 2006 i686 i686 i386 GNU/Linux $ gcc --version gcc (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3) That seems like the most likely cause of the bug. I will try with -ffloat-store, and with gcc 3.2.3, and let you know if I have the same problems. -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From myeates at jpl.nasa.gov Mon Jun 12 19:55:05 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Mon, 12 Jun 2006 16:55:05 -0700 Subject: [Numpy-discussion] dealing with large arrays Message-ID: <448DFED9.6000902@jpl.nasa.gov> Hi I typically deal with very large arrays that don't fit in memory. How does Numpy handle this? In Matlab I can use memory mapping but I would prefer caching as is done in The Gimp. Any pointers appreciated.
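For reference, the memory-mapping route exists on the numpy side too: numpy.memmap creates a disk-backed array and only pages in the parts that are actually indexed. A sketch (the file name is invented for illustration, and the constructor's argument names shifted slightly between early releases):

```python
import os
import tempfile
import numpy as np

# Hypothetical data file; np.memmap keeps the array on disk and only
# touches the pages that are actually read or written.
path = os.path.join(tempfile.mkdtemp(), "big.dat")

# mode="w+" creates the file and allows read/write access.
mm = np.memmap(path, dtype=np.float64, mode="w+", shape=(1000000,))
mm[:5] = [1.0, 2.0, 3.0, 4.0, 5.0]
mm.flush()          # push dirty pages to disk
del mm              # drop the mapping; the file remains

# Reopen read-only: the full array never has to fit in RAM.
ro = np.memmap(path, dtype=np.float64, mode="r", shape=(1000000,))
print(ro[:5])       # [1. 2. 3. 4. 5.]
```

This gives Matlab-style memory mapping; it is not an LRU tile cache like the Gimp's, but for sequential or sparse access patterns the OS page cache plays a similar role.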
Mathew From elcorto at gmx.net Mon Jun 12 20:00:38 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 02:00:38 +0200 Subject: [Numpy-discussion] svn build fails In-Reply-To: <20060612185643.215e4358@arbutus.physics.mcmaster.ca> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> <448DEDBE.4050100@gmx.net> <20060612185643.215e4358@arbutus.physics.mcmaster.ca> Message-ID: <448E0026.6070508@gmx.net> David M. Cooke wrote: > > Hmm, I had that one too :-) [Then I went and did some cutting up of system_info, > which is why I just haven't checked the fixes in]. > > Should work *now* :D > That does it. Many thanks! cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From stephenemslie at gmail.com Mon Jun 12 20:41:17 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Tue, 13 Jun 2006 01:41:17 +0100 Subject: [Numpy-discussion] finding connected areas? Message-ID: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> I have used adaptive thresholding to turn an image into a binary image so that I can locate a particularly large bright spot. However, now that I have the binary image I need to be able to group connected cells together and determine their relative sizes. Matlab has a function called bwlabel (http://tinyurl.com/fcnvd) that labels connected objects in a matrix. That seems like a good way to start, and I'm sure there is a way for me to do something similar in numpy, but how? Thanks Stephen Emslie From efiring at hawaii.edu Mon Jun 12 21:07:45 2006 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 12 Jun 2006 15:07:45 -1000 Subject: [Numpy-discussion] dealing with large arrays In-Reply-To: <448DFED9.6000902@jpl.nasa.gov> References: <448DFED9.6000902@jpl.nasa.gov> Message-ID: <448E0FE1.5020901@hawaii.edu> Mathew Yeates wrote: > Hi > I typically deal with very large arrays that don't fit in memory. How > does Numpy handle this?
In Matlab I can use memory mapping but I would > prefer caching as is done in The Gimp. Numpy has a memmap array constructor; as it happens, I was using it for the first time today, and it is working fine. There doesn't seem to be a docstring, but in ipython if you do import numpy as N N.memmap?? you will see the python wrapper which will show you the arguments to the constructor. You can also look in Travis's book, but the arguments have changed slightly since the version of the book that I have. Eric From charlesr.harris at gmail.com Tue Jun 13 01:17:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 12 Jun 2006 23:17:44 -0600 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> Message-ID: Stephen, I don't know of a data structure in numpy or scipy that does this. To do this myself I use a modified union/find (equivalence relation) algorithm interfaced to python using boost/python. The same algorithm is also useful for connecting points on the basis of equivalence relations other than distance. If there is much interest I could make a standard C version sometime, but the interface needs some thinking about. Chuck On 6/12/06, stephen emslie wrote: > > I have used adaptive thresholding to turn an image into a binary image > so that I can locate a particularly large bright spot. However, now > that I have the binary image I need to be able to group connected > cells together and determine their relative sizes.
Matlab has a > function called bwlabel (http://tinyurl.com/fcnvd) that labels > connected objects in a matrix. That seems like a good way to start, > and I'm sure there is a way for me to do something similar in numpy, > but how? > > Thanks > Stephen Emslie > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandre.fayolle at logilab.fr Tue Jun 13 03:31:54 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Tue, 13 Jun 2006 09:31:54 +0200 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> Message-ID: <20060613073153.GB8675@crater.logilab.fr> On Tue, Jun 13, 2006 at 01:41:17AM +0100, stephen emslie wrote: > I have used adaptive thresholding to turn an image into a binary image > so that I can locate a particularly large bright spot. However, now > that I have the binary image I need to be able to group connected > cells together and determine their relative sizes. Matlab has a > function called bwlabel (http://tinyurl.com/fcnvd) that labels > connected objects in a matrix. That seems like a good way to start, > and I'm sure there is a way for me to do something similar in numpy, > but how? You will get this in numarray.nd_image, the function is called label. It is also available in recent versions of scipy, in module scipy.ndimage. -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed...
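The two answers above fit together: the union/find pass Charles describes is exactly how a two-pass bwlabel-style labeller resolves touching runs. An illustrative pure-Python sketch over numpy (4-connectivity only; scipy.ndimage.label is the real implementation to use):

```python
import numpy as np

def label_components(binary):
    """Label 4-connected regions of a 2-D boolean array.

    First pass hands out provisional labels and records equivalences
    via union/find; second pass resolves roots and renumbers densely.
    Background stays 0; each region gets a distinct positive label.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    rows, cols = binary.shape
    labels = np.zeros((rows, cols), dtype=int)
    next_label = 1
    for i in range(rows):
        for j in range(cols):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i > 0 else 0
            left = labels[i, j - 1] if j > 0 else 0
            if up == 0 and left == 0:
                labels[i, j] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[i, j] = min(l for l in (up, left) if l > 0)
                if up and left:
                    union(up, left)
    # second pass: collapse equivalence classes to consecutive labels
    remap = {}
    for i in range(rows):
        for j in range(cols):
            if labels[i, j]:
                root = find(labels[i, j])
                labels[i, j] = remap.setdefault(root, len(remap) + 1)
    return labels
```

Region sizes then fall out of a histogram of the label image, e.g. np.bincount(labels.ravel())[1:].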
Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From konrad.hinsen at laposte.net Tue Jun 13 09:00:02 2006 From: konrad.hinsen at laposte.net (konrad.hinsen at laposte.net) Date: Tue, 13 Jun 2006 15:00:02 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <448A0AFA.1090700@ee.byu.edu> References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: On 10.06.2006, at 01:57, Travis Oliphant wrote: > You may be interested to note that I just added the RNG interface > to numpy for backwards compatibility. It can be accessed and used > by replacing > > import RNG > > with > > import numpy.random.oldrng as RNG Thanks, that will facilitate the transition. Is this just a compatible interface, or actually the same algorithm as in the original RNG module? Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Centre de Biophysique Moléculaire, CNRS Orléans Synchrotron Soleil - Division Expériences Saint Aubin - BP 48 91192 Gif sur Yvette Cedex, France Tel.
+33-1 69 35 97 15 E-Mail: hinsen at cnrs-orleans.fr --------------------------------------------------------------------- From robert.kern at gmail.com Tue Jun 13 12:48:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Jun 2006 11:48:24 -0500 Subject: [Numpy-discussion] Any Numeric or numarray users on this list?
In-Reply-To: References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: konrad.hinsen at laposte.net wrote: > On 10.06.2006, at 01:57, Travis Oliphant wrote: > >>You may be interested to note that I just added the RNG interface >>to numpy for backwards compatibility. It can be accessed and used >>by replacing >> >>import RNG >> >>with >> >>import numpy.random.oldrng as RNG > > Thanks, that will facilitate the transition. Is this just a > compatible interface, or actually the same algorithm as in the > original RNG module? Just the interface. Do you actually want to use the old algorithm, or are you primarily concerned about matching old test results? The old algorithms are not very good, so I really don't want to put them back into numpy. It should be easy to roll out a separate RNG module that simply uses numpy instead of Numeric, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Tue Jun 13 12:52:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 10:52:07 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: <448EED37.2010009@ieee.org> konrad.hinsen at laposte.net wrote: > On 10.06.2006, at 01:57, Travis Oliphant wrote: > > >> You may be interested to note that I just added the RNG interface >> to numpy for backwards compatibility. It can be accessed and used >> by replacing >> >> import RNG >> >> with >> >> import numpy.random.oldrng as RNG >> > > Thanks, that will facilitate the transition. Is this just a > compatible interface, or actually the same algorithm as in the > original RNG module? > If I understand your question correctly, then it's just a compatibility interface.
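What Travis describes next — each oldrng generator wrapping its own RandomState driven by the Mersenne Twister — is easy to see from the numpy side. A sketch (numpy.random.RandomState is current numpy API; the oldrng compatibility layer itself may not exist in later releases):

```python
import numpy as np

# Each RandomState is an independent Mersenne Twister stream, so two
# generators seeded identically reproduce each other exactly...
a = np.random.RandomState(1234)
b = np.random.RandomState(1234)
assert (a.rand(5) == b.rand(5)).all()

# ...while the distribution methods (the part oldrng layered its
# density functions over) hang off the RandomState itself.
c = np.random.RandomState(0)
sample = c.normal(loc=0.0, scale=2.0, size=1000)
print(sample.shape)   # (1000,)
```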
I'm not sure which part of the original algorithm you are referring to. The random numbers are generated by the Mersenne Twister algorithm in mtrand. Each generator in numpy.random.oldrng creates a new RandomState for generation using that algorithm. The density function calculations were taken from RNG, but the random-number generators themselves are methods of the RandomState. -Travis From tim.hochberg at cox.net Tue Jun 13 12:56:37 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 09:56:37 -0700 Subject: [Numpy-discussion] Back to numexpr Message-ID: <448EEE45.1040001@cox.net> I've finally got around to looking at numexpr again. Specifically, I'm looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing the two versions. Let me go through his list of enhancements and comment (my comments are dedented): - Addition of a boolean type. This allows better array copying times for large arrays (lightweight computations are typically bounded by memory bandwidth). Adding this to numexpr looks like a no-brainer. The behaviour of booleans is different from that of integers, so in addition to being more memory efficient, this enables boolean &, |, ~, etc. to work properly. - Enhanced performance for strided and unaligned data, especially for lightweight computations (e.g. 'a>10'). With this and the addition of the boolean type, we can get up to 2x better times than previous versions. Also, most of the supported computations go faster than with numpy or numarray, even the simplest ones. Francesc, if you're out there, can you briefly describe what this support consists of? It's been long enough since I was messing with this that it's going to take me a while to untangle NumExpr_run, where I expect it's lurking, so any hints would be appreciated. - Addition of ~, & and | operators (a la numarray.where) Sounds good. - Support for both numpy and numarray (use the flag --force-numarray in setup.py).
At first glance this looks like it doesn't make things too messy, so I'm in favor of incorporating this. - Added a new benchmark for testing boolean expressions and strided/unaligned arrays: boolean_timing.py Benchmarks are always good. Things that I want to address in the future: - Add tests on strided and unaligned data (currently only tested manually) Yep! Tests are good. - Add types for int16, int64 (on 32-bit platforms), float32, complex64 (single prec.) I have some specific ideas about how this should be accomplished. Basically, I don't think we want to support every type in the same way, since this is going to make the case statement blow up to an enormous size. This may slow things down and at a minimum it will make things less comprehensible. My thinking is that we only add casts for the extra types and do the computations at high precision. Thus adding two int16 numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then an OP_CAST_fF. The details are left as an exercise to the reader ;-). So, adding int16, float32, complex64 should only require the addition of 6 casting opcodes plus appropriate modifications to the compiler. For large arrays, this should have most of the benefits of giving each type its own opcode, since the memory bandwidth is still small, while keeping the interpreter relatively simple. Unfortunately, int64 doesn't fit under this scheme; is it used enough to matter? I hate to pile a whole pile of new opcodes on for something that's rarely used. Regards, -tim From tim.hochberg at cox.net Tue Jun 13 13:03:54 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 10:03:54 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <448EEFFA.6000606@cox.net> Oops! Having just done an svn update, I now see that David appears to have done most of this about a week ago... I'm behind the times.
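The promote/compute/demote scheme Tim sketches above can be mimicked with a toy stack interpreter; the opcode names below are stand-ins for numexpr's OP_CAST_Ffs / OP_ADD_FFF / OP_CAST_fF, not its actual bytecode:

```python
import numpy as np

def run(program):
    """Execute a tiny stack program: narrow ints are promoted to
    float64, the arithmetic runs once at that width, and a final
    cast brings the result back down (Tim's cast-opcode idea)."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "cast_up":            # analogue of OP_CAST_Ffs
            stack.append(stack.pop().astype(np.float64))
        elif op == "add":                # analogue of OP_ADD_FFF
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "cast_down":          # analogue of OP_CAST_fF
            stack.append(stack.pop().astype(arg))
    return stack.pop()

a = np.array([1, 2], dtype=np.int16)
b = np.array([3, 4], dtype=np.int16)
out = run([("push", a), ("cast_up", None),
           ("push", b), ("cast_up", None),
           ("add", None), ("cast_down", np.int16)])
print(out, out.dtype)    # [4 6] int16
```

Only the cast opcodes multiply with the number of supported types; the arithmetic opcodes stay single-width, which is the interpreter-size win the scheme is after.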
-tim Tim Hochberg wrote: > [...] From cookedm at physics.mcmaster.ca Tue Jun 13 13:08:38 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 13 Jun 2006 13:08:38 -0400 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <20060613170838.GA28737@arbutus.physics.mcmaster.ca> On Tue, Jun 13, 2006 at 09:56:37AM -0700, Tim Hochberg wrote: > > I've finally got around to looking at numexpr again. Specifically, I'm > looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing > the two versions. Let me go through his list of enhancements and comment > (my comments are dedented): > > - Addition of a boolean type.
This allows better array copying times > for large arrays (lightweight computations are typically bounded by > memory bandwidth). > > Adding this to numexpr looks like a no-brainer. The behaviour of booleans > is different from that of integers, so in addition to being more memory > efficient, this enables boolean &, |, ~, etc. to work properly. > > - Enhanced performance for strided and unaligned data, especially for > lightweight computations (e.g. 'a>10'). With this and the addition of > the boolean type, we can get up to 2x better times than previous > versions. Also, most of the supported computations go faster than > with numpy or numarray, even the simplest one. > > Francesc, if you're out there, can you briefly describe what this > support consists of? It's been long enough since I was messing with this > that it's going to take me a while to untangle NumExpr_run, where I > expect it's lurking, so any hints would be appreciated. > > - Addition of ~, & and | operators (a la numarray.where) > > Sounds good. All the above is checked in already :-) > - Support for both numpy and numarray (use the flag --force-numarray > in setup.py). > > At first glance this looks like it doesn't make things too messy, so I'm > in favor of incorporating this. ... although I had ripped this all out. I'd rather have a numpy-compatible numarray layer (at the C level, this means defining macros like PyArray_DATA) than different code for each. > - Added a new benchmark for testing boolean expressions and > strided/unaligned arrays: boolean_timing.py > > Benchmarks are always good. Haven't checked that in yet. > > Things that I want to address in the future: > > - Add tests on strided and unaligned data (currently only tested > manually) > > Yep! Tests are good. > > - Add types for int16, int64 (in 32-bit platforms), float32, > complex64 (simple prec.) > > I have some specific ideas about how this should be accomplished.
> Basically, I don't think we want to support every type in the same way, > since this is going to make the case statement blow up to an enormous > size. This may slow things down and at a minimum it will make things > less comprehensible. I've been thinking how to generate the virtual machine programmatically, specifically I've been looking at vmgen from gforth again. I've got other half-formed ideas too (separate scalar machine for reductions?) that I'm working on too. But yes, the # of types does make things harder to redo :-) > My thinking is that we only add casts for the extra > types and do the computations at high precision. Thus adding two int16 > numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then > an OP_CAST_fF. The details are left as an exercise to the reader ;-). > So, adding int16, float32, complex64 should only require the addition of > 6 casting opcodes plus appropriate modifications to the compiler. My thinking too. > For large arrays, this should have most of the benefits of giving each > type its own opcode, since the memory bandwidth is still small, while > keeping the interpreter relatively simple. > > Unfortunately, int64 doesn't fit under this scheme; is it used enough to > matter? I hate to pile a whole bunch of new opcodes on for something that's > rarely used. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Tue Jun 13 13:27:40 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 10:27:40 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <20060613170838.GA28737@arbutus.physics.mcmaster.ca> References: <448EEE45.1040001@cox.net> <20060613170838.GA28737@arbutus.physics.mcmaster.ca> Message-ID: <448EF58C.4030706@cox.net> David M.
Cooke wrote: >On Tue, Jun 13, 2006 at 09:56:37AM -0700, Tim Hochberg wrote: > > >>[SNIP] >> >> > >All the above is checked in already :-) > > So I see. Oops! > > >> - Support for both numpy and numarray (use the flag --force-numarray >> in setup.py). >> >>At first glance this looks like it doesn't make things too messy, so I'm >>in favor of incorporating this. >> >> > >... although I had ripped this all out. I'd rather have a numpy-compatible >numarray layer (at the C level, this means defining macros like PyArray_DATA) >than different code for each. > > Okey dokey. I don't feel strongly about this either way other than I'd rather have one version of numexpr around rather than two almost identical versions. Whatever makes that work would make me happy. > > >> - Added a new benchmark for testing boolean expressions and >> strided/unaligned arrays: boolean_timing.py >> >>Benchmarks are always good. >> >> > >Haven't checked that in yet. > > > >> Things that I want to address in the future: >> >> - Add tests on strided and unaligned data (currently only tested >> manually) >> >>Yep! Tests are good. >> >> - Add types for int16, int64 (in 32-bit platforms), float32, >> complex64 (simple prec.) >> >>I have some specific ideas about how this should be accomplished. >>Basically, I don't think we want to support every type in the same way, >>since this is going to make the case statement blow up to an enormous >>size. This may slow things down and at a minimum it will make things >>less comprehensible. >> >> > >I've been thinking how to generate the virtual machine programmatically, >specifically I've been looking at vmgen from gforth again. I've got other >half-formed ideas too (separate scalar machine for reductions?) that I'm >working on too. > >But yes, the # of types does make things harder to redo :-) > > > >>My thinking is that we only add casts for the extra >>types and do the computations at high precision.
Thus adding two int16 >>numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then >>an OP_CAST_fF. The details are left as an exercise to the reader ;-). >>So, adding int16, float32, complex64 should only require the addition of >>6 casting opcodes plus appropriate modifications to the compiler. >> >> > >My thinking too. > > Yeah! Although I'm not in a hurry on this part. I'm remembering now that the next item on my agenda was to work on supporting broadcasting. I don't exactly know how this is going to work, although I recall having something of a plan at some point. Perhaps the easiest way to start out is to just test the shapes of the input arrays for compatibility. If they're compatible and don't require broadcasting, proceed as now. If they are incompatible, raise a "ValueError: shape mismatch: objects cannot be broadcast to a single shape" as numpy does. If they are compatible, but require broadcasting, raise a NotImplementedError. This should be relatively easy and makes numexpr considerably more congruent with numpy. I'm hoping that, while working on that, my plan will pop back into my head ;-) [SNIP] Regards, -tim From faltet at carabos.com Tue Jun 13 13:47:35 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 19:47:35 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <200606131947.37848.faltet@carabos.com> Hey, numexpr seems to be back, wow! :-D On Tuesday 13 June 2006 18:56, Tim Hochberg wrote: > I've finally got around to looking at numexpr again. Specifically, I'm > looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing > the two versions. Let me go through his list of enhancements and comment > (my comments are dedented): Well, as David already said, he committed most of my additions some days ago :-) > - Enhanced performance for strided and unaligned data, especially for > lightweight computations (e.g.
'a>10'). With this and the addition of > the boolean type, we can get up to 2x better times than previous > versions. Also, most of the supported computations go faster than > with numpy or numarray, even the simplest one. > > Francesc, if you're out there, can you briefly describe what this > support consists of? It's been long enough since I was messing with this > that it's going to take me a while to untangle NumExpr_run, where I > expect it's lurking, so any hints would be appreciated. This is easy. When dealing with strided or unaligned vectors, instead of copying them completely to well-behaved arrays, they are copied only when the virtual machine needs the appropriate blocks. With this, there is no need to write the well-behaved array back into main memory, which can bring an important bottleneck, especially when dealing with large arrays. This allows a better use of the processor caches because data is cached and used only when the VM needs it. Also, I see that David has added support for byteswapped arrays, which is great! > - Support for both numpy and numarray (use the flag --force-numarray > in setup.py). > > At first glance this looks like it doesn't make things too messy, so I'm > in favor of incorporating this. Yeah. I think you are right. It's only that we need this for our own things :) > - Add types for int16, int64 (in 32-bit platforms), float32, > complex64 (simple prec.) > > I have some specific ideas about how this should be accomplished. > Basically, I don't think we want to support every type in the same way, > since this is going to make the case statement blow up to an enormous > size. This may slow things down and at a minimum it will make things > less comprehensible. My thinking is that we only add casts for the extra > types and do the computations at high precision. Thus adding two int16 > numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then > an OP_CAST_fF. The details are left as an exercise to the reader ;-).
> So, adding int16, float32, complex64 should only require the addition of > 6 casting opcodes plus appropriate modifications to the compiler. > > For large arrays, this should have most of the benefits of giving each > type its own opcode, since the memory bandwidth is still small, while > keeping the interpreter relatively simple. Yes, I like the idea as well. > Unfortunately, int64 doesn't fit under this scheme; is it used enough to > matter? I hate to pile a whole bunch of new opcodes on for something that's > rarely used. Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for some users (especially in 32-bit platforms), is a type with the same rights as the others and we would like to give support for it in numexpr. In fact, Ivan Vilata already has implemented this support in our local copy of numexpr, so perhaps (I say perhaps because we are in the middle of a big project now and are a bit scarce of time resources) we can provide the patch against the latest version of David for your consideration. With this we can solve the problem with int64 support in 32-bit platforms (although admittedly, the VM gets a bit more complicated, I really think that this is worth the effort). Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V. Enjoy Data "-" From faltet at carabos.com Tue Jun 13 14:21:43 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 20:21:43 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606131947.37848.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> Message-ID: <200606132021.44730.faltet@carabos.com> On Tuesday 13 June 2006 19:47, Francesc Altet wrote: > > - Support for both numpy and numarray (use the flag --force-numarray > > in setup.py). > > > > At first glance this looks like it doesn't make things too messy, so I'm > > in favor of incorporating this. > > Yeah. I think you are right.
It's only that we need this for our own things > :) Oops! Small correction here. I thought that you were saying that you were *not* in favour of supporting numarray as well, but you clearly were. Sorry about the misunderstanding. Anyway, if David's idea of providing a thin numpy-compatible numarray layer is easy to implement, then great. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V. Enjoy Data "-" From tim.hochberg at cox.net Tue Jun 13 14:46:15 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 11:46:15 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606131947.37848.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> Message-ID: <448F07F7.8030903@cox.net> Francesc Altet wrote: >Hey, numexpr seems to be back, wow! :-D > >On Tuesday 13 June 2006 18:56, Tim Hochberg wrote: > > >>I've finally got around to looking at numexpr again. Specifically, I'm >>looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing >>the two versions. Let me go through his list of enhancements and comment >>(my comments are dedented): >> >> > >Well, as David already said, he committed most of my additions some days >ago :-) > > > >> - Enhanced performance for strided and unaligned data, especially for >> lightweight computations (e.g. 'a>10'). With this and the addition of >> the boolean type, we can get up to 2x better times than previous >> versions. Also, most of the supported computations go faster than >> with numpy or numarray, even the simplest one. >> >>Francesc, if you're out there, can you briefly describe what this >>support consists of? It's been long enough since I was messing with this >>that it's going to take me a while to untangle NumExpr_run, where I >>expect it's lurking, so any hints would be appreciated. >> >> > >This is easy.
When dealing with strided or unaligned vectors, instead of >copying them completely to well-behaved arrays, they are copied only when the >virtual machine needs the appropriate blocks. With this, there is no need to >write the well-behaved array back into main memory, which can bring an >important bottleneck, especially when dealing with large arrays. This allows a >better use of the processor caches because data is cached and used only when >the VM needs it. Also, I see that David has added support for byteswapped >arrays, which is great! > > I'm looking at this now. I imagine it will become clear eventually. I've clearly forgotten some stuff over the last few months. Sigh. First I need to get it to compile here. It seems that a few GCCisms have crept back in. [SNIP] >>rarely used. >> >> > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >some users (especially in 32-bit platforms), is a type with the same rights >as the others and we would like to give support for it in numexpr. In fact, >Ivan Vilata already has implemented this support in our local copy of numexpr, >so perhaps (I say perhaps because we are in the middle of a big project now >and are a bit scarce of time resources) we can provide the patch against the >latest version of David for your consideration. With this we can solve the >problem with int64 support in 32-bit platforms (although admittedly, the VM >gets a bit more complicated, I really think that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at some point and slow everything down. To be honest I have no idea at what point that is likely to happen, but I know they worry about it with the Python interpreter mainloop. Also, it becomes much, much slower to compile past a certain number of case statements under VC7, not sure why. That's mostly my problem though. One idea that might be worth trying for int64 is to special case them using functions.
That is using OP_FUNC_LL and OP_FUNC_LLL and some casting opcodes. This could support int64 with relatively few new opcodes. There's obviously some extra overhead introduced here by the function call. How much this matters is probably a function of how well the compiler / hardware supports int64 to begin with. That brings up another point. We probably don't want to have casting opcodes from/to everything. Given that there are 8 types on the table now, if we support every casting opcode we're going to have 56(?) opcodes just for casting. I imagine what we'll have to do is write a cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra step in these cases for keeping the number of casting opcodes under control. Once again, int64 is problematic since you lose precision casting to int. I guess in this case you could get by with being able to cast back and forth to float and int. No need to cast directly to booleans, etc. as two-stage casting should suffice for this. -tim From faltet at carabos.com Tue Jun 13 15:30:41 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 21:30:41 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448F07F7.8030903@cox.net> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> Message-ID: <200606132130.43128.faltet@carabos.com> On Tuesday 13 June 2006 20:46, Tim Hochberg wrote: > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for > >some users (especially in 32-bit platforms), is a type with the same rights > >as the others and we would like to give support for it in numexpr. In > > fact, Ivan Vilata already has implemented this support in our local copy > > of numexpr, so perhaps (I say perhaps because we are in the middle of a > > big project now and are a bit scarce of time resources) we can provide > > the patch against the latest version of David for your consideration.
> > With this we can solve the problem with int64 support in 32-bit platforms > > (although admittedly, the VM gets a bit more complicated, I really think > > that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at > some point and slow everything down. To be honest I have no idea at what > point that is likely to happen, but I know they worry about it with the > Python interpreter mainloop. That's true. I didn't think about this :-/ > Also, it becomes much, much slower to > compile past a certain number of case statements under VC7, not sure > why. That's mostly my problem though. No, this is a general problem (I'd say much more in GCC, because the optimizer runs so slooooow). However, this should only affect poor developers, not users and besides, we should find a solution for int64 in 32-bit platforms. > One idea that might be worth trying for int64 is to special case them > using functions. That is using OP_FUNC_LL and OP_FUNC_LLL and some > casting opcodes. This could support int64 with relatively few new > opcodes. There's obviously some extra overhead introduced here by the > function call. How much this matters is probably a function of how well > the compiler / hardware supports int64 to begin with. Mmm, in my experience int64 operations are reasonably well supported by modern 32-bit processors (IIRC they normally take twice the time of int32 ops). The problem with using a long for representing ints in numexpr is that we have the duality of being represented differently in 32/64-bit platforms and that could be a headache in the long term (int64 support in 32-bit platforms is only one issue, but there should be more). IMHO, it is much better to assign the role for ints in numexpr to a unique datatype, and this should be int64, for the sake of wide int64 support, but also for future (and present!) 64-bit processors.
The problem would be that operations with 32-bit ints in 32-bit processors can be slowed down by a factor 2x (or more, because there is a casting now), but in exchange, we have fully portable code and int64 support. In case we consider entering this way, we have two options here: keep the VM simple and advertise that int32 arithmetic in numexpr in 32-bit platforms will be sub-optimal, or, as we already have done, add the proper machinery to support both integer types separately (at the expense of making the VM more complex). Or perhaps David can come up with a better solution (vmgen from gforth? no idea what this is, but the name sounds sexy;-) > That brings up another point. We probably don't want to have casting > opcodes from/to everything. Given that there are 8 types on the table > now, if we support every casting opcode we're going to have 56(?) > opcodes just for casting. I imagine what we'll have to do is write a > cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra > step in these cases for keeping the number of casting opcodes under > control. Once again, int64 is problematic since you lose precision > casting to int. I guess in this case you could get by with being able to > cast back and forth to float and int. No need to cast directly to > booleans, etc. as two-stage casting should suffice for this. Well, we already thought about this. Not only can't you safely cast an int64 to an int32 without losing precision, but what is worse, you can't even cast it to any other commonly available datatype (casting to a float64 will also lose precision). And, although you can afford losing precision when dealing with floating data in some scenarios (but certainly not with a general-purpose library like numexpr tries to be), it is by all means unacceptable to lose 'precision' in ints. So, to my mind, the only solution is completely avoiding casting int64 to any type. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V.
Enjoy Data "-" From cookedm at physics.mcmaster.ca Tue Jun 13 15:44:13 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 13 Jun 2006 15:44:13 -0400 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606132130.43128.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <200606132130.43128.faltet@carabos.com> Message-ID: <20060613154413.42563300@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 21:30:41 +0200 Francesc Altet wrote: > On Tuesday 13 June 2006 20:46, Tim Hochberg wrote: > > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre > > >for some users (especially in 32-bit platforms), is a type with the same > > >rights as the others and we would like to give support for it in > > >numexpr. In > > > fact, Ivan Vilata already has implemented this support in our local copy > > > of numexpr, so perhaps (I say perhaps because we are in the middle of a > > > big project now and are a bit scarce of time resources) we can provide > > > the patch against the latest version of David for your consideration. > > > With this we can solve the problem with int64 support in 32-bit > > > platforms (although admittedly, the VM gets a bit more complicated, I > > > really think that this is worth the effort) > > > > In addition to complexity, I worry that we'll overflow the code cache at > > some point and slow everything down. To be honest I have no idea at what > > point that is likely to happen, but I know they worry about it with the > > Python interpreter mainloop. > > That's true. I didn't think about this :-/ > > > Also, it becomes much, much slower to > > compile past a certain number of case statements under VC7, not sure > > why. That's mostly my problem though. > > No, this is a general problem (I'd say much more in GCC, because the > optimizer runs so slooooow).
However, this should only affect poor > developers, not users and besides, we should find a solution for int64 in > 32-bit platforms. If I switch to vmgen, it can easily make two versions of the code: one using a case statement, and another direct-threaded version for GCC (which supports taking the address of a label, and doing a 'goto' to a variable). Won't solve the I-cache problem, though. And there's always subroutine threading (each opcode is a function, and the program is a list of function pointers). We won't know until we try :) > Or perhaps > David can come up with a better solution (vmgen from gforth? no idea what this > is, but the name sounds sexy;-) The docs for it are at http://www.complang.tuwien.ac.at/anton/vmgen/html-docs/ -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Tue Jun 13 15:49:45 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 12:49:45 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606132130.43128.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <200606132130.43128.faltet@carabos.com> Message-ID: <448F16D9.6010704@cox.net> Francesc Altet wrote: >On Tuesday 13 June 2006 20:46, Tim Hochberg wrote: > > >>>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>>some users (especially in 32-bit platforms), is a type with the same rights >>>as the others and we would like to give support for it in numexpr. In >>>fact, Ivan Vilata already has implemented this support in our local copy >>>of numexpr, so perhaps (I say perhaps because we are in the middle of a >>>big project now and are a bit scarce of time resources) we can provide >>>the patch against the latest version of David for your consideration.
>>>With this we can solve the problem with int64 support in 32-bit platforms >>>(although admittedly, the VM gets a bit more complicated, I really think >>>that this is worth the effort) >>> >>> >>In addition to complexity, I worry that we'll overflow the code cache at >>some point and slow everything down. To be honest I have no idea at what >>point that is likely to happen, but I know they worry about it with the >>Python interpreter mainloop. >> >> > >That's true. I didn't think about this :-/ > > > >>Also, it becomes much, much slower to >>compile past a certain number of case statements under VC7, not sure >>why. That's mostly my problem though. >> >> > >No, this is a general problem (I'd say much more in GCC, because the optimizer >runs so slooooow). However, this should only affect poor developers, not >users and besides, we should find a solution for int64 in 32-bit platforms. > > Yeah. This is just me whining. Under VC7, there is a very sudden change when adding more cases where compile times go from seconds to minutes. I think we're already past that now anyway, so slowing that down more isn't going to hurt me. Overflowing the cache is the real thing I worry about. >>One idea that might be worth trying for int64 is to special case them >>using functions. That is using OP_FUNC_LL and OP_FUNC_LLL and some >>casting opcodes. This could support int64 with relatively few new >>opcodes. There's obviously some extra overhead introduced here by the >>function call. How much this matters is probably a function of how well >>the compiler / hardware supports int64 to begin with. >> >> > >Mmm, in my experience int64 operations are reasonably well supported by modern >32-bit processors (IIRC they normally take twice the time of int32 ops).
> >The problem with using a long for representing ints in numexpr is that we have >the duality of being represented differently in 32/64-bit platforms and that >could be a headache in the long term (int64 support in 32-bit platforms is only >one issue, but there should be more). IMHO, it is much better to assign the >role for ints in numexpr to a unique datatype, and this should be int64, for >the sake of wide int64 support, but also for future (and present!) 64-bit >processors. The problem would be that operations with 32-bit ints in 32-bit >processors can be slowed down by a factor 2x (or more, because there is a >casting now), but in exchange, we have fully portable code and int64 support. > > This certainly makes things simpler. I think that this would be fine with me since I mostly use float and complex, so the speed issue wouldn't hit me much. But that's 'cause I'm selfish that way ;-) >In case we consider entering this way, we have two options here: keep the VM >simple and advertise that int32 arithmetic in numexpr in 32-bit platforms >will be sub-optimal, or, as we already have done, add the proper machinery to >support both integer types separately (at the expense of making the VM more >complex). Or perhaps David can come up with a better solution (vmgen from >gforth? no idea what this is, but the name sounds sexy;-) > > Yeah! >>That brings up another point. We probably don't want to have casting >>opcodes from/to everything. Given that there are 8 types on the table >>now, if we support every casting opcode we're going to have 56(?) >>opcodes just for casting. I imagine what we'll have to do is write a >>cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra >>step in these cases for keeping the number of casting opcodes under >>control. Once again, int64 is problematic since you lose precision >>casting to int. I guess in this case you could get by with being able to >>cast back and forth to float and int.
No need to cast directly to >>booleans, etc as two-stage casting should suffice for this. >> >> > >Well, we already thought about this. Not only can you not safely cast an int64 >to an int32 without losing precision, but what is worse, you can't even >cast it to any other commonly available datatype (casting to a float64 will >also lose precision). And, although you can afford losing precision when >dealing with floating data in some scenarios (but certainly not with a >general-purpose library like numexpr tries to be), it is by all means >unacceptable to lose 'precision' in ints. So, to my mind, the only solution >is completely avoiding casting int64 to any other type. > > I forgot that the various OP_CAST_xy opcodes only do safe casting. That makes the number of potential casts much smaller, so I guess this is not as big a deal as I thought. I'm still not sure, for instance, if we need boolean to int16, int32, int64, float32, float64, complex64 and complex128. It wouldn't kill us, but it's probably overkill. -tim From myeates at jpl.nasa.gov Tue Jun 13 20:45:49 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 13 Jun 2006 17:45:49 -0700 Subject: [Numpy-discussion] build problems on Solaris Message-ID: <448F5C3D.1080200@jpl.nasa.gov> Here's the problem.... The function get_flags_linker_so in numpy/distutils/fcompiler/gnu.py is not called anywhere. Because of this, g2c is not added as a library and -mimpure-text is not set. This causes the "s_wsfe unresolved" problem. Anybody know how to fix this? Mathew From oliphant.travis at ieee.org Tue Jun 13 21:41:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 19:41:36 -0600 Subject: [Numpy-discussion] Array Interface updated Message-ID: <448F6950.70600@ieee.org> I've updated the description of the array interface (array protocol).
The web-page is http://numeric.scipy.org/array_interface.html Basically, the Python-side interface has been compressed to the single attribute __array_interface__. There is still the __array_struct__ attribute which now has a descr member in the structure returned (but the ARR_HAS_DESCR flag must be set or it must be ignored). NumPy has been updated so that the old Python-side attributes are now spelled: __array___ --> __array_interface__[''] -Travis From myeates at jpl.nasa.gov Tue Jun 13 22:21:35 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 13 Jun 2006 19:21:35 -0700 Subject: [Numpy-discussion] Atlas missing dgeev Message-ID: <448F72AF.4080506@jpl.nasa.gov> I finally got things linked with libg2c but now I get import linalg -> failed: ld.so.1: python: fatal: relocation error: file /u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: symbol dgeev_: referenced symbol not found I looked all through my ATLAS source and I see no dgeev anywhere. No file of that name and no references to that function. Anybody know what's up with this? Mathew From robert.kern at gmail.com Tue Jun 13 23:22:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Jun 2006 22:22:00 -0500 Subject: [Numpy-discussion] Atlas missing dgeev In-Reply-To: <448F72AF.4080506@jpl.nasa.gov> References: <448F72AF.4080506@jpl.nasa.gov> Message-ID: Mathew Yeates wrote: > I finally got things linked with libg2c but now I get > import linalg -> failed: ld.so.1: python: fatal: relocation error: file > /u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: > symbol dgeev_: referenced symbol not found > > I looked all through my ATLAS source and I see no dgeev anywhere. No > file of that name and no references to that function. Anybody know what's > up with this? ATLAS itself only provides optimized versions of some LAPACK routines. You need to combine it with the full LAPACK to get full coverage.
Please read the ATLAS FAQ for instructions: http://math-atlas.sourceforge.net/errata.html#completelp -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From martin.wiechert at gmx.de Wed Jun 14 05:14:17 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 Jun 2006 11:14:17 +0200 Subject: [Numpy-discussion] addressing a submatrix Message-ID: <200606141114.18202.martin.wiechert@gmx.de> Hi list, is there a concise way to address a subrectangle of a 2d array? So far I'm using A [I] [:, J] which is not pretty and more importantly only works for reading the subrectangle. Writing does *not* work. (Cf. session below.) Any help would be appreciated.
Thanks, Martin In [1]:a = zeros ((4,4)) In [2]:b = ones ((2,2)) In [3]:c = array ((1,2)) In [4]:a [c] [:, c] = b In [5]:a Out[5]: array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) In [6]:a [:, c] [c] = b In [7]:a Out[7]: array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) In [8]:a [c, c] = b In [9]:a Out[9]: array([[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) In [10]:a [c] [:, c] Out[10]: array([[1, 0], [0, 1]]) In [11]: From simon at arrowtheory.com Wed Jun 14 14:25:55 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 14 Jun 2006 19:25:55 +0100 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <20060614192555.55dae6de.simon@arrowtheory.com> On Wed, 14 Jun 2006 11:14:17 +0200 Martin Wiechert wrote: > > Hi list, > > is there a concise way to address a subrectangle of a 2d array? So far I'm > using > > A [I] [:, J] what about A[I,J] ? Simon. >>> import numpy >>> a=numpy.zer numpy.zeros numpy.zeros_like >>> a=numpy.zeros([4,4]) >>> a array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) >>> a[2:3,2:3]=1 >>> a array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) >>> a[1:3,1:3]=1 >>> a array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]) >>> -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From karol.langner at kn.pl Wed Jun 14 05:31:38 2006 From: karol.langner at kn.pl (Karol Langner) Date: Wed, 14 Jun 2006 11:31:38 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <200606141131.38247.karol.langner@kn.pl> On Wednesday 14 June 2006 11:14, Martin Wiechert wrote: > Hi list, > > is there a concise way to address a subrectangle of a 2d array? 
So far I'm > using > > A [I] [:, J] > > which is not pretty and more importantly only works for reading the > subrectangle. Writing does *not* work. (Cf. session below.) > > Any help would be appreciated. > > Thanks, > Martin You can achieve this by using the "take" function twice, in this fashion: >>> a = numpy.ones((5,5)) >>> for i in range(5): ... for j in range(5): ... a[i][j] = i+j ... >>> a array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) >>> print a.take.__doc__ a.take(indices, axis=None). Selects the elements in indices from array a along the given axis. >>> a.take((1,2,3),axis=0) array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]) >>> a.take((1,2,3),axis=0).take((2,3),axis=1) array([[3, 4], [4, 5], [5, 6]]) Cheers, Karol -- written by Karol Langner śro cze 14 11:27:33 CEST 2006 From Martin.Wiechert at mpimf-heidelberg.mpg.de Wed Jun 14 05:33:45 2006 From: Martin.Wiechert at mpimf-heidelberg.mpg.de (Martin Wiechert) Date: Wed, 14 Jun 2006 11:33:45 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <20060614192555.55dae6de.simon@arrowtheory.com> References: <200606141114.18202.martin.wiechert@gmx.de> <20060614192555.55dae6de.simon@arrowtheory.com> Message-ID: <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> Hi Simon, thanks for your reply. A [I, J] seems to only work if the indices are *slices* as in your example. I need fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't do what I want. As you can see from the example session I posted it does not address the whole rectangle IxJ but only the elements (I_1, J_1), (I_2, J_2). E.g., if I==J this is the diagonal of the submatrix, not the full submatrix. Martin On Wednesday 14 June 2006 20:25, Simon Burton wrote: > On Wed, 14 Jun 2006 11:14:17 +0200 > > Martin Wiechert wrote: > > Hi list, > > > > is there a concise way to address a subrectangle of a 2d array?
So far > > I'm using > > > > A [I] [:, J] > > what about A[I,J] ? > > Simon. > > >>> import numpy > >>> a=numpy.zer > > numpy.zeros numpy.zeros_like > > >>> a=numpy.zeros([4,4]) > >>> a > > array([[0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 0]]) > > >>> a[2:3,2:3]=1 > >>> a > > array([[0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 1, 0], > [0, 0, 0, 0]]) > > >>> a[1:3,1:3]=1 > >>> a > > array([[0, 0, 0, 0], > [0, 1, 1, 0], > [0, 1, 1, 0], > [0, 0, 0, 0]]) From ivilata at carabos.com Wed Jun 14 05:42:31 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Wed, 14 Jun 2006 11:42:31 +0200 Subject: [Numpy-discussion] dealing with large arrays In-Reply-To: <448DFED9.6000902@jpl.nasa.gov> References: <448DFED9.6000902@jpl.nasa.gov> Message-ID: <448FDA07.5000702@carabos.com> Mathew Yeates wrote:: > I typically deal with very large arrays that don't fit in memory. How > does Numpy handle this? In Matlab I can use memory mapping but I would > prefer caching as is done in The Gimp. Hi Mathew. If you need to store large arrays on disk, you may have a look at PyTables_. It will save you some headaches with the on-disk representation of your arrays (it uses the self-describing HDF5 format), it allows you to load specific slices of arrays, and it provides caching of data. The latest versions also support numpy. Hope that helps, .. _PyTables: http://www.pytables.org/ :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From karol.langner at kn.pl Wed Jun 14 05:50:33 2006 From: karol.langner at kn.pl (Karol Langner) Date: Wed, 14 Jun 2006 11:50:33 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <200606141150.33560.karol.langner@kn.pl> On Wednesday 14 June 2006 11:14, Martin Wiechert wrote: > is there a concise way to address a subrectangle of a 2d array? So far I'm > using > > A [I] [:, J] > > which is not pretty and more importantly only works for reading the > subrectangle. Writing does *not* work. (Cf. session below.) > > Any help would be appreciated. > > Thanks, > Martin You can also use A[m:n,r:s] to reference a subarray. For instance: >>> a = numpy.zeros((5,5)) >>> b = numpy.ones((3,3)) >>> a[1:4,1:4] = b >>> a array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]) Cheers, Karol -- written by Karol Langner śro cze 14 11:49:35 CEST 2006 From pau.gargallo at gmail.com Wed Jun 14 06:02:06 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 14 Jun 2006 12:02:06 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> References: <200606141114.18202.martin.wiechert@gmx.de> <20060614192555.55dae6de.simon@arrowtheory.com> <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> Message-ID: <6ef8f3380606140302r7f8778aep4a723a9964fe5e95@mail.gmail.com> On 6/14/06, Martin Wiechert wrote: > Hi Simon, > > thanks for your reply. > > A [I, J] > > seems to only work if the indices are *slices* as in your example. I need > fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't do > what I want.
As you can see from the example session I posted it does not > address the whole rectangle IxJ but only the elements (I_1, J_1), (I_2, J_2). > E.g., if I==J this is the diagonal of the submatrix, not the full submatrix. you can use A[ ix_(I,J) ] to do what you want. But, if you just want subrectangular regions then A[1:4,1:4] is enough. Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) ], but is the same as A[ ix_(arange(1,4), arange(1,4)) ]. hope this helps pau From ivilata at carabos.com Wed Jun 14 06:14:32 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Wed, 14 Jun 2006 12:14:32 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448F07F7.8030903@cox.net> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> Message-ID: <448FE188.3010602@carabos.com> Tim Hochberg wrote:: > Francesc Altet wrote: > [...] >>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>some users (especially on 32-bit platforms), is a type with the same rights >>as the others and we would like to give support for it in numexpr. In fact, >>Ivan Vilata already has implemented this support in our local copy of numexpr, >>so perhaps (I say perhaps because we are in the middle of a big project now >>and are a bit scarce of time resources) we can provide the patch against the >>latest version of David for your consideration. With this we can solve the >>problem with int64 support in 32-bit platforms (although admittedly, the VM >>gets a bit more complicated, I really think that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at > some point and slow everything down. To be honest I have no idea at what > point that is likely to happen, but I know they worry about it with the > Python interpreter mainloop.
Also, it becomes much, much slower to > compile past a certain number of case statements under VC7, not sure > why. That's mostly my problem though. > [...] Hi! For your information, the addition of separate, predictably-sized int (int32) and long (int64) types to numexpr was roughly as complicated as the addition of boolean types, so maybe the increase of complexity isn't that important (but I recognise I don't know the effect on the final size of the VM). As soon as I have time (and an SVN version of numexpr which passes the tests ;) ) I will try to merge back the changes and send a patch to the list. Thanks for your patience! :) :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From tim.hochberg at cox.net Wed Jun 14 09:50:08 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed, 14 Jun 2006 06:50:08 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448FE188.3010602@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <448FE188.3010602@carabos.com> Message-ID: <44901410.2090401@cox.net> Ivan Vilata i Balaguer wrote: >Tim Hochberg wrote:: > > > >>Francesc Altet wrote: >>[...] >> >> >>>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>>some users (especially on 32-bit platforms), is a type with the same rights >>>as the others and we would like to give support for it in numexpr. In fact, >>>Ivan Vilata already has implemented this support in our local copy of numexpr, >>>so perhaps (I say perhaps because we are in the middle of a big project now >>>and are a bit scarce of time resources) we can provide the patch against the >>>latest version of David for your consideration.
With this we can solve the >>>problem with int64 support in 32-bit platforms (although admittedly, the VM >>>gets a bit more complicated, I really think that this is worth the effort) >>> >>> >>In addition to complexity, I worry that we'll overflow the code cache at >>some point and slow everything down. To be honest I have no idea at what >>point that is likely to happen, but I know they worry about it with the >>Python interpreter mainloop. Also, it becomes much, much slower to >>compile past a certain number of case statements under VC7, not sure >>why. That's mostly my problem though. >>[...] >> >> > >Hi! For your information, the addition of separate, predictably-sized >int (int32) and long (int64) types to numexpr was roughly as complicated >as the addition of boolean types, so maybe the increase of complexity >isn't that important (but I recognise I don't know the effect on the >final size of the VM). > > I didn't expect it to be any worse than booleans (I would imagine it's about the same). It's just that there's a point at which we are going to slow down the VM due to sheer size. I don't know where that point is, so I'm cautious. Booleans seem like they need to be supported directly in the interpreter, while only one each (the largest one) of ints, floats and complexes do. Booleans are different since they have different behaviour than integers, so they need a separate set of opcodes. For floats and complexes, the largest is also the most commonly used, so this works out well. For ints on the other hand, int32 is the most commonly used, but int64 is the largest, so the approach of using the largest is going to result in a speed hit for the most common integer case. Implementing both, as you've done, solves that, but as I say, I worry about making the interpreter core too big. I expect that you've timed things before and after the addition of int64 and not gotten a noticeable slowdown.
That's good, although it doesn't entirely mean we're out of the woods since I expect that more opcodes that we just need to add will show up and at some point we may run into an opcode crunch. Or maybe I'm just being paranoid. >As soon as I have time (and an SVN version of numexpr which passes the >tests ;) ) I will try to merge back the changes and send a patch to the >list. Thanks for your patience! :) > > I look forward to seeing it. Now if only I can get svn numexpr to stop segfaulting under Windows I'll be able to do something useful... -tim >:: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > From martin.wiechert at gmx.de Wed Jun 14 10:19:35 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 Jun 2006 16:19:35 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> References: <200606141114.18202.martin.wiechert@gmx.de> <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> Message-ID: <200606141619.36693.martin.wiechert@gmx.de> Thanks Pau, that's exactly what I was looking for. Martin On Wednesday 14 June 2006 12:01, you wrote: > On 6/14/06, Martin Wiechert wrote: > > Hi Simon, > > > > thanks for your reply. > > > > A [I, J] > > > > seems to only work if the indices are *slices* as in your example. I > > need fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] > > won't do what I want. As you can see from the example session I posted it > > does not address the whole rectangle IxJ but only the elements (I_1, > > J_1), (I_2, J_2). E.g., if I==J this is the diagonal of the submatrix, > > not the full submatrix. > > you can use A[ ix_(I,J) ] to do what you want. > > But, if you just want subrectangular regions then A[1:4,1:4] is enough.
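The `ix_` suggestion above is easy to verify; a short demonstration, using the index lists from the thread (the array size and fill values are arbitrary):

```python
import numpy as np

A = np.zeros((6, 6), dtype=int)
I = [1, 3, 4]   # row indices from the thread's example
J = [0, 3, 5]   # column indices from the thread's example

# np.ix_ builds an "open mesh", so this addresses the full 3x3
# subrectangle I x J -- and it works for assignment as well as reading:
A[np.ix_(I, J)] = np.arange(9).reshape(3, 3)

# Reading back returns the whole subrectangle...
print(A[np.ix_(I, J)])   # [[0 1 2] [3 4 5] [6 7 8]]
# ...while plain A[I, J] pairs the indices elementwise and picks out
# only the "diagonal" elements (1,0), (3,3) and (4,5):
print(A[I, J])           # [0 4 8]
```

By contrast, `A[I][:, J] = b` writes into a temporary copy produced by the first fancy-indexing step, which is why the session posted earlier in the thread left the array unchanged.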
> Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) > ], but is the same as A[ ix_(arange(1,4), arange(1,4)) ]. > > hope this helps > pau From chanley at stsci.edu Wed Jun 14 11:17:40 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 14 Jun 2006 11:17:40 -0400 (EDT) Subject: [Numpy-discussion] numpy.test() fails on Redhat Enterprise and Solaris Message-ID: <20060614111740.CJQ36789@comet.stsci.edu> The daily numpy build and tests I run have failed for revision 2617. Below is the error message I receive on my RHE 3 box: ====================================================================== FAIL: Check reading the nested fields of a nested array (1st level) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/sparty1/dev/site-packages/lib/python/numpy/core/tests/test_numerictypes.py", line 283, in check_nested1_acessors dtype='U2')) File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 139, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 215, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 207, in assert_array_compare assert cond, msg AssertionError: Arrays are not equal (mismatch 100.0%) x: array([u'NN', u'OO'], dtype='<U2') From martin.wiechert at gmx.de Wed Jun 14 11:58:04 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 Jun 2006 17:58:04 +0200 Subject: [Numpy-discussion] maximmum.reduce and nans Message-ID: <200606141758.04222.martin.wiechert@gmx.de> Hi list, does anybody know why maximum.reduce (()) does not return -inf? Looks very natural to me and as a byproduct maximum.reduce would ignore nans, thereby removing the need for nanmax etc. The current convention gives >>> from numpy import * >>> maximum.reduce ((1,nan)) 1.0 >>> maximum.reduce ((nan, 1)) nan >>> maximum.reduce (()) Traceback (most recent call last): File "<stdin>", line 1, in ?
ValueError: zero-size array to ufunc.reduce without identity >>> Cheers, Martin From ndarray at mac.com Wed Jun 14 12:39:23 2006 From: ndarray at mac.com (Sasha) Date: Wed, 14 Jun 2006 12:39:23 -0400 Subject: [Numpy-discussion] maximmum.reduce and nans In-Reply-To: <200606141758.04222.martin.wiechert@gmx.de> References: <200606141758.04222.martin.wiechert@gmx.de> Message-ID: On 6/14/06, Martin Wiechert wrote: >... > does anybody know why > > maximum.reduce (()) > > does not return -inf? > Technically, because >>> maximum.identity is None True It is theoretically feasible to change maximum.identity to -inf, but that would be inconsistent with the default dtype being int. For example >>> add.identity, type(add.identity) (0, <type 'int'>) Another reason is that IEEE special values are not universally supported yet. I would suggest adding an 'initial' keyword to reduce. If this is done, the type of 'initial' may also supply the default for the 'dtype' argument of reduce that was added in numpy. Another suggestion in this area is to change the identity attribute of ufuncs from a scalar to a dtype:scalar dictionary. Finally, a bug report: >>> add.identity = None Traceback (most recent call last): File "<stdin>", line 1, in ? SystemError: error return without exception set From emsellem at obs.univ-lyon1.fr Wed Jun 14 13:15:58 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 14 Jun 2006 19:15:58 +0200 Subject: [Numpy-discussion] installation problems: stupid question Message-ID: <4490444E.2070805@obs.univ-lyon1.fr> Hi, I just switched to Suse 10.1 (from Suse 10.0) and for some reason now newly installed modules do not go under /usr/lib/python2.4/site-packages/ as usual but under /usr/local/lib/python2.4/site-packages/ (the "local" is the difference). How can I go back to the normal setting? thanks a lot for any input there.
Eric P.S.: I seem to have a problem with lapack_lite.so (undefined symbol: s_cat) and it may be linked From robert.kern at gmail.com Wed Jun 14 13:54:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Jun 2006 12:54:28 -0500 Subject: [Numpy-discussion] installation problems: stupid question In-Reply-To: <4490444E.2070805@obs.univ-lyon1.fr> References: <4490444E.2070805@obs.univ-lyon1.fr> Message-ID: Eric Emsellem wrote: > Hi, > > I just switched to Suse 10.1 (from Suse 10.0) and for some reason now > the newly installed modules do not go under > /usr/lib/python2.4/site-packages/ as usual but under > /usr/local/lib/python2.4/site-packages/ > (the "local" is the difference). > > How can I go back to the normal setting? You can edit ~/.pydistutils.cfg to add this section: [install] prefix=/usr However, Suse probably made the change for a reason. Distribution vendors like to control /usr and let the user/sysadmin do what he wants in /usr/local . It is generally a Good Idea to respect that. If the Suse python group is not incompetent, then they will have already made the modifications necessary to make sure that /usr/local/lib/python2.4/site-packages is appropriately on your PYTHONPATH and other such modifications. > thanks a lot for any input there. > > > Eric > P.S.: I seem to have a problem with lapack_lite.so (undefined symbol: > s_cat) and it may be linked I don't think so. That looks like it might be a function that should be in libg2c, but I'm not sure. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From myeates at jpl.nasa.gov Wed Jun 14 16:06:55 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 14 Jun 2006 13:06:55 -0700 Subject: [Numpy-discussion] core dump when runniong tests Message-ID: <44906C5F.9080901@jpl.nasa.gov> I consistently get a core dump when I do the following 1) from the console I do >import numpy >numpy.test(level=1,verbosity=2) >numpy.test(level=1,verbosity=2) >numpy.test(level=1,verbosity=2) the third time (and only the third) I get a core dump in test_types. It happens on the line val = vala+valb when k=2 atype= uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j) Any help in debugging this? Mathew From haase at msg.ucsf.edu Wed Jun 14 16:12:58 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 14 Jun 2006 13:12:58 -0700 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: <200606141312.58770.haase@msg.ucsf.edu> Hi, Thanks for the reply. Just for general enjoyment: I found a solution: It seems that substituting N.exp(-700) by N.e ** -700 changes the behaviour for the better ... Thanks, Sebastian Haase On Monday 12 June 2006 15:19, Sasha wrote: > BTW, here is the relevant explanation from mathmodule.c: > > /* ANSI C generally requires libm functions to set ERANGE > * on overflow, but also generally *allows* them to set > * ERANGE on underflow too. There's no consistency about > * the latter across platforms. > * Alas, C99 never requires that errno be set. > * Here we suppress the underflow errors (libm functions > * should return a zero on underflow, and +- HUGE_VAL on > * overflow, so testing the result for zero suffices to > * distinguish the cases). > */ > > On 6/12/06, Sasha wrote: > > I don't know about numarray, but the difference between Numeric and > > the Python math module stems from the fact that the math module ignores > > errno set by the C library and only checks for infinity.
Numeric relies > > on errno exclusively, numpy ignores errors by default: > > >>> import numpy,math,Numeric > > >>> numpy.exp(-760) > > > > 0.0 > > > > >>> math.exp(-760) > > > > 0.0 > > > > >>> Numeric.exp(-760) > > > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > OverflowError: math range error > > > > >>> numpy.exp(760) > > > > inf > > > > >>> math.exp(760) > > > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > OverflowError: math range error > > > > >>> Numeric.exp(760) > > > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > OverflowError: math range error > > > > I would say it's a bug in Numeric, so you are out of luck. > > > > Unfortunately, even MA.exp(-760) does not work, but this is easy to fix: > > >>> exp = MA.masked_unary_operation(Numeric.exp, 0.0, MA.domain_check_interval(-100, 100)) > > >>> exp(-760).filled() > > > > 0 > > > > You would need to replace -100,100 with the bounds appropriate for your > > system. > > > > On 6/12/06, Sebastian Haase wrote: > > > Hi, > > > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient > > > way to do a non-linear minimization. It uses the "old" Numeric module. > > > But since I upgraded to Numeric 24.2 I get OverflowErrors that I > > > tracked down to > > > > > > >>> Numeric.exp(-760.) > > > > > > Traceback (most recent call last): > > > File "<stdin>", line 1, in ? > > > OverflowError: math range error > > > > > > From numarray I'm used to getting this: > > > >>> na.exp(-760) > > > > > > 0.0 > > > > > > Mostly I'm confused because my code worked before I upgraded to version > > > 24.2. > > > > > > Thanks for any hints on how I could revive my code...
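Numeric itself is no longer maintained, but the math-module and numpy halves of the comparison quoted above can still be reproduced on a current install; a small sketch:

```python
# Underflow is silently flushed to zero by both the math module and numpy;
# overflow raises OverflowError in math but yields inf in numpy (with a
# warning that np.errstate can control).
import math

import numpy as np

assert math.exp(-760) == 0.0     # underflow -> 0.0, no exception
assert np.exp(-760.0) == 0.0

try:
    math.exp(760)
except OverflowError as exc:
    print("math.exp(760):", exc)          # math range error

with np.errstate(over="ignore"):
    print("np.exp(760):", np.exp(760.0))  # inf
```

This matches the explanation quoted from mathmodule.c: underflow errors are suppressed, while overflow is reported, and numpy layers its own error-handling policy on top.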
> > > -Sebastian Haase From myeates at jpl.nasa.gov Wed Jun 14 17:06:13 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 14 Jun 2006 14:06:13 -0700 Subject: [Numpy-discussion] core dump when runniong tests In-Reply-To: <44906C5F.9080901@jpl.nasa.gov> References: <44906C5F.9080901@jpl.nasa.gov> Message-ID: <44907A45.9070603@jpl.nasa.gov> Travis suggested I use svn and this worked! Thanks Travis! I'm now getting 1 test failure. I'd love to dot this 'i' ====================================================================== FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) ---------------------------------------------------------------------- Traceback (most recent call last): File "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 42, in check_large_types assert b == 6765201, "error with %r: got %r" % (t,b) AssertionError: error with : got 6765201.00000000000364 ---------------------------------------------------------------------- Ran 377 tests in 0.347s FAILED (failures=1) Mathew Yeates wrote: > I consistently core dump when I do the following > 1) from the console I do > >import numpy > >numpy.test(level=1,verbosity=2) > >numpy.test(level=1,verbosity=2) > >numpy.test(level=1,verbosity=2) > > the third time (and only the third) I get a core dump in test_types. It > happens on the line > val = vala+valb > when k=2 atype= uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j) > > Any help in debugging this? > Mathew > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From cookedm at physics.mcmaster.ca Wed Jun 14 23:13:25 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 14 Jun 2006 23:13:25 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft Message-ID: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> After working with them for a while, I'm going to go on record and say that I prefer the long names from Numeric and numarray (like linear_least_squares, inverse_real_fft, etc.), as opposed to the short names now used by default in numpy (lstsq, irefft, etc.). I know you can get the long names from numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better defaults. Abbreviations aren't necessarily unique (quick! what does eig() return by default?), and aren't necessarily obvious. A Google search for irfft vs. irefft for instance turns up only the numpy code as (English) matches for irefft, while irfft is much more common. Also, Numeric and numarray compatibility is increased by using the long names: those two don't have the short ones. Fitting names into 6 characters went out of style decades ago. (I think MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) My 2 cents... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sransom at nrao.edu Wed Jun 14 23:20:54 2006 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 14 Jun 2006 23:20:54 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060615032054.GA19076@ssh.cv.nrao.edu> I'll add my 2 cents to this and agree with David. Arguments about how short names are important for interactive work are pretty bogus given the beauty of modern tab-completion. And I'm not sure what other arguments there are... Scott On Wed, Jun 14, 2006 at 11:13:25PM -0400, David M. 
Cooke wrote: > After working with them for a while, I'm going to go on record and say that I > prefer the long names from Numeric and numarray (like linear_least_squares, > inverse_real_fft, etc.), as opposed to the short names now used by default in > numpy (lstsq, irefft, etc.). I know you can get the long names from > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > defaults. > > Abbreviations aren't necessarily unique (quick! what does eig() return by > default?), and aren't necessarily obvious. A Google search for irfft vs. > irefft for instance turns up only the numpy code as (English) matches for > irefft, while irfft is much more common. > > Also, Numeric and numarray compatibility is increased by using the long > names: those two don't have the short ones. > > Fitting names into 6 characters went out of style decades ago. (I think > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > > My 2 cents... > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From ndarray at mac.com Wed Jun 14 23:46:27 2006 From: ndarray at mac.com (Sasha) Date: Wed, 14 Jun 2006 23:46:27 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: On 6/14/06, David M. 
Cooke wrote: > After working with them for a while, I'm going to go on record and say that I > prefer the long names from Numeric and numarray (like linear_least_squares, > inverse_real_fft, etc.), as opposed to the short names now used by default in > numpy (lstsq, irefft, etc.). I know you can get the long names from > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > defaults. > I agree in spirit, but note that inverse_real_fft is still short for inverse_real_fast_fourier_transform. Presumably, fft is a proper noun in many people's vocabularies, but so may be lstsq depending on who you ask. > Abbreviations aren't necessarily unique (quick! what does eig() return by > default?), and aren't necessarily obvious. A Google search for irfft vs. > irefft for instance turns up only the numpy code as (English) matches for > irefft, while irfft is much more common. > Short names have one important advantage in scientific languages: they look good in expressions. What is easier to understand: hyperbolic_tangent(x) = hyperbolic_sinus(x)/hyperbolic_cosinus(x) or tanh(x) = sinh(x)/cosh(x) ? I am playing devil's advocate here a little because personally, I always recommend the following as a compromise: sinh = hyperbolic_sinus ... tanh(x) = sinh(x)/cosh(x) But the next question is where to put "sinh = hyperbolic_sinus": right before the expression using sinh? at the top of the module (import hyperbolic_sinus as sinh)? in the math library? If you pick the last option, do you need hyperbolic_sinus to begin with? If you pick any other option, how do you prevent others from writing sh = hyperbolic_sinus instead of sinh? > Also, Numeric and numarray compatibility is increased by using the long > names: those two don't have the short ones. > > Fitting names into 6 characters went out of style decades ago. (I think > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > Short names are still popular in scientific programming: . 
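[Editor's note: the compromise sketched above (bind the long name once, use the short alias inside expressions) is trivial to set up; the sketch below uses the stdlib math module, with hyperbolic_sinus etc. as the illustrative long names from the post, not real library names.]

```python
import math

# Long, self-documenting bindings (names taken from the post above)...
hyperbolic_sinus = math.sinh
hyperbolic_cosinus = math.cosh

# ...and the conventional short aliases for use inside expressions.
sinh = hyperbolic_sinus
cosh = hyperbolic_cosinus

x = 0.5
# tanh(x) = sinh(x)/cosh(x): the identity stays readable with short names.
assert abs(math.tanh(x) - sinh(x) / cosh(x)) < 1e-12
```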
I am still +1 for keeping linear_least_squares and inverse_real_fft, but not just because abbreviations are bad as such - if an established acronym such as fft exists we should be free to use it. From pfdubois at gmail.com Thu Jun 15 00:47:20 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Wed, 14 Jun 2006 21:47:20 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: Bertrand Meyer has pointed out that abbreviations are usually a bad idea. The problem is that abbreviations are not unique so you can't guess what they are. Whereas (modulo some library-wide conventions about names) linearLeastSquares or the like is unique. At the very least you're more likely to get it right. Any python user can abbreviate anything they like any way they like for interactive work. And yes, I think FFT is a name. (:-> Exception for that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sransom at nrao.edu Thu Jun 15 00:52:55 2006 From: sransom at nrao.edu (Scott Ransom) Date: Thu, 15 Jun 2006 00:52:55 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060615045254.GA31694@ssh.cv.nrao.edu> On Wed, Jun 14, 2006 at 09:47:20PM -0700, Paul Dubois wrote: > And yes, I think FFT is a name. (:-> Exception for that. I agree. As are sinh, cosh, tanh, sinc, exp, log10 and various other very commonly used (and not only in programming) names. lstsq, eig, irefft, etc are not. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. 
email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From josh8912 at yahoo.com Thu Jun 15 01:13:06 2006 From: josh8912 at yahoo.com (JJ) Date: Wed, 14 Jun 2006 22:13:06 -0700 (PDT) Subject: [Numpy-discussion] acml and numpy install problems Message-ID: <20060615051306.1788.qmail@web51711.mail.yahoo.com> Hello. I wrote to the list about a week ago regarding slow speed of numpy relative to matlab. I'm fairly sure that my installation of numpy had problems. So I am trying this time with the acml libraries for my AMD Athlon 64-bit machine. New machine with FC_5. I was able to install the acml libraries without much trouble, and install UMFPACK and AMD without apparent errors. But I did have many errors when I tried to install numpy. My install messages are copied below. Apparently, numpy does see the acml libraries but finds them faulty, or something. I could use some clues if anyone has any. Also, I did set: setenv LD_LIBRARY_PATH /opt/acml3.1.0/gnu64/lib # setenv LD_RUN_PATH /opt/acml3.1.0/gnu64/lib Here is my config file: ----------------------------------- [atlas] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [blas] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [laplack] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [amd] library_dirs = /usr/local/scipy/AMD/Lib include_dirs = /usr/local/scipy/AMD/Include amd_libs = amd language =c [umfpack] library_dirs = /usr/local/scipy/UMFPACK/Lib include_dirs = /usr/local/scipy/UMFPACK/Include umfpack_libs = umfpack language = c ------------------------------------ I have set symbolic links between lacml and libacml. 
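[Editor's note: one detail worth double-checking in the config above: the section header is spelled [laplack], but the warning later in this build log names the section numpy actually reads as [lapack]. If that spelling is the culprit, the corrected section would look like the fragment below; the paths and keys are copied verbatim from the post, and whether atlas_libs is the right key for an ACML build is beyond what the log shows.]

```ini
[lapack]
library_dirs = /opt/acml3.1.0/gnu64/lib
include_dirs = /opt/acml3.1.0/gnu64/include
atlas_libs = acml
language = f77
```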
Here is the first half of the output, where most of the errors are: -------------------------------- [root at fedora-newamd numpy]# python setup.py install Running from numpy source directory. No module named __svn_version__ F2PY Version 2_2624 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/opt/acml3.1.0/gnu64/lib -lacml -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `z_abs' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined ... ... 
reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' ... ... reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Status: 255 Output: FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c define_macros = [('NO_ATLAS_INFO', 2)] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /opt/acml3.1.0/gnu64/lib libraries lapack not found in /opt/acml3.1.0/gnu64/lib libraries acml not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries acml not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS /usr/local/numpy/numpy/distutils/system_info.py:881: UserWarning: ********************************************************************* Could not find lapack library within the ATLAS installation. 
********************************************************************* warnings.warn(message) Setting PTATLAS=ATLAS FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c define_macros = [('ATLAS_WITHOUT_LAPACK', None)] customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/opt/acml3.1.0/gnu64/lib -lacml -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' ... ... /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `acos' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Status: 255 Output: lapack_info: libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1163: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. 
warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1166: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) NOT AVAILABLE running install running build running config_fc running build_src building py_modules sources creating build creating build/src.linux-x86_64-2.4 creating build/src.linux-x86_64-2.4/numpy creating build/src.linux-x86_64-2.4/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.linux-x86_64-2.4/numpy/core Generating build/src.linux-x86_64-2.4/numpy/core/config.h customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-I/usr/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ‘main’: _configtest.c:50: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ _configtest.c:57: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ _configtest.c:72: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ gcc -pthread _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest /usr/bin/ld: skipping incompatible /usr/lib/libpthread.so when searching for -lpthread /usr/bin/ld: skipping incompatible /usr/lib/libpthread.a when searching for -lpthread /usr/bin/ld: skipping incompatible /usr/lib/libc.so when searching for -lc /usr/bin/ld: skipping incompatible /usr/lib/libc.a when searching for -lc _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -lm -o _configtest _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ‘main’: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ‘main’: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. creating build/src.linux-x86_64-2.4/numpy/core/src conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/umathmodule.c adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs. 
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/_sortmodule.c numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalarmathmodule.c numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._dotblas" sources adding 'numpy/core/blasdot/_dotblas.c' to sources. building extension "numpy.lib._compiled_base" sources building extension "numpy.dft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources creating build/src.linux-x86_64-2.4/numpy/linalg ### Warning: Using unoptimized lapack ### --------------------------------------------- Any ideas? I am still a novice and could use some suggestions. Thanks much. JJ __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From saagesen at sfu.ca Thu Jun 15 01:21:46 2006 From: saagesen at sfu.ca (saagesen at sfu.ca) Date: Wed, 14 Jun 2006 22:21:46 -0700 Subject: [Numpy-discussion] memory leak in array Message-ID: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> An embedded and charset-unspecified text was scrubbed... Name: not available URL: From djm at mindrot.org Thu Jun 15 01:22:57 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 15:22:57 +1000 (EST) Subject: [Numpy-discussion] numpy segv on OpenBSD Message-ID: Hi, I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a malloc fault in the check_types() self-test as it tries to free() a junk pointer. In case you are not aware, OpenBSD's malloc() implementation does a fair bit of randomisation that makes it (deliberately) sensitive to memory management errors. Instrumenting the check_types test and scalartypes.inc.src's gen_dealloc() and gen_alloc() functions I noticed that the error occurs after calling gen_dealloc() on a complex128scalar that was created as check_types's "valb" variable as it is GC'd. The check_types tests work fine on the complex64scalar type and all the other preceding types. I'm not familiar with the guts of numpy at all (and I can't even find the declaration of the complex128scalar type in the source). What difference between complex64scalar and complex128scalar should I look for to debug this further? A backtrace is below for the curious. 
-d (gdb) bt #0 0x0ff49975 in kill () from /usr/lib/libc.so.39.1 #1 0x0ff822c3 in abort () at /usr/src/lib/libc/stdlib/abort.c:65 #2 0x0ff69649 in wrterror (p=0x2ff18460 "free_pages: pointer to wrong page") at /usr/src/lib/libc/stdlib/malloc.c:434 #3 0x0ff6970b in wrtwarning (p=0x2ff18460 "free_pages: pointer to wrong page") at /usr/src/lib/libc/stdlib/malloc.c:444 #4 0x0ff6ac53 in free_pages (ptr=0x7e0033b0, index=516111, info=0x0) at /usr/src/lib/libc/stdlib/malloc.c:1343 #5 0x0ff6a6f4 in ifree (ptr=0x7e0033b0) at /usr/src/lib/libc/stdlib/malloc.c:1770 #6 0x0ff6a8d1 in free (ptr=0x7e0033b0) at /usr/src/lib/libc/stdlib/malloc.c:1838 #7 0x0d259117 in gentype_dealloc (v=0x7e0033b0) at scalartypes.inc.src:283 #8 0x0c5fc778 in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #9 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #10 0x0c60072f in fast_function () from /usr/local/lib/libpython2.4.so.0.0 #11 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0 #12 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #13 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #14 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0 #15 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #16 0x0c600c6b in ext_do_call () from /usr/local/lib/libpython2.4.so.0.0 #17 0x0c5fe83c in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 ---Type to continue, or q to quit--- #18 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #19 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0 #20 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #21 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0 #22 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #23 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0 #24 0x0c6002fa in 
call_function () from /usr/local/lib/libpython2.4.so.0.0 #25 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #26 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #27 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0 #28 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #29 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0 #30 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #31 0x0c5e5c9f in slot_tp_call () from /usr/local/lib/libpython2.4.so.0.0 #32 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #33 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0 #34 0x0c6002fa in call_function () from /usr/local/lib/libpython2.4.so.0.0 #35 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #36 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #37 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0 #38 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 ---Type to continue, or q to quit--- #39 0x0c600c6b in ext_do_call () from /usr/local/lib/libpython2.4.so.0.0 #40 0x0c5fe83c in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #41 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #42 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0 #43 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #44 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0 #45 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #46 0x0c5e5c9f in slot_tp_call () from /usr/local/lib/libpython2.4.so.0.0 #47 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0 #48 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0 #49 0x0c6002fa in call_function () from /usr/local/lib/libpython2.4.so.0.0 #50 
0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #51 0x0c6007b0 in fast_function () from /usr/local/lib/libpython2.4.so.0.0 #52 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0 #53 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #54 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #55 0x0c60072f in fast_function () from /usr/local/lib/libpython2.4.so.0.0 #56 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0 #57 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0 #58 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0 #59 0x0c5fc1a7 in PyEval_EvalCode () from /usr/local/lib/libpython2.4.so.0.0 #60 0x0c61d060 in run_node () from /usr/local/lib/libpython2.4.so.0.0 ---Type to continue, or q to quit--- #61 0x0c61c0b1 in PyRun_SimpleFileExFlags () from /usr/local/lib/libpython2.4.so.0.0 #62 0x0c61ba49 in PyRun_AnyFileExFlags () from /usr/local/lib/libpython2.4.so.0.0 #63 0x0c622bab in Py_Main () from /usr/local/lib/libpython2.4.so.0.0 #64 0x1c000d60 in main () From djm at mindrot.org Thu Jun 15 01:24:08 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 15:24:08 +1000 (EST) Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: On Thu, 15 Jun 2006, Damien Miller wrote: > Hi, > > I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a bah, I'm actually using numpy-0.9.8 (not 0.9.5). 
-d From robert.kern at gmail.com Thu Jun 15 01:38:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Jun 2006 00:38:41 -0500 Subject: [Numpy-discussion] memory leak in array In-Reply-To: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> References: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> Message-ID: saagesen at sfu.ca wrote: > Update: I posted this message on the comp.lang.python forum and their > response was to get the numbers of references with sys.getrefcount(obj). > After doing this I see that iterative counters used to count occurrences > and nested loop counters (ii & jj) as seen in the code example below are the > culprits with the worst ones over 1M: > > for ii in xrange(0,40): > for jj in xrange(0,20): Where are you getting this 1M figure? Is that supposed to mean "1 Megabyte of memory"? Because they don't consume that much memory. In fact, all of the small integers between -1 and 100, I believe (but certainly all of them in xrange(0, 40)) are shared. There is only one 0 object and only one 10 object, etc. That is why their refcount is so high. You're going down a dead end here. > try: > nc = y[a+ii,b+jj] > except IndexError: nc = 0 > > if nc == "1" or nc == "5": What is the dtype of y? You are testing for strings, but assigning integers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Thu Jun 15 01:44:54 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 15 Jun 2006 01:44:54 -0400 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> On Thu, 15 Jun 2006 15:22:57 +1000 (EST) Damien Miller wrote: > Hi, > > I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a > malloc fault in the check_types() self-test as it tries to free() a junk > pointer. In case you are not aware, OpenBSD's malloc() implementation > does a fair bit of randomisation that makes it (deliberately) sensitive > to memory management errors. > > Instrumenting the check_types test and scalartypes.inc.src's > gen_dealloc() and gen_alloc() functions I noticed that the error occurs > after calling gen_dealloc() on a complex128scalar that was created as > check_types's "valb" variable as it is GC'd. > > The check_types tests work fine on the complex64scalar type and all > the other preceding types. I'm not familiar with the guts of numpy > at all (and I can't even find the declaration of the complex128scalar > type in the source). What difference between complex64scalar and > complex128scalar should I look for to debug this further? Can you update to the latest svn? We may have fixed it already: valgrind is showing up nothing for me. A complex128scalar is a complex number made up of doubles (float64); a complex64 is one of floats (float32). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Thu Jun 15 01:47:41 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu, 15 Jun 2006 01:47:41 -0400 Subject: [Numpy-discussion] core dump when runniong tests In-Reply-To: <44907A45.9070603@jpl.nasa.gov> References: <44906C5F.9080901@jpl.nasa.gov> <44907A45.9070603@jpl.nasa.gov> Message-ID: <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> On Wed, 14 Jun 2006 14:06:13 -0700 Mathew Yeates wrote: > Travis suggested I use svn and this worked! > Thanks Travis! > > I'm now getting 1 test failure. I'd love to dot this 'i' > > ====================================================================== > FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line > 42, in check_large_types > assert b == 6765201, "error with %r: got %r" % (t,b) > AssertionError: error with : got > 6765201.00000000000364 > > ---------------------------------------------------------------------- > Ran 377 tests in 0.347s > > FAILED (failures=1) I'm guessing the C powl function isn't good enough on your machine. What OS are you running? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Thu Jun 15 01:57:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 14 Jun 2006 23:57:08 -0600 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: <4490F6B4.9060309@ieee.org> Damien Miller wrote: > Hi, > > I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a > malloc fault in the check_types() self-test as it tries to free() a junk > pointer. In case you are not aware, OpenBSD's malloc() implementation > does a fair bit of randomisation that makes it (deliberately) sensitive > to memory management errors. 
> This problem has been worked around in NumPy SVN. It is a problem with Python that has been fixed in Python SVN as well. You can either comment-out the test or update to latest SVN. -Travis From djm at mindrot.org Thu Jun 15 04:56:29 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 18:56:29 +1000 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> Message-ID: <449120BD.2070601@mindrot.org> David M. Cooke wrote: > Can you update to the latest svn? We may have fixed it already: valgrind is > showing up nothing for me. Ok, but dumb question: how do I check out the SVN trunk? Sourceforge lists details for CVS only... (unless I'm missing something) -d From arnd.baecker at web.de Thu Jun 15 05:03:20 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Jun 2006 11:03:20 +0200 (CEST) Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <449120BD.2070601@mindrot.org> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> <449120BD.2070601@mindrot.org> Message-ID: On Thu, 15 Jun 2006, Damien Miller wrote: > David M. Cooke wrote: > > Can you update to the latest svn? We may have fixed it already: valgrind is > > showing up nothing for me. > > Ok, but dumb question: how do I check out the SVN trunk? Sourceforge > lists details for CVS only... (unless I'm missing something) See "Bleeding-edge repository access" under http://www.scipy.org/Download I.e. 
for numpy: svn co http://svn.scipy.org/svn/numpy/trunk numpy Best, Arnd From fperez.net at gmail.com Thu Jun 15 05:03:25 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Jun 2006 03:03:25 -0600 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <449120BD.2070601@mindrot.org> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> <449120BD.2070601@mindrot.org> Message-ID: On 6/15/06, Damien Miller wrote: > David M. Cooke wrote: > > Can you update to the latest svn? We may have fixed it already: valgrind is > > showing up nothing for me. > > Ok, but dumb question: how do I check out the SVN trunk? Sourceforge > lists details for CVS only... (unless I'm missing something) http://scipy.org/Developer_Zone Cheers, f From djm at mindrot.org Thu Jun 15 05:13:53 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 19:13:53 +1000 Subject: [Numpy-discussion] Disable linking against external libs Message-ID: <449124D1.7020504@mindrot.org> Hi, What is the intended way to disable linking against installed libraries (blas, lapack, etc) in site.cfg? I know I can do: [blas] blah_libs = XXXnonexistXXX but that strikes me as less than elegant. FYI I want to do this to make package building deterministic; not varying based on what the package builder happens to have installed on his/her machine -d From chanley at stsci.edu Thu Jun 15 08:53:30 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 15 Jun 2006 08:53:30 -0400 Subject: [Numpy-discussion] numpy.test() fails on Redhat Enterprise and Solaris In-Reply-To: <4490C741.9000009@ieee.org> References: <20060614111740.CJQ36789@comet.stsci.edu> <4490C741.9000009@ieee.org> Message-ID: <4491584A.7090301@stsci.edu> The last successful run was with revision 2613. However, revision 2624 appears to have corrected the problem on Solaris. Thanks, Chris Travis Oliphant wrote: > Christopher Hanley wrote: > >> The daily numpy build and tests I run have failed for revision 2617. 
>> Below is the error message I receive on my RHE 3 box: >> >> ====================================================================== >> FAIL: Check reading the nested fields of a nested array (1st level) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): File >> "/data/sparty1/dev/site-packages/lib/python/numpy/core/tests/test_numerictypes.py", >> line 283, in check_nested1_acessors dtype='U2')) File >> "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", >> line 139, in assert_equal return assert_array_equal(actual, >> desired, err_msg) File >> "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", >> line 215, in assert_array_equal verbose=verbose, header='Arrays are >> not equal') File >> "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", >> line 207, in assert_array_compare assert cond, msg AssertionError: >> Arrays are not equal >> (mismatch 100.0%) x: array([u'NN', u'OO'], dtype='> array([u'NN', u'OO'], dtype='> >> On my Solaris 8 box this same test causes a bus error: >> >> Check creation of single-dimensional objects ... ok Check creation of >> 0-dimensional objects ... ok Check creation of multi-dimensional >> objects ... ok Check creation of single-dimensional objects ... ok >> Check reading the top fields of a nested array ... ok Check reading >> the nested fields of a nested array (1st level)Bus Error (core dumped) >> >> > > Do you know when was the last successful run? I think I know what may > be causing this, but the change was introduced several weeks ago... 
> > -Travis > From alexander.belopolsky at gmail.com Thu Jun 15 09:15:55 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 15 Jun 2006 09:15:55 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: On 6/15/06, Paul Dubois wrote: > And yes, I think FFT is a name. (:-> Exception for that. There are more exceptions that Numeric is not taking advantage of: equal, less, greater, ... -> eq, lt, gt, ... inverse, generalized_inverse -> inv, pinv In my view it is more important that code is easy to read rather than easy to write. Interactive users will disagree, but in programming you write once and read/edit forever :). Again, there is no defense for abbreviating linear_least_squares because it is unlikely to appear in an expression and waste valuable horizontal space. Contracting generalized_inverse is appropriate and numpy does the right thing in this case. The eig.., svd and cholesky choice of names is unfortunate because three different abbreviation schemes are used: first syllable, acronym and first word. I would say: when in doubt spell it in full. From emsellem at obs.univ-lyon1.fr Thu Jun 15 09:35:20 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 15 Jun 2006 15:35:20 +0200 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question Message-ID: <44916218.9060100@obs.univ-lyon1.fr> Hi, I have written a number of small modules where I now systematically use numpy. I have in principle used the latest versions of the different array/Science modules (scipy, numpy, ..) but still at some point during a selection, it crashes on numpy because it seems that the array correspond to "numarray" arrays. e.g.: ################################## selection = (rell >= 1.) * (rell < ES0.maxEFFR[indgal]) ################################## ### rell is an array of reals and ES0.maxEFFR[indgal] is a real number.
gives the error: ========== /usr/local/lib/python2.4/site-packages/numarray/numarraycore.py:376: UserWarning: __array__ returned non-NumArray instance _warnings.warn("__array__ returned non-NumArray instance") /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _cache_miss2(self, n1, n2, out) 919 (in1, in2), inform, scalar = _inputcheck(n1, n2) 920 --> 921 mode, win1, win2, wout, cfunc, ufargs = \ 922 self._setup(in1, in2, inform, out) 923 /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _setup(self, in1, in2, inform, out) 965 if out is None: wout = in2.new(outtypes[0]) 966 if inform == "vv": --> 967 intypes = (in1._type, in2._type) 968 inarr1, inarr2 = in1._dualbroadcast(in2) 969 fform, convtypes, outtypes, cfunc = self._typematch_N(intypes, inform) AttributeError: 'numpy.ndarray' object has no attribute '_type' ================================================ QUESTION 1: Any hint on where numarray could still be appearing? QUESTION 2: how would you make a selection using "and" and "or" such as: selection = (condition 1) "and" (condition2 "or" condition3) so that "selection" contains 0 and 1 according to the right hand side. Thanks, Eric P.S.: my config is: matplotlib version 0.87.3 verbose.level helpful interactive is False platform is linux2 numerix numpy 0.9.9.2624 font search path ['/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data'] backend GTKAgg version 2.8.2 Python 2.4.2 (#1, May 2 2006, 08:13:46) IPython 0.7.2 -- An enhanced Interactive Python. I am using numerix = numpy in matplotlibrc. I am also using NUMERIX = numpy when building pyfits. -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. 
Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From Glen.Mabey at swri.org Thu Jun 15 10:04:27 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 15 Jun 2006 09:04:27 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org Message-ID: <20060615140427.GA26421@bams.swri.edu> Hello, I am attempting to use the svn versions of numpy and scipy, but apparently (according to http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a less-than-agreeable web proxy, because I get $ svn co http://svn.scipy.org/svn/numpy/trunk numpy svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request (http://svn.scipy.org) The solution suggested in the above URL is to use https instead, however, when I attempt this $ svn co https://svn.scipy.org/svn/numpy/trunk numpy svn: PROPFIND request failed on '/svn/numpy/trunk' svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed (https://svn.scipy.org) it appears that svn.scipy.org is not setup to employ SSL. Is this an easy thing to do? Please forgive me if this is just an issue of svn-ignorance on my part. Thanks, Glen Mabey From jstrunk at enthought.com Thu Jun 15 12:58:55 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 15 Jun 2006 11:58:55 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <20060615140427.GA26421@bams.swri.edu> References: <20060615140427.GA26421@bams.swri.edu> Message-ID: <200606151158.55856.jstrunk@enthought.com> Hi Glen, I'll see about enabling SSL for svn on svn.scipy.org. Jeff Strunk IT Administrator Enthought, Inc. On Thursday 15 June 2006 9:04 am, Glen W. 
Mabey wrote: > Hello, > > I am attempting to use the svn versions of numpy and scipy, but > apparently (according to > http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a > less-than-agreeable web proxy, because I get > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request > (http://svn.scipy.org) > > The solution suggested in the above URL is to use https instead, > however, when I attempt this > > $ svn co https://svn.scipy.org/svn/numpy/trunk numpy > svn: PROPFIND request failed on '/svn/numpy/trunk' > svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed > (https://svn.scipy.org) > > it appears that svn.scipy.org is not setup to employ SSL. Is this an > easy thing to do? > > Please forgive me if this is just an issue of svn-ignorance on my part. > > Thanks, > Glen Mabey > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From jstrunk at enthought.com Thu Jun 15 13:02:42 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 15 Jun 2006 12:02:42 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <200606151158.55856.jstrunk@enthought.com> References: <20060615140427.GA26421@bams.swri.edu> <200606151158.55856.jstrunk@enthought.com> Message-ID: <200606151202.42999.jstrunk@enthought.com> svn over https works now. Jeff Strunk IT Administrator Enthought, Inc On Thursday 15 June 2006 11:58 am, Jeff Strunk wrote: > Hi Glen, > > I'll see about enabling SSL for svn on svn.scipy.org. > > Jeff Strunk > IT Administrator > Enthought, Inc. > > On Thursday 15 June 2006 9:04 am, Glen W. 
Mabey wrote: > > Hello, > > > > I am attempting to use the svn versions of numpy and scipy, but > > apparently (according to > > http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a > > less-than-agreeable web proxy, because I get > > > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > > svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request > > (http://svn.scipy.org) > > > > The solution suggested in the above URL is to use https instead, > > however, when I attempt this > > > > $ svn co https://svn.scipy.org/svn/numpy/trunk numpy > > svn: PROPFIND request failed on '/svn/numpy/trunk' > > svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed > > (https://svn.scipy.org) > > > > it appears that svn.scipy.org is not setup to employ SSL. Is this an > > easy thing to do? > > > > Please forgive me if this is just an issue of svn-ignorance on my part. > > > > Thanks, > > Glen Mabey > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From Glen.Mabey at swri.org Thu Jun 15 13:06:06 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 15 Jun 2006 12:06:06 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <200606151202.42999.jstrunk@enthought.com> References: <20060615140427.GA26421@bams.swri.edu> <200606151158.55856.jstrunk@enthought.com> <200606151202.42999.jstrunk@enthought.com> Message-ID: <20060615170606.GA26475@bams.swri.edu> On Thu, Jun 15, 2006 at 12:02:42PM -0500, Jeff Strunk wrote: > svn over https works now. Thanks Jeff -- that solved my svn woes. 
Glen From fperez.net at gmail.com Thu Jun 15 13:25:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Jun 2006 11:25:08 -0600 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: <44916218.9060100@obs.univ-lyon1.fr> References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: On 6/15/06, Eric Emsellem wrote: > Hi, > > I have written a number of small modules where I now systematically use > numpy. > > I have in principle used the latest versions of the different > array/Science modules (scipy, numpy, ..) but still at some point during > a selection, it crashes on numpy because it seems that the array > correspond to "numarray" arrays. [...] > QUESTION 1: Any hint on where numarray could still be appearing? Not a final answer, but I've had the same thing happen to me recently (I'm making the transition right now) with extension modules which were built against Numeric (in my case). They return old Numeric arrays (I had 23.7, without the array interface) and numpy is not happy. Rebuilding all my extensions against numpy fixed the problem. Cheers, f From bhendrix at enthought.com Thu Jun 15 13:41:10 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Thu, 15 Jun 2006 12:41:10 -0500 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: <44919BB6.6050901@enthought.com> We've had the same problem many times. There were a few causes: * Our clean scripts don't delete c++ files, so generated code was often not re-generated when we switched to numpy * Files to generate code had numeric arrays hardcoded * we were using numerix, and the env var was not set for part of the build How I generally detect the problem is by deleting the numeric/numarray package directories, then running python with the verbose flag. 
Bryce Fernando Perez wrote: > On 6/15/06, Eric Emsellem wrote: > >> Hi, >> >> I have written a number of small modules where I now systematically use >> numpy. >> >> I have in principle used the latest versions of the different >> array/Science modules (scipy, numpy, ..) but still at some point during >> a selection, it crashes on numpy because it seems that the array >> correspond to "numarray" arrays. >> > > [...] > > >> QUESTION 1: Any hint on where numarray could still be appearing? >> > > Not a final answer, but I've had the same thing happen to me recently > (I'm making the transition right now) with extension modules which > were built against Numeric (in my case). They return old Numeric > arrays (I had 23.7, without the array interface) and numpy is not > happy. > > Rebuilding all my extensions against numpy fixed the problem. > > Cheers, > > f > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Thu Jun 15 15:17:16 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 15 Jun 2006 12:17:16 -0700 Subject: [Numpy-discussion] core dump when runniong tests In-Reply-To: <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> References: <44906C5F.9080901@jpl.nasa.gov> <44907A45.9070603@jpl.nasa.gov> <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> Message-ID: <4491B23C.2040303@jpl.nasa.gov> SunOS 5.10 Generic_118844-20 i86pc i386 i86pcSystem = SunOS David M. Cooke wrote: > On Wed, 14 Jun 2006 14:06:13 -0700 > Mathew Yeates wrote: > > >> Travis suggested I use svn and this worked! >> Thanks Travis! >> >> I'm now getting 1 test failure. 
I'd love to dot this 'i' >> >> ====================================================================== >> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line >> 42, in check_large_types >> assert b == 6765201, "error with %r: got %r" % (t,b) >> AssertionError: error with : got >> 6765201.00000000000364 >> >> ---------------------------------------------------------------------- >> Ran 377 tests in 0.347s >> >> FAILED (failures=1) >> > > I'm guessing the C powl function isn't good enough on your machine. > > What OS are you running? > > From humufr at yahoo.fr Thu Jun 15 17:06:14 2006 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Thu, 15 Jun 2006 14:06:14 -0700 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: <44916218.9060100@obs.univ-lyon1.fr> References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: <200606151406.14939.humufr@yahoo.fr> Just a guess: you're reading some fits file with pyfits but you didn't declare the variable NUMERIX for numpy (with the beta version of pyfits), or your script is calling another script that is using numarray. I had both problems last week: pyfits with a mix of numarray/numpy, and a script to read some data and return it like an array. N. On Thursday 15 June 2006 06:35, Eric Emsellem wrote: > Hi, > > I have written a number of small modules where I now systematically use > numpy. > > I have in principle used the latest versions of the different > array/Science modules (scipy, numpy, ..) but still at some point during > a selection, it crashes on numpy because it seems that the array > correspond to "numarray" arrays. > > e.g.: > ################################## > selection = (rell >= 1.)
* (rell < ES0.maxEFFR[indgal]) > ################################## > ### rell is an array of reals and ES0.maxEFFR[indgal] is a real number. > > gives the error: > ========== > /usr/local/lib/python2.4/site-packages/numarray/numarraycore.py:376: > UserWarning: __array__ returned non-NumArray instance > _warnings.warn("__array__ returned non-NumArray instance") > /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in > _cache_miss2(self, n1, n2, out) > 919 (in1, in2), inform, scalar = _inputcheck(n1, n2) > 920 > --> 921 mode, win1, win2, wout, cfunc, ufargs = \ > 922 self._setup(in1, in2, inform, out) > 923 > > /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _setup(self, > in1, in2, inform, out) > 965 if out is None: wout = in2.new(outtypes[0]) > 966 if inform == "vv": > --> 967 intypes = (in1._type, in2._type) > 968 inarr1, inarr2 = in1._dualbroadcast(in2) > 969 fform, convtypes, outtypes, cfunc = > self._typematch_N(intypes, inform) > > AttributeError: 'numpy.ndarray' object has no attribute '_type' > ================================================ > > QUESTION 1: Any hint on where numarray could still be appearing? > > QUESTION 2: how would you make a selection using "and" and "or" such as: > selection = (condition 1) "and" (condition2 "or" > condition3) so that "selection" contains 0 and 1 according to the right > hand side. > > Thanks, > > Eric > P.S.: > my config is: > > matplotlib version 0.87.3 > verbose.level helpful > interactive is False > platform is linux2 > numerix numpy 0.9.9.2624 > font search path > ['/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data'] > backend GTKAgg version 2.8.2 > Python 2.4.2 (#1, May 2 2006, 08:13:46) > IPython 0.7.2 -- An enhanced Interactive Python. > > I am using numerix = numpy in matplotlibrc. I am also using NUMERIX = > numpy when building pyfits. 
From haley at ucar.edu Thu Jun 15 17:38:02 2006 From: haley at ucar.edu (Mary Haley) Date: Thu, 15 Jun 2006 15:38:02 -0600 (MDT) Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of a module Message-ID: Hi all, We are getting ready to release some Python software that supports both NumPy and Numeric. As we have it now, if somebody wanted to use our software with NumPY, they would have to download the binary distribution that was built with NumPy and install that. Otherwise, they have to download the binary distribution that was built with Numeric and install that. We are using Python's distutils, and I'm trying to figure out if there's a way in which I can have both distributions installed to one package directory, and then the __init__.py file would try to figure out which one to import on behalf of the user (i.e. it would try to figure out if the user had already imported NumPy, and if so, import the NumPy version of the module; otherwise, it will import the Numeric version of the module). This is turning out to be a bigger pain than I expected, so I'm turning to this group to see if anybody has a better idea, or should I just give up and release these two distributions separately? Thanks, --Mary From josh8912 at yahoo.com Thu Jun 15 18:56:56 2006 From: josh8912 at yahoo.com (JJ) Date: Thu, 15 Jun 2006 15:56:56 -0700 (PDT) Subject: [Numpy-discussion] syntax for obtaining rank of two columns? Message-ID: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Hello. I am a matlab user learning the syntax of numpy. Id like to check that I am not missing some easy steps on column selection and concatenation. The example task is to determine if two columns selected out of an array are of full rank (rank 2). Lets say we have an array d that is size (10,10) and we select the ith and jth columns to test their rank. In matlab the command is quite simple: rank([d(:,i),d(:,j)]) In numpy, the best I have thought of so far is: linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \ ones((shape(transpose(vstack((d[:,i],d[:,j])))) \ [0],1),'d'))[2] Im thinking there must be a less awkward way. Any ideas? JJ __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From tim.hochberg at cox.net Thu Jun 15 20:27:42 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 15 Jun 2006 17:27:42 -0700 Subject: [Numpy-discussion] syntax for obtaining rank of two columns? In-Reply-To: <20060615225656.7187.qmail@web51715.mail.yahoo.com> References: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Message-ID: <4491FAFE.4080901@cox.net> JJ wrote: >Hello. I am a matlab user learning the syntax of >numpy. Id like to check that I am not missing some >easy steps on column selection and concatenation. The >example task is to determine if two columns selected >out of an array are of full rank (rank 2).
Let's say >we have an array d that is size (10,10) and we select >the ith and jth columns to test their rank. In matlab >the command is quite simple: > >rank([d(:,i),d(:,j)]) > >In numpy, the best I have thought of so far is: > >linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \ >ones((shape(transpose(vstack((d[:,i],d[:,j])))) \ >[0],1),'d'))[2] > >I'm thinking there must be a less awkward way. Any >ideas? > > This isn't really my field, so this could be wrong, but try: linalg.lstsq(d[:,[i,j]], ones_like(d[:,[i,j]]))[2] and see if that works for you. -tim >JJ > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! Mail has the best spam protection around >http://mail.yahoo.com > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From simon at arrowtheory.com Fri Jun 16 05:40:47 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 16 Jun 2006 10:40:47 +0100 Subject: [Numpy-discussion] syntax for obtaining rank of two columns? In-Reply-To: <20060615225656.7187.qmail@web51715.mail.yahoo.com> References: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Message-ID: <20060616104047.488dd098.simon@arrowtheory.com> On Thu, 15 Jun 2006 15:56:56 -0700 (PDT) JJ wrote: > In matlab > the command is quite simple: > > rank([d(:,i),d(:,j)]) You could use the Cauchy-Schwarz inequality, which becomes an equality iff the rank above is 1: http://planetmath.org/encyclopedia/CauchySchwarzInequality.html Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph.
61 02 6249 6940 http://arrowtheory.com From strawman at astraw.com Thu Jun 15 22:22:05 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 15 Jun 2006 19:22:05 -0700 Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of a module In-Reply-To: References: Message-ID: <449215CD.4030800@astraw.com> Dear Mary, I suggest using numpy and at the boundaries use numpy.asarray(yourinput), which will be a quick way to view the data as a numpy array, regardless of its original type. Otherwise, you could look at the matplotlib distribution to see how it's done to really support multiple array packages simultaneously. Mary Haley wrote: > Hi all, > > We are getting ready to release some Python software that supports > both NumPy and Numeric. > > As we have it now, if somebody wanted to use our software with NumPY, > they would have to download the binary distribution that was built > with NumPy and install that. Otherwise, they have to download the > binary distribution that was built with Numeric and install that. > > We are using Python's distutils, and I'm trying to figure out if > there's a way in which I can have both distributions installed to one > package directory, and then the __init__.py file would try to figure > out which one to import on behalf of the user (i.e. it would try to > figure out if the user had already imported NumPy, and if so, import > the NumPy version of the module; otherwise, it will import the Numeric > version of the module). > > This is turning out to be a bigger pain than I expected, so I'm > turning to this group to see if anybody has a better idea, or should I > just give up and release these two distributions separately? 
> > Thanks, > > --Mary From ted.horst at earthlink.net Thu Jun 15 22:39:58 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Thu, 15 Jun 2006 21:39:58 -0500 Subject: [Numpy-discussion] deprecated function throwing readonly attribute Message-ID: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> The deprecated function in numpy.lib.utils is throwing a readonly attribute exception in the latest svn (2627). This is on Mac OS X (10.4.6) using the builtin python (2.3.5) during the import of fftpack. I'm guessing it's a 2.3/2.4 difference. Ted From sebastian.beca at gmail.com Fri Jun 16 00:32:38 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 00:32:38 -0400 Subject: [Numpy-discussion] Test post Message-ID: Test post. Something isn't working.... From cookedm at physics.mcmaster.ca Fri Jun 16 01:28:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 16 Jun 2006 01:28:40 -0400 Subject: [Numpy-discussion] deprecated function throwing readonly attribute In-Reply-To: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> References: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> Message-ID: <20060616052840.GA16044@arbutus.physics.mcmaster.ca> On Thu, Jun 15, 2006 at 09:39:58PM -0500, Ted Horst wrote: > The deprecated function in numpy.lib.utils is throwing a readonly > attribute exception in the latest svn (2627). This is on Mac OS X > (10.4.6) using the builtin python (2.3.5) during the import of > fftpack. I'm guessing it's a 2.3/2.4 difference. > > Ted Who gets the award for "breaks the build most often"? That'd be me! Sorry, I hardly ever test with 2.3. But, I fixed it (and found a generator that had snuck in :) -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri Jun 16 01:54:39 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 16 Jun 2006 01:54:39 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060616055439.GB16044@arbutus.physics.mcmaster.ca> On Wed, Jun 14, 2006 at 11:46:27PM -0400, Sasha wrote: > On 6/14/06, David M. Cooke wrote: > > After working with them for a while, I'm going to go on record and say that I > > prefer the long names from Numeric and numarray (like linear_least_squares, > > inverse_real_fft, etc.), as opposed to the short names now used by default in > > numpy (lstsq, irefft, etc.). I know you can get the long names from > > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > > defaults. > > > > I agree in spirit, but note that inverse_real_fft is still short for > inverse_real_fast_fourier_transform. Presumably, fft is a proper noun > in many people's vocabularies, but so may be lstsq depending on who you > ask. I say "FFT", but I don't say "lstsq". I can find "FFT" in the index of a book of algorithms, but not "lstsq" (unless it was a specific implementation). Those are my two guiding ideas for what makes a good name here. > I am playing devil's advocate here a little because personally, I > always recommend the following as a compromise: > > sinh = hyperbolic_sinus > ... > tanh(x) = sinh(x)/cosh(x) > > But the next question is where to put "sinh = hyperbolic_sinus": right > before the expression using sinh? at the top of the module (import > hyperbolic_sinus as sinh)? in the math library? If you pick the last > option, do you need hyperbolic_sinus to begin with? If you pick any > other option, how do you prevent others from writing sh = > hyperbolic_sinus instead of sinh? Pish.
By the same reasoning, we don't need the number 2: we can write it as the successor of the successor of the additive identity :-) > > Also, Numeric and numarray compatibility is increased by using the long > > names: those two don't have the short ones. > > > > Fitting names into 6 characters went out of style decades ago. (I think > > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > > Short names are still popular in scientific programming: > . That's 11 years old. The web was only a few years old at that time! There's been much work done on what makes a good programming style (Steve McConnell's "Code Complete" for instance is a good start). > I am still +1 for keeping linear_least_squares and inverse_real_fft, > but not just because abbreviations are bad as such - if an established > acronym such as fft exists we should be free to use it. Ok, in summary, I'm seeing a bunch of "yes, long names please", but only your devil's advocate stance for no (and +1 for real). I see that Travis fixed the real fft names back to 'irfft' and friends. So, concrete proposal time:
- go back to the long names in numpy.linalg (linear_least_squares, eigenvalues, etc. -- those defined in numpy.linalg.old)
- of the new names, I could see keeping 'det' and 'svd': those are commonly used, although maybe 'SVD' instead?
- anybody got a better name than Heigenvalues? That H looks weird at the beginning.
- for numpy.dft, use the old names again. I could probably be persuaded that 'rfft' is ok. 'hfft' for the Hermite FFT is right out.
- numpy.random is the other "old package replacement", but it's fine (and better).
-- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sebastian.beca at gmail.com Thu Jun 15 19:08:21 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Thu, 15 Jun 2006 19:08:21 -0400 Subject: [Numpy-discussion] distance matrix speed Message-ID: Hi, I'm working with NumPy/SciPy on some algorithms and I've run into some important speed differences wrt Matlab 7. I've narrowed the main speed problem down to the operation of finding the Euclidean distance between two matrices that share one dimension (dist in Matlab):

Python:

def dtest():
    A = random( [4,2])
    B = random( [1000,2])
    d = zeros([4, 1000], dtype='f')
    for i in range(4):
        for j in range(1000):
            d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
    return d

Matlab:

A = rand( [4,2])
B = rand( [1000,2])
d = dist(A, B')

Running both of these 100 times, I've found the python version to run between 10-20 times slower. My question is if there is a faster way to do this? Perhaps I'm not using the correct functions/structures? Or this is as good as it gets? Thanks in advance, Sebastian Beca Department of Computer Science Engineering University of Chile PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that. From michael.sorich at gmail.com Fri Jun 16 02:26:37 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 16 Jun 2006 15:56:37 +0930 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <16761e100606152326r1b99e525j868ea5d694fc8465@mail.gmail.com> Hi Sebastian, I am not sure if there is a function already defined in numpy, but something like this may be what you are after:

def distance(a1, a2):
    return sqrt(sum((a1[:,newaxis,:] - a2[newaxis,:,:])**2, axis=2))

The general idea is to avoid loops if you want the code to execute fast. I hope this helps.
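[Editor's note: the broadcasting suggestion above can be sanity-checked against the double loop from the question; the following sketch is illustrative only (function names are not from the thread):

```python
import numpy as np

def dist_loops(A, B):
    # the explicit double loop from the original question
    d = np.zeros((len(A), len(B)))
    for i in range(len(A)):
        for j in range(len(B)):
            d[i, j] = np.sqrt(((A[i] - B[j]) ** 2).sum())
    return d

def dist_broadcast(A, B):
    # A[:, newaxis, :] has shape (m, 1, k) and B[newaxis, :, :] has shape
    # (1, n, k); the subtraction broadcasts to (m, n, k), then we reduce
    # over the last axis to get the (m, n) distance matrix
    return np.sqrt(((A[:, np.newaxis, :] - B[np.newaxis, :, :]) ** 2).sum(axis=2))

A = np.random.random((4, 2))
B = np.random.random((1000, 2))
assert np.allclose(dist_loops(A, B), dist_broadcast(A, B))
```

The two versions agree to floating-point tolerance; the broadcast version pays for its speed with an (m, n, k) temporary array.]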
Mike On 6/16/06, Sebastian Beca wrote: > Hi, > I'm working with NumPy/SciPy on some algorithms and I've run into some > important speed differences wrt Matlab 7. I've narrowed the main speed > problem down to the operation of finding the Euclidean distance > between two matrices that share one dimension (dist in Matlab): > > Python: > def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > d = zeros([4, 1000], dtype='f') > for i in range(4): > for j in range(1000): > d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) > return d > > Matlab: > A = rand( [4,2]) > B = rand( [1000,2]) > d = dist(A, B') > > Running both of these 100 times, I've found the python version to run > between 10-20 times slower. My question is if there is a faster way to > do this? Perhaps I'm not using the correct functions/structures? Or > this is as good as it gets? > > Thanks in advance, > > Sebastian Beca > Department of Computer Science Engineering > University of Chile > > PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have > ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that. From a.u.r.e.l.i.a.n at gmx.net Fri Jun 16 02:28:18 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Fri, 16 Jun 2006 08:28:18 +0200 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> Hi,

def dtest():
    A = random( [4,2])
    B = random( [1000,2])

    # drawback: memory usage temporarily doubled
    # solution see below
    d = A[:, newaxis, :] - B[newaxis, :, :]
    # written as 3 expressions for more clarity
    d = sqrt((d**2).sum(axis=2))
    return d

def dtest_lowmem():
    A = random( [4,2])
    B = random( [1000,2])
    d = zeros([4, 1000], dtype='f')  # stores result
    for i in range(len(A)):
        # the loop should not impose much loss in speed
        dtemp = A[i, newaxis, :] - B[:, :]
        dtemp = sqrt((dtemp**2).sum(axis=1))
        d[i] = dtemp
    return d

(both functions untested....) HTH, Johannes From konrad.hinsen at laposte.net Fri Jun 16 02:53:48 2006 From: konrad.hinsen at laposte.net (Konrad Hinsen) Date: Fri, 16 Jun 2006 08:53:48 +0200 Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of a module References: Message-ID: <009c01c69111$9f05d930$0880fea9@CPQ18791205981> > We are using Python's distutils, and I'm trying to figure out if > there's a way in which I can have both distributions installed to one > package directory, and then the __init__.py file would try to figure > out which one to import on behalf of the user (i.e. it would try to > figure out if the user had already imported NumPy, and if so, import > the NumPy version of the module; otherwise, it will import the Numeric > version of the module). > > This is turning out to be a bigger pain than I expected, so I'm > turning to this group to see if anybody has a better idea, or should I > just give up and release these two distributions separately? I think that what you are aiming at can be done, but I'd rather not do it. Imagine a user who has both Numeric and NumPy installed, plus additional packages that use either one, without the user necessarily being aware of who imports what. For such a user, your package would appear to behave randomly, returning different array types depending on the order of imports of seemingly unrelated modules.
If you think it is useful to have both versions available at the same time, a better selection method would be the use of a suitable environment variable. Konrad. From david.douard at logilab.fr Fri Jun 16 03:53:37 2006 From: david.douard at logilab.fr (David Douard) Date: Fri, 16 Jun 2006 09:53:37 +0200 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> References: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <20060616075337.GA1059@logilab.fr> Hi, On Fri, Jun 16, 2006 at 08:28:18AM +0200, Johannes Loehnert wrote: > Hi, > > def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > # drawback: memory usage temporarily doubled > # solution see below > d = A[:, newaxis, :] - B[newaxis, :, :] Unless I'm wrong, one can simplify this line a (very) little bit: d = A[:, newaxis, :] - B > # written as 3 expressions for more clarity > d = sqrt((d**2).sum(axis=2)) > return d > -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science From svetosch at gmx.net Fri Jun 16 04:43:42 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 16 Jun 2006 10:43:42 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <44926F3E.6090908@gmx.net> Alexander Belopolsky schrieb: > In my view it is more important that code is easy to read rather than > easy to write. Interactive users will disagree, but in programming you > write once and read/edit forever :).
The insight about this disagreement imho suggests a compromise (or call it a dual solution): Have verbose names, but also have good default abbreviations for those who prefer them. It would be unfortunate if numpy users were required to cook up their own abbreviations if they wanted to, because 1. it adds overhead, and 2. it would make other people's code more difficult to read. > > Again, there is no defense for abbreviating linear_least_squares > because it is unlikely to appear in an expression and waste valuable > horizontal space. not true imho; btw, I would suggest "ols" (ordinary least squares), which is in every textbook. Cheers, Sven From sebastian.beca at gmail.com Wed Jun 14 18:19:19 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Wed, 14 Jun 2006 18:19:19 -0400 Subject: [Numpy-discussion] Distance Matrix speed Message-ID: Hi, I'm working with NumPy/SciPy on some algorithms and I've run into some important speed differences wrt Matlab 7. I've narrowed the main speed problem down to the operation of finding the Euclidean distance between two matrices that share one dimension (dist in Matlab):

Python:

def dtest():
    A = random( [4,2])
    B = random( [1000,2])
    d = zeros([4, 1000], dtype='f')
    for i in range(4):
        for j in range(1000):
            d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
    return d

Matlab:

A = rand( [4,2])
B = rand( [1000,2])
d = dist(A, B')

Running both of these 100 times, I've found the python version to run between 10-20 times slower. My question is if there is a faster way to do this? Perhaps I'm not using the correct functions/structures? Or this is as good as it gets? Thanks in advance, Sebastian Beca Department of Computer Science Engineering University of Chile PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
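[Editor's note: the pairwise distances asked about above can also be computed without forming a three-dimensional temporary, using the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b. This particular variant is an illustrative sketch and does not appear in the thread:

```python
import numpy as np

def dist_dot(A, B):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, evaluated for all pairs at once
    aa = (A ** 2).sum(axis=1)[:, np.newaxis]   # shape (m, 1)
    bb = (B ** 2).sum(axis=1)[np.newaxis, :]   # shape (1, n)
    sq = aa + bb - 2.0 * np.dot(A, B.T)        # shape (m, n)
    # rounding can leave tiny negative values; clip them before the square root
    return np.sqrt(np.maximum(sq, 0.0))

A = np.random.random((4, 2))
B = np.random.random((1000, 2))
D = dist_dot(A, B)

# spot-check one entry against the direct formula
i, j = 2, 517
assert abs(D[i, j] - np.sqrt(((A[i] - B[j]) ** 2).sum())) < 1e-8
```

This keeps only (m, n)-sized intermediates and pushes the heavy work into a single matrix product, at the cost of slightly worse rounding behavior than the direct subtraction.]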
From sebastian.beca at gmail.com Fri Jun 16 00:36:45 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 00:36:45 -0400 Subject: [Numpy-discussion] Test post - ignore Message-ID: Please ignore if you receive this. From pbdr at cmp.uea.ac.uk Fri Jun 16 05:20:18 2006 From: pbdr at cmp.uea.ac.uk (Pierre Barbier de Reuille) Date: Fri, 16 Jun 2006 10:20:18 +0100 Subject: [Numpy-discussion] ImportError while creating a Python module using NumPy Message-ID: <449277D2.9060904@cmp.uea.ac.uk> Hi, I have an extension library which I wanted to interface with NumPy ... So I added the import_array() and all the needed stuff so that it now compiles. However, when I load the library I obtain: ImportError: No module named core.multiarray I didn't find anything on the net about it; what could be the problem? Thanks, Pierre From alexandre.fayolle at logilab.fr Fri Jun 16 08:11:52 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Fri, 16 Jun 2006 14:11:52 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <44926F3E.6090908@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> Message-ID: <20060616121152.GC32083@crater.logilab.fr> On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote: > > Again, there is no defense for abbreviating linear_least_squares > > because it is unlikely to appear in an expression and waste valuable > > horizontal space. > > not true imho; btw, I would suggest "ols" (ordinary least squares), > which is in every textbook. Please, keep the Zen of Python in mind: Explicit is better than implicit. -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science
From svetosch at gmx.net Fri Jun 16 08:48:58 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 16 Jun 2006 14:48:58 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060616121152.GC32083@crater.logilab.fr> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> Message-ID: <4492A8BA.1090103@gmx.net> Alexandre Fayolle schrieb: > On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote: >>> Again, there is no defense for abbreviating linear_least_squares >>> because it is unlikely to appear in an expression and waste valuable >>> horizontal space. >> not true imho; btw, I would suggest "ols" (ordinary least squares), >> which is in every textbook. > > Please, keep the Zen of Python in mind: Explicit is better than > implicit. > > True, but horizontal space *is* valuable (copied from above), and some of the suggested long names were a bit too long for my taste. Abbreviations will emerge anyway, the question is merely: Will numpy provide/recommend them (in addition to having long names maybe), or will it have to be done by somebody else, possibly resulting in many different sets of abbreviations for the same purpose. Thanks, Sven From tim.hochberg at cox.net Fri Jun 16 08:59:49 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 05:59:49 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: <4492AB45.6080204@cox.net> I don't have anything constructive to add at the moment, so I'll just throw out an unelucidated opinion: +1 for longish names. -1 for two sets of names.
-tim From hyclak at math.ohiou.edu Thu Jun 15 13:45:38 2006 From: hyclak at math.ohiou.edu (Matt Hyclak) Date: Thu, 15 Jun 2006 13:45:38 -0400 Subject: [Numpy-discussion] Numpy svn not installing headers Message-ID: <20060615174537.GD29604@math.ohiou.edu> I was trying to build matplotlib after installing the latest svn version of numpy (r2426), and compilation bailed on missing headers. It seems that the headers from build/src.linux*/numpy/core/ are not properly being installed during setup.py's install phase to $PYTHON_SITE_LIB/site-packages/numpy/core/include/numpy Have I stumbled upon a bug, or do I need to do something other than "setup.py install"? The files that do make it in are: arrayobject.h arrayscalars.h ufuncobject.h The files that do not make it in are: config.h __multiarray_api.h __ufunc_api.h The compilation problem was that arrayobject.h includes both config.h and __multiarray_api.h, but the files were not in place. Thanks, Matt -- Matt Hyclak Department of Mathematics Department of Social Work Ohio University (740) 593-1263 From tim.hochberg at cox.net Fri Jun 16 09:17:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 06:17:53 -0700 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <4492AF81.804@cox.net> Sebastian Beca wrote: >Hi, >I'm working with NumPy/SciPy on some algorithms and i've run into some >important speed differences wrt Matlab 7. I've narrowed the main speed >problem down to the operation of finding the euclidean distance >between two matrices that share one dimension rank (dist in Matlab): > >Python: >def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > d = zeros([4, 1000], dtype='f') > for i in range(4): > for j in range(1000): > d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) > return d > >Matlab: > A = rand( [4,2]) > B = rand( [1000,2]) > d = dist(A, B') > >Running both of these 100 times, I've found the python version to run >between 10-20 times slower. 
My question is if there is a faster way to >do this? Perhaps I'm not using the correct functions/structures? Or >this is as good as it gets? > > Here's one faster way.

from numpy import *
import timeit

A = random.random( [4,2])
B = random.random( [1000,2])

def d1():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        for j in range(1000):
            d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
    return d

def d2():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = hypot(xy[:,0], xy[:,1])
    return d

if __name__ == "__main__":
    t1 = timeit.Timer('d1()', 'from scratch import d1').timeit(100)
    t2 = timeit.Timer('d2()', 'from scratch import d2').timeit(100)
    print t1, t2, t1 / t2

In this case, d2 is 50x faster than d1 on my box. Making some extremely dubious assumptions about transitivity of measurements, that would imply that d2 is twice as fast as matlab. Oh, and I didn't actually test that the output is correct.... -tim >Thanks in advance, > >Sebastian Beca >Department of Computer Science Engineering >University of Chile > >PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have >ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that. From ndarray at mac.com Fri Jun 16 09:48:11 2006 From: ndarray at mac.com (Sasha) Date: Fri, 16 Jun 2006 09:48:11 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: On 6/16/06, Sven Schreiber wrote: > ....
> Abbreviations will emerge anyway, the question is merely: Will numpy > provide/recommend them (in addition to having long names maybe), or will > it have to be done by somebody else, possibly resulting in many > different sets of abbreviations for the same purpose. > This is a valid point. In my experience ad hoc abbreviations are more popular among scientists who are not used to writing large programs. They use numpy either interactively or write short throw-away scripts that are rarely reused. Programmers who write reusable code almost universally hate ad hoc abbreviations. (There are exceptions: .) If numpy is going to compete with MATLAB, we should not ignore non-programmer user base. I like the idea of providing recommended abbreviations. There is a precedent for doing that: GNU command line utilities provide long/short alternatives for most options. Long options are recommended for use in scripts while short are indispensable at the command line. I would like to suggest the following guidelines: 1. Numpy should never invent abbreviations, but may reuse abbreviations used in the art. 2. When alternative names are made available, there should be one simple rule for reducing the long name to short. For example, use of acronyms may provide one such rule: singular_value_decomposition -> svd. Unfortunately that would mean linear_least_squares -> lls, not ols and conflict with rule #1 (rename lstsq -> ordinary_least_squares?). The second guideline may be hard to follow, but it is very important. Without a rule like this, there will be confusion on whether linear_least_squares and lsltsq are the same or just "similar". From bsouthey at gmail.com Fri Jun 16 10:20:40 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 16 Jun 2006 09:20:40 -0500 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: Message-ID: Hi, Please run the exact same code in Matlab that you are running in NumPy. 
Many of Matlab functions are very highly optimized so these are provided as binary functions. I think that you are running into this so you are not doing the correct comparison So the ways around it are to write an extension in C or Fortran, use Pysco etc if possible, and vectorize your algorithm to remove the loops (especially the inner one). Bruce On 6/14/06, Sebastian Beca wrote: > Hi, > I'm working with NumPy/SciPy on some algorithms and i've run into some > important speed differences wrt Matlab 7. I've narrowed the main speed > problem down to the operation of finding the euclidean distance > between two matrices that share one dimension rank (dist in Matlab): > > Python: > def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > d = zeros([4, 1000], dtype='f') > for i in range(4): > for j in range(1000): > d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) > return d > > Matlab: > A = rand( [4,2]) > B = rand( [1000,2]) > d = dist(A, B') > > Running both of these 100 times, I've found the python version to run > between 10-20 times slower. My question is if there is a faster way to > do this? Perhaps I'm not using the correct functions/structures? Or > this is as good as it gets? > > Thanks on beforehand, > > Sebastian Beca > Department of Computer Science Engineering > University of Chile > > PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have > ATLAS, BLAS and LAPACK all installed, but I havn't confirmed that. 
> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Fri Jun 16 11:37:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 16 Jun 2006 11:37:10 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net><20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: On Fri, 16 Jun 2006, Sven Schreiber apparently wrote: > Abbreviations will emerge anyway, the question is merely: > Will numpy provide/recommend them (in addition to having > long names maybe), or will it have to be done by somebody > else, possibly resulting in many different sets of > abbreviations for the same purpose. Agreed. Cheers, Alan Isaac From tim.hochberg at cox.net Fri Jun 16 12:23:10 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 09:23:10 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: <4492DAEE.408@cox.net> Sasha wrote: >On 6/16/06, Sven Schreiber wrote: > > >>.... >>Abbreviations will emerge anyway, the question is merely: Will numpy >>provide/recommend them (in addition to having long names maybe), or will >>it have to be done by somebody else, possibly resulting in many >>different sets of abbreviations for the same purpose. >> >> >> >This is a valid point. In my experience ad hoc abbreviations are more >popular among scientists who are not used to writing large programs. >They use numpy either interactively or write short throw-away scripts >that are rarely reused. 
Programmers who write reusable code almost >universally hate ad hoc abbreviations. (There are exceptions: >.) > >If numpy is going to compete with MATLAB, we should not ignore >non-programmer user base. I like the idea of providing recommended >abbreviations. There is a precedent for doing that: GNU command line >utilities provide long/short alternatives for most options. Long >options are recommended for use in scripts while short are >indispensable at the command line. > > Unless the abreviations are obvious, adding second set of names will make it more difficult to read others code. In particular, it will make it harder to answer questions on the newsgroup. Particularly since I suspect that most of the more experienced users will end up using long names while the new users coming from MATLAB or whatever will use the shorter names. >I would like to suggest the following guidelines: > >1. Numpy should never invent abbreviations, but may reuse >abbreviations used in the art. > > Let me add a couple of cents here. There are widespread terms of the art and there are terms of art that are specific to a certain field. At the top level, I would like to see only widespread terms of the art. Thus, 'cos', 'sin', 'exp', etc are perfectly fine. However, something like 'dft' is not so good. Perversely, I consider 'fft' a widespread term of the art, but the more general 'dft' is somehow not. These narrower terms would be perfectly fine if segregated into appropriate packages. For example, I would consider it more sensible to have the current package 'dft' renamed to 'fourier' and the routine 'fft' renamed to 'dft' (since that's what it is). As another example, linear_algebra.svd is perfectly clear, but numpy.svd would be opaque. >2. When alternative names are made available, there should be one >simple rule for reducing the long name to short. For example, use of >acronyms may provide one such rule: singular_value_decomposition -> >svd. 
> Svd is already a term of the art I believe, so linalg.svd seems like a fine name for singular_value_decomposition. > Unfortunately that would mean linear_least_squares -> lls, not >ols and conflict with rule #1 (rename lstsq -> >ordinary_least_squares?). > > Before you consider this I suggest that you google 'linear algebra lls' and 'linear algebra ols'. The results may surprise you... While you're at it, google 'linear algebra svd'. >The second guideline may be hard to follow, but it is very important. >Without a rule like this, there will be confusion on whether >linear_least_squares and lstsq are the same or just "similar". > > Can I just reiterate a hearty blech! for having two sets of names. The horizontal space argument is mostly bogus in my opinion -- functions that tend to be used in complicated expressions already have short, widely used abbreviations that we can steal. The typing argument is also mostly bogus: a decent editor will do tab completion (I use a pretty much no-frills editor, SciTe, and even it does tab completion) and there's IPython if you want tab completion in interactive mode. -tim From Glen.Mabey at swri.org Fri Jun 16 12:23:58 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Fri, 16 Jun 2006 11:23:58 -0500 Subject: [Numpy-discussion] Segfault with simplest operation on extension module using numpy Message-ID: <20060616162357.GB7192@bams.swri.edu> Hello, I am writing a python extension module to create an interface to some C code, and am using numpy arrays as the object type for transferring data back and forth. Using either the numpy svn from yesterday, or 0.9.6 or 0.9.8, with or without an optimized ATLAS installation, I get a segfault at what should be the most straightforward of all operations: PyArray_Check() on the input argument. That is, when I run: import DFALG DFALG.bsvmdf( 3 ) after compiling the below code, it always segfaults, regardless of the type of the argument given.
Just as a sanity check (it's been a little while since I have written an extension module for Python) I changed the line containing PyArray_Check() to one that calls PyInt_Check(), which does perform exactly how I would expect it to. Is there something I'm missing? Thank you! Glen Mabey

#include
#include

static PyObject * DFALG_bsvmdf(PyObject *self, PyObject *args);

static PyMethodDef DFALGMethods[] = {
    {"bsvmdf", DFALG_bsvmdf, METH_VARARGS, "This should be a docstring, really."},
    {NULL, NULL, 0, NULL} /* Sentinel */
};

PyMODINIT_FUNC
initDFALG(void)
{
    (void) Py_InitModule("DFALG", DFALGMethods);
}

static PyObject *
DFALG_bsvmdf(PyObject *self, PyObject *args)
{
    PyObject *inputarray;

    //printf( "Hello, Python!" );
    //Py_INCREF(Py_None);
    //return Py_None;
    if ( !PyArg_ParseTuple( args, "O", &inputarray ) )
        return NULL;
    if ( PyArray_Check( inputarray ) ) {
    //if ( PyInt_Check( inputarray ) ) {
        printf( "DFALG_bsvmdf() was passed a PyArray.()\n" );
    } else {
        printf( "DFALG_bsvmdf() was NOT passed a PyArray.()\n" );
    }
    return Py_BuildValue( "ss", "Thing 1", "Thing 2" );
}

From robert.kern at gmail.com Fri Jun 16 12:44:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jun 2006 11:44:53 -0500 Subject: [Numpy-discussion] Segfault with simplest operation on extension module using numpy In-Reply-To: <20060616162357.GB7192@bams.swri.edu> References: <20060616162357.GB7192@bams.swri.edu> Message-ID: Glen W. Mabey wrote: > That is, when I run: > import DFALG > DFALG.bsvmdf( 3 ) > after compiling the below code, it always segfaults, regardless of the > type of the argument given. Just as a sanity check (it's been a little > while since I have written an extension module for Python) I changed the > line containing PyArray_Check() to one that calls PyInt_Check(), which > does perform exactly how I would expect it to. > > Is there something I'm missing? Yes!
> #include > #include This should be "numpy/arrayobject.h" for consistency with every other numpy-using extension. > PyMODINIT_FUNC > initDFALG(void) > { > (void) Py_InitModule("DFALG", DFALGMethods); > } You need to call import_array() in this function. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Jun 16 13:05:33 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 16 Jun 2006 10:05:33 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: Message-ID: <4492E4DD.3010400@noaa.gov> Bruce Southey wrote: > Please run the exact same code in Matlab that you are running in > NumPy. Many of Matlab functions are very highly optimized so these are > provided as binary functions. I think that you are running into this > so you are not doing the correct comparison He is doing the correct comparison: if Matlab has some built-in compiled utility functions that numpy doesn't -- it really is faster! It looks like other's suggestions show that well written numpy code is plenty fast, however. One more suggestion I don't think I've seen: numpy provides a built-in compiled utility function: hypot() >>> x = N.arange(5) >>> y = N.arange(5) >>> N.hypot(x,y) array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) >>> N.sqrt(x**2 + y**2) array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) Timings: >>> timeit.Timer('N.sqrt(x**2 + y**2)','import numpy as N; x = N.arange(5000); y = N.arange(5000)').timeit(100) 0.49785208702087402 >>> timeit.Timer('N.hypot(x,y)','import numpy as N; x = N.arange(5000); y = N.arange(5000)').timeit(100) 0.081479072570800781 A factor of 6 improvement. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at cox.net Fri Jun 16 13:48:49 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 10:48:49 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4492E4DD.3010400@noaa.gov> References: <4492E4DD.3010400@noaa.gov> Message-ID: <4492EF01.10307@cox.net> Christopher Barker wrote: >Bruce Southey wrote: > > >>Please run the exact same code in Matlab that you are running in >>NumPy. Many of Matlab functions are very highly optimized so these are >>provided as binary functions. I think that you are running into this >>so you are not doing the correct comparison >> >> > >He is doing the correct comparison: if Matlab has some built-in compiled >utility functions that numpy doesn't -- it really is faster! > >It looks like other's suggestions show that well written numpy code is >plenty fast, however. > >One more suggestion I don't think I've seen: numpy provides a built-in >compiled utility function: hypot() > > > >>> x = N.arange(5) > >>> y = N.arange(5) > >>> N.hypot(x,y) >array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) > >>> N.sqrt(x**2 + y**2) >array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) > >Timings: > >>> timeit.Timer('N.sqrt(x**2 + y**2)','import numpy as N; x = >N.arange(5000); y = N.arange(5000)').timeit(100) >0.49785208702087402 > >>> timeit.Timer('N.hypot(x,y)','import numpy as N; x = N.arange(5000); >y = N.arange(5000)').timeit(100) >0.081479072570800781 > >A factor of 6 improvement. > > Here's another thing to note: much of the time distance**2 works as well as distance (for instance if you are looking for the nearest point). 
If you're in that situation, computing the square of the distance is much cheaper:

def d_2():
    d = zeros([4, 10000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = xy[:,0]**2 + xy[:,1]**2
    return d

This is something like 250 times as fast as the naive Python solution; another five times faster than the fastest distance-computing version that I could come up with (using hypot). -tim From perrot at shfj.cea.fr Fri Jun 16 14:01:31 2006 From: perrot at shfj.cea.fr (Matthieu Perrot) Date: Fri, 16 Jun 2006 20:01:31 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). Message-ID: <200606162001.31342.perrot@shfj.cea.fr> hi, I need to handle strings shaped by a numpy array whose data belong to a C structure. There are several possible answers to this problem: 1) use a numpy array of strings (PyArray_STRING) and so a (char *) object in C. It works as is, but you need to define a maximum size for your strings because your set of strings is contiguous in memory. 2) use a numpy array of objects (PyArray_OBJECT), and wrap each C string with a python object, using PyStringObject for example. Then our problem is that there are as many wrappers as data elements, and I believe data can't be shared when you create a PyStringObject from a (char *) using PyString_AsStringAndSize, for example. Now, I will expose a third way, which allows you to use strings without the size limit of solution 1 and does not create wrappers before you really need them (on demand/access). First, for convenience, we will use the C (char **) type to build an array of string pointers (as was suggested in solution 2). Now, the game is to make it work with the numpy API, and use it in python through a python array. Basically, I want a behaviour very similar to arrays of PyObject, where the data are not contiguous, only their addresses are.
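[The object-array behaviour being aimed for here can be seen from plain Python. A small sketch with stock numpy (nothing from the proposed patch): object arrays store references rather than contiguous string bytes, so strings of any length fit and a basic slice shares the same storage.]

```python
import numpy as np

# An object array holds references; the string data itself is not inlined.
a = np.array([["plop", "blups"]], dtype=object)

b = a[:]               # a basic slice: a view onto the same references
a[0, 0] = "youpiiii"

print(b[0, 0])         # youpiiii -- the view sees the updated reference
```

This is exactly the sharing behaviour the proposed (char **) descr mimics at the C level.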
So, the idea is to create a new array descr based on PyArray_OBJECT and change its getitem/setitem functions to deal with my own data. I expected numpy to work with this convenient array descr, but it fails because PyArray_Scalar (arrayobject.c) doesn't call the descriptor's getitem function (in the PyArray_OBJECT case) but instead runs two lines which have been copy/pasted from the OBJECT_getitem function. Here is my small patch: replace (arrayobject.c:983-984):

Py_INCREF(*((PyObject **)data));
return *((PyObject **)data);

by:

return descr->f->getitem(data, base);

I played a lot with my new numpy array after this change and noticed that a lot of uses work:

>>> a = myArray()
array([["plop", "blups"]], dtype=object)
>>> print a
[["plop", "blups"]]
>>> a[0, 0] = "youpiiii"
>>> print a
[["youpiiii", "blups"]]
>>> s = a[0, 0]
>>> print s
"youpiiii"
>>> b = a[:] #data was shared with 'a' (similar behaviour to arrays of objects)
>>>
>>> numpy.zeros(1, dtype = a.dtype)
Traceback (most recent call last):
File "", line 1, in ?
TypeError: fields with object members not yet supported.
>>> numpy.array(a)
segmentation fault

Finally, I found a forgotten check in multiarraymodule.c (_array_fromobject function); after label finish (line 4661), add:

if (!ret) {
    Py_INCREF(Py_None);
    return Py_None;
}

After this change, I obtained (when I was not in interactive mode):

# numpy.array(a)
Exception exceptions.TypeError: 'fields with object members not yet supported.' in 'garbage collection' ignored
Fatal Python error: unexpected exception during garbage collection
Abandon

But strangely, when I was in interactive mode, one time it failed and raised an exception (good behaviour), and the next time it only returned None.

>>> numpy.array(myArray())
TypeError: fields with object members not yet supported.
>>> a=numpy.array(myArray()); print a
None

A bug remains (I will explore it later), but it is better than before.
This mail shows how to map (char **) onto a numpy array, but it's easy to use the same idea to handle any type (your_object **). I'll be pleased to discuss any comments on the proposed solution or any others you can find. -- Matthieu Perrot Tel: +33 1 69 86 78 21 CEA - SHFJ Fax: +33 1 69 86 77 86 4, place du General Leclerc 91401 Orsay Cedex France From fullung at gmail.com Fri Jun 16 14:04:38 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 16 Jun 2006 20:04:38 +0200 Subject: [Numpy-discussion] Segfault with simplest operation on extension module using numpy In-Reply-To: <20060616162357.GB7192@bams.swri.edu> Message-ID: <00f501c6916f$559c6cb0$01eaa8c0@dsp.sun.ac.za> Hey Glen, http://www.scipy.org/Cookbook/C_Extensions covers most of the boilerplate you need to get started with extension modules. Regards, Albert > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Glen W. Mabey > Sent: 16 June 2006 18:24 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Segfault with simplest operation on > extension module using numpy > > Hello, > > I am writing a python extension module to create an interface to some C > code, and am using numpy array as the object type for transferring data > back and forth.
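[A footnote to the Cookbook boilerplate Albert mentions: the C headers an extension includes live inside the installed numpy package, so a build script should query numpy for the include path rather than hard-coding one. A small sketch using present-day numpy API; the exact helper name has varied over numpy's history, so treat it as illustrative for the 2006 versions discussed here.]

```python
import os
import numpy as np

# numpy ships its C headers (including numpy/arrayobject.h) inside the
# package; extension builds must add this directory to the include path.
include_dir = np.get_include()
header = os.path.join(include_dir, "numpy", "arrayobject.h")

print(include_dir)
print(os.path.isfile(header))  # True
```

In a setup script, `include_dir` would be passed as an `include_dirs` entry for the extension being compiled.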
From theller at python.net Fri Jun 16 15:25:52 2006 From: theller at python.net (Thomas Heller) Date: Fri, 16 Jun 2006 21:25:52 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: Robert Kern wrote: > Francesc Altet wrote: >> A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure: >> >>>Just out of curiosity: >>> >>>In [1]: x = N.array([]) >>> >>>In [2]: x.__array_data__ >>>Out[2]: ('0x01C23EE0', False) >>> >>>Is there a reason why the __array_data__ tuple stores the address as a hex >>>string? I would guess that this representation of the address isn't the >>>most useful one for most applications. >> >> Good point. I hit this before and forgot to send a message about this. I agree >> that a integer would be better. Although, now that I think about this, I >> suppose that the issue should be the difference of representation of longs in >> 32-bit and 64-bit platforms, isn't it? > > Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's > signedness. Please don't use Python ints to encode pointers. Holding arbitrary > pointers is the job of CObjects. > (Sorry, I'm late in reading this thread. I didn't know there were so many numeric groups) Python has functions to convert pointers to int/long and vice versa: PyInt_FromVoidPtr() and PyInt_AsVoidPtr(). ctypes uses them, ctypes also represents addresses as ints/longs. Thomas From faltet at carabos.com Fri Jun 16 15:35:24 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 16 Jun 2006 21:35:24 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Message-ID: <200606162135.24936.faltet@carabos.com> A Divendres 16 Juny 2006 21:25, Thomas Heller va escriure: > Robert Kern wrote: > > Like how Win64 uses 32-bit longs and 64-bit pointers. 
And then there's > > signedness. Please don't use Python ints to encode pointers. Holding > > arbitrary pointers is the job of CObjects. > > (Sorry, I'm late in reading this thread. I didn't know there were so many > numeric groups) > > Python has functions to convert pointers to int/long and vice versa: > PyInt_FromVoidPtr() and PyInt_AsVoidPtr(). ctypes uses them, ctypes also > represents addresses as ints/longs. Very interesting. So, may I suggest using this capability to represent addresses? I think this would simplify things (especially since it would avoid ascii/pointer conversions, which are ugly to my mind). Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. "Enjoy Data" "-" From theller at python.net Fri Jun 16 15:49:33 2006 From: theller at python.net (Thomas Heller) Date: Fri, 16 Jun 2006 21:49:33 +0200 Subject: [Numpy-discussion] Array Interface In-Reply-To: <4488A337.9000407@ee.byu.edu> References: <4488A337.9000407@ee.byu.edu> Message-ID: Travis Oliphant wrote: > Thanks for the continuing discussion on the array interface. > > I'm thinking about this right now, because I just spent several hours > trying to figure out if it is possible to add additional > "object-behavior" pointers to a type by creating a metatype that > sub-types from the Python PyType_Type (this is the object that has all > the function pointers to implement mapping behavior, buffer behavior, > etc.). I found some emails from 2002 where Guido indicates that it is > not possible to sub-type the PyType_Type object and add new function > pointers at the end without major re-writing of Python. Yes, but I remember an email from Christian Tismer that it *is* possible. Although I've never tried that.
What I do in ctypes is to replace the type object's (the subclass of PyType_Type) dictionary with a subclass of PyDict_Type (in ctypes it is named StgDictObject - storage dict object, a very poor name I know) that has additional structure fields describing the C data type it represents. Thomas From esheldon at kicp.uchicago.edu Fri Jun 16 17:10:43 2006 From: esheldon at kicp.uchicago.edu (Erin Sheldon) Date: Fri, 16 Jun 2006 16:10:43 -0500 Subject: [Numpy-discussion] Recarray attributes writeable Message-ID: <20060616161043.A29191@cfcp.uchicago.edu> Hi everyone - (this is my fourth try in the last 24 hours to post this. Apparently, the gmail smtp server is in the blacklist!! this is bad). Anyway - Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed for read-only use; one cannot alter the data through them.
Yet they are writeable: >>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec') >>> tr.field('ra')[:] = 0.0 >>> tr.ra array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) >>> tr.ra = 3 >>> tr.ra 3 >>> tr.field('ra') array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts? Erin From robert.kern at gmail.com Fri Jun 16 17:33:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jun 2006 16:33:05 -0500 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <20060616161043.A29191@cfcp.uchicago.edu> References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: Erin Sheldon wrote: > Hi everyone - > > (this is my fourth try in the last 24 hours to post this. > Apparently, the gmail smtp server is in the blacklist!! > this is bad). I doubt it since that's where my email goes through. Sourceforge is frequently slow, so please have patience if your mail does not show up. I can see your 3rd try now. Possibly the others will be showing up, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 16 17:43:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jun 2006 16:43:02 -0500 Subject: [Numpy-discussion] Sourceforge and gmail [was: Re: Recarray attributes writeable] In-Reply-To: References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: <449325E6.5080609@gmail.com> Robert Kern wrote: > Erin Sheldon wrote: > >>Hi everyone - >> >>(this is my fourth try in the last 24 hours to post this. >>Apparently, the gmail smtp server is in the blacklist!! >>this is bad). > > I doubt it since that's where my email goes through. And of course that's utterly bogus since I usually use GMane. Apologies. 
However, *this* is a real email to numpy-discussion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Fri Jun 16 17:44:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Jun 2006 15:44:33 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: <44932641.80005@ieee.org> Thomas Heller wrote: > Robert Kern wrote: > >> Francesc Altet wrote: >> >>> A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure: >>> >>> >>>> Just out of curiosity: >>>> >>>> In [1]: x = N.array([]) >>>> >>>> In [2]: x.__array_data__ >>>> Out[2]: ('0x01C23EE0', False) >>>> >>>> Is there a reason why the __array_data__ tuple stores the address as a hex >>>> string? I would guess that this representation of the address isn't the >>>> most useful one for most applications. >>>> >>> Good point. I hit this before and forgot to send a message about this. I agree >>> that a integer would be better. Although, now that I think about this, I >>> suppose that the issue should be the difference of representation of longs in >>> 32-bit and 64-bit platforms, isn't it? >>> >> Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's >> signedness. Please don't use Python ints to encode pointers. Holding arbitrary >> pointers is the job of CObjects. >> >> > > (Sorry, I'm late in reading this thread. I didn't know there were so many > numeric groups) > > Python has functions to convert pointers to int/long and vice versa: PyInt_FromVoidPtr() > and PyInt_AsVoidPtr(). ctypes uses them, ctypes also represents addresses as ints/longs. > The function calls are PyLong_FromVoidPtr() and PyLong_AsVoidPtr() though, right? 
I'm happy representing pointers as Python integers (Python long integers on curious platforms like Win64). -Travis From strawman at astraw.com Fri Jun 16 17:46:19 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 16 Jun 2006 14:46:19 -0700 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <20060616161043.A29191@cfcp.uchicago.edu> References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: <449326AB.4000306@astraw.com> Erin Sheldon wrote: >Anyway - Recarrays have convenience attributes such that >fields may be accessed through "." in additioin to >the "field()" method. These attributes are designed for >read only; one cannot alter the data through them. >Yet they are writeable: > > > >>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec') >>>>tr.field('ra')[:] = 0.0 >>>>tr.ra >>>> >>>> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > > >>>>tr.ra = 3 >>>>tr.ra >>>> >>>> >3 > > >>>>tr.field('ra') >>>> >>>> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >I feel this should raise an exception, just as with trying to write >to the "size" attribute. Any thoughts? > > I have not used recarrays much, so take this with the appropriate measure of salt. I'd prefer to drop the entire pseudo-attribute thing completely before it gets entrenched. (Perhaps it's too late.) I've used a similar system in pytables which, although it is convenient in the short term and for interactive use, there are corner cases that result in long term headaches. I think you point out one such issue for recarrays. There will be more. 
For example: In [1]:import numpy In [2]:tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec') In [3]:tr.field('ra')[:] = 0.0 In [4]:tr.ra Out[4]:array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) In [5]:del tr.ra --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/astraw/ AttributeError: 'recarray' object has no attribute 'ra' The above seems completely counterintuitive -- an attribute error for something I just accessed? Yes, I know what's going on, but it certainly makes life more confusing than it need be, IMO. Another issue is that it is possible to have field names that are not valid Python identifier strings. From erin.sheldon at gmail.com Fri Jun 16 18:18:25 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 16 Jun 2006 18:18:25 -0400 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: <331116dc0606161518h6f2e056cxb58a98479ab6c06f@mail.gmail.com> The initial bounces actually say, and I quote: Technical details of temporary failure: TEMP_FAILURE: SMTP Error (state 8): 550-"rejected because your SMTP server, 66.249.92.170, is in the Spamcop RBL. 550 See http://www.spamcop.net/bl.shtml for more information." On 6/16/06, Robert Kern wrote: > Erin Sheldon wrote: > > Hi everyone - > > > > (this is my fourth try in the last 24 hours to post this. > > Apparently, the gmail smtp server is in the blacklist!! > > this is bad). > > I doubt it since that's where my email goes through. Sourceforge is frequently > slow, so please have patience if your mail does not show up. I can see your 3rd > try now. Possibly the others will be showing up, too. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco From jchock at keck.hawaii.edu Fri Jun 16 18:37:28 2006 From: jchock at keck.hawaii.edu (Jon Chock) Date: Fri, 16 Jun 2006 12:37:28 -1000 Subject: [Numpy-discussion] installing numpy and removing numeric-24. Message-ID: <2E92CD375D420941846C591D3A278A0DB6D4AD@ws03.keck.hawaii.edu> Hi folks! I'd like to install numpy and remove numeric; are there instructions to remove numeric-24.1? Thanks. JC From jchock at keck.hawaii.edu Fri Jun 16 18:39:27 2006 From: jchock at keck.hawaii.edu (Jon Chock) Date: Fri, 16 Jun 2006 12:39:27 -1000 Subject: [Numpy-discussion] installing numpy and removing numeric-24.1 Message-ID: <2E92CD375D420941846C591D3A278A0DB6D4AE@ws03.keck.hawaii.edu> Sorry, I forgot to mention that I'm working on a Solaris system and installed it in /usr/local/gcc3xbuilt instead of /usr/local. Thanks. JC From oliphant.travis at ieee.org Fri Jun 16 19:46:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Jun 2006 17:46:40 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 Message-ID: <449342E0.5090004@ieee.org> I just updated the array interface page to emphasize we now have version 3. NumPy still supports objects that expose the C side of version 2 of the array interface, though. The new interface is basically the same except (mostly) for aesthetics: The differences are listed at the bottom of http://numeric.scipy.org/array_interface.html There is talk of ctypes supporting the new interface which is a worthy development. Please encourage that if you can. Please voice concerns now if you have any.
-Travis From fperez.net at gmail.com Fri Jun 16 19:54:17 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 16 Jun 2006 17:54:17 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: On 6/16/06, Travis Oliphant wrote: > There is talk of ctypes supporting the new interface which is a worthy > development. Please encourage that if you can. That would certainly be excellent, esp. given how ctypes is slated to be officially part of python 2.5. I think it would greatly improve the interoperability landscape for python if the out-of-the-box toolset had proper access to numpy arrays. Cheers, f From strawman at astraw.com Fri Jun 16 21:10:49 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 16 Jun 2006 18:10:49 -0700 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <44935699.1040104@astraw.com> I noticed in your note labeled 'June 16, 2006' that you refer to the "desc" field. However, in the struct description above, there is only a field named "descr". Also, I suggest that you update the information in the comments of the descr field of the structure description to contain the fact that inter.descr is a reference to a tuple equal to ("PyArrayInterface Version #",new_tuple_with_array_interface). What is currently there seems out of date given the new information. Finally, in the comment section describing this field, I strongly suggest noting that this field is present *if and only if* the ARR_HAS_DESCR flag is present. It will be clearer if it's there rather than in the text underneath. Is the "#" in the string meant to be replaced with "3"? If so, why not write 3? Also, in your note, you should explain whether "dummy" (renamed from "version") should still be checked as a sanity check or whether it should now be ignored.
I think we could call the field "two" and keep the sanity check for backwards compatibility. I agree it is confusing to have two different version numbers in the same struct, so I don't mind having the official name of the field being something other than "version", but if we keep it as a required sanity check (in which case it probably shouldn't be named "dummy"), the whole thing will remain backwards compatible with all current code. Anyhow, I'm very excited about this array interface, and I await the outcome of the Summer of Code project on the 'micro-array' implementation based on it! Cheers! Andrew Travis Oliphant wrote: >I just updated the array interface page to emphasize we now have version >3. NumPy still supports objects that expose (the C-side) of version 2 >of the array interface, though. > >The new interface is basically the same except (mostly) for asthetics: >The differences are listed at the bottom of > >http://numeric.scipy.org/array_interface.html > >There is talk of ctypes supporting the new interface which is a worthy >development. Please encourage that if you can. > >Please voice concerns now if you have any. > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From sebastian.beca at gmail.com Fri Jun 16 19:01:44 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 19:01:44 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4492EF01.10307@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID: Thanks! Avoiding the inner loop is MUCH faster (~20-300 times than the original). Nevertheless I don't think I can use hypot as it only works for two dimensions. The general problem I have is: A = random( [C, K] ) B = random( [N, K] ) C ~ 1-10 N ~ Large (thousands, millions.. i.e. 
my dataset) K ~ 2-100 (dimensions of my problem, i.e. not fixed a priori.) I adapted your proposed version to this for K dimensions:

def d4():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = sqrt( sum(xy**2, axis=1) )
    return d

Maybe there's another alternative to d4? Thanks again, Sebastian.

> def d_2():
>     d = zeros([4, 10000], dtype=float)
>     for i in range(4):
>         xy = A[i] - B
>         d[i] = xy[:,0]**2 + xy[:,1]**2
>     return d
>
> This is something like 250 times as fast as the naive Python solution; another five times faster than the fastest distance computing version that I could come up with (using hypot).
>
> -tim
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From sebastian.beca at gmail.com Fri Jun 16 19:04:00 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 19:04:00 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID: Please replace: C = 4 N = 1000

> d = zeros([C, N], dtype=float)

BK. From a.u.r.e.l.i.a.n at gmx.net Sat Jun 17 02:47:24 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Sat, 17 Jun 2006 08:47:24 +0200 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID: <4493A57C.1030904@gmx.net> Hi,

> def d4():
>     d = zeros([4, 1000], dtype=float)
>     for i in range(4):
>         xy = A[i] - B
>         d[i] = sqrt( sum(xy**2, axis=1) )
>     return d
>
> Maybe there's another alternative to d4?
> Thanks again,

I think this is the fastest you can get. Maybe it would be nicer to use the .sum() method instead of the sum function, but that is just my personal opinion. I am curious how this compares to the matlab version.
:) Johannes

From erin.sheldon at gmail.com Thu Jun 15 13:37:16 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Thu, 15 Jun 2006 13:37:16 -0400 Subject: [Numpy-discussion] Recarray attributes writeable Message-ID: <331116dc0606151037x2023b0beu9c4c995f40b34890@mail.gmail.com> Hi everyone - Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed for read only; one cannot alter the data through them. Yet they are writeable:

>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>> tr.field('ra')[:] = 0.0
>>> tr.ra
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> tr.ra = 3
>>> tr.ra
3
>>> tr.field('ra')
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts?
Erin

From faltet at carabos.com Sat Jun 17 04:17:28 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 17 Jun 2006 10:17:28 +0200 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <449326AB.4000306@astraw.com> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> Message-ID: <1150532248.3928.29.camel@localhost.localdomain> On Fri, 16 Jun 2006 at 14:46 -0700, Andrew Straw wrote:

> Erin Sheldon wrote:
>
> >Anyway - Recarrays have convenience attributes such that
> >fields may be accessed through "." in addition to
> >the "field()" method. These attributes are designed for
> >read only; one cannot alter the data through them.
> >Yet they are writeable:
> >
> >>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
> >>>>tr.field('ra')[:] = 0.0
> >>>>tr.ra
> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
> >
> >>>>tr.ra = 3
> >>>>tr.ra
> >3
> >
> >>>>tr.field('ra')
> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
> >
> >I feel this should raise an exception, just as with trying to write
> >to the "size" attribute. Any thoughts?
>
> I have not used recarrays much, so take this with the appropriate
> measure of salt.
>
> I'd prefer to drop the entire pseudo-attribute thing completely before
> it gets entrenched. (Perhaps it's too late.)

However, I think that this has its utility, especially when accessing nested fields (see later). In addition, I'd suggest introducing a special accessor called, say, 'fields' in order to access the fields themselves and not the attributes.
For example, if you want to access the 'strides' attribute, you can do it in the usual way:

>>> import numpy
>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides')
>>> tr.strides
(20,)

but, if you want to access *field* 'strides' you could do it by issuing:

>>> tr.fields.strides
>>> tr.fields.strides[:]
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

We have several advantages in adopting the previous approach:

1. You don't mix (nor pollute) the namespaces for attributes and fields.
2. You have a clear idea when you are accessing a variable or a field.
3. Accessing nested columns would still be very easy: tr.field('nested1').field('nested2').field('nested3') vs tr.fields.nested1.nested2.nested3
4. You can also define a proper __getitem__ for accessing fields: tr.fields['nested1']['nested2']['nested3']. In the same way, elements of the 'nested2' field could be accessed by: tr.fields['nested1']['nested2'][2:10:2].
5. Finally, you can even prevent setting or deleting columns by disabling __setattr__ and __delattr__.

PyTables has adopted a similar schema for accessing nested columns, except for 4, where we decided not to accept both strings and slices for the __getitem__() method (you know the mantra: "there should preferably be just one way of doing things", although maybe we've been a bit too strict in this case), and I think it works reasonably well. In any case, the idea is to decouple the attributes and fields so that they don't get mixed. Implementing this shouldn't be complicated at all, but I'm afraid that I can't do this right now :-( -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V.
Enjoy Data "-" From fullung at gmail.com Sat Jun 17 07:30:43 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 17 Jun 2006 13:30:43 +0200 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <20060617113043.GA910@dogbert.sdsl.sun.ac.za> Hello all On Fri, 16 Jun 2006, Travis Oliphant wrote:

> I just updated the array interface page to emphasize we now have version
> 3. NumPy still supports objects that expose (the C-side) of version 2
> of the array interface, though.
> Please voice concerns now if you have any.

In the documentation for the data attribute you say: "A reference to the object with this attribute must be stored by the new object if the memory area is to be secured." Does that mean a reference to the __array_interface__ or a reference to the object containing the __array_interface__? Regards, Albert From fperez.net at gmail.com Sat Jun 17 11:27:42 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 17 Jun 2006 09:27:42 -0600 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <1150532248.3928.29.camel@localhost.localdomain> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> <1150532248.3928.29.camel@localhost.localdomain> Message-ID: On 6/17/06, Francesc Altet wrote:

> However, I think that this has its utility, especially when accessing
> nested fields (see later). In addition, I'd suggest introducing a
> special accessor called, say, 'fields' in order to access the fields
> themselves and not the attributes.
> For example, if you want to access
> the 'strides' attribute, you can do it in the usual way:
>
> >>> import numpy
> >>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides')
> >>> tr.strides
> (20,)
>
> but, if you want to access *field* 'strides' you could do it by issuing:
>
> >>> tr.fields.strides
> >>> tr.fields.strides[:]
> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

[...] +1 I meant to write exactly the same thing, but was too lazy to do it :) Cheers, f From acannon at gmail.com Sat Jun 17 17:41:15 2006 From: acannon at gmail.com (Alex Cannon) Date: Sat, 17 Jun 2006 14:41:15 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4493A57C.1030904@gmx.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com> How about this?

def d5():
    return add.outer(sum(A*A, axis=1), sum(B*B, axis=1)) - \
           2.*dot(A, transpose(B))

From robert.kern at gmail.com Sat Jun 17 17:49:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Jun 2006 16:49:16 -0500 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com> Message-ID: Alex Cannon wrote:

> How about this?
>
> def d5():
>     return add.outer(sum(A*A, axis=1), sum(B*B, axis=1)) - \
>            2.*dot(A, transpose(B))

You might lose some precision with that approach, so the OP should compare results and timings to look at the tradeoffs. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From erin.sheldon at gmail.com Sat Jun 17 09:40:20 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Sat, 17 Jun 2006 09:40:20 -0400 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <1150532248.3928.29.camel@localhost.localdomain> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> <1150532248.3928.29.camel@localhost.localdomain> Message-ID: <331116dc0606170640g3a862eeeh15aa19f96bccb842@mail.gmail.com> This reply sent 9:36 AM, Jun 17 (because it may not show up for a day or so from my gmail account, if it shows up at all) On 6/17/06, Francesc Altet wrote: > El dv 16 de 06 del 2006 a les 14:46 -0700, en/na Andrew Straw va > escriure: > > Erin Sheldon wrote: > > > > >Anyway - Recarrays have convenience attributes such that > > >fields may be accessed through "." in additioin to > > >the "field()" method. These attributes are designed for > > >read only; one cannot alter the data through them. > > >Yet they are writeable: > > > > > > > > > > > >>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec') > > >>>>tr.field('ra')[:] = 0.0 > > >>>>tr.ra > > >>>> > > >>>> > > >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > > > > > > > > > > >>>>tr.ra = 3 > > >>>>tr.ra > > >>>> > > >>>> > > >3 > > > > > > > > >>>>tr.field('ra') > > >>>> > > >>>> > > >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > > > > >I feel this should raise an exception, just as with trying to write > > >to the "size" attribute. Any thoughts? > > > > > > > > I have not used recarrays much, so take this with the appropriate > > measure of salt. > > > > I'd prefer to drop the entire pseudo-attribute thing completely before > > it gets entrenched. (Perhaps it's too late.) > > > I think that initially I would concur to drop them. I am new to numpy, however, so they are not entrenched for me. Anyway, see below. > However, I think that this has its utility, specially when accessing to > nested fields (see later). 
In addition, I'd suggest introducing a > special accessor called, say, 'fields' in order to access the fields > themselves and not the attributes. For example, if you want to access > the 'strides' attribute, you can do it in the usual way: > > >>> import numpy > >>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides') > >>> tr.strides > (20,) > > but, if you want to access *field* 'strides' you could do it by issuing: > > >>> tr.fields.strides > > >>> tr.fields.strides[:] > array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > We have several advantages in adopting the previous approach: > > 1. You don't mix (nor pollute) the namespaces for attributes and fields. > > 2. You have a clear idea when you are accessing a variable or a field. > > 3. Accessing nested columns would still be very easy: > tr.field('nested1').field('nested2').field('nested3') vs > tr.fields.nested1.nested2.nested3 > > 4. You can also define a proper __getitem__ for accessing fields: > tr.fields['nested1']['nested2']['nested3']. > In the same way, elements of 'nested2' field could be accessed by: > tr.fields['nested1']['nested2'][2:10:2]. > > 5. Finally, you can even prevent setting or deleting columns by > disabling the __setattr__ and __delattr__. This is interesting, and I would add a 6th to this: 6. The .fields by itself could return the names of the fields, which are currently not accessible in any simple way. I always think that these should be methods (.fields(),.size(), etc) but if we are going down the attribute route, this might be a simple fix. > > PyTables has adopted a similar schema for accessing nested columns, > except for 4, where we decided not to accept both strings and slices for > the __getitem__() method (you know the mantra: "there should preferably > be just one way of doing things", although maybe we've been a bit too > much strict in this case), and I think it works reasonably well. 
> In any case, the idea is to decouple the attributes and fields so that they
> don't get mixed.

Strings or fieldnum access greatly improves the scriptability, but this can always be done through the .field() access. Erin
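The 'fields' accessor discussed in this thread was a proposal, not an existing API. As a rough present-day sketch of the idea (the class and names below are invented for illustration and are NOT part of numpy's recarray API):

```python
import numpy as np

class Fields:
    """Hypothetical 'fields' accessor sketching Francesc's proposal."""
    def __init__(self, arr):
        object.__setattr__(self, '_arr', arr)
    def __getattr__(self, name):
        return self._arr[name]       # attribute access reaches a *field*
    def __getitem__(self, name):
        return self._arr[name]       # item access reaches a field too
    def __setattr__(self, name, value):
        # point 5 above: forbid rebinding fields through plain attributes
        raise AttributeError("fields cannot be rebound via attribute access")

# A structured array with a field name that shadows an ndarray attribute:
tr = np.zeros(10, dtype=[('id', 'i4'), ('ra', 'f8'), ('strides', 'f8')])
f = Fields(tr)
# tr.strides is the ndarray attribute (the byte strides);
# f.strides reaches the 'strides' *field*, a separate namespace.
```

Attribute and field namespaces stay decoupled, and `f.ra = 3` raises instead of silently shadowing the field.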
From sebastian.beca at gmail.com Sun Jun 18 18:49:27 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Sun, 18 Jun 2006 18:49:27 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4493A57C.1030904@gmx.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: I checked the matlab version's code and it does the same as discussed here.
The only thing to check is to make sure you loop over the shorter dimension of the output array. Speed-wise the Matlab code still runs about twice as fast for large sets of data (timing by hand and comparing), nevertheless the improvement over calculating each value as in d1 is significant (10-300 times) and enough for my needs. Thanks to all. Sebastian Beca PD: I also tried the d5 version Alex sent but the results are not the same so I couldn't compare. My final version was:

K = 10
C = 3
N = 2500  # One could switch around C and N now.
A = random.random( [N, K])
B = random.random( [C, K])

def dist():
    d = zeros([N, C], dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i,:] = sqrt(sum(xy**2, axis=1))
        return d
    else:
        for j in range(C):
            xy = A - B[j]
            d[:,j] = sqrt(sum(xy**2, axis=1))
        return d

On 6/17/06, Johannes Loehnert wrote:
> Hi,
>
> > def d4():
> >     d = zeros([4, 1000], dtype=float)
> >     for i in range(4):
> >         xy = A[i] - B
> >         d[i] = sqrt( sum(xy**2, axis=1) )
> >     return d
> >
> > Maybe there's another alternative to d4? Thanks again,
>
> I think this is the fastest you can get. Maybe it would be nicer to use
> the .sum() method instead of the sum function, but that is just my personal
> opinion.
>
> I am curious how this compares to the matlab version.
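A side note on the d5 mismatch reported above. As written, d5 returns *squared* distances (there is no sqrt), and the algebraic expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2a.b can dip slightly below zero through rounding. A sketch in today's NumPy (sizes and names illustrative): taking a clipped square root makes the dot-product trick agree with the direct method.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, K = 2500, 3, 10              # illustrative sizes
A = rng.random((N, K))
B = rng.random((C, K))

def dist_direct(A, B):
    """Loop over the short axis, like the final dist() above."""
    d = np.empty((len(A), len(B)))
    for j in range(len(B)):
        d[:, j] = np.sqrt(((A - B[j]) ** 2).sum(axis=1))
    return d

def dist_dot(A, B):
    """||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b, then a clipped sqrt."""
    sq = np.add.outer((A * A).sum(axis=1), (B * B).sum(axis=1)) - 2.0 * (A @ B.T)
    return np.sqrt(np.clip(sq, 0.0, None))   # clip rounding-induced negatives
```

The dot-product form does one big matrix product instead of C passes over A, at some cost in precision, as Robert Kern noted.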
:) Johannes

From aisaac at american.edu Sun Jun 18 22:05:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 18 Jun 2006 22:05:51 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:

> def dist():
>     d = zeros([N, C], dtype=float)
>     if N < C:
>         for i in range(N):
>             xy = A[i] - B
>             d[i,:] = sqrt(sum(xy**2, axis=1))
>         return d
>     else:
>         for j in range(C):
>             xy = A - B[j]
>             d[:,j] = sqrt(sum(xy**2, axis=1))
>         return d

But that is 50% slower than Johannes's version:

def dist_loehner1():
    d = A[:, newaxis, :] - B[newaxis, :, :]
    d = sqrt((d**2).sum(axis=2))
    return d

Cheers, Alan Isaac From tim.hochberg at cox.net Sun Jun 18 23:18:23 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sun, 18 Jun 2006 20:18:23 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: <4496177F.7010809@cox.net> Alan G Isaac wrote:

>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
>
>>def dist():
>>    d = zeros([N, C], dtype=float)
>>    if N < C:
>>        for i in range(N):
>>            xy = A[i] - B
>>            d[i,:] = sqrt(sum(xy**2, axis=1))
>>        return d
>>    else:
>>        for j in range(C):
>>            xy = A - B[j]
>>            d[:,j] = sqrt(sum(xy**2, axis=1))
>>        return d
>
>But that is 50% slower than Johannes's version:
>
>def dist_loehner1():
>    d = A[:, newaxis, :] - B[newaxis, :, :]
>    d = sqrt((d**2).sum(axis=2))
>    return d

Are you sure about that? I just ran it through timeit, using Sebastian's array sizes and I get Sebastian's version being 150% *faster*.
This could well be cache-size dependent, so may vary from box to box, but I'd expect Sebastian's current version to scale better in general. -tim

From aisaac at american.edu Mon Jun 19 00:30:12 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 19 Jun 2006 00:30:12 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4496177F.7010809@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: On Sun, 18 Jun 2006, Tim Hochberg apparently wrote:

> Alan G Isaac wrote:
>> On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
>>> def dist():
>>>     d = zeros([N, C], dtype=float)
>>>     if N < C:
>>>         for i in range(N):
>>>             xy = A[i] - B
>>>             d[i,:] = sqrt(sum(xy**2, axis=1))
>>>         return d
>>>     else:
>>>         for j in range(C):
>>>             xy = A - B[j]
>>>             d[:,j] = sqrt(sum(xy**2, axis=1))
>>>         return d
>> But that is 50% slower than Johannes's version:
>> def dist_loehner1():
>>     d = A[:, newaxis, :] - B[newaxis, :, :]
>>     d = sqrt((d**2).sum(axis=2))
>>     return d
> Are you sure about that? I just ran it through timeit, using Sebastian's
> array sizes and I get Sebastian's version being 150% faster. This
> could well be cache-size dependent, so may vary from box to box, but I'd
> expect Sebastian's current version to scale better in general.

No, I'm not sure. Script attached bottom. Most recent output follows: for reasons I have not determined, it doesn't match my previous runs ...
Alan

>>> execfile(r'c:\temp\temp.py')
dist_beca    :   3.042277
dist_loehner1:   3.170026

#################################
#THE SCRIPT
import sys
sys.path.append("c:\\temp")
import numpy
from numpy import *
import timeit

K = 10
C = 2500
N = 3  # One could switch around C and N now.
A = numpy.random.random( [N, K] )
B = numpy.random.random( [C, K] )

# beca
def dist_beca():
    d = zeros([N, C], dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i,:] = sqrt(sum(xy**2, axis=1))
        return d
    else:
        for j in range(C):
            xy = A - B[j]
            d[:,j] = sqrt(sum(xy**2, axis=1))
        return d

#loehnert
def dist_loehner1():
    # drawback: memory usage temporarily doubled
    # solution see below
    d = A[:, newaxis, :] - B[newaxis, :, :]
    # written as 3 expressions for more clarity
    d = sqrt((d**2).sum(axis=2))
    return d

if __name__ == "__main__":
    t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100)
    t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100)
    fmt = "%-10s:\t" + "%10.6f"
    print fmt%('dist_beca', t1)
    print fmt%('dist_loehner1', t8)

From alexandre.fayolle at logilab.fr Mon Jun 19 04:02:34 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Mon, 19 Jun 2006 10:02:34 +0200 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606181601l3f788fd9n57ac6ce4d4af43a6@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> <20060613073153.GB8675@crater.logilab.fr> <51f97e530606181601l3f788fd9n57ac6ce4d4af43a6@mail.gmail.com> Message-ID: <20060619080234.GE8946@crater.logilab.fr> I'm bringing back the discussion on list. On Mon, Jun 19, 2006 at 12:01:27AM +0100, stephen emslie wrote:

> >You will get this in numarray.nd_image, the function is
> >called label. It is also available in recent versions of scipy, in
> >module scipy.ndimage.
>
> Thanks for pointing me in the right direction.
> I've been playing around with
> this and I'm getting along with my problem, which is to find the areas of
> the connected components in the binary image. ndimage.label has been a great
> help in identifying and locating each shape in my image, but I am not quite
> sure how to interpret the results. I would like to be able to calculate the
> area of each slice returned by ndimage.label. Is there a simple way to do
> this?

Yes, you will get an example in http://stsdas.stsci.edu/numarray/numarray-1.5.html/node98.html

> Also, being very new to scipy I don't fully understand how the slice objects
> returned by label actually work. Is there some documentation on this module
> that I could look at?

http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html

-- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From gnurser at googlemail.com Mon Jun 19 07:42:22 2006 From: gnurser at googlemail.com (George Nurser) Date: Mon, 19 Jun 2006 12:42:22 +0100 Subject: [Numpy-discussion] f2py produces so.so Message-ID: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> I have run into a strange problem with the current numpy/f2py (f2py 2_2631, numpy 2631). I have a file [Wright.f] which contains 5 different fortran subroutines. Arguments have been specified as input or output by adding cf2py intent (in), (out) etc.
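To make the label-and-areas recipe from the connected-areas thread concrete: a short sketch using scipy.ndimage (the modern home of numarray.nd_image); the image below is invented for illustration.

```python
import numpy as np
from scipy import ndimage

# a small binary image with two 4-connected components
img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 0, 0, 1],
                [0, 0, 0, 1, 1]])

# label() assigns 1..n to connected components (default: 4-connectivity)
labels, n = ndimage.label(img)

# area (pixel count) of each component, summing the image over each label
areas = ndimage.sum(img, labels, index=range(1, n + 1))
```

`ndimage.find_objects(labels)` then gives the slice objects locating each component, which is what `label`'s companion functions consume.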
Doing f2py -c Wright.f -m Wright.so does not produce Wright.so. Instead it produces a *directory* Wright containing a library so.so. This actually works fine once it is put onto the python path, but if it is renamed it cannot be successfully imported, so this will cause problems if it happens to a second file. George. -------------- next part -------------- A non-text attachment was scrubbed... Name: Wright.f Type: application/octet-stream Size: 11459 bytes Desc: not available URL: From benjamin at decideur.info Mon Jun 19 07:46:38 2006 From: benjamin at decideur.info (Benjamin Thyreau) Date: Mon, 19 Jun 2006 13:46:38 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <200606162001.31342.perrot@shfj.cea.fr> References: <200606162001.31342.perrot@shfj.cea.fr> Message-ID: <200606191346.38538.benjamin@decideur.info> On Friday 16 June 2006 20:01, Matthieu Perrot wrote:

> hi,
>
> I need to handle strings shaped by a numpy array whose data own to a C (...)
> a new array descr based on PyArray_OBJECT and change its getitem/setitem
> --
> Matthieu Perrot Tel: +33 1 69 86 78 21
> CEA - SHFJ Fax: +33 1 69 86 77 86
> 4, place du General Leclerc
> 91401 Orsay Cedex France

Hi, it seems I had a similar problem when I tried to use numpy to map STL C++ vectors (which are contiguous structures). I actually tried to overload the getitem() field of my own dtype to build python wrappers at runtime around the allocated C objects array (i.e. NOT an array of Python objects). Actually your suggested modification seems to work for me; I don't know if it's the right solution, though. Are there any plans to update the trunk with something similar?
-- Benjamin Thyreau decideur.info From strawman at astraw.com Mon Jun 19 12:32:44 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 Jun 2006 09:32:44 -0700 Subject: [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online Message-ID: <4496D1AC.8030100@astraw.com> I have updated the apt repository I maintain for Ubuntu's Dapper, which now includes: numpy matplotlib scipy Each package is from a recent SVN checkout and should thus be regarded as "bleeding edge". The repository has a new URL: http://debs.astraw.com/dapper/ I intend to keep this repository online for an extended duration. If you want to put this repository in your sources list, you need to add the following lines to /etc/apt/sources.list:: deb http://debs.astraw.com/ dapper/ deb-src http://debs.astraw.com/ dapper/ I have not yet investigated the use of ATLAS in building or using the numpy binaries, and if performance is critical for you, please evaluate speed before using it. I intend to visit this issue, but I cannot say when. The Debian source packages were generated using stdeb, [ http://stdeb.python-hosting.com/ ] a Python to Debian source package conversion utility I wrote. stdeb does not build packages that follow the Debian Python Policy, so the packages here may be slighly unusual compared to Python packages in the official Debian or Ubuntu repositiories. For example, example scripts do not get installed, and no documentation is installed. Future releases of stdeb may resolve these issues. As always, feedback is very appreciated. Cheers! Andrew From jk985 at tom.com Thu Jun 22 12:39:06 2006 From: jk985 at tom.com (=?GB2312?B?N9TCMS0yy9XW3S+xsb6pOC05?=) Date: Fri, 23 Jun 2006 00:39:06 +0800 Subject: [Numpy-discussion] =?GB2312?B?ssm5urPJsb653MDt0+vLq9OuzLjF0Ly8x8k8YWQ+?= Message-ID: An HTML attachment was scrubbed... 
URL: From bhoel at despammed.com Mon Jun 19 15:00:18 2006 From: bhoel at despammed.com (=?utf-8?q?Berthold_H=C3=B6llmann?=) Date: Mon, 19 Jun 2006 21:00:18 +0200 Subject: [Numpy-discussion] f2py produces so.so References: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> Message-ID: "George Nurser" writes: > I have run into a strange problem with the current numpy/f2py (f2py > 2_2631, numpy 2631). > I have a file [Wright.f] which contains 5 different fortran > subroutines. Arguments have been specified as input or output by > adding cf2py intent (in), (out) etc. > > Doing > f2py -c Wright.f -m Wright.so simply try f2py -c Wright.f -m Wright instead. Python extension modules require an exported routine named init<modulename> (initWright in this case). But you told f2py to generate an extension module named "so" in a package named "Wright", so the generated function is named initso. The *.so file cannot be renamed because then there is no matching init function anymore.
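Berthold's fix, spelled out as command lines (an aside assuming f2py on the PATH and a working Fortran compiler; Wright.f is the poster's file):

```shell
# Wrong: f2py parses "Wright.so" as package "Wright" + module "so",
# so the build lands in a Wright/ directory as so.so, with entry point initso.
f2py -c Wright.f -m Wright.so

# Right: pass the bare module name. f2py appends the platform suffix itself
# and generates the matching initWright entry point that the import machinery
# looks for, so the resulting Wright.so imports cleanly from anywhere.
f2py -c Wright.f -m Wright
python -c "import Wright"
```

The same logic explains why renaming the built file breaks the import: the init symbol baked into the shared library no longer matches the file name.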
Regards Berthold -- berthold at xn--hllmanns-n4a.de / bhoel at web.de / From sebastian.beca at gmail.com Mon Jun 19 16:04:31 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Mon, 19 Jun 2006 16:04:31 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: I just ran Alan's script and I don't get consistent results for 100 repetitions. I boosted it to 1000, and ran it several times. The faster one varied a lot, but both came into a ~ +-1.5% difference. When it comes to scaling, for my problem (fuzzy clustering), N is the size of the dataset, which should span from thousands to millions. C is the amount of clusters, usually less than 10, and K the amount of features (the dimension I want to sum over) is also usually less than 100. So mainly I'm concerned with scaling across N. I tried C=3, K=4, N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results were: dist_beca: 1.1, 4.5, 16, 28, 37 dist_loehner1: 1.7, 6.5, 22, 35, 47 I also tried scaling across K, with C=3, N=2500, and K=5-50. I couldn't get any consistent results for small K, but both tend to perform as well (+-2%) for large K (K>15). I'm not sure how these work in the backend so I can't argue as to why one should scale better than the other. Regards, Sebastian.
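The two versions being timed in this thread either loop in Python over N or C (dist_beca) or materialize the full NxCxK difference array (dist_loehner1). For reference, the loop can be removed entirely with the Gram-matrix identity ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a.b, which only ever allocates the NxC result plus O(N+C) row-norm vectors. A sketch in present-day NumPy (the `@` matmul operator postdates this 2006 thread):

```python
import numpy as np

def dist_gram(A, B):
    """Euclidean distance matrix between rows of A (N x K) and B (C x K)."""
    # Squared row norms of each point set.
    aa = (A * A).sum(axis=1)            # shape (N,)
    bb = (B * B).sum(axis=1)            # shape (C,)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, via one matrix product.
    sq = aa[:, None] + bb[None, :] - 2.0 * (A @ B.T)
    # Round-off can leave tiny negatives where points nearly coincide.
    np.maximum(sq, 0.0, out=sq)
    return np.sqrt(sq)
```

The matrix product dominates, so BLAS does the heavy lifting; the trade-off is slightly worse numerical accuracy for nearly coincident points than the subtract-square-sum versions discussed here.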
On 6/19/06, Alan G Isaac wrote: > On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: > > > Alan G Isaac wrote: > > >> On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: > > >>> def dist(): > >>> d = zeros([N, C], dtype=float) > >>> if N < C: for i in range(N): > >>> xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) > >>> return d > >>> else: > >>> for j in range(C): > >>> xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) > >>> return d > > >> But that is 50% slower than Johannes's version: > > >> def dist_loehner1(): > >> d = A[:, newaxis, :] - B[newaxis, :, :] > >> d = sqrt((d**2).sum(axis=2)) > >> return d > > > Are you sure about that? I just ran it through timeit, using Sebastian's > > array sizes and I get Sebastian's version being 150% faster. This > > could well be cache size dependant, so may vary from box to box, but I'd > > expect Sebastian's current version to scale better in general. > > No, I'm not sure. > Script attached bottom. > Most recent output follows: > for reasons I have not determined, > it doesn't match my previous runs ... > Alan > > >>> execfile(r'c:\temp\temp.py') > dist_beca : 3.042277 > dist_loehner1: 3.170026 > > > ################################# > #THE SCRIPT > import sys > sys.path.append("c:\\temp") > import numpy > from numpy import * > import timeit > > > K = 10 > C = 2500 > N = 3 # One could switch around C and N now. 
> A = numpy.random.random( [N, K] ) > B = numpy.random.random( [C, K] ) > > # beca > def dist_beca(): > d = zeros([N, C], dtype=float) > if N < C: > for i in range(N): > xy = A[i] - B > d[i,:] = sqrt(sum(xy**2, axis=1)) > return d > else: > for j in range(C): > xy = A - B[j] > d[:,j] = sqrt(sum(xy**2, axis=1)) > return d > > #loehnert > def dist_loehner1(): > # drawback: memory usage temporarily doubled > # solution see below > d = A[:, newaxis, :] - B[newaxis, :, :] > # written as 3 expressions for more clarity > d = sqrt((d**2).sum(axis=2)) > return d > > > if __name__ == "__main__": > t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) > t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) > fmt="%-10s:\t"+"%10.6f" > print fmt%('dist_beca', t1) > print fmt%('dist_loehner1', t8) > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Mon Jun 19 16:28:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon, 19 Jun 2006 13:28:53 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: <44970905.4080005@cox.net> Sebastian Beca wrote: >I just ran Alan's script and I don't get consistent results for 100 >repetitions. I boosted it to 1000, and ran it several times. The >faster one varied alot, but both came into a ~ +-1.5% difference. > >When it comes to scaling, for my problem(fuzzy clustering), N is the >size of the dataset, which should span from thousands to millions. C >is the amount of clusters, usually less than 10, and K the amount of >features (the dimension I want to sum over) is also usually less than >100. So mainly I'm concerned with scaling across N. 
I tried C=3, K=4, >N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results >were: >dist_beca: 1.1, 4.5, 16, 28, 37 >dist_loehner1: 1.7, 6.5, 22, 35, 47 > >I also tried scaling across K, with C=3, N=2500, and K=5-50. I >couldn't get any consistent results for small K, but both tend to >perform as well (+-2%) for large K (K>15). > >I'm not sure how these work in the backend so I can't argue as to >why one should scale better than the other. > > The reason I suspect that dist_beca should scale better is that dist_loehner1 generates an intermediate array of size NxCxK, while dist_beca produces intermediate matrices that are only NxK or CxK. For large problems, allocating that extra memory and fetching it into and out of the cache can be a bottleneck. Here's another version that allocates even less in the way of temporaries at the expense of being borderline incomprehensible. It still allocates an NxK temporary array, but it allocates it once ahead of time and then reuses it for all subsequent calculations. You're welcome to use it, but I'm not sure I'd recommend it unless this function is really a speed bottleneck, as it could end up being hard to read later (I left implementing the N < C case). I have another idea that might reduce the memory overhead still further; if I get a chance I'll try it out and let you know if it results in a further speed up. -tim def dist2(A, B): d = zeros([N, C], dtype=float) if N < C: raise NotImplemented else: tmp = empty([N, K], float) tmp0 = tmp[:,0] rangek = range(1,K) for j in range(C): subtract(A, B[j], tmp) tmp *= tmp for k in rangek: tmp0 += tmp[:,k] sqrt(tmp0, d[:,j]) return d >Regards, > >Sebastian. >On 6/19/06, Alan G Isaac wrote: > > >>On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: >> >> >> >>>Alan G Isaac wrote: >>> >>> >>>>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: >>>> >>>> >>>>>def dist(): >>>>>d = zeros([N, C], dtype=float) >>>>>if N < C: for i in range(N): >>>>>xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) >>>>>return d >>>>>else: >>>>>for j in range(C): >>>>>xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) >>>>>return d >>>>> >>>>> >>>>But that is 50% slower than Johannes's version: >>>> >>>> >>>>def dist_loehner1(): >>>> d = A[:, newaxis, :] - B[newaxis, :, :] >>>> d = sqrt((d**2).sum(axis=2)) >>>> return d >>>> >>>> >>>Are you sure about that?
I just ran it through timeit, using Sebastian's >>>array sizes and I get Sebastian's version being 150% faster. This >>>could well be cache size dependant, so may vary from box to box, but I'd >>>expect Sebastian's current version to scale better in general. >>> >>> >>No, I'm not sure. >>Script attached bottom. >>Most recent output follows: >>for reasons I have not determined, >>it doesn't match my previous runs ... >>Alan >> >> >> >>>>>execfile(r'c:\temp\temp.py') >>>>> >>>>> >>dist_beca : 3.042277 >>dist_loehner1: 3.170026 >> >> >>################################# >>#THE SCRIPT >>import sys >>sys.path.append("c:\\temp") >>import numpy >>from numpy import * >>import timeit >> >> >>K = 10 >>C = 2500 >>N = 3 # One could switch around C and N now. >>A = numpy.random.random( [N, K] ) >>B = numpy.random.random( [C, K] ) >> >># beca >>def dist_beca(): >> d = zeros([N, C], dtype=float) >> if N < C: >> for i in range(N): >> xy = A[i] - B >> d[i,:] = sqrt(sum(xy**2, axis=1)) >> return d >> else: >> for j in range(C): >> xy = A - B[j] >> d[:,j] = sqrt(sum(xy**2, axis=1)) >> return d >> >>#loehnert >>def dist_loehner1(): >> # drawback: memory usage temporarily doubled >> # solution see below >> d = A[:, newaxis, :] - B[newaxis, :, :] >> # written as 3 expressions for more clarity >> d = sqrt((d**2).sum(axis=2)) >> return d >> >> >>if __name__ == "__main__": >> t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) >> t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) >> fmt="%-10s:\t"+"%10.6f" >> print fmt%('dist_beca', t1) >> print fmt%('dist_loehner1', t8) >> >> >> >> >>_______________________________________________ >>Numpy-discussion mailing list >>Numpy-discussion at lists.sourceforge.net >>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net 
>https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From tim.hochberg at cox.net Mon Jun 19 17:39:14 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon, 19 Jun 2006 14:39:14 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <44970905.4080005@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> <44970905.4080005@cox.net> Message-ID: <44971982.3090800@cox.net> Tim Hochberg wrote: >Sebastian Beca wrote: > > > >>I just ran Alan's script and I don't get consistent results for 100 >>repetitions. I boosted it to 1000, and ran it several times. The >>faster one varied alot, but both came into a ~ +-1.5% difference. >> >>When it comes to scaling, for my problem(fuzzy clustering), N is the >>size of the dataset, which should span from thousands to millions. C >>is the amount of clusters, usually less than 10, and K the amount of >>features (the dimension I want to sum over) is also usually less than >>100. So mainly I'm concerned with scaling across N. I tried C=3, K=4, >>N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results >>were: >>dist_beca: 1.1, 4.5, 16, 28, 37 >>dist_loehner1: 1.7, 6.5, 22, 35, 47 >> >>I also tried scaling across K, with C=3, N=2500, and K=5-50. I >>couldn't get any consistent results for small K, but both tend to >>perform as well (+-2%) for large K (K>15). >> >>I'm not sure how these work in the backend so I can't argument as to >>why one should scale better than the other. >> >> >> >> >The reason I suspect that dist_beca should scale better is that >dist_loehner1 generates an intermediate array of size NxCxK, while >dist_beca produces intermediate matrices that are only NxK or CxK. For >large problems, allocating that extra memory and fetching it into and >out of the cache can be a bottleneck. 
> >Here's another version that allocates even less in the way of >temporaries at the expense of being borderline incomprehensible. It >still allocates an NxK temporary array, but it allocates it once ahead >of time and then reuses it for all subsequent calculations. You're welcome >to use it, but I'm not sure I'd recommend it unless this function is >really a speed bottleneck as it could end up being hard to read later (I >left implementing the N < C case). >I have another idea that might reduce the memory overhead still further, >if I get a chance I'll try it out and let you know if it results in a >further speed up. > >-tim > > > def dist2(A, B): > d = zeros([N, C], dtype=float) > if N < C: > raise NotImplemented > else: > tmp = empty([N, K], float) > tmp0 = tmp[:,0] > rangek = range(1,K) > for j in range(C): > subtract(A, B[j], tmp) > tmp *= tmp > for k in rangek: > tmp0 += tmp[:,k] > sqrt(tmp0, d[:,j]) > return d > > Speaking of scaling: I tried this with K=25000 (10 x greater than Sebastian's original numbers). Much to my surprise it performed somewhat worse than Sebastian's dist() with large K. Below is a modified dist2 that performs about the same (marginally better here) for large K as well as a dist3 that performs about 50% better at both K=2500 and K=25000. -tim def dist2(A, B): d = empty([N, C], dtype=float) if N < C: raise NotImplemented else: tmp = empty([N, K], float) tmp0 = tmp[:,0] for j in range(C): subtract(A, B[j], tmp) tmp **= 2 d[:,j] = sum(tmp, axis=1) sqrt(d[:,j], d[:,j]) return d def dist3(A, B): d = zeros([N, C], dtype=float) rangek = range(K) if N < C: raise NotImplemented else: tmp = empty([N], float) for j in range(C): for k in rangek: subtract(A[:,k], B[j,k], tmp) tmp **= 2 d[:,j] += tmp sqrt(d[:,j], d[:,j]) return d > > > >>Regards, >> >>Sebastian.
>> >>On 6/19/06, Alan G Isaac wrote: >> >> >> >> >>>On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: >>> >>> >>> >>> >>> >>>>Alan G Isaac wrote: >>>> >>>> >>>> >>>> >>>>>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: >>>>> >>>>> >>>>> >>>>> >>>>>>def dist(): >>>>>>d = zeros([N, C], dtype=float) >>>>>>if N < C: for i in range(N): >>>>>>xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>>else: >>>>>>for j in range(C): >>>>>>xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>> >>>>>> >>>>>> >>>>>> >>>>>But that is 50% slower than Johannes's version: >>>>> >>>>> >>>>>def dist_loehner1(): >>>>> d = A[:, newaxis, :] - B[newaxis, :, :] >>>>> d = sqrt((d**2).sum(axis=2)) >>>>> return d >>>>> >>>>> >>>>> >>>>> >>>>Are you sure about that? I just ran it through timeit, using Sebastian's >>>>array sizes and I get Sebastian's version being 150% faster. This >>>>could well be cache size dependant, so may vary from box to box, but I'd >>>>expect Sebastian's current version to scale better in general. >>>> >>>> >>>> >>>> >>>No, I'm not sure. >>>Script attached bottom. >>>Most recent output follows: >>>for reasons I have not determined, >>>it doesn't match my previous runs ... >>>Alan >>> >>> >>> >>> >>> >>>>>>execfile(r'c:\temp\temp.py') >>>>>> >>>>>> >>>>>> >>>>>> >>>dist_beca : 3.042277 >>>dist_loehner1: 3.170026 >>> >>> >>>################################# >>>#THE SCRIPT >>>import sys >>>sys.path.append("c:\\temp") >>>import numpy >>> >>> >>>from numpy import * >> >> >>>import timeit >>> >>> >>>K = 10 >>>C = 2500 >>>N = 3 # One could switch around C and N now. 
>>>A = numpy.random.random( [N, K] ) >>>B = numpy.random.random( [C, K] ) >>> >>># beca >>>def dist_beca(): >>> d = zeros([N, C], dtype=float) >>> if N < C: >>> for i in range(N): >>> xy = A[i] - B >>> d[i,:] = sqrt(sum(xy**2, axis=1)) >>> return d >>> else: >>> for j in range(C): >>> xy = A - B[j] >>> d[:,j] = sqrt(sum(xy**2, axis=1)) >>> return d >>> >>>#loehnert >>>def dist_loehner1(): >>> # drawback: memory usage temporarily doubled >>> # solution see below >>> d = A[:, newaxis, :] - B[newaxis, :, :] >>> # written as 3 expressions for more clarity >>> d = sqrt((d**2).sum(axis=2)) >>> return d >>> >>> >>>if __name__ == "__main__": >>> t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) >>> t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) >>> fmt="%-10s:\t"+"%10.6f" >>> print fmt%('dist_beca', t1) >>> print fmt%('dist_loehner1', t8) >>> >>> >>> >>> >>>_______________________________________________ >>>Numpy-discussion mailing list >>>Numpy-discussion at lists.sourceforge.net >>>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>> >>> >>> >>> >>> >>_______________________________________________ >>Numpy-discussion mailing list >>Numpy-discussion at lists.sourceforge.net >>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> >> >> > > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From gnurser at googlemail.com Mon Jun 19 18:15:10 2006 From: gnurser at googlemail.com (George Nurser) Date: Mon, 19 Jun 2006 23:15:10 +0100 Subject: [Numpy-discussion] f2py produces so.so In-Reply-To: References: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> Message-ID: <1d1e6ea70606191515o23fbeaadt9084bf31ea435b6@mail.gmail.com> On 19/06/06, Berthold H?llmann wrote: > "George Nurser" writes: > > > I have run into a strange problem 
with the current numpy/f2py (f2py > > 2_2631, numpy 2631). > > I have a file [Wright.f] which contains 5 different fortran > > subroutines. Arguments have been specified as input or output by > > adding cf2py intent (in), (out) etc. > > > > Doing > > f2py -c Wright.f -m Wright.so > > simply try > > f2py -c Wright.f -m Wright > > instead. Python extension modules require the an exported routine > named init (initWright in this case). But you told f2py > to generate an extension module named "so" in a package named > "Wright", so the generated function is named initso. The *.so file > cannot be renamed because then there is no more matching init function > anymore. > > Regards > Berthold Stupid of me! Hit head against wall. Yes, I eventually worked out that f2py -c Wright.f -m Wright was OK. But many thanks for the explanation ....I see, what f2py was doing was perfectly logical. Regards, George. From david at ar.media.kyoto-u.ac.jp Tue Jun 20 00:26:34 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 20 Jun 2006 13:26:34 +0900 Subject: [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <4496D1AC.8030100@astraw.com> References: <4496D1AC.8030100@astraw.com> Message-ID: <449778FA.507@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > I have updated the apt repository I maintain for Ubuntu's Dapper, which > now includes: > > numpy > matplotlib > scipy > > Each package is from a recent SVN checkout and should thus be regarded > as "bleeding edge". The repository has a new URL: > http://debs.astraw.com/dapper/ I intend to keep this repository online > for an extended duration. 
If you want to put this repository in your > sources list, you need to add the following lines to /etc/apt/sources.list:: > deb http://debs.astraw.com/ dapper/ > deb-src http://debs.astraw.com/ dapper/ > > I have not yet investigated the use of ATLAS in building or using the > numpy binaries, and if performance is critical for you, please evaluate > speed before using it. I intend to visit this issue, but I cannot say when. > > The Debian source packages were generated using stdeb, [ > http://stdeb.python-hosting.com/ ] a Python to Debian source package > conversion utility I wrote. stdeb does not build packages that follow > the Debian Python Policy, so the packages here may be slighly unusual > compared to Python packages in the official Debian or Ubuntu > repositiories. For example, example scripts do not get installed, and no > documentation is installed. Future releases of stdeb may resolve these > issues. > > As always, feedback is very appreciated. > > That's great. Last week, I sended several messages to the list regarding your messages about debian packages for numpy, but it looks they were lost somewhere.... Right now, I use the experimental package of debian + svn sources for numpy, and it works well. Is your approach based on this work, or is it totally different (on debian/ubuntu, packaging numpy + atlas should be easy, as the atlas+lapack library is compiled such as to be complete), David From strawman at astraw.com Tue Jun 20 01:08:59 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 Jun 2006 22:08:59 -0700 Subject: [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <449778FA.507@ar.media.kyoto-u.ac.jp> References: <4496D1AC.8030100@astraw.com> <449778FA.507@ar.media.kyoto-u.ac.jp> Message-ID: <449782EB.1060102@astraw.com> David Cournapeau wrote: > That's great. 
Last week, I sended several messages to the list > regarding your messages about debian packages for numpy, but it looks > they were lost somewhere.... > > Right now, I use the experimental package of debian + svn sources for > numpy, and it works well. Is your approach based on this work, or is > it totally different (on debian/ubuntu, packaging numpy + atlas should > be easy, as the atlas+lapack library is compiled such as to be complete), > > David Hi David, I did get your email last week (sorry for not replying sooner). I'm actually using my own tool "stdeb" to build these at the moment -- the 'official' package in experimental is surely better than mine, and I will probably switch to it over stdeb sooner or later... Cheers! Andrew From aisaac at american.edu Tue Jun 20 01:18:16 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Jun 2006 01:18:16 -0400 Subject: [Numpy-discussion] strange bug Message-ID: I think there is a bug in the **= operator, for dtype=float. Alan Isaac ## Script: import numpy print "numpy.__version__: ", numpy.__version__ ''' Illustrate a strange bug: ''' y = numpy.arange(10,dtype=float) print "y: ",y y *= y print "y**2: ",y z = numpy.arange(10,dtype=float) print "z: ", z z **= 2 print "z**2: ", z ## Output: numpy.__version__: 0.9.8 y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 4.09600000e+03 6.56100000e+03] From a.u.r.e.l.i.a.n at gmx.net Tue Jun 20 02:08:50 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue, 20 Jun 2006 08:08:50 +0200 Subject: [Numpy-discussion] strange bug In-Reply-To: References: Message-ID: <449790F2.4070100@gmx.net> Hi, > ## Output: > numpy.__version__: 0.9.8 > y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] > z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 
> z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 > 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 > 4.09600000e+03 6.56100000e+03] Obviously the last is z**4. dtypes are the same for y and z (float64). One addition: In [5]: z = arange(10, dtype=float) In [6]: z **= 1 In [7]: z zsh: 18263 segmentation fault ipython - Johannes From aisaac at american.edu Tue Jun 20 03:15:31 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Jun 2006 03:15:31 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <44971982.3090800@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> <44970905.4080005@cox.net> <44971982.3090800@cox.net> Message-ID: I think the distance matrix version below is about as good as it gets with these basic strategies.
fwiw, Alan Isaac

def dist(A,B):
    rowsA, rowsB = A.shape[0], B.shape[0]
    distanceAB = empty( [rowsA,rowsB] , dtype=float)
    if rowsA <= rowsB:
        temp = empty_like(B)
        for i in range(rowsA):
            #store A[i]-B in temp
            subtract( A[i], B, temp )
            temp *= temp
            sqrt( temp.sum(axis=1), distanceAB[i,:])
    else:
        temp = empty_like(A)
        for j in range(rowsB):
            #store A-B[j] in temp
            temp = subtract( A, B[j], temp )
            temp *= temp
            sqrt( temp.sum(axis=1), distanceAB[:,j])
    return distanceAB

From oliphant.travis at ieee.org Tue Jun 20 05:06:11 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 03:06:11 -0600 Subject: [Numpy-discussion] C-API support for numarray added to NumPy Message-ID: <4497BA83.7060507@ieee.org> C-API support for numarray is now checked in to NumPy SVN. With this support you should be able to compile numarray extensions by changing the include line from numarray/libnumarray.h to numpy/libnumarray.h You will also need to change the include directories used in compiling by appending the directories returned by numpy.numarray.util.get_numarray_include_dirs() This is most easily done using a numpy.distutils.misc_util Configuration instance: config.add_numarray_include_dirs() The work is heavily based on numarray. I just grabbed the numarray sources and translated the relevant functions to use NumPy's ndarrays. Please report problems and post patches. -Travis From oliphant.travis at ieee.org Tue Jun 20 05:24:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 03:24:34 -0600 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <200606162001.31342.perrot@shfj.cea.fr> References: <200606162001.31342.perrot@shfj.cea.fr> Message-ID: <4497BED2.9090601@ieee.org> Matthieu Perrot wrote: > hi, > > I need to handle strings shaped by a numpy array whose data belong to a C > structure.
> There are several possible answers to this problem:
> 1) use a numpy array of strings (PyArray_STRING) and so a (char *) object
> in C. It works as is, but you need to define a maximum size for your strings
> because your set of strings is contiguous in memory.
> 2) use a numpy array of objects (PyArray_OBJECT), and wrap each "C string"
> with a python object, using PyStringObject for example. Then our problem is
> that there are as many wrappers as data elements, and I believe data can't
> be shared when you create a PyStringObject using (char *), with
> PyString_AsStringAndSize for example.
>
> Now, I will expose a third way, which allows you to use strings with no
> size limit (as in solution 1) and doesn't create wrappers before you really
> need them (on demand/access).
>
> First, for convenience, we will use the C (char **) type to build an array
> of string pointers (as was suggested in solution 2). Now, the game is to
> make it work with the numpy API, and use it in python through a python
> array. Basically, I want behaviour very similar to arrays of PyObject,
> where data are not contiguous, only their addresses are. So, the idea is to
> create a new array descr based on PyArray_OBJECT and change its
> getitem/setitem functions to deal with my own data.
>
> I expected numpy to work with this convenient array descr, but it fails
> because PyArray_Scalar (arrayobject.c) doesn't call the descriptor's
> getitem function (in the PyArray_OBJECT case) but calls 2 lines which have
> been copy/pasted from the OBJECT_getitem function. Here is my small patch:
> replace (arrayobject.c:983-984):
>     Py_INCREF(*((PyObject **)data));
>     return *((PyObject **)data);
> by:
>     return descr->f->getitem(data, base);
>
> I played a lot with my new numpy array after this change and noticed that a
> lot of uses work:

This is an interesting solution. I was not considering it, though, and so I'm not surprised you have problems.
You can register new types but basing them off of PyArray_OBJECT can be problematic because of the special-casing that is done in several places to manage reference counting. You are supposed to register your own data-types and get your own typenumber. Then you can define all the functions for the entries as you wish. Riding on the back of PyArray_OBJECT may work if you are clever, but it may fail mysteriously as well because of a reference count snafu. Thanks for the tests and bug-reports. I have no problem changing the code as you suggest. -Travis From simon at arrowtheory.com Tue Jun 20 15:22:30 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue, 20 Jun 2006 20:22:30 +0100 Subject: [Numpy-discussion] what happened to numarray type names ? Message-ID: <20060620202230.07c3ae56.simon@arrowtheory.com> >>> import numpy >>> numpy.__version__ '0.9.9.2631' >>> numpy.Int32 Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'Int32' >>> This was working not so long ago. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From stefan at sun.ac.za Tue Jun 20 06:38:15 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 20 Jun 2006 12:38:15 +0200 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <20060620202230.07c3ae56.simon@arrowtheory.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> Message-ID: <20060620103815.GA23025@mentat.za.net> Hi Simon On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote: > > >>> import numpy > >>> numpy.__version__ > '0.9.9.2631' > >>> numpy.Int32 > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'module' object has no attribute 'Int32' > >>> > > This was working not so long ago. Int32, Float etc. are part of the old Numeric interface, that you can now access under the numpy.oldnumeric namespace. 
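A minimal check of the new lower-case names (a sketch; only standard numpy aliases are used here):

```python
import numpy as np

# New-style dtype names are lower case: int32, float64, and friends.
a = np.arange(5, dtype=np.int32)
b = np.arange(5, dtype=np.float64)

# The plain Python types also work as generic dtype arguments.
c = np.arange(5, dtype=float)
```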
If I understand correctly, doing

import numpy.oldnumeric as Numeric

should provide you with a Numeric-compatible replacement. The same types can be accessed under numpy as int32 (lower case) and friends. Cheers Stéfan From tim.hochberg at cox.net Tue Jun 20 08:28:28 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 20 Jun 2006 05:28:28 -0700 Subject: [Numpy-discussion] strange bug In-Reply-To: <449790F2.4070100@gmx.net> References: <449790F2.4070100@gmx.net> Message-ID: <4497E9EC.4090409@cox.net> Johannes Loehnert wrote: >Hi, > > > >>## Output: >>numpy.__version__: 0.9.8 >>y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] >>y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] >>z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] >>z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 >> 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 >> 4.09600000e+03 6.56100000e+03] >> >> > >obviously the last is z**4. dtypes are the same for y and z (float64). > > I ran into this yesterday and fixed it. It should be OK in SVN now. >One addition: > >In [5]: z = arange(10, dtype=float) > >In [6]: z **= 1 > >In [7]: z >zsh: 18263 segmentation fault ipython > > This one is still there however. I'll look at it. -tim > >- Johannes > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From christianson2 at llnl.gov Tue Jun 20 12:17:20 2006 From: christianson2 at llnl.gov (George Christianson) Date: Tue, 20 Jun 2006 09:17:20 -0700 Subject: [Numpy-discussion] Help for Windows Python, numpy and f2py Message-ID: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> Good morning, I used the Windows installer to install Python 2.4.3 on a late-model Dell PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from the Windows installers. Now I am trying to build a dll file for a Fortran 77 file and previously-generated (Linux) pyf file. I installed MinGW from the MinGW 5.0.2 Windows installer, and modified my Windows path to put the MinGW directory before a pre-existing Cygwin installation. However, both a setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the Windows command line fail with the message that the .NET Framework SDK has to be initialized or that the msvccompiler cannot be found. Any advice on what I'm missing would be much appreciated! Here is the message I get trying to run f2py:

C:\projects\workspace\MARSFortran>C:\python2.4.3\python C:\python2.4.3\Scripts\f2py.py -c --fcompiler=g77 mars.pyf mars.f>errors
error: The .NET Framework SDK needs to be installed before building extensions for Python.

C:\projects\workspace\MARSFortran>
C:\projects\workspace\MARSFortran>type errors
Unknown vendor: "g77"
running build
running config_fc
running build_src
building extension "mars" sources
creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh
creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
f2py options: []
f2py: mars.pyf
Reading fortran codes...
Reading file 'mars.pyf' (format:free)
SNIP
copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4
running build_ext
No module named msvccompiler in numpy.distutils, trying from distutils..

Thanks in advance, George Christianson From faltet at carabos.com Tue Jun 20 12:32:41 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 20 Jun 2006 18:32:41 +0200 Subject: [Numpy-discussion] Help for Windows Python, numpy and f2py In-Reply-To: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> References: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> Message-ID: <200606201832.42100.faltet@carabos.com> On Tuesday 20 June 2006 18:17, George Christianson wrote: > Good morning, Thank you, but here the sun is about to set ;-) > I used the Windows installer to install Python 2.4.3 on a late-model Dell > PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from > the Windows installers. Now I am trying to build a dll file for a Fortran > 77 file and previously-generated (Linux) pyf file. I installed MinGW from > the MinGW 5.0.2 Windows installer, and modified my Windows path to put the > MinGW directory before a pre-existing Cygwin installation. However, both a > setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the > Windows command line fail with the message that the .NET Framework SDK has > to be initialized or that the msvccompiler cannot be found. > Any advice on what I'm missing would be much appreciated! Here is the > message I get trying to run f2py: > Mmm, perhaps you can try putting:

[build]
compiler=mingw32

in your local distutils.cfg (see http://docs.python.org/inst/config-syntax.html) HTH, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From theller at python.net Tue Jun 20 15:05:51 2006 From: theller at python.net (Thomas Heller) Date: Tue, 20 Jun 2006 21:05:51 +0200 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <4498470F.5040400@python.net> Travis Oliphant wrote: > I just updated the array interface page to emphasize we now have version > 3. NumPy still supports objects that expose (the C-side of) version 2 > of the array interface, though. > > The new interface is basically the same except (mostly) for aesthetics: > The differences are listed at the bottom of > > http://numeric.scipy.org/array_interface.html > > There is talk of ctypes supporting the new interface which is a worthy > development. Please encourage that if you can. > > Please voice concerns now if you have any. From http://numeric.scipy.org/array_interface.html: """ New since June 16, 2006: For safety-checking the return object from PyCObject_GetDesc(obj) should be a Python Tuple with the first object a Python string containing "PyArrayInterface Version 3" and whose second object is a reference to the object exposing the array interface (i.e. self). Older versions of the interface used the "desc" member of the PyCObject itself (do not confuse this with the "descr" member of the PyArrayInterface structure above --- they are two separate things) to hold the pointer to the object exposing the interface, thus you should make sure the object returned is a Tuple before assuming it is in a sanity check. In a sanity check it is recommended to only check for "PyArrayInterface Version" and not for the actual version number so that later versions will still be compatible. The old sanity check for the integer 2 in the first field is no longer necessary (but it is necessary to place the number 2 in that field so that objects reading the old version of the interface will still understand this one).
""" I know that you changed that because of my suggestions, but I don't think it should stay like this. The idea was to have the "desc" member of the PyCObject a 'magic value' which can be used to determine that the PyCObjects "void *cobj" pointer really points to a PyArrayInterface structure. I have seen PyCObject uses before in this way, but I cannot find them any longer. If current implementations of the array interface use this pointer for other things (like keeping a reference to the array object), that's fine, and I don't think the specification should change. I think it is espscially dangerous to assume that the desc pointer is a PyObject pointer, Python will segfault if it is not. I suggest that you revert this change. Thomas From oliphant.travis at ieee.org Tue Jun 20 15:27:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 13:27:16 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <4498470F.5040400@python.net> References: <449342E0.5090004@ieee.org> <4498470F.5040400@python.net> Message-ID: <44984C14.90508@ieee.org> Thomas Heller wrote: > Travis Oliphant schrieb: >> I just updated the array interface page to emphasize we now have >> version 3. NumPy still > > If current implementations of the array interface use this pointer for > other things (like keeping a reference to the array object), that's > fine, and I don't think the specification should change. I think it is > espscially dangerous to assume that the desc pointer is a PyObject > pointer, Python will segfault if it is not. You make a good point. This is not a very safe sanity check and overly complicated for not providing safety. I've reverted it back but left in the convention that the 'desc' pointer contain a reference to the object exposing the interface as is the practice now. Thanks for the review. -Travis From cookedm at physics.mcmaster.ca Tue Jun 20 15:41:41 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 20 Jun 2006 15:41:41 -0400 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <4498470F.5040400@python.net> References: <449342E0.5090004@ieee.org> <4498470F.5040400@python.net> Message-ID: <20060620154141.346457e8@arbutus.physics.mcmaster.ca> On Tue, 20 Jun 2006 21:05:51 +0200 Thomas Heller wrote: > Travis Oliphant wrote: > > I just updated the array interface page to emphasize we now have version > > 3. NumPy still supports objects that expose (the C-side of) version 2 > > of the array interface, though. > > > > The new interface is basically the same except (mostly) for aesthetics: > > The differences are listed at the bottom of > > > > http://numeric.scipy.org/array_interface.html > > > > There is talk of ctypes supporting the new interface which is a worthy > > development. Please encourage that if you can. > > > > Please voice concerns now if you have any. > > From http://numeric.scipy.org/array_interface.html: > """ > New since June 16, 2006: > For safety-checking the return object from PyCObject_GetDesc(obj) should > be a Python Tuple with the first object a Python string containing > "PyArrayInterface Version 3" and whose second object is a reference to > the object exposing the array interface (i.e. self). > > Older versions of the interface used the "desc" member of the PyCObject > itself (do not confuse this with the "descr" member of the > PyArrayInterface structure above --- they are two separate things) to > hold the pointer to the object exposing the interface, thus you should > make sure the object returned is a Tuple before assuming it is in a > sanity check. > > In a sanity check it is recommended to only check for "PyArrayInterface > Version" and not for the actual version number so that later versions > will still be compatible.
The old sanity check for the integer 2 in the > first field is no longer necessary (but it is necessary to place the > number 2 in that field so that objects reading the old version of the > interface will still understand this one). > """ > > I know that you changed that because of my suggestions, but I don't > think it should stay like this. > > The idea was to have the "desc" member of the PyCObject a 'magic value' > which can be used to determine that the PyCObject's "void *cobj" pointer > really points to a PyArrayInterface structure. I have seen PyCObject > uses before in this way, but I cannot find them any longer. > > If current implementations of the array interface use this pointer for > other things (like keeping a reference to the array object), that's > fine, and I don't think the specification should change. I think it is > especially dangerous to assume that the desc pointer is a PyObject > pointer; Python will segfault if it is not. > I suggest that you revert this change. When I initially proposed the C version of the array interface, I suggested using a magic number, like 0xDECAF (b/c it's lightweight :-) as the first member of the CObject. Currently, we use a version number, but I believe that small integers would be more common in random CObjects than a magic number. We could do something similar, using 0xDECAF003 for version 3, for instance. That would keep most of the benefits of an explicit "this is an array interface" CObject token, but is lighter to check, and doesn't impose any constraints on implementers for their desc fields. One of the design goals for the C interface was speed; doing a check that the first member of a tuple begins with a certain string slows it down. -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From efiring at hawaii.edu Tue Jun 20 16:33:57 2006 From: efiring at hawaii.edu (Eric Firing) Date: Tue, 20 Jun 2006 10:33:57 -1000 Subject: [Numpy-discussion] array creation speed comparison Message-ID: <44985BB5.1090000@hawaii.edu> In the course of trying to speed up matplotlib, I did a little experiment that may indicate a place where numpy can be sped up: the creation of a 2-D array from a list of tuples. Using the attached script, I find that numarray is roughly 5x faster than either numpy or Numeric: [efiring at manini tests]$ python test_array.py array size: 10000 2 number of loops: 100 numpy 10.89 numpy2 6.57 numarray 1.77 numarray2 0.76 Numeric 8.2 Numeric2 4.36 [efiring at manini tests]$ python test_array.py array size: 100 2 number of loops: 100 numpy 0.11 numpy2 0.06 numarray 0.03 numarray2 0.01 Numeric 0.08 Numeric2 0.05 The numarray advantage persists for relatively small arrays (100x2; second example) and larger ones (10000x2; first example). In each case, the second test for a given package (e.g., numpy2) is the result with the type of the array element specified in advance, and the first (e.g., numpy) is without such specification. The versions I used are: In [3]:Numeric.__version__ Out[3]:'24.0b2' In [5]:numarray.__version__ Out[5]:'1.4.1' In [7]:numpy.__version__ Out[7]:'0.9.9.2584' Eric -------------- next part -------------- A non-text attachment was scrubbed... Name: test_array.py Type: text/x-python Size: 890 bytes Desc: not available URL: From erin.sheldon at gmail.com Tue Jun 20 21:00:52 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 20 Jun 2006 21:00:52 -0400 Subject: [Numpy-discussion] what happened to numarray type names ? 
In-Reply-To: <20060620103815.GA23025@mentat.za.net> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> Message-ID: <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> The numpy example page still has dtype=Float and dtype=Int all over it. Is there a generic replacement for Float, Int or should these be changed to something more specific such as int32? Erin On 6/20/06, Stefan van der Walt wrote: > Hi Simon > > On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote: > > > > >>> import numpy > > >>> numpy.__version__ > > '0.9.9.2631' > > >>> numpy.Int32 > > Traceback (most recent call last): > > File "", line 1, in ? > > AttributeError: 'module' object has no attribute 'Int32' > > >>> > > > > This was working not so long ago. > > Int32, Float etc. are part of the old Numeric interface, that you can > now access under the numpy.oldnumeric namespace. If I understand > correctly, doing > > import numpy.oldnumeric as Numeric > > should provide you with a Numeric-compatible replacement. > > The same types can be accessed under numpy as int32 (lower case) and > friends. > > Cheers > Stéfan > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From cookedm at physics.mcmaster.ca Tue Jun 20 22:00:20 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 20 Jun 2006 22:00:20 -0400 Subject: [Numpy-discussion] what happened to numarray type names ?
In-Reply-To: <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> Message-ID: <20060621020020.GA6459@arbutus.physics.mcmaster.ca> On Tue, Jun 20, 2006 at 09:00:52PM -0400, Erin Sheldon wrote: > The numpy example page still has dtype=Float and dtype=Int > all over it. Is there a generic replacement for Float, Int or should > these be changed to something more specific such as int32? > Erin float and int (the Python types) are the generic 'float' and 'int' types. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From kwgoodman at gmail.com Tue Jun 20 23:04:24 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 20 Jun 2006 20:04:24 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix Message-ID: I have a matrix M and a vector (n by 1 matrix) V. I want to form a new matrix that contains the columns of M for which V > 0. One way to do that in Octave is M(:, find(V > 0)). How is it done in numpy? From wbaxter at gmail.com Tue Jun 20 23:33:43 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 12:33:43 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: I think that one's on the NumPy for Matlab users, no? http://www.scipy.org/NumPy_for_Matlab_Users >>> import numpy as num >>> a = num.arange (10).reshape(2,5) >>> a array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> v = num.rand(5) >>> v array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) >>> num.where(v>0.5) (array([1, 2, 4]),) >>> a[:,num.where(v>0.5)] array([[[1, 2, 4]], [[6, 7, 9]]]) Seems it grows an extra set of brackets for some reason. Squeeze will get rid of them. 
>>> a[:,num.where(v>0.5)].squeeze() array([[1, 2, 4], [6, 7, 9]]) Not sure why the squeeze is needed. Maybe there's a better way. --bb On 6/21/06, Keith Goodman wrote: > > I have a matrix M and a vector (n by 1 matrix) V. I want to form a new > matrix that contains the columns of M for which V > 0. > > One way to do that in Octave is M(:, find(V > 0)). How is it done in > numpy? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin.sheldon at gmail.com Tue Jun 20 22:30:26 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 20 Jun 2006 22:30:26 -0400 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <20060621020020.GA6459@arbutus.physics.mcmaster.ca> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> <20060621020020.GA6459@arbutus.physics.mcmaster.ca> Message-ID: <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> OK, I have changed all the examples that used dtype=Float or dtype=Int to float and int. Erin On 6/20/06, David M. Cooke wrote: > On Tue, Jun 20, 2006 at 09:00:52PM -0400, Erin Sheldon wrote: > > The numpy example page still has dtype=Float and dtype=Int > > all over it. Is there a generic replacement for Float, Int or should > > these be changed to something more specific such as int32? > > Erin > > float and int (the Python types) are the generic 'float' and 'int' > types. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From kwgoodman at gmail.com Tue Jun 20 23:49:26 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 20 Jun 2006 20:49:26 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On 6/20/06, Bill Baxter wrote: > I think that one's on the NumPy for Matlab users, no? > > http://www.scipy.org/NumPy_for_Matlab_Users > > >>> import numpy as num > >>> a = num.arange (10).reshape(2,5) > >>> a > array([[0, 1, 2, 3, 4], > [5, 6, 7, 8, 9]]) > >>> v = num.rand(5) > >>> v > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > >>> num.where(v>0.5) > (array([1, 2, 4]),) > >>> a[:,num.where(v>0.5)] > array([[[1, 2, 4]], > > [[6, 7, 9]]]) > > Seems it grows an extra set of brackets for some reason. Squeeze will get > rid of them. > > >>> a[:,num.where(v>0.5)].squeeze() > array([[1, 2, 4], > [6, 7, 9]]) > > Not sure why the squeeze is needed. Maybe there's a better way. Thank you. That works for arrays, but not matrices. So do I need to do asarray(a)[:, where(asarray(v)>0.5)].squeeze() ? From erin.sheldon at gmail.com Wed Jun 21 00:10:06 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 21 Jun 2006 00:10:06 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> On 6/20/06, Bill Baxter wrote: > I think that one's on the NumPy for Matlab users, no? > > http://www.scipy.org/NumPy_for_Matlab_Users > > >>> import numpy as num > >>> a = num.arange (10).reshape(2,5) > >>> a > array([[0, 1, 2, 3, 4], > [5, 6, 7, 8, 9]]) > >>> v = num.rand(5) > >>> v > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > >>> num.where(v>0.5) > (array([1, 2, 4]),) > >>> a[:,num.where(v>0.5)] > array([[[1, 2, 4]], > > [[6, 7, 9]]]) > > Seems it grows an extra set of brackets for some reason. Squeeze will get > rid of them. 
> > >>> a[:,num.where(v>0.5)].squeeze() > array([[1, 2, 4], > [6, 7, 9]]) > > Not sure why the squeeze is needed. Maybe there's a better way. where returns a tuple of arrays. This can have unexpected results so you need to grab what you want explicitly: >>> (w,) = num.where(v>0.5) >>> a[:,w] array([[1, 2, 4], [6, 7, 9]]) From wbaxter at gmail.com Wed Jun 21 00:48:48 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 13:48:48 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: On 6/21/06, Erin Sheldon wrote: > > On 6/20/06, Bill Baxter wrote: > > I think that one's on the NumPy for Matlab users, no? > > > > http://www.scipy.org/NumPy_for_Matlab_Users > > > > >>> import numpy as num > > >>> a = num.arange (10).reshape(2,5) > > >>> a > > array([[0, 1, 2, 3, 4], > > [5, 6, 7, 8, 9]]) > > >>> v = num.rand(5) > > >>> v > > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > > >>> num.where(v>0.5) > > (array([1, 2, 4]),) > > >>> a[:,num.where(v>0.5)] > > array([[[1, 2, 4]], > > > > [[6, 7, 9]]]) > > > > Seems it grows an extra set of brackets for some reason. Squeeze will > get > > rid of them. > > > > >>> a[:,num.where(v>0.5)].squeeze() > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > Not sure why the squeeze is needed. Maybe there's a better way. > > where returns a tuple of arrays. This can have unexpected results > so you need to grab what you want explicitly: > > >>> (w,) = num.where(v>0.5) > >>> a[:,w] > array([[1, 2, 4], > [6, 7, 9]]) > Ah, yeh, that makes sense. Thanks for the explanation. So to turn it back into a one-liner you just need: >>> a[:,num.where(v>0.5)[0]] array([[1, 2, 4], [6, 7, 9]]) I'll put that up on the Matlab->Numpy page. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
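Collected into one self-contained snippet (the array values here are made up for illustration):

```python
import numpy as np

a = np.arange(10).reshape(2, 5)
v = np.array([0.1, 0.6, 0.7, 0.2, 0.9])

# where() returns a tuple with one index array per dimension;
# unpack it before using it as a column index.
(w,) = np.where(v > 0.5)
cols = a[:, w]
```

Equivalently, `a[:, np.where(v > 0.5)[0]]` gives the same columns in one line.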
URL: From simon at arrowtheory.com Wed Jun 21 01:23:49 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 21 Jun 2006 15:23:49 +1000 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: <20060621152349.157974f4.simon@arrowtheory.com> On Wed, 21 Jun 2006 13:48:48 +0900 "Bill Baxter" wrote: > > >>> a[:,num.where(v>0.5)[0]] > array([[1, 2, 4], > [6, 7, 9]]) > > I'll put that up on the Matlab->Numpy page. oh, yuck. What about this: >>> a[:,num.nonzero(v>0.5)] array([[0, 1, 3], [5, 6, 8]]) >>> Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From wbaxter at gmail.com Wed Jun 21 03:16:46 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 16:16:46 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <20060621152349.157974f4.simon@arrowtheory.com> References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> <20060621152349.157974f4.simon@arrowtheory.com> Message-ID: On 6/21/06, Simon Burton wrote: > > On Wed, 21 Jun 2006 13:48:48 +0900 > "Bill Baxter" wrote: > > > > > >>> a[:,num.where(v>0.5)[0]] > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > I'll put that up on the Matlab->Numpy page. > > oh, yuck. What about this: > > >>> a[:,num.nonzero(v>0.5)] > array([[0, 1, 3], > [5, 6, 8]]) > >>> The nonzero() function seems like kind of an anomaly in and of itself. It doesn't behave like other index-returning numpy functions, or even like the method version, v.nonzero(), which returns the typical tuple of array. So my feeling is ... ew to numpy.nonzero. --Bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aisaac at american.edu Wed Jun 21 04:48:15 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 04:48:15 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On Tue, 20 Jun 2006, Keith Goodman apparently wrote: > I have a matrix M and a vector (n by 1 matrix) V. I want to form a new > matrix that contains the columns of M for which V > 0. > One way to do that in Octave is M(:, find(V > 0)). How is it done in numpy? M.transpose()[V>0] If you want the columns as columns, you can transpose again. hth, Alan Isaac From michael.sorich at gmail.com Wed Jun 21 04:46:19 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Wed, 21 Jun 2006 18:16:19 +0930 Subject: [Numpy-discussion] MA bug or feature? Message-ID: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> When transposing a masked array of dtype ' References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com><20060621152349.157974f4.simon@arrowtheory.com> Message-ID: On Wed, 21 Jun 2006, Bill Baxter apparently wrote: > ew to numpy.nonzero I agree that having the method and function behave so differently is awkward; this was discussed before on this list. It does allow Simon's nicer solution, however. I'm not sure why bool arrays cannot be used as indices. The "natural" solution to the original problem seemed to be: M[:,V>0] but this is not allowed. Cheers, Alan Isaac From faltet at carabos.com Wed Jun 21 05:14:58 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed, 21 Jun 2006 11:14:58 +0200 Subject: [Numpy-discussion] ANN: PyTables (a hierarchical database) 1.3.2 released Message-ID: <200606211115.02727.faltet@carabos.com> =========================== Announcing PyTables 1.3.2 =========================== This is a new minor release of PyTables. 
There you will find, among other things, improved support for NumPy strings and the ability to create indexes of NumPy-flavored tables (this capability was broken in earlier versions).

*Important note*: one of the fixes addresses an important bug that shows up when browsing files with lots of nodes, making PyTables crash. Because of this, an upgrade is encouraged.

Go to the PyTables web site for downloading the beast:

http://www.pytables.org/

or keep reading for more info about the new features and bugs fixed.

Changes more in depth
=====================

Bug fixes:

- Changed the nodes in the lru cache heap from Pyrex to pure Python ones. This fixes a problem that can appear in certain situations (mainly, when navigating back and forth along lots of Node objects). While this fix is sub-optimal, at least it leads to correct behaviour until the faster approach eventually comes back.

- Due to different conventions in padding chars, a special case has been added when converting from numarray strings into numpy ones so that these different conventions are handled correctly. Fixes ticket #13 and other strange numpy string quirks (thanks to Pepe Barbe).

- Solved an issue that appeared when indexing Table columns with flavor 'numpy'. Now, tables that are 'numpy' flavored can be indexed as well.

- Solved an issue when saving string atoms with ``VLArray`` with a flavor different from "python". The problem was that the item sizes of the original strings were not checked, so rubbish was put on-disk. Now, if an item size of the input is different from the item size of the atom, a conversion is forced. Added tests to check for these situations.

- Fixed a problem with removing a table with indexed columns under certain situations. Thanks to Andrew Straw for reporting it.

- Fixed a small glitch in the ``ptdump`` utility that prevented dumping ``EArray`` data with an enlargeable dimension different from the first one.

- Make parent node unreference child node when creation fails.
Fixes ticket #12 (thanks to Eilif).

- Saving zero-length strings in Array objects used to raise a ZeroDivisionError. Now, it returns a more sensible NotImplementedError until this is supported.

Backward-incompatible changes:

- Please, see the ``RELEASE-NOTES.txt`` file.

Deprecated features:

- None

Important note for Windows users
================================

If you want to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP

Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP

What it is
==========

**PyTables** is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high-performance data storage and retrieval.

PyTables runs on top of the HDF5 library and the numarray package (NumPy and Numeric are also supported) to achieve maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds.

Platforms
=========

This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC.
For other platforms, chances are that the code can be easily compiled and run without further issues. Please contact us in case you are experiencing problems.

Resources
=========

Go to the PyTables web site for more details: http://www.pytables.org

About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/

About numarray: http://www.stsci.edu/resources/software_hardware/numarray

To know more about the company behind the PyTables development, see: http://www.carabos.com/

Acknowledgments
===============

Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge, who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables.

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

---- **Enjoy data!** -- The PyTables Team

From pgmdevlist at mailcan.com Wed Jun 21 06:12:09 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Wed, 21 Jun 2006 06:12:09 -0400 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> Message-ID: <200606210612.09374.pgmdevlist@mailcan.com>

On Wednesday 21 June 2006 04:46, Michael Sorich wrote:
> When transposing a masked array of dtype '<i4', an
> ndarray of dtype '|O4' was returned.

OK, I see where the problem is: When your fill_value has a type that cannot be converted to the type of your data, the `filled` method (used internally in many functions, such as `transpose`) raises a TypeError, which is caught and your array is converted to 'O'.
That's what happens here: your fill_value is a string, your data are integers, the types don't match, hence the conversion. So, no, I don't think that's a bug.

Why fill when you don't have any masked values, then? Well, there's a subtle difference between a boolean mask and a mask of booleans. When the mask is boolean (mask=nomask=False), there's no masked value, and `filled` returns the data. Now, when your mask is an array of booleans (your first case), MA doesn't check whether mask.any()==False to determine whether there are some missing data or not; it just processes the whole array of booleans.

I agree that's a bit confusing here, and there might be some room for improvement (for example, changing the current `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing mask to nomask if mask.any()==False). But I don't think that qualifies as a bug.

In short: when you have an array of numbers, don't try to fill it with characters.

From Sheldon.Johnston at smhi.se Wed Jun 21 09:31:23 2006 From: Sheldon.Johnston at smhi.se (Johnston Sheldon) Date: Wed, 21 Jun 2006 15:31:23 +0200 Subject: [Numpy-discussion] LittleEndian Message-ID: <575A94F91D20704387D1C69A913E95EE035816@CORRE.ad.smhi.se>

Hi,

Can someone give a brief example of the Numeric function LittleEndian?

I have written two separate functions to read binary data that can be either LittleEndian or BigEndian (using byteswapped()) but it would be great with just one function.

Much obliged, Sheldon

-------------- next part -------------- An HTML attachment was scrubbed... URL: From a.u.r.e.l.i.a.n at gmx.net Wed Jun 21 09:36:10 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 21 Jun 2006 15:36:10 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net>

Hi,

> I'm not sure why bool arrays cannot be used as indices.
> The "natural" solution to the original problem seemed to be: > M[:,V>0] > but this is not allowed. I started a thread on this earlier this year. Try searching the archive for "boolean indexing" (if it comes back online somewhen). Travis had some reason for not implementing this, but unfortunately I do not remember what it was. The corresponding message might still linger on my home PC, which I can access this evening.... Johannes From fullung at gmail.com Wed Jun 21 09:58:28 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 21 Jun 2006 15:58:28 +0200 Subject: [Numpy-discussion] LittleEndian In-Reply-To: <575A94F91D20704387D1C69A913E95EE035816@CORRE.ad.smhi.se> Message-ID: <007901c6953a$c5ab7db0$01eaa8c0@dsp.sun.ac.za> Hey Sheldon With NumPy you can use dtype's newbyteorder method to convert any dtype's byte order to an order you specify: In [1]: import numpy as N In [2]: x = N.array([1],dtype='i4') In [4]: xle = N.asarray(x, dtype=x.dtype.newbyteorder('<')) In [5]: yle = N.asarray(y, dtype=y.dtype.newbyteorder('<')) In [6]: x.dtype Out[6]: dtype('i4') In [8]: xle.dtype Out[8]: dtype(' -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Johnston Sheldon > Sent: 21 June 2006 15:31 > To: Numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] LittleEndian > > Hi, > > Can someone give a brief example of the Numeric function LittleEndian? > > I have written two separate functions to read binary data that can be > either LittleEndian or BigEndian (using byteswapped() ) but it would be > great with just one function. 
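[Editor's note] Albert's interactive session above was mangled by the HTML scrubber, so here is a hedged sketch of the same idea as a single byte-order-agnostic reader (current NumPy API; `read_i4_native` and the sample data are illustrative, not from the thread):

```python
# Sketch: describe the on-disk byte order explicitly with newbyteorder,
# then convert to native order; astype only swaps bytes when needed.
import numpy as np

def read_i4_native(buf, byteorder):
    # byteorder is '<' (little-endian) or '>' (big-endian), as stored on disk
    file_dtype = np.dtype('i4').newbyteorder(byteorder)
    data = np.frombuffer(buf, dtype=file_dtype)
    return data.astype('i4')  # native-order copy, values unchanged

big = np.array([1, 2, 3], dtype='>i4').tobytes()
little = np.array([1, 2, 3], dtype='<i4').tobytes()
print(read_i4_native(big, '>'))     # [1 2 3]
print(read_i4_native(little, '<'))  # [1 2 3]
```

The same function then serves both file flavours, replacing the pair of byteswapped()-based readers Sheldon described.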
From kwgoodman at gmail.com Wed Jun 21 10:13:54 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 07:13:54 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: On 6/20/06, Bill Baxter wrote: > >>> a[:,num.where(v>0.5)[0]] > array([[1, 2, 4], > [6, 7, 9]]) > > I'll put that up on the Matlab->Numpy page. That's a great addition to the Matlab to Numpy page. But it only works if v is a column vector. If v is a row vector, then where(v.A > 0.5)[0] will return all zeros. So for row vectors it should be where(v.A > 0.5)[1]. Or, in general, where(v.flatten(1).A > 0.5)[1] From kwgoodman at gmail.com Wed Jun 21 10:56:42 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 07:56:42 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix Message-ID: Alan G Isaac wrote: > M.transpose()[V>0] > If you want the columns as columns, > you can transpose again. I can't get that to work when M is a n by m matrix: >> M = asmatrix(rand(3,4)) >> M matrix([[ 0.78970407, 0.78681448, 0.79167808, 0.57857822], [ 0.44567836, 0.23985597, 0.49392248, 0.0282004 ], [ 0.7044725 , 0.4090776 , 0.12035218, 0.71365101]]) >> V = asmatrix(rand(4,1)) >> V matrix([[ 0.61638738], [ 0.76928157], [ 0.3882811 ], [ 0.68979661]]) >> M.transpose()[V > 0.5] matrix([[ 0.78970407, 0.78681448, 0.57857822]]) The answer should be a 3 by 3 matrix. From travis at enthought.com Wed Jun 21 11:20:38 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 21 Jun 2006 10:20:38 -0500 Subject: [Numpy-discussion] SciPy 2006 Tutorials Message-ID: <449963C6.3070203@enthought.com> All, As part of this year's SciPy 2006 Conference, we've planned Coding Sprints on Monday and Tuesday (August 14-15) and a Tutorial Day Wednesday (August 16)--the normal conference presentations follow on Thursday and Friday (August 17-18). 
For this year at least, the Tutorials (and Sprints) are no additional charge (you're on your own for food on those days, though). With regard to Tutorial topics, we've settled on the following: "3D visualization in Python using tvtk and MayaVi" "Scientific Data Analysis and Visualization using IPython and Matplotlib." "Building Scientific Applications using the Enthought Tool Suite (Envisage, Traits, Chaco, etc.)" "NumPy (migration from Numarray & Numeric, overview of NumPy)" These will be in two tracks with two three hour sessions in each track. If you plan to attend, please send an email to tutorials at scipy.org with the two sessions you'd most like to hear and we'll build the schedule with a minimum of conflict. We'll post the schedule of the tracks on the Wiki here: http://www.scipy.org/SciPy2006/TutorialSessions Also, if you haven't registered already, the deadline for early registration is July 14. The abstract submission deadline is July 7. More information is here: http://www.scipy.org/SciPy2006 Thanks, Travis From oliphant.travis at ieee.org Wed Jun 21 11:52:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 09:52:24 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> <20060621152349.157974f4.simon@arrowtheory.com> Message-ID: <44996B38.808@ieee.org> Bill Baxter wrote: > On 6/21/06, *Simon Burton* > wrote: > > On Wed, 21 Jun 2006 13:48:48 +0900 > "Bill Baxter" > wrote: > > > > > >>> a[:,num.where(v>0.5)[0]] > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > I'll put that up on the Matlab->Numpy page. > > oh, yuck. What about this: > > >>> a[:,num.nonzero(v>0.5)] > array([[0, 1, 3], > [5, 6, 8]]) > >>> > > > The nonzero() function seems like kind of an anomaly in and of > itself. It doesn't behave like other index-returning numpy > functions, or even like the method version, v.nonzero(), which returns > the typical tuple of array. 
So my feeling is ... ew to numpy.nonzero. How about we add the ability so that a[:, ] gets translated to a[:, nonzero()] ? -Travis From perrot at shfj.cea.fr Wed Jun 21 12:15:20 2006 From: perrot at shfj.cea.fr (Matthieu Perrot) Date: Wed, 21 Jun 2006 18:15:20 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <4497BED2.9090601@ieee.org> References: <200606162001.31342.perrot@shfj.cea.fr> <4497BED2.9090601@ieee.org> Message-ID: <200606211815.20053.perrot@shfj.cea.fr> Le Mardi 20 Juin 2006 11:24, Travis Oliphant a ?crit?: > Matthieu Perrot wrote: > > hi, > > > > I need to handle strings shaped by a numpy array whose data own to a C > > structure. There is several possible answers to this problem : > > 1) use a numpy array of strings (PyArray_STRING) and so a (char *) > > object in C. It works as is, but you need to define a maximum size to > > your strings because your set of strings is contiguous in memory. > > 2) use a numpy array of objects (PyArray_OBJECT), and wrap each ?C > > string? with a python object, using PyStringObject for example. Then our > > problem is that there is as wrapper as data element and I believe data > > can't be shared when your created PyStringObject using (char *) thanks to > > PyString_AsStringAndSize by example. > > > > > > Now, I will expose a third way, which allow you to use no size-limited > > strings (as in solution 1.) and don't create wrappers before you really > > need it (on demand/access). > > > > First, for convenience, we will use in C, (char **) type to build an > > array of string pointers (as it was suggested in solution 2). Now, the > > game is to make it works with numpy API, and use it in python through a > > python array. Basically, I want a very similar behabiour than arrays of > > PyObject, where data are not contiguous, only their address are. 
So, the > > idea is to create a new array descr based on PyArray_OBJECT and change > > its getitem/setitem functions to deals with my own data. > > > > I exepected numpy to work with this convenient array descr, but it fails > > because PyArray_Scalar (arrayobject.c) don't call descriptor getitem > > function (in PyArray_OBJECT case) but call 2 lines which have been > > copy/paste from the OBJECT_getitem function). Here my small patch is : > > replace (arrayobject.c:983-984): > > Py_INCREF(*((PyObject **)data)); > > return *((PyObject **)data); > > by : > > return descr->f->getitem(data, base); > > > > I play a lot with my new numpy array after this change and noticed that a > > lot of uses works : > > This is an interesting solution. I was not considering it, though, and > so I'm not surprised you have problems. You can register new types but > basing them off of PyArray_OBJECT can be problematic because of the > special-casing that is done in several places to manage reference counting. > > You are supposed to register your own data-types and get your own > typenumber. Then you can define all the functions for the entries as > you wish. > > Riding on the back of PyArray_OBJECT may work if you are clever, but it > may fail mysteriously as well because of a reference count snafu. > > Thanks for the tests and bug-reports. I have no problem changing the > code as you suggest. > > -Travis Thanks for applying my suggestions. I think, you suggest this kind of declaration : PyArray_Descr *descr = PyArray_DescrNewFromType(PyArray_VOID); descr->f->getitem = (PyArray_GetItemFunc *) my_getitem; descr->f->setitem = (PyArray_SetItemFunc *) my_setitem; descr->elsize = sizeof(char *); PyArray_RegisterDataType(descr); Without the last line, you are right it works and it follows the C-API way. But if I register this array descr, the typenumber is bigger than what PyTypeNum_ISFLEXIBLE function considers to be a flexible type. So the returned scalar object is badly-formed. 
Then, I get a segmentation fault later, because the created voidscalar has a null descr pointer. -- Matthieu Perrot From cloomis at astro.princeton.edu Wed Jun 21 12:41:14 2006 From: cloomis at astro.princeton.edu (Craig Loomis) Date: Wed, 21 Jun 2006 12:41:14 -0400 Subject: [Numpy-discussion] Bug with cumsum(dtype='f8')? Message-ID: Not sure if this one has been addressed. There appears to be a problem with cumsum(dtype=), with reasonably small numbers. Both PPC and x86 Macs. ======== import numpy print "numpy version:", numpy.__version__ v = numpy.arange(10002) # 10001 is OK, larger is "worse" print "ok: ", v.cumsum() print "not ok: ", v.cumsum(dtype=numpy.float64) print "ok: ", numpy.arange(10002,dtype=numpy.float64).cumsum() ========= ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:14) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> print "numpy version:", numpy.__version__ numpy version: 0.9.9.2549 >>> v = numpy.arange(10002) # 10001 is OK, larger is "worse" >>> print "ok: ", v.cumsum() ok: [ 0 1 3 ..., 49995000 50005000 50015001] >>> print "not ok: ", v.cumsum(dtype=numpy.float64) not ok: [ 0.00000000e+00 1.00010000e+04 3.00000000e+00 ..., 4.99950000e+07 5.00050000e+07 0.00000000e+00] >>> print "ok: ", numpy.arange(10002,dtype=numpy.float64).cumsum() ok: [ 0.00000000e+00 1.00000000e+00 3.00000000e+00 ..., 4.99950000e+07 5.00050000e+07 5.00150010e+07] >>> - craig From oliphant.travis at ieee.org Wed Jun 21 12:50:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 10:50:26 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <449978D2.1090000@ieee.org> Johannes Loehnert wrote: > Hi, > > >> I'm not sure why bool arrays cannot be used as indices. 
>> The "natural" solution to the original problem seemed to be: >> M[:,V>0] >> but this is not allowed. >> > > I started a thread on this earlier this year. Try searching the archive for > "boolean indexing" (if it comes back online somewhen). > > Travis had some reason for not implementing this, but unfortunately I do not > remember what it was. The corresponding message might still linger on my home > > PC, which I can access this evening.... > I suspect my reason was just not being sure if it could be explained consistently. But, after seeing this come up again. I decided it was easy enough to implement. So, in SVN NumPy, you will be able to do a[:,V>0] a[V>0,:] The V>0 will be replaced with integer arrays as if nonzero(V>0) had been called. -Travis From pau.gargallo at gmail.com Wed Jun 21 13:09:50 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 21 Jun 2006 19:09:50 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <449978D2.1090000@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> Message-ID: <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> On 6/21/06, Travis Oliphant wrote: > Johannes Loehnert wrote: > > Hi, > > > > > >> I'm not sure why bool arrays cannot be used as indices. > >> The "natural" solution to the original problem seemed to be: > >> M[:,V>0] > >> but this is not allowed. > >> > > > > I started a thread on this earlier this year. Try searching the archive for > > "boolean indexing" (if it comes back online somewhen). > > > > Travis had some reason for not implementing this, but unfortunately I do not > > remember what it was. The corresponding message might still linger on my home > > > > PC, which I can access this evening.... > > > > I suspect my reason was just not being sure if it could be explained > consistently. But, after seeing this come up again. I decided it was > easy enough to implement. 
> > So, in SVN NumPy, you will be able to do > > a[:,V>0] > a[V>0,:] > > The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > called. > does it work for a[,] ? what about a[ix_( nonzero(), nonzero() )] ? maybe the to nonzero() conversion would be more coherently done by the ix_ function than by the [] pau From kwgoodman at gmail.com Wed Jun 21 13:16:44 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 10:16:44 -0700 Subject: [Numpy-discussion] Element-by-element matrix multiplication Message-ID: The NumPy for Matlab Users page suggests mat(a.A * b.A) for element-by-element matrix multiplication. I think it would be helpful to also include multiply(a, b). a.*b mat(a.A * b.A) or multiply(a, b) From robert.kern at gmail.com Wed Jun 21 13:21:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 12:21:42 -0500 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: Keith Goodman wrote: > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > element-by-element matrix multiplication. I think it would be helpful > to also include multiply(a, b). > > a.*b > > mat(a.A * b.A) or > multiply(a, b) It is a wiki page. You may edit it yourself without needing to ask permission. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Wed Jun 21 13:22:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 11:22:04 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> Message-ID: <4499803C.1010302@ieee.org> Pau Gargallo wrote: > On 6/21/06, Travis Oliphant wrote: > >> Johannes Loehnert wrote: >> >>> Hi, >>> >>> >>> >>>> I'm not sure why bool arrays cannot be used as indices. >>>> The "natural" solution to the original problem seemed to be: >>>> M[:,V>0] >>>> but this is not allowed. >>>> >>>> >>> I started a thread on this earlier this year. Try searching the archive for >>> "boolean indexing" (if it comes back online somewhen). >>> >>> Travis had some reason for not implementing this, but unfortunately I do not >>> remember what it was. The corresponding message might still linger on my home >>> >>> PC, which I can access this evening.... >>> >>> >> I suspect my reason was just not being sure if it could be explained >> consistently. But, after seeing this come up again. I decided it was >> easy enough to implement. >> >> So, in SVN NumPy, you will be able to do >> >> a[:,V>0] >> a[V>0,:] >> >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been >> called. >> >> > > does it work for a[,] ? > Sure, it will work. Basically all boolean arrays will be interpreted as nonzero(V>0), everywhere. > what about a[ix_( nonzero(), nonzero() )] ? > > maybe the to nonzero() conversion would be more > coherently done by the ix_ function than by the [] > > I've just added support for inside ix_ so that the nonzero will be done automatically as well. So a[ix_(,)] will give the cross-product selection. 
-Travis From webb.sprague at gmail.com Wed Jun 21 13:27:53 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Wed, 21 Jun 2006 10:27:53 -0700 Subject: [Numpy-discussion] Problem installing numpy on Gentoo Message-ID: I am trying to install numpy on Gentoo (see my info below for version etc). It all seems to go fine, but when I try to import it and run the tests, I get the following error (in ipython): In [1]: import numpy import linalg -> failed: libg2c.so.0: cannot open shared object file: No such file or directory I have gfortran on my system, but libg2c is not part of the gcc-4.1.1 distribution anymore (maybe that is a bug with Gentoo?). I also get the same error when I run f2py from the command line. Here is the bug I filed: http://bugs.gentoo.org/show_bug.cgi?id=136988 Info that might help: cowboy ~ # ls /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/ crtbegin.o libgcc.a libgfortran.so.1 libobjc.so.1.0.0 crtbeginS.o libgcc_eh.a libgfortran.so.1.0.0 libstdc++.a crtbeginT.o libgcc_s.so libgfortranbegin.a libstdc++.so crtend.o libgcc_s.so.1 libgfortranbegin.la libstdc++.so.6 crtendS.o libgcov.a libobjc.a libstdc++.so.6.0.8 crtfastmath.o libgfortran.a libobjc.la libstdc++_pic.a include libgfortran.la libobjc.so libsupc++.a install-tools libgfortran.so libobjc.so.1 libsupc++.la cowboy ~ # ls /usr/lib/gcc/i686-pc-linux-gnu/3.4.6/ SYSCALLS.c.X libffi.la libobjc.la crtbegin.o libffi.so libobjc.so crtbeginS.o libfrtbegin.a libobjc.so.1 crtbeginT.o libg2c.a libobjc.so.1.0.0 crtend.o libg2c.la libstdc++.a crtendS.o libg2c.so libstdc++.la hardened.specs libg2c.so.0 libstdc++.so hardenednopie.specs libg2c.so.0.0.0 libstdc++.so.6 hardenednopiessp.specs libgcc.a libstdc++.so.6.0.3 hardenednossp.specs libgcc_eh.a libstdc++_pic.a include libgcc_s.so libsupc++.a install-tools libgcc_s.so.1 libsupc++.la libffi-2.00-beta.so libgcov.a specs libffi.a libobjc.a vanilla.specs cowboy ~ # emerge --info Portage 2.1.1_pre1-r1 (default-linux/x86/2006.0, gcc-4.1.1/vanilla, glibc-2.4-r3, 
2.6.11-gentoo-r9 i686) ================================================================= System uname: 2.6.11-gentoo-r9 i686 AMD Athlon(tm) Processor Gentoo Base System version 1.12.1 distcc 2.18.3 i686-pc-linux-gnu (protocols 1 and 2) (default port 3632) [disabled] ccache version 2.4 [enabled] dev-lang/python: 2.4.3-r1 dev-python/pycrypto: 2.0.1-r5 dev-util/ccache: 2.4-r2 dev-util/confcache: [Not Present] sys-apps/sandbox: 1.2.18.1 sys-devel/autoconf: 2.13, 2.59-r7 sys-devel/automake: 1.4_p6, 1.5, 1.6.3, 1.7.9-r1, 1.8.5-r3, 1.9.6-r2 sys-devel/binutils: 2.16.1-r2 sys-devel/gcc-config: 2.0.0_rc1 sys-devel/libtool: 1.5.22 virtual/os-headers: 2.6.11-r5 ACCEPT_KEYWORDS="x86 ~x86" AUTOCLEAN="yes" CBUILD="i686-pc-linux-gnu" CFLAGS=" -march=athlon -O2 -pipe -fomit-frame-pointer" CHOST="i686-pc-linux-gnu" CONFIG_PROTECT="/etc /usr/share/X11/xkb" CONFIG_PROTECT_MASK="/etc/env.d /etc/eselect/compiler /etc/gconf /etc/revdep-rebuild /etc/terminfo /etc/texmf/web2c" CXXFLAGS=" -march=athlon -O2 -pipe -fomit-frame-pointer" DISTDIR="/usr/portage/distfiles" FEATURES="autoconfig ccache distlocks metadata-transfer sandbox sfperms" GENTOO_MIRRORS="http://distfiles.gentoo.org http://distro.ibiblio.org/pub/linux/distributions/gentoo" PKGDIR="/usr/portage/packages" PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --delete-after --stats --timeout=180 --exclude='/distfiles' --exclude='/local' --exclude='/packages'" PORTAGE_TMPDIR="/var/tmp" PORTDIR="/usr/portage" PORTDIR_OVERLAY="/usr/local/portage" SYNC="rsync://rsync.gentoo.org/gentoo-portage" USE="x86 X alsa apache2 apm arts avi berkdb bitmap-fonts blas cli crypt cups dba dri eds emacs emboss encode esd f77 fftw foomaticdb fortran g77 gdbm gif gnome gpm gstreamer gtk gtk2 imlib ipv6 isdnlog jpeg lapack libg++ libwww mad mikmod mime mmap motif mp3 mpeg ncurses nls nptl nptlonly objc ogg opengl oss pam pcre pdflib perl png postgres pppd python quicktime readline reflection sdl 
session spell spl ssl svg tcltk tcpd tidy truetype truetype-fonts type1-fonts udev unicode vorbis xml xmms xorg xv zlib elibc_glibc kernel_linux userland_GNU" Unset: CTARGET, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LANG, LC_ALL, LDFLAGS, LINGUAS, MAKEOPTS, PORTAGE_RSYNC_EXTRA_OPTS cowboy ~ # gcc --version i686-pc-linux-gnu-gcc (GCC) 4.1.1 (Gentoo 4.1.1) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. From pau.gargallo at gmail.com Wed Jun 21 13:31:48 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 21 Jun 2006 19:31:48 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <4499803C.1010302@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> <4499803C.1010302@ieee.org> Message-ID: <6ef8f3380606211031sd7395d5k3cef4838efd2e96c@mail.gmail.com> On 6/21/06, Travis Oliphant wrote: > Pau Gargallo wrote: > > On 6/21/06, Travis Oliphant wrote: > > > >> Johannes Loehnert wrote: > >> > >>> Hi, > >>> > >>> > >>> > >>>> I'm not sure why bool arrays cannot be used as indices. > >>>> The "natural" solution to the original problem seemed to be: > >>>> M[:,V>0] > >>>> but this is not allowed. > >>>> > >>>> > >>> I started a thread on this earlier this year. Try searching the archive for > >>> "boolean indexing" (if it comes back online somewhen). > >>> > >>> Travis had some reason for not implementing this, but unfortunately I do not > >>> remember what it was. The corresponding message might still linger on my home > >>> > >>> PC, which I can access this evening.... > >>> > >>> > >> I suspect my reason was just not being sure if it could be explained > >> consistently. But, after seeing this come up again. I decided it was > >> easy enough to implement. 
> >> > >> So, in SVN NumPy, you will be able to do > >> > >> a[:,V>0] > >> a[V>0,:] > >> > >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > >> called. > >> > >> > > > > does it work for a[<bool>,<bool>] ? > > > Sure, it will work. Basically all boolean arrays will be interpreted as > nonzero(V>0), everywhere. > > what about a[ix_( nonzero(<bool>), nonzero(<bool>) )] ? > > > > maybe the <bool> to nonzero() conversion would be more > > coherently done by the ix_ function than by the [] > > > > > I've just added support for <bool> inside ix_ so that the nonzero > will be done automatically as well. > > So > > a[ix_(<bool1>,<bool2>)] will give the cross-product selection. > ok so: a[ b1, b2 ] will be different than a[ ix_(b1,b2) ], just like with integer indices. Makes sense to me. also, a[b] will be as before (a[where(b)]) ? maybe a trailing comma could launch the new behaviour? a[b] -> a[where(b)] a[b,] -> a[b,...] -> a[nonzero(b)] Thanks, pau From kwgoodman at gmail.com Wed Jun 21 13:45:54 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 10:45:54 -0700 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: On 6/21/06, Robert Kern wrote: > Keith Goodman wrote: > > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > > element-by-element matrix multiplication. I think it would be helpful > > to also include multiply(a, b). > > > > a.*b > > > > mat(a.A * b.A) or > > multiply(a, b) > > It is a wiki page. You may edit it yourself without needing to ask permission. OK. Done. I also added a notice about SciPy's PayPal account being suspended.
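The boolean-indexing behaviour Travis describes above — a boolean array used along one axis acts as if nonzero() had been called on it — can be sketched with a small self-contained example (the array and names here are illustrative, not from the thread):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
V = np.array([-1.0, 2.0, 3.0])

# A boolean index along the first axis selects the rows where V > 0 ...
rows_bool = a[V > 0, :]
# ... exactly as if it had been replaced by the integer array nonzero(V > 0)
rows_idx = a[np.nonzero(V > 0)[0], :]
```

Both spellings select rows 1 and 2 of `a`.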
From oliphant.travis at ieee.org Wed Jun 21 14:22:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 12:22:51 -0600 Subject: [Numpy-discussion] memory leak in array In-Reply-To: <200606211618.k5LGIYWw008784@rm-rstar.sfu.ca> References: <200606211618.k5LGIYWw008784@rm-rstar.sfu.ca> Message-ID: <44998E7B.50409@ieee.org> saagesen at sfu.ca wrote: > Hi Travis > > Not sure if you've had a chance to look at the previous code I sent or not, > but I was able to reduce the code (see below) to its smallest size and still > have the problem, albeit at a slower rate. The problem appears to come from > changing values in the array. Does this create another reference to the > array, which can't be released? If this problem does not have a work-around > or "fix", please let me know. > This is now fixed in SVN. -Travis From faltet at carabos.com Wed Jun 21 05:14:58 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed, 21 Jun 2006 11:14:58 +0200 Subject: [Numpy-discussion] ANN: PyTables (a hierarchical database) 1.3.2 released Message-ID: <200606211115.02727.faltet@carabos.com> =========================== Announcing PyTables 1.3.2 =========================== This is a new minor release of PyTables. Here you will find, among other things, improved support for NumPy strings and the ability to create indexes of NumPy-flavored tables (this capability was broken in earlier versions). *Important note*: one of the fixes addresses an important bug that shows up when browsing files with lots of nodes, making PyTables crash. Because of this, an upgrade is encouraged. Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed. Changes more in depth ===================== Bug fixes: - Changed the nodes in the lru cache heap from Pyrex to pure Python ones.
This fixes a problem that can appear in certain situations (mainly, when navigating back and forth along lots of Node objects). While this fix is sub-optimal, at least it leads to correct behaviour until the faster approach eventually comes back. - Due to different conventions in padding chars, a special case has been added when converting numarray strings into numpy ones so that these different conventions are handled correctly. Fixes ticket #13 and other strange numpy string quirks (thanks to Pepe Barbe). - Solved an issue that appeared when indexing Table columns with flavor 'numpy'. Now, tables that are 'numpy' flavored can be indexed as well. - Solved an issue when saving string atoms with ``VLArray`` with a flavor different from "python". The problem was that the item sizes of the original strings were not checked, so rubbish was put on disk. Now, if an item size of the input is different from the item size of the atom, a conversion is forced. Added tests to check for these situations. - Fixed a problem with removing a table with indexed columns under certain situations. Thanks to Andrew Straw for reporting it. - Fixed a small glitch in the ``ptdump`` utility that prevented dumping ``EArray`` data with an enlargeable dimension different from the first one. - Make the parent node unreference the child node when creation fails. Fixes ticket #12 (thanks to Eilif). - Saving zero-length strings in Array objects used to raise a ZeroDivisionError. Now, it raises a more sensible NotImplementedError until this is supported. Backward-incompatible changes: - Please see the ``RELEASE-NOTES.txt`` file. Deprecated features: - None Important note for Windows users ================================ If you want to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003.
It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and the numarray package (NumPy and Numeric are also supported) to achieve maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please contact us if you are experiencing problems.
Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html From tim.hochberg at cox.net Wed Jun 21 15:02:27 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed, 21 Jun 2006 12:02:27 -0700 Subject: [Numpy-discussion] Numexpr does broadcasting. Message-ID: <449997C3.2000905@cox.net> Numexpr can now handle broadcasting. As an example, check out this implementation of the distance-in-a-bunch-of-dimensions function that's been going around. This is 80% faster than the most recent one posted on my box and considerably easier to read. expr = numexpr("(a - b)**2", [('a', float), ('b', float)]) def dist_numexpr(A, B): return sqrt(sum(expr(A[:,newaxis], B[newaxis,:]), axis=2)) Now, if we just could do 'sum' inside the numexpr, I bet that this would really scream. This is something that David has talked about adding at various points.
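For readers without numexpr installed, the broadcasting pattern Tim's dist_numexpr uses can be written in plain NumPy; this is only a sketch of the same all-pairs distance computation, with hypothetical point sets A and B:

```python
import numpy as np

A = np.random.rand(5, 3)   # 5 points in 3-D (illustrative shapes)
B = np.random.rand(7, 3)   # 7 points in 3-D

# (5,1,3) - (1,7,3) broadcasts to (5,7,3): every pairwise difference
diff = A[:, np.newaxis, :] - B[np.newaxis, :, :]
dist = np.sqrt((diff ** 2).sum(axis=2))   # (5,7) matrix of distances
```

The numexpr version simply fuses the subtract-and-square step into one pass over memory.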
I just made his life a bit harder by supporting broadcasting, but I still don't think it would be all that hard to add reduction operations like sum and product as long as they were done at the outermost level of the expression. That is, "sum(x*2 + 5)" should be doable, but "5 + sum(x**2)" would likely be difficult. Anyway, I thought that was cool, so I figured I'd share ;-) [Bizarrely, numexpr seems to run faster on my box when compiled with "-O1" than when compiled with "-O2" or "-O2 -funroll-all-loops". Go figure.] -tim From wbaxter at gmail.com Wed Jun 21 18:40:47 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 07:40:47 +0900 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: Actually I think using mat() (just an alias for the matrix constructor) is a bad way to do it. That mat() (and most others on that page) should probably be replaced with asmatrix() to avoid the copy. --bb On 6/22/06, Keith Goodman wrote: > > On 6/21/06, Robert Kern wrote: > > > Keith Goodman wrote: > > > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > > > element-by-element matrix multiplication.
I think it would be helpful > > > to also include multiply(a, b). > > > > > > a.*b > > > > > > mat(a.A * b.A) or > > > multiply(a, b) > > > > It is a wiki page. You may edit it yourself without needing to ask > permission. > > OK. Done. I also added a notice about SciPy's PayPal account being > suspended. > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Jun 21 22:08:50 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 22:08:50 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: > Alan G Isaac wrote: >> M.transpose()[V>0] >> If you want the columns as columns, >> you can transpose again. On Wed, 21 Jun 2006, Keith Goodman apparently wrote: > I can't get that to work when M is a n by m matrix: The problem is not M being a matrix. You made V a matrix (i.e., 2d). So you need to ravel() it first. >> M.transpose()[V.ravel()>0] hth, Alan Isaac From aisaac at american.edu Wed Jun 21 22:08:52 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 22:08:52 -0400 Subject: [Numpy-discussion] flatiter and inequality comparison Message-ID: I do not understand how to think about this: >>> x=arange(3).flat >>> x >>> x>2 True >>> x>10 True Why? 
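Returning to Keith's element-by-element multiplication suggestion earlier in this digest: multiply() preserves the matrix type, so the mat(a.A * b.A) round-trip is unnecessary. A quick sketch with throwaway example matrices (asmatrix is used here in place of the mat alias):

```python
import numpy as np

a = np.matrix([[1, 2], [3, 4]])
b = np.matrix([[5, 6], [7, 8]])

# a * b on matrix objects is the dot product; multiply() is elementwise
elementwise = np.multiply(a, b)
via_arrays = np.asmatrix(a.A * b.A)   # the wiki page's longer spelling
```

Both give the same matrix, but multiply() says what it means in one call.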
(I realize this behaves like xrange, so this may not be a numpy question, but I do not understand that behavior either.) What I expected: that a flatiter object would iterate through its values and return either - a flatiter of the resulting comparisons, or - an array of the resulting comparisons Thank you, Alan Isaac From michael.sorich at gmail.com Wed Jun 21 22:01:59 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Thu, 22 Jun 2006 11:31:59 +0930 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606210612.09374.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> Message-ID: <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> I was setting the fill_value as 'NA' when constructing the array so the masked values would be printed as 'NA'. It is not a big deal to avoid doing this. Nevertheless, the differences between a masked array with a boolean mask and a mask of booleans have caused me trouble before. Especially when there are hidden in-place conversions of a mask which is a array of False to a mask which is False. e.g. import numpy print numpy.version.version ma1 = numpy.ma.array(((1.,2,3),(4,5,6)), mask=((0,0,0),(0,0,0))) print ma1.mask a1 = numpy.asarray(ma1) print ma1.mask ---------------------- 0.9.9.2538 [[False False False] [False False False]] False On 6/21/06, Pierre GM wrote: > On Wednesday 21 June 2006 04:46, Michael Sorich wrote: > > When transposing a masked array of dtype ' > ndarray of dtype '|O4' was returned. > > > OK, I see where the problem is: > When your fill_value has a type that cannot be converted to the type of your > data, the `filled` method (used internally in many functions, such as > `transpose`) raises a TypeError, which is caught and your array is converted > to 'O'. > > That's what happen here: your fill_value is a string, your data are integer, > the types don't match, hence the conversion. 
So, no, I don't think that's a > bug. > > Why filling when you don't have any masked values, then ? Well, there's a > subtle difference between a boolean mask and a mask of booleans. > When the mask is boolean (mask=nomask=False), there's no masked value, and > `filled` returns the data. > Now, when your mask is an array of booleans (your first case), MA doesn't check > whether mask.any()==False to determine whether there are some missing data or > not, it just processes the whole array of booleans. > > I agree that's a bit confusing here, and there might be some room for > improvement (for example, changing the current > `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing > mask to nomask if mask.any()==False). But I don't think that qualifies as a > bug. > > In short: > when you have an array of numbers, don't try to fill it with characters. > From simon at arrowtheory.com Thu Jun 22 07:19:05 2006 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 22 Jun 2006 12:19:05 +0100 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <449978D2.1090000@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> Message-ID: <20060622121905.2d65372d.simon@arrowtheory.com> On Wed, 21 Jun 2006 10:50:26 -0600 Travis Oliphant wrote: > > So, in SVN NumPy, you will be able to do > > a[:,V>0] > a[V>0,:] > > The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > called. OK. But just for the record, we should note how to do the operation that this used to do, eg. >>> a=numpy.array([1,2]) >>> a[[numpy.bool_(1)]] array([2]) >>> This could be a way of, say, mapping a large boolean array onto some other values (1 or 2 in the above case). So, with the new implementation, is it possible to cast the bool array to an integer type without incurring a copy overhead ? And finally, is someone keeping track of the performance of array getitem ?
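Pierre's distinction above between a boolean mask (nomask) and a mask of booleans can be seen directly in the modern numpy.ma module; a minimal sketch with illustrative names:

```python
import numpy.ma as ma

plain = ma.array([1, 2, 3])                     # no mask given
explicit = ma.array([1, 2, 3], mask=[0, 1, 0])  # a mask of booleans

# With no mask given, the mask attribute is the scalar nomask (False),
# not an array of False values
nomask_case = plain.mask is ma.nomask
masked_count = explicit.mask.sum()  # one element is masked
```

The scalar nomask lets MA skip mask processing entirely, which is exactly the shortcut Pierre describes `filled` taking.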
It seems that as travis overloads it more and more it might then slow down in some cases. I must admit my vision is blurring and my head is spinning as numpy goes through these growing pains. I hope it's over soon. Not because I have trouble keeping up (although i do) but it's my matlab/R/numarray entrenched co-workers who cannot be exposed to this unstable development (they will run screaming to the woods). cheers, Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From wbaxter at gmail.com Wed Jun 21 23:23:38 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 12:23:38 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On 6/22/06, Alan G Isaac wrote: > > > Alan G Isaac wrote: > >> M.transpose()[V>0] > >> If you want the columns as columns, > >> you can transpose again. > > > On Wed, 21 Jun 2006, Keith Goodman apparently wrote: > > I can't get that to work when M is an n by m matrix: > > The problem is not M being a matrix. > You made V a matrix (i.e., 2d). > So you need to ravel() it first. > >> M.transpose()[V.ravel()>0] No dice, V.ravel() returns a matrix still. Looks like you'll need M.T[V.A.ravel()>0].T Just lovely. Is the new bool conversion thingy going to help make the syntax more reasonable for matrices, too? Seems like it will still require M[:,V.A.ravel() > 0] or M[:, V.A.squeeze() > 0] or M[:,V.A[:,0]>0] Anyway, this seems to me just more evidence that one is better off getting used to the 'array' way of doing things rather than clinging to Matlab ways by using 'matrix'. Is it worth dealing with the extra A's and asmatrix()'s and squeeze()'s that seem to crop up just to be able to write A*B instead of dot(A,B) (*)? --Bill (*) Ok, there's also the bit about being able to tell column vectors from row vectors and getting useful errors when you try to use a row that should have been a column.
And then there's also the .T, .I, .H convenience factor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fitz at astron.berkeley.edu Thu Jun 22 00:39:51 2006 From: fitz at astron.berkeley.edu (Michael Fitzgerald) Date: Wed, 21 Jun 2006 21:39:51 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays Message-ID: <200606212139.52511.fitz@astron.berkeley.edu> Hello all, I'm encountering some (relatively new?) behavior with masked arrays that strikes me as bizarre. Raising zero to a floating-point value is triggering a mask to be set, even though the result should be well-defined. When using fixed-point integers for powers, everything works as expected. I'm seeing this with both numarray and numpy. Take the case of 0**1, illustrated below: >>> import numarray as n1 >>> import numarray.ma as n1ma >>> n1.array(0.)**1 array(0.0) >>> n1.array(0.)**1. array(0.0) >>> n1ma.array(0.)**1 array(0.0) >>> n1ma.array(0.)**1. array(data = [1.0000000200408773e+20], mask = 1, fill_value=[ 1.00000002e+20]) >>> import numpy as n2 >>> import numpy.core.ma as n2ma >>> n2.array(0.)**1 array(0.0) >>> n2.array(0.)**1. array(0.0) >>> n2ma.array(0.)**1 array(0.0) >>> n2ma.array(0.)**1. array(data = 1e+20, mask = True, fill_value=1e+20) I've been using python v2.3.5 & v.2.4.3, numarray v1.5.1, and numpy v0.9.8, and tested this on an x86 Debian box and a PPC OSX box. It may be the case that this issue has manifested in the past several months, as it's causing a new problem with some of my older code. Any thoughts? 
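Mike's report above boils down to whether 0.0 raised to a positive floating-point power is well-defined — it is, and the result is simply 0.0. A minimal check of what both the plain and masked cases should produce (the masking he saw was specific to the MA implementations of that era):

```python
import numpy as np
import numpy.ma as ma

plain = np.array(0.0) ** 1.0    # well-defined: 0.0
masked = ma.array(0.0) ** 1.0   # should likewise be 0.0, not masked
```

Masking is only warranted when the power is genuinely undefined, e.g. a negative base with a non-integer exponent.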
Thanks in advance, Mike From oliphant.travis at ieee.org Thu Jun 22 01:58:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 23:58:52 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <20060622121905.2d65372d.simon@arrowtheory.com> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <20060622121905.2d65372d.simon@arrowtheory.com> Message-ID: <449A319C.6030008@ieee.org> Simon Burton wrote: > On Wed, 21 Jun 2006 10:50:26 -0600 > Travis Oliphant wrote: > > >> So, in SVN NumPy, you will be able to do >> >> a[:,V>0] >> a[V>0,:] >> >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been >> called. >> > > OK. > But just for the record, we should note how to > do the operation that this used to do, eg. > > >>>> a=numpy.array([1,2]) >>>> a[[numpy.bool_(1)]] >>>> > array([2]) > This behavior hasn't changed... All that's changed is that what used to raise an error (boolean arrays in a tuple) now works in the same way that boolean arrays worked before. > > So, with the new implementation, is it possible to cast > the bool array to an integer type without incurring a copy overhead ? > I'm not sure what you mean. What copy overhead? There is still copying going on. The way it's been implemented, the boolean arrays get replaced with integer index arrays under the hood so it is really nearly identical to replacing the boolean array with nonzero(). > And finally, is someone keeping track of the performance > of array getitem ? It seems that as travis overloads it more and > more it might then slow down in some cases. > Actually, I'm very conscientious of the overhead of getitem in code that I add. I just today found a memory leak in code that was added that I did not review carefully that was also slowing down all accesses of arrays > 1d that resulted in array scalars. I added an optimization that should speed that up.
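On Simon's copy-overhead question: since bool and int8 have the same item size, a bool array can be reinterpreted as integers with view(), which allocates nothing. This is just a sketch of that general trick, not something the new indexing code itself does:

```python
import numpy as np

b = np.array([True, False, True, True])
# Same buffer reinterpreted as int8: no copy is made
i = b.view(np.int8)
```

Any write through `i` is visible through `b`, since they share memory.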
But, it would be great if others could watch the speed changes for basic operations. > I must admit my vision is blurring and head is spinning as numpy > goes through these growing pains The 1.0 beta release is coming shortly. I would like to see the first beta by the first of July. The final 1.0 release won't occur, though, until after SciPy 2006. Thanks for your patience. We've been doing a lot of house-cleaning lately to separate the "old but compatible" interface from the "new." This has resulted in some confusion, to be sure. Please don't hesitate to voice your concerns. -Travis From schofield at ftw.at Thu Jun 22 03:53:44 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 22 Jun 2006 09:53:44 +0200 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: Message-ID: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > Actually I think using mat() (just an alias for the matrix > constructor) is a bad way to do it. That mat() (and most others on > that page) should probably be replaced with asmatrix() to avoid the > copy. Perhaps the 'mat' function should become an alias for 'asmatrix'. I've thought this for a while. Then code and documentation like this page could remain short and simple without incurring the performance penalty. Go on, shoot me down! :) -- Ed From stefan at sun.ac.za Sun Jun 18 21:14:44 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 19 Jun 2006 03:14:44 +0200 Subject: [Numpy-discussion] Numexpr does broadcasting. In-Reply-To: <449997C3.2000905@cox.net> References: <449997C3.2000905@cox.net> Message-ID: <20060619011444.GA17434@mentat.za.net> Hi Tim On Wed, Jun 21, 2006 at 12:02:27PM -0700, Tim Hochberg wrote: > > Numexpr can now handle broadcasting. As an example, check out this > implementation of the distance-in-a-bunch-of-dimensions function that's > been going around.
This is 80% faster than the most recent one posted on > my box and considerably easier to read. This looks really cool. However, it does seem to break scalar operation: a = 3. b = 4. expr = numexpr("2*a+3*b",[('a', float),('b', float)]) expr.run(a,b) Out[41]: array(-7.1680117685147315e-39) I haven't used numexpr before, so I could be doing something silly (although I did verify that the above works on r1986). Cheers Stéfan From pau.gargallo at gmail.com Thu Jun 22 06:26:18 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 22 Jun 2006 12:26:18 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <4499803C.1010302@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> <4499803C.1010302@ieee.org> Message-ID: <6ef8f3380606220326p1631cc90j755550f91b6bc1b2@mail.gmail.com> ''' The following mail is a bit long and tedious to read, sorry about that. Here is the abstract: "I would like boolean indexing to work like slices and not like arrays of indices" ''' hi, I'm _really_ sorry to insist, but I have been thinking on it and I don't feel like replacing <bool> with nonzero() is what we want. For me this is a bad trick equivalent to replacing slices by arrays of indices with r_[]: - it works only if you do that for a single axis.
Let me explain: if i have an array, >>> from numpy import * >>> a = arange(12).reshape(3,4) i can slice it: >>> a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) i can define boolean arrays 'equivalent' to these slices >>> b1 = array([False,True,True]) # equivalent to 1:3 >>> b2 = array([True,True,True,False]) # equivalent to 0:3 now if i use one of these boolean arrays for indexing, all works like with slices: >>> a[b1,:] #same as a[1:3,:] array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> a[:,b2] # same as a[:,0:3] array([[ 0, 1, 2], [ 4, 5, 6], [ 8, 9, 10]]) but if I use both at the same time: >>> a[b1,b2] # not equivalent to a[1:3,0:3] but to a[r_[1:3],r_[0:3]] Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: shape mismatch: objects cannot be broadcast to a single shape it doesn't work because nonzero(b1) and nonzero(b2) have different shapes. if I want the equivalent to a[1:3,0:3], i can do >>> a[ix_(b1,b2)] array([[ 4, 5, 6], [ 8, 9, 10]]) I can not see when the current behaviour of a[b1,b2] would be used. From my (probably naive) point of view, <bool> should not be converted to nonzero(), but to some kind of slicing object. In that way boolean indexing could work like slices and not like arrays of integers, which would be more intuitive for me. Converting slices to arrays of indices is a trick that only works for one axis: >>> a[r_[1:3],0:3] #same as a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) >>> a[1:3,r_[0:3]] #same as a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) >>> a[r_[1:3],r_[0:3]] # NOT same as a[1:3,0:3] Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: shape mismatch: objects cannot be broadcast to a single shape am I completely wrong?? maybe the current behaviour (only useful for one axis) is enough?? sorry for asking things and not giving solutions and thanks for everything.
pau PD: I noticed that the following code works >>> a[a>4,:,:,:,:,1:2:3,...,4:5:6] array([ 5, 6, 7, 8, 9, 10, 11]) From konrad.hinsen at laposte.net Thu Jun 22 06:39:47 2006 From: konrad.hinsen at laposte.net (Konrad Hinsen) Date: Thu, 22 Jun 2006 12:39:47 +0200 Subject: [Numpy-discussion] Numeric and Python 2.5b1 Message-ID: Those who try out Python 2.5b1 and add Numeric might be annoyed by the warning message that Python issues when Numeric is imported the first time. This is due to the fact that Numeric lives inside a directory called "Numeric" without being a package - Numeric has been around for longer than packages in Python. You can get rid of this warning by adding the following lines to sitecustomize.py: import warnings try: warnings.filterwarnings("ignore", category=ImportWarning) except NameError: pass del warnings The try statement ensures that the code will work for older Python releases as well. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Centre de Biophysique Moléculaire, CNRS Orléans Synchrotron Soleil - Division Expériences Saint Aubin - BP 48 91192 Gif sur Yvette Cedex, France Tel. +33-1 69 35 97 15 E-Mail: hinsen ät cnrs-orleans.fr --------------------------------------------------------------------- From wbaxter at gmail.com Thu Jun 22 04:54:04 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 17:54:04 +0900 Subject: [Numpy-discussion] Matrix construction In-Reply-To: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/22/06, Ed Schofield wrote: > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > Actually I think using mat() (just an alias for the matrix > > constructor) is a bad way to do it. That mat() (and most others on > > that page) should probably be replaced with asmatrix() to avoid the > > copy. > > Perhaps the 'mat' function should become an alias for 'asmatrix'.
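Pau's point earlier in this digest can be verified directly: ix_() accepts boolean arrays and produces the cross-product (outer) selection that matches the slice version, while a[b1, b2] pairs the two index arrays elementwise. A compact check using his arrays:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b1 = np.array([False, True, True])         # plays the role of 1:3
b2 = np.array([True, True, True, False])   # plays the role of 0:3

# ix_ converts each boolean array via nonzero() and builds an open mesh,
# so this selects the full 2x3 sub-block, like a[1:3, 0:3]
cross = a[np.ix_(b1, b2)]
```
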
> I've thought this for a while. That makes sense to me. As far as I know, asmatrix() defaults to calling the constructor if it can't snarf the memory of the object being passed in. So, go on, shoot Ed and me down! :-) --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From j-renner at northwestern.edu Thu Jun 22 12:24:57 2006 From: j-renner at northwestern.edu (Jocelyn E. Renner) Date: Thu, 22 Jun 2006 10:24:57 -0600 Subject: [Numpy-discussion] Failure to install Message-ID: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> Hello! I am attempting to install numarray on my Mac OS X 10.3, and I successfully downloaded it. Since I am attempting to use this with Cantera, I followed their recommendations as to installing, which included typing: python setup.py install when I was in the numarray directory. When I executed this, I received the following error message: error: could not create '/System/Library/Frameworks/Python.framework/Versions/2.3/include/ python2.3/numarray': Permission denied I have tried to unlock this folder with little to no luck (I must confess I am not the most computer savvy person ever). If anyone could give me some advice as to how to get this to install properly, I'd appreciate it! If it does not need to be in this folder, is there any way to bypass this? Thanks so much! Jocelyn Jocelyn Renner Mechanical Engineering, Northwestern University ------------------------------------------------------------------------ ----------------- No man is an island, entire of itself...any man's death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee.
---John Donne Meditation XVII From david.huard at gmail.com Thu Jun 22 12:26:52 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 22 Jun 2006 12:26:52 -0400 Subject: [Numpy-discussion] unique() should return a sorted array Message-ID: <91cf711d0606220926m48c6857cr78b4484f4a137a2@mail.gmail.com> Hi, Numpy's unique(x) returns an array x with repetitions removed. However, since it returns asarray(dict.keys()), the resulting array is not sorted; worse, the original order may not be preserved. I think that unique() should return a sorted array, like its matlab homonym. Regards, David Huard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Thu Jun 22 12:33:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 09:33:10 -0700 Subject: [Numpy-discussion] Failure to install In-Reply-To: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> References: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> Message-ID: On 6/22/06, Jocelyn E. Renner wrote: > python setup.py install > > when I was in the numarray directory. When I executed this, I received > the following error message: > error: could not create > '/System/Library/Frameworks/Python.framework/Versions/2.3/include/ > python2.3/numarray': Permission denied > > I have tried to unlock this folder with little to no luck (I must > confess I am not the most computer savvy person ever). Try sudo python setup.py install From kwgoodman at gmail.com Thu Jun 22 12:47:12 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 09:47:12 -0700 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/22/06, Bill Baxter wrote: > On 6/22/06, Ed Schofield wrote: > > > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > > > Actually I think using mat() (just an alias for the matrix > > > constructor) is a bad way to do it.
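David's request can be sketched in a few lines — deduplicate, then sort — though the exact spelling inside NumPy would of course differ (sorted_unique is a hypothetical helper name, not an existing function):

```python
import numpy as np

def sorted_unique(x):
    # dict/set-based deduplication loses order; an explicit sort
    # gives the canonical, Matlab-like result David asks for
    return np.sort(np.asarray(list(set(np.ravel(x)))))

result = sorted_unique([3, 1, 2, 1, 3])
```
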
That mat() (and most others on > > > that page) should probably be replaced with asmatrix() to avoid the > > > copy. > > > > Perhaps the 'mat' function should become an alias for 'asmatrix'. > > I've thought this for a while. > > > That makes sense to me. As far as I know, asmatrix() defaults to calling > the constructor if it can't snarf the memory of the object being passed in. > > So, go on, shoot Ed and me down! :-) I can anticipate one problem: the Pirates will want their three-letter abbreviation for asarray. Will functions like rand and eye always return arrays? Or will there be a day when you can tell numpy that you are working with matrices and then it will return matrices when you call rand, eye, etc? From oliphant.travis at ieee.org Thu Jun 22 14:57:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 12:57:27 -0600 Subject: [Numpy-discussion] Recent SVN of NumPy has issues with SciPy Message-ID: <449AE817.1020700@ieee.org> There are still some issues with my recent check-in for NumPy (r2663). But, it does build and run the numpy.tests cleanly. (It's failing on SciPy tests...) You may want to hold off for a few hours until I can straighten it out. -Travis From wbaxter at gmail.com Thu Jun 22 15:11:11 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 23 Jun 2006 04:11:11 +0900 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/23/06, Keith Goodman wrote: > > On 6/22/06, Bill Baxter wrote: > > On 6/22/06, Ed Schofield wrote: > > > > > > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > > > > > Actually I think using mat() (just an alias for the matrix > > > > constructor) is a bad way to do it. That mat() (and most others on > > > > that page) should probably be replaced with asmatrix() to avoid the > > > > copy. > > > > > > Perhaps the 'mat' function should become an alias for 'asmatrix'. > > > I've thought this for a while. 
> > > > > > That makes sense to me. As far as I know, asmatrix() defaults to > calling > the constructor if it can't snarf the memory of the object being passed > in. > > > > So, go on, shoot Ed and me down! :-) > > I can anticipate one problem: the Pirates will want their three-letter > abbreviation for asarray. arr() me maties! Will functions like rand and eye always return arrays? Or will there > be a day when you can tell numpy that you are working with matrices > and then it will return matrices when you call rand, eye, etc? > I don't disagree there's a need, but you can always make your own: def mrand(*vargs): return asmatrix(rand(*vargs)) def meye(N, **kwargs): return asmatrix(eye(N,**kwargs)) --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at mailcan.com Thu Jun 22 15:15:15 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Thu, 22 Jun 2006 15:15:15 -0400 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> Message-ID: <200606221515.16089.pgmdevlist@mailcan.com> On Wednesday 21 June 2006 22:01, Michael Sorich wrote: > I was setting the fill_value as 'NA' when constructing the array so > the masked values would be printed as 'NA'. It is not a big deal to > avoid doing this. You can use masked_print_option, as illustrated below, without using a fill_value incompatible with your data type. >>>import numpy.core.ma as MA >>>X = MA.array([1,2,3],mask=[0,1,0]) >>>print X [1 -- 3] >>>MA.masked_print_option=MA._MaskedPrintOption('N/A') >>>print X [1 N/A 3] > Nevertheless, the differences between a masked array with a boolean > mask and a mask of booleans have caused me trouble before.
Especially > when there are hidden in-place conversions of a mask which is a array > of False to a mask which is False. e.g. OK, I'm still using 0.9.8 and I can't help you with this one. In that version, N.asarray transforms the MA into a ndarray, so you lose the mask. But I wonder: if none of your values are masked, the natural behavior would be to have `data.mask==nomask`, which speeds up things a bit. This gain of time is why I was suggesting that `mask` would be forced to `nomask` at the creation, if `mask.any()==False`. Could you give me some examples of cases where you need the mask to stay as an array of False ? If you need to access the mask as an array, you can always use MA.getmaskarray. From kwgoodman at gmail.com Thu Jun 22 15:25:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 12:25:16 -0700 Subject: [Numpy-discussion] How do I seed the radom number generator? Message-ID: How do I seed rand and randn? From chanley at stsci.edu Thu Jun 22 15:32:40 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 22 Jun 2006 15:32:40 -0400 (EDT) Subject: [Numpy-discussion] C-API support for numarray added to NumPy Message-ID: <20060622153240.CJT13983@comet.stsci.edu> >You will also need to change the include directories used in compiling >by appending the directories returned by >numpy.numarray.util.get_numarray_include_dirs() > Hi Travis, I believe that there is a problem with this function. When executing interactively with numpy version 0.9.9.2660 I get the following result: Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] Type "copyright", "credits" or "license" for more information. In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.9.2660' In [3]: import numpy.numarray.util as nnu In [4]: nnu.get_numarray_include_dirs() Out[4]: ['C:\\Python24\\lib\\site-packages\\numpy\\numarray'] Unfortunately this does not have the appropriate (or any) header files. 
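The missing-headers report above is easy to sanity-check. The numarray compatibility helper has long since been removed from NumPy, so the sketch below uses the modern `numpy.get_include()` instead and simply walks the reported directory for `.h` files; the check itself is hypothetical, not part of any NumPy API:

```python
import os

import numpy as np

# Walk the include directory NumPy reports and count the header files
# found there; an empty result reproduces the problem described above.
inc_dir = np.get_include()
headers = []
for root, _dirs, files in os.walk(inc_dir):
    headers.extend(os.path.join(root, f) for f in files if f.endswith(".h"))

print(inc_dir)
print("found %d header files" % len(headers))
```

On a healthy install the count is nonzero; a zero count means a build pointed at that directory would fail exactly as reported.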
Chris From robert.kern at gmail.com Thu Jun 22 15:33:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 14:33:43 -0500 Subject: [Numpy-discussion] How do I seed the radom number generator? In-Reply-To: References: Message-ID: Keith Goodman wrote: > How do I seed rand and randn? If you can, please use the .rand() and .randn() methods on a RandomState object which you can initialize with whatever seed you like. In [1]: import numpy as np rs In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) In [3]: rs.rand(5) Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) In [5]: np.random.RandomState.seed? Type: method_descriptor Base Class: String Form: Namespace: Interactive Docstring: Seed the generator. seed(seed=None) seed can be an integer, an array (or other sequence) of integers of any length, or None. If seed is None, then RandomState will try to read data from /dev/urandom (or the Windows analogue) if available or seed from the clock otherwise. The rand() and randn() "functions" are actually references to methods on a global instance of RandomState. The .seed() method on that object is also similarly exposed as numpy.random.seed(). If you are writing new code, please explicitly use a RandomState object. Only use numpy.random.seed() if you must control code that uses the global rand() and randn() "functions" and you can't modify it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Thu Jun 22 15:45:15 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 12:45:15 -0700 Subject: [Numpy-discussion] How do I seed the radom number generator? In-Reply-To: References: Message-ID: On 6/22/06, Robert Kern wrote: > Keith Goodman wrote: > > How do I seed rand and randn? 
> > If you can, please use the .rand() and .randn() methods on a RandomState object > which you can initialize with whatever seed you like. > > In [1]: import numpy as np > rs > In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) > > In [3]: rs.rand(5) > Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) Perfect! Thank you. From saagesen at sfu.ca Thu Jun 22 15:46:41 2006 From: saagesen at sfu.ca (saagesen at sfu.ca) Date: Thu, 22 Jun 2006 12:46:41 -0700 Subject: [Numpy-discussion] problem building NumPy Message-ID: <200606221946.k5MJkfo7009521@rm-rstar.sfu.ca> An embedded and charset-unspecified text was scrubbed... Name: not available URL: From pfdubois at gmail.com Thu Jun 22 16:26:04 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Thu, 22 Jun 2006 13:26:04 -0700 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606210612.09374.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> Message-ID: Pierre wrote: > I agree that's a bit confusing here, and there might be some room for > improvement (for example, changing the current > `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing > mask to nomask if mask.any()==False). But I don;t think that qualifies as > bug. In the original MA in Numeric, I decided that to constantly check for masks that didn't actually mask anything was not a good idea. It punishes normal use with a very expensive check that is rarely going to be true. If you are in a setting where you do not want this behavior, but instead want masks removed whenever possible, you may wish to wrap or replace things like masked_array so that they call make_mask with flag = 1: y = masked_array(data, make_mask(maskdata, flag=1)) y will have no mask if maskdata is all false. Thanks to Pierre for pointing out about masked_print_option. 
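In today's numpy.ma the `flag` argument Paul mentions is spelled `shrink`, but the idea is unchanged: an all-False mask collapses to the `nomask` sentinel instead of being stored element by element. A minimal sketch, assuming the modern API (the names differ from the 2006-era MA module):

```python
import numpy.ma as ma

data = [1.0, 2.0, 3.0]
maskdata = [0, 0, 0]  # nothing is actually masked

# shrink=True (the modern spelling of flag=1) collapses an all-False
# mask to the nomask sentinel instead of keeping a full boolean array.
m = ma.make_mask(maskdata, shrink=True)
print(m is ma.nomask)  # True

y = ma.masked_array(data, mask=m)
print(ma.getmask(y) is ma.nomask)  # True: no per-element mask is stored
print(ma.getmaskarray(y))          # a full False array, expanded on demand
```

This is exactly the trade-off discussed in the thread: `getmask` hands back whatever is stored (possibly the cheap sentinel), while `getmaskarray` always expands to a boolean array when you need one.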
Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Jun 22 16:36:41 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 22 Jun 2006 22:36:41 +0200 Subject: [Numpy-discussion] sourceforge advertising Message-ID: <20060622203641.GB28648@mentat.za.net> Hi, I noticed that sourceforge now adds another 8 lines of advertisement to the bottom of every email sent to the list. Am I the only one who finds this annoying? Is there any reason why the numpy list can't run on scipy.org? Regards St?fan From robert.kern at gmail.com Thu Jun 22 17:46:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 16:46:59 -0500 Subject: [Numpy-discussion] sourceforge advertising In-Reply-To: <20060622203641.GB28648@mentat.za.net> References: <20060622203641.GB28648@mentat.za.net> Message-ID: Stefan van der Walt wrote: > Hi, > > I noticed that sourceforge now adds another 8 lines of advertisement > to the bottom of every email sent to the list. Am I the only one who > finds this annoying? Is there any reason why the numpy list can't run > on scipy.org? We'd be happy to move it to scipy.org. However moving a mailing list is always a hassle for subscribers, so we were not going to bother until there was a compelling reason. This may be one, though. For all subscribers: If you have an opinion over whether to move the list or to keep it on Sourceforge, please email me *offlist*. If enough people want to move and few people want to stay, we'll set up a new mailing list on scipy.org. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rowen at cesmail.net Thu Jun 22 18:45:01 2006 From: rowen at cesmail.net (Russell E. 
Owen) Date: Thu, 22 Jun 2006 15:45:01 -0700 Subject: [Numpy-discussion] problem building Numeric on python 2.5 Message-ID: I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get Numeric 24.2 installed. It seems to build fine (no obvious error messages), but when I try to import it I get: Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import Numeric __main__:1: ImportWarning: Not importing directory '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-pac kages/Numeric': missing __init__.py >>> Any ideas? Is it somehow incompatible with python 2.5b1? For what it's worth, numarray builds and installs fine. I've not tried numpy or any other packages yet. -- Russell From robert.kern at gmail.com Thu Jun 22 18:51:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 17:51:06 -0500 Subject: [Numpy-discussion] problem building Numeric on python 2.5 In-Reply-To: References: Message-ID: Russell E. Owen wrote: > I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get > Numeric 24.2 installed. It seems to build fine (no obvious error > messages), but when I try to import it I get: > Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import Numeric > __main__:1: ImportWarning: Not importing directory > '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-pac > kages/Numeric': missing __init__.py > > Any ideas? Is it somehow incompatible with python 2.5b1? > > For what it's worth, numarray builds and installs fine. I've not tried > numpy or any other packages yet. See Konrad Hinsen's post earlier today "Numeric and Python 2.5b1" for a description of the issue and a way to silence the warnings. 
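A filter along these lines silences the warning with the standard `warnings` module (`ImportWarning` became a builtin in Python 2.5); this is a sketch of one way to do it, not necessarily Konrad's exact recipe, and the commented-out package import is only illustrative:

```python
import warnings

# Register the filter before importing the package that triggers the
# "Not importing directory ... missing __init__.py" warning.
warnings.simplefilter("ignore", ImportWarning)

# import Numeric  # would now import without emitting the warning
```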
It's just a warning, though, not an error. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.hochberg at cox.net Thu Jun 22 18:52:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 22 Jun 2006 15:52:05 -0700 Subject: [Numpy-discussion] problem building Numeric on python 2.5 In-Reply-To: References: Message-ID: <449B1F15.1020501@cox.net> Russell E. Owen wrote: > I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get > Numeric 24.2 installed. It seems to build fine (no obvious error > messages), but when I try to import it I get: > Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>>> import Numeric >>>> > __main__:1: ImportWarning: Not importing directory > '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-pac > kages/Numeric': missing __init__.py > > > Any ideas? Is it somehow incompatible with python 2.5b1? > Import warning is a new 'feature' of 2.5. It warns if there are directories on sys.path that are *not* packages. I'll refer you to the py-dev archives if you want figure out the motivation for that. So, if everything seems to work, there's a good chance that nothing's wrong, but that your just seeing a complaint due to this new behaviour. If you check recent messages on Python-dev someone just posted a recipe for suppressing this warning. -tim > For what it's worth, numarray builds and installs fine. I've not tried > numpy or any other packages yet. > > -- Russell > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From michael.sorich at gmail.com Thu Jun 22 19:41:27 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 23 Jun 2006 09:11:27 +0930 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606221515.16089.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> <200606221515.16089.pgmdevlist@mailcan.com> Message-ID: <16761e100606221641u1dfcfaa8ne5a1ebdb606c7992@mail.gmail.com> On 6/23/06, Pierre GM wrote: > On Wednesday 21 June 2006 22:01, Michael Sorich wrote: > > Nevertheless, the differences between a masked array with a boolean > > mask and a mask of booleans have caused me trouble before. Especially > > when there are hidden in-place conversions of a mask which is a array > > of False to a mask which is False. e.g. > > OK, I'm still using 0.9.8 and I can't help you with this one. In that version, > N.asarray transforms the MA into a ndarray, so you lose the mask. No, the mask of ma1 is converted in place to False. ma1 remains a MaskedArray import numpy ma1 = numpy.ma.array(((1.,2,3),(4,5,6)), mask=((0,0,0),(0,0,0))) print ma1.mask, type(ma1) numpy.asarray(ma1) print ma1.mask, type(ma1) --output-- [[False False False] [False False False]] False > But I wonder: if none of your values are masked, the natural behavior would be > to have `data.mask==nomask`, which speeds up things a bit. 
This gain of time > is why I was suggesting that `mask` would be forced to `nomask` at the > creation, if `mask.any()==False`. > > Could you give me some examples of cases where you need the mask to stay as an > array of False ? > If you need to access the mask as an array, you can always use > MA.getmaskarray. If it did not sometimes effect the behaviour of the masked array, I would not be worried about automatic conversions between the two forms of the mask. Is it agreed that there should not be any differences in the behavior of the two forms of masked array e.g. with a mask of [[False,False],[False,False]] vs False? It is frustrating to track down exceptions when the array has one behavior, then there is a implicit conversion of the mask which changes the behaviour of the array. Mike From oliphant.travis at ieee.org Thu Jun 22 19:46:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 17:46:29 -0600 Subject: [Numpy-discussion] Recent SVN of NumPy has issues with SciPy In-Reply-To: <449AE817.1020700@ieee.org> References: <449AE817.1020700@ieee.org> Message-ID: <449B2BD5.1000401@ieee.org> Travis Oliphant wrote: > There are still some issues with my recent check-in for NumPy (r2663). > But, it does build and run the numpy.tests cleanly. (It's failing on > SciPy tests...) > These issues are now fixed (it was a brain-dead optimization that just doesn't work and was only exposed when converting between C- and Fortran- arrays during a cast.. Feel free to use SVN again... I do like to keep SVN so that it works. -Travis From myeates at jpl.nasa.gov Thu Jun 22 21:46:49 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 22 Jun 2006 18:46:49 -0700 Subject: [Numpy-discussion] fromfile croaking on windows Message-ID: <449B4809.9000701@jpl.nasa.gov> when I try and load a file with numpy.fromfile I keep getting a message .... 7245092 items requested but only 3899 read. Its always the same number read. 
I've checked and I'm giving the correct filename and its the correct size. Any idea whats going on? This is with 0.9.8 Mathew From myeates at jpl.nasa.gov Thu Jun 22 22:00:40 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 22 Jun 2006 19:00:40 -0700 Subject: [Numpy-discussion] p.s. Re: fromfile croaking on windows In-Reply-To: <449B4809.9000701@jpl.nasa.gov> References: <449B4809.9000701@jpl.nasa.gov> Message-ID: <449B4B48.2060902@jpl.nasa.gov> When I specify count=-1 I get the exact same error. So, numpy was able to determine the filesize. It just can't read it. Mathew Mathew Yeates wrote: > when I try and load a file with numpy.fromfile I keep getting a message .... > 7245092 items requested but only 3899 read. Its always the same number read. > > I've checked and I'm giving the correct filename and its the correct > size. Any idea whats going on? > This is with 0.9.8 > > Mathew > > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From oliphant.travis at ieee.org Fri Jun 23 01:49:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 23:49:49 -0600 Subject: [Numpy-discussion] fromfile croaking on windows In-Reply-To: <449B4809.9000701@jpl.nasa.gov> References: <449B4809.9000701@jpl.nasa.gov> Message-ID: <449B80FD.7080409@ieee.org> Mathew Yeates wrote: > when I try and load a file with numpy.fromfile I keep getting a message .... > 7245092 items requested but only 3899 read. Its always the same number read. > > Which platform are you on? 
Could you show exactly how you are calling the function. There were some reports of strange behavior on Windows that may be related to file-locking. I'm just not sure at this point. -Travis From svetosch at gmx.net Fri Jun 23 04:54:49 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 10:54:49 +0200 Subject: [Numpy-discussion] eye and identity: why both? Message-ID: <449BAC59.4090505@gmx.net> identity seems to be a "crippled" version of eye without any value added, apart from backwards-compatibility; So from a user point of view, which one does numpy recommend? And from a developer point of view (which doesn't really apply to me, of course), should identity maybe become an alias for eye(n, dtype=...)? Or is there a subtle (or not so subtle...) difference I am missing? I am aware this question is not really that important since everything works, but when I read that there will be a 1.0beta soon I thought maybe this is the right time to ask those kind of questions. Here are the help-strings: eye(N, M=None, k=0, dtype=) eye returns a N-by-M 2-d array where the k-th diagonal is all ones, and everything else is zeros. identity(n, dtype=) identity(n) returns the identity 2-d array of shape n x n. Cheers, Sven From fullung at gmail.com Fri Jun 23 09:42:40 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 23 Jun 2006 15:42:40 +0200 Subject: [Numpy-discussion] fromfile croaking on windows In-Reply-To: <449B80FD.7080409@ieee.org> Message-ID: <003b01c696ca$e5842240$01eaa8c0@dsp.sun.ac.za> Hello all Travis Oliphant wrote: > Mathew Yeates wrote: > > when I try and load a file with numpy.fromfile I keep getting a message > .... > > 7245092 items requested but only 3899 read. Its always the same number > read. > > > > > Which platform are you on? Could you show exactly how you are calling > the function. > > There were some reports of strange behavior on Windows that may be > related to file-locking. I'm just not sure at this point. 
I did some experiments. With my test file, this always fails: y = N.fromfile('temp.dat', dtype=N.float64) This works: y = N.fromfile(file('temp.dat','rb'), dtype=N.float64) More details in this ticket: http://projects.scipy.org/scipy/numpy/ticket/103 I don't quite understand how file-locking can be causing these problems. Travis, care to elaborate on what you think might be causing these problems? Cheers, Albert From kwgoodman at gmail.com Fri Jun 23 10:18:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 07:18:10 -0700 Subject: [Numpy-discussion] How do I make a diagonal matrix? Message-ID: How do I make a NxN diagonal matrix with a Nx1 column vector x along the diagonal? From svetosch at gmx.net Fri Jun 23 10:34:14 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 16:34:14 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: Message-ID: <449BFBE6.1050401@gmx.net> Keith Goodman schrieb: > How do I make a NxN diagonal matrix with a Nx1 column vector x along > the diagonal? > >>> help(n.diag) Help on function diag in module numpy.lib.twodim_base: diag(v, k=0) returns the k-th diagonal if v is a array or returns a array with v as the k-th diagonal if v is a vector. From joris at ster.kuleuven.be Fri Jun 23 10:40:33 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 23 Jun 2006 16:40:33 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: <200606231640.33722.joris@ster.kuleuven.be> On Friday 23 June 2006 16:34, Sven Schreiber wrote: [SS]: Keith Goodman schrieb: [SS]: > How do I make a NxN diagonal matrix with a Nx1 column vector x along [SS]: > the diagonal? 
[SS]: > [SS]: [SS]: >>> help(n.diag) [SS]: Help on function diag in module numpy.lib.twodim_base: [SS]: [SS]: diag(v, k=0) [SS]: returns the k-th diagonal if v is a array or returns a array [SS]: with v as the k-th diagonal if v is a vector. See also the Numpy Example List for a few examples: http://www.scipy.org/Numpy_Example_List J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From kwgoodman at gmail.com Fri Jun 23 10:55:47 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 07:55:47 -0700 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: On 6/23/06, Sven Schreiber wrote: > Keith Goodman schrieb: > > How do I make a NxN diagonal matrix with a Nx1 column vector x along > > the diagonal? > > > > >>> help(n.diag) > Help on function diag in module numpy.lib.twodim_base: > > diag(v, k=0) > returns the k-th diagonal if v is a array or returns a array > with v as the k-th diagonal if v is a vector. I tried >> x = rand(3,1) >> diag(x) array([ 0.87113114]) Isn't rand(3,1) a vector? Off list I was given the example: x=rand(3) diag(x) That works. But my x is a Nx1 matrix. I can't get it to work with matrices. Joris: The Numpy Example List looks good. I hadn't come across that before. From svetosch at gmx.net Fri Jun 23 11:07:32 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 17:07:32 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <449C03B4.2070708@gmx.net> Keith Goodman schrieb: > > Isn't rand(3,1) a vector? afaik not in numpy's terms, because two numbers are given for the dimensions -- I also struggle with that, because I'm a matrix guy like you ;-) > > Off list I was given the example: > x=rand(3) > diag(x) > > That works. But my x is a Nx1 matrix. I can't get it to work with matrices.
> ok, good point; with your x then diag(x.A[:,0]) should work, although it's not very pretty. Maybe there are better ways, but I agree it would be nice to be able to use matrices directly. -sven From aisaac at american.edu Fri Jun 23 11:50:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 11:50:13 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Sven Schreiber apparently wrote: >>>> help(n.diag) > Help on function diag in module numpy.lib.twodim_base: > diag(v, k=0) > returns the k-th diagonal if v is a array or returns a array > with v as the k-th diagonal if v is a vector. That is pretty damn obscure. Apparently Travis's new doc string did not survive? The Numpy book says: diag (v, k=0) Return the kth diagonal if v is a 2-d array, or returns an array with v as the kth diagonal if v is a 1-d array. That is better but not great. I think what is wanted is: diag (v, k=0) If v is a 2-d array: return a copy of the kth diagonal of v (as a 1-d array). If v is a 1-d array: return a 2-d array with a copy of v as the kth diagonal (and zeros elsewhere). fwiw, Alan Isaac PS As a response to the question, it might be worth noting the following. >>> y=N.zeros((5,5)) >>> values=N.arange(1,6) >>> indices=slice(0,25,6) >>> y.flat[indices]=values >>> y array([[1, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0], [0, 0, 0, 4, 0], [0, 0, 0, 0, 5]]) Generalizing we end up with the following (from pyGAUSS): def diagrv(x,v,copy=True): if copy: x = numpy.matrix( x, copy=True ) else: x = numpy.matrix( x, copy=False ) stride = 1 + x.shape[1] x.flat[ slice(0,x.size,stride) ] = v return x From aisaac at american.edu Fri Jun 23 12:03:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 12:03:13 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? 
In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Keith Goodman apparently wrote: > my x is a Nx1 matrix. I can't get it to work with matrices. Hmm. One would think that diag() would accept a flatiter object, but it does not. Shouldn't it?? But anyway, you can squeeze x: >>> x matrix([[ 0.46474951], [ 0.0688041 ], [ 0.61141623]]) >>> y=N.diag(N.squeeze(x.A)) >>> y array([[ 0.46474951, 0. , 0. ], [ 0. , 0.0688041 , 0. ], [ 0. , 0. , 0.61141623]]) hth, Alan Isaac From david.douard at logilab.fr Fri Jun 23 11:08:26 2006 From: david.douard at logilab.fr (David Douard) Date: Fri, 23 Jun 2006 17:08:26 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <20060623150826.GC1032@logilab.fr> On Fri, Jun 23, 2006 at 07:55:47AM -0700, Keith Goodman wrote: > On 6/23/06, Sven Schreiber wrote: > > Keith Goodman schrieb: > > > How do I make a NxN diagonal matrix with a Nx1 column vector x along > > > the diagonal? > > > > > > > >>> help(n.diag) > > Help on function diag in module numpy.lib.twodim_base: > > > > diag(v, k=0) > > returns the k-th diagonal if v is a array or returns a array > > with v as the k-th diagonal if v is a vector. > > I tried > > >> x = rand(3,1) > > >> diag(x) > array([ 0.87113114]) > > Isn't rand(3,1) a vector? No: In [13]: rand(3).shape Out[13]: (3,) In [14]: rand(3,1).shape Out[14]: (3, 1) A "vector" is an array with only one dimension. Here, you have a 3x1 "matrix"... > > Off list I was given the example: > x=rand(3) > diag(3) So you've got the solution! > That works. But my x is a Nx1 matrix. I can't get it to work with matrices. ??? Don't understand what you cannot make work, here. In [15]: x=rand(3,1) In [18]: diag(x[:,0]) Out[18]: array([[ 0.2287158 , 0. , 0. ], [ 0. , 0.50571537, 0. ], [ 0. , 0. , 0.72304857]]) What else would you like? David > Joris: The Numpy Example List looks good. I hadn't come across that before. 
> David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations D?veloppement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From aisaac at american.edu Fri Jun 23 12:21:44 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 12:21:44 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Alan G Isaac apparently wrote: > you can squeeze x True, but a silly solution. Alan From oliphant at ee.byu.edu Fri Jun 23 13:11:38 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 11:11:38 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <449C20CA.8090300@ee.byu.edu> Alan G Isaac wrote: >On Fri, 23 Jun 2006, Keith Goodman apparently wrote: > > >>my x is a Nx1 matrix. I can't get it to work with matrices. >> >> > >Hmm. One would think that diag() would accept a flatiter >object, but it does not. Shouldn't it?? > > It doesn't? try: a = rand(3,4) diag(a.flat).shape which prints (12,12) for me. 
Also: >>> a = ones((2,3)) >>> diag(a.flat) array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]]) From oliphant at ee.byu.edu Fri Jun 23 13:14:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 11:14:26 -0600 Subject: [Numpy-discussion] Matrix construction In-Reply-To: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: <449C2172.9070200@ee.byu.edu> Ed Schofield wrote: >On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > >>Actually I think using mat() (just an alias for the matrix >>constructor) is a bad way to do it. That mat() (and most others on >>that page) should probably be replaced with asmatrix() to avoid the >>copy. >> >> > >Perhaps the 'mat' function should become an alias for 'asmatrix'. >I've thought this for a while. Then code and documentation like this >page could remain short and simple without incurring the performance >penalty. > > I wanted this too a while back but when I tried it a lot of code broke because there were quite a few places (in SciPy and NumPy) that were using the fact that mat returned a copy of the array. -Travis From myeates at jpl.nasa.gov Fri Jun 23 13:56:21 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Fri, 23 Jun 2006 10:56:21 -0700 Subject: [Numpy-discussion] matlab translation Message-ID: <449C2B45.9030101@jpl.nasa.gov> This is probably in an FAQ somewhere but ..... Is there a tool out there for translating Matlab to Numeric? I found a 1999 posting by Travis asking the same thing! It doesn't seem like it would be all THAT difficult to write. 
Mathew From kwgoodman at gmail.com Fri Jun 23 14:01:28 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 11:01:28 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C2B45.9030101@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> Message-ID: On 6/23/06, Mathew Yeates wrote: > This is probably in an FAQ somewhere but ..... > > Is there a tool out there for translating Matlab to Numeric? I found a > 1999 posting by Travis asking the same thing! It doesn't seem like it > would be all THAT difficult to write. I'm porting by hand. It does not seem easy to me. And even if it were easy, both Matlab and NumPy are moving targets. So it would difficult to maintain. From aisaac at american.edu Fri Jun 23 14:29:11 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 14:29:11 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C20CA.8090300@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> Hmm. One would think that diag() would accept a flatiter >> object, but it does not. Shouldn't it?? On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > It doesn't? > try: > a = rand(3,4) > diag(a.flat).shape OK, but then try: >>> a=N.mat(a) >>> N.diag(a.flat).shape (1,) Why is a.flat not the same as a.A.flat? Alan Isaac From oliphant at ee.byu.edu Fri Jun 23 15:19:36 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 13:19:36 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> Message-ID: <449C3EC8.4000805@ee.byu.edu> Alan G Isaac wrote: >>Alan G Isaac wrote: >> >> >>>Hmm. One would think that diag() would accept a flatiter >>>object, but it does not. Shouldn't it?? >>> >>> > > >On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > > >>It doesn't? 
>>try: >>a = rand(3,4) >>diag(a.flat).shape >> >> > >OK, but then try: > > >>>>a=N.mat(a) >>>>N.diag(a.flat).shape >>>> >>>> >(1,) > >Why is a.flat not the same as a.A.flat? > > It is the same object except for the pointer to the underlying array. When asarray(a.flat) get's called it looks to the underlying array to get the sub-class and constructs that sub-class (and matrices can never be 1-d). Thus, it's a "feature" -Travis From myeates at jpl.nasa.gov Fri Jun 23 16:22:08 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Fri, 23 Jun 2006 13:22:08 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> Message-ID: <449C4D70.4080102@jpl.nasa.gov> > > I'm porting by hand. It does not seem easy to me. And even if it were Ah. Do I detect a dare? Could start first by using Octaves matlab parser. From kwgoodman at gmail.com Fri Jun 23 16:42:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 13:42:16 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: On 6/23/06, Mathew Yeates wrote: > > > > > I'm porting by hand. It does not seem easy to me. And even if it were > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. (Let me help you recruit people to do the work) "There is no way in the world that this will work!" From aisaac at american.edu Fri Jun 23 17:00:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 17:00:29 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C3EC8.4000805@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> Why is a.flat not the same as a.A.flat? 
On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > It is the same object except for the pointer to the > underlying array. When asarray(a.flat) get's called it > looks to the underlying array to get the sub-class and > constructs that sub-class (and matrices can never be 1-d). > Thus, it's a "feature" I doubt I will prove the only one to stumble over this. I can roughly understand why a.ravel() returns a matrix; but is there a good reason to forbid truly flattening the matrix? My instincts are that a flatiter object should not have this hidden "feature": flatiter objects should produce a consistent behavior in all settings, regardless of the underlying array. Anything else will prove too surprising. fwiw, Alan From oliphant at ee.byu.edu Fri Jun 23 17:01:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:01:09 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> Message-ID: <449C5695.8000106@ee.byu.edu> Alan G Isaac wrote: >>Alan G Isaac wrote: >> >> >>>Why is a.flat not the same as a.A.flat? >>> >>> > > >On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > > >>It is the same object except for the pointer to the >>underlying array. When asarray(a.flat) get's called it >>looks to the underlying array to get the sub-class and >>constructs that sub-class (and matrices can never be 1-d). >>Thus, it's a "feature" >> >> > > >I doubt I will prove the only one to stumble over this. > >I can roughly understand why a.ravel() returns a matrix; >but is there a good reason to forbid truly flattening the matrix? > > Because matrices are never 1-d. This is actually pretty consistent behavior. >My instincts are that a flatiter object should not have this >hidden "feature": flatiter objects should produce >a consistent behavior in all settings, regardless of the >underlying array. 
Anything else will prove too surprising. > > I think you are right that this is a bug, though. Because __array__() (which is where the behavior comes from) should return a base-class array (not a sub-class). -Travis From oliphant at ee.byu.edu Fri Jun 23 17:08:25 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:08:25 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C5695.8000106@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> <449C5695.8000106@ee.byu.edu> Message-ID: <449C5849.8010608@ee.byu.edu> Travis Oliphant wrote: >Alan G Isaac wrote: > > >> >> >I think you are right that this is a bug, though. Because __array__() >(which is where the behavior comes from) should return a base-class >array (not a sub-class). > > This is fixed in SVN. -Travis From mpfitz at calmail.berkeley.edu Fri Jun 23 17:15:58 2006 From: mpfitz at calmail.berkeley.edu (Michael Fitzgerald) Date: Fri, 23 Jun 2006 14:15:58 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: <200606212139.52511.fitz@astron.berkeley.edu> References: <200606212139.52511.fitz@astron.berkeley.edu> Message-ID: Ping! Is anyone else seeing this? It should be easy to test. If so, I think it's a bug. Best, Mike On Jun 21, 2006, at 9:39 PM, Michael Fitzgerald wrote: > > Hello all, > > I'm encountering some (relatively new?) behavior with masked arrays > that > strikes me as bizarre. Raising zero to a floating-point value is > triggering > a mask to be set, even though the result should be well-defined. > When using > fixed-point integers for powers, everything works as expected. I'm > seeing > this with both numarray and numpy. Take the case of 0**1, > illustrated below: > >>>> import numarray as n1 >>>> import numarray.ma as n1ma >>>> n1.array(0.)**1 > array(0.0) >>>> n1.array(0.)**1. > array(0.0) >>>> n1ma.array(0.)**1 > array(0.0) >>>> n1ma.array(0.)**1. 
> array(data = > [1.0000000200408773e+20], > mask = > 1, > fill_value=[ 1.00000002e+20]) > >>>> import numpy as n2 >>>> import numpy.core.ma as n2ma >>>> n2.array(0.)**1 > array(0.0) >>>> n2.array(0.)**1. > array(0.0) >>>> n2ma.array(0.)**1 > array(0.0) >>>> n2ma.array(0.)**1. > array(data = > 1e+20, > mask = > True, > fill_value=1e+20) > > I've been using python v2.3.5 & v.2.4.3, numarray v1.5.1, and numpy > v0.9.8, > and tested this on an x86 Debian box and a PPC OSX box. It may be > the case > that this issue has manifested in the past several months, as it's > causing a > new problem with some of my older code. Any thoughts? > > Thanks in advance, > Mike > > > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant at ee.byu.edu Fri Jun 23 17:19:13 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:19:13 -0600 Subject: [Numpy-discussion] flatiter and inequality comparison In-Reply-To: References: Message-ID: <449C5AD1.40201@ee.byu.edu> Alan G Isaac wrote: >I do not understand how to think about this: > > >>> x=arange(3).flat > >>> x > > >>> x>2 > True > >>> x>10 > True > >Why? (I realize this behaves like xrange, >so this may not be a numpy question, >but I do not understand that behavior either.) > > The flatiter object didn't have comparisons implemented so I guess it was using some default implementation. This is quite confusing and option 2 does make sense (an array of resulting comparisions is returned). 
Thus now: >> x=arange(3).flat >>> x>2 array([False, False, False], dtype=bool) >>> x>1 array([False, False, True], dtype=bool) >>> x>0 array([False, True, True], dtype=bool) -Travis From aisaac at american.edu Fri Jun 23 17:34:26 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 17:34:26 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C5695.8000106@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> <449C5695.8000106@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> I can roughly understand why a.ravel() returns a matrix; >> but is there a good reason to forbid truly flattening the matrix? On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > Because matrices are never 1-d. This is actually pretty > consistent behavior. Yes; that's why I can understand ravel. But I was referring to flat with the question. On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > I think you are right that this is a bug, though. Because > __array__() (which is where the behavior comes from) > should return a base-class array (not a sub-class). Thanks for fixing this!! Alan From oliphant at ee.byu.edu Fri Jun 23 18:18:11 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 16:18:11 -0600 Subject: [Numpy-discussion] Current copy In-Reply-To: <1151100330.449c65aaad7ad@astrosun2.astro.cornell.edu> References: <446F913A.3050207@ieee.org> <1151007625.449afb89201c7@astrosun2.astro.cornell.edu> <449B186B.9060500@ieee.org> <1151100330.449c65aaad7ad@astrosun2.astro.cornell.edu> Message-ID: <449C68A3.3040908@ee.byu.edu> Tom Loredo wrote: >Hi Travis, > > > >>I'm actually preparing the 1.0 release version, instead. >> >>Here's the latest, though... >> >> > >Thanks! > >I hate to be a nuisance about this, but what's the status >of the latest releases of numpy/scipy? Numpy 0.9.8 gives >a segfault on my FC3 box. > NumPy 0.9.8 should be fine except for one test. 
That tests gives a segfault because of a problem with Python that was fixed a while ago. As long as you don't create the new complex array scalars (i.e. using cdouble(10), complex128(3), etc.) you should be fine with all code running NumPy 0.9.8. Just delete the file site-packages/numpy/core/tests/test_scalarmath.py to get the tests to run. >I waited till today to try the >SVN version (per your scipy-dev post) and just installed >rev 2669. It passes the numpy tests--good!---but when I >followed it with an install of scipy-0.4.9, importing >scipy gives an error: > >import linsolve.umfpack -> failed: cannot import name ArrayType > >When you mentioned that the SVN numpy now worked with scipy, > > >was it only with SVN scipy? > > Yes. You need to re-compile scipy to work with SVN NumPy. Usually Ed Schofield has been helping release SciPy for each new NumPy release to make installation easier. >I'm asking all this, partly for my own info, but also because >last week at an astrostatistics conference I was given a long >slot of time where I gave a pretty hard sell of numpy/scipy. >I'm imagining all these people going home and installing the >latest releases and cursing me under their breaths! > >Is it just my FC3 box having issues with the current releases? >If not, I think something should be said on the download page >(e.g., maybe encourage people to use SVN for certain platforms). > > It's just the one test that's a problem (my system was more forgiving and didn't segfault so I didn't catch the problem). I doubt people are using the construct that is causing the problems much anway -- it's a subtle bug that was in Python when a C-type inherited from the Python complex type. I'd probably recommend using SVN NumPy/SciPy if you are comfortable with compilation because it's the quickest way to get bug-fixes. But, some like installed packages. That's why we are pushing to get 1.0 done as quickly as is reasonable. 
From ryanlists at gmail.com Fri Jun 23 19:45:55 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 23 Jun 2006 19:45:55 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: If people could post lines of Matlab code and proposed numpy could, we could try some regexp's that could do some of this. Ryan On 6/23/06, Keith Goodman wrote: > On 6/23/06, Mathew Yeates wrote: > > > > > > > > I'm porting by hand. It does not seem easy to me. And even if it were > > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. > > (Let me help you recruit people to do the work) > > "There is no way in the world that this will work!" > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From parejkoj at speakeasy.net Fri Jun 23 21:04:49 2006 From: parejkoj at speakeasy.net (John Parejko) Date: Fri, 23 Jun 2006 21:04:49 -0400 Subject: [Numpy-discussion] record iteration (convert a 0-d array, iteration over non-sequence) Message-ID: <449C8FB1.6070208@speakeasy.net> Greetings! I'm having trouble using records. I'm not sure whether to report this as a bug, but it certainly isn't a feature! I would like to be able to iterate over the individual rows in a record array, like so: >>> import numpy.core.records as rec >>> x=rec.array([[1,1.1,'1.0'],[2,2.2,'2.0']], formats='i4,f8,a4',names=['i','f','s']) >>> type(x[0]) >>> x[0].tolist() Traceback (most recent call last): File "", line 1, in ? 
ValueError: can't convert a 0-d array to a list >>> [i for i in x[0]] Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: iteration over non-sequence Am I going about this wrong? I would think I should be able to loop over an individual row in a record array, or turn it into a list. For the latter, I wrote my own thing, but tolist() should work by itself. Note that in rec2list, I need to use range(len(line)) because the list comprehension doesn't work correctly: def rec2list(line): """Turns a single element record array into a list.""" return [line[i] for i in xrange(len(line))] #... I will file a bug, unless someone tells me I'm going about this the wrong way. Thanks for your help John -- ************************* John Parejko Department of Physics and Astronomy Drexel University Philadelphia, PA ************************** From oliphant.travis at ieee.org Fri Jun 23 23:07:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 23 Jun 2006 21:07:45 -0600 Subject: [Numpy-discussion] record iteration (convert a 0-d array, iteration over non-sequence) In-Reply-To: <449C8FB1.6070208@speakeasy.net> References: <449C8FB1.6070208@speakeasy.net> Message-ID: <449CAC81.9030608@ieee.org> John Parejko wrote: > Greetings! I'm having trouble using records. I'm not sure whether to > report this as a bug, but it certainly isn't a feature! I would like to be > able to iterate over the individual rows in a record array, like so: > That is probably reasonable, but as yet is unsupported. You can do x[0].item() to get a tuple that can be iterated over. 
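The `.item()` route Travis suggests still works on current NumPy. A minimal sketch, rebuilding John's record array (with tuples rather than lists for the rows, which is what current NumPy expects; the string field comes back as bytes under Python 3):

```python
import numpy as np

# John's example data, as a record array.
x = np.rec.array([(1, 1.1, '1.0'), (2, 2.2, '2.0')],
                 formats='i4,f8,a4', names=['i', 'f', 's'])

# .item() converts one row (a 0-d record scalar) into a plain Python tuple,
# which can then be iterated or turned into a list directly.
row = x[0].item()
values = [v for v in row]
```

This replaces the hand-written `rec2list` helper: once the row is a tuple, `list(row)` and iteration behave as expected.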
-Travis From gruben at bigpond.net.au Fri Jun 23 23:43:52 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 24 Jun 2006 13:43:52 +1000 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: <449CB4F8.9030305@bigpond.net.au> One possible starting point for this would be Chris Stawarz's i2py translator which attempts to do this for IDL . It might be possible to build on this by getting it working for current numpy. The production rules for MATLAB might be gleaned from Octave. Gary R. Mathew Yeates wrote: >> I'm porting by hand. It does not seem easy to me. And even if it were > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. From robert.kern at gmail.com Sat Jun 24 00:03:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 23:03:56 -0500 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: Keith Goodman wrote: >> I'm porting by hand. It does not seem easy to me. And even if it were Mathew Yeates wrote: > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. Let's just say that anyone coming to this list saying something like, "It doesn't seem like it would be all THAT difficult to write," gets an automatic, "Show me," from me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Sat Jun 24 00:11:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 23:11:14 -0500 Subject: [Numpy-discussion] Moving this mailing list to scipy.org Message-ID: Thanks to Sourceforge's new "feature" of ads on the bottom of all list emails, it has been suggested that we move this mailing list to scipy.org. I've gotten some feedback from several of you already, all in favor of moving the mailing list from Sourceforge to scipy.org. However, I know there are plenty more of you out there. I wanted to move this topic up to the top level to make sure people see this. If you care whether it moves or if it stays, please email me *offlist* stating your preference. If by Wednesday, June 28th, the response is still as positive as it has been, then we'll start moving the list. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From vinicius.lobosco at paperplat.com Sat Jun 24 04:56:11 2006 From: vinicius.lobosco at paperplat.com (Vinicius Lobosco) Date: Sat, 24 Jun 2006 10:56:11 +0200 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Let's just let those who want to try to do that and give our support? I would be happy if I could some parts of my old matlab programs translated to Scipy. On 6/24/06, Robert Kern wrote: > > Keith Goodman wrote: > >> I'm porting by hand. It does not seem easy to me. And even if it were > > Mathew Yeates wrote: > > Ah. Do I detect a dare? Could start first by using Octaves matlab > parser. 
> > Let's just say that anyone coming to this list saying something like, "It > doesn't seem like it would be all THAT difficult to write," gets an > automatic, > "Show me," from me. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- --------------------------------- Vinicius Lobosco, PhD www.paperplat.com +46 8 612 7803 +46 73 925 8476 Bj?rnn?sv?gen 21 SE-113 47 Stockholm, Sweden -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat Jun 24 05:05:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 04:05:56 -0500 Subject: [Numpy-discussion] matlab translation In-Reply-To: <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: Vinicius Lobosco wrote: > Let's just let those who want to try to do that and give our support? I > would be happy if I could some parts of my old matlab programs > translated to Scipy. I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging Mathew to work towards that end. Sheesh. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Sat Jun 24 10:41:06 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 24 Jun 2006 10:41:06 -0400 Subject: [Numpy-discussion] flatiter and inequality comparison In-Reply-To: <449C5AD1.40201@ee.byu.edu> References: <449C5AD1.40201@ee.byu.edu> Message-ID: On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > option 2 does make sense (an array of resulting comparisions is returned). > Thus now: > >> x=arange(3).flat > >>> x>2 > array([False, False, False], dtype=bool) Thanks!! Alan From mtreiber at gmail.com Sat Jun 24 12:58:21 2006 From: mtreiber at gmail.com (Mark Treiber) Date: Sat, 24 Jun 2006 12:58:21 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: <27e04e910606240958v789c8701geb96eca97608fb5@mail.gmail.com> A couple of months ago I started something similar but unfortunately it has since stagnated. Its located at pym.python-hosting.com. With the exception of a commit a few weeks ago I haven't touched it for 4 months. That being said I havn't completly abandoned it and the basic foundation is there, all that remains is most of the language rules. I left it halfway through implementing language precedence according to http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_prog/f0-38155.html. Mark. On 6/24/06, Robert Kern wrote: > > Vinicius Lobosco wrote: > > Let's just let those who want to try to do that and give our support? I > > would be happy if I could some parts of my old matlab programs > > translated to Scipy. > > I do believe that, "Show me," is an *encouragement*. I am explicitly > encouraging > Mathew to work towards that end. Sheesh. 
> > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Sat Jun 24 13:32:04 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sat, 24 Jun 2006 10:32:04 -0700 Subject: [Numpy-discussion] How do I seed the radom number generator? In-Reply-To: References: Message-ID: On 6/22/06, Robert Kern wrote: > Keith Goodman wrote: > > How do I seed rand and randn? > > If you can, please use the .rand() and .randn() methods on a RandomState object > which you can initialize with whatever seed you like. > > In [1]: import numpy as np > rs > In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) > > In [3]: rs.rand(5) > Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) Using the same seed sometimes gives different results: from numpy import random def rtest(): rs = random.RandomState([11,21,699,1]) a = rs.rand(100,1) b = rs.randn(100,1) return sum(a + b) >> mytest.rtest() array([ 41.11776129]) >> mytest.rtest() array([ 40.16631018]) >> numpy.__version__ '0.9.7.2416' I ran the test about 20 times before I got the 40.166 result. 
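The non-determinism Keith reports here was a genuine bug in that development version; on current NumPy, constructing two `RandomState` objects from the same seed yields identical streams from both `rand()` and `randn()`. A minimal reproducibility check, reusing the seed from Keith's snippet:

```python
import numpy as np

# Two independent generators seeded identically: their streams must match.
rs1 = np.random.RandomState([11, 21, 699, 1])
rs2 = np.random.RandomState([11, 21, 699, 1])

a1, b1 = rs1.rand(100, 1), rs1.randn(100, 1)
a2, b2 = rs2.rand(100, 1), rs2.randn(100, 1)

assert (a1 == a2).all() and (b1 == b2).all()
```

Because both generators draw from the same Mersenne Twister state, equality here is exact, not merely approximate.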
From efiring at hawaii.edu Sat Jun 24 15:30:06 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 09:30:06 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? Message-ID: <449D92BE.3030900@hawaii.edu> It seems that the logical operators || and &&, corresponding to logical_or and logical_and are missing; one can do z = logical_and(x,y) but not z = x && y Is there an inherent reason, or is this a bug? z = (x == y) works, and a comment in umathmodule.c.src suggests that && and || should also: /**begin repeat #kind=greater, greater_equal, less, less_equal, equal, not_equal, logical_and, logical_or, bitwise_and, bitwise_or, bitwise_xor# #OP=>, >=, <, <=, ==, !=, &&, ||, &, |, ^# **/ My version is '0.9.9.2584'. Eric From pgmdevlist at mailcan.com Sat Jun 24 16:12:05 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Sat, 24 Jun 2006 16:12:05 -0400 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: References: <200606212139.52511.fitz@astron.berkeley.edu> Message-ID: <200606241612.07559.pgmdevlist@mailcan.com> Michael, > Is anyone else seeing this? It should be easy to test. If so, I > think it's a bug. Yeah, I see that as well. In MA.power(a,b), a temporary mask is created, True for values a<=0. (check L1577 of the sources, `md = make_mask(umath.less_equal (fa, 0), flag=1)`). The combination of this temp and the initial mask defines the final mask. This condition could probably be relaxed to `md = make_mask(umath.less(fa, 0), flag=1)` That way, the a=0 elements wouldn't be masked, and you'd get the proper result. I haven't really time to double-check/create a patch, though. 
Meanwhile, Michael, you could just modify your numpy/core/ma.py accordingly. From robert.kern at gmail.com Sat Jun 24 16:20:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:20:43 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449D92BE.3030900@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> Message-ID: Eric Firing wrote: > It seems that the logical operators || and &&, corresponding to > logical_or and logical_and are missing; one can do > > z = logical_and(x,y) > > but not > > z = x && y > > Is there an inherent reason, or is this a bug? Python does not have a && operator. It has an "and" keyword, but that cannot be overridden. If you know x and y to be boolean arrays, & and | work fine. > z = (x == y) > > works, and a comment in umathmodule.c.src suggests that && and || should > also: > > /**begin repeat > > #kind=greater, greater_equal, less, less_equal, equal, not_equal, > logical_and, logical_or, bitwise_and, bitwise_or, bitwise_xor# > #OP=>, >=, <, <=, ==, !=, &&, ||, &, |, ^# > **/ Those operators are the C versions that will be put in the appropriate places in the generated code. That is not a comment for documentation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 16:34:35 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 10:34:35 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> Message-ID: <449DA1DB.8000902@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>It seems that the logical operators || and &&, corresponding to >>logical_or and logical_and are missing; one can do >> >>z = logical_and(x,y) >> >>but not >> >>z = x && y >> >>Is there an inherent reason, or is this a bug? 
> > > Python does not have a && operator. It has an "and" keyword, but that cannot be > overridden. If you know x and y to be boolean arrays, & and | work fine. Out of curiosity, is there a simple explanation as to why "and" cannot be overridden but operators like "&" can? Is it a fundamental distinction between operators and keywords? In any case, it sounds like we are indeed stuck with an unfortunate wart on numpy, unless some changes in Python can be made. Maybe for Python3000... The NumPy for Matlab users wiki is misleading in this area; I will try to fix it. Eric From robert.kern at gmail.com Sat Jun 24 16:43:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:43:58 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DA1DB.8000902@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: Eric Firing wrote: > Robert Kern wrote: >> Eric Firing wrote: >> >>> It seems that the logical operators || and &&, corresponding to >>> logical_or and logical_and are missing; one can do >>> >>> z = logical_and(x,y) >>> >>> but not >>> >>> z = x && y >>> >>> Is there an inherent reason, or is this a bug? >> >> Python does not have a && operator. It has an "and" keyword, but that cannot be >> overridden. If you know x and y to be boolean arrays, & and | work fine. > > Out of curiosity, is there a simple explanation as to why "and" cannot > be overridden but operators like "&" can? Is it a fundamental > distinction between operators and keywords? Sort of. "and" and "or" short-circuit, that is they stop evaluating as soon as the right value to return is unambiguous. In [1]: def f(): ...: print "Shouldn't be here." ...: ...: In [2]: False and f() Out[2]: False In [3]: True or f() Out[3]: True Consequently, they must yield True and False only. > In any case, it sounds like we are indeed stuck with an unfortunate wart > on numpy, unless some changes in Python can be made. 
Maybe for > Python3000... > > The NumPy for Matlab users wiki is misleading in this area; I will try > to fix it. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Jun 24 16:56:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:56:05 -0500 Subject: [Numpy-discussion] How do I seed the radom number generator? In-Reply-To: References: Message-ID: Keith Goodman wrote: > Using the same seed sometimes gives different results: > > from numpy import random > def rtest(): > rs = random.RandomState([11,21,699,1]) > a = rs.rand(100,1) > b = rs.randn(100,1) > return sum(a + b) > >>> mytest.rtest() > array([ 41.11776129]) > >>> mytest.rtest() > array([ 40.16631018]) Fixed in SVN. Thank you. http://projects.scipy.org/scipy/numpy/ticket/155 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 16:57:19 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 10:57:19 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: <449DA72F.6060805@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>Robert Kern wrote: >> >>>Eric Firing wrote: >>> >>> >>>>It seems that the logical operators || and &&, corresponding to >>>>logical_or and logical_and are missing; one can do >>>> >>>>z = logical_and(x,y) >>>> >>>>but not >>>> >>>>z = x && y >>>> >>>>Is there an inherent reason, or is this a bug? >>> >>>Python does not have a && operator. It has an "and" keyword, but that cannot be >>>overridden. 
If you know x and y to be boolean arrays, & and | work fine. >> >>Out of curiosity, is there a simple explanation as to why "and" cannot >>be overridden but operators like "&" can? Is it a fundamental >>distinction between operators and keywords? > > > Sort of. "and" and "or" short-circuit, that is they stop evaluating as soon as > the right value to return is unambiguous. > > In [1]: def f(): > ...: print "Shouldn't be here." > ...: > ...: > > In [2]: False and f() > Out[2]: False > > In [3]: True or f() > Out[3]: True > > Consequently, they must yield True and False only. That makes sense, and implies that the real solution would be the introduction of operators && and || into Python, or a facility that would allow extensions to add operators. I guess it would be a matter of having hooks into the parser. I have no idea whether either of these is a reasonable goal--but it certainly would be a big plus for Numpy. Eric From robert.kern at gmail.com Sat Jun 24 17:32:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 16:32:16 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DA72F.6060805@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> Message-ID: Eric Firing wrote: > That makes sense, and implies that the real solution would be the > introduction of operators && and || into Python, or a facility that > would allow extensions to add operators. I guess it would be a matter > of having hooks into the parser. I have no idea whether either of these > is a reasonable goal--but it certainly would be a big plus for Numpy. I don't really see how. We already have the & and | operators. The only difference between them and the && and || operators would be that the latter would automatically coerce to boolean arrays. But you can do that explicitly, now. 
a.astype(bool) | b.astype(bool) Of course, it's highly likely that you are applying & and | to arrays that are already boolean. Consequently, I don't see a real need for more operators. But if you'd like to play around with the grammar: http://www.fiber-space.de/EasyExtend/doc/EE.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 19:08:00 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 13:08:00 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> Message-ID: <449DC5D0.9060704@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>That makes sense, and implies that the real solution would be the >>introduction of operators && and || into Python, or a facility that >>would allow extensions to add operators. I guess it would be a matter >>of having hooks into the parser. I have no idea whether either of these >>is a reasonable goal--but it certainly would be a big plus for Numpy. > > > I don't really see how. We already have the & and | operators. The only > difference between them and the && and || operators would be that the latter > would automatically coerce to boolean arrays. But you can do that explicitly, now. > > a.astype(bool) | b.astype(bool) > Another difference pointed out in the Wiki is precedence, which requires one to be more careful about parentheses when using the bitwise operators. This arises because although the bitwise operators effectively do the right thing, given boolean arguments, there really is a difference between & and &&--that is why C, for example, has both. 
Using & when one means && is a hack that obscures the meaning of the code, and using logical_and is clear but cluttered--a significant step away from the goal of having code be clear, concise and readable. I suspect that many other people will trip over the lack of && in the same way that I have, and will similarly consider it an irritant that we work around because we have to, not because it is good. > Of course, it's highly likely that you are applying & and | to arrays that are > already boolean. Consequently, I don't see a real need for more operators. > > But if you'd like to play around with the grammar: > > http://www.fiber-space.de/EasyExtend/doc/EE.html > Interesting, thanks--but I will back off now. Eric From aisaac at american.edu Sat Jun 24 20:38:39 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 24 Jun 2006 20:38:39 -0400 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DC5D0.9060704@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> <449DC5D0.9060704@hawaii.edu> Message-ID: On Sat, 24 Jun 2006, Eric Firing apparently wrote: > I suspect that many other people will trip over the lack > of && in the same way that I have, and will similarly > consider it an irritant that we work around because we > have to, not because it is good. I agree with this. In addition, turning to & when && is wanted will likely cause occasional stumbles over operator precedence. (At least I've been bitten that way.) But I do not see this changing unless Python grants the ability to define new operators, in which case I'm sure the wish lists will come out ... 
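[The precedence stumble Alan mentions is easy to reproduce; a small sketch with invented values, using boolean comparisons on numpy arrays:]

```python
import numpy as np

a = np.array([1, 2, 0])
b = np.array([0, 3, 3])

# `&` binds tighter than the comparison operators, so parentheses
# are required to get the elementwise logical AND one intends:
intended = (a > 0) & (b > 2)

# Without them the expression parses as `a > (0 & b)`, which is a
# different (and silently wrong) computation:
surprise = a > 0 & b
```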
Cheers, Alan Isaac From karol.langner at kn.pl Sun Jun 25 13:38:40 2006 From: karol.langner at kn.pl (Karol Langner) Date: Sun, 25 Jun 2006 19:38:40 +0200 Subject: [Numpy-discussion] basearray Message-ID: <200606251938.40507.karol.langner@kn.pl> Dear all, Some of you might be aware that a project has been granted to me for this year's Google Summer of Code, which aims at preparing a base multidimensional array type for Python. While I had a late start at it, I would like to go through with the project. The focus is on preparing a minimal type, that basically only defines how memory is allocated for the array, and which can be used by other, more sophisticated types. Later during the project, the type may be enhanced, depending on how using it in practice (also part of the project) works out. Wiki page about the project: http://scipy.org/BaseArray SVN repository: http://svn.scipy.org/svn/PEP/ In order to make this a potential success, I definitely need feedback from all of you out there interested in pushing such a base type towards Python core. So any comments and opinions are welcome! I will keep you informed on my progress and ask about things that may need consensus (although I'm not sure which lists will be the most interested in this). Please note that I am still in the phase of completing the minimal type, so the svn repository does not contain a working example, yet. Regards, Karol Langner -- written by Karol Langner Sun Jun 25 19:18:45 CEST 2006 From fperez.net at gmail.com Sun Jun 25 14:27:39 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 25 Jun 2006 12:27:39 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. Sorry for not replying before, I wanted a more complete picture before answering.
> 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? The day this email came in, I had just started to look into porting our major research code. I actually did the work 2 weeks ago, and it went remarkably well. It took a single (marathon) day, about 14 hours of solid work, to go through the old codebase and convert it. This project had a mix of legacy Fortran wrapped via f2py, hand-written C extensions using Numeric, a fair bit of weave.inline() and pure python. It uses matplotlib, PyX and Mayavi for various visualization tasks. There are some 40k loc in the Fortran sources (2/3 of that auto-generated in python from Mathematica computations), and about 13k loc in the C and python sources. This codebase is heavily unit-tested, which was critical for the port. For this kind of effort, unittests make an enormous difference, as they guide you directly to all the problematic spots. Without unittests, this kind of port would have been a nightmare, and I would have never known whether things were actually finished or not. Most of my changes had to do with explicit uses of 'typecode=' which became dtype, and uses of .flat, which used to return a normal array and is now an iterator. I haven't benchmarked things right away, because I expect the numpy-based code to take quite a hit. In this code, I've heavily abused arrays for very trivial 2 and 3-element arithmetic operations, but that means that I create literally millions of extremely small arrays. Even with Numeric, this overhead was already measurable, and I imagine it will get worse with numpy. But since this was silly anyway, and I need to use these little arrays as dictionary keys, instead of doing lots of tuple(array()) all the time, I'm using David Cooke's Vector as a template for a hand-written mini-array class that will do exactly what I need with as little overhead as possible. 
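[The two porting changes Fernando singles out — `typecode=` becoming `dtype=`, and `.flat` now returning an iterator-like view rather than a plain array — can be sketched in current numpy spelling, with the old Numeric forms shown in comments:]

```python
import numpy as np

# Numeric:  zeros((3,), typecode='d')   -- the keyword became dtype=
a = np.zeros((3,), dtype='d')

# Numeric's .flat returned an ordinary 1-D array; numpy's is an
# iterator, so materialize it explicitly when an array is needed:
flat = np.ravel(a)          # or a.flatten() for a guaranteed copy
```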
If for any reason you do want to see actual benchmarks, I can try to run some with the codebases immediately before and after the Numeric->numpy change and report back. > 2) Will you transition within the next 6 months? (if you answered No to #1) That's it: by now we've moved all of our code and it doesn't really work with Numeric anymore, so we're committed :) Again, many thanks for the major improvements that numpy brings! Cheers, f From fperez.net at gmail.com Sun Jun 25 14:55:35 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 25 Jun 2006 12:55:35 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/30/06, Travis Oliphant wrote: > 4) Please provide any suggestions for improving NumPy. Well, if I can beg for one thing, it would be fixing dot(): http://projects.scipy.org/scipy/numpy/ticket/156 This bug is currently stalling us pretty badly, since dot() is at the core of everything we do. While the codebase I alluded to in my previous message is fine, a project that sits on top of it is blocked from moving on due to this particular problem. If it's a problem on our side, I'll gladly correct it, but it does seem like a bug to me (esp. with Stefan's test of r2651 which passes). If there's any extra info that you need from me, by all means let me know and I'll be happy to provide it. If you have a feel for where the problem may be but don't have time to fix it right now, I can look into it myself, if you can point me in the right direction. Cheers, f From ndarray at mac.com Sun Jun 25 16:22:02 2006 From: ndarray at mac.com (Sasha) Date: Sun, 25 Jun 2006 16:22:02 -0400 Subject: [Numpy-discussion] logical_and operator, &&, is missing?
In-Reply-To: <449DA1DB.8000902@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: On 6/24/06, Eric Firing wrote: > Out of curiosity, is there a simple explanation as to why "and" cannot > be overridden but operators like "&" can? Is it a fundamental > distinction between operators and keywords? > There is no fundamental reason. In fact overloadable boolean operators were proposed for python:
From mpfitz at berkeley.edu Sun Jun 25 21:07:55 2006 From: mpfitz at berkeley.edu (Michael Fitzgerald) Date: Sun, 25 Jun 2006 18:07:55 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: <200606241612.07559.pgmdevlist@mailcan.com> References: <200606212139.52511.fitz@astron.berkeley.edu> <200606241612.07559.pgmdevlist@mailcan.com> Message-ID: <200606251807.55956.mpfitz@berkeley.edu> On Saturday 24 June 2006 13:12, Pierre GM wrote: > I haven't really had time to double-check/create a patch, though. Meanwhile, > Michael, you could just modify your numpy/core/ma.py accordingly. Hi Pierre, Thank you for the fix. I checked it out and numpy now behaves correctly for 0**1. in masked arrays. Attached are the diffs for numpy (scipy.org SVN) and numarray (sf.net CVS).
Mike -------------- next part -------------- A non-text attachment was scrubbed... Name: numarray.diff Type: text/x-diff Size: 705 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.diff Type: text/x-diff Size: 506 bytes Desc: not available URL:
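[The behavior Michael confirms — 0**1. no longer coming out masked — can be checked with a quick sketch; shown here in the modern `numpy.ma` spelling, since the numpy/core/ma.py module discussed in the thread has since moved:]

```python
import numpy.ma as ma

a = ma.array([0.0, 4.0], mask=[False, False])
r = ma.power(a, 1.0)
# 0**1. is well defined, so the first element is an unmasked 0.0
# rather than being swallowed by an a <= 0 temporary mask.
```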
From chanley at stsci.edu Mon Jun 26 08:53:59 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 26 Jun 2006 08:53:59 -0400 Subject: [Numpy-discussion] numpy revision 2680 causes segfault on Solaris Message-ID: <449FD8E7.6030609@stsci.edu> Greetings, Numpy revision 2680 causes a segfault in the unit tests on the Solaris 8 OS. The unit tests fail at the following test: check_vecobject (numpy.core.tests.test_numeric.test_dot) Segmentation Fault (core dumped) I can try and isolate what in the test is failing.
What I can tell you now is that revision 2677 built and tested with no issues, so the suspect change was made to one of the following files:

U numpy/numpy/f2py/lib/typedecl_statements.py
U numpy/numpy/f2py/lib/block_statements.py
U numpy/numpy/f2py/lib/splitline.py
U numpy/numpy/f2py/lib/parsefortran.py
U numpy/numpy/f2py/lib/base_classes.py
U numpy/numpy/f2py/lib/readfortran.py
U numpy/numpy/f2py/lib/statements.py
U numpy/numpy/core/src/arrayobject.c
U numpy/numpy/core/tests/test_numeric.py

Chris

From oliphant.travis at ieee.org Mon Jun 26 11:37:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 09:37:06 -0600 Subject: [Numpy-discussion] numpy revision 2680 causes segfault on Solaris In-Reply-To: <449FD8E7.6030609@stsci.edu> References: <449FD8E7.6030609@stsci.edu> Message-ID: <449FFF22.5000208@ieee.org> Christopher Hanley wrote:
> Greetings,
>
> Numpy revision 2680 causes a segfault in the unit tests on the Solaris 8
> OS. The unit tests fail at the following test:
>
> check_vecobject (numpy.core.tests.test_numeric.test_dot) Segmentation
> Fault (core dumped)
>
> I can try and isolate what in the test is failing.
>
> What I can tell you now is that revision 2677 built and tested with no
> issues so the suspect change was made to one of the following files:

This is a new test in 2680. It may be a problem that has been present but not tested against, or it may be a problem introduced with my recent changes to the copy and broadcast code (which are pretty fundamental pieces of code). If you can give a (gdb) traceback it would be helpful. Thanks, -Travis From kwgoodman at gmail.com Mon Jun 26 14:19:31 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 26 Jun 2006 11:19:31 -0700 Subject: [Numpy-discussion] Sour pickles Message-ID: Upgrading numpy and scipy from an April svn snapshot to yesterday's svn broke my code. To diagnose the problem I need to generate data in one version and load it in the other version.
I did a search on how to save data in Python and came up with pickle, or, actually, cPickle. But the format of the pickle is different between the two versions of numpy. I am unable to load in one version what I saved in the other version. When I pickle, for example, numpy.asmatrix([1,2,3]) as ASCII, numpy 0.9.9.2677 adds I1\n in two places compared with numpy 0.9.7.2416. Any advice? From oliphant.travis at ieee.org Mon Jun 26 17:32:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 15:32:09 -0600 Subject: [Numpy-discussion] Sour pickles In-Reply-To: References: Message-ID: <44A05259.8080101@ieee.org> Keith Goodman wrote:
> Upgrading numpy and scipy from an April svn snapshot to yesterday's
> svn broke my code.
>
> To diagnose the problem I need to generate data in one version and
> load it in the other version.
>
> I did a search on how to save data in Python and came up with pickle,
> or, actually, cPickle.
>
> But the format of the pickle is different between the two versions of
> numpy. I am unable to load in one version what I saved in the other
> version.
>
> When I pickle, for example, numpy.asmatrix([1,2,3]) as ASCII, numpy
> 0.9.9.2677 adds I1\n in two places compared with numpy 0.9.7.2416.
>
> Any advice?

The only thing that has changed in the Pickling code is the addition of a version number to the pickle. However, this means that 0.9.7.2416 will not be able to read 0.9.9.2677 pickles, but 0.9.9.2677 will be able to read 0.9.7.2416 pickles. This will be generally true. You can expect to read old Pickles with NumPy but not necessarily new ones with an old version. The other option is to use fromfile() and arr.tofile() which will read and write raw data. It's harder to use than pickle because no shape information is stored (it's just a raw binary file).
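[A minimal sketch of the tofile()/fromfile() round-trip described above, written against a modern NumPy API; the file name, values, and shape are illustrative. Because the file holds raw bytes only, the reader must re-supply the dtype and reapply the shape:]

```python
import os
import tempfile

import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

# tofile() writes raw bytes only: no dtype or shape header is stored.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
a.tofile(path)

# The reader must already know the dtype, and must reapply the shape.
b = np.fromfile(path, dtype=np.float64).reshape(2, 3)
assert (a == b).all()
```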
-Travis From oliphant.travis at ieee.org Mon Jun 26 23:00:18 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 21:00:18 -0600 Subject: [Numpy-discussion] Record-arrays can now hold objects Message-ID: <44A09F42.3070207@ieee.org> I've finished with basic support for arrays with object fields. Thus, for example, you can have a data-type that is [('date', 'O'), ('values', 'f8')]. Objects can be inside any layer of a nested field as well. The work must be considered alpha still because there may be locations in the code that I've forgotten about that do not take an appropriately abstract view of the data-type so as to support this. There is one unit-test for the capability, but more testing is needed. Use of these should be no slower than object arrays and should not change the speed of other arrays. -Travis From nwagner at iam.uni-stuttgart.de Tue Jun 27 03:37:29 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jun 2006 09:37:29 +0200 Subject: [Numpy-discussion] numpy.linalg.pinv has no docstring Message-ID: <44A0E039.3000403@iam.uni-stuttgart.de> Hi Travis, Just now I saw that you have fixed the failing test. You have used pinv (pseudo inverse). Please can you add a docstring to numpy.linalg.pinv. Thanks in advance. Nils

In [4]: numpy.linalg.pinv?
Type: function
Base Class:
String Form:
Namespace: Interactive
File: /usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py
Definition: numpy.linalg.pinv(a, rcond=1e-10)
Docstring:
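[Pending that docstring, a hedged illustration of what numpy.linalg.pinv computes, written against a modern NumPy API with made-up matrix values: the Moore-Penrose pseudo-inverse, whose product with the right-hand side gives the least-squares solution of an overdetermined system.]

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (values are made up).
a = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
b = np.array([3.0, 4.0, 5.0])

p = np.linalg.pinv(a)              # Moore-Penrose pseudo-inverse, shape (2, 3)

# Defining properties of the pseudo-inverse.
assert np.allclose(a @ p @ a, a)
assert np.allclose(p @ a @ p, p)

# p @ b is the least-squares solution of a @ x = b.
x = p @ b
assert np.allclose(x, np.linalg.lstsq(a, b, rcond=None)[0])
```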
From joris at ster.kuleuven.ac.be Tue Jun 27 05:11:11 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue, 27 Jun 2006 11:11:11 +0200 Subject: [Numpy-discussion] numpy.linalg.pinv has no docstring Message-ID: <1151399471.44a0f62f1c7d2@webmail.ster.kuleuven.be> On Tuesday 27 June 2006 09:37, Nils Wagner wrote: [NW]: Please can you add a docstring to numpy.linalg.pinv. In case it might help, I added an example to the Numpy Example List (http://www.scipy.org/Numpy_Example_List) which illustrates the use of pinv(). J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
From kwgoodman at gmail.com Tue Jun 27 12:45:57 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 09:45:57 -0700 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 Message-ID: This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683:

Numpy 0.9.9.2683

x = asmatrix(zeros((3,2), float))
y = asmatrix(rand(3,1))
y

matrix([[ 0.49865026],
        [ 0.82675808],
        [ 0.30285247]])

x[:,1] = y > 0.5
x

matrix([[ 0., 0.],
        [ 0., 0.],  <--- this should be one (?)
        [ 0., 0.]])

But it worked in 0.9.7.2416:

x = asmatrix(zeros((3,2), float))
y = asmatrix(rand(3,1))
y

matrix([[ 0.35444501],
        [ 0.7032141 ],
        [ 0.0918561 ]])

x[:,1] = y > 0.5
x

matrix([[ 0., 0.],
        [ 0., 1.],
        [ 0., 0.]])

From stefan at sun.ac.za Tue Jun 27 14:12:48 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 27 Jun 2006 20:12:48 +0200 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: References: Message-ID: <20060627181248.GA27056@mentat.za.net> On Tue, Jun 27, 2006 at 09:45:57AM -0700, Keith Goodman wrote:
> This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683:
>
> Numpy 0.9.9.2683
>
> x = asmatrix(zeros((3,2), float))
> y = asmatrix(rand(3,1))
> y
>
> matrix([[ 0.49865026],
>         [ 0.82675808],
>         [ 0.30285247]])
>
> x[:,1] = y > 0.5
> x
>
> matrix([[ 0., 0.],
>         [ 0., 0.],  <--- this should be one (?)
>         [ 0., 0.]])

With r2691 I see

In [7]: x = N.asmatrix(N.zeros((3,2)),float)

In [8]: y = N.asmatrix(N.rand(3,1))

In [12]: x[:,1] = y > 0.5

In [13]: x
Out[13]:
matrix([[ 0., 1.],
        [ 0., 1.],
        [ 0., 1.]])

Cheers
Stéfan

From oliphant.travis at ieee.org Tue Jun 27 14:19:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 12:19:59 -0600 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: References: Message-ID: <44A176CF.6080302@ieee.org> Keith Goodman wrote:
> This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683:
>
> Numpy 0.9.9.2683
>
> x = asmatrix(zeros((3,2), float))
> y = asmatrix(rand(3,1))
> y
>
> matrix([[ 0.49865026],
>         [ 0.82675808],
>         [ 0.30285247]])
>
> x[:,1] = y > 0.5
> x
>
> matrix([[ 0., 0.],
>         [ 0., 0.],  <--- this should be one (?)
>         [ 0., 0.]])

This looks like a bug, probably introduced recently during the re-write of the copying and casting code. Try checking out the revisions r2662 and r2660 to see which one works for you. I'll look into this problem.
-Travis From kwgoodman at gmail.com Tue Jun 27 14:44:06 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 11:44:06 -0700 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: <44A176CF.6080302@ieee.org> References: <44A176CF.6080302@ieee.org> Message-ID: On 6/27/06, Travis Oliphant wrote:
> Keith Goodman wrote:
> > This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683:
> >
> > Numpy 0.9.9.2683
> >
> > x = asmatrix(zeros((3,2), float))
> > y = asmatrix(rand(3,1))
> > y
> >
> > matrix([[ 0.49865026],
> >         [ 0.82675808],
> >         [ 0.30285247]])
> >
> > x[:,1] = y > 0.5
> > x
> >
> > matrix([[ 0., 0.],
> >         [ 0., 0.],  <--- this should be one (?)
> >         [ 0., 0.]])
>
> This looks like a bug, probably introduced recently during the re-write
> of the copying and casting code. Try checking out the revisions r2662
> and r2660 to see which one works for you. I'll look into this problem.

Thanks for the tip. I get some extra output with r2660. It prints out "Source array" and "Dest. array" like this:

>> x = asmatrix(zeros((3,2), float))
>> x
matrix([[ 0., 0.],
        [ 0., 0.],
        [ 0., 0.]])
>> y = asmatrix(rand(3,1))
>> y
matrix([[ 0.60117193],
        [ 0.43883293],
        [ 0.01633154]])
>> x[:,1] = y > 0.5
Source array = (3 1) Dest.
array = (1 3)

>> x
matrix([[ 0., 1.],
        [ 0., 0.],
        [ 0., 0.]])

From oliphant.travis at ieee.org Tue Jun 27 14:50:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 12:50:05 -0600 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: <20060627181248.GA27056@mentat.za.net> References: <20060627181248.GA27056@mentat.za.net> Message-ID: <44A17DDD.5090904@ieee.org> Stefan van der Walt wrote:
> On Tue, Jun 27, 2006 at 09:45:57AM -0700, Keith Goodman wrote:
>
> With r2691 I see
>
> In [7]: x = N.asmatrix(N.zeros((3,2)),float)
>
> In [8]: y = N.asmatrix(N.rand(3,1))
>
> In [12]: x[:,1] = y > 0.5
>
> In [13]: x
> Out[13]:
> matrix([[ 0., 1.],
>         [ 0., 1.],
>         [ 0., 1.]])

This was a bug, indirectly caused by the move to broadcasted copying and casting and the use of a matrix here. Previously the shapes didn't matter as long as the total size was the same. Internally, x[:,1] was creating a (1,3) matrix referencing the last column of x (call it xp), and y>0.5 was a (3,1) matrix (call it yp). The resulting casting code was repeatedly filling in x with (y>0.5), so the last entry of (y>0.5) was the one that resulted. Previously, this would have worked because the shape of the arrays didn't matter, but now they do. The real culprit was not allowing the matrix's "getitem" method to be called (which would have correctly obtained a (3,1) matrix from x[:,1] and thus avoided the strange result). Thus, in SVN, PyObject_GetItem is now used instead of the default ndarray getitem. The upshot is that this should now work, and there is now a unit-test to check for it. Thanks to Keith for exposing this bug.
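[The shape rule behind this fix can be sketched with plain ndarrays in a modern NumPy (values are made up): a basic slice x[:, 1] is a (3,) view while x[:, 1:2] keeps the (3, 1) column shape, and the right-hand side must broadcast to whichever target shape is used.]

```python
import numpy as np

x = np.zeros((3, 2))
y = np.array([[0.6], [0.4], [0.9]])    # shape (3, 1); values are made up

# A basic slice x[:, 1] is a (3,) view, so a (3, 1) source
# must be flattened before it fits.
x[:, 1] = (y > 0.5).ravel()
assert x[:, 1].tolist() == [1.0, 0.0, 1.0]

# x[:, 1:2] keeps the (3, 1) column shape, so the same source
# assigns directly, with no surprise broadcasting.
x[:, 1:2] = y > 0.5
assert x.tolist() == [[0.0, 1.0], [0.0, 0.0], [0.0, 1.0]]
```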
-Travis From geneing at gmail.com Tue Jun 27 13:52:08 2006 From: geneing at gmail.com (EI) Date: Tue, 27 Jun 2006 10:52:08 -0700 Subject: [Numpy-discussion] int64 weirdness Message-ID: Hi, I'm running Python 2.4 on 64-bit Linux and get strange results: (int(9))**2 is equal to 81, as it should, but (int64(9))**2 is equal to 0. Is it a bug or a feature? Eugene From oliphant.travis at ieee.org Tue Jun 27 15:38:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 13:38:29 -0600 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: References: Message-ID: <44A18935.1090702@ieee.org> EI wrote:
> Hi,
>
> I'm running Python 2.4 on 64-bit Linux and get strange results:
> (int(9))**2 is equal to 81, as it should, but
> (int64(9))**2 is equal to 0

Thanks for the bug-report. Please provide the version of NumPy you are using so we can track it down, or suggest an upgrade. -Travis From tim.hochberg at cox.net Tue Jun 27 16:08:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 27 Jun 2006 13:08:13 -0700 Subject: [Numpy-discussion] numexpr does sum. Message-ID: <44A1902D.3060909@cox.net> I managed to get basic support for sum and prod into numexpr. I need to tie up some loose ends, for instance only floats are currently supported, but these should be easy. To return to the recently posted multidimensional distance program, this now works:

expr = numexpr("sum((a - b)**2, axis=2)", [('a', float), ('b', float)])

def dist_numexpr(A, B):
    return sqrt(expr(A[:,newaxis], B[newaxis,:]))

It's also quite fast, although there's still room for improvement in the reduction code. Notice that it still needs to be in two parts since sum/prod needs to surround the rest of the expression. Note also that it does support the axis keyword, although currently only nonnegative values (or None). I plan to fix that at some point though.
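[For reference, the pairwise-distance expression in the numexpr example above can be sketched with plain NumPy broadcasting (inputs here are small made-up arrays). This version materializes the full (n, m, d) intermediate that numexpr avoids, which is where numexpr's speed advantage comes from:]

```python
import numpy as np

def pairwise_dist(A, B):
    # A: (n, d), B: (m, d) -> (n, m) matrix of Euclidean distances.
    # Broadcasting materializes an (n, m, d) difference array first.
    diff = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

A = np.array([[0.0, 0.0], [3.0, 4.0]])   # two points in the plane
B = np.array([[0.0, 0.0], [3.0, 0.0]])   # two reference points

D = pairwise_dist(A, B)
assert D.shape == (2, 2)
assert np.allclose(D, [[0.0, 3.0], [5.0, 4.0]])
```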
-tim From geneing at gmail.com Tue Jun 27 16:26:38 2006 From: geneing at gmail.com (EI) Date: Tue, 27 Jun 2006 13:26:38 -0700 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: <44A18935.1090702@ieee.org> References: <44A18935.1090702@ieee.org> Message-ID: numpy.__version__ says 0.9.8. Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64). Thanks Travis, Eugene On 6/27/06, Travis Oliphant wrote:
> EI wrote:
> > Hi,
> >
> > I'm running Python 2.4 on 64-bit Linux and get strange results:
> > (int(9))**2 is equal to 81, as it should, but
> > (int64(9))**2 is equal to 0
>
> Thanks for the bug-report. Please provide the version of NumPy you are
> using so we can track it down, or suggest an upgrade.
>
> -Travis

From strawman at astraw.com Tue Jun 27 18:37:21 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 27 Jun 2006 15:37:21 -0700 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: References: <44A18935.1090702@ieee.org> Message-ID: <44A1B321.2030102@astraw.com> An SVN checkout from a week or two ago looks OK on my amd64 machine:

astraw at hdmg:~$ python
Python 2.4.3 (#2, Apr 27 2006, 14:43:32)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'0.9.9.2631'
>>> numpy.int64(9)**2
81
>>>

EI wrote:
> numpy.__version__ says 0.9.8.
>
> Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64).
>
> Thanks Travis,
> Eugene
>
> On 6/27/06, *Travis Oliphant* < oliphant.travis at ieee.org > wrote:
>
> EI wrote:
> > Hi,
> >
> > I'm running Python 2.4 on 64-bit Linux and get strange results:
> > (int(9))**2 is equal to 81, as it should, but
> > (int64(9))**2 is equal to 0
>
> Thanks for the bug-report. Please provide the version of NumPy
> you are
> using so we can track it down, or suggest an upgrade.
> > -Travis > > >------------------------------------------------------------------------ > >Using Tomcat but need to do more? Need to support web services, security? >Get stuff done quickly with pre-integrated technology to make your job easier >Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > >------------------------------------------------------------------------ > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From dvp at MIT.EDU Tue Jun 27 19:38:44 2006 From: dvp at MIT.EDU (Dennis V. Perepelitsa) Date: Tue, 27 Jun 2006 19:38:44 -0400 (EDT) Subject: [Numpy-discussion] Numpy Benchmarking Message-ID: Hi, all. I've run some benchmarks comparing the performance of scipy, numpy, Numeric and numarray vs. MATLAB. There's also the beginnings of a benchmark framework included. The results are online at: http://web.mit.edu/jonas/www/bench/ They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz processor running Ubuntu Dapper Drake (6.06). All the languages/packages were built from source, and, in the case of numpy and scipy, linked to ATLAS. Each datapoint represents the arithmetic mean of ten trials. The results have some interesting implications. For example, numpy and scipy perform approximately the same except when it comes to matrix inversion, MATLAB beats out all the Python packages when it comes to matrix addition, and numpy seems to be beaten by its predecessors in some cases. Why is this the case? What are some other, additional benchmarks I could try? Dennis V. 
Perepelitsa MIT Class of 2008, Course VIII and XVIII-C Picower Institute for Learning and Memory From robert.kern at gmail.com Tue Jun 27 19:50:19 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 18:50:19 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: Dennis V. Perepelitsa wrote:
> Hi, all.
>
> I've run some benchmarks comparing the performance of scipy, numpy,
> Numeric and numarray vs. MATLAB. There's also the beginnings of a
> benchmark framework included. The results are online at:
>
> http://web.mit.edu/jonas/www/bench/
>
> They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz
> processor running Ubuntu Dapper Drake (6.06). All the languages/packages
> were built from source, and, in the case of numpy and scipy, linked to
> ATLAS. Each datapoint represents the arithmetic mean of ten trials.

I have two suggestions based on a two-second glance at this:

1) Use time.time() on UNIX and time.clock() on Windows. The usual snippet of code I use for this:

import sys
import time

if sys.platform == 'win32':
    now = time.clock
else:
    now = time.time

t1 = now()
...
t2 = now()

2) Never take the mean of repeated time trials. Take the minimum if you need to summarize a set of trials.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 19:55:53 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 16:55:53 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: On 6/27/06, Dennis V. Perepelitsa wrote:
> I've run some benchmarks comparing the performance of scipy, numpy,
> Numeric and numarray vs. MATLAB.

I enjoyed looking at the results. The most interesting result, for me, was that inverting a matrix is much faster in scipy than numpy. How can that be?
I would have guessed that numpy handled the inversion for scipy since numpy is the core. The two calls were scipy.linalg.inv(m) and numpy.linalg.inv(m). From oliphant.travis at ieee.org Tue Jun 27 20:24:23 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 18:24:23 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A1CC37.1050300@ieee.org> Dennis V. Perepelitsa wrote: > Hi, all. > > I've run some benchmarks comparing the performance of scipy, numpy, > Numeric and numarray vs. MATLAB. There's also the beginnings of a > benchmark framework included. The results are online at: > > http://web.mit.edu/jonas/www/bench/ > > They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz > processor running Ubuntu Dapper Drake (6.06). All the languages/packages > were built from source, and, in the case of numpy and scipy, linked to > ATLAS. Each datapoint represents the arithmetic mean of ten trials. > I agree with Robert that a minimum would be a better way to aggregate results. > The results have some interesting implications. For example, numpy and > scipy perform approximately the same except when it comes to matrix > inversion, MATLAB beats out all the Python packages when it comes to > matrix addition, and numpy seems to be beaten by its predecessors in some > cases. Why is this the case? In terms of creating zeros matrices, you are creating double-precision matrices for NumPy but only single-precision for Numeric and numarray. Try using numpy.float32 or 'f' when creating numpy arrays. The float is the Python type-object and represents a double-precision number. Or, if you are trying to use double precision for all cases (say for comparison to MATLAB) then use 'd' in numarray and Numeric. For comparing numpy with numarray and Numeric there are some benchmarks in the SVN tree of NumPy under benchmarks. 
These benchmarks have been helpful in the past in pointing out areas where we could improve the code of NumPy, so I'm grateful for your efforts. -Travis From oliphant.travis at ieee.org Tue Jun 27 20:26:46 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 18:26:46 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A1CCC6.9090506@ieee.org> Keith Goodman wrote:
> On 6/27/06, Dennis V. Perepelitsa wrote:
>
>> I've run some benchmarks comparing the performance of scipy, numpy,
>> Numeric and numarray vs. MATLAB.
>
> I enjoyed looking at the results.
>
> The most interesting result, for me, was that inverting a matrix is
> much faster in scipy than numpy. How can that be? I would have guessed
> that numpy handled the inversion for scipy since numpy is the core.
>
> The two calls were scipy.linalg.inv(m) and numpy.linalg.inv(m).

NumPy uses Numeric's old wrapper to LAPACK algorithms. SciPy uses its own f2py-generated wrapper (it doesn't rely on the NumPy wrapper). The numpy.dual library exists so you can use the SciPy calls if the person has SciPy installed or the NumPy ones otherwise. It exists precisely for the purpose of seamlessly taking advantage of algorithms/interfaces that exist in NumPy but are improved in SciPy. -Travis From kwgoodman at gmail.com Tue Jun 27 21:13:37 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 18:13:37 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A1CCC6.9090506@ieee.org> References: <44A1CCC6.9090506@ieee.org> Message-ID: On 6/27/06, Travis Oliphant wrote:
> The numpy.dual library exists so you can use the SciPy calls if the
> person has SciPy installed or the NumPy ones otherwise. It exists
> precisely for the purpose of seamlessly taking advantage of
> algorithms/interfaces that exist in NumPy but are improved in SciPy.

That sounds very interesting.
It would make a great addition to the scipy performance page: http://scipy.org/PerformanceTips So if I need any of the following functions I should import them from scipy or from numpy.dual? And all of them are faster? fft ifft fftn ifftn fft2 ifft2 norm inv svd solve det eig eigvals eigh eigvalsh lstsq pinv cholesky http://svn.scipy.org/svn/numpy/trunk/numpy/dual.py From kwgoodman at gmail.com Tue Jun 27 22:18:51 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 19:18:51 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <44A1CCC6.9090506@ieee.org> Message-ID: On 6/27/06, Keith Goodman wrote: > On 6/27/06, Travis Oliphant wrote: > > > The numpy.dual library exists so you can use the SciPy calls if the > > person has SciPy installed or the NumPy ones otherwise. It exists > > precisely for the purpose of seamlessly taking advantage of > > algorithms/interfaces that exist in NumPy but are improved in SciPy. > > That sounds very interesting. It would make a great addition to the > scipy performance page: > > http://scipy.org/PerformanceTips > > So if I need any of the following functions I should import them from > scipy or from numpy.dual? And all of them are faster? > > fft > ifft > fftn > ifftn > fft2 > ifft2 > norm > inv > svd > solve > det > eig > eigvals > eigh > eigvalsh > lstsq > pinv > cholesky > > http://svn.scipy.org/svn/numpy/trunk/numpy/dual.py > Scipy computes the inverse of a matrix faster than numpy (except if the dimensions of x are small). 
But scipy is slower than numpy for eigh (I only checked for symmetric positive definite matrices):

from numpy import asmatrix, randn
from numpy.linalg import eigh as Neigh
from scipy.linalg import eigh as Seigh
import time

def test(N):
    x = asmatrix(randn(N,2*N))
    x = x * x.T
    t0 = time.time()
    eigval, eigvec = Neigh(x)
    t1 = time.time()
    t2 = time.time()
    eigval, eigvec = Seigh(x)
    t3 = time.time()
    print 'NumPy:', t1-t0, 'seconds'
    print 'SciPy:', t3-t2, 'seconds'

>> dual.test(10)
NumPy: 0.000217914581299 seconds
SciPy: 0.000226020812988 seconds
>> dual.test(100)
NumPy: 0.0123109817505 seconds
SciPy: 0.0321230888367 seconds
>> dual.test(200)
NumPy: 0.0793058872223 seconds
SciPy: 0.082535982132 seconds
>> dual.test(500)
NumPy: 0.59161400795 seconds
SciPy: 1.41600894928 seconds

From robert.kern at gmail.com Tue Jun 27 22:40:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 21:40:46 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <44A1CCC6.9090506@ieee.org> Message-ID: Keith Goodman wrote: > Scipy computes the inverse of a matrix faster than numpy (except if > the dimensions of x are small). But scipy is slower than numpy for > eigh (I only checked for symmetric positive definite matrices): Looks like scipy uses *SYEV and numpy uses the better *SYEVD (the D stands for divide-and-conquer) routine. Both should probably be using the RRR versions (*SYEVR) if I'm reading the advice in the LUG correctly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 23:03:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 20:03:01 -0700 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? Message-ID: Isn't the Cholesky decomposition by convention an upper triangular matrix?
I noticed, by porting Octave code, that linalg.cholesky returns the lower triangular matrix. References: http://mathworld.wolfram.com/CholeskyDecomposition.html http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html From robert.kern at gmail.com Tue Jun 27 23:18:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 22:18:04 -0500 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: References: Message-ID: Keith Goodman wrote: > Isn't the Cholesky decomposition by convention an upper triangular > matrix? I noticed, by porting Octave code, that linalg.cholesky > returns the lower triangular matrix. > > References: > > http://mathworld.wolfram.com/CholeskyDecomposition.html > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html Lower: http://en.wikipedia.org/wiki/Cholesky_decomposition http://www.math-linux.com/spip.php?article43 http://planetmath.org/?op=getobj&from=objects&id=1287 http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 http://www.riskglossary.com/link/cholesky_factorization.htm http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf If anything, the convention appears to be lower-triangular. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 23:25:08 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 20:25:08 -0700 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: References: Message-ID: On 6/27/06, Robert Kern wrote: > Keith Goodman wrote: > > Isn't the Cholesky decomposition by convention an upper triangular > > matrix? I noticed, by porting Octave code, that linalg.cholesky > > returns the lower triangular matrix. 
> > > > References: > > > > http://mathworld.wolfram.com/CholeskyDecomposition.html > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html > > Lower: > http://en.wikipedia.org/wiki/Cholesky_decomposition > http://www.math-linux.com/spip.php?article43 > http://planetmath.org/?op=getobj&from=objects&id=1287 > http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 > http://www.riskglossary.com/link/cholesky_factorization.htm > http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf > > If anything, the convention appears to be lower-triangular. If you give me a second, I'll show you that the wikipedia supports my claim. OK. Lower it is. It will save me a transpose when I calculate joint random variables. From joris at ster.kuleuven.ac.be Wed Jun 28 04:14:41 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed, 28 Jun 2006 10:14:41 +0200 Subject: [Numpy-discussion] Numpy Benchmarking Message-ID: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Hi, [TO]: NumPy uses Numeric's old wrapper to lapack algorithms.
[TO]: [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the [TO]: NumPy wrapper). [TO]: [TO]: The numpy.dual library exists so you can use the SciPy calls if the [TO]: person has SciPy installed or the NumPy ones otherwise. It exists [TO]: precisely for the purpose of seamlessly taking advantage of [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. This strikes me as a little bit odd. Why not just provide the best-performing function to both SciPy and NumPy? Would NumPy be more difficult to install if the SciPy algorithm for inv() was incorporated? Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From robert.kern at gmail.com Wed Jun 28 04:22:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 03:22:28 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: joris at ster.kuleuven.ac.be wrote: > Hi, > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > [TO]: > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > [TO]: NumPy wrapper). > [TO]: > [TO]: The numpy.dual library exists so you can use the SciPy calls if the > [TO]: person has SciPy installed or the NumPy ones otherwise. It exists > [TO]: precisely for the purpose of seamlessly taking advantage of > [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. > > This strikes me as a little bit odd. Why not just provide the best-performing > function to both SciPy and NumPy? Would NumPy be more difficult to install > if the SciPy algorithm for inv() was incorporated? That's certainly the case for the FFT algorithms. Scipy wraps more (and more complicated) FFT libraries that are faster than FFTPACK. Most of the linalg functionality should probably be wrapping the same routines if an optimized LAPACK is available. 
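Whether a given installation is linked against an optimized library or the bundled lapack_lite can be checked from Python; a small sketch using numpy's build introspection (the exact output format varies across versions):

```python
import numpy as np

# show_config() prints which BLAS/LAPACK numpy was built against; a
# section mentioning ATLAS, OpenBLAS, or MKL indicates an optimized
# library rather than the bundled f2c'ed lapack_lite fallback.
np.show_config()

# SciPy, when present, reports its own build configuration the same way.
try:
    import scipy
    scipy.show_config()
except ImportError:
    pass
```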
However, changing the routine used in numpy in the absence of an optimized LAPACK would require reconstructing the f2c'ed lapack_lite library that we include with the numpy source. That hasn't been touched in so long that I would hesitate to do so. If you are willing to do the work and the testing to ensure that it still works everywhere, we'd probably accept the change. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wright at esrf.fr Wed Jun 28 04:55:36 2006 From: wright at esrf.fr (Jon Wright) Date: Wed, 28 Jun 2006 10:55:36 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <44A24408.9000305@esrf.fr> >>This strikes me as a little bit odd. Why not just provide the best-performing >>function to both SciPy and NumPy? Would NumPy be more difficult to install >>if the SciPy algorithm for inv() was incorporated? >> >> Having spent a few days recently trying out various different eigenvector routines in Lapack, I would have greatly appreciated having a choice of which one to use without having to create my own wrappers or compile atlas and lapack under windows (ouch). I noted that Numeric (24.2) seemed to be converting Float32 to double, meaning my problem no longer fits in memory, which was the motivation for the work. Poking around in the svn of numpy.linalg appears to find the same lapack routine as Numeric (dsyevd). Perhaps I'm missing something in the code logic? The divide and conquer (*evd) uses more memory than the (*ev), as well as a factor of 2 for float/double, hence my problem, and the reason why "best performing" is a hard choice. I thought Matlab takes a look at the matrix dimensions and problem before deciding what to do (eg: the \ operator).
Jon From arnd.baecker at web.de Wed Jun 28 05:16:09 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 28 Jun 2006 11:16:09 +0200 (CEST) Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A24408.9000305@esrf.fr> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> <44A24408.9000305@esrf.fr> Message-ID: Hi, On Wed, 28 Jun 2006, Jon Wright wrote: > > >>This strikes me as a little bit odd. Why not just provide the best-performing > >>function to both SciPy and NumPy? Would NumPy be more difficult to install > >>if the SciPy algorithm for inv() was incorporated? > >> > >> > Having spent a few days recently trying out various different > eigenvector routines in Lapack I would have greatly appreciated having a > choice of which one to use which routine are you trying to use? > from without having to create my own > wrappers, compiling atlas and lapack under windows (ouch). I noted that > Numeric (24.2) seemed to be converting Float32 to double meaning my > problem no longer fits in memory, which was the motivation for the work. > Poking around in the svn of numpy.linalg appears to find the same lapack > routine as Numeric (dsyevd). Perhaps I miss something in the code logic? if you can convince the code to get ssyevd instead of dsyevd it might do what you want. > The divide and conquer (*evd) uses more memory than the (*ev), as well > as a factor of 2 for float/double, hence my problem, and the reason why > "best performing" is a hard choice. I thought matlab has a look at the > matrix dimensions and problem before deciding what to do (eg: the \ > operator). Hmm, this is a hard choice, which might be better left in the hands of the knowledgeable user. (e.g., aren't the divide and conquer routines substantially faster?)
Best, Arnd From jensj at fysik.dtu.dk Wed Jun 28 06:44:05 2006 From: jensj at fysik.dtu.dk (=?ISO-8859-1?Q?Jens_J=F8rgen_Mortensen?=) Date: Wed, 28 Jun 2006 12:44:05 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A25D75.8060402@servfys.fysik.dtu.dk> Dennis V. Perepelitsa wrote: >Hi, all. > >I've run some benchmarks comparing the performance of scipy, numpy, >Numeric and numarray vs. MATLAB. There's also the beginnings of a >benchmark framework included. The results are online at: > > http://web.mit.edu/jonas/www/bench/ > > It's a little hard to see the curves for small matrix size, N. How about plotting the time divided by the theoretical number of operations - which would be N^2 or N^3. Jens Jørgen From filip at ftv.pl Wed Jun 28 07:00:31 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Wed, 28 Jun 2006 13:00:31 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A25D75.8060402@servfys.fysik.dtu.dk> References: <44A25D75.8060402@servfys.fysik.dtu.dk> Message-ID: <1918158814.20060628130031@gmail.com> Jens wrote: > Dennis V. Perepelitsa wrote: >>Hi, all. >> >>I've run some benchmarks comparing the performance of scipy, numpy, >>Numeric and numarray vs. MATLAB. There's also the beginnings of a >>benchmark framework included. The results are online at: >> >> http://web.mit.edu/jonas/www/bench/ >> >> > It's a little hard to see the curves for small matrix size, N. How > about plotting the time divided by the theoretical number of operations > - which would be N^2 or N^3. Or use some logarithmic scale (one or both axes) where applicable.
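Jens's normalization suggestion is easy to apply after the fact; the sketch below uses made-up timings purely to illustrate the idea (no measured results are implied):

```python
import math

# Hypothetical wall-clock times (seconds) for an O(N^3) operation.
sizes = [100, 200, 400]
times = [0.002, 0.015, 0.118]

# Dividing by N**3 flattens the asymptotic growth, so constant-factor
# overhead at small N becomes visible instead of being squashed to zero.
normalized = [t / n**3 for t, n in zip(times, sizes)]

# On a log-log plot the slope between points estimates the scaling
# exponent; close to 3 here, as expected for an N^3 algorithm.
slopes = [
    math.log(times[i + 1] / times[i]) / math.log(sizes[i + 1] / sizes[i])
    for i in range(len(sizes) - 1)
]
```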
fw From schut at sarvision.nl Wed Jun 28 10:03:55 2006 From: schut at sarvision.nl (Vincent Schut) Date: Wed, 28 Jun 2006 16:03:55 +0200 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: <44A1B321.2030102@astraw.com> References: <44A18935.1090702@ieee.org> <44A1B321.2030102@astraw.com> Message-ID: <44A28C4B.5080300@sarvision.nl> Andrew Straw wrote:
> An SVN checkout from a week or two ago looks OK on my amd64 machine:
>
> astraw at hdmg:~$ python
> Python 2.4.3 (#2, Apr 27 2006, 14:43:32)
> [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> >>> numpy.__version__
> '0.9.9.2631'
> >>> numpy.int64(9)**2
> 81
> >>>
Confirmed to be fixed on my gentoo amd64 machine, numpy svn of a couple of days ago:
>>> numpy.int64(9)**2
81
>>> numpy.__version__
'0.9.9.2665'
Cheers, Vincent.
> EI wrote:
>> numpy.__version__ says 0.9.8.
>>
>> Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64).
>>
>> Thanks Travis,
>> Eugene
>>
>> On 6/27/06, *Travis Oliphant* < oliphant.travis at ieee.org > wrote:
>>
>> EI wrote:
>> > Hi,
>> >
>> > I'm running python 2.4 on a 64bit linux and get strange results:
>> > (int(9))**2 is equal to 81, as it should, but
>> > (int64(9))**2 is equal to 0
>>
>> Thanks for the bug-report. Please provide the version of NumPy
>> you are using so we can track it down, or suggest an upgrade.
>>
>> -Travis
>>
>> ------------------------------------------------------------------------
>>
>> Using Tomcat but need to do more? Need to support web services, security?
>> Get stuff done quickly with pre-integrated technology to make your job easier
>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
>>
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From Glen.Mabey at swri.org Wed Jun 28 11:44:11 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Wed, 28 Jun 2006 10:44:11 -0500 Subject: [Numpy-discussion] fread codes versus numpy types Message-ID: <20060628154411.GE13024@bams.swri.edu> Hello, I see the following character codes defined in scipy (presumably) for use with scipy.io.fread():

In [20]:scipy.Complex
Out[20]:'D'
In [21]:scipy.Complex0
Out[21]:'D'
In [22]:scipy.Complex128
Out[22]:'G'
In [23]:scipy.Complex16
Out[23]:'F'
In [24]:scipy.Complex32
Out[24]:'F'
In [25]:scipy.Complex64
Out[25]:'D'
In [26]:scipy.Complex8
Out[26]:'F'

Then I see the following scalar types also defined:

In [27]:scipy.complex64
Out[27]:
In [28]:scipy.complex128
Out[28]:
In [29]:scipy.complex256
Out[29]:

which correspond to types that exist within the numpy module.
These names seem to conflict in that (unless I misunderstand what's going on) scipy.complex64 actually occupies 64 bits of data (a 32-bit float for each of {real, imag}) whereas scipy.Complex64 looks like it occupies 128 bits of data (a 64-bit double for each of {real, imag}). Is there something I'm missing, or is this a naming inconsistency? Glen From stefan at sun.ac.za Wed Jun 28 12:24:02 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 28 Jun 2006 18:24:02 +0200 Subject: [Numpy-discussion] matlab -> python translation Message-ID: <20060628162402.GA6089@mentat.za.net> Hi all, I recently saw discussions on the list regarding Matlab/Octave to Python translation. I brought this to John Eaton's attention (he is the original author of Octave) -- below is his response. Regards Stéfan ----- Forwarded message from "John W. Eaton" ----- From: "John W. Eaton" On 21-Jun-2006, Stefan van der Walt wrote: | I'd like to bring this thread under your attention, in case you want | to comment: | | http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3174978 Would you please pass along the following comments? Translating the syntax might not be too hard, but to have a really effective tool, you have to get all the details of the Matlab/Octave function calls the same as well. So would you do that by linking to Octave's run-time libraries as well? That could probably be made to work, but it would probably drag in a lot more code than some people would expect when they just want to translate and run a relatively small number of lines of Matlab code. Another semantic detail that would likely cause trouble is the (apparent) pass-by-value semantics of Matlab. How would you reconcile this with the mutable types of Python? Finally, I would encourage anyone who wants to work on a Matlab/Octave to Python translator using Octave's parser and run-time libraries to work on this in a way that can be integrated with Octave.
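Eaton's pass-by-value point is concrete: Matlab functions behave as if arguments were copied, while Python hands mutable objects to callees by reference. A minimal illustration (plain Python, no translator involved):

```python
import copy

def scale_inplace(a):
    a[0] = a[0] * 2   # mutates the caller's object
    return a

data = [1, 2, 3]
scale_inplace(data)
assert data[0] == 2   # caller sees the change -- Python semantics

# A translator emulating Matlab's copy semantics would need to insert
# explicit copies at function boundaries:
def scale_by_value(a):
    a = copy.copy(a)
    a[0] = a[0] * 2
    return a

data2 = [1, 2, 3]
result = scale_by_value(data2)
assert data2[0] == 1  # caller unaffected -- Matlab-style behavior
assert result[0] == 2
```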
Please consider discussing your ideas about this project on the maintainers at octave.org mailing list. Thanks, jwe ----- End forwarded message ----- From robert.kern at gmail.com Wed Jun 28 12:25:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 11:25:37 -0500 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: <20060628154411.GE13024@bams.swri.edu> References: <20060628154411.GE13024@bams.swri.edu> Message-ID: Glen W. Mabey wrote: > Hello, > > I see the following character codes defined in scipy (presumably) for > use with scipy.io.fread() : > > In [20]:scipy.Complex > Out[20]:'D' > > In [21]:scipy.Complex0 > Out[21]:'D' > > In [22]:scipy.Complex128 > Out[22]:'G' > > In [23]:scipy.Complex16 > Out[23]:'F' > > In [24]:scipy.Complex32 > Out[24]:'F' > > In [25]:scipy.Complex64 > Out[25]:'D' > > In [26]:scipy.Complex8 > Out[26]:'F' > > Then I see the following scalar types also defined: > > In [27]:scipy.complex64 > Out[27]: > > In [28]:scipy.complex128 > Out[28]: > > In [29]:scipy.complex256 > Out[29]: > > which correspond to types that exist within the numpy module. These > names seem to conflict in that (unless I misunderstand what's going on) > scipy.complex64 actually occupies 64 bits of data (a 32-bit float for > each of {real, imag}) whereas scipy.Complex64 looks like it occupies 128 > bits of data (a 64-bit double for each of {real, imag}). > > Is there something I'm missing, or is this a naming inconsistency? The Capitalized versions are actually old typecodes for backwards compatibility with Numeric. In recent development versions of numpy, they are no longer exposed except through the numpy.oldnumeric compatibility module. A decision was made for numpy to use the actual width of a type in its name instead of the width of its component parts (when it has parts). Code in scipy which still requires actual string typecodes is a bug.
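Robert's naming rule can be checked against modern numpy; the capitalized Numeric names are gone, but the width convention is visible in the dtypes themselves (a sketch using current numpy spellings):

```python
import numpy as np

# numpy names a complex type by its *total* width: complex64 holds two
# 32-bit floats, complex128 two 64-bit doubles. The old Numeric names
# (Complex64 == 'D') counted the width of each component part instead.
assert np.dtype(np.complex64).itemsize == 8     # 64 bits total
assert np.dtype(np.complex128).itemsize == 16   # 128 bits total
assert np.complex64(1 + 2j).real.dtype == np.float32

# The one-character typecodes survive: 'D' is still complex double.
assert np.dtype('D') == np.complex128
```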
Please report such cases on the Trac: http://projects.scipy.org/scipy/scipy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Wed Jun 28 12:42:42 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 28 Jun 2006 09:42:42 -0700 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> <20060621020020.GA6459@arbutus.physics.mcmaster.ca> <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> Message-ID: <44A2B182.3040704@noaa.gov> Erin Sheldon wrote: > OK, I have changed all the examples that used dtype=Float or > dtype=Int to float and int. They are also available as: numpy.float_ numpy.int_ -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From fperez.net at gmail.com Wed Jun 28 13:22:38 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 11:22:38 -0600 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: References: <20060628154411.GE13024@bams.swri.edu> Message-ID: On 6/28/06, Robert Kern wrote: > The Capitalized versions are actually old typecodes for backwards compatibility > with Numeric. In recent development versions of numpy, they are no longer > exposed except through the numpy.oldnumeric compatibility module. A decision was > made for numpy to use the actual width of a type in its name instead of the > width of its component parts (when it has parts). > > Code in scipy which still requires actual string typecodes is a bug. 
Please > report such cases on the Trac: > > http://projects.scipy.org/scipy/scipy Well, an easy way to make all those poke their ugly heads in a hurry would be to remove line 32 in scipy's init: longs[Lib]> grep -n oldnum *py __init__.py:31:import numpy.oldnumeric as _num __init__.py:32:from numpy.oldnumeric import * If we really want to push for the new api, I think it's fair to change those two lines by simply from numpy import oldnumeric so that scipy also exposes oldnumeric, and let all deprecated names be hidden there. I just tried this change: Index: __init__.py =================================================================== --- __init__.py (revision 2012) +++ __init__.py (working copy) @@ -29,9 +29,8 @@ # Import numpy symbols to scipy name space import numpy.oldnumeric as _num -from numpy.oldnumeric import * -del lib -del linalg +from numpy import oldnumeric + __all__ += _num.__all__ __doc__ += """ Contents and scipy's test suite still passes (modulo the test_cobyla thingie Nils is currently fixing, which is not related to this). Should I apply this patch, so we push the cleaned-up API even a bit harder? Cheers, f From kwgoodman at gmail.com Wed Jun 28 13:26:03 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 10:26:03 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 Message-ID: >> x = asmatrix(rand(3,2)) >> y = asmatrix(rand(3,1)) >> y matrix([[ 0.77952062], [ 0.97110465], [ 0.77450218]]) >> idx = where(y > 0.5)[0] >> idx matrix([[0, 1, 2]]) >> x[idx,:] matrix([[ 0.24837887, 0.52988253], [ 0.28661085, 0.43053076], [ 0.05360893, 0.22668509]]) So far everything works as it should. 
Now the problem: >> y[idx,:] --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in __getitem__(self, index) 120 121 def __getitem__(self, index): --> 122 out = N.ndarray.__getitem__(self, index) 123 # Need to swap if slice is on first index 124 retscal = False /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in __array_finalize__(self, obj) 116 self.shape = (1,1) 117 elif ndim == 1: --> 118 self.shape = (1,self.shape[0]) 119 return 120 ValueError: total size of new array must be unchanged And, on a related note, shouldn't this be a column vector? >> x[idx,0] matrix([[ 0.24837887, 0.28661085, 0.05360893]]) From pau.gargallo at gmail.com Wed Jun 28 13:40:35 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 28 Jun 2006 19:40:35 +0200 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: Message-ID: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> i don't know why 'where' is returning matrices. if you use: >>> idx = where(y.A > 0.5)[0] everything will work fine (I guess) pau On 6/28/06, Keith Goodman wrote: > >> x = asmatrix(rand(3,2)) > > >> y = asmatrix(rand(3,1)) > > >> y > > matrix([[ 0.77952062], > [ 0.97110465], > [ 0.77450218]]) > > >> idx = where(y > 0.5)[0] > > >> idx > matrix([[0, 1, 2]]) > > >> x[idx,:] > > matrix([[ 0.24837887, 0.52988253], > [ 0.28661085, 0.43053076], > [ 0.05360893, 0.22668509]]) > > So far everything works as it should. 
Now the problem: > > >> y[idx,:] > --------------------------------------------------------------------------- > exceptions.ValueError Traceback (most > recent call last) > > /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in > __getitem__(self, index) > 120 > 121 def __getitem__(self, index): > --> 122 out = N.ndarray.__getitem__(self, index) > 123 # Need to swap if slice is on first index > 124 retscal = False > > /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in > __array_finalize__(self, obj) > 116 self.shape = (1,1) > 117 elif ndim == 1: > --> 118 self.shape = (1,self.shape[0]) > 119 return > 120 > > ValueError: total size of new array must be unchanged > > > And, on a related note, shouldn't this be a column vector? > > >> x[idx,0] > matrix([[ 0.24837887, 0.28661085, 0.05360893]]) > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From fperez.net at gmail.com Wed Jun 28 13:51:45 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 11:51:45 -0600 Subject: [Numpy-discussion] Setuptools leftover junk Message-ID: Hi all, I recently noticed one of my in-house projects started leaving around .egg-info directories after I ran its setup.py, even though I don't use setuptools for anything at all. For now I just added an extra clean rule to my makefile and forgot about it, but it kind of annoyed me.
Today I looked at the temp directory where I've been making my numpy/scipy installs from SVN, and here's what I saw: longs[site-packages]> d /home/fperez/tmp/local/lib/python2.4/site-packages total 228 drwxr-xr-x 2 fperez 4096 2006-06-21 22:16 dateutil/ drwxr-xr-x 7 fperez 4096 2006-06-28 02:50 matplotlib/ drwxr-xr-x 13 fperez 4096 2006-06-28 02:38 numpy/ drwxr-xr-x 2 fperez 4096 2006-06-21 21:28 numpy-0.9.9.2660-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-22 21:29 numpy-0.9.9.2665-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 11:33 numpy-0.9.9.2674-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 15:08 numpy-0.9.9.2675-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-25 12:40 numpy-0.9.9.2677-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-26 23:32 numpy-0.9.9.2691-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-28 02:38 numpy-0.9.9.2696-py2.4.egg-info/ -rw-r--r-- 1 fperez 31 2006-03-18 20:11 pylab.py -rw-r--r-- 1 fperez 178 2006-06-24 13:29 pylab.pyc drwxr-xr-x 20 fperez 4096 2006-06-28 11:20 scipy/ drwxr-xr-x 2 fperez 4096 2006-06-21 21:36 scipy-0.5.0.1990-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-22 21:36 scipy-0.5.0.1998-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 15:15 scipy-0.5.0.1999-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-25 12:46 scipy-0.5.0.2000-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-26 23:37 scipy-0.5.0.2004-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-28 02:48 scipy-0.5.0.2012-py2.4.egg-info/ Is it really necessary to have all that setuptools junk left around, for those of us who aren't asking for it explicitly? My personal opinions on setuptools aside, I think it's just a sane practice not to create this kind of extra baggage unless explicitly requested. I scoured my home directory for any .file which might be triggering this inadvertently, but I can't seem to find any, so I'm going to guess this is somehow being caused by numpy's own setup.
If it's my own mistake, I'll be happy to be shown how to coexist peacefully with setuptools. Since this also affects user code (I think via f2py or something internal to numpy, since all I'm calling is f2py in my code), I really think it would be nice to clean it. Opinions? f From kwgoodman at gmail.com Wed Jun 28 14:04:09 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 11:04:09 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: On 6/28/06, Pau Gargallo wrote: > i don't know why 'where' is returning matrices. > if you use: > > >>> idx = where(y.A > 0.5)[0] > > everything will work fine (I guess) What about the second issue? Is this expected behavior? >> idx array([0, 1, 2]) >> y matrix([[ 0.63731308], [ 0.34282663], [ 0.53366791]]) >> y[idx] matrix([[ 0.63731308], [ 0.34282663], [ 0.53366791]]) >> y[idx,0] matrix([[ 0.63731308, 0.34282663, 0.53366791]]) I was expecting a column vector. From pau.gargallo at gmail.com Wed Jun 28 14:25:14 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 28 Jun 2006 20:25:14 +0200 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: <6ef8f3380606281125sd8ba54ci5f71d67fd24b7246@mail.gmail.com> On 6/28/06, Keith Goodman wrote: > On 6/28/06, Pau Gargallo wrote: > > i don't know why 'where' is returning matrices. > > if you use: > > > > >>> idx = where(y.A > 0.5)[0] > > > > everything will work fine (I guess) > > What about the second issue? Is this expected behavior? 
> > >> idx > array([0, 1, 2]) > > >> y > > matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > >> y[idx] > > matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > >> y[idx,0] > matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > > I was expecting a column vector. > I have never played with matrices, but if y was an array, y[idx,0] will be an array of the same shape of idx. That is a 1d array. I guess that when y is a matrix, this 1d array is converted to a matrix and become a row vector. I don't know if this behaviour is wanted :-( cheers, pau From robert.kern at gmail.com Wed Jun 28 14:32:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 13:32:15 -0500 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: Fernando Perez wrote: > Is it really necessary to have all that setuptools junk left around, > for those of us who aren't asking for it explicitly? My personal > opinions on setuptools aside, I think it's just a sane practice not to > create this kind of extra baggage unless explicitly requested. > > I scoured my home directory for any .file which might be triggering > this inadvertedly, but I can't seem to find any, so I'm going to guess > this is somehow being caused by numpy's own setup. If it's my own > mistake, I'll be happy to be shown how to coexist peacefully with > setuptools. > > Since this also affects user code (I think via f2py or something > internal to numpy, since all I'm calling is f2py in my code), I really > think it would be nice to clean it. numpy.distutils uses setuptools if it is importable in order to make sure that the two don't stomp on each other. It's probable that that test could probably be done with Andrew Straw's method: if 'setuptools' in sys.modules: have_setuptools = True from setuptools import setup as old_setup else: have_setuptools = False from distutils.core import setup as old_setup Tested patches welcome. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Wed Jun 28 14:42:04 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:42:04 -0400 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A24408.9000305@esrf.fr> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> <44A24408.9000305@esrf.fr> Message-ID: <20060628144204.382a1678@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 10:55:36 +0200 Jon Wright wrote: > Poking around in the svn of numpy.linalg appears to find the same lapack > routine as Numeric (dsyevd). Perhaps I miss something in the code logic? It's actually *exactly* the same as the latest Numeric :-) It hasn't been touched much. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Wed Jun 28 14:47:32 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Jun 2006 12:47:32 -0600 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: <44A2CEC4.1050706@ee.byu.edu> Keith Goodman wrote: >On 6/28/06, Pau Gargallo wrote: > > >>i don't know why 'where' is returning matrices. >>if you use: >> >> >> >>>>>idx = where(y.A > 0.5)[0] >>>>> >>>>> >>everything will work fine (I guess) >> >> > >What about the second issue? Is this expected behavior? > > > >>>idx >>> >>> >array([0, 1, 2]) > > > >>>y >>> >>> > >matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > > >>>y[idx] >>> >>> > >matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > > >>>y[idx,0] >>> >>> >matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > >I was expecting a column vector. 
> > > This should be better behaved now in SVN. Thanks for the reports. -Travis From cookedm at physics.mcmaster.ca Wed Jun 28 14:48:31 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:48:31 -0400 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <20060628144831.474c8059@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 03:22:28 -0500 Robert Kern wrote: > joris at ster.kuleuven.ac.be wrote: > > Hi, > > > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > > [TO]: > > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > > [TO]: NumPy wrapper). > > [TO]: > > [TO]: The numpy.dual library exists so you can use the SciPy calls if > > the [TO]: person has SciPy installed or the NumPy ones otherwise. It > > exists [TO]: precisely for the purpose of seamlessly taking advantage of > > [TO]: algorithms/interfaces that exist in NumPy but are improved in > > SciPy. > > > > This strikes me as a little bit odd. Why not just provide the > > best-performing function to both SciPy and NumPy? Would NumPy be more > > difficult to install if the SciPy algorithm for inv() was incorporated? > > That's certainly the case for the FFT algorithms. Scipy wraps more (and > more complicated) FFT libraries that are faster than FFTPACK. > > Most of the linalg functionality should probably be wrapping the same > routines if an optimized LAPACK is available. However, changing the routine > used in numpy in the absence of an optimized LAPACK would require > reconstructing the f2c'ed lapack_lite library that we include with the > numpy source. That hasn't been touched in so long that I would hesitate to > do so. If you are willing to do the work and the testing to ensure that it > still works everywhere, we'd probably accept the change. Annoying to redo (as tracking down *good* LAPACK sources is a chore), but hardly as bad as it was. 
I added the scripts I used to generate lapack_lite.c et al to numpy/linalg/lapack_lite in svn. These are the same things that were used to generate those files in recent versions of Numeric (which numpy uses). You only need to specify the top-level routines; the scripts find the dependencies. I'd suggest using the source for LAPACK that Debian uses; the maintainer, Camm Maguire, has done a bunch of work adding patches to fix routines that have been floating around. For instance, eigenvalues works better than before (a lot fewer segfaults). With this, the hard part is writing the wrapper routines. If someone wants to wrap extra routines, I can do the lapack_lite generation for them. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:00:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:00:05 -0600 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> References: <20060628154411.GE13024@bams.swri.edu> <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > On Wed, 28 Jun 2006 11:22:38 -0600 > "Fernando Perez" wrote: > > Should I apply this patch, so we push the cleaned-up API even a bit harder? > > Yes please. I think all the modules that still use the oldnumeric names > actually import numpy.oldnumeric themselves. Done, r2017.
I also committed the simple one-liner: Index: weave/inline_tools.py =================================================================== --- weave/inline_tools.py (revision 2016) +++ weave/inline_tools.py (working copy) @@ -402,7 +402,7 @@ def compile_function(code,arg_names,local_dict,global_dict, module_dir, compiler='', - verbose = 0, + verbose = 1, support_code = None, headers = [], customize = None, from a discussion we had a few weeks ago, I'd forgotten to put it in. I did it as a separate patch (r 2018) so it can be reverted separately if anyone objects. Cheers, f From cookedm at physics.mcmaster.ca Wed Jun 28 15:10:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 15:10:40 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:32:15 -0500 Robert Kern wrote: > Fernando Perez wrote: > > > Is it really necessary to have all that setuptools junk left around, > > for those of us who aren't asking for it explicitly? My personal > > opinions on setuptools aside, I think it's just a sane practice not to > > create this kind of extra baggage unless explicitly requested. > > > > I scoured my home directory for any .file which might be triggering > > this inadvertedly, but I can't seem to find any, so I'm going to guess > > this is somehow being caused by numpy's own setup. If it's my own > > mistake, I'll be happy to be shown how to coexist peacefully with > > setuptools. > > > > Since this also affects user code (I think via f2py or something > > internal to numpy, since all I'm calling is f2py in my code), I really > > think it would be nice to clean it. > > numpy.distutils uses setuptools if it is importable in order to make sure > that the two don't stomp on each other. 
It's probable that that test could > probably be done with Andrew Straw's method: > > if 'setuptools' in sys.modules: > have_setuptools = True > from setuptools import setup as old_setup > else: > have_setuptools = False > from distutils.core import setup as old_setup > > Tested patches welcome. Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' with an import of setuptools (it's based on the one used in matplotlib). easy_install still works, also. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:11:36 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:11:36 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: On 6/28/06, Robert Kern wrote: > numpy.distutils uses setuptools if it is importable in order to make sure that > the two don't stomp on each other. It's probable that that test could probably > be done with Andrew Straw's method: > > if 'setuptools' in sys.modules: > have_setuptools = True > from setuptools import setup as old_setup > else: > have_setuptools = False > from distutils.core import setup as old_setup > > Tested patches welcome. Well, tested as in 'I wrote a unittest for installation', no. But tested as in 'I built numpy, scipy, matplotlib, and my f2py-using code', yes. They all build/install fine, and no more *egg-info directories are strewn around. 
If this satisfies your 'tested patches', the code is: Index: numpy/distutils/core.py =================================================================== --- numpy/distutils/core.py (revision 2698) +++ numpy/distutils/core.py (working copy) @@ -1,16 +1,30 @@ - import sys from distutils.core import * -try: - from setuptools import setup as old_setup - # very old setuptools don't have this - from setuptools.command import bdist_egg - # easy_install imports math, it may be picked up from cwd - from setuptools.command import develop, easy_install - have_setuptools = 1 -except ImportError: + +# Don't pull setuptools in unless the user explicitly requests by having it +# imported (Andrew's trick). +have_setuptools = 'setuptools' in sys.modules + +# Even if setuptools is in, do a few things carefully to make sure the version +# is recent enough to have everything we need before assuming we can proceed +# using setuptools throughout +if have_setuptools: + try: + from setuptools import setup as old_setup + # very old setuptools don't have this + from setuptools.command import bdist_egg + # easy_install imports math, it may be picked up from cwd + from setuptools.command import develop, easy_install + except ImportError: + # Any failure here is probably due to an old or broken setuptools + # leftover in sys.modules, so treat it as if it simply weren't + # available. + have_setuptools = False + +# If setuptools was flagged as unavailable due to import problems, we need the +# basic distutils support +if not have_setuptools: from distutils.core import setup as old_setup - have_setuptools = 0 from numpy.distutils.extension import Extension from numpy.distutils.command import config May I? keeping-the-world-setuptools-free-one-script-at-a-time-ly yours, f From cookedm at physics.mcmaster.ca Wed Jun 28 14:53:56 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 28 Jun 2006 14:53:56 -0400 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: References: <20060628154411.GE13024@bams.swri.edu> Message-ID: <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 11:22:38 -0600 "Fernando Perez" wrote: > On 6/28/06, Robert Kern wrote: > > > The Capitalized versions are actually old typecodes for backwards > > compatibility with Numeric. In recent development versions of numpy, they > > are no longer exposed except through the numpy.oldnumeric compatibility > > module. A decision was made for numpy to use the actual width of a type > > in its name instead of the width of its component parts (when it has > > parts). > > > > Code in scipy which still requires actual string typecodes is a bug. > > Please report such cases on the Trac: > > > > http://projects.scipy.org/scipy/scipy > > Well, an easy way to make all those poke their ugly heads in a hurry > would be to remove line 32 in scipy's init: > > longs[Lib]> grep -n oldnum *py > __init__.py:31:import numpy.oldnumeric as _num > __init__.py:32:from numpy.oldnumeric import * > > > If we really want to push for the new api, I think it's fair to change > those two lines by simply > > from numpy import oldnumeric > > so that scipy also exposes oldnumeric, and let all deprecated names be > hidden there. > > I just tried this change: > > Index: __init__.py > =================================================================== > --- __init__.py (revision 2012) > +++ __init__.py (working copy) > @@ -29,9 +29,8 @@ > > # Import numpy symbols to scipy name space > import numpy.oldnumeric as _num > -from numpy.oldnumeric import * > -del lib > -del linalg > +from numpy import oldnumeric > + > __all__ += _num.__all__ > __doc__ += """ > Contents > > > and scipy's test suite still passes (modulo the test_cobyla thingie > Nils is currently fixing, which is not related to this). 
> > Should I apply this patch, so we push the cleaned-up API even a bit harder? Yes please. I think all the modules that still use the oldnumeric names actually import numpy.oldnumeric themselves. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:18:35 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:18:35 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > with an import of setuptools (it's based on the one used in matplotlib). > > easy_install still works, also. You beat me to it :) However, your patch has slightly different semantics from mine: if bdist_egg fails to import, the rest of setuptools is still used. I don't know if that's safe. My patch would consider /any/ failure in the setuptools imports as a complete setuptools failure, and revert out to basic distutils. Let me know if you want me to put in my code instead, here's a patch from my code against current svn (after your patch), in case you'd like to try it out. Cheers, f Index: core.py =================================================================== --- core.py (revision 2701) +++ core.py (working copy) @@ -1,20 +1,30 @@ - import sys from distutils.core import * -if 'setuptools' in sys.modules: - have_setuptools = True - from setuptools import setup as old_setup - # easy_install imports math, it may be picked up from cwd - from setuptools.command import develop, easy_install +# Don't pull setuptools in unless the user explicitly requests by having it +# imported (Andrew's trick). 
+have_setuptools = 'setuptools' in sys.modules + +# Even if setuptools is in, do a few things carefully to make sure the version +# is recent enough to have everything we need before assuming we can proceed +# using setuptools throughout +if have_setuptools: try: - # very old versions of setuptools don't have this + from setuptools import setup as old_setup + # very old setuptools don't have this from setuptools.command import bdist_egg + # easy_install imports math, it may be picked up from cwd + from setuptools.command import develop, easy_install except ImportError: + # Any failure here is probably due to an old or broken setuptools + # leftover in sys.modules, so treat it as if it simply weren't + # available. have_setuptools = False -else: + +# If setuptools was flagged as unavailable due to import problems, we need the +# basic distutils support +if not have_setuptools: from distutils.core import setup as old_setup - have_setuptools = False from numpy.distutils.extension import Extension from numpy.distutils.command import config From oliphant at ee.byu.edu Wed Jun 28 14:52:34 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Jun 2006 12:52:34 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <44A2CFF2.7030201@ee.byu.edu> joris at ster.kuleuven.ac.be wrote: >Hi, > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > [TO]: > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > [TO]: NumPy wrapper). > [TO]: > [TO]: The numpy.dual library exists so you can use the SciPy calls if the > [TO]: person has SciPy installed or the NumPy ones otherwise. It exists > [TO]: precisely for the purpose of seamlessly taking advantage of > [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. > >This strikes me as a little bit odd. 
Why not just provide the best-performing >function to both SciPy and NumPy? Would NumPy be more difficult to install >if the SciPy algorithm for inv() was incorporated? > > The main issue is that SciPy can take advantage of and use Fortran code, but NumPy cannot as it must build without a Fortran compiler. This is the primary driver of the current duality. -Travis From kwgoodman at gmail.com Wed Jun 28 15:23:36 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 12:23:36 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <44A2CEC4.1050706@ee.byu.edu> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> <44A2CEC4.1050706@ee.byu.edu> Message-ID: On 6/28/06, Travis Oliphant wrote: > Keith Goodman wrote: > > >On 6/28/06, Pau Gargallo wrote: > > > > > >>i don't know why 'where' is returning matrices. > >>if you use: > >> > >> > >> > >>>>>idx = where(y.A > 0.5)[0] > >>>>> > >>>>> > >>everything will work fine (I guess) > >> > >> > > > >What about the second issue? Is this expected behavior? > > > > > > > >>>idx > >>> > >>> > >array([0, 1, 2]) > > > > > > > >>>y > >>> > >>> > > > >matrix([[ 0.63731308], > > [ 0.34282663], > > [ 0.53366791]]) > > > > > > > >>>y[idx] > >>> > >>> > > > >matrix([[ 0.63731308], > > [ 0.34282663], > > [ 0.53366791]]) > > > > > > > >>>y[idx,0] > >>> > >>> > >matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > > > >I was expecting a column vector. > > > > > > > This should be better behaved now in SVN. Thanks for the reports. Now numpy can do y[y > 0.5] instead of y[where(y.A > 0.5)[0]] where, for example, y = asmatrix(rand(3,1)). I know I'm pushing my luck here. But one more feature would make this perfect. Currently y[y>0.5,:] returns the first column even if y has more than one column. Returning all columns would make it perfect.
Example: >> y matrix([[ 0.38828902, 0.91649964], [ 0.41074001, 0.7105919 ], [ 0.15460833, 0.16746956]]) >> y[y[:,1]>0.5,:] matrix([[ 0.38828902], [ 0.41074001]]) A better answer for matrix users would be: >> y[(0,1),:] matrix([[ 0.38828902, 0.91649964], [ 0.41074001, 0.7105919 ]]) From cookedm at physics.mcmaster.ca Wed Jun 28 15:37:34 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 15:37:34 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> Message-ID: <20060628153734.7597800c@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:18:35 -0600 "Fernando Perez" wrote: > On 6/28/06, David M. Cooke wrote: > > > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > > with an import of setuptools (it's based on the one used in matplotlib). > > > > easy_install still works, also. > > You beat me to it :) > > However, your patch has slightly different semantics from mine: if > bdist_egg fails to import, the rest of setuptools is still used. I > don't know if that's safe. My patch would consider /any/ failure in > the setuptools imports as a complete setuptools failure, and revert > out to basic distutils. Note that your patch will still import setuptools if the import of bdist_egg fails. And you can't get around that by putting the bdist_egg import first, as that imports setuptools first. (I think bdist_egg was added sometime after 0.5; if your version of setuptools is *that* old, you'd be better off not having it installed.) The use of setuptools by numpy.distutils is in two forms: explicitly (controlled by this patch of code), and implicitly (because setuptools goes and patches distutils). Disabling the explicit use won't actually fix your problem with the 'install' command leaving .egg_info directories (which, incidentally, are pretty small), as that's done by the implicit behaviour. [Really, distutils sucks. 
I think (besides refactoring) it needs its API documented better, or at least good conventions on where to hook into. setuptools and numpy.distutils do their best, but there's only so much you can do before everything goes fragile and breaks in unexpected ways.] With the "if 'setuptools' in sys.modules" test, if you *are* using setuptools, you must have explicitly requested that, and so I think a failure on import of setuptools shouldn't be silently passed over. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:46:07 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:46:07 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060628153734.7597800c@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > On Wed, 28 Jun 2006 13:18:35 -0600 > "Fernando Perez" wrote: > > > On 6/28/06, David M. Cooke wrote: > > > > > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > > > with an import of setuptools (it's based on the one used in matplotlib). > > > > > > easy_install still works, also. > > > > You beat me to it :) > > > > However, your patch has slightly different semantics from mine: if > > bdist_egg fails to import, the rest of setuptools is still used. I > > don't know if that's safe. My patch would consider /any/ failure in > > the setuptools imports as a complete setuptools failure, and revert > > out to basic distutils. > > Note that your patch will still import setuptools if the import of bdist_egg > fails. And you can't get around that by putting the bdist_egg import first, > as that imports setuptools first.
Well, but that's still done after the 'if "setuptools" in sys.modules' check, just like yours. The only difference is that my patch treats a later failure as a complete failure, and reverts out to old_setup being pulled out of plain distutils. > (I think bdist_egg was added sometime after 0.5; if your version of > setuptools is *that* old, you'd be better off not having it installed.) Then it's probably fine to leave it either way, as /in practice/ the two approaches will produce the same results. > The use of setuptools by numpy.distutils is in two forms: explicitly > (controlled by this patch of code), and implicitly (because setuptools goes > and patches distutils). Disabling the explicit use won't actually fix your > problem with the 'install' command leaving .egg_info directories (which, > incidentally, are pretty small), as that's done by the implicit behaviour. It's not their size that matters, it's just that I don't like tools littering around with stuff I didn't ask for. Yes, I like my code directories tidy ;) > [Really, distutils sucks. I think (besides refactoring) it needs it's API > documented better, or least good conventions on where to hook into. > setuptools and numpy.distutils do their best, but there's only so much you > can do before everything goes fragile and breaks in unexpected ways.] I do hate distutils, having fought it for a long time. Its piss-poor dependency checking is one of its /many/ annoyances. For a package with as long a compile time as scipy, it really sucks not to be able to just modify random source files and trust that it will really recompile what's needed (no more, no less). Anyway, thanks for heeding this one. Hopefully one day somebody will do the (painful) work of replacing distutils with something that actually works (perhaps using scons for the build engine...) 
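[Editor's note: the fallback semantics David and Fernando are debating boil down to the pattern below — an illustrative sketch only, not the actual numpy.distutils code; `pick_setup` is a made-up name.]

```python
import sys

def pick_setup():
    """Return a setup() callable, using setuptools only when the caller
    has already imported it, and treating *any* failure in the
    setuptools import chain as "fall back to plain distutils"."""
    if 'setuptools' in sys.modules:
        try:
            from setuptools import setup
            # bdist_egg lives in a submodule and can fail to import even
            # when setuptools itself imported fine (very old versions).
            from setuptools.command import bdist_egg  # noqa: F401
            return setup
        except ImportError:
            pass  # Fernando's semantics: any failure means no setuptools
    from distutils.core import setup
    return setup
```

Fernando's patch corresponds to the `except ImportError: pass` arm swallowing everything; David's point is that reaching this code at all means the user explicitly imported setuptools, so silence may not be what they want.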
Until then, we'll trod along with massively unnecessary rebuilds :) Cheers, f From kwgoodman at gmail.com Wed Jun 28 14:55:31 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 11:55:31 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <44A2CEC4.1050706@ee.byu.edu> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> <44A2CEC4.1050706@ee.byu.edu> Message-ID: On 6/28/06, Travis Oliphant wrote: > This should be better behaved now in SVN. Thanks for the reports. I'm impressed by how quickly features are added and bugs are fixed. And by how quick it is to install a new version of numpy. Thank you. From myeates at jpl.nasa.gov Wed Jun 28 16:15:04 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 28 Jun 2006 13:15:04 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: <44A2E348.3040604@jpl.nasa.gov> I've been looking at a project called ANTLR (www.antlr.org) to do the translation. Unfortunately, although I may have a Matlab grammar, it would still be a lot of work to use ANTLR. I'll look at some of the links that have been posted. Mathew Robert Kern wrote: > Vinicius Lobosco wrote: > >> Let's just let those who want to try to do that and give our support? I >> would be happy if I could some parts of my old matlab programs >> translated to Scipy. >> > > I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging > Mathew to work towards that end. Sheesh.
> > From erin.sheldon at gmail.com Wed Jun 28 17:15:53 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 28 Jun 2006 17:15:53 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: <44A2E348.3040604@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> <44A2E348.3040604@jpl.nasa.gov> Message-ID: <331116dc0606281415s205f25fcmc90abba3b6d45a37@mail.gmail.com> ANTLR was also used for GDL http://gnudatalanguage.sourceforge.net/ with amazing results. Erin On 6/28/06, Mathew Yeates wrote: > I've been looking at a project called ANTLR (www.antlr.org) to do the > translation. Unfortunately, although I may have a Matlab grammar, it > would still be a lot of work to use ANTLR. I'll look at some of the > links that have posted. > > Mathew > > > Robert Kern wrote: > > Vinicius Lobosco wrote: > > > >> Let's just let those who want to try to do that and give our support? I > >> would be happy if I could some parts of my old matlab programs > >> translated to Scipy. > >> > > > > I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging > > Mathew to work towards that end. Sheesh. > > > > > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion >
From mfmorss at aep.com Thu Jun 29 09:16:38 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Thu, 29 Jun 2006 09:16:38 -0400 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: Message-ID: The SAS IML Cholesky function "root" returns upper triangular. Quoting the SAS documentation: The ROOT function performs the Cholesky decomposition of a matrix (for example, A) such that U'U = A where U is upper triangular. The matrix A must be symmetric and positive definite. Mark F. Morss Principal Analyst, Market Risk American Electric Power
From: "Keith Goodman" Sent by: numpy-discussion-bounces at lists.sourceforge.net To: "Robert Kern" cc: numpy-discussion at lists.sourceforge.net Date: 06/27/2006 11:25 PM Subject: Re: [Numpy-discussion] Should cholesky return upper or lower triangular matrix?
On 6/27/06, Robert Kern wrote: > Keith Goodman wrote: > > Isn't the Cholesky decomposition by convention an upper triangular > > matrix? I noticed, by porting Octave code, that linalg.cholesky > > returns the lower triangular matrix. > > > > References: > > > > http://mathworld.wolfram.com/CholeskyDecomposition.html > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html > > Lower: > http://en.wikipedia.org/wiki/Cholesky_decomposition > http://www.math-linux.com/spip.php?article43 > http://planetmath.org/?op=getobj&from=objects&id=1287 > http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 > http://www.riskglossary.com/link/cholesky_factorization.htm > http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf > > If anything, the convention appears to be lower-triangular. If you give me a second, I'll show you that the wikipedia supports my claim. OK. Lower it is. It will save me a transpose when I calculate joint random variables.
From charlesr.harris at gmail.com Thu Jun 29 10:46:18 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 29 Jun 2006 08:46:18 -0600 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: References: Message-ID: All, On 6/29/06, mfmorss at aep.com wrote: > > The SAS IML Cholesky function "root" returns upper triangular.
Quoting > the > SAS documentation: > > The ROOT function performs the Cholesky decomposition of a matrix (for > example, A) such that > U'U = A > where U is upper triangular. The matrix A must be symmetric and positive > definite. Does it matter whether the lower or upper triangular part is stored? We should just pick one convention and stick with it. That is simpler than, say, ATLAS where the choice is one of the parameters passed to the subroutine. I vote for lower triangular myself, if only because that was my choice last time I implemented a Cholesky factorization. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Glen.Mabey at swri.org Thu Jun 29 10:52:01 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 29 Jun 2006 09:52:01 -0500 Subject: [Numpy-discussion] explanation of 'order' parameter for reshape Message-ID: <20060629145201.GH13024@bams.swri.edu> Hello, It seems that the 'order' parameter is not explained neither in the docstring nor in "Guide to NumPy". I'm guessing that the alternative to the default value of 'C' would be 'Fortran'? Thanks, Glen From zhang.le.misc at gmail.com Thu Jun 29 10:57:57 2006 From: zhang.le.misc at gmail.com (Zhang Le) Date: Thu, 29 Jun 2006 15:57:57 +0100 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" Message-ID: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> Hi, I'm using 0.9.8 and find numpy.ndarray.min() is not exported to global space when doing a from numpy import * In [1]: from numpy import * In [2]: help min ------> help(min) Help on built-in function min in module __builtin__: min(...) min(sequence) -> value min(a, b, c, ...) -> value With a single sequence argument, return its smallest item. With two or more arguments, return the smallest argument. Also numpy.ndarray.max() is not available too. But the built-in sum() is replaced by numpy.ndarray.sum() as expected. 
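[Editor's note: back on the Cholesky thread — the lower-triangular convention Chuck votes for is what `numpy.linalg` ended up shipping, and it is easy to check with a current NumPy; a sketch, with an arbitrary random SPD matrix.]

```python
import numpy as np

# numpy.linalg.cholesky returns the *lower* factor L with A = L @ L.T.
# Build an arbitrary symmetric positive definite matrix to test with.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)     # SPD by construction

L = np.linalg.cholesky(A)

assert np.allclose(L, np.tril(L))   # L is lower triangular
assert np.allclose(L @ L.T, A)      # and reconstructs A
U = L.T.copy()                      # the SAS/Octave-style upper factor
assert np.allclose(U.T @ U, A)      # U'U = A, as in the SAS docs quoted above
```

So code ported from SAS or Octave only needs a transpose — the same transpose Keith was glad to drop.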
Is this a bug or just intended to do so and user has to use numpy.ndarray.min() explicitly? Cheers, Zhang Le From skip at pobox.com Thu Jun 29 11:09:40 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 29 Jun 2006 10:09:40 -0500 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> Message-ID: <17571.60724.424201.464714@montanaro.dyndns.org> Zhang> I'm using 0.9.8 and find numpy.ndarray.min() is not exported to Zhang> global space when doing a Zhang> from numpy import * I'm going to take a wild-ass guess and suggest that was a concious decision by the authors. Shadowing builtins is generally a no-no. You just need to be explicit instead of implicit: from numpy import min, max Skip From zhang.le.misc at gmail.com Thu Jun 29 11:23:28 2006 From: zhang.le.misc at gmail.com (Zhang Le) Date: Thu, 29 Jun 2006 16:23:28 +0100 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <17571.60724.424201.464714@montanaro.dyndns.org> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> Message-ID: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> > I'm going to take a wild-ass guess and suggest that was a concious decision > by the authors. Shadowing builtins is generally a no-no. You just need to > be explicit instead of implicit: > > from numpy import min, max I see. But why by default sum is exported? Is that a wise decision? In [1]: from numpy import * In [2]: help sum ------> help(sum) Help on function sum in module numpy.core.oldnumeric: sum(x, axis=0, dtype=None) ... 
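[Editor's note: Zhang Le's observation can be pinned down without any `import *` side effects — a sketch against a current NumPy; note the 2006-era `oldnumeric` sum defaulted to `axis=0`, while the modern default sums everything.]

```python
import builtins
import numpy as np

a = np.array([[3, 1],
              [2, 5]])

# min/max exist only as ndarray *methods*, so there is no numpy-level
# `min` for `from numpy import *` to shadow the builtin with:
assert a.min() == 1 and a.max() == 5

# numpy *does* export sum(), which is what clobbers the builtin:
assert builtins.sum([1, 2, 3]) == 6
assert np.sum(a) == 11   # modern default: sum over all elements (axis=None)

# The elementwise minimum/maximum ufuncs are a different thing again:
assert (np.minimum(a, 2) == np.array([[2, 1], [2, 2]])).all()
```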
Zhang Le From wright at esrf.fr Thu Jun 29 11:22:35 2006 From: wright at esrf.fr (Jon Wright) Date: Thu, 29 Jun 2006 17:22:35 +0200 Subject: [Numpy-discussion] Should cholesky return upper or In-Reply-To: References: Message-ID: <44A3F03B.2030204@esrf.fr> > Does it matter whether the lower or upper triangular part is stored? > We should just pick one convention and stick with it. That is simpler > than, say, ATLAS where the choice is one of the parameters passed to > the subroutine. I vote for lower triangular myself, if only because > that was my choice last time I implemented a Cholesky factorization. Wouldn't a keyword argument make more sense, there's a default, but you aren't denied access to ATLAS? It matters if you pass the factorisation to a legacy code which expects things to be a particular way around. Jon From jswhit at fastmail.fm Thu Jun 29 11:36:09 2006 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 29 Jun 2006 09:36:09 -0600 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> Message-ID: <44A3F369.1040409@fastmail.fm> Zhang Le wrote: >> I'm going to take a wild-ass guess and suggest that was a concious decision >> by the authors. Shadowing builtins is generally a no-no. You just need to >> be explicit instead of implicit: >> >> from numpy import min, max >> > I see. But why by default sum is exported? Is that a wise decision? > > In [1]: from numpy import * > > In [2]: help sum > ------> help(sum) > Help on function sum in module numpy.core.oldnumeric: > > sum(x, axis=0, dtype=None) > ... > > Zhang Le > > Zhang: The reason max and min are not imported by 'from numpy import *' is because there are no such functions in numpy. 
They are ndarray methods now (a.max(), a.min()); there are also maximum and minimum functions, which behave somewhat differently. There is still a sum function as you have discovered, and it will clobber the builtin. Another good reason not to use 'from numpy import *' -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From joris at ster.kuleuven.be Thu Jun 29 11:41:11 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Thu, 29 Jun 2006 17:41:11 +0200 Subject: [Numpy-discussion] incorporating C/C++ code Message-ID: <200606291741.12035.joris@ster.kuleuven.be> Hi, For heavy number crunching I would like to include C and/or C++ functions in my NumPy programs. They should have/give NumPy arrays as input/output. On http://www.scipy.org/Topical_Software I find several suggestions to wrap C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me to have an idea which one I can/should use. So, a few questions: Any suggestion for which package I should use? Does this heavily depend on the purpose I want to use it for? Where can I find the docs for Weave? I find several links on the internet pointing to http://www.scipy.org/documentation/weave for more info, but there is nothing there anymore.
Thanks in advance, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From rob at hooft.net Thu Jun 29 12:25:59 2006 From: rob at hooft.net (Rob Hooft) Date: Thu, 29 Jun 2006 18:25:59 +0200 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <200606291741.12035.joris@ster.kuleuven.be> References: <200606291741.12035.joris@ster.kuleuven.be> Message-ID: <44A3FF17.4000402@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Joris De Ridder wrote: > Hi, > > For heavy number crunching I would like to include C and/or C++ functions > in my NumPy programs. They should have/give NumPy arrays as input/output. > On http://www.scipy.org/Topical_Software I find several suggestions to wrap > C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me > to have an idea which one I can/should use. > > So, a few questions: > > Any suggestion for which package I should use? Does this heavily depend > for which purpose I want to use it? Wrapping C/C++ code is only necessary if the C/C++ code is pre-existing. I have thusfar only incorporated C code into Numeric python programs by writing the code natively as a python extension. Any kind of wrapping will carry a penalty. If you write a python extension in C you have all the flexibility you need. Rob Hooft - -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEo/8XH7J/Cv8rb3QRAm40AJ0YoTy653HP0FWmRN4/zuTFruDwUwCfTgrV 4zfSl3GVT8mneL60zzr2zeY= =JQrM -----END PGP SIGNATURE----- From norishimi at gmail.com Thu Jun 29 13:03:47 2006 From: norishimi at gmail.com (N Shimizu) Date: Fri, 30 Jun 2006 02:03:47 +0900 Subject: [Numpy-discussion] trouble on tru64 Message-ID: Hi everyone, I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, but I encounterd the compilation trouble. The error message is the following. 
Do you have any suggestion? Thank you in advance. Shimizu. numpy/core/src/umathmodule.c.src: In function 'nc_floor_quotl': numpy/core/src/umathmodule.c.src:600: warning: implicit declaration of function 'floorl' numpy/core/src/umathmodule.c.src:600: warning: incompatible implicit declaration of built-in function 'floorl' .... numpy/core/src/umathmodule.c.src: In function 'LONGDOUBLE_floor_divide': numpy/core/src/umathmodule.c.src:1050: warning: incompatible implicit declaration of built-in function 'floorl' numpy/core/src/umathmodule.c.src: In function 'CLONGDOUBLE_absolute': numpy/core/src/umathmodule.c.src:1319: warning: incompatible implicit declaration of built-in function 'sqrtl' .... build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c: At top level: build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c:15: error: 'acosl' undeclared here (not in a function) build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c:18: error: 'acoshf' undeclared here (not in a function) ... From oliphant.travis at ieee.org Thu Jun 29 13:28:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:28:05 -0600 Subject: [Numpy-discussion] trouble on tru64 In-Reply-To: References: Message-ID: <44A40DA5.1040805@ieee.org> N Shimizu wrote: > Hi everyone, > > I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, > > but I encounterd the compilation trouble. > Thanks for the test. This looks like a configuration problem. Could you post the config.h file that is generated when you run python setup.py It should be found in build/src.-/numpy/core/config.h I don't think we've got the right set of configurations going for that platform. Basically, we need to know if it has certain float and long versions of standard math functions (like floorf and floorl).
It looks like the configuration code detected that it didn't have these functions but then during compilation, the functions that NumPy created were already defined causing the error. If we can first get a valid config.h file for your platform, then we can figure out how to generate it during build time. -Travis From oliphant.travis at ieee.org Thu Jun 29 13:30:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:30:15 -0600 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> Message-ID: <44A40E27.7060103@ieee.org> Zhang Le wrote: >> I'm going to take a wild-ass guess and suggest that was a concious decision >> by the authors. Shadowing builtins is generally a no-no. You just need to >> be explicit instead of implicit: >> >> from numpy import min, max >> > I see. But why by default sum is exported? Is that a wise decision? > Well, Numeric had the sum function long before Python introduced one. NumPy adopted Numeric's sum function as well. -Travis From norishimi at gmail.com Thu Jun 29 13:46:51 2006 From: norishimi at gmail.com (N Shimizu) Date: Fri, 30 Jun 2006 02:46:51 +0900 Subject: [Numpy-discussion] trouble on tru64 In-Reply-To: <44A40DA5.1040805@ieee.org> References: <44A40DA5.1040805@ieee.org> Message-ID: Thank you for your reply. The "config.h" is the following. I hope it will be helpful. 
Shimizu /* #define SIZEOF_SHORT 2 */ /* #define SIZEOF_INT 4 */ /* #define SIZEOF_LONG 8 */ /* #define SIZEOF_FLOAT 4 */ /* #define SIZEOF_DOUBLE 8 */ #define SIZEOF_LONG_DOUBLE 16 #define SIZEOF_PY_INTPTR_T 8 /* #define SIZEOF_LONG_LONG 8 */ #define SIZEOF_PY_LONG_LONG 8 /* #define CHAR_BIT 8 */ #define MATHLIB m #define HAVE_LONGDOUBLE_FUNCS #define HAVE_FLOAT_FUNCS #define HAVE_LOG1P #define HAVE_EXPM1 #define HAVE_INVERSE_HYPERBOLIC #define HAVE_INVERSE_HYPERBOLIC_FLOAT #define HAVE_INVERSE_HYPERBOLIC_LONGDOUBLE #define HAVE_ISNAN #define HAVE_RINT 2006/6/30, Travis Oliphant : > N Shimizu wrote: > > Hi everyone, > > > > I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, > > > > but I encounterd the compilation trouble. > > > > Thanks for the test. This looks like a configuration problem. > Could you post the config.h file that is generated when you run python > setup.py > > It should be found in > > build/src.-/numpy/core/config.h > > I don't think we've got the right set of configurations going for that > platform. Basically, we need to know if it has certain float and long > versions of standard math functions (like floorf and floorl). > > It looks like the configuration code detected that it didn't have these > functions but then during compilation, the functions that NumPy created > were already defined causing the error. > > If we can first get a valid config.h file for your platform, then we can > figure out how to generate it during build time. 
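[Editor's note: a runtime analogue of the probe Travis describes — checking whether the math library actually exports the long-double variants — can be sketched with ctypes. This is illustrative only; numpy's build does the real check at compile/link time, not like this.]

```python
import ctypes
import ctypes.util

# Look for the long-double math functions in the C math library.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

def have_symbol(name):
    """True if the loaded library exports the named symbol."""
    try:
        getattr(libm, name)
        return True
    except AttributeError:
        return False

HAVE_FLOORL = have_symbol("floorl")
HAVE_SQRTL = have_symbol("sqrtl")
```

On a platform like the Tru64/gcc combination above, probes like these coming back false is what should turn off `HAVE_LONGDOUBLE_FUNCS` in config.h.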
> > -Travis > > From oliphant.travis at ieee.org Thu Jun 29 13:48:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:48:21 -0600 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <200606291741.12035.joris@ster.kuleuven.be> References: <200606291741.12035.joris@ster.kuleuven.be> Message-ID: <44A41265.3070106@ieee.org> Joris De Ridder wrote: > Hi, > > For heavy number crunching I would like to include C and/or C++ functions > in my NumPy programs. They should have/give NumPy arrays as input/output. > On http://www.scipy.org/Topical_Software I find several suggestions to wrap > C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me > to have an idea which one I can/should use. > This is my personal preference order: 1) If you can write Fortran code --- do it and use f2py 2) If you have well-encapsulated functions to call then use either weave or ctypes (both are very nice). 3) PyRex is a great option for writing a custom extension module that needs a lot of capability built in. At this point I would not use SWIG or Instant. So, if Fortran is out for you, then install scipy (or install weave separately) and start with weave http://www.scipy.org/Weave If you can compile your C/C++ functions as a shared-library, then check-out ctypes as well. -Travis > So, a few questions: > > Any suggestion for which package I should use? Does this heavily depend > for which purpose I want to use it? > > Where can I find the docs for Weave? I find several links on the internet > pointing to http://www.scipy.org/documentation/weave for more info, > but there is nothing anymore. > > Thanks in advance, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From lcordier at point45.com Thu Jun 29 13:55:39 2006 From: lcordier at point45.com (Louis Cordier) Date: Thu, 29 Jun 2006 19:55:39 +0200 (SAST) Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A41265.3070106@ieee.org> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> Message-ID: >> For heavy number crunching I would like to include C and/or C++ functions >> in my NumPy programs. They should have/give NumPy arrays as input/output. >> On http://www.scipy.org/Topical_Software I find several suggestions to wrap >> C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me >> to have an idea which one I can/should use. >> > This is my personal preference order: > > 1) If you can write Fortran code --- do it and use f2py > > 2) If you have well-encapsulated functions to call then use > either weave or ctypes (both are very nice). > > 3) PyRex is a great option for writing a custom extension module > that needs a lot of capability built in. > > At this point I would not use SWIG or Instant. > > So, if Fortran is out for you, then install scipy (or install weave > separately) and start with weave http://www.scipy.org/Weave Now since we are on the topic ;) I was wondering if there were any issues with, say, using Psyco with NumPy? http://psyco.sourceforge.net/ Then the number-crunching code could still be in Python at least. Anyone have some benchmarks/comments ? Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd.
http://www.point45.org From david.huard at gmail.com Thu Jun 29 14:42:51 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 29 Jun 2006 14:42:51 -0400 Subject: [Numpy-discussion] Bug in digitize function Message-ID: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> Hi, Here is something I noticed with digitize() that I guess would qualify as a small but annoying bug. In [165]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); print bin[0]; digitize(x,bin) 0.0925030184144 0.0925030184144 Out[165]: array([2, 9, 5, 9, 6, 1, 1, 1, 4, 5]) In [166]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); print bin[0]; digitize(x,bin) 0.0209738428066 0.0209738428066 Out[166]: array([ 5, 2, 8, 3, 0, 8, 9, 6, 10, 9]) Sometimes, the smallest number in x is counted in the first bin, and sometimes, it is counted as an outlier (bin number = 0). Moreover, creating the bin with bin = linspace(x.min()-eps, x.max(), 10) doesn't seem to solve the problem if eps is too small (ie 1./2**32). So basically, you can have In [186]: x.min()>bin[0] Out[186]: True and yet digitize() considers x.min() as an outlier. And to actually do something constructive, here is a docstring for digitize """Given an array of values and bin edges, digitize(values, bin_edges) returns the index of the bin each value fall into. The first bin has index 1, and the last bin has the index n, where n is the number of bins. Values smaller than the inferior edge are assigned index 0, while values larger than the superior edge are assigned index n+1. """ Cheers, David P.S. Many mails I send don't make it to the list. Is it gmail related ? -------------- next part -------------- An HTML attachment was scrubbed... 
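[Editor's note: David's digitize edge case is deterministic once the bin edges are exact binary fractions. For reference, the convention a current NumPy documents and follows is `bins[i-1] <= x < bins[i]`, so a value equal to the lowest edge lands in bin 1, never in the "outlier" bin 0 — a sketch with arbitrary values.]

```python
import numpy as np

bins = np.linspace(0.0, 1.0, 5)           # [0.  0.25 0.5  0.75 1.  ]
x = np.array([0.0, 0.1, 0.25, 0.99, 1.5])

# Default convention: bins[i-1] <= x < bins[i]; index 0 means "below the
# first edge" and index len(bins) means "at or beyond the last edge".
idx = np.digitize(x, bins)
print(idx)                                # -> [1 1 2 4 5]
```

This matches the docstring David proposes, with the half-open intervals made explicit.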
URL: From Chris.Barker at noaa.gov Thu Jun 29 15:10:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 29 Jun 2006 12:10:51 -0700 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <44A40E27.7060103@ieee.org> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> <44A40E27.7060103@ieee.org> Message-ID: <44A425BB.6010502@noaa.gov> Travis Oliphant wrote: > Well, Numeric had the sum function long before Python introduced one. > NumPy adopted Numeric's sum function as well. Yet another reason to NEVER use "import *" -CHB -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Jun 29 15:18:25 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 29 Jun 2006 12:18:25 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> Message-ID: <44A42781.6010305@noaa.gov> Louis Cordier wrote: >> At this point I would not use SWIG or Instant. In general, SWIG makes sense if you have a substantial existing library that you need access to, and particularly if that library is evolving and needs to be used directly from C/C++ code as well. If you are writing C/C++ code specifically to be used as a python extension, pyrex and boost::python are good choices. There was a Numeric add-on to boost::python at one point, I don't know if anyone has modified it for numpy. > I was wondering if there where any issues with say using Psyco > with NumPy ? http://psyco.sourceforge.net/ Psyco knows nothing of numpy arrays, and thus can only access them as generic Python objects -- so it won't help. 
A couple years ago, someone wrote a micro-Numeric package that used python arrays as the base storage, and ran it with psyco with pretty impressive results. What that tells me is that if psyco could be taught to understand numpy arrays (or at least the generic array interface), it could work well. It would be a lot of work, however. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From david.huard at gmail.com Thu Jun 29 15:27:57 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 29 Jun 2006 15:27:57 -0400 Subject: [Numpy-discussion] Sourceforge and gmail [was: Re: Recarray attributes writeable] In-Reply-To: <449325E6.5080609@gmail.com> References: <20060616161043.A29191@cfcp.uchicago.edu> <449325E6.5080609@gmail.com> Message-ID: <91cf711d0606291227g6fdfc850o7f3dce290f7b0469@mail.gmail.com>

Is it possible that gmail mails get through when they are sent by *lists.sourceforge.net* while they are blocked when the outgoing server is gmail.com? My situation is that I can't post a new discussion to the list, although replies seem to get through. David

2006/6/16, Robert Kern : > > Robert Kern wrote: > > Erin Sheldon wrote: > > > >>Hi everyone - > >> > >>(this is my fourth try in the last 24 hours to post this. > >>Apparently, the gmail smtp server is in the blacklist!! > >>this is bad). > > > > I doubt it since that's where my email goes through. > > And of course that's utterly bogus since I usually use GMane. Apologies. > > However, *this* is a real email to numpy-discussion. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth."
> -- Umberto Eco > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paustin at eos.ubc.ca Thu Jun 29 15:35:39 2006 From: paustin at eos.ubc.ca (Philip Austin) Date: Thu, 29 Jun 2006 12:35:39 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A42781.6010305@noaa.gov> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> <44A42781.6010305@noaa.gov> Message-ID: <17572.11147.794147.86548@eos.ubc.ca> Christopher Barker writes: > If you are writing C/C++ code specifically to be used as a python > extension, pyrex and boost::python are good choices. There was a Numeric > add-on to boost::python at one point, I don't know if anyone has > modified it for numpy. Yes, I've been migrating my extensions to numpy and will put up a new num_util.h version on the site (http://www.eos.ubc.ca/research/clouds/num_util.html) this weekend (it's about a 10 line diff). When I get a chance I'm also planning to add a page to the scipy wiki so we can see the same extension wrapped with boost, swig, f2py and pyrex. -- Phil From tim.hochberg at cox.net Thu Jun 29 15:37:03 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 29 Jun 2006 12:37:03 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A42781.6010305@noaa.gov> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> <44A42781.6010305@noaa.gov> Message-ID: <44A42BDF.4060507@cox.net> Christopher Barker wrote: > Louis Cordier wrote: > >>> At this point I would not use SWIG or Instant. >>> > > In general, SWIG makes sense if you have a substantial existing library > that you need access to, and particularly if that library is evolving > and needs to be used directly from C/C++ code as well. 
> > If you are writing C/C++ code specifically to be used as a python > extension, pyrex and boost::python are good choices. There was a Numeric > add-on to boost::python at one point, I don't know if anyone has > modified it for numpy. > > >> I was wondering if there where any issues with say using Psyco >> with NumPy ? http://psyco.sourceforge.net/ >> > > Psyco knows nothing of numpy arrays, and thus can only access them as > generic Python objects -- so it won't help. > > A couple years ago, someone wrote a micro-Numeric package that used > python arrays as the base storage, and ran it with psyco with pretty > impressive results. That might have been me. At least I have done this at least once. I even still have the code lying around if anyone wants to play with it. No guarantee that it hasn't succumbed to bit rot though. > What that tells me is that if psyco could be taught > to understand numpy arrays, (or at least the generic array interface) it > could work well. It would be a lot of work, however. > There's another problem as well. Psyco only really knows about 2 things. Integers (C longs actually) and python objects (pointers). Well, I guess that it also knows about arrays of integers/objects as well. It does not know how to handle floating point numbers directly. In fact, the way it handles floating point numbers is to break them into two 32-bit chunks and store them as two integers. When one needs to operate on the float these two integers need to be retrieved, reassembled, operated on and then stuck back into two integers again. As a result, psyco is never going to be super fast for floating point, even if it learned about numeric arrays. In principle, it could learn about floats, but it would require a major rejiggering. As I understand it, Armin has no plans to do much more with Psyco other than bug fixes, instead working on PyPy. 
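Tim's "two 32-bit chunks" description can be mimicked with the struct module. This is only an illustration of the representation cost he describes (split, store, reassemble before every operation), not Psyco's actual internals:

```python
import struct

value = 3.141592653589793

# Split the 64-bit double into two 32-bit unsigned integers, the way a
# representation built around C longs has to store it...
lo, hi = struct.unpack('<II', struct.pack('<d', value))

# ...and reassemble the pieces before any floating-point operation can
# actually happen. The round trip is bit-exact.
(roundtrip,) = struct.unpack('<d', struct.pack('<II', lo, hi))
assert roundtrip == value
```

The extra pack/unpack on every access is why floating point stays slow under such a scheme even when the surrounding loop is compiled.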
However, Psyco technology will likely go into PyPy (which I've mostly lost track of), so it's possible that down the road fast numeric stuff could be doable in PyPy. -tim

From robert.kern at gmail.com Thu Jun 29 18:35:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Jun 2006 17:35:31 -0500 Subject: [Numpy-discussion] We *will* move the mailing list to scipy.org Message-ID:

With a vote of 14 to 2 (and about 400 implicit "I don't care one way or the other"), the new ads, and the recent problems with Sourceforge bouncing or delaying GMail messages, I intend to move the mailing list from Sourceforge to scipy.org in short order. If you have strong objections to this move, this is your last chance to voice them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jswhit at fastmail.fm Thu Jun 29 18:42:34 2006 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 29 Jun 2006 16:42:34 -0600 Subject: [Numpy-discussion] python spherepack wrapper Message-ID: <44A4575A.6070701@fastmail.fm>

Hi All: For those of you who have a need for spherical harmonic transforms in python, I've updated my spherepack (http://www.cisl.ucar.edu/css/software/spherepack/) wrapper for numpy. Docs at http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/spharm.html. If you have numpy and a fortran compiler supported by numpy.f2py, all you need to do is run 'python setup.py install'. -Jeff -- Jeffrey S.
Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From rhl at astro.princeton.edu Thu Jun 29 20:47:20 2006 From: rhl at astro.princeton.edu (Robert Lupton) Date: Thu, 29 Jun 2006 20:47:20 -0400 Subject: [Numpy-discussion] Core dump in numpy 0.9.6 In-Reply-To: References: Message-ID: Here's an easy coredump: x = numpy.arange(10, dtype="f"); y = numpy.array(len(x), dtype="F"); y.imag += x Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 PyArray_CompareLists (l1=0x0, l2=0x1841618, n=1) at numpy/core/src/ multiarraymodule.c:132 132 if (l1[i] != l2[i]) return 0; (gdb) where #0 PyArray_CompareLists (l1=0x0, l2=0x1841618, n=1) at numpy/core/ src/multiarraymodule.c:132 #1 0x02a377d8 in PyUFunc_GenericFunction (self=0x538d40, args=0x2db3c88, mps=0xbfffd9c8) at numpy/core/src/ufuncobject.c:968 #2 0x02a39210 in ufunc_generic_call (self=0x538d40, args=0x2db3c88) at numpy/core/src/ufuncobject.c:2635 #3 0x000243bc in PyObject_CallFunction (callable=0x538d40, format=0x0) at Objects/abstract.c:1756 #4 0x0001f8cc in PyNumber_InPlaceAdd (v=0x565800, w=0x572540) at Objects/abstract.c:740 R From robert.kern at gmail.com Thu Jun 29 20:50:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Jun 2006 19:50:33 -0500 Subject: [Numpy-discussion] Core dump in numpy 0.9.6 In-Reply-To: References: Message-ID: Robert Lupton wrote: > Here's an easy coredump: > > x = numpy.arange(10, dtype="f"); y = numpy.array(len(x), dtype="F"); > y.imag += x > > Program received signal EXC_BAD_ACCESS, Could not access memory. This bug does not appear to exist in recent versions. Please try the latest release (and preferably, the current SVN) before reporting bugs. 
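For comparison, the update the crashing snippet appears to be aiming at runs cleanly when y is allocated with one complex slot per element of x. Note that numpy.array(len(x), dtype="F") in the report builds a zero-dimensional array holding the single value 10, which is presumably not what was intended; the sketch below uses zeros instead:

```python
import numpy as np

x = np.arange(10, dtype="f")        # float32
y = np.zeros(len(x), dtype="F")     # complex64, same length as x

# .imag is a writable view for complex arrays, so this updates in place.
y.imag += x
assert (y.imag == x).all()
```

A current NumPy is expected to raise a shape error (rather than crash) for the original zero-dimensional variant.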
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Thu Jun 29 21:03:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 19:03:16 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 Message-ID: <44A47854.1050106@ieee.org> I think it's time for the first beta-release of NumPy 1.0 I'd like to put it out within 2 weeks. Please make any comments or voice major concerns so that the 1.0 release series can be as stable as possible. -Travis From aisaac at american.edu Thu Jun 29 22:07:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 29 Jun 2006 22:07:05 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A47854.1050106@ieee.org> References: <44A47854.1050106@ieee.org> Message-ID: On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > Please make any comments or voice major concerns A rather minor issue, but I would just like to make sure that a policy decision was made not to move to a float default for identity(), ones(), zeros(), and empty(). (I leave aside arange().) I see the argument for a change to be 3-fold: 1. It is easier to introduce people to numpy if default data types are all float. (I teach, and I want my students to use numpy.) 2. It is a better match to languages from which users are likely to migrate (e.g., GAUSS or Matlab). 3. In the uses I am most familiar with, float is the most frequently desired data type. (I guess this may be field specific, especially for empty().) 
Cheers, Alan Isaac From kwgoodman at gmail.com Thu Jun 29 22:13:07 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 19:13:07 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) I vote float. From tim.leslie at gmail.com Thu Jun 29 22:26:28 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 30 Jun 2006 12:26:28 +1000 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/30/06, Keith Goodman wrote: > On 6/29/06, Alan G Isaac wrote: > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > I see the argument for a change to be 3-fold: > > 1. It is easier to introduce people to numpy if > > default data types are all float. (I teach, > > and I want my students to use numpy.) > > 2. It is a better match to languages from which > > users are likely to migrate (e.g., GAUSS or > > Matlab). > > 3. 
In the uses I am most familiar with, float is > > the most frequently desired data type. (I guess > > this may be field specific, especially for empty().) > > I vote float. +1 float Tim > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Thu Jun 29 22:38:21 2006 From: ndarray at mac.com (Sasha) Date: Thu, 29 Jun 2006 22:38:21 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: I vote for no change. It will be a major backward compatibility headache with applications that rely on integer arrays breaking in mysterious ways. If float wins, I hope there will be a script to update old code. Detecting single argument calls to these functions is probably not very hard. On 6/29/06, Keith Goodman wrote: > On 6/29/06, Alan G Isaac wrote: > > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > > Please make any comments or voice major concerns > > > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > I see the argument for a change to be 3-fold: > > 1. It is easier to introduce people to numpy if > > default data types are all float. (I teach, > > and I want my students to use numpy.) > > 2. It is a better match to languages from which > > users are likely to migrate (e.g., GAUSS or > > Matlab). > > 3. 
In the uses I am most familiar with, float is > > the most frequently desired data type. (I guess > > this may be field specific, especially for empty().) > > I vote float.

From wbaxter at gmail.com Thu Jun 29 22:40:19 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 30 Jun 2006 11:40:19 +0900 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID:

I also find the int behavior of these functions strange. +1 float default (or double) --bb

On 6/30/06, Tim Leslie wrote: > > On 6/30/06, Keith Goodman wrote: > > On 6/29/06, Alan G Isaac wrote: > > > A rather minor issue, but I would just like to make sure > > > that a policy decision was made not to move to a float > > > default for identity(), ones(), zeros(), and empty(). > > > (I leave aside arange().) > > > > > > I see the argument for a change to be 3-fold: > > > 1. It is easier to introduce people to numpy if > > > default data types are all float. (I teach, > > > and I want my students to use numpy.) > > > 2. It is a better match to languages from which > > > users are likely to migrate (e.g., GAUSS or > > > Matlab). > > > 3. In the uses I am most familiar with, float is > > > the most frequently desired data type. (I guess > > > this may be field specific, especially for empty().) > > > > I vote float. > > +1 float > > Tim > > -------------- next part -------------- An HTML attachment was scrubbed...
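The float-default behaviour being voted on can be emulated without patching NumPy at all. Here is a sketch of such a convenience module using functools.partial; the name defaultfloat echoes a suggestion made later in this thread, and everything about it is illustrative rather than an actual NumPy module:

```python
# defaultfloat.py -- sketch of a convenience module exposing the usual
# constructor names with float64 defaults. Callers can still override
# dtype explicitly.
import functools

import numpy as np

zeros = functools.partial(np.zeros, dtype=np.float64)
ones = functools.partial(np.ones, dtype=np.float64)
empty = functools.partial(np.empty, dtype=np.float64)
identity = functools.partial(np.identity, dtype=np.float64)

assert zeros(3).dtype == np.float64
assert ones((2, 2), dtype=np.int32).dtype == np.int32  # override still works
```

Code that wants the float defaults would then do `from defaultfloat import zeros, ones` instead of importing them from numpy directly.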
URL: From kwgoodman at gmail.com Thu Jun 29 23:09:57 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 20:09:57 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Bill Baxter wrote: > I also find the int behavior of these functions strange. > > +1 float default (or double) Oh, wait. Which do I want, float or double? What does rand, eigh, lstsq, etc return? From wbaxter at gmail.com Fri Jun 30 00:03:21 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 30 Jun 2006 13:03:21 +0900 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: Rand at least returns doubles: >>> num.rand(3,3).dtype.name 'float64' --bb On 6/30/06, Keith Goodman wrote: > > On 6/29/06, Bill Baxter wrote: > > I also find the int behavior of these functions strange. > > > > +1 float default (or double) > > Oh, wait. Which do I want, float or double? What does rand, eigh, > lstsq, etc return? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Fri Jun 30 00:22:47 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 21:22:47 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Bill Baxter wrote: > Rand at least returns doubles: > > >>> num.rand(3,3).dtype.name > 'float64' Then I vote float64. 
>> linalg.eigh(asmatrix(1))[0].dtype.name
'float64'
>> linalg.cholesky(asmatrix(1)).dtype.name
'float64'

From arnd.baecker at web.de Fri Jun 30 02:49:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 30 Jun 2006 08:49:28 +0200 (CEST) Subject: [Numpy-discussion] logspace behaviour/documentation Message-ID:

Hi, I am wondering a bit about the behaviour of logspace:

Definition: numpy.logspace(start, stop, num=50, endpoint=True, base=10.0)

Reading this I would assume that numpy.logspace(10**-12, 0.0, 100) gives 100 values, from start=10**-12 to stop=0.0, equispaced on a logarithmic scale. But this is not the case. Instead one has to do: numpy.logspace(-12, 0.0, 100)

Docstring: Evenly spaced numbers on a logarithmic scale. Computes int(num) evenly spaced exponents from start to stop. If endpoint=True, then last exponent is stop. Returns base**exponents.

My impression is that only the very last line is clearly saying what logspace does. And of course the code itself:

    y = linspace(start,stop,num=num,endpoint=endpoint)
    return _nx.power(base,y)

Possible solutions (see below):
a) modify logspace so that numpy.logspace(10**-12, 0.0, 100) works
b) keep the current behaviour and improve the doc-string

I would be interested in opinions on this. Best, Arnd

Possible solution for (a) (no error checking yet):

def logspace_modified(start, stop, num=50, endpoint=True):
    """Evenly spaced numbers on a logarithmic scale.

    Computes `num` evenly spaced numbers on a logarithmic scale from
    `start` to `stop`. If endpoint=True, then the last number is `stop`.
    """
    lstart = log(start)
    lstop = log(stop)
    y = linspace(lstart, lstop, num=num, endpoint=endpoint)
    return exp(y)

Possible improvement of the doc-string (b) - due to Lars Bittrich:

def logspace(start,stop,num=50,endpoint=True,base=10.0):
    """Evenly spaced numbers on a logarithmic scale.

    Return 'int(num)' evenly spaced samples on a logarithmic scale from 'base'**'start' to 'base'**'stop'.
If 'endpoint' is True, the last sample is 'base'**'stop'."""

From st at sigmasquared.net Fri Jun 30 02:53:30 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri, 30 Jun 2006 08:53:30 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A4CA6A.1010905@sigmasquared.net>

I guess this is a change which would just break too much code. And if the default type should be changed for these functions, why not also for array constructors? On the other hand, many people probably use Numpy almost exclusively with Float64's. A convenient way to change the default type could make their code easier to read. How much effort would it be to provide a convenience module that after importing replaces the relevant functions with wrappers that make Float64's the default? Regards, Stephan

Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: >> Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) > > Cheers, > Alan Isaac >

From gnurser at googlemail.com Fri Jun 30 05:02:56 2006 From: gnurser at googlemail.com (George Nurser) Date: Fri, 30 Jun 2006 10:02:56 +0100 Subject: [Numpy-discussion] immediate fill after empty gives None.
Message-ID: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> Have I done something silly here, or is this a bug? Opteron 64-bit, r2631 SVN. In [4]: depths_s2 = empty(shape=(5,),dtype=float) In [5]: depths_s2.fill(2.e5) In [6]: depths_s2 Out[6]: array([ 200000., 200000., 200000., 200000., 200000.]) In [11]: depths_s2 = (empty(shape=(5,),dtype=float)).fill(2.e5) In [12]: print depths_s2 None --George Nurser. From a.u.r.e.l.i.a.n at gmx.net Fri Jun 30 05:13:22 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Fri, 30 Jun 2006 11:13:22 +0200 Subject: [Numpy-discussion] immediate fill after empty gives None. In-Reply-To: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> References: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> Message-ID: <200606301113.22813.a.u.r.e.l.i.a.n@gmx.net> Hi, > Opteron 64-bit, r2631 SVN. > > In [4]: depths_s2 = empty(shape=(5,),dtype=float) > In [5]: depths_s2.fill(2.e5) > In [6]: depths_s2 > Out[6]: array([ 200000., 200000., 200000., 200000., 200000.]) > > In [11]: depths_s2 = (empty(shape=(5,),dtype=float)).fill(2.e5) > In [12]: print depths_s2 > None everything is fine. x.fill() fills x in-place and returns nothing. So in line 11, you created an array, filled it with 2.e5, assigned the return value of fill() (=None) to depths_s2 and threw the array away. HTH, Johannes From oliphant.travis at ieee.org Fri Jun 30 05:33:56 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Jun 2006 03:33:56 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A4F004.60809@ieee.org> Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > >> Please make any comments or voice major concerns >> > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). 
> (I leave aside arange().)

This was a policy decision made many months ago after discussion on this list and would need overwhelming pressure to change.

> I see the argument for a change to be 3-fold:

I am, however, sympathetic to the arguments for wanting floating-point defaults. I wanted to change this originally but was convinced to not make such a major change for backward compatibility (more on that later). Nonetheless, I would support the creation of a module called something like defaultfloat or some other equally impressive name ;-) which contained floating-point defaults of these functions (with the same names). Feel free to contribute (or at least find a better name).

Regarding the problem of backward compatibility: I am very enthused about the future of both NumPy and SciPy. There have been a large number of newcomers to the community who have contributed impressively and I see very impressive things going on. This is "a good thing" because these projects need many collaborators and contributors to be successful. However, I have not lost sight of the fact that we still have a major adoption campaign to win before declaring NumPy a success. There are a lot of people who still haven't come over from Numeric and numarray. Consider these download numbers:

Numeric-24.2 (released Nov. 11, 2005)
  14275 py24.exe
  2905 py23.exe
  9144 tar.gz

Numarray 1.5.1 (released Feb. 7, 2006)
  10272 py24.exe
  11883 py23.exe
  12779 tar.gz

NumPy 0.9.8 (May 17, 2006)
  3713 py24.exe
  558 py23.exe
  4111 tar.gz

While it is hard to read too much into numbers, this tells me that there are about 10,000 current users of Numeric/Numarray who have not even *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up significantly from its earlier releases. Why is that? It could be that many of the downloads are "casual" users who need it for some other application (in which case they wouldn't feel inclined to try NumPy).
On the other hand, it is also possible that many are still scared away by the pre-1.0 development cycle --- it has been a bit bumpy for the stalwarts who've braved the rapids as NumPy has matured. Changes like the proposal to move common functions from default integer to default float are exactly the kind of thing that leads people to wait on getting NumPy.

One thing I've learned about Open Source development is that it can be hard to figure out exactly what is bothering people and get good critical feedback: people are more likely to just walk away with their complaints than to try and verbalize and/or post them. So, looking at adoption patterns can be a reasonable way to pick up on attitudes. It would appear that there is still a remarkable number of people who are either waiting for NumPy 1.0 or waiting for something else. I'm not sure. I think we have to wait until 1.0 to find out. Therefore, bug fixes and stabilizing the NumPy API are my #1 priority right now.

The other day I read a post by Alex Martelli (an influential Googler) to the Python list where he was basically suggesting that people stick with Numeric until things "stabilize". I can hope he meant "until NumPy 1.0 comes out" but he didn't say that and maybe he meant "until the array in Python stabilizes." I hope he doesn't mean the rumors about an array object in Python itself. Let me be the first to assure everyone that rumors of a "capable" array object in Python have been greatly exaggerated. I would be thrilled if we could just get the "infrastructure" into Python so that different extension modules could at least agree on an array interface. That is a far cry from fulfilling the needs of any current Num user, however.

I say all this only to point out why destabilizing changes are difficult to do at this point, and to encourage anyone with an interest to continue to promote NumPy.
If you are at all grateful for its creation, then please try to encourage those whom you know to push for NumPy adoption (or at least a plan for its adoption) in the near future. Best regards, -Travis

From pjssilva at ime.usp.br Fri Jun 30 06:49:08 2006 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Fri, 30 Jun 2006 07:49:08 -0300 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <1151664548.19027.1.camel@localhost.localdomain>

+1 for float64. I'll teach Introduction to Numerical Linear Algebra next term and I will use numpy! Best, Paulo

--
Paulo José da Silva e Silva
Professor Assistente do Dep. de Ciência da Computação
(Assistant Professor of the Computer Science Dept.)
Universidade de São Paulo - Brazil
e-mail: pjssilva at ime.usp.br Web: http://www.ime.usp.br/~pjssilva

Teoria é o que não entendemos o suficiente para chamar de prática.
(Theory is something we don't understand well enough to call practice.)

From jg307 at cam.ac.uk Fri Jun 30 06:58:41 2006 From: jg307 at cam.ac.uk (James Graham) Date: Fri, 30 Jun 2006 11:58:41 +0100 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <44A503E1.2040307@cam.ac.uk>

Travis Oliphant wrote: > Nonetheless, I would support the creation of a module called something > like defaultfloat or some other equally impressive name ;-) which > contained floating-point defaults of these functions (with the same > names).

I'd also like to see a way to make the constructors create floating-point arrays by default.

> Numeric-24.2 (released Nov.
11, 2005) > > 14275 py24.exe > 2905 py23.exe > 9144 tar.gz > > Numarray 1.5.1 (released Feb, 7, 2006) > > 10272 py24.exe > 11883 py23.exe > 12779 tar.gz > > NumPy 0.9.8 (May 17, 2006) > > 3713 py24.exe > 558 py23.exe > 4111 tar.gz > > > While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy).

(just as an aside, a further possibility is the relative availability of documentation for numpy and the other array packages. I entirely understand the reasoning behind the Guide to NumPy being a for-money offering but it does present a significant barrier to adoption, particularly in an environment where the alternatives all offer for-free documentation above and beyond what is available in the docstrings).

--
"You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes

From lcordier at point45.com Fri Jun 30 07:57:47 2006 From: lcordier at point45.com (Louis Cordier) Date: Fri, 30 Jun 2006 13:57:47 +0200 (SAST) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 (fwd) Message-ID:

> While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy).

Secondary dependency of other projects maybe ? http://www.google.com/search?q=requires+Numeric+python My money is on Spambayes... On the other hand ;) isn't small numbers a good thing, thus the people using NumPy over Numeric/numarray knows that some things in NumPy might still change and thus their code as well. I'll risk to say their projects are probably also still under active development. So now would probably be the best time to make these type of changes.
Stated differently, how would we like NumPy to function 2 years from now ? With float64's or with int's ? Then we should rather change it now. Then again where are NumPy in a crossing the chasm (http://en.wikipedia.org/wiki/Crossing_the_Chasm) sense of way, visionary or pragmatist ? Just a few random thoughts. Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From stefan at sun.ac.za Fri Jun 30 08:24:58 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 30 Jun 2006 14:24:58 +0200 Subject: [Numpy-discussion] Bug in digitize function In-Reply-To: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> References: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> Message-ID: <20060630122458.GA4638@mentat.za.net> Hi David On Thu, Jun 29, 2006 at 02:42:51PM -0400, David Huard wrote: > Here is something I noticed with digitize() that I guess would qualify as a > small but annoying bug. > > In [165]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); > print bin[0]; digitize(x,bin) > 0.0925030184144 > 0.0925030184144 > Out[165]: array([2, 9, 5, 9, 6, 1, 1, 1, 4, 5]) > > In [166]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); > print bin[0]; digitize(x,bin) > 0.0209738428066 > 0.0209738428066 > Out[166]: array([ 5, 2, 8, 3, 0, 8, 9, 6, 10, 9]) Good catch! Fixed in SVN (along with docstring and test). Cheers Stéfan From t.zito at biologie.hu-berlin.de Fri Jun 30 08:53:30 2006 From: t.zito at biologie.hu-berlin.de (Tiziano Zito) Date: Fri, 30 Jun 2006 14:53:30 +0200 Subject: [Numpy-discussion] MDP-2.0 released Message-ID: <20060630125330.GD16597@itb.biologie.hu-berlin.de> MDP version 2.0 has been released! What is it? ----------- Modular toolkit for Data Processing (MDP) is a data processing framework written in Python.
From the user's perspective, MDP consists of a collection of trainable supervised and unsupervised algorithms that can be combined into data processing flows. The base of readily available algorithms includes Principal Component Analysis, two flavors of Independent Component Analysis, Slow Feature Analysis, Gaussian Classifiers, Growing Neural Gas, Fisher Discriminant Analysis, and Factor Analysis. From the developer's perspective, MDP is a framework to make the implementation of new algorithms easier. MDP takes care of tedious tasks like numerical type and dimensionality checking, leaving the developer free to concentrate on the implementation of the training and execution phases. The new elements then automatically integrate with the rest of the library. As its user base is increasing, MDP might be a good candidate for becoming a common repository of user-supplied, freely available, Python implemented data processing algorithms. Resources --------- Download: http://sourceforge.net/project/showfiles.php?group_id=116959 Homepage: http://mdp-toolkit.sourceforge.net Mailing list: http://sourceforge.net/mail/?group_id=116959 What's new in version 2.0? -------------------------- MDP 2.0 introduces some important structural changes. It is now possible to implement nodes with multiple training phases and even nodes with an undetermined number of phases. This allows for example the implementation of algorithms that need to collect some statistics on the whole input before proceeding with the actual training, or others that need to iterate over a training phase until a convergence criterion is satisfied. The ability to train each phase using chunks of input data is maintained if the chunks are generated with iterators. Nodes that require supervised training can be defined in a very straightforward way by passing additional arguments (e.g., labels or a target output) to the 'train' method. 
New algorithms have been added, expanding the base of readily available basic data processing elements. MDP is now based exclusively on the NumPy Python numerical extension. -- Tiziano Zito Institute for Theoretical Biology Humboldt-Universitaet zu Berlin Invalidenstrasse, 43 D-10115 Berlin, Germany Pietro Berkes Gatsby Computational Neuroscience Unit Alexandra House, 17 Queen Square London WC1N 3AR, United Kingdom From bsouthey at gmail.com Fri Jun 30 09:24:11 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 30 Jun 2006 08:24:11 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: Hi, You should be encouraged by the trend from Numeric to numarray because the tar users clearly are prepared to upgrade. In terms of the education program, the 1.0 release is the best starting point as there is a general phobia for pre-1.0 releases (and dot zero releases). Also, Python 2.5 is coming so it is probably a good time to attempt to educate the exe users on numpy. One way is to provide numpy first (it may be a little too harsh to say only) so people see it when they upgrade. There are two key aspects, probably very much related, that need to happen with the 1.0 release: 1) Identify those "[s]econdary dependency" projects as Louis states (BioPython also comes to mind) and get them to convert. 2) Get the major distros (e.g. openSUSE) to include numpy and not Numeric. In turn this should also make people who package (like rpms) also use numpy. This may mean having to support both Numeric and numpy in the initial phase.
Regards Bruce On 6/30/06, Travis Oliphant wrote: > Alan G Isaac wrote: > > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > > >> Please make any comments or voice major concerns > >> > > > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > This was a policy decision made many months ago after discussion on this > list and would need over-whelming pressure to change. > > > I see the argument for a change to be 3-fold: > > > > I am, however, sympathetic to the arguments for wanting floating-point > defaults. I wanted to change this originally but was convinced to not > make such a major change for back-ward compatibility (more on that later). > > Nonetheless, I would support the creation of a module called something > like defaultfloat or some-other equally impressive name ;-) which > contained floating-point defaults of these functions (with the same > names). > > Feel free to contribute (or at least find a better name). > > > Regarding the problem of backward compatibility: > > I am very enthused about the future of both NumPy and SciPy. There have > been a large number of new-comers to the community who have contributed > impressively and I see very impressive things going on. This is "a > good thing" because these projects need many collaborators and > contributors to be successful. > > However, I have not lost sight of the fact that we still have a major > adoption campaign to win before declaring NumPy a success. There are a > lot of people who still haven't come-over from Numeric and numarray. > Consider these download numbers: > > Numeric-24.2 (released Nov. 
11, 2005) > > 14275 py24.exe > 2905 py23.exe > 9144 tar.gz > > Numarray 1.5.1 (released Feb, 7, 2006) > > 10272 py24.exe > 11883 py23.exe > 12779 tar.gz > > NumPy 0.9.8 (May 17, 2006) > > 3713 py24.exe > 558 py23.exe > 4111 tar.gz > > > While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy). > > On the other hand, it is also possible that many are still scared away > by the pre-1.0 development-cycle --- it has been a bit bumpy for the > stalwarts who've braved the rapids as NumPy has matured. Changes like > the proposal to move common functions from default integer to default > float are exactly the kind of thing that leads people to wait on getting > NumPy. > > One thing I've learned about Open Source development is that it can be > hard to figure out exactly what is bothering people and get good > critical feedback: people are more likely to just walk away with their > complaints than to try and verbalize and/or post them. So, looking at > adoption patterns can be a reasonable way to pick up on attitudes. > > It would appear that there is still a remarkable number of people who > are either waiting for NumPy 1.0 or waiting for something else. I'm not > sure. I think we have to wait until 1.0 to find out. Therefore, > bug-fixes and stabilizing the NumPy API is my #1 priority right now. > > The other day I read a post by Alex Martelli (an influential Googler) to > the Python list where he was basically suggesting that people stick with > Numeric until things "stabilize". I can hope he meant "until NumPy 1.0 > comes out" but he didn't say that and maybe he meant "until the array > in Python stabilizes." 
> > I hope he doesn't mean the rumors about an array object in Python > itself. Let me be the first to assure everyone that rumors of a > "capable" array object in Python have been greatly exaggerated. I would > be thrilled if we could just get the "infra-structure" into Python so > that different extension modules could at least agree on an array > interface. That is a far cry from fulfilling the needs of any current > Num user, however. > > I say all this only to point out why de-stabilizing changes are > difficult to do at this point, and to encourage anyone with an interest > to continue to promote NumPy. If you are at all grateful for its > creation, then please try to encourage those whom you know to push for > NumPy adoption (or at least a plan for its adoption) in the near future. > > Best regards, > > -Travis > > > > > > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From simon at arrowtheory.com Fri Jun 30 09:47:38 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 30 Jun 2006 15:47:38 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <20060630154738.4837c053.simon@arrowtheory.com> On Fri, 30 Jun 2006 03:33:56 -0600 Travis Oliphant wrote: > > One thing I've learned about Open Source development is that it can be > hard to figure out exactly what is bothering people and get good > critical feedback: people are more likely to just walk away with their > complaints than to 
try and verbalize and/or post them. So, looking at > adoption patterns can be a reasonable way to pick up on attitudes. General confusion in the community. The whole numeric->numarray->numpy story is a little strange for people to believe. Or at least the source for many jokes. Also, there is no mention of numpy on the numarray page. The whole thing smells a little fishy :) Most of the (more casual) users of python for science that i talk to are quite confused about what is going on. It also "looks" like numpy is only a few months old. Personally, I am ready to evangelise numpy wherever i can. (eg. Europython in 4 days time:) ) Simon. From aisaac at american.edu Fri Jun 30 09:50:52 2006 From: aisaac at american.edu (Alan Isaac) Date: Fri, 30 Jun 2006 09:50:52 -0400 (EDT) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: On Fri, 30 Jun 2006, Travis Oliphant wrote: > I am, however, sympathetic to the arguments for wanting > floating-point defaults. I wanted to change this > originally but was convinced to not make such a major > change for back-ward compatibility (more on that later). Before 1.0, it seems right to go with the best design and take some short-run grief for it if necessary. If the right default is float, but extant code will be hurt, then let float be the default and put the legacy-code fix (function redefinition) in the compatibility module. One view ...
Alan Isaac From pebarrett at gmail.com Fri Jun 30 09:52:51 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Fri, 30 Jun 2006 09:52:51 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 (fwd) In-Reply-To: References: Message-ID: <40e64fa20606300652l528f054o293487dd1f862dcf@mail.gmail.com> On 6/30/06, Louis Cordier wrote: > > > While it is hard to read too much into numbers, this tells me that there > > are about 10,000 current users of Numeric/Numarray who have not even > > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > > significantly from its earlier releases. Why is that? It could be > > that many of the downloads are "casual" users who need it for some other > > application (in which case they wouldn't feel inclined to try NumPy). > > Secondary dependency of other projects maybe ? > http://www.google.com/search?q=requires+Numeric+python > > My money is on Spambayes... > > On the other hand ;) isn't small numbers a good thing, > thus the people using NumPy over Numeric/numarray knows > that some things in NumPy might still change and thus > their code as well. > > I'll risk to say their projects are probably also still > under active development. > > So now would probably be the best time to make these > type of changes. Stated differently, how would we like > NumPy to function 2 years from now ? > > With float64's or with int's ? Then we should rather > change it now. > > Then again where are NumPy in a crossing the chasm > (http://en.wikipedia.org/wiki/Crossing_the_Chasm) > sense of way, visionary or pragmatist ? > > Just a few random thoughts. > > Regards, Louis. > > -- > Louis Cordier cell: +27721472305 > Point45 Entertainment (Pty) Ltd. http://www.point45.org >
+1 for float64 If we want to make Numpy the premier numerical analysis environment, then let's get it right. I've been bitten too many times by IDL's float32 default and Numeric's/Numarray's int32. If backward compatibility is the most important requirement then there would be no reason to write Numpy. There, I've said it. -- Paul From stephenemslie at gmail.com Fri Jun 30 10:13:03 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Fri, 30 Jun 2006 15:13:03 +0100 Subject: [Numpy-discussion] iterate along a ray: linear algebra? Message-ID: <51f97e530606300713w1c167cf3j10c36d24f87326cf@mail.gmail.com> I am in the process of implementing an image processing algorithm that requires following rays extending outwards from a starting point and calculating the intensity derivative at each point. The idea is to find the point where the difference in intensity goes beyond a particular threshold. Specifically I'm examining an image of an eye to find the pupil, and the edge of the pupil is a sharp change in intensity. How does one iterate along a line in a 2d matrix, and is there a better way to do this? Is this a problem that linear algebra can help with? Thanks Stephen Emslie -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kwgoodman at gmail.com Fri Jun 30 10:15:39 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 07:15:39 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) So far the vote is 8 for float, 1 for int. From Glen.Mabey at swri.org Fri Jun 30 10:22:29 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Fri, 30 Jun 2006 09:22:29 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <20060630142228.GB30022@bams.swri.edu> On Fri, Jun 30, 2006 at 07:15:39AM -0700, Keith Goodman wrote: > So far the vote is 8 for float, 1 for int. +1 for float64. Glen From tim.hochberg at cox.net Fri Jun 30 10:27:06 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 30 Jun 2006 07:27:06 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A534BA.8040802@cox.net> Regarding choice of float or int for default: The number one priority for numpy should be to unify the three disparate Python numeric packages. 
Whatever choice of defaults facilitates that is what I support. Personally, given no other constraints, I would probably just get rid of the defaults altogether and make the user choose. -tim From erin.sheldon at gmail.com Fri Jun 30 10:29:06 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 30 Jun 2006 10:29:06 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630154738.4837c053.simon@arrowtheory.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> Message-ID: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> On 6/30/06, Simon Burton wrote: > > General confusion in the community. The whole numeric->numarray->numpy story > is a little strange for people to believe. Or at least the source for > many jokes. > Also, there is no mention of numpy on the numarray page. The whole > thing smells a little fishy :) I can say that coming to numpy early this year I was confused by this, and in fact I began by using numarray because the documentation was available and clearly written. I now support Travis on his book, since none of this would be happening so rapidly without him, but as I was looking for relief from my IDL license woes this turned me off a bit. From Googling, it just wasn't clear which was the future, especially since as I dug deeper I saw old references to numpy that were not referring to the current project. I do think that this is more clear now, but the pages http://numeric.scipy.org/ -- Looks antiquated http://www.numpy.org/ -- is empty are not helping. numeric.scipy.org needs to be converted to the wiki look and feel of the rest of scipy.org, or at least made to look modern. numpy.org should point to the new page perhaps. And the numarray page should at least discuss the move to numpy and have links.
Erin From dd55 at cornell.edu Fri Jun 30 10:29:42 2006 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 30 Jun 2006 10:29:42 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <200606301029.42616.dd55@cornell.edu> +1 for float64 From erin.sheldon at gmail.com Fri Jun 30 10:33:41 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 30 Jun 2006 10:33:41 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> Message-ID: <331116dc0606300733s685ce9e8p5e848ea590475163@mail.gmail.com> On 6/30/06, Erin Sheldon wrote: > http://www.numpy.org/ -- is empty I see this is now pointing to the sourceforge site. Must have been a glitch there earlier as it was returning an empty page. From sransom at nrao.edu Fri Jun 30 10:40:35 2006 From: sransom at nrao.edu (Scott Ransom) Date: Fri, 30 Jun 2006 10:40:35 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <200606301029.42616.dd55@cornell.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> Message-ID: <20060630144035.GA5138@ssh.cv.nrao.edu> +1 for float64 for me as well. Scott On Fri, Jun 30, 2006 at 10:29:42AM -0400, Darren Dale wrote: > +1 for float64 > >
-- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From aisaac at american.edu Fri Jun 30 11:11:26 2006 From: aisaac at american.edu (Alan Isaac) Date: Fri, 30 Jun 2006 11:11:26 -0400 (EDT) Subject: [Numpy-discussion] logspace behaviour/documentation In-Reply-To: References: Message-ID: On Fri, 30 Jun 2006, Arnd Baecker wrote: > I am wondering a bit about the behaviour of logspace: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/logspace.html fwiw, Alan Isaac From joris at ster.kuleuven.be Fri Jun 30 11:16:02 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 30 Jun 2006 17:16:02 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> References: <44A47854.1050106@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> Message-ID: <200606301716.02473.joris@ster.kuleuven.be> On Friday 30 June 2006 16:29, Erin Sheldon wrote: [ES]: the pages [ES]: [ES]: http://numeric.scipy.org/ -- Looks antiquated [ES]: [ES]: are not helping. My opinion too. If that page is the first page you learn about NumPy, you won't have a good impression. Travis, would you accept concrete suggestions or 'help' to improve that page?
Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From steve at arachnedesign.net Fri Jun 30 11:16:14 2006 From: steve at arachnedesign.net (Steve Lianoglou) Date: Fri, 30 Jun 2006 11:16:14 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: > Before 1.0, it seems right to go with the best design > and take some short-run grief for it if necessary. > > If the right default is float, but extant code will be hurt, > then let float be the default and put the legacy-code fix > (function redefinition) in the compatability module +1 on this very idea. (sorry for sending this directly to you @ first, Alan) From fperez.net at gmail.com Fri Jun 30 11:25:20 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Jun 2006 09:25:20 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630144035.GA5138@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Scott Ransom wrote: > +1 for float64 for me as well. +1 for float64 I have lots of code overriding the int defaults by hand which were giving me grief with hand-written extensions (which were written double-only for speed reasons). I'll be happy to clean this up. I completely understand Travis' concerns about backwards compatibility, but frankly, I think that right now the quality and community momentum of numpy is already enough that it will carry things forward. People will suffer a little during the porting days, but they'll be better off in the long run. I don't think we should underestimate the value of eternal happiness :) Besides, decent unit tests will catch these problems. We all know that every scientific code in existence is unit tested to the smallest routine, so this shouldn't be a problem for anyone.
Cheers, f From ndarray at mac.com Fri Jun 30 12:35:35 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 12:35:35 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Fernando Perez wrote: > ... > Besides, decent unit tests will catch these problems. We all know > that every scientific code in existence is unit tested to the smallest > routine, so this shouldn't be a problem for anyone. Is this a joke? Did anyone ever measure the coverage of numpy unittests? I would be surprised if it was more than 10%. From travis at enthought.com Fri Jun 30 12:38:55 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 11:38:55 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <200606301716.02473.joris@ster.kuleuven.be> References: <44A47854.1050106@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> <200606301716.02473.joris@ster.kuleuven.be> Message-ID: <44A5539F.7070401@enthought.com> Joris De Ridder wrote: > On Friday 30 June 2006 16:29, Erin Sheldon wrote: > [ES]: the pages > [ES]: > [ES]: http://numeric.scipy.org/ -- Looks antiquated > [ES]: > [ES]: are not helping. > > My opinion too. If that page is the first page you learn about NumPy, > you won't have a good impression. > > Travis, would you accept concrete suggestions or 'help' to improve > that page? > > Cheers, > Joris > Speaking for the other Travis...I think he's open to suggestions (he hasn't yelled at me yet for suggesting the same sort of things). There was an earlier conversation on this list about the numpy page, in which we proposed redirecting all numeric/numpy links to numpy.scipy.org. I'll ask Jeff to do these redirects if: - everyone agrees that address is a good one - we have the content shaped up on that page.
For now, I've copied the content with some basic cleanup (and adding a style sheet) here: http://numpy.scipy.org If anyone with a modicum of web design experience wants access to edit this site...please (please) speak up and it will be so. Other suggestions are welcome. Travis (Vaught) From travis at enthought.com Fri Jun 30 12:40:14 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 11:40:14 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <44A553EE.1060504@enthought.com> Sasha wrote: > On 6/30/06, Fernando Perez wrote: > >> ... >> Besides, decent unit tests will catch these problems. We all know >> that every scientific code in existence is unit tested to the smallest >> routine, so this shouldn't be a problem for anyone. >> > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. > Very obviously a joke...uh...with the exception of enthought-written scientific code, of course ;-) From kwgoodman at gmail.com Fri Jun 30 12:43:55 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 09:43:55 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. That's a conundrum. A joke is no longer a joke once you point out, yes it is a joke. 
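As a footnote to the vote being tallied in this thread: the float64 side won, and it is the behaviour that released NumPy shipped with. A quick check against any reasonably recent NumPy (using the modern `import numpy as np` spelling, which postdates these posts):

```python
import numpy as np

# The constructors under discussion default to float64 in released NumPy,
# not to the Numeric-style integer default:
print(np.zeros(3).dtype)      # float64
print(np.ones((2, 2)).dtype)  # float64
print(np.empty(4).dtype)      # float64
print(np.identity(3).dtype)   # float64

# Code that wants the old integer behaviour passes dtype explicitly:
print(np.zeros(3, dtype=int).dtype)  # platform default int, e.g. int64
```

Explicit `dtype=` arguments were the recommended migration path for Numeric code that relied on the integer default.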
From jonas at mwl.mit.edu Fri Jun 30 10:36:06 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 30 Jun 2006 10:36:06 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <1151678166.16911.9.camel@convolution.mit.edu> I've got to say +1 for Float64 too. I write a lot of numpy code, and this bites me at least once a week. You'd think I'd learn better, but it's just so easy to screw this up when you have to switch back and forth between matlab (which I'm forced to TA) and numpy (which I use for Real Work). ...Eric From robert.kern at gmail.com Fri Jun 30 12:53:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 11:53:02 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A534BA.8040802@cox.net> References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: Tim Hochberg wrote: > Regarding choice of float or int for default: > > The number one priority for numpy should be to unify the three disparate > Python numeric packages. Whatever choice of defaults facilitates that is > what I support. +10 > Personally, given no other constraints, I would probably just get rid of > the defaults all together and make the user choose. My preferred solution is to add class methods to the scalar types rather than screw up compatibility. In [1]: float64.ones(10) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kwgoodman at gmail.com Fri Jun 30 13:03:50 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 10:03:50 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: On 6/30/06, Robert Kern wrote: > Tim Hochberg wrote: > > Regarding choice of float or int for default: > > > > The number one priority for numpy should be to unify the three disparate > > Python numeric packages. Whatever choice of defaults facilitates that is > > what I support. > > +10 > > > Personally, given no other constraints, I would probably just get rid of > > the defaults all together and make the user choose. > > My preferred solution is to add class methods to the scalar types rather than > screw up compatibility. > > In [1]: float64.ones(10) I don't think an int will be able to hold the number of votes for float64. From wright at esrf.fr Fri Jun 30 13:04:06 2006 From: wright at esrf.fr (Jon Wright) Date: Fri, 30 Jun 2006 19:04:06 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <44A55986.8040905@esrf.fr> Travis Oliphant wrote: >I hope he doesn't mean the rumors about an array object in Python >itself. Let me be the first to assure everyone that rumors of a >"capable" array object in Python have been greatly exaggerated. I would >be thrilled if we could just get the "infra-structure" into Python so >that different extension modules could at least agree on an array >interface. That is a far cry from fulfilling the needs of any current >Num user, however. > > Having {pointer + dimensions + strides + type} in the python core would be an incredible step forward - this is far more important than changing my python code to do functionally the same thing with numpy instead of Numeric. 
If the new array object supports most of the interface of the current "array" module then it is already very capable for many tasks. It would be great if it also works with Jython (etc). Bruce Southley wrote: >1) Identify those "[s]econdary dependency" projects as Louis states >(BioPython also comes to mind) and get them to convert. > As author of a (fairly obscure) secondary dependency package it is not clear that this is the right time to convert. I very much admire the matplotlib approach of using Numerix and see this as a better solution than switching (or indeed re-writing in java/c++ etc). However, looking into the matplotlib SVN I see:
_image.cpp 2420 4 weeks cmoad applied Andrew Straw's numpy patch
numerix/_sp_imports.py 2478 2 weeks teoliphant Make recent changes backward compatible with numpy 0.9.8
numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant Fix import error for new numpy
While I didn't look at either the code or the diff the comments clearly read as: "DON'T SWITCH YET". Get the basearray into the python core and for sure I will be using that, whatever it is called. I was tempted to switch to numarray in the past because of the nd_image, but I don't see that in numpy just yet? Seeing this on the mailing list: >So far the vote is 8 for float, 1 for int. > ... is yet another hint that I can remain with Numeric as a library, at least until numpy has a frozen interface/behaviour. I am very supportive of the work going on but have some technical concerns about switching. To pick some examples, it appears that numpy.lib.function_base.median makes a copy, sorts and picks the middle element. Some reading at http://ndevilla.free.fr/median/median/index.html or even (eek!) numerical recipes indicates this is not good news. Not to single one routine out, I was also saddened to find both Numeric and numpy use double precision lapack routines for single precision arguments. 
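The median point can be made concrete: a full sort costs O(n log n), whereas selecting only the middle element(s) runs in O(n) expected time. A minimal sketch of the selection approach, assuming `np.partition` is available (an introselect-based routine that NumPy gained years after this thread):

```python
import numpy as np

def median_select(a):
    """Median via selection rather than a full sort.

    np.partition places the k-th smallest value at index k in O(n)
    expected time, so only the middle element(s) are located exactly.
    """
    a = np.asarray(a).ravel()
    n = a.size
    mid = n // 2
    if n % 2:                      # odd length: one middle element
        return np.partition(a, mid)[mid]
    part = np.partition(a, [mid - 1, mid])
    return 0.5 * (part[mid - 1] + part[mid])  # even: average the two middles
```

On large arrays this avoids the sort's log-factor entirely; the copy-sort-index version stays the simplest fallback when `partition` is unavailable.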
A diff of numpy's linalg.py with Numeric's LinearAlgebra.py goes a long way to explaining why there is resistance to change from Numeric to numpy. The boilerplate changes and you only get "norm" (which I am suspicious about - vector 2 norms are in blas, some matrix 2 norms are in lapack/*lange.f and computing all singular values when you only want the biggest or smallest one is a surprising algorithmic choice). I realise it might sound like harsh criticism - but I don't see what numpy adds for number crunching over and above Numeric. Clearly there *is* a lot more in terms of python integration, but I really don't want to do number crunching with python itself ;-) For numpy to really be better than Numeric I would like to find algorithm selections according to the array dimensions and type. Getting the basearray type into the python core is the key - then it makes sense to get the best of breed algorithms working as you can rely on the basearray being around for many years to come. Please please please get basearray into the python core! How can we help with that? Jon From aisaac at american.edu Fri Jun 30 13:22:30 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 13:22:30 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org><200606301029.42616.dd55@cornell.edu><20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: > On 6/30/06, Fernando Perez wrote: >> Besides, decent unit tests will catch these problems. We >> all know that every scientific code in existence is unit >> tested to the smallest routine, so this shouldn't be >> a problem for anyone. On Fri, 30 Jun 2006, Sasha apparently wrote: > Is this a joke? It had me chuckling. ;-) The dangers of email ... 
Cheers, Alan Isaac From fperez.net at gmail.com Fri Jun 30 13:25:06 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Jun 2006 11:25:06 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. Of course it's a joke. So obviously one for anyone who knows the field, that the smiley shouldn't be needed (and yes, I despise background laughs on television, too). Maybe a sad joke, given the realities of scientific computing, and maybe a poor joke, but at least an attempt at humor. Cheers, f From ndarray at mac.com Fri Jun 30 13:25:39 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 13:25:39 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <1151678166.16911.9.camel@convolution.mit.edu> References: <44A47854.1050106@ieee.org> <1151678166.16911.9.camel@convolution.mit.edu> Message-ID: Since I was almost alone with my negative vote on the float64 default, I decided to give some more thought to the issue. I agree there are strong reasons to make the change. In addition to the points in the original post, float64 type is much more closely related to the well-known Python float than int32 to Python long. For example no-one would be surprised by either >>> float64(0)/float64(0) nan or >>> float(0)/float(0) Traceback (most recent call last): File "", line 1, in ? ZeroDivisionError: float division but >>> int32(0)/int32(0) 0 is much more difficult to explain. 
As is >>> int32(2)**32 0 compared to >>> int(2)**32 4294967296L In short, arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. Another argument that I find compelling is that we are in a now or never situation. No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. If we decide to change the default, let's do it everywhere including array constructors and arange. The latter is more controversial, but I still think it is worth doing (will give reasons in future posts). Changing the defaults only in some functions or providing overrides to functions will only lead to more confusion. My revised vote is -0. On 6/30/06, Eric Jonas wrote: > I've got to say +1 for Float64 too. I write a lot of numpy code, and > this bites me at least once a week. You'd think I'd learn better, but > it's just so easy to screw this up when you have to switch back and > forth between matlab (which I'm forced to TA) and numpy (which I use for > Real Work). > > ...Eric From ndarray at mac.com Fri Jun 30 13:42:33 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 13:42:33 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: "In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included." --- Edsger W. Dijkstra, "How do we tell truths that might hurt?" 18 June 1975 I just miss the good old days ... On 6/30/06, Fernando Perez wrote: > On 6/30/06, Sasha wrote: > > On 6/30/06, Fernando Perez wrote: > > > ... 
> > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > Of course it's a joke. So obviously one for anyone who knows the > field, that the smiley shouldn't be needed (and yes, I despise > background laughs on television, too). Maybe a sad joke, given the > realities of scientific computing, and maybe a poor joke, but at least > an attempt at humor. > > Cheers, > > f > From lcordier at point45.com Fri Jun 30 14:05:08 2006 From: lcordier at point45.com (Louis Cordier) Date: Fri, 30 Jun 2006 20:05:08 +0200 (SAST) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID:
> Numeric-24.2 (released Nov. 11, 2005)
>
> 14275 py24.exe
> 2905 py23.exe
> 9144 tar.gz
>
> Numarray 1.5.1 (released Feb, 7, 2006)
>
> 10272 py24.exe
> 11883 py23.exe
> 12779 tar.gz
>
> NumPy 0.9.8 (May 17, 2006)
>
> 3713 py24.exe
> 558 py23.exe
> 4111 tar.gz
Here are some trends with a pretty picture. http://www.google.com/trends?q=numarray%2C+NumPy%2C+Numeric+Python Unfortunately, Numeric alone is too general a term to use. But I would say NumPy is looking good. ;) -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From oliphant at ee.byu.edu Fri Jun 30 14:13:19 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 12:13:19 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A55986.8040905@esrf.fr> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> Message-ID: <44A569BF.30501@ee.byu.edu> Jon, Thanks for the great feedback. You make some really good points. 
> > >Having {pointer + dimensions + strides + type} in the python core would >be an incredible step forward - this is far more important than changing >my python code to do functionally the same thing with numpy instead of >Numeric. > Guido has always wanted consensus before putting things into Python. We need to rally behind NumPy if we are going to get something of its infrastructure into Python itself. >As author of a (fairly obscure) secondary dependency package it is not >clear that this is the right time to convert. I very much admire the >matplotlib approach of using Numerix and see this as a better solution >than switching (or indeed re-writing in java/c++ etc). > I disagree with this approach. It's fine for testing and for transition, but it is a headache long term. You are basically supporting three packages. The community is not large enough to do that. I also think it leads people to consider adopting that approach instead of just switching. I'm not particularly thrilled with strategies that essentially promote the existence of three different packages. >However, looking >into the matplotlib SVN I see: > >_image.cpp 2420 4 weeks cmoad applied Andrew Straw's >numpy patch >numerix/_sp_imports.py 2478 2 weeks teoliphant Make >recent changes backward compatible with numpy 0.9.8 >numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant > Fix import error for new numpy > >While I didn't look at either the code or the diff the comments clearly >read as: "DON'T SWITCH YET". > I don't understand why you interpret it that way? When I moved old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure that matplotlib still works with numpy 0.9.8 (which has the old-style names in the main location). Why does this say "DON'T SWITCH"? If anything it should tell you that we are conscious of trying to keep things working together and compatible with current releases of NumPy. 
>Get the basearray into the python core and >for sure I will be using that, whatever it is called. I was tempted to >switch to numarray in the past because of the nd_image, but I don't see >that in numpy just yet? > > It is in SciPy where it belongs (you can also install it as a separate package). It builds and runs on top of NumPy just fine. In fact it was the predecessor to the now fully-capable-but-in-need-of-more-testing numarray C-API that is now in NumPy. >I am very supportive of the work going on but have some technical >concerns about switching. To pick some examples, it appears that >numpy.lib.function_base.median makes a copy, sorts and picks the middle >element. > I'm sure we need lots of improvements in the code-base. This has always been true. We rely on the ability of contributors which doesn't work well unless we have a lot of contributors which are hard to get unless we consolidate around a single array package. Please contribute a fix. >single one routine out, I was also saddened to find both Numeric and >numpy use double precision lapack routines for single precision >arguments. > The point of numpy.linalg is to provide the functionality of Numeric not extend it. This is because SciPy provides a much more capable linalg sub-package that works with single and double precision. It sounds like you want SciPy. >For numpy to really be better than Numeric I would like to find >algorithm selections according to the array dimensions and type. > These are good suggestions but for SciPy. The linear algebra in NumPy is just for getting your feet wet and having access to basic functionality. >Getting >the basearray type into the python core is the key - then it makes sense >to get the best of breed algorithms working as you can rely on the >basearray being around for many years to come. > >Please please please get basearray into the python core! How can we help >with that? 
> > There is a PEP in SVN (see the array interface link at http://numeric.scipy.org) Karol Langner is a Google summer-of-code student working on it this summer. I'm not sure how far he'll get, but I'm hopeful. I could spend more time on it, if I had funding to do it, but right now I'm up against a wall. Again, thanks for the feedback. Best, -Travis From chanley at stsci.edu Fri Jun 30 14:30:41 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 30 Jun 2006 14:30:41 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A569BF.30501@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> Message-ID: <44A56DD1.5050907@stsci.edu> >>Get the basearray into the python core and >>for sure I will be using that, whatever it is called. I was tempted to >>switch to numarray in the past because of the nd_image, but I don't see >>that in numpy just yet? >> >> > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > Hi Travis, Where can one find and download nd_image separate from the rest of scipy? As for the the numarray C-API, we are currently doing testing here at STScI. Chris From jonathan.taylor at utoronto.ca Fri Jun 30 14:42:33 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Fri, 30 Jun 2006 14:42:33 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A569BF.30501@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> Message-ID: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> +1 for some sort of float. I am a little confused as to why Float64 is a particularly good choice. Can someone explain in more detail? 
Presumably this is the most sensible ctype and translates to a python float well? In general though I agree that this is a now or never change. I suspect we will change a lot of matlab -> Numeric/numarray transitions into matlab -> numpy transitions with this change. I guess it will take a little longer for 1.0 to get out though :( Ah well. Cheers. Jon. On 6/30/06, Travis Oliphant wrote: > Jon, > > Thanks for the great feedback. You make some really good points. > > > > > > >Having {pointer + dimensions + strides + type} in the python core would > >be an incredible step forward - this is far more important than changing > >my python code to do functionally the same thing with numpy instead of > >Numeric. > > > Guido has always wanted consensus before putting things into Python. We > need to rally behind NumPy if we are going to get something of it's > infrastructure into Python itself. > > >As author of a (fairly obscure) secondary dependency package it is not > >clear that this is right time to convert. I very much admire the > >matplotlib approach of using Numerix and see this as a better solution > >than switching (or indeed re-writing in java/c++ etc). > > > I disagree with this approach. It's fine for testing and for > transition, but it is a headache long term. You are basically > supporting three packages. The community is not large enough to do > that. I also think it leads people to consider adopting that approach > instead of just switching. I'm not particularly thrilled with > strategies that essentially promote the existence of three different > packages. 
> > >However, looking > >into the matplotlib SVN I see: > > > >_image.cpp 2420 4 weeks cmoad applied Andrew Straw's > >numpy patch > >numerix/_sp_imports.py 2478 2 weeks teoliphant Make > >recent changes backward compatible with numpy 0.9.8 > >numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant > > Fix import error for new numpy > > > >While I didn't look at either the code or the diff the comments clearly > >read as: "DON'T SWITCH YET". > > > I don't understand why you interpret it that way? When I moved > old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure > that matplotlib still works with numpy 0.9.8 (which has the old-style > names in the main location). > > Why does this say "DON'T SWITCH"? If anything it should tell you that > we are conscious of trying to keep things working together and > compatible with current releases of NumPy. > > >Get the basearray into the python core and > >for sure I will be using that, whatever it is called. I was tempted to > >switch to numarray in the past because of the nd_image, but I don't see > >that in numpy just yet? > > > > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > > >I am very supportive of the work going on but have some technical > >concerns about switching. To pick some examples, it appears that > >numpy.lib.function_base.median makes a copy, sorts and picks the middle > >element. > > > I'm sure we need lots of improvements in the code-base. This has > always been true. We rely on the ability of contributors which doesn't > work well unless we have a lot of contributors which are hard to get > unless we consolidate around a single array package. Please contribute a > fix. 
> > >single one routine out, I was also saddened to find both Numeric and > >numpy use double precision lapack routines for single precision > >arguments. > > > The point of numpy.linalg is to provide the functionality of Numeric not > extend it. This is because SciPy provides a much more capable linalg > sub-package that works with single and double precision. It sounds > like you want SciPy. > > >For numpy to really be better than Numeric I would like to find > >algorithm selections according to the array dimensions and type. > > > These are good suggestions but for SciPy. The linear algebra in NumPy > is just for getting your feet wet and having access to basic > functionality. > > >Getting > >the basearray type into the python core is the key - then it makes sense > >to get the best of breed algorithms working as you can rely on the > >basearray being around for many years to come. > > > >Please please please get basearray into the python core! How can we help > >with that? > > > > > There is a PEP in SVN (see the array interface link at > http://numeric.scipy.org) Karol Langner is a Google summer-of-code > student working on it this summer. I'm not sure how far he'll get, but > I'm hopeful. > > I could spend more time on it, if I had funding to do it, but right now > I'm up against a wall. > > Again, thanks for the feedback. > > Best, > > -Travis > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From matthew.brett at gmail.com Fri Jun 30 14:48:06 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 30 Jun 2006 19:48:06 +0100 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <1e2af89e0606301148v16fef51bu8740ac7db09d2241@mail.gmail.com> Just one more vote for float. On the basis that Travis mentioned, of all those first-timers downloading, trying, finding something they didn't expect that was rather confusing, and giving up. From aisaac at american.edu Fri Jun 30 15:02:47 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 15:02:47 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org><44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr><44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: On Fri, 30 Jun 2006, Jonathan Taylor apparently wrote: > In general though I agree that this is a now or never change. Sasha has also made that argument. I see one possible additional strategy. I think everyone agrees that the long view is important. Now even Sasha agrees that float64 is the best default. Suppose 1. float64 is the ideal default (I agree with this) 2. 
there is substantial concern about the change of default on extant code for the unwary. One approach proposed is to include a different function definition in a compatibility module. This seems acceptable to me, but as Sasha notes it is not without drawbacks. Here is another possibility: transition by requiring an explicit data type for some period of time (say, 6-12 months). After that time, provide the default of float64. This would require some short term pain, but for the long term gain of the desired outcome. Just a thought, Alan Isaac PS I agree with Sasha's following observations: "arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. ... No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. ... If we decide to change the default, let's do it everywhere including array constructors and arange." From oliphant at ee.byu.edu Fri Jun 30 14:55:27 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 12:55:27 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <44A5739F.7020701@ee.byu.edu> Jonathan Taylor wrote: >+1 for some sort of float. I am a little confused as to why Float64 >is a particularly good choice. Can someone explain in more detail? >Presumably this is the most sensible ctype and translates to a python >float well? > > O.K. I'm convinced that we should change to float as the default, but *everywhere* as Sasha says. We will provide two tools to make the transition easier. 
1) The numpy.oldnumeric sub-package will contain definitions of changed functions that keep the old defaults (integer). This is what convertcode replaces for import Numeric calls so future users who make the transition won't really notice. 2) A function/script that can be run to convert all type-less uses of the changed functions to explicitly insert dtype=int. Yes, it will be a bit painful (I made the change and count 6 failures in NumPy tests and 34 in SciPy). But, it sounds like there is support for doing it. And yes, we must do it prior to 1.0 if we do it at all. Comments? -Travis From cookedm at physics.mcmaster.ca Fri Jun 30 14:59:28 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 14:59:28 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <20060630145928.3450b0b1@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 14:42:33 -0400 "Jonathan Taylor" wrote: > +1 for some sort of float. I am a little confused as to why Float64 > is a particularly good choice. Can someone explain in more detail? > Presumably this is the most sensible ctype and translates to a python > float well? It's "float64", btw. Float64 is the old Numeric name. Python's "float" type is a C "double" (just like Python's "int" is a C "long"). In practice, C doubles are 64-bit. 
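A quick sketch verifying those correspondences (assuming a mainstream platform, where a C double is 64 bits wide):

```python
import ctypes
import numpy as np

# Python's float is a C double, and on mainstream platforms that is 64 bits;
# numpy's float64 matches it exactly, which is why the two interoperate cleanly.
assert ctypes.sizeof(ctypes.c_double) * 8 == 64
assert np.dtype(np.float64).itemsize * 8 == 64
assert np.dtype(float) == np.dtype(np.float64)
# A C long (what Python 2's int wrapped) is platform-dependent: 32 or 64 bits.
assert ctypes.sizeof(ctypes.c_long) in (4, 8)
```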
In NumPy, these are the same type: float32 == single (32-bit float, which is a C float) float64 == double (64-bit float, which is a C double) Also, some Python types have equivalent NumPy types (as in, they can be used interchangably as dtype arguments): int == long (C long, could be int32 or int64) float == double complex == cdouble (also complex128) Personally, I'd suggest using "single", "float", and "longdouble" in numpy code. [While we're on the subject, for portable code don't use float96 or float128: one or other or both probably won't exist; use longdouble]. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Fri Jun 30 15:11:18 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 15:11:18 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5739F.7020701@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org><44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu><463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: On Fri, 30 Jun 2006, Travis Oliphant apparently wrote: > I'm convinced that we should change to float as the > default, but everywhere as Sasha says. Even better! Cheers, Alan Isaac From robert.kern at gmail.com Fri Jun 30 15:02:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 14:02:23 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5739F.7020701@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: Travis Oliphant wrote: > Comments? Whatever else you do, leave arange() alone. It should never have accepted floats in the first place. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Jun 30 15:17:11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 12:17:11 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A534BA.8040802@cox.net> References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: <44A578B7.40004@noaa.gov> Tim Hochberg wrote: > The number one priority for numpy should be to unify the three disparate > Python numeric packages. I think the number one priority should be the best it can be. As someone said, two (or ten) years from now, there will be more new users than users migrating from the older packages. > Personally, given no other constraints, I would probably just get rid of > the defaults all together and make the user choose. I like that too, and it would keep the incompatibility from causing silent errors. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cookedm at physics.mcmaster.ca Fri Jun 30 15:19:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 15:19:26 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> Message-ID: <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:46:07 -0600 "Fernando Perez" wrote: > On 6/28/06, David M. Cooke wrote: > > > [Really, distutils sucks. I think (besides refactoring) it needs it's API > > documented better, or least good conventions on where to hook into. 
> > setuptools and numpy.distutils do their best, but there's only so much you > > can do before everything goes fragile and breaks in unexpected ways.] > > I do hate distutils, having fought it for a long time. Its piss-poor > dependency checking is one of its /many/ annoyances. For a package > with as long a compile time as scipy, it really sucks not to be able > to just modify random source files and trust that it will really > recompile what's needed (no more, no less). > > Anyway, thanks for heeding this one. Hopefully one day somebody will > do the (painful) work of replacing distutils with something that > actually works (perhaps using scons for the build engine...) Until > then, we'll trod along with massively unnecessary rebuilds :) I've tried using SCons -- still don't like it. It's python, but it's too unpythonic for me. (The build engine itself is probably fine, though.) A complete replacement for distutils isn't needed: bits and pieces can be replaced at a time (it gets harder if you've got two packages like setuptools and numpy.distutils trying to improve it, though). For instance, the CCompiler class could be replaced in whole with a rewrite, keeping what could be considered the public API. I've done this before with a version of UnixCCompiler that let me specify a "toolchain": which C compiler and C++ compiler worked together, which linker to use for them, and associated flags. I'm working (slowly) on a rewrite of commands/build_ext.py in numpy.distutils that should keep track of source dependencies better, for instance. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Fri Jun 30 15:23:57 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 12:23:57 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: <44A57A4D.3010605@noaa.gov> Robert Kern wrote: > Whatever else you do, leave arange() alone. It should never have accepted floats > in the first place. Just to make sure we're clear: Because one should use linspace() for that? If so, this would be the time to raise an error (or at least a deprecation warning) when arange() is called with Floats. I have a LOT of code that does that! In fact, I posted a question here recently and got a lot of answers and suggested code, and not one person suggested that I shouldn't use arange() with floats. Did Numeric have linspace()? It doesn't look like it to me. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant at ee.byu.edu Fri Jun 30 15:25:23 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 13:25:23 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: <44A57AA3.1040405@ee.byu.edu> Robert Kern wrote: >Travis Oliphant wrote: > > > >>Comments? >> >> > >Whatever else you do, leave arange() alone. It should never have accepted floats >in the first place. > > Actually, Robert makes a good point. 
arange with floats is problematic. We should direct people to linspace instead of changing the default of arange. Most new users will probably expect arange to return a type similar to Python's range, which is int. Also: Keeping arange as ints reduces the number of errors from the change in the unit tests to 2 in NumPy and 3 in SciPy. So, I think from both a pragmatic and idealized situation, arange should stay with the default of ints. People who want arange to return floats should be directed to linspace. -Travis From sransom at nrao.edu Fri Jun 30 15:44:38 2006 From: sransom at nrao.edu (Scott Ransom) Date: Fri, 30 Jun 2006 15:44:38 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A57AA3.1040405@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> Message-ID: <20060630194438.GA6065@ssh.cv.nrao.edu> On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: > Robert Kern wrote: > > >Whatever else you do, leave arange() alone. It should never have accepted floats > >in the first place. > > > Actually, Robert makes a good point. arange with floats is > problematic. We should direct people to linspace instead of changing > the default of arange. Most new users will probably expect arange to > return a type similar to Python's range, which is int. ... > So, I think from both a pragmatic and idealized situation, arange > should stay with the default of ints. People who want arange to return > floats should be directed to linspace. I agree that arange with floats is problematic. However, if you want, for example, arange(10.0) (as I often do), you have to do: linspace(0.0, 9.0, 10), which is very un-pythonic and not at all what a new user would expect...
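The sense in which a float step is "problematic" can be made concrete with a small sketch. The helper below is hypothetical (it is not NumPy's actual source); it mimics the length rule arange is generally described as using, length = ceil((stop - start) / step), and shows how float rounding in that expression makes the element count hard to predict:

```python
import math

def arange_length(start, stop, step):
    # Hypothetical stand-in for the arange length rule:
    # number of elements = ceil((stop - start) / step)
    return int(math.ceil((stop - start) / step))

# With integers the length is always what you expect:
assert arange_length(0, 10, 1) == 10

# With floats, rounding in (stop - start) / step can add an element:
print(arange_length(0.0, 1.0, 0.1))   # 10, as expected here
print(arange_length(0.5, 0.8, 0.1))   # 4 -- one might expect 3
```

In the second case (0.8 - 0.5) evaluates to 0.30000000000000004, so dividing by 0.1 lands just above 3 and the ceiling adds a fourth element. This is the argument for linspace, which takes an explicit element count instead of a step.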
I think of linspace as a convenience function, not as a replacement for arange with floats. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From jonas at MIT.EDU Fri Jun 30 15:45:38 2006 From: jonas at MIT.EDU (Eric Jonas) Date: Fri, 30 Jun 2006 15:45:38 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <1151696738.16911.12.camel@convolution.mit.edu> On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote: > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measure the coverage of numpy > unittests? I would be surprised if it was more than 10%. Given the coverage is so low, how can people help by contributing unit tests? Are there obvious areas with poor coverage? Travis, do you have any opinions on this? ...Eric From robert.kern at gmail.com Fri Jun 30 15:54:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 14:54:30 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630194438.GA6065@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: Scott Ransom wrote: > On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: >> Robert Kern wrote: >> >>> Whatever else you do, leave arange() alone. It should never have accepted floats
>>> >> Actually, Robert makes a good point. arange with floats is >> problematic. We should direct people to linspace instead of changing >> the default of arange. Most new users will probably expect arange to >> return a type similar to Python's range, which is int. > ... >> So, I think from both a pragmatic and idealized situation, arange >> should stay with the default of ints. People who want arange to return >> floats should be directed to linspace. > > I agree that arange with floats is problematic. However, > if you want, for example, arange(10.0) (as I often do), you have > to do: linspace(0.0, 9.0, 10), which is very un-pythonic and not > at all what a new user would expect... > > I think of linspace as a convenience function, not as a > replacement for arange with floats. I don't mind arange(10.0) so much, now that it exists. I would mind, a lot, about arange(10) returning a float64 array. Similarity to the builtin range() is much more important in my mind than an arbitrary "consistency" with ones() and zeros(). It's arange(0.0, 1.0, 0.1) that I think causes the most problems with arange and floats. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 30 16:02:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 15:02:28 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A57A4D.3010605@noaa.gov> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57A4D.3010605@noaa.gov> Message-ID: Christopher Barker wrote: > Robert Kern wrote: >> Whatever else you do, leave arange() alone. It should never have accepted floats
> > Just to make sure we're clear: Because one should use linspace() for that? More or less. Depending on the step and endpoint that you choose, it can be nearly impossible for the programmer to predict how many elements are going to be generated. > If so, this would be the time to raise an error (or at least a > deprecation warning) when arange() is called with floats. > > I have a LOT of code that does that! In fact, I posted a question here > recently and got a lot of answers and suggested code, and not one person > suggested that I shouldn't use arange() with floats. I should have been more specific, but I did express disapproval in the code sample I gave: x = arange(minx, maxx+step, step) # oy. Since your question wasn't about that specifically, I used the technique that your original sample did. > Did Numeric have linspace()? It doesn't look like it to me. It doesn't. It was originally contributed to Scipy by Fernando, IIRC. It's small, so it is easy to copy if you need to maintain support for Numeric, still. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bhendrix at enthought.com Fri Jun 30 16:06:58 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 30 Jun 2006 15:06:58 -0500 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> Message-ID: <44A58462.80902@enthought.com> David M. Cooke wrote: > >>> [Really, distutils sucks. I think (besides refactoring) it needs its API >>> documented better, or at least good conventions on where to hook into.
>>> setuptools and numpy.distutils do their best, but there's only so much you >>> can do before everything goes fragile and breaks in unexpected ways.] >>> >> I do hate distutils, having fought it for a long time. Its piss-poor >> dependency checking is one of its /many/ annoyances. For a package >> with as long a compile time as scipy, it really sucks not to be able >> to just modify random source files and trust that it will really >> recompile what's needed (no more, no less). >> >> Anyway, thanks for heeding this one. Hopefully one day somebody will >> do the (painful) work of replacing distutils with something that >> actually works (perhaps using scons for the build engine...) Until >> then, we'll trod along with massively unnecessary rebuilds :) >> > > I've tried using SCons -- still don't like it. It's python, but it's too > unpythonic for me. (The build engine itself is probably fine, though.) > Agreed, last time I used it was almost a year ago, so it might have changed, but SCons does a quasi-two-stage build that is very unnatural. If you have python code nested between two build events, the python code is executed and the build events are queued. BUT its dependency management is great. Distutils suffers from two major problems as far as I am concerned: setup.py files often contain way too much business logic and verbiage for casual python developers, and worst-in-class dependency checking. I've been considering moving all Enthought projects to SCons. If another large project, such as scipy, were to go that way, it would make my decision much simpler. Bryce -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant at ee.byu.edu Fri Jun 30 16:11:21 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 14:11:21 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630194438.GA6065@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: <44A58569.9080504@ee.byu.edu> Scott Ransom wrote: >On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: > > >>Robert Kern wrote: >> >> >> >>>Whatever else you do, leave arange() alone. It should never have accepted floats >>>in the first place. >>> >>> >>> >>Actually, Robert makes a good point. arange with floats is >>problematic. We should direct people to linspace instead of changing >>the default of arange. Most new users will probably expect arange to >>return a type similar to Python's range, which is int. >> >> >... > > >>So, I think from both a pragmatic and idealized situation, arange >>should stay with the default of ints. People who want arange to return >>floats should be directed to linspace. >> >> I should have worded this as: "People who want arange to return floats *as a default* should be directed to linspace" So, basically, arange is not going to change. Because of this, shifting over was a cinch. I still need to write the convert-script code that inserts dtype=int in routines that use old defaults: *plea* anybody want to write that?? -Travis From mark at mitre.org Fri Jun 30 16:16:46 2006 From: mark at mitre.org (Mark Heslep) Date: Fri, 30 Jun 2006 16:16:46 -0400 Subject: [Numpy-discussion] A.
Martelli on Numeric/Numpy Message-ID: <44A586AE.5080803@mitre.org> FYI, posted Sunday on python: "...even if the hard-core numeric-python people are all evangelizing for migration to numpy (for reasons that are of course quite defensible!), I think it's quite OK to stick with good old Numeric for the moment (and that's exactly what I do for my own personal use!)" "...Numeric has pretty good documentation (numpy's is probably even better, but it is not available for free, so I don't know!), and if you don't find that documentation sufficient you might want to have a look to my book "Python in a Nutshell" which devotes a chapter to Numeric..." http://groups.google.com/group/comp.lang.python/tree/browse_frm/thread/e5479dac51b6e481/fc475de9fd1b9669?rnum=1&q=martelli&_done=%2Fgroup%2Fcomp.lang.python%2Fbrowse_frm%2Fthread%2Fe5479dac51b6e481%2Fe282e6e2c9d4fc77%3Fq%3Dmartelli%26rnum%3D6%26#doc_55e0c696cb4aea87 Mark From kwgoodman at gmail.com Fri Jun 30 16:37:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 13:37:01 -0700 Subject: [Numpy-discussion] Matrix print plea Message-ID: When an array is printed, the numbers line up in nice columns (if you're using a fixed-width font): array([[0, 0], [0, 0]]) But for matrices the columns do not line up: matrix([[0, 0], [0, 0]]) From cookedm at physics.mcmaster.ca Fri Jun 30 16:38:43 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 16:38:43 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 12:35:35 -0400 Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. 
We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measure the coverage of numpy > unittests? I would be surprised if it was more than 10%. A very quick application of the coverage module, available at http://www.garethrees.org/2001/12/04/python-coverage/ gives me 41%:

Name                            Stmts   Exec  Cover
---------------------------------------------------
numpy                              25     20    80%
numpy._import_tools               235    175    74%
numpy.add_newdocs                   2      2   100%
numpy.core                         28     26    92%
numpy.core.__svn_version__          1      1   100%
numpy.core._internal               99     48    48%
numpy.core.arrayprint             251     92    36%
numpy.core.defchararray           221     58    26%
numpy.core.defmatrix              259    186    71%
numpy.core.fromnumeric            319    153    47%
numpy.core.info                     3      3   100%
numpy.core.ma                    1612   1145    71%
numpy.core.memmap                  64     14    21%
numpy.core.numeric                323    138    42%
numpy.core.numerictypes           236    204    86%
numpy.core.records                272     32    11%
numpy.dft                           6      4    66%
numpy.dft.fftpack                 128     31    24%
numpy.dft.helper                   35     32    91%
numpy.dft.info                      3      3   100%
numpy.distutils                    13      9    69%
numpy.distutils.__version__         4      4   100%
numpy.distutils.ccompiler         296     49    16%
numpy.distutils.exec_command      409     27     6%
numpy.distutils.info                2      2   100%
numpy.distutils.log                37     18    48%
numpy.distutils.misc_util         945    174    18%
numpy.distutils.unixccompiler      34     11    32%
numpy.dual                         41     27    65%
numpy.f2py.info                     2      2   100%
numpy.lib                          30     28    93%
numpy.lib.arraysetops             121     59    48%
numpy.lib.function_base           501     70    13%
numpy.lib.getlimits                76     61    80%
numpy.lib.index_tricks            223     56    25%
numpy.lib.info                      4      4   100%
numpy.lib.machar                  174    154    88%
numpy.lib.polynomial              357     52    14%
numpy.lib.scimath                  51     19    37%
numpy.lib.shape_base              220     24    10%
numpy.lib.twodim_base              77     51    66%
numpy.lib.type_check              110     75    68%
numpy.lib.ufunclike                37     24    64%
numpy.lib.utils                    42     23    54%
numpy.linalg                        5      3    60%
numpy.linalg.info                   2      2   100%
numpy.linalg.linalg               440     71    16%
numpy.random                       10      6    60%
numpy.random.info                   4      4   100%
numpy.testing                       3      3   100%
numpy.testing.info                  2      2   100%
numpy.testing.numpytest           430    214    49%
numpy.testing.utils               151     62    41%
numpy.version                       7      7   100%
---------------------------------------------------
TOTAL                            8982   3764    41%

(I filtered out all the *.tests.* modules). Note that you have to import numpy after starting the coverage, because we use a lot of module-level code that wouldn't be caught otherwise. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Fri Jun 30 16:40:39 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 13:40:39 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: <44A58C47.9080700@noaa.gov> Robert Kern wrote: > It's arange(0.0, 1.0, 0.1) that I think causes the most problems with arange and > floats. actually, much to my surprise:

>>> import numpy as N
>>> N.arange(0.0, 1.0, 0.1)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

But I'm sure there are other examples that don't work out. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cookedm at physics.mcmaster.ca Fri Jun 30 16:46:19 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 30 Jun 2006 16:46:19 -0400 Subject: [Numpy-discussion] Matrix print plea In-Reply-To: References: Message-ID: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 13:37:01 -0700 "Keith Goodman" wrote: > When an array is printed, the numbers line up in nice columns (if > you're using a fixed-width font): > > array([[0, 0], > [0, 0]]) > > But for matrices the columns do not line up: > > matrix([[0, 0], > [0, 0]]) Fixed in SVN. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ndarray at mac.com Fri Jun 30 16:49:53 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 16:49:53 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: As soon as I sent out my 10% estimate, I realized that someone will challenge it with a python level coverage statistics. My main concern is not what fraction of numpy functions is called by unit tests, but what fraction of special cases in the C code is exercised. I am not sure that David's statistics even answers the first question - I would guess it only counts statements in the pure python methods and ignores methods implemented in C. Can someone post C-level statistics from gcov or a similar tool? On 6/30/06, David M. Cooke wrote: > On Fri, 30 Jun 2006 12:35:35 -0400 > Sasha wrote: > > > On 6/30/06, Fernando Perez wrote: > > > ... > > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? 
Did anyone ever measure the coverage of numpy > > unittests? I would be surprised if it was more than 10%. > > A very quick application of the coverage module, available at > http://www.garethrees.org/2001/12/04/python-coverage/ > gives me 41%: > > [per-module coverage table snipped; see David's earlier message] > --------------------------------------------------- > TOTAL 8982 3764 41% > > (I filtered out all the *.tests.* modules). Note that you have to import > numpy after starting the coverage, because we use a lot of module-level code > that wouldn't be caught otherwise. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From kwgoodman at gmail.com Fri Jun 30 16:56:12 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 13:56:12 -0700 Subject: [Numpy-discussion] Matrix print plea In-Reply-To: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> References: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> Message-ID: On 6/30/06, David M. Cooke wrote: > On Fri, 30 Jun 2006 13:37:01 -0700 > "Keith Goodman" wrote: > > > When an array is printed, the numbers line up in nice columns (if > > you're using a fixed-width font): > > > > array([[0, 0], > > [0, 0]]) > > > > But for matrices the columns do not line up: > > > > matrix([[0, 0], > > [0, 0]]) > > Fixed in SVN. Thank you! All of the recent improvements to matrices will eventually bring many new numpy users. From travis at enthought.com Fri Jun 30 16:59:20 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 15:59:20 -0500 Subject: [Numpy-discussion] ANN: SciPy 2006 Conference Reminder Message-ID: <44A590A8.5040705@enthought.com> The *SciPy 2006 Conference* is scheduled for Thursday and Friday, August 17-18, 2006 at CalTech with Sprints and Tutorials Monday-Wednesday, August 14-16. Conference details are at http://www.scipy.org/SciPy2006 The deadlines for submitting abstracts and early registration are approaching...
Call for Presenters ------------------- If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 7, 2006. Papers and/or presentation slides are acceptable and are due by August 4, 2006. Registration: ------------- Early registration ($100.00) is still available through July 14. You may register online at http://www.enthought.com/scipy06. Registration includes breakfast and lunch Thursday & Friday and a very nice dinner Thursday night. After July 14, 2006, registration will cost $150.00. Tutorials and Sprints --------------------- This year the Sprints (Monday and Tuesday, August 14-15) and Tutorials (Wednesday, August 16) are no additional charge (you're on your own for food on those days, though). Remember to include these days in your travel plans. The following topics are presented as Tutorials Wednesday (more info here: http://www.scipy.org/SciPy2006/TutorialSessions): - "3D visualization in Python using tvtk and MayaVi" - "Scientific Data Analysis and Visualization using IPython and Matplotlib." - "Building Scientific Applications using the Enthought Tool Suite (Envisage, Traits, Chaco, etc.)" - "NumPy (migration from Numarray & Numeric, overview of NumPy)" The Sprint topics are under discussion here: http://www.scipy.org/SciPy2006/CodingSprints See you in August! Travis From ndarray at mac.com Fri Jun 30 18:10:05 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:10:05 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: It is not as bad as I thought, but there is certainly room for improvement. 
File `numpy/core/src/multiarraymodule.c'
Lines executed:63.56% of 3290

File `numpy/core/src/arrayobject.c'
Lines executed:59.70% of 5280

File `numpy/core/src/scalartypes.inc.src'
Lines executed:31.67% of 963

File `numpy/core/src/arraytypes.inc.src'
Lines executed:47.35% of 868

File `numpy/core/src/arraymethods.c'
Lines executed:57.65% of 739

On 6/30/06, Sasha wrote: > As soon as I sent out my 10% estimate, I realized that someone will > challenge it with a python level coverage statistics. My main concern > is not what fraction of numpy functions is called by unit tests, but > what fraction of special cases in the C code is exercised. I am not > sure that David's statistics even answers the first question - I would > guess it only counts statements in the pure python methods and > ignores methods implemented in C. > > Can someone post C-level statistics from gcov > or a similar tool? > > On 6/30/06, David M. Cooke wrote: > > On Fri, 30 Jun 2006 12:35:35 -0400 > > Sasha wrote: > > > > > On 6/30/06, Fernando Perez wrote: > > > > ... > > > > Besides, decent unit tests will catch these problems. We all know > > > > that every scientific code in existence is unit tested to the smallest > > > > routine, so this shouldn't be a problem for anyone. > > > > > > Is this a joke? Did anyone ever measure the coverage of numpy > > > unittests? I would be surprised if it was more than 10%. > > A very quick application of the coverage module, available at > > http://www.garethrees.org/2001/12/04/python-coverage/ > > gives me 41%: > > > > [per-module coverage table snipped] > > --------------------------------------------------- > > TOTAL 8982 3764 41% > > > > (I filtered out all the *.tests.* modules). Note that you have to import > > numpy after starting the coverage, because we use a lot of module-level code > > that wouldn't be caught otherwise. > > > > -- > > |>|\/|< > > /--------------------------------------------------------------------------\ > > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > > |cookedm at physics.mcmaster.ca > > From oliphant at ee.byu.edu Fri Jun 30 18:20:49 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 16:20:49 -0600 Subject: [Numpy-discussion] ***[Possible UCE]*** Re: Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: <44A5A3C1.70904@ee.byu.edu> Sasha wrote: >It is not as bad as I thought, but there is certainly room for improvement. > >File `numpy/core/src/multiarraymodule.c' >Lines executed:63.56% of 3290 > >File `numpy/core/src/arrayobject.c' >Lines executed:59.70% of 5280 > >File `numpy/core/src/scalartypes.inc.src' >Lines executed:31.67% of 963 > >File `numpy/core/src/arraytypes.inc.src' >Lines executed:47.35% of 868 > >File `numpy/core/src/arraymethods.c' >Lines executed:57.65% of 739 > > > > > This is great. How did you generate that? This is exactly the kind of thing we need to be doing for the beta release cycle. I would like these numbers very close to 100% by the time 1.0 final comes out at the end of August / first of September. But, we need help to write the unit tests. What happens if you run the scipy test suite?
-Travis From ndarray at mac.com Fri Jun 30 18:21:21 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:21:21 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: "Software developers also use coverage testing in concert with testsuites, to make sure software is actually good enough for a release. " -- Gcov Manual I think if we can improve the test coverage, it will speak volumes about the quality of numpy. Does anyone know if it is possible to instrument numpy libraries without having to instrument python itself? It would be nice to make the coverage reports easily available either by including a generating script with the source distribution or by publishing the reports for the releases. On 6/30/06, Sasha wrote: > It is not as bad as I thought, but there is certainly room for improvement. > > File `numpy/core/src/multiarraymodule.c' > Lines executed:63.56% of 3290 > > File `numpy/core/src/arrayobject.c' > Lines executed:59.70% of 5280 > > File `numpy/core/src/scalartypes.inc.src' > Lines executed:31.67% of 963 > > File `numpy/core/src/arraytypes.inc.src' > Lines executed:47.35% of 868 > > File `numpy/core/src/arraymethods.c' > Lines executed:57.65% of 739 > > > > On 6/30/06, Sasha wrote: > > As soon as I sent out my 10% estimate, I realized that someone will > > challenge it with a python level coverage statistics. My main concern > > is not what fraction of numpy functions is called by unit tests, but > > what fraction of special cases in the C code is exercised. I am not > > sure that David's statistics even answers the first question - I would > > guess it only counts statements in the pure python methods and > > ignores methods implemented in C. > > > > Can someone post C-level statistics from gcov > > or a similar tool? 
> > > > On 6/30/06, David M. Cooke wrote: > > > On Fri, 30 Jun 2006 12:35:35 -0400 > > > Sasha wrote: > > > > > > > On 6/30/06, Fernando Perez wrote: > > > > > ... > > > > > Besides, decent unit tests will catch these problems. We all know > > > > > that every scientific code in existence is unit tested to the smallest > > > > > routine, so this shouldn't be a problem for anyone. > > > > > > > > Is this a joke? Did anyone ever measure the coverage of numpy > > > > unittests? I would be surprised if it was more than 10%. > > > > > > A very quick application of the coverage module, available at > > > http://www.garethrees.org/2001/12/04/python-coverage/ > > > gives me 41%: > > > > > > [per-module coverage table snipped] > > > --------------------------------------------------- > > > TOTAL 8982 3764 41% > > > > > > (I filtered out all the *.tests.* modules). Note that you have to import > > > numpy after starting the coverage, because we use a lot of module-level code > > > that wouldn't be caught otherwise. > > > > > > -- > > > |>|\/|< > > > /--------------------------------------------------------------------------\ > > > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > > > |cookedm at physics.mcmaster.ca > > > > From ndarray at mac.com Fri Jun 30 18:31:45 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:31:45 -0400 Subject: [Numpy-discussion] ***[Possible UCE]*** Re: Time for beta1 of NumPy 1.0 In-Reply-To: <44A5A3C1.70904@ee.byu.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5A3C1.70904@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > This is great. How did you generate [the coverage statistic]? > It was really a hack. I've configured python using $ ./configure --enable-debug CC="gcc -fprofile-arcs -ftest-coverage" CXX="c++ gcc -fprofile-arcs -ftest-coverage" (I hate distutils!) Then I installed numpy and ran numpy.test().
Some linalg related tests failed which should be fixed by figuring out how to pass -fprofile-arcs -ftest-coverage options to the fortran compiler. The only non-obvious step in using gcov was that I had to tell it where to find object files: $ gcov -o build/temp.linux-x86_64-2.4/numpy/core/src numpy/core/src/*.c > ... > What happens if you run the scipy test suite? I don't know because I don't use scipy. Sorry. From ndarray at mac.com Fri Jun 30 18:41:59 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:41:59 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A58569.9080504@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> <44A58569.9080504@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > ... I still need to write the > convert-script code that inserts dtype=int > in routines that use old defaults: *plea* anybody want to write that?? > I will try to do it at some time over the long weekend. I was bitten by the fact that the current convert-script changes anything that resembles an old typecode such as 'b' regardless of context. (I was unlucky to have database columns called 'b'!) Fixing that is very similar to the problem at hand. From jonathan.taylor at stanford.edu Fri Jun 30 18:46:04 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 30 Jun 2006 15:46:04 -0700 Subject: [Numpy-discussion] byteorder question Message-ID: <44A5A9AC.5070707@stanford.edu> In some earlier code (at least one of) the following worked fine. I just want to get a new type that is a byteswap of, say, float64 because I want to memmap an array with a non-native byte order. Any suggestions? 
Thanks, Jonathan ------------------------------------------ Python 2.4.3 (#2, Apr 27 2006, 14:43:58) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '0.9.9.2716' >>> d=numpy.float64 >>> swapped=d.newbyteorder('big') Traceback (most recent call last): File "", line 1, in ? TypeError: descriptor 'newbyteorder' requires a 'genericscalar' object but received a 'str' >>> swapped=d.newbyteorder('>') Traceback (most recent call last): File "", line 1, in ? TypeError: descriptor 'newbyteorder' requires a 'genericscalar' object but received a 'str' >>> -- ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From oliphant at ee.byu.edu Fri Jun 30 19:01:10 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:01:10 -0600 Subject: [Numpy-discussion] byteorder question In-Reply-To: <44A5A9AC.5070707@stanford.edu> References: <44A5A9AC.5070707@stanford.edu> Message-ID: <44A5AD36.7070906@ee.byu.edu> Jonathan Taylor wrote: > In some earlier code (at least one of) the following worked fine. I > just want > to get a new type that is a byteswap of, say, float64 because I want to > memmap an array with a non-native byte order. > > Any suggestions? Last year the array scalars (like float64) were confused with the data-type objects dtype('=i4'). This was fortunately changed many months ago so the two are now separate concepts. This may be why your old code worked. 
You want to get a data-type object itself: d = numpy.dtype(numpy.float64) d = numpy.float64(1).dtype # you have to instantiate a float64 object to access its data-type. Then d.newbyteorder('>') or d.newbyteorder('big') will work. But, probably easier and clearer is just to use: dlittle = numpy.dtype('<f8') There are now full-fledged data-type objects in NumPy. These can be used everywhere old typecodes were used. In fact, all other aliases get converted to these data-type objects because they are what NumPy needs to construct the ndarray. These data-type objects are an important part of the basearray concept being introduced to Python, so education about them is very timely. They are an out-growth of the PyArray_Descr * structure that Numeric used to "represent" a data-type internally. Basically, the old PyArray_Descr * structure was enhanced and given an Object header. Even just getting these data-type objects into Python would be a useful first-step to exchanging data. For NumPy, the data-type objects have enabled very sophisticated data-type specification and are key to record-array support in NumPy. Best, -Travis From alexander.belopolsky at gmail.com Fri Jun 30 19:01:46 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 30 Jun 2006 19:01:46 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: On 6/30/06, Sasha wrote: > File `numpy/core/src/arraytypes.inc.src' > Lines executed:47.35% of 868 This was an overly optimistic number.
More relevant is the following obtained by disabling the #line directives: File `build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc' Lines executed:26.71% of 5010 From oliphant at ee.byu.edu Fri Jun 30 19:04:42 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:04:42 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: <44A5AE0A.8080500@ee.byu.edu> Alexander Belopolsky wrote: >On 6/30/06, Sasha wrote: > > > >>File `numpy/core/src/arraytypes.inc.src' >>Lines executed:47.35% of 868 >> >> > >This was an overly optimistic number.
More relevant is the >following obtained by disabling the #line directives: > >File `build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc' >Lines executed:26.71% of 5010 > > Yes, this is true, but the auto-generation means that success for one instantiation increases the likelihood for success in the others. So, the 26.7% is probably too pessimistic. -Travis From ndarray at mac.com Fri Jun 30 19:16:27 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 19:16:27 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5AE0A.8080500@ee.byu.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5AE0A.8080500@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > ... > Yes, this is true, but the auto-generation means that success for one > instantiation increases the likelihood for success in the others. So, > the 26.7% is probably too pessimistic. Agree, but "increases the likelihood" != "guarantees". For example, relying on nan propagation is a fine strategy for the floating point case, but will not work for integer types. Similarly code relying on wrap on overflow will fail when type=float. The best solution would be to autogenerate test cases so that all types are tested where appropriate. From oliphant at ee.byu.edu Fri Jun 30 19:18:22 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:18:22 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5AE0A.8080500@ee.byu.edu> Message-ID: <44A5B13E.1060309@ee.byu.edu> Sasha wrote: > On 6/30/06, Travis Oliphant wrote: > >> ... 
>> Yes, this is true, but the auto-generation means that success for one >> instantiation increases the likelihood for success in the others. So, >> the 26.7% is probably too pessimistic. > > > Agree, but "increases the likelihood" != "guarantees". Definitely... > > The best solution would be to autogenerate test cases so that all > types are tested where appropriate. Right on again... Here's a chance for all the Python-only coders to jump in and make a splash.... -Travis From tim.leslie at gmail.com Fri Jun 30 20:42:13 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 1 Jul 2006 10:42:13 +1000 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <1151696738.16911.12.camel@convolution.mit.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <1151696738.16911.12.camel@convolution.mit.edu> Message-ID: On 7/1/06, Eric Jonas wrote: > On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote: > > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > Given the coverage is so low, how can people help by contributing unit > tests? Are there obvious areas with poor coverage? Travis, do you have > any opinions on this? > ...Eric > > A handy tool for finding these things out is coverage.py. I've found it quite helpful in checking unittest coverage in the past. http://www.nedbatchelder.com/code/modules/coverage.html I don't think I'll have a chance in the immediate future to try it out with numpy, but if someone does, I'm sure it will give some answers to your questions Eric. Cheers, Tim Leslie > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From nadavh at visionsense.com Thu Jun 1 07:18:03 2006 From: nadavh at visionsense.com (Nadav Horesh) Date: Thu Jun 1 07:18:03 2006 Subject: [Numpy-discussion] Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il> I recently upgraded to gcc4.1.1.
When I tried to compile scipy from today's svn repository it halts with the following message: Traceback (most recent call last): File "setup.py", line 50, in ? setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run self.build_extensions() File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' ---- The output of gfortran --version: GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1) Copyright (C) 2006 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING I have also the old g77 compiler installed (g77-3.4.6). Is there a way to force numpy/scipy to use it?
Nadav From robert.kern at gmail.com Thu Jun 1 09:48:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu Jun 1 09:48:04 2006 Subject: [Numpy-discussion] Re: Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il> Message-ID: Nadav Horesh wrote: > I recently upgraded to gcc4.1.1. When I tried to compile scipy from today's svn repository it halts with the following message: > > Traceback (most recent call last): > File "setup.py", line 50, in ? > setup_package() > File "setup.py", line 42, in setup_package > configuration=configuration ) > File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in > setup > return old_setup(**new_attr) > File "/usr/lib/python2.4/distutils/core.py", line 149, in setup > dist.run_commands() > File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands > self.run_command(cmd) > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run > self.run_command(cmd_name) > File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > line 109, in run > self.build_extensions() > File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_e > xtensions > self.build_extension(ext) > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > line 301, in build_extension > link = self.fcompiler.link_shared_object > AttributeError: 'NoneType' object has no attribute 'link_shared_object' > > ---- > > The output of gfortran --version: > > GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1) Hmm. 
The usual suspect (not finding the version) doesn't seem to be the problem here. >>> from numpy.distutils.ccompiler import simple_version_match >>> m = simple_version_match(start='GNU Fortran 95') >>> m(None, 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)') '4.1.1' > I have also the old g77 compiler installed (g77-3.4.6). Is there a way to force numpy/scipy to use it? Sure. python setup.py config_fc --fcompiler=gnu build_src build_clib build_ext build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Thu Jun 1 09:55:07 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu Jun 1 09:55:07 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> <447DCD79.3000808@noaa.gov> Message-ID: <447F1BBD.7030905@noaa.gov> Fernando Perez wrote: >> 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page > Well, ipython is not scipy either, and yet its homepage is > ipython.scipy.org. I think it's simply a matter of convenience that > the Enthought hosting infrastructure is so much more pleasant to use > than SF Pardon me for being a lazy idiot. numeric.scipy.org is a fine place for it. I was reacting to a post a while back that suggested pointing people searching for numpy to the main scipy page, which I did not think was a good idea. Objection withdrawn. >> Can you even build it with gcc 4 yet? > I built it on a recent ubuntu not too long ago, without any glitches. > I can check again tonitght on a fresh Dapper with up-to-date SVN if > you want. Well, I need FC4 (and soon 5) as well as OS-X, so I'll try again when I get the chance. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Jun 1 11:33:02 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu Jun 1 11:33:02 2006 Subject: [Numpy-discussion] What am I missing about concatenate? Message-ID: <447F32A6.1090903@noaa.gov> I want to take two (2,) arrays and put them together into one (2,2) array. I thought one of these would work: >>> N.concatenate(((1,2),(3,4)),0) array([1, 2, 3, 4]) >>> N.concatenate(((1,2),(3,4)),1) array([1, 2, 3, 4]) Is this the best I can do? >>> N.concatenate(((1,2),(3,4))).reshape(2,2) array([[1, 2], [3, 4]]) Is it because the arrays I'm putting together are rank-1? >>> N.__version__ '0.9.6' -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Thu Jun 1 11:43:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu Jun 1 11:43:00 2006 Subject: [Numpy-discussion] Re: What am I missing about concatenate? In-Reply-To: <447F32A6.1090903@noaa.gov> References: <447F32A6.1090903@noaa.gov> Message-ID: Christopher Barker wrote: > I want to take two (2,) arrays and put them together into one (2,2) > array. I thought one of these would work: > >>>> N.concatenate(((1,2),(3,4)),0) > array([1, 2, 3, 4]) >>>> N.concatenate(((1,2),(3,4)),1) > array([1, 2, 3, 4]) > > Is this the best I can do? > >>>> N.concatenate(((1,2),(3,4))).reshape(2,2) > array([[1, 2], > [3, 4]]) > > Is it because the arrays I'm putting together are rank-1? Yes. Look at vstack() (and also its friends hstack(), dstack() and column_stack() for completeness). 
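A minimal sketch of that suggestion (written with the modern `import numpy as np` spelling rather than the `import numpy as N` convention used in this thread):

```python
import numpy as np

# Two rank-1 arrays of shape (2,), as in the original question.
a = np.array([1, 2])
b = np.array([3, 4])

# vstack() promotes each 1-D input to a (1, 2) row and concatenates
# along the first axis, giving the desired (2, 2) array.
stacked = np.vstack((a, b))
print(stacked)   # [[1 2]
                 #  [3 4]]

# column_stack() treats the 1-D inputs as columns instead.
cols = np.column_stack((a, b))
print(cols)      # [[1 3]
                 #  [2 4]]
```

Unlike concatenate(), which joins rank-1 inputs end-to-end along their only axis, the stacking helpers add the missing axis first.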
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexandre.fayolle at logilab.fr Thu Jun 1 11:45:02 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Thu Jun 1 11:45:02 2006 Subject: [Numpy-discussion] What am I missing about concatenate? In-Reply-To: <447F32A6.1090903@noaa.gov> References: <447F32A6.1090903@noaa.gov> Message-ID: <20060601184736.GC26776@crater.logilab.fr> On Thu, Jun 01, 2006 at 11:32:06AM -0700, Christopher Barker wrote: > I want to take two (2,) arrays and put them together into one (2,2) > array. I thought one of these would work: > > >>> N.concatenate(((1,2),(3,4)),0) > array([1, 2, 3, 4]) > >>> N.concatenate(((1,2),(3,4)),1) > array([1, 2, 3, 4]) > > Is this the best I can do? > > >>> N.concatenate(((1,2),(3,4))).reshape(2,2) > array([[1, 2], > [3, 4]]) > > Is it because the arrays I'm putting together are rank-1? concatenate is not meant to do that. Try putting your arrays in a list and building an array from that list. a1 = array([1,2]) a2 = array([3,4]) print array([a1, a2]) /bin/bash: q: command not found -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations D?veloppement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From aisaac at american.edu Thu Jun 1 11:45:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu Jun 1 11:45:05 2006 Subject: [Numpy-discussion] What am I missing about concatenate? 
In-Reply-To: <447F32A6.1090903@noaa.gov> References: <447F32A6.1090903@noaa.gov> Message-ID: <20060601184736.GC26776@crater.logilab.fr> On Thu, Jun 01, 2006 at 11:32:06AM -0700, Christopher Barker wrote: > I want to take two (2,) arrays and put them together into one (2,2) > array. I thought one of these would work: > > >>> N.concatenate(((1,2),(3,4)),0) > array([1, 2, 3, 4]) > >>> N.concatenate(((1,2),(3,4)),1) > array([1, 2, 3, 4]) > > Is this the best I can do? > > >>> N.concatenate(((1,2),(3,4))).reshape(2,2) > array([[1, 2], > [3, 4]]) > > Is it because the arrays I'm putting together are rank-1? concatenate is not meant to do that. Try putting your arrays in a list and building an array from that list. a1 = array([1,2]) a2 = array([3,4]) print array([a1, a2]) -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From aisaac at american.edu Thu Jun 1 11:45:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu Jun 1 11:45:05 2006 Subject: [Numpy-discussion] What am I missing about concatenate?
Message-ID: <447F3A57.2080206@noaa.gov> I'm trying to get the (x,y) coords for all the points in a grid, bound by xmin, xmax, ymin, ymax. This list comprehension does it fine: Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] But I can't think at the moment how to do it with numpy. Any ideas? Thanks, -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Jun 1 12:14:01 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu Jun 1 12:14:01 2006 Subject: [Numpy-discussion] Re: What am I missing about concatenate? In-Reply-To: References: <447F32A6.1090903@noaa.gov> Message-ID: <447F3C62.5020105@noaa.gov> Thanks all, Robert Kern wrote: > Look at vstack() (and also its friends hstack(), dstack() and column_stack() for > completeness). I like this, but need to keep Numeric/numarray compatibility for the moment -- I think, I've just sent out a query to my users. Tim Hochberg wrote: > If you are using real arrays, use newaxis: > > >>> a > array([0, 1, 2]) > >>> b > array([3, 4, 5]) > >>> concatenate([a[newaxis], b[newaxis]], 0) > array([[0, 1, 2], > [3, 4, 5]]) I like this, but again, not in Numeric -- I really need to dump that as soon as I can! > hate newaxis, wrap the arrays in [] to give them an extra dimension. > This tends to look nicer, but I suspect has poorer performance than > above (haven't timed it though): > > >>> concatenate([[a], [b]], 0) > array([[0, 1, 2], > [3, 4, 5]]) Lovely. much cleaner. By they way, wouldn't wrapping in a tuple, be slightly better, performance-wise (I know, probably negligible, but I always feel that I should use a tuple when I don't need mutability) -thanks, -chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Thu Jun 1 12:21:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu Jun 1 12:21:02 2006 Subject: [Numpy-discussion] Re: How do I use numpy to do this? In-Reply-To: <447F3A57.2080206@noaa.gov> References: <447F3A57.2080206@noaa.gov> Message-ID: Christopher Barker wrote: > > I'm trying to get the (x,y) coords for all the points in a grid, bound > by xmin, xmax, ymin, ymax. > > This list comprehension does it fine: > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > But I can't think at the moment how to do it with numpy. Any ideas? In [4]: x, y = mgrid[0:10, 5:15] In [5]: x Out[5]: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) In [6]: y Out[6]: array([[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]) In [8]: points = column_stack((x.ravel(), y.ravel())) In [9]: points Out[9]: array([[ 0, 5], [ 0, 6], [ 0, 7], [ 0, 8], [ 0, 9], [ 0, 10], ... -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ndarray at mac.com Thu Jun 1 12:27:02 2006 From: ndarray at mac.com (Sasha) Date: Thu Jun 1 12:27:02 2006 Subject: [Numpy-discussion] Re: How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> Message-ID: >>> mgrid[0:10, 5:15].reshape(2,100).transpose() array([[ 0, 5], [ 0, 6], [ 0, 7], [ 0, 8], ...]) On 6/1/06, Robert Kern wrote: > Christopher Barker wrote: > > > > I'm trying to get the (x,y) coords for all the points in a grid, bound > > by xmin, xmax, ymin, ymax. > > > > This list comprehension does it fine: > > > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > > > But I can't think at the moment how to do it with numpy. Any ideas? > > In [4]: x, y = mgrid[0:10, 5:15] > > In [5]: x > Out[5]: > array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], > [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], > [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], > [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], > [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], > [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], > [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], > [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], > [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], > [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) > > In [6]: y > Out[6]: > array([[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], > [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]) > > In [8]: points = column_stack((x.ravel(), y.ravel())) > > In [9]: points > Out[9]: > array([[ 0, 5], > [ 0, 6], > [ 0, 7], > [ 0, 8], > [ 0, 9], > [ 0, 10], > ... > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Thu Jun 1 12:59:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu Jun 1 12:59:02 2006 Subject: [Numpy-discussion] Re: What am I missing about concatenate? In-Reply-To: <447F3C62.5020105@noaa.gov> References: <447F32A6.1090903@noaa.gov> <447F3C62.5020105@noaa.gov> Message-ID: <447F4671.1070707@cox.net> Christopher Barker wrote: > Thanks all, > > > Robert Kern wrote: > >> Look at vstack() (and also its friends hstack(), dstack() and >> column_stack() for >> completeness). > > > I like this, but need to keep Numeric/numarray compatibility for the > moment -- I think, I've just sent out a query to my users. > > > > Tim Hochberg wrote: > >> If you are using real arrays, use newaxis: >> >> >>> a >> array([0, 1, 2]) >> >>> b >> array([3, 4, 5]) >> >>> concatenate([a[newaxis], b[newaxis]], 0) >> array([[0, 1, 2], >> [3, 4, 5]]) > > > I like this, but again, not in Numeric -- I really need to dump that > as soon as I can! In Numeric, you can use NewAxis instead for the same effect. > >> If you hate newaxis, wrap the arrays in [] to give them an extra dimension. >> This tends to look nicer, but I suspect has poorer performance than >> above (haven't timed it though): >> >> >>> concatenate([[a], [b]], 0) >> array([[0, 1, 2], >> [3, 4, 5]]) > > > Lovely. much cleaner. 
> > By the way, wouldn't wrapping in a tuple, be slightly better, > performance-wise (I know, probably negligible, but I always feel that > I should use a tuple when I don't need mutability) I doubt it would make a significant difference and the square brackets are much easier to read IMO. Your mileage may vary. -tim From cwmoad at gmail.com Thu Jun 1 13:09:00 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Thu Jun 1 13:09:00 2006 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <447F3A57.2080206@noaa.gov> References: <447F3A57.2080206@noaa.gov> Message-ID: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Here's my crack at it.

pts = mgrid[minx:maxx,miny:maxy].transpose()
pts.reshape(pts.size/2, 2)
#pts is good to go

On 6/1/06, Christopher Barker wrote: > > I'm trying to get the (x,y) coords for all the points in a grid, bound > by xmin, xmax, ymin, ymax. > > This list comprehension does it fine: > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > But I can't think at the moment how to do it with numpy. Any ideas? > > Thanks, > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From robert.kern at gmail.com Thu Jun 1 13:14:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu Jun 1 13:14:06 2006 Subject: [Numpy-discussion] Re: How do I use numpy to do this? In-Reply-To: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Message-ID: Charlie Moad wrote: > Here's my crack at it. > > pts = mgrid[minx:maxx,miny:maxy].transpose() > pts.reshape(pts.size/2, 2) > #pts is good to go Well, if we're going for terseness: points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose() -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Thu Jun 1 13:21:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu Jun 1 13:21:02 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: <447F4BF9.7060101@ieee.org> Berthold H?llmann wrote: > Travis Oliphant writes: > > >> 2) Will you transition within the next 6 months? (if you answered No to #1) >> > > Unlikely > > >> 3) Please, explain your reason(s) for not making the switch. (if you >> answered No to #2) >> > > Lack of resources (Numeric is used in hand coded extensions; are > arrays of type PyObject supported in NumPy, they were not in numarray) > Yes, NumPy is actually quite similar to Numeric. 
Most C-extensions are easily ported simply by replacing #include Numeric/arrayobject.h with #include numpy/arrayobject.h (and making sure you get the right location for the headers). -Travis From perry at stsci.edu Thu Jun 1 13:43:03 2006 From: perry at stsci.edu (Perry Greenfield) Date: Thu Jun 1 13:43:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: <447F4BF9.7060101@ieee.org> References: <447D051E.9000709@ieee.org> <447F4BF9.7060101@ieee.org> Message-ID: <69b842594e9ecc8ef8dfebe953ea3af4@stsci.edu> Just to clarify the issue with regard to numarray since one person brought it up. When we (STScI) are finished getting all our software running under numpy--and we are well more than halfway there--we will start drawing down support for numarray. It won't suddenly stop, but less and less effort will go into it and eventually none. That transition time (starts when we can run all our software on numpy and stops when we no longer support numarray at all) will probably be on the order of 6 months, but note that for much of that time, the support will likely be limited to dealing with major bugs only or support for new versions of major platforms. We will note the start and stop points of this transition on the numpy and scipy lists of course. After that, any support for it will have to come from elsewhere. (Message: if you use numarray, you should be planning now to make the transition if 6 months isn't enough time) Perry From oliphant.travis at ieee.org Thu Jun 1 17:54:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Jun 2006 15:54:35 -0600 Subject: [Numpy-discussion] Free SciPy 2006 porting service Message-ID: <447F621B.1010603@ieee.org> I will be available during the SciPy 2006 conference to help port open-source applications to NumPy for no charge. (I'm always available for porting commercial code for a reasonable fee). Others who want to assist will be welcome. 
Conference attendees will get first priority, but others who want to email their request can do so. Offer will be on a first come, first serve basis but I will reserve the liberty to rearrange the order to serve as many projects as possible. I'll place a note on the Wiki Coding Sprint page to this effect. -Travis O. From Chris.Barker at noaa.gov Thu Jun 1 17:41:36 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 01 Jun 2006 14:41:36 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Message-ID: <447F5F10.1010305@noaa.gov> > Charlie Moad wrote: >> pts = mgrid[minx:maxx,miny:maxy].transpose() >> pts.reshape(pts.size/2, 2) Thanks everyone -- yet another reason to dump support for the older num* packages. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tom.denniston at alum.dartmouth.org Thu Jun 1 13:27:54 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 1 Jun 2006 12:27:54 -0500 Subject: [Numpy-discussion] lexsort Message-ID: This function is really useful but it seems to only take tuples not ndarrays. This seems kinda strange. Does one have to convert the ndarray into a tuple to use it? This seems extremely inefficient. Is there an efficient way to argsort a 2d array based upon multiple columns if lexsort is not the correct way to do this? The only way I have found to do this is to construct a list of tuples and sort them using python's list sort. This is inefficient and convoluted so I was hoping lexsort would provide a simple solution. 
--Tom From Chris.Barker at noaa.gov Thu Jun 1 18:13:28 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 01 Jun 2006 15:13:28 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Message-ID: <447F6688.1030504@noaa.gov> Robert Kern wrote: > points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose() As I need Numeric and numarray compatibility at this point, it seems the best I could come up with is below. I'm guessing the list comprehension may well be faster! -Chris

#!/usr/bin/env python
#import numpy as N
#import Numeric as N
import numarray as N

Spacing = 2.0
minx = 0
maxx = 5
miny = 20
maxy = 22

print "minx", minx
print "miny", miny
print "maxx", maxx
print "maxy", maxy

##
# The nifty, terse, numpy way
## points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose()

## The Numeric and numarray way:
x = N.arange(minx, maxx+Spacing, Spacing) # making sure to get the last point
y = N.arange(miny, maxy+Spacing, Spacing) # an extra is OK
points = N.zeros((len(y), len(x), 2), N.Float)
x.shape = (1,-1)
y.shape = (-1,1)
points[:,:,0] += x
points[:,:,1] += y
points.shape = (-1,2)
print points

-- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cwmoad at gmail.com Thu Jun 1 15:23:27 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Thu, 1 Jun 2006 15:23:27 -0400 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> Message-ID: <6382066a0606011223j7584ee5cvaf27d22c38e35ad7@mail.gmail.com> That reshape should be "resize". Sorry. > Here's my crack at it. 
> > pts = mgrid[minx:maxx,miny:maxy].transpose() > pts.reshape(pts.size/2, 2) > #pts is good to go > > On 6/1/06, Christopher Barker wrote: > > > > I'm trying to get the (x,y) coords for all the points in a grid, bound > > by xmin, xmax, ymin, ymax. > > > > This list comprehension does it fine: > > > > Points = [(x,y) for x in xrange(minx, maxx) for y in xrange(miny, maxy)] > > > > But I can't think at the moment how to do it with numpy. Any ideas? > > > > Thanks, > > > > -Chris > > > > > > -- > > Christopher Barker, Ph.D. > > Oceanographer > > > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > > 7600 Sand Point Way NE (206) 526-6329 fax > > Seattle, WA 98115 (206) 526-6317 main reception > > > > Chris.Barker at noaa.gov > > > > > > ------------------------------------------------------- > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > > Fully trained technicians. The highest number of Red Hat certifications in > > the hosting industry. Fanatical Support. Click to learn more > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From robert.kern at gmail.com Thu Jun 1 20:16:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Jun 2006 19:16:40 -0500 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <447F6688.1030504@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: Christopher Barker wrote: > Robert Kern wrote: > >>points = mgrid[minx:maxx, miny:maxy].reshape(2, -1).transpose() > > As I need Numeric and numarray compatibility at this point, it seems the > best I could come up with is below. Ah. It might help if you said that up front. 
(Untested, but what I usually did in the bad old days before I used scipy):

x = arange(minx, maxx+step, step) # oy.
y = arange(miny, maxy+step, step)

nx = len(x)
ny = len(y)

x = repeat(x, ny)
y = concatenate([y] * nx)
points = transpose([x, y])

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tom.denniston at alum.dartmouth.org Thu Jun 1 20:50:30 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 1 Jun 2006 19:50:30 -0500 Subject: [Numpy-discussion] lexsort In-Reply-To: <447F78F3.3060303@ieee.org> References: <447F78F3.3060303@ieee.org> Message-ID: This is great! Many thanks Travis. I can't wait for the next release! --Tom On 6/1/06, Travis Oliphant wrote: > Tom Denniston wrote: > > This function is really useful but it seems to only take tuples not > > ndarrays. This seems kinda strange. Does one have to convert the > > ndarray into a tuple to use it? This seems extremely inefficient. Is > > there an efficient way to argsort a 2d array based upon multiple > > columns if lexsort is not the correct way to do this? The only way I > > have found to do this is to construct a list of tuples and sort them > > using python's list sort. This is inefficient and convoluted so I was > > hoping lexsort would provide a simple solution. > > > > I've just changed lexsort to accept any sequence object as keys. This > means that it can now be used to sort a 2d array (of the same data-type) > based on multiple rows. The sorting will be so that the last row is > sorted with any repeats sorted by the second-to-last row and remaining > repeats sorted by the third-to-last row and so forth... > > The return value is an array of indices. 
For the 2d example you can use > > ind = lexsort(a) > sorted = a[:,ind] # or a.take(ind,axis=-1) > > > Example: > > >>> a = array([[1,3,2,2,3,3],[4,5,4,6,4,3]]) > >>> ind = lexsort(a) > >>> sorted = a.take(ind,axis=-1) > >>> sorted > array([[3, 1, 2, 3, 3, 2], > [3, 4, 4, 4, 5, 6]]) > >>> a > array([[1, 3, 2, 2, 3, 3], > [4, 5, 4, 6, 4, 3]]) > > > > -Travis > > > From oliphant.travis at ieee.org Thu Jun 1 19:32:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Jun 2006 17:32:03 -0600 Subject: [Numpy-discussion] lexsort In-Reply-To: References: Message-ID: <447F78F3.3060303@ieee.org> Tom Denniston wrote: > This function is really useful but it seems to only take tuples not > ndarrays. This seems kinda strange. Does one have to convert the > ndarray into a tuple to use it? This seems extremely inefficient. Is > there an efficient way to argsort a 2d array based upon multiple > columns if lexsort is not the correct way to do this? The only way I > have found to do this is to construct a list of tuples and sort them > using python's list sort. This is inefficient and convoluted so I was > hoping lexsort would provide a simple solution. > I've just changed lexsort to accept any sequence object as keys. This means that it can now be used to sort a 2d array (of the same data-type) based on multiple rows. The sorting will be so that the last row is sorted with any repeats sorted by the second-to-last row and remaining repeats sorted by the third-to-last row and so forth... The return value is an array of indices. 
For the 2d example you can use

ind = lexsort(a)
sorted = a[:,ind] # or a.take(ind,axis=-1)

Example:

>>> a = array([[1,3,2,2,3,3],[4,5,4,6,4,3]])
>>> ind = lexsort(a)
>>> sorted = a.take(ind,axis=-1)
>>> sorted
array([[3, 1, 2, 3, 3, 2],
       [3, 4, 4, 4, 5, 6]])
>>> a
array([[1, 3, 2, 2, 3, 3],
       [4, 5, 4, 6, 4, 3]])

-Travis From charlesr.harris at gmail.com Fri Jun 2 01:05:13 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 1 Jun 2006 23:05:13 -0600 Subject: [Numpy-discussion] lexsort In-Reply-To: References: Message-ID: Tom, The list -- nee tuple, thanks Travis -- is the list of key sequences and each key sequence can be a column in a matrix. So for instance if you wanted to sort on a few columns of a matrix, say columns 2,1, and 0, in that order, and then rearrange the rows so the columns were ordered, you would do something like:

>>> a = randint(0,2,(7,4))
>>> a
array([[0, 0, 0, 1],
       [0, 0, 1, 0],
       [1, 0, 0, 1],
       [0, 1, 0, 1],
       [1, 1, 1, 0],
       [0, 1, 1, 1],
       [0, 1, 0, 1]])
>>> ind = lexsort((a[:,2],a[:,1],a[:,0]))
>>> sorted = a[ind]
>>> sorted
array([[0, 0, 0, 1],
       [0, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 0]])

Note that the last key defines the major order. Chuck On 6/1/06, Tom Denniston wrote: > > This function is really useful but it seems to only take tuples not > ndarrays. This seems kinda strange. Does one have to convert the > ndarray into a tuple to use it? This seems extremely inefficient. Is > there an efficient way to argsort a 2d array based upon multiple > columns if lexsort is not the correct way to do this? The only way I > have found to do this is to construct a list of tuples and sort them > using python's list sort. This is inefficient and convoluted so I was > hoping lexsort would provide a simple solution. 
> > --Tom > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at hooft.net Fri Jun 2 01:31:27 2006 From: rob at hooft.net (Rob Hooft) Date: Fri, 02 Jun 2006 07:31:27 +0200 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <447F6688.1030504@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: <447FCD2F.5060207@hooft.net> Christopher Barker wrote:

> x = N.arange(minx, maxx+Spacing, Spacing) # making sure to get the last point
> y = N.arange(miny, maxy+Spacing, Spacing) # an extra is OK
> points = N.zeros((len(y), len(x), 2), N.Float)
> x.shape = (1,-1)
> y.shape = (-1,1)
> points[:,:,0] += x
> points[:,:,1] += y
> points.shape = (-1,2)
>
> print points

How about something like:

>>> k=Numeric.repeat(range(0,5+1),Numeric.ones(6)*7)
>>> l=Numeric.resize(range(0,6+1),[42])
>>> zone=Numeric.concatenate((k[:,Numeric.NewAxis],l[:,Numeric.NewAxis]),axis=1)
>>> zone
array([[0, 0],
       [0, 1],
       [0, 2],
       ...
       [5, 4],
       [5, 5],
       [5, 6]])

This is the same speed as Robert Kern's solution for large arrays, a bit slower for small arrays. Both are a little faster than yours. Rob -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: timer.py Type: text/x-python Size: 1244 bytes Desc: not available URL: From joris at ster.kuleuven.be Fri Jun 2 04:27:45 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 2 Jun 2006 10:27:45 +0200 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <447F1BBD.7030905@noaa.gov> References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov> Message-ID: <200606021027.45392.joris@ster.kuleuven.be> [CB]: I was reacting to a post a while back that suggested pointing people [CB]: searching for numpy to the main scipy page, which I did not think was a [CB]: good idea. That would be my post :o) The reasons why I suggested this are 1) www.scipy.org is at the moment the most informative site on numpy 2) of all sites www.scipy.org looks currently most professional 3) a wiki-style site where everyone can contribute is really great 4) I like information to be centralized. Having to check pointers, docs and cookbooks on two different sites is inefficient 5) Two different sites inevitably implies some duplication of the work Just as you, I am not (yet) a scipy user, I only have numpy installed at the moment. The principal reason is the same as the one you mentioned. But for me this is an extra motivation to merge scipy.org and numpy.org: 6) merging scipy.org and numpy.org will hopefully lead to a larger SciPy community and this in turn hopefully leads to user-friendly installation procedures. Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From r.demaria at tiscali.it Fri Jun 2 07:54:36 2006 From: r.demaria at tiscali.it (r.demaria at tiscali.it) Date: Fri, 2 Jun 2006 13:54:36 +0200 (CEST) Subject: [Numpy-discussion] Free SciPy 2006 porting service Message-ID: <21591493.1149249276445.JavaMail.root@ps5> Hi, maybe is not what you meant, but presently I'm looking for a sparse eigenvalue solver. As far as I've understood the ARPACK bindings are still missing. 
This library is one of the most used, so I think it would be very useful to have integrated in numpy. Riccardo From jonas at mwl.mit.edu Fri Jun 2 08:58:50 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 02 Jun 2006 08:58:50 -0400 Subject: [Numpy-discussion] numpy vs numeric benchmarks Message-ID: <1149253130.27604.29.camel@localhost.localdomain> Hello! I've been using numeric for a while, and the recent list traffic prompted me to finally migrate all my old code. On a whim, we were benchmarking numpy vs numeric and have been led to the conclusion that numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy but 300 ms in numeric. Now, of course, I don't believe this, but I can't figure out what we're doing wrong; I'm not the only person who has looked at this code, so can anyone tell me what we're doing wrong? We run both benchmarks twice to try and mitigate any start-up and cache effects. This is with debian-amd64's packaged numeric 24.2-2 and a locally built numpy-0.9.8. 
#!/usr/bin/python

import time
import numpy
import random
import Numeric

def numpytest():
    N = 1000
    x = numpy.zeros((N,N),'f')
    y = numpy.zeros((N,N),'f')
    for i in range(N):
        for j in range(N):
            x[i, j] = random.random()
            y[i, j] = random.random()
    t1 = time.clock()
    z = numpy.matrixmultiply(x, y)
    t2 = time.clock()
    print (((t2 - t1)*1000))

def numerictest():
    N = 1000
    x = Numeric.zeros((N,N),'f')
    y = Numeric.zeros((N,N),'f')
    for i in range(N):
        for j in range(N):
            x[i, j] = random.random()
            y[i, j] = random.random()
    t1 = time.clock()
    z = Numeric.matrixmultiply(x, y)
    t2 = time.clock()
    print (((t2 - t1)*1000))

numerictest()
numpytest()
numpytest()
numerictest()

on our hardware a call to numerictest() takes 340 ms and a numpytest takes around 13 sec (!). Any advice on what we're doing wrong would be very helpful. ...Eric From joris at ster.kuleuven.be Fri Jun 2 09:27:15 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 2 Jun 2006 15:27:15 +0200 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1149253130.27604.29.camel@localhost.localdomain> References: <1149253130.27604.29.camel@localhost.localdomain> Message-ID: <200606021527.15947.joris@ster.kuleuven.be> On Friday 02 June 2006 14:58, Eric Jonas wrote: [EJ]: Hello! I've been using numeric for a while, and the recent list traffic [EJ]: prompted me to finally migrate all my old code. On a whim, we were [EJ]: benchmarking numpy vs numeric and have been lead to the conclusion that [EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy [EJ]: but 300 ms in numeric. You mean the other way around? I also tested numpy vs numarray, and numarray seems to be roughly 3 times faster than numpy for your particular testcase. J. 
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From jonas at mwl.mit.edu Fri Jun 2 09:34:25 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 02 Jun 2006 09:34:25 -0400 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <200606021527.15947.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <200606021527.15947.joris@ster.kuleuven.be> Message-ID: <1149255266.27604.32.camel@localhost.localdomain> I meant "numeric is slower than numpy", that is, modern numpy (0.9.8) appears to lose out majorly to numeric. This doesn't make much sense, so I figured there was something wrong with my benchmark, or my numpy install, and wanted to check if others had seen this sort of behavior. ...Eric On Fri, 2006-06-02 at 15:27 +0200, Joris De Ridder wrote: > > On Friday 02 June 2006 14:58, Eric Jonas wrote: > [EJ]: Hello! I've been using numeric for a while, and the recent list traffic > [EJ]: prompted me to finally migrate all my old code. On a whim, we were > [EJ]: benchmarking numpy vs numeric and have been lead to the conclusion that > [EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > [EJ]: but 300 ms in numeric. > > You mean the other way around? > > I also tested numpy vs numarray, and numarray seems to be roughly 3 times > faster than numpy for your particular testcase. > > J. 
> > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From filip at ftv.pl Fri Jun 2 09:48:23 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Fri, 2 Jun 2006 15:48:23 +0200 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1149253130.27604.29.camel@localhost.localdomain> References: <1149253130.27604.29.camel@localhost.localdomain> Message-ID: <1231363019.20060602154823@gmail.com> Hi, It seems that in Numeric the matrixmultiply is alias for dot function, which "uses the BLAS optimized routines where possible", as the help() says. In NumPy (0.9.6, not upgraded yet to 0.9.8), the matrixmultiply is a function of numpy.core.multiarray, while dot refers to numpy.core._dotblas. On my system the timings and results with numpy.dot are quite similar to that with Numeric.matrixmultiply. So the next question is what's the difference between matrixmultiply and dot in NumPy? Filip > Hello! I've been using numeric for a while, and the recent list traffic > prompted me to finally migrate all my old code. On a whim, we were > benchmarking numpy vs numeric and have been lead to the conclusion that > numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > but 300 ms in numeric. > Now, of course, I don't believe this, but I can't figure out what we're > doing wrong; I'm not the only person who has looked at this code, so can > anyone tell me what we're doing wrong? 
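The gap being measured in this thread is the difference between a BLAS kernel and a plain triple loop. As a reference point only (the `naive_matmul` helper below is written for illustration; it is not part of numpy or Numeric), this is the O(n^3) product that an unoptimized matrixmultiply effectively performs:

```python
# Plain triple-loop matrix product: the O(n^3) inner-product loop that an
# unoptimized matrixmultiply boils down to, minus the C-level array handling.
def naive_matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner  # shapes must be (rows x inner) and (inner x cols)
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

print(naive_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# -> [[19, 22], [43, 50]]
```

An optimized BLAS does the same arithmetic with blocked, cache-friendly, vectorized kernels, which is where most of the large constant-factor difference in the timings above comes from.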
From gnurser at googlemail.com Fri Jun 2 10:16:57 2006 From: gnurser at googlemail.com (George Nurser) Date: Fri, 2 Jun 2006 15:16:57 +0100 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1231363019.20060602154823@gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s. while a sans-BLAS Numeric.matrixmultiply takes 12s. The first 100 results from numpy.dot and numpy.matrixmultiply are identical .... Use dot;) --George. On 02/06/06, Filip Wasilewski wrote: > Hi, > > It seems that in Numeric the matrixmultiply is alias for dot function, > which "uses the BLAS optimized routines where possible", as the help() > says. > > In NumPy (0.9.6, not upgraded yet to 0.9.8), the matrixmultiply is a > function of numpy.core.multiarray, while dot refers to > numpy.core._dotblas. > > On my system the timings and results with numpy.dot are quite similar > to that with Numeric.matrixmultiply. > > So the next question is what's the difference between matrixmultiply and > dot in NumPy? > > Filip > > > > Hello! I've been using numeric for a while, and the recent list traffic > > prompted me to finally migrate all my old code. On a whim, we were > > benchmarking numpy vs numeric and have been lead to the conclusion that > > numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > > but 300 ms in numeric. > > > Now, of course, I don't believe this, but I can't figure out what we're > > doing wrong; I'm not the only person who has looked at this code, so can > > anyone tell me what we're doing wrong? 
> > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From rays at blue-cove.com Fri Jun 2 10:27:27 2006 From: rays at blue-cove.com (RayS) Date: Fri, 02 Jun 2006 07:27:27 -0700 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: References: Message-ID: <6.2.3.4.2.20060602072155.02bc4a30@blue-cove.com> favorable numpy creates arrays much faster, fft seems a tad faster a useful metric, I think, for O-scope and ADC apps I get

0.0039054614015815738
0.0019759541205486885
0.023268623246481726
0.0023570392204637913

from the below on a PIII 600...

from time import *
n=4096
r = range(n)

#numpy
import numpy
arr = numpy.array
# array creation
t0 = clock()
for i in r: a = arr(r)
(clock()-t0)/float(n)

#fft of n
fftn = numpy.fft
t0 = clock()
for i in r: f = fftn(a)
(clock()-t0)/float(n)

#Numeric
import Numeric
arr = Numeric.array
# array creation
t0 = clock()
for i in r: a = arr(r)
(clock()-t0)/float(n)

#fft of n
from FFT import *
t0 = clock()
for i in r: f = fft(a)
(clock()-t0)/float(n)

From svetosch at gmx.net Fri Jun 2 11:38:46 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 02 Jun 2006 17:38:46 +0200 Subject: [Numpy-discussion] rand argument question Message-ID: <44805B86.4080001@gmx.net> Hi all, this may be a stupid question, but why doesn't rand accept a shape tuple as argument? I find the difference between the argument types of rand and (for example) zeros somewhat confusing. (See below for illustration.) Can anybody offer an intuition/explanation? (This is still on 0.9.6 because of matplotlib compatibility.) Thanks much, Sven

>>> import numpy as n
>>> n.rand((3,2))
Traceback (most recent call last):
  File "", line 1, in ?
  File "mtrand.pyx", line 433, in mtrand.RandomState.rand
  File "mtrand.pyx", line 361, in mtrand.RandomState.random_sample
  File "mtrand.pyx", line 131, in mtrand.cont0_array
TypeError: an integer is required
>>> n.zeros((3,2))
array([[0, 0],
       [0, 0],
       [0, 0]])
>>> n.zeros(3,2)
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: data type not understood
>>> n.rand(3,2)
array([[ 0.27017528,  0.98280906],
       [ 0.58592731,  0.63706962],
       [ 0.74705193,  0.65980377]])
>>>

From robert.kern at gmail.com Fri Jun 2 12:09:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:09:02 -0500 Subject: [Numpy-discussion] Free SciPy 2006 porting service In-Reply-To: <21591493.1149249276445.JavaMail.root@ps5> References: <21591493.1149249276445.JavaMail.root@ps5> Message-ID: r.demaria at tiscali.it wrote: > Hi, > > maybe is not what you meant, but presently I'm looking for a sparse > eigenvalue solver. As far as I've understood the ARPACK bindings are > still missing. This library is one of the most used, so I think it > would be very useful to have integrated in numpy. No, that isn't what he meant. He wants to help projects that are currently using Numeric and numarray convert to numpy. In any case, ARPACK certainly won't go into numpy. It might go into scipy if you are willing to contribute wrappers for it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 2 12:16:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:16:31 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: <44805B86.4080001@gmx.net> References: <44805B86.4080001@gmx.net> Message-ID: Sven Schreiber wrote: > Hi all, > this may be a stupid question, but why doesn't rand accept a shape tuple > as argument? 
I find the difference between the argument types of rand > and (for example) zeros somewhat confusing. (See below for > illustration.) Can anybody offer an intuition/explanation? rand() is a convenience function. Its only purpose is to offer this convenient API. If you want a function that takes tuples, use numpy.random.random(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 2 12:16:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 11:16:46 -0500 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <1231363019.20060602154823@gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: Filip Wasilewski wrote: > So the next question is what's the difference between matrixmultiply and > dot in NumPy? matrixmultiply is a deprecated compatibility name. Always use dot. dot will get replaced with the optimized dotblas implementation when an optimized BLAS is available. matrixmultiply will not (probably not intentionally, but I'm happy with the current situation). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Jun 2 12:57:18 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 02 Jun 2006 09:57:18 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> Message-ID: <44806DEE.5080908@noaa.gov> Robert Kern wrote: >> As I need Numeric and numarray compatibility at this point, it seems the > Ah.
It might help if you said that up front. Yes, it would, but that would mean accepting that I need to keep backward compatibility -- I'm still hoping! > x = arange(minx, maxx+step, step) # oy. > y = arange(miny, maxy+step, step) > > nx = len(x) > ny = len(y) > > x = repeat(x, ny) > y = concatenate([y] * nx) > points = transpose([x, y]) Somehow I never think to use repeat. And why use repeat for x and concatenate for y? Rob Hooft wrote: > How about something like: > > >>> k=Numeric.repeat(range(0,5+1),Numeric.ones(6)*7) > >>> l=Numeric.resize(range(0,6+1),[42]) > >>> > zone=Numeric.concatenate((k[:,Numeric.NewAxis],l[:,Numeric.NewAxis]),axis=1) > This is the same speed as Robert Kern's solution for large arrays, a bit > slower for small arrays. Both are a little faster than yours. Did you time them? And yours only handles integers. This is my timing: For small arrays: Using numpy The Numpy way took: 0.020000 seconds My way took: 0.010000 seconds Robert's way took: 0.020000 seconds Using Numeric My way took: 0.010000 seconds Robert's way took: 0.020000 seconds Using numarray My way took: 0.070000 seconds Robert's way took: 0.120000 seconds Number of X: 4 Number of Y: 3 So my way was faster with all three packages for small arrays. For Medium arrays ( the size I'm most likely to be using ): Using numpy The Numpy way took: 0.120000 seconds My way took: 0.040000 seconds Robert's way took: 0.030000 seconds Using Numeric My way took: 0.040000 seconds Robert's way took: 0.030000 seconds Using numarray My way took: 0.090000 seconds Robert's way took: 1.070000 seconds Number of X: 21 Number of Y: 41 Now we're getting close, with mine faster with numarray, but Robert's faster with Numeric and numpy. 
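Robert's repeat/concatenate recipe quoted above can be written out as a small self-contained script (a sketch using modern numpy namespacing; the bounds and step below are illustrative values, not ones from the thread):

```python
import numpy as np

# Illustrative bounds and step -- not values from the thread.
minx, maxx, miny, maxy, step = 0.0, 2.0, 0.0, 1.0, 1.0

x = np.arange(minx, maxx + step, step)  # [0., 1., 2.]
y = np.arange(miny, maxy + step, step)  # [0., 1.]

nx, ny = len(x), len(y)

# Each x value repeated ny times, the whole y block tiled nx times,
# then zipped into (x, y) pairs -- one point per grid node.
xx = np.repeat(x, ny)
yy = np.concatenate([y] * nx)
points = np.transpose([xx, yy])
print(points)
```

With these toy bounds the result is the 6 grid points (0,0), (0,1), (1,0), (1,1), (2,0), (2,1), matching what the thread's larger grids produce at scale.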
For Large arrays: (still not very big, but as big as I'm likely to need) Using numpy The Numpy way took: 4.200000 seconds My way took: 0.660000 seconds Robert's way took: 0.340000 seconds Using Numeric My way took: 0.590000 seconds Robert's way took: 0.500000 seconds Using numarray My way took: 0.390000 seconds Robert's way took: 20.340000 seconds Number of X: 201 Number of Y: 241 So Robert's way is still faster with Numeric and numpy, but much slower with numarray. As it's close with numpy and Numeric, but mine is much faster with numarray, I think I'll stick with mine. I note that while the numpy way, using mgrid(), is nice and clean to write, it is slower across the board. Perhaps mgrid() could use some optimization. This is exactly why I had suggested that one thing I wanted for numpy was an as-easy-to-use-as-possible C/C++ API. It would be nice to be able to write as many of these kinds of utility functions in C as we can. In case anyone is interested, I'm using this to draw a grid of dots on the screen for my wxPython FloatCanvas. Every time the image is changed or moved or zoomed, I need to re-calculate the points before drawing them, so it's nice to have it fast. I've enclosed my test code. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- A non-text attachment was scrubbed...
Name: junk.py Type: text/x-python Size: 1915 bytes Desc: not available URL: From oliphant.travis at ieee.org Fri Jun 2 13:07:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:07:27 -0600 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> Message-ID: <4480704F.2070504@ieee.org> Robert Kern wrote: > Filip Wasilewski wrote: > > >> So the next question is what's the difference between matrixmultiply and >> dot in NumPy? >> > > matrixmultiply is a deprecated compatibility name. Always use dot. dot will get > replaced with the optimized dotblas implementation when an optimized BLAS is > available. matrixmultiply will not (probably not intentionally, but I'm happy > with the current situation). > It's true that matrixmultiply has been deprecated for some time (at least 8 years...) The basic dot function gets over-written with a BLAS-optimized version but the matrixmultiply does not get changed. So replace matrixmultiply with dot. It wasn't an intentional thing, but perhaps it will finally encourage people to always use dot. -Travis From oliphant.travis at ieee.org Fri Jun 2 13:08:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:08:32 -0600 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <200606021527.15947.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <200606021527.15947.joris@ster.kuleuven.be> Message-ID: <44807090.5000207@ieee.org> Joris De Ridder wrote: > On Friday 02 June 2006 14:58, Eric Jonas wrote: > [EJ]: Hello! I've been using numeric for a while, and the recent list traffic > [EJ]: prompted me to finally migrate all my old code. 
On a whim, we were > [EJ]: benchmarking numpy vs numeric and have been led to the conclusion that > [EJ]: numpy is at least 50x slower; a 1000x1000 matmul takes 16 sec in numpy > [EJ]: but 300 ms in numeric. > > You mean the other way around? > > I also tested numpy vs numarray, and numarray seems to be roughly 3 times > faster than numpy for your particular testcase. > Please post your test cases. We are trying to remove any slowness, but need testers to do it. -Travis From joris at ster.kuleuven.be Fri Jun 2 13:09:01 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 2 Jun 2006 19:09:01 +0200 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> Message-ID: <200606021909.01239.joris@ster.kuleuven.be> Just to be sure, what exactly is affected when one uses the slower algorithms when neither BLAS nor LAPACK is installed? For sure it will affect almost every function in numpy.linalg, as they use LAPACK_lite. And I guess that in numpy.core the dot() function uses the lite numpy/core/blasdot/_dotblas.c routine? Any other numpy functions that are affected? Joris On Friday 02 June 2006 16:16, George Nurser wrote: [GN]: Yes, using numpy.dot I get 250ms, numpy.matrixmultiply 11.8s. [GN]: [GN]: while a sans-BLAS Numeric.matrixmultiply takes 12s. [GN]: [GN]: The first 100 results from numpy.dot and numpy.matrixmultiply are identical .... [GN]: [GN]: Use dot;) [GN]: [GN]: --George.
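George's dot timings above can be reproduced with a short timing sketch. A caveat: `matrixmultiply` is long gone from modern numpy, so this only times `dot`, and the matrix size is scaled down from the thread's 1000x1000 case so it runs quickly anywhere:

```python
import timeit
import numpy as np

n = 200  # scaled down from the 1000x1000 case discussed in the thread
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Time the (BLAS-backed, when available) matrix product.
t = timeit.timeit(lambda: np.dot(a, b), number=10)
print("np.dot, %dx%d, 10 reps: %.4f s" % (n, n, t))

# Sanity check: dot agrees with the matmul operator.
assert np.allclose(np.dot(a, b), a @ b)
```

Absolute numbers will of course vary with the machine and with whether an optimized BLAS is linked in, which is exactly the point of the thread.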
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From oliphant.travis at ieee.org Fri Jun 2 13:19:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:19:05 -0600 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <200606021909.01239.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> Message-ID: <44807309.8010500@ieee.org> Joris De Ridder wrote: > Just to be sure, what exactly is affected when one uses the slower > algorithms when neither BLAS or LAPACK is installed? For sure it > will affect almost every function in numpy.linalg, as they use > LAPACK_lite. And I guess that in numpy.core the dot() function > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other > numpy functions that are affected? > convolve could also be affected (the basic internal _dot function gets replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the only function that uses dot internally. In the future we hope to be optimizing ufuncs as well. -Travis From faltet at carabos.com Fri Jun 2 13:18:56 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 2 Jun 2006 19:18:56 +0200 Subject: [Numpy-discussion] numpy vs numeric benchmarks In-Reply-To: <4480704F.2070504@ieee.org> References: <1149253130.27604.29.camel@localhost.localdomain> <4480704F.2070504@ieee.org> Message-ID: <200606021918.57134.faltet@carabos.com> A Divendres 02 Juny 2006 19:07, Travis Oliphant va escriure: > Robert Kern wrote: > > Filip Wasilewski wrote: > >> So the next question is what's the difference between matrixmultiply and > >> dot in NumPy? > > > > matrixmultiply is a deprecated compatibility name. Always use dot. dot > > will get replaced with the optimized dotblas implementation when an > > optimized BLAS is available. 
matrixmultiply will not (probably not > > intentionally, but I'm happy with the current situation). > > It's true that matrixmultiply has been deprecated for some time (at > least 8 years...) The basic dot function gets over-written with a > BLAS-optimized version but the matrixmultiply does not get changed. So > replace matrixmultiply with dot. It wasn't an intentional thing, but > perhaps it will finally encourage people to always use dot. So, why not issue a DeprecationWarning when matrixmultiply is used? -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From jonas at mwl.mit.edu Fri Jun 2 13:28:07 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 02 Jun 2006 13:28:07 -0400 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <44807309.8010500@ieee.org> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> <44807309.8010500@ieee.org> Message-ID: <1149269287.27604.38.camel@localhost.localdomain> Is there some way, either within numpy or at build-time, to verify you're using BLAS/LAPACK? Is there one we should be using? ...Eric On Fri, 2006-06-02 at 11:19 -0600, Travis Oliphant wrote: > Joris De Ridder wrote: > > Just to be sure, what exactly is affected when one uses the slower > > algorithms when neither BLAS nor LAPACK is installed? For sure it > > will affect almost every function in numpy.linalg, as they use > > LAPACK_lite. And I guess that in numpy.core the dot() function > > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other > > numpy functions that are affected? > > > convolve could also be affected (the basic internal _dot function gets > replaced for FLOAT, DOUBLE, CFLOAT, and CDOUBLE). I think that's the > only function that uses dot internally. > > In the future we hope to be optimizing ufuncs as well.
> > -Travis > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Fri Jun 2 13:31:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 11:31:09 -0600 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <1149269287.27604.38.camel@localhost.localdomain> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> <44807309.8010500@ieee.org> <1149269287.27604.38.camel@localhost.localdomain> Message-ID: <448075DD.30804@ieee.org> Eric Jonas wrote: > Is there some way, either within numpy or at build-time, to verify > you're using BLAS/LAPACK? Is there one we should be using? > > Check to see if the id of numpy.dot is the same as numpy.core.multiarray.dot -Travis From aisaac at american.edu Fri Jun 2 13:41:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 13:41:27 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: <44805B86.4080001@gmx.net> References: <44805B86.4080001@gmx.net> Message-ID: On Fri, 02 Jun 2006, Sven Schreiber apparently wrote: > why doesn't rand accept a shape tuple as argument? I find > the difference between the argument types of rand and (for > example) zeros somewhat confusing. ... Can anybody offer > an intuition/explanation? Backward compatibility, I believe. You are not alone in finding this odd and inconsistent. I am hoping for a change by 1.0, but I am not very hopeful. Robert always points out that if you want the consistent interface, you can always import functions from the 'random' module. I have never been able to understand this as a response to the point you are making.
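For readers hitting this thread later, the two calling conventions being contrasted look like this in practice (a minimal sketch; in current numpy the tuple-taking functions live under `numpy.random`):

```python
import numpy as np

# Convenience form: shape given as separate integer arguments.
a = np.random.rand(3, 2)

# Consistent form: shape given as a tuple, like zeros()/ones().
b = np.random.random((3, 2))
c = np.random.standard_normal((3, 2))

# All three produce arrays of the same shape.
assert a.shape == b.shape == c.shape == (3, 2)
```

The recurring confusion in the thread is exactly that `rand` takes integers where `zeros` takes a tuple.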
I take it the core argument goes something like this: - rand and randn are convenience functions * if you do not find them convenient, don't use them - they are in wide use, so it is too late to change them - testing the first argument to see whether it is a tuple or an int is so aesthetically objectionable that its ugliness outweighs the benefits users might get from access to a more consistent interface This is one place where I believe a forward looking (i.e., think about new users) vision would force a small change in these *convenience* functions that will have payoffs both in ease of use and in eliminating this recurrent question from discussion lists. Cheers, Alan Isaac From jonathan.taylor at stanford.edu Fri Jun 2 14:08:25 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 02 Jun 2006 11:08:25 -0700 Subject: [Numpy-discussion] searchsorted Message-ID: <44807E99.6060105@stanford.edu> I was wondering if there was an easy way to get searchsorted to be "right-continuous" instead of "left-continuous". By continuity, I am talking about the continuity of the function "count" below... >>> import numpy as N >>> >>> x = N.arange(20) >>> x.searchsorted(9) 9 >>> import numpy as N >>> >>> x = N.arange(20) >>> >>> def count(u): ... return x.searchsorted(u) ... >>> count(9) 9 >>> count(9.01) 10 >>> Thanks, Jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed...
Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From robert.kern at gmail.com Fri Jun 2 14:35:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 13:35:39 -0500 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <44806DEE.5080908@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> <44806DEE.5080908@noaa.gov> Message-ID: Christopher Barker wrote: > Robert Kern wrote: >> x = repeat(x, ny) >> y = concatenate([y] * nx) >> points = transpose([x, y]) > > Somehow I never think to use repeat. And why use repeat for x and > concatenate for y? I guess you could use repeat() on y[newaxis] and then flatten it. y = repeat(y[newaxis], nx).ravel() > Using numpy > The Numpy way took: 0.020000 seconds > My way took: 0.010000 seconds > Robert's way took: 0.020000 seconds > Using Numeric > My way took: 0.010000 seconds > Robert's way took: 0.020000 seconds > Using numarray > My way took: 0.070000 seconds > Robert's way took: 0.120000 seconds > Number of X: 4 > Number of Y: 3 Those timings look real funny. I presume you are using a UNIX and time.clock(). Don't do that. It's a very poor timer on UNIX. Use time.time() on UNIX and time.clock() on Windows. Even better, please use timeit.py instead. Tim Peters did a lot of work to make timeit.py do the right thing. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Fri Jun 2 14:50:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 13:50:56 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Sven Schreiber apparently wrote: > >>why doesn't rand accept a shape tuple as argument? I find >>the difference between the argument types of rand and (for >>example) zeros somewhat confusing. ... Can anybody offer >>an intuition/explanation? > > Backward compatability, I believe. You are not alone in > finding this odd and inconsistent. I am hoping for a change > by 1.0, but I am not very hopeful. > > Robert always points out that if you want the consistent > interface, you can always import functions from the 'random' > module. I have never been able to understand this as > a response to the point you are making. > > I take it the core argument goes something like this: > - rand and randn are convenience functions > * if you do not find them convenient, don't use them > - they are in wide use, so it is too late to change them > - testing the first argument to see whether it is a tuple or > an int so aesthetically objectionable that its ugliness > outweighs the benefits users might get from access to > a more consistent interface My argument does not include the last two points. - They are in wide use because they are convenient and useful. - Changing rand() and randn() to accept a tuple like random.random() and random.standard_normal() does not improve anything. Instead, it adds confusion for users who are reading code and seeing the same function being called in two different ways. - Users who want to see numpy *only* expose a single calling scheme for top-level functions should instead ask for rand() and randn() to be removed from the top numpy namespace. * Backwards compatibility might prevent this. 
> This is one place where I believe a forward looking (i.e., > think about new users) vision would force a small change in > these *convenience* functions that will have payoffs both in > ease of use and in eliminating this recurrent question from > discussion lists. *Changing* the API of rand() and randn() doesn't solve any problem. *Removing* them might. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Fri Jun 2 15:34:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 15:34:08 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: On Fri, 02 Jun 2006, Robert Kern apparently wrote: > Changing the API of rand() and randn() doesn't solve any > problem. Removing them might. I think this is too blunt an argument. For example, use of the old interface might issue a deprecation warning. This would make it very likely that all new code use the new interface. I would also be fine with demoting these to the Numeric compatibility module, although I find that the inferior choice (since it means a loss of convenience). Unless one of these changes is made, new users will **forever** be asking this same question. And either way, making the sacrifices needed for greater consistency seems like a good idea *before* 1.0. Cheers, Alan From cookedm at physics.mcmaster.ca Fri Jun 2 15:46:57 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 2 Jun 2006 15:46:57 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <200606021027.45392.joris@ster.kuleuven.be> References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov> <200606021027.45392.joris@ster.kuleuven.be> Message-ID: <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> On Fri, 2 Jun 2006 10:27:45 +0200 Joris De Ridder wrote: > [CB]: I was reacting to a post a while back that suggested > pointing people [CB]: searching for numpy to the main scipy page, > which I did not think was a [CB]: good idea. > > That would be my post :o) > > The reasons why I suggested this are > > 1) www.scipy.org is at the moment the most informative site on numpy > 2) of all sites www.scipy.org looks currently most professional > 3) a wiki-style site where everyone can contribute is really great > 4) I like information to be centralized. Having to check pointers, > docs and cookbooks on two different sites is inefficient > 5) Two different sites inevitably implies some duplication of the work > > Just as you, I am not (yet) a scipy user, I only have numpy installed > at the moment. The principal reason is the same as the one you > mentioned. But for me this is an extra motivation to merge scipy.org > and numpy.org: > > 6) merging scipy.org and numpy.org will hopefully lead to a larger > SciPy community and this in turn hopefully leads to user-friendly > installation procedures. My only concern with this is numpy is positioned for a wider audience: everybody who needs arrays, and the extra speed that numpy gives, but doesn't need what scipy gives. So merging the two could lead to confusion on what provides what, and what you need to do which. For instance, I don't want potential numpy users to be directed to scipy.org, and be turned off with all the extra stuff it seems to have (that scipy, not numpy, provides). But I think this can be handled if we approach scipy.org as serving both purposes. 
But I think this is the best option, considering how much crossover there is. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri Jun 2 15:56:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 2 Jun 2006 15:56:32 -0400 Subject: [Numpy-discussion] Numpy, BLAS & LAPACK In-Reply-To: <200606021909.01239.joris@ster.kuleuven.be> References: <1149253130.27604.29.camel@localhost.localdomain> <1231363019.20060602154823@gmail.com> <1d1e6ea70606020716xe400dc9o3890d5d07f83d874@mail.gmail.com> <200606021909.01239.joris@ster.kuleuven.be> Message-ID: <20060602155632.010b1dc5@arbutus.physics.mcmaster.ca> On Fri, 2 Jun 2006 19:09:01 +0200 Joris De Ridder wrote: > Just to be sure, what exactly is affected when one uses the slower > algorithms when neither BLAS or LAPACK is installed? For sure it > will affect almost every function in numpy.linalg, as they use > LAPACK_lite. And I guess that in numpy.core the dot() function > uses the lite numpy/core/blasdot/_dotblas.c routine? Any other > numpy functions that are affected? Using a better default dgemm for matrix multiplication when an optimized BLAS isn't available has been on my to-do list for a while. I think it can be sped up by a large amount on a generic machine by using blocking of the matrices. Personally, I perceive no difference between my g77-compiled LAPACK, and the gcc-compiled f2c'd routines in lapack_lite, if an optimized BLAS is used. And lapack_lite has fewer bugs than the version of LAPACK available off of netlib.org, as I used the latest patches I could scrounge up (mostly from Debian). -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Fri Jun 2 15:56:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 14:56:46 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Robert Kern apparently wrote: > >>Changing the API of rand() and randn() doesn't solve any >>problem. Removing them might. > > I think this is too blunt an argument. For example, > use of the old interface might issue a deprecation warning. > This would make it very likely that all new code use the new > interface. My point is that there is no need to change rand() and randn() to the "new" interface. The "new" interface is already there: random.random() and random.standard_normal(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Fri Jun 2 16:19:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 2 Jun 2006 16:19:51 -0400 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: >> On Fri, 02 Jun 2006, Robert Kern apparently wrote: >>> Changing the API of rand() and randn() doesn't solve any >>> problem. Removing them might. > Alan G Isaac wrote: >> I think this is too blunt an argument. For example, >> use of the old interface might issue a deprecation warning. >> This would make it very likely that all new code use the new >> interface. On Fri, 02 Jun 2006, Robert Kern apparently wrote: > My point is that there is no need to change rand() and randn() to the "new" > interface. The "new" interface is already there: random.random() and > random.standard_normal(). Yes of course; that has always been your point. 
In an earlier post, I indicated that this is your usual response. What your point does not address: the question about rand and randn keeps cropping up on this list. My point is: numpy should take a step so that this question goes away, rather than maintain the status quo and see it crop up continually. (I.e., its recurrence should be understood to signal a problem.) Cheers, Alan PS I'll shut up about this now. From robert.kern at gmail.com Fri Jun 2 16:42:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Jun 2006 15:42:31 -0500 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: Alan G Isaac wrote: > On Fri, 02 Jun 2006, Robert Kern apparently wrote: > >>My point is that there is no need to change rand() and randn() to the "new" >>interface. The "new" interface is already there: random.random() and >>random.standard_normal(). > > Yes of course; that has always been your point. > In an earlier post, I indicated that this is your usual response. > > What your point does not address: > the question about rand and randn keeps cropping up on this list. > > My point is: > numpy should take a step so that this question goes away, > rather than maintain the status quo and see it crop up continually. > (I.e., its recurrence should be understood to signal a problem.) I'll check in a change to the docstring later today. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob at hooft.net Fri Jun 2 17:06:26 2006 From: rob at hooft.net (Rob Hooft) Date: Fri, 02 Jun 2006 23:06:26 +0200 Subject: [Numpy-discussion] How do I use numpy to do this?
In-Reply-To: <44806DEE.5080908@noaa.gov> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> <44806DEE.5080908@noaa.gov> Message-ID: <4480A852.5030509@hooft.net> Christopher Barker wrote: | Did you time them? And yours only handles integers. Yes I did, check the attachment of my previous message for a python module to time the three, with completely different results from yours (I'm using Numeric). The attachment also contains a floatified version of my demonstration. Rob -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ From Chris.Barker at noaa.gov Fri Jun 2 18:09:27 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 02 Jun 2006 15:09:27 -0700 Subject: [Numpy-discussion] How do I use numpy to do this? In-Reply-To: <4480A852.5030509@hooft.net> References: <447F3A57.2080206@noaa.gov> <6382066a0606011222s78b33f59p43f65dd2a02f2c27@mail.gmail.com> <447F6688.1030504@noaa.gov> <44806DEE.5080908@noaa.gov> <4480A852.5030509@hooft.net> Message-ID: <4480B717.4050000@noaa.gov> Rob Hooft wrote: > Christopher Barker wrote: > | Did you time them? And yours only handles integers. > > Yes I did, check the attachment of my previous message for a python > module to time the three, Sorry about that, I didn't notice that. > with completely different results from yours > (I'm using Numeric). I ran it and got similar results to mine. Frankly, for the size problems I'm dealing with, they are all about the same, except for under Numarray, where mine is fastest, yours second, and Robert's third -- by a wide margin!
Another reason I'm glad numpy is built on the Numeric code: Using numarray My way took: 0.394555 seconds Robert's way took: 20.590545 seconds Rob's way took: 4.802346 seconds Number of X: 201 Number of Y: 241 Using Numeric My way took: 0.593319 seconds Robert's way took: 0.523235 seconds Rob's way took: 0.579756 seconds Robert's way has a pretty decent edge under numpy: Using numpy My way took: 0.686741 seconds Robert's way took: 0.357887 seconds Rob's way took: 0.796977 seconds And I'm using time(), rather than clock() now, though it didn't really change anything. I suppose I should figure out timeit.py. Thanks for all your help on this, -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Fri Jun 2 18:28:25 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Jun 2006 16:28:25 -0600 Subject: [Numpy-discussion] Updates to NumPy Message-ID: <4480BB89.5060101@ieee.org> I've been busy with NumPy and it has resulted in some C-API changes. So, after checking out a new SVN version of NumPy you will need to re-build extension modules (It stinks for me too --- SciPy takes a while to build). The API changes have made it possible to allow user-defined data-types to optionally participate in the coercion and casting infrastructure. Previously, casting was limited to built-in data-types. Now, there is a mechanism for users to define casting to and from their own data-type (and whether or not it can be done safely and whether or not a particular kind of user-defined scalar can be cast --- remember a scalar mixed with an array has a different set of casting rules). This should make user-defined data-types much more useful, but the facility needs to be tested. Does anybody have a data-type they want to add to try out the new system.
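Travis is describing C-level machinery, but the "fixed element size" requirement can be illustrated at the Python level with a structured dtype (an analogy only -- this is not the user-defined-type C API the message is about):

```python
import numpy as np

# A record type with a fixed element size: two 32-bit floats -> 8 bytes
# per element, known up front, as the casting machinery requires.
pair = np.dtype([('re', np.float32), ('im', np.float32)])
assert pair.itemsize == 8

arr = np.zeros(3, dtype=pair)
arr['re'] = [1.0, 2.0, 3.0]
print(arr['re'])
```

A variable-precision float, by contrast, has no fixed itemsize, which is why Travis says it would have to be stored as a pointer to the actual structure.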
The restriction on adding another data-type is that it must have a fixed element size (a variable-precision float for example would have to use a pointer to the actual structure as the "data-type"). -Travis From joris at ster.kuleuven.ac.be Fri Jun 2 19:03:41 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat, 3 Jun 2006 01:03:41 +0200 Subject: [Numpy-discussion] Suggestions for NumPy Message-ID: <1149289421.4480c3cde8e2e@webmail.ster.kuleuven.be> [DC]: My only concern with this is numpy is positioned for a wider audience: [DC]: everybody who needs arrays, and the extra speed that numpy gives, but [DC]: doesn't need what scipy gives. So merging the two could lead to [DC]: confusion on what provides what, and what you need to do which. I completely agree with this. SciPy and NumPy on one site, yes, but not so interwoven that it gets confusing or even plain useless for NumPy-only users. J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From tim.hochberg at cox.net Fri Jun 2 23:15:33 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 02 Jun 2006 20:15:33 -0700 Subject: [Numpy-discussion] fromiter Message-ID: <4480FED5.6010300@cox.net> Some time ago some people, myself included, were making some noise about having 'array' iterate over iterable objects producing ndarrays in a manner analogous to the way sequences are treated. I finally got around to looking at it seriously and once I did, I came to the following three conclusions: 1. All I really care about is the 1D case where dtype is specified. This case should be relatively easy to implement so that it's fast. Most other cases are not likely to be particularly faster than converting the iterators to lists at the Python level and then passing those lists to array. 2. 'array' already has plenty of special cases. I'm reluctant to add more. 3. Adding this to 'array' would be non-trivial.
The more cases we tried to make fast, the more likely that some of the paths would be buggy. Regardless of how we did it though, some cases would be much slower than others, which would probably be surprising. So, with that in mind, I retreated a little and implemented the simplest thing that did the stuff that I cared about: fromiter(iterable, dtype, count) => ndarray of type dtype and length count This is essentially the same interface as fromstring except that the values of dtype and count are always required. Some primitive benchmarking indicates that 'fromiter(generator, dtype, count)' is about twice as fast as 'array(list(generator))' for medium to large arrays. When producing very large arrays, the advantage of fromiter is larger, presumably because 'list(generator)' causes things to start swapping. Anyway I'm about to bail out of town till the middle of next week, so it'll be a while till I can get it clean enough to submit in some form or another. Plenty of time for people to think of why it's a terrible idea ;-) -tim From charlesr.harris at gmail.com Fri Jun 2 23:30:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 2 Jun 2006 21:30:05 -0600 Subject: [Numpy-discussion] searchsorted In-Reply-To: <44807E99.6060105@stanford.edu> References: <44807E99.6060105@stanford.edu> Message-ID: Jonathan, I had a patch for this that applied to numarray way back when. If folks feel there is a need, I could probably try to get it running on numpy. Bit of a learning curve (for me), though. Chuck On 6/2/06, Jonathan Taylor wrote: > > I was wondering if there was an easy way to get searchsorted to be > "right-continuous" instead of "left-continuous". > > By continuity, I am talking about the continuity of the function "count" > below... > > >>> import numpy as N > >>> > >>> x = N.arange(20) > >>> x.searchsorted(9) > 9 > >>> import numpy as N > >>> > >>> x = N.arange(20) > >>> > >>> def count(u): > ... return x.searchsorted(u) > ... 
> >>> count(9) > 9 > >>> count(9.01) > 10 > >>> > > Thanks, > > Jonathan > > -- > ------------------------------------------------------------------------ > I'm part of the Team in Training: please support our efforts for the > Leukemia and Lymphoma Society! > > http://www.active.com/donate/tntsvmb/tntsvmbJTaylor > > GO TEAM !!! > > ------------------------------------------------------------------------ > Jonathan Taylor Tel: 650.723.9230 > Dept. of Statistics Fax: 650.725.8977 > Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo > 390 Serra Mall > Stanford, CA 94305 > > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Sat Jun 3 03:25:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 03 Jun 2006 01:25:42 -0600 Subject: [Numpy-discussion] fromiter In-Reply-To: <4480FED5.6010300@cox.net> References: <4480FED5.6010300@cox.net> Message-ID: <44813976.2010808@ieee.org> Tim Hochberg wrote: > Some time ago some people, myself including, were making some noise > about having 'array' iterate over iterable object producing ndarrays in > a manner analogous to they way sequences are treated. I finally got > around to looking at it seriously and once I came to the following three > conclusions: > > 1. All I really care about is the 1D case where dtype is specified. > This case should be relatively easy to implement so that it's > fast. Most other cases are not likely to be particularly faster > than converting the iterators to lists at the Python level and > then passing those lists to array. > 2. 'array' already has plenty of special cases. I'm reluctant to add > more. > 3. Adding this to 'array' would be non-trivial. 
The more cases we > tried to make fast, the more likely that some of the paths would > be buggy. Regardless of how we did it though, some cases would be > much slower than others, which would probably be surprising. > Good job. I just added a function called fromiter for this very purpose. Right now, it's just a stub that calls list(obj) first and then array. Your code would be a perfect fit for it. I think count could be optional, though, to handle cases where the count can be determined from the object. We'll look forward to your check-in. -Travis From svetosch at gmx.net Sat Jun 3 05:52:57 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 03 Jun 2006 11:52:57 +0200 Subject: [Numpy-discussion] rand argument question In-Reply-To: References: <44805B86.4080001@gmx.net> Message-ID: <44815BF9.3060504@gmx.net> Robert Kern schrieb: > > My point is that there is no need to change rand() and randn() to the "new" > interface. The "new" interface is already there: random.random() and > random.standard_normal(). > Ok thanks for the responses and sorry for not searching the archives about this. I tend to share Alan's point of view, but I also understand that it may be too late now to change the way rand is called. -Sven From jonathan.taylor at utoronto.ca Fri Jun 2 18:04:32 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Fri, 2 Jun 2006 18:04:32 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> References: <447D051E.9000709@ieee.org> <447F1BBD.7030905@noaa.gov> <200606021027.45392.joris@ster.kuleuven.be> <20060602154657.6f51f0a5@arbutus.physics.mcmaster.ca> Message-ID: <463e11f90606021504h742e92e4t5ff418d1e29e426@mail.gmail.com> My suggestion would be to have both numpy.org and scipy.org be the exact same page, but make it extremely clear that there are two different projects on the front page. Cheers. Jon. On 6/2/06, David M.
Cooke wrote: > On Fri, 2 Jun 2006 10:27:45 +0200 > Joris De Ridder wrote: > > [CB]: I was reacting to a post a while back that suggested > > pointing people [CB]: searching for numpy to the main scipy page, > > which I did not think was a [CB]: good idea. > > > > That would be my post :o) > > > > The reasons why I suggested this are > > > > 1) www.scipy.org is at the moment the most informative site on numpy > > 2) of all sites www.scipy.org looks currently most professional > > 3) a wiki-style site where everyone can contribute is really great > > 4) I like information to be centralized. Having to check pointers, > > docs and cookbooks on two different sites is inefficient > > 5) Two different sites inevitably implies some duplication of the work > > > > Just as you, I am not (yet) a scipy user, I only have numpy installed > > at the moment. The principal reason is the same as the one you > > mentioned. But for me this is an extra motivation to merge scipy.org > > and numpy.org: > > > > 6) merging scipy.org and numpy.org will hopefully lead to a larger > > SciPy community and this in turn hopefully leads to user-friendly > > installation procedures. > > My only concern with this is numpy is positioned for a wider audience: > everybody who needs arrays, and the extra speed that numpy gives, but > doesn't need what scipy gives. So merging the two could lead to > confusion on what provides what, and what you need to do which. > For instance, I don't want potential numpy users to be directed to > scipy.org, and be turned off with all the extra stuff it seems to have > (that scipy, not numpy, provides). But I think this can be handled if > we approach scipy.org as serving both purposes. > > But I think this is the best option, considering how much crossover > there is. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M.
Cooke > http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Sat Jun 3 10:29:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 03 Jun 2006 07:29:04 -0700 Subject: [Numpy-discussion] fromiter In-Reply-To: <44813976.2010808@ieee.org> References: <4480FED5.6010300@cox.net> <44813976.2010808@ieee.org> Message-ID: <44819CB0.1020609@cox.net> Travis Oliphant wrote: >Tim Hochberg wrote: > > >>Some time ago some people, myself including, were making some noise >>about having 'array' iterate over iterable object producing ndarrays in >>a manner analogous to they way sequences are treated. I finally got >>around to looking at it seriously and once I came to the following three >>conclusions: >> >> 1. All I really care about is the 1D case where dtype is specified. >> This case should be relatively easy to implement so that it's >> fast. Most other cases are not likely to be particularly faster >> than converting the iterators to lists at the Python level and >> then passing those lists to array. >> 2. 'array' already has plenty of special cases. I'm reluctant to add >> more. >> 3. Adding this to 'array' would be non-trivial. The more cases we >> tried to make fast, the more likely that some of the paths would >> be buggy. Regardless of how we did it though, some cases would be >> much slower than other, which would probably be suprising. >> >> >> > >Good job. I just added a called fromiter for this very purpose. Right >now, it's just a stub that calls list(obj) first and then array. Your >code would be a perfect fit for it. I think count could be optional, >though, to handle cases where the count can be determined from the object. > > I'll look at that when I get back. 
There are two ways to approach this: one is to only allow count to be optional in those cases that the original object supports either __len__ or __length_hint__. The advantage there is that it's easy and there's no chance of locking up the interpreter by passing an unbounded generator. The other way is to figure out the length based on the generator itself. The "natural" way to do this is to steal stuff from array.array. However, that doesn't export a C-level interface as far as I can tell (everything is declared static), so you'd be going through the interpreter, which would potentially be slow. I guess another approach would be to hijack PyArray_Resize and steal the resizing pattern from array.array. I'm not sure how well that would work though. I'll look into it... -tim >We'll look forward to your check-in. > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From svetosch at gmx.net Sat Jun 3 10:43:07 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 03 Jun 2006 16:43:07 +0200 Subject: [Numpy-discussion] remaining matrix-non-preserving functions Message-ID: <44819FFB.3050507@gmx.net> Hi all, I just discovered that the diff function returns a numpy-array even for matrix inputs. Since I'm a card-carrying matrix fanatic, I hope that behavior qualifies as a bug. Then I went through some (most?) other functions/methods for which IMO it's best to return matrices if the input is also a matrix-type. I found that the following functions share the problem of diff (see below for illustrations): vstack and hstack (although I always use r_ and c_ and they work fine with matrices) outer msort Should I open new tickets? (Or has this already been fixed since 0.9.8, which I used because this time building the svn version failed for me?)
Cheers, Sven >>> n.__version__ '0.9.8' >>> a matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) >>> b matrix([[0, 0, 0], [0, 0, 0]]) >>> n.diff(a) array([[-1, 0], [ 1, -1], [ 0, 1]]) >>> n.outer(a,b) array([[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) >>> n.msort(a) array([[0, 0, 0], [0, 0, 0], [1, 1, 1]]) >>> n.vstack([a,b]) array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0], [0, 0, 0]]) >>> n.hstack([a,b.T]) array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]]) >>> From aisaac0 at verizon.net Sat Jun 3 19:52:54 2006 From: aisaac0 at verizon.net (David Isaac) Date: Sat, 03 Jun 2006 19:52:54 -0400 Subject: [Numpy-discussion] numpy bug Message-ID: <005701c68769$7310bb30$2f01a8c0@JACKSONVILLE> "Boris Borcic" wrote in message news:447f3338$1_7 at news.bluewin.ch... > after a while trying to find the legal manner to file numpy bug reports, > since it's a simple one, I thought maybe a first step is to describe the bug > here. Then maybe someone will direct me to the right channel. > > So, numpy appears not to correctly compute bitwise_and.reduce and > bitwise_or.reduce : instead of reducing over the complete axis, these methods > only take the extremities into account. Illustration : > > >>> from numpy import * > >>> bitwise_or.reduce(array([8,256,32,8])) > 8 > >>> import numpy > >>> numpy.__version__ > '0.9.8' > >>> > > Platform : Win XP SP2, Python 2.4.2 Most bug reports start on the numpy list, I believe. (See above.) Cheers, Alan Isaac
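The reduction in Boris's report can be sanity-checked with a plain-Python fold (a sketch that needs nothing beyond the stdlib; `functools.reduce` and `operator.or_` do the element-by-element OR that `bitwise_or.reduce` is supposed to perform):

```python
from functools import reduce
import operator

# bitwise_or.reduce should OR every element along the axis,
# not just the extremities as the buggy 0.9.8 build apparently does.
values = [8, 256, 32, 8]
expected = reduce(operator.or_, values)  # 8 | 256 | 32 | 8
print(expected)  # -> 296, not the 8 shown in the bug report
```

Since 8, 32, and 256 occupy disjoint bit positions, the correct reduction is simply their sum, 296.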
From schofield at ftw.at Sun Jun 4 13:02:17 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 4 Jun 2006 19:02:17 +0200 Subject: [Numpy-discussion] Removing deprecated names Message-ID: <5CE6D3C7-2478-49D3-97C3-623484D8CB66@ftw.at> Hi all, I've created four patches to remove deprecated names from the numpy.core and numpy namespaces by default. The motivation for this is to provide a clear separation for both new users and users migrating from Numeric between those names that are deprecated and those that are recommended. The first patch cleans up NumPy to avoid the use of deprecated names internally: http://projects.scipy.org/scipy/numpy/ticket/137 The second patch separates the Numeric-like function interfaces, which Travis has said he doesn't want to deprecate, from the other names in oldnumeric.py, which include the capitalized type names, arrayrange, matrixmultiply, outerproduct, NewAxis, and a few others: http://projects.scipy.org/scipy/numpy/ticket/138 The third patch removes the deprecated names from the numpy.core and numpy namespaces and adds a compatibility function, numpy.Numeric(), that imports the deprecated interfaces into the namespace as before: http://projects.scipy.org/scipy/numpy/ticket/139 The fourth patch (also in ticket #139) is a script that adds the line "numpy.Numeric()" to the appropriate place in all Python files in the specified directory. I've tested this on the SciPy source tree, which still uses the old Numeric interfaces in many places. After running the script, SciPy runs all its 1518 unit tests without errors. These patches make a fairly small difference to the size of NumPy's default namespace: >>> import numpy >>> len(dir(numpy)) 438 >>> numpy.Numeric() >>> len(dir(numpy)) 484 They do, however, help to support Python principle #13 ... 
-- Ed From charlesr.harris at gmail.com Sun Jun 4 14:36:17 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Jun 2006 12:36:17 -0600 Subject: [Numpy-discussion] Random number generators. Message-ID: Hi All, But mostly Robert. I've been fooling around timing random number generators and noticed that on an Athlon64 with 64bit binaries that the MWC8222 rng is about 2.5x as fast as the MT19937 generator. On my machine (1.8 GHz) I get MWC8222: long 2.58e+08 float 1.20e+08 double 1.34e+08 full double 1.02e+08 MT19937: long 9.07e+07 float 6.33e+07 double 6.71e+07 full double 3.81e+07 numbers/sec, where the time includes accumulating the sums. This also impacts the generation of normally distributed numbers MWC8222: nums/sec: 1.12e+08 average : 1.91e-05 sigma : 1.00e+00 MT19937: nums/sec: 5.41e+07 average : -9.73e-05 sigma : 1.00e+00 The times for 32 bit binaries are roughly the same. For generating large arrays of random numbers on 64 bit architectures it looks like MWC8222 is a winner. So, the question is, is there a good way to make the rng selectable? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From st at sigmasquared.net Sun Jun 4 16:21:08 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Sun, 04 Jun 2006 22:21:08 +0200 Subject: [Numpy-discussion] Random number generators. In-Reply-To: References: Message-ID: <448340B4.5050509@sigmasquared.net> > MWC8222: > > nums/sec: 1.12e+08 > > MT19937: > > nums/sec: 5.41e+07 > The times for 32 bit binaries are roughly the same. For generating large > arrays of random numbers on 64 bit architectures it looks like MWC8222 > is a winner. So, the question is, is there a good way to make the rng > selectable? Although there are in general good reasons for having more than one random number generator available (and testing one's code with more than one generator), performance shouldn't be the deciding concern for selecting one.
The most important characteristics of a random number generator are its distributional properties, e.g. how "uniform" and "random" its generated numbers are. There's hardly any generator which is faster than the Mersenne Twister _and_ has a better equi-distribution. Actually, the MT is so fast that on modern processors the contribution of the uniform number generator to most non-trivial simulation code is negligible. See www.iro.umontreal.ca/~lecuyer/ for good (mathematical) surveys on this topic. If you really need that last inch of performance, you should seriously think about outsourcing your inner simulation loop to C(++). And by the way, there's a good chance that making the rng selectable has a negative performance impact on random number generation (at least if the generation is done through the same interface and the current implementation is sufficiently optimized). Regards, Stephan From charlesr.harris at gmail.com Sun Jun 4 16:41:07 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Jun 2006 14:41:07 -0600 Subject: [Numpy-discussion] Random number generators. In-Reply-To: <448340B4.5050509@sigmasquared.net> References: <448340B4.5050509@sigmasquared.net> Message-ID: Stephan, MWC8222 has good distribution properties, it comes from George Marsaglia and passes all the tests in the Diehard suite. It was also used among others by Jurgen Doornik in his investigation of the ziggurat method for random normals and he didn't turn up any anomalies. Now, I rather like the theory behind MT19937, based as it is on an irreducible polynomial over Z_2 discovered by brute force search, but it is not the end all and be all of rng's. And yes, I do like to generate hundreds of millions of random numbers/sec, and yes, I do do it in c++ and use boost/python as an interface, but that doesn't mean numpy can't use a speed up now and then. In particular, the ziggurat method for generating normals is also significantly faster than the polar method in numpy.
Put them together and on X86_64 I think you will get close to a factor of ten improvement in speed. That isn't to be sniffed at, especially if you are simulating noisy images and such. On 6/4/06, Stephan Tolksdorf wrote: > > > > MWC8222: > > > > nums/sec: 1.12e+08 > > > > MT19937: > > > > nums/sec: 5.41e+07 > > The times for 32 bit binaries is roughly the same. For generating large > > arrays of random numbers on 64 bit architectures it looks like MWC8222 > > is a winner. So, the question is, is there a good way to make the rng > > selectable? > > Although there are in general good reasons for having more than one > random number generator available (and testing one's code with more than > one generator), performance shouldn't be the deciding concern for > selecting one. The most important characteristic of a random number > generator are its distributional properties, e.g. how "uniform" and > "random" its generated numbers are. There's hardly any generator which > is faster than the Mersenne Twister _and_ has a better > equi-distribution. Actually, the MT is so fast that on modern processors > the contribution of the uniform number generator to most non-trivial > simulation code is negligible. See www.iro.umontreal.ca/~lecuyer/ for > good (mathematical) surveys on this topic. > > If you really need that last inch of performance, you should seriously > think about outsourcing your inner simulation loop to C(++). And by the > way, there's a good chance that making the rng selectable has a negative > performance impact on random number generation (at least if the > generation is done through the same interface and the current > implementation is sufficiently optimized). > > Regards, > Stephan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Jun 4 18:04:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 04 Jun 2006 17:04:13 -0500 Subject: [Numpy-discussion] Random number generators. 
In-Reply-To: References: Message-ID: Charles R Harris wrote: > For generating large > arrays of random numbers on 64 bit architectures it looks like MWC8222 > is a winner. So, the question is, is there a good way to make the rng > selectable? Sure! All of the distributions ultimately depend on the uniform generators (rk_random, rk_double, etc.). It would be possible to alter the rk_state struct to store data for multiple generators (probably through a union) and store function pointers to the uniform generators. The public API rk_random, rk_double, etc. would be modified to call the function pointers to the private API functions depending on the actual generator chosen. At the Pyrex level, some modifications would need to be made to the RandomState constructor (or we would need to make alternate constructors) and the seeding methods. Nothing too bad. I don't think it would be worthwhile to change the numpy.random.* functions that alias the methods on the default RandomState object. Code that needs customizable PRNGs should be taking a RandomState object instead of relying on the function-alike aliases. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Jun 4 18:07:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 04 Jun 2006 17:07:34 -0500 Subject: [Numpy-discussion] Random number generators. In-Reply-To: References: Message-ID: Robert Kern wrote: > Charles R Harris wrote: > >>For generating large >>arrays of random numbers on 64 bit architectures it looks like MWC8222 >>is a winner. So, the question is, is there a good way to make the rng >>selectable? > > Sure! I should also add that I have no time to do any of this, but I'll be happy to answer questions and make suggestions if you would like to tackle this. 
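Robert's closing advice, having code take a RandomState object rather than relying on the module-level aliases, can be sketched as follows (the `jitter` function is made up purely for illustration; the pattern is what matters):

```python
import numpy as np

def jitter(values, rng=None):
    # Accept a RandomState so callers control seeding and could swap in
    # a differently configured generator; default to a fresh instance.
    if rng is None:
        rng = np.random.RandomState()
    return values + rng.standard_normal(len(values))

# Two identically seeded states produce identical, reproducible streams:
a = jitter(np.zeros(5), rng=np.random.RandomState(1234))
b = jitter(np.zeros(5), rng=np.random.RandomState(1234))
```

Code written this way never needs to know which underlying generator the RandomState is wrapping, which is exactly what makes a selectable rng feasible without touching user code.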
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Sun Jun 4 18:37:53 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Jun 2006 16:37:53 -0600 Subject: [Numpy-discussion] Random number generators. In-Reply-To: References: Message-ID: On 6/4/06, Robert Kern wrote: > > Charles R Harris wrote: > > For generating large > > arrays of random numbers on 64 bit architectures it looks like MWC8222 > > is a winner. So, the question is, is there a good way to make the rng > > selectable? > > Sure! All of the distributions ultimately depend on the uniform generators > (rk_random, rk_double, etc.). It would be possible to alter the rk_state > struct > to store data for multiple generators (probably through a union) and store > function pointers to the uniform generators. The public API rk_random, > rk_double, etc. would be modified to call the function pointers to the > private > API functions depending on the actual generator chosen. > > At the Pyrex level, some modifications would need to be made to the > RandomState > constructor (or we would need to make alternate constructors) and the > seeding > methods. Heh, I borrowed some seeding methods from numpy, but put them in their own file with interfaces void fillFromPool(uint32_t *state, size_t size); void fillFromSeed(uint32_t *state, size_t size, uint32_t seed); void fillFromVect(uint32_t *state, size_t size, const std::vector<uint32_t> & seed); So that I could use them more generally. I left out the method using the system time because, well, everything I am interested in runs on linux or windows. Boost has a good include file, boost/cstdint.hpp, that deals with all the issues of defining integer types on different platforms. I didn't use it, though, just the stdint.h file ;) Nothing too bad.
I don't think it would be worthwhile to change the > numpy.random.* functions that alias the methods on the default RandomState > object. Code that needs customizable PRNGs should be taking a RandomState > object > instead of relying on the function-alike aliases. I'll take a look, though like you I am pretty busy these days. -- > Robert Kern Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Mon Jun 5 03:52:17 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon, 5 Jun 2006 08:52:17 +0100 Subject: [Numpy-discussion] numexpr: where function Message-ID: <20060605085217.4506427b.simon@arrowtheory.com> Is it possible to use the where function in numexpr ? I see some code there for it, but not sure how to use it. While I'm asking, it seems numexpr only does pointwise operations ATM, ie there is no .sum ? Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From cookedm at physics.mcmaster.ca Sun Jun 4 20:23:18 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 4 Jun 2006 20:23:18 -0400 Subject: [Numpy-discussion] numexpr: where function In-Reply-To: <20060605085217.4506427b.simon@arrowtheory.com> References: <20060605085217.4506427b.simon@arrowtheory.com> Message-ID: <20060605002318.GA12516@arbutus.physics.mcmaster.ca> On Mon, Jun 05, 2006 at 08:52:17AM +0100, Simon Burton wrote: > > Is it possible to use the where function in numexpr ? > I see some code there for it, but not sure how to use it. Yes; 'where(expression, a, b)' will return an element from 'a' when 'expression' is non-zero (true), and the corresponding element from 'b' when it's 0 (false). > While I'm asking, it seems numexpr only does pointwise > operations ATM, ie there is no .sum ? Adding reducing functions is on the list of things to-do. I don't have much time for it now, unfortunately. 
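numexpr's `where` follows the same element-wise semantics as numpy's `where`, so the rule Cooke states (take from `a` where the condition is non-zero, from `b` where it is zero) can be checked with plain numpy; this sketch does not require numexpr at all:

```python
import numpy as np

cond = np.array([1, 0, 2, 0])    # non-zero counts as true
a = np.array([10, 20, 30, 40])
b = np.array([-1, -2, -3, -4])

# Element of a where cond is non-zero, else the element of b:
result = np.where(cond != 0, a, b)
print(result)  # -> [10 -2 30 -4]
```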
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
From N.Gorsic at vipnet.hr Mon Jun 5 10:59:49 2006 From: N.Gorsic at vipnet.hr (Neven Gorsic) Date: Mon, 5 Jun 2006 16:59:49 +0200 Subject: [Numpy-discussion] Py2exe programs with NumPy Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C013112E9@MAIL02.win.vipnet.hr> I made a Python program using the NumPy extension and the program works fine. So far I had no problems with compiling Python programs with the Py2exe module, but now, at the end of compilation, I get error messages: The following modules appear to be missing ['Pyrex', 'Pyrex.Compiler', '_curses', 'fcompiler.FCompiler', 'lib.add_newdoc', 'pre', 'pylab', 'setuptools', 'setuptools.command', 'setuptools.command.egg_info ', 'win32api', 'win32con', 'win32pdh', 'numpy.core.equal', 'numpy.core.less', 'n umpy.core.less_equal'] Upon starting exe file I get another message: C:\Python24\dist>test No scipy-style subpackage 'testing' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'core' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'lib' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'linalg' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'dft' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'random' found in C:\Python24\dist\library.zip\numpy. Ignoring. No scipy-style subpackage 'f2py' found in C:\Python24\dist\library.zip\numpy. Ignoring. Traceback (most recent call last): File "test.py", line 228, in ? File "zipextimporter.pyc", line 78, in load_module File "numpy\__init__.pyc", line 44, in ?
File "numpy\_import_tools.pyc", line 320, in get_pkgdocs File "numpy\_import_tools.pyc", line 283, in _format_titles ValueError: max() arg is an empty sequence Can you tell me, please, what is wrong? PS: I have no previous experience compiling Python programs that include numpy modules. I use py2exe in the basic way: type python setup.py py2exe from the command line, and setup.py has only 3 lines: from distutils.core import setup import py2exe setup(console=["Programi\\test.py"])
From cookedm at physics.mcmaster.ca Mon Jun 5 17:10:23 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 5 Jun 2006 17:10:23 -0400 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? Message-ID: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> I just ran into the fact that the power function for integer types isn't handled in scalarmath yet. I'm going to add it, but I'm wondering what people want when power overflows the integer type? Taking the concrete example of a = uint8(3), b = uint8(10), then should a ** b return 1) the maximum integer for the type (255 here) 2) 0 3) upcast to the largest type that will hold it (but what if it's larger than our largest type? Return a Python long?) 4) do the power using "long" like Python does, then downcast it to the type (that would return 169 for the above example) 5) something else? I'm leaning towards #3; if you do a ** 10, you get the right answer (59049 as an int64scalar), because 'a' is upcasted to int64scalar, since '10', a Python int, is converted to that type. Otherwise, I would choose #1. -- |>|\/|< /----------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
From N.Gorsic at vipnet.hr Tue Jun 6 04:19:31 2006 From: N.Gorsic at vipnet.hr (Neven Gorsic) Date: Tue, 6 Jun 2006 10:19:31 +0200 Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperato ! :) Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From david.douard at logilab.fr Tue Jun 6 04:44:20 2006 From: david.douard at logilab.fr (David Douard) Date: Tue, 6 Jun 2006 10:44:20 +0200 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? In-Reply-To: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> Message-ID: <20060606084419.GC1046@logilab.fr> On Mon, Jun 05, 2006 at 05:10:23PM -0400, David M. Cooke wrote: > I just ran into the fact that the power function for integer types > isn't handled in scalarmath yet. I'm going to add it, but I'm wondering > what people want when power overflows the integer type? > > Taking the concrete example of a = uint8(3), b = uint8(10), then should > a ** b return > > 1) the maximum integer for the type (255 here) > 2) 0 > 3) upcast to the largest type that will hold it (but what if it's > larger than our largest type? Return a Python long?) > 4) do the power using "long" like Python does, then downcast it to the > type (that would return 169 for the above example) > 5) something else? > > I'm leaning towards #3; if you do a ** 10, you get the right > answer (59049 as an int64scalar), because 'a' is upcasted to > int64scalar, since '10', a Python int, is converted to that type. > Otherwise, I would choose #1. I agree, #1 seems the better solution to me. BTW, I'm quite new on this list, and I don't know if this has already been discussed (I guess it has): why is uint_n arithmetic done in the ring Z/(2**n)Z (not sure about the maths correctness here)? I mean: >>> a = numpy.uint8(10) >>> a*a 100 >>> a*a*a # I'd like to have 255 here 232 >>> 1000%256 232 It would be really a nice feature to be able (by means of a numpy flag or so) to have bound-limited uint operations (especially when doing image processing...).
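(Editorial sketch: the bound-limited behaviour requested above is not a NumPy flag, but it can be emulated today by computing in a wider type and clipping. The helper name saturating_mul is illustrative, not an existing NumPy API:)

```python
import numpy as np

def saturating_mul(x, y):
    """Multiply uint8 values, clamping at 255 instead of wrapping mod 256.
    Illustrative helper only -- NumPy has no such flag built in."""
    wide = np.multiply(x, y, dtype=np.uint16)      # uint16 holds up to 255*255
    return np.minimum(wide, 255).astype(np.uint8)

a = np.uint8(10)
with np.errstate(over='ignore'):
    wrapped = a * a * a                            # modular arithmetic: 232
clamped = saturating_mul(saturating_mul(a, a), a)  # bound-limited: 255
```

For images this would typically be applied to whole uint8 arrays, where the upcast-and-clip costs one temporary array per operation.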
David -- David Douard LOGILAB, Paris (France) Python, Zope, Plone and Debian training: http://www.logilab.fr/formations Custom software development: http://www.logilab.fr/services Scientific computing: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL:
From simon at arrowtheory.com Tue Jun 6 13:50:58 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue, 6 Jun 2006 18:50:58 +0100 Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperato ! :) In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> Message-ID: <20060606185058.027a4c1c.simon@arrowtheory.com> On Tue, 6 Jun 2006 10:19:31 +0200 "Neven Gorsic" wrote: > > try pyInstaller. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com
From N.Gorsic at vipnet.hr Tue Jun 6 10:24:33 2006 From: N.Gorsic at vipnet.hr (Neven Gorsic) Date: Tue, 6 Jun 2006 16:24:33 +0200 Subject: [Numpy-discussion] How to get executable file from Python with NumPy import? Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C01311407@MAIL02.win.vipnet.hr> Py2exe doesn't work! At the end of compilation I get the message: The following modules appear to be missing: ['Pyrex', 'Pyrex.Compiler', '_curses', 'fcompiler.FCompiler', 'lib.add_newdoc', 'pre', 'pylab', 'setuptools', 'setuptools.command', 'setuptools.command.egg_info', 'win32api', 'win32con', 'win32pdh', 'numpy.core.equal', 'numpy.core.less', 'numpy.core.less_equal'] Neven -------------- next part -------------- An HTML attachment was scrubbed... URL:
From khinsen at cea.fr Tue Jun 6 12:22:31 2006 From: khinsen at cea.fr (Konrad Hinsen) Date: Tue, 6 Jun 2006 18:22:31 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On May 31, 2006, at 4:53, Travis Oliphant wrote: > Please help the developers by responding to a few questions.
> > 1) Have you transitioned or started to transition to NumPy (i.e. > import numpy)? No. > 2) Will you transition within the next 6 months? (if you answered > No to #1) I would like to, but I am not sure I will find the time. I am not in a hurry either, as Numeric continues to work fine. > 3) Please, explain your reason(s) for not making the switch. (if > you answered No to #2) Lack of time. Some of the changes from Numeric are subtle and require a careful analysis of the code, and then careful testing. For big applications, that's a lot of work. There are also modules (I am thinking of RNG) that have been replaced by something completely different that needs to be evaluated first. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Laboratoire Léon Brillouin, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France Tel.: +33-1 69 08 79 25 Fax: +33-1 69 08 82 61 E-Mail: konrad.hinsen at cea.fr ---------------------------------------------------------------------
From khinsen at cea.fr Tue Jun 6 12:27:05 2006 From: khinsen at cea.fr (Konrad Hinsen) Date: Tue, 6 Jun 2006 18:27:05 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> References: <447D051E.9000709@ieee.org> <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> Message-ID: <30F56ED3-2CCE-4442-9775-E368B3C58FA9@cea.fr> On May 31, 2006, at 8:13, Jens Jørgen Mortensen wrote: > Yes. The only problem is that ASE relies on Konrad Hinsen's > Scientific.IO.NetCDF module, which is still a Numeric thing. I saw > recently that this module has been converted to numpy and put in > SciPy/sandbox. What is the future of this module? Martin Wiechert recently sent me his adaptation to Numpy. I integrated his patches, checking only that they don't break the Numeric interface. I then checked that it compiles and runs the demo script correctly.
I am happy to send this version to anyone who wants to test-drive it. Personally I cannot really test it as all my application code that is based on it requires Numeric. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Laboratoire Léon Brillouin, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France Tel.: +33-1 69 08 79 25 Fax: +33-1 69 08 82 61 E-Mail: konrad.hinsen at cea.fr ---------------------------------------------------------------------
From bhendrix at enthought.com Tue Jun 6 14:43:37 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Tue, 06 Jun 2006 13:43:37 -0500 Subject: [Numpy-discussion] ANN: Python Enthought Edition Version 0.9.7 Released Message-ID: <4485CCD9.7050907@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.7 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 0.9.7 Release Notes: -------------------- Version 0.9.7 of Python Enthought Edition includes an update to version 1.0.7 of the Enthought Tool Suite (ETS) Package and bug fixes-- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.7.html About Python Enthought Edition: ------------------------------- Python 2.3.5, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com
From travis at enthought.com Tue Jun 6 14:05:43 2006 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 06 Jun 2006 13:05:43 -0500 Subject: [Numpy-discussion] array of tuples Message-ID: <4485C3F7.503@enthought.com> I'd like to construct an array of tuples and I'm not sure how (without looping).
Is there a quick way to do this with dtype? I've tried: >>> import numpy >>> x = [(1,2,3),(4,5,6)] >>> numpy.array(x) array([[1, 2, 3], [4, 5, 6]]) >>> numpy.array(x, dtype='p') array([[1, 2, 3], [4, 5, 6]]) >>> numpy.array(x, dtype='O') array([[1, 2, 3], [4, 5, 6]], dtype=object) Thanks, Travis From cookedm at physics.mcmaster.ca Tue Jun 6 16:02:49 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 16:02:49 -0400 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? In-Reply-To: <20060606084419.GC1046@logilab.fr> References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> <20060606084419.GC1046@logilab.fr> Message-ID: <20060606160249.0688320d@arbutus.physics.mcmaster.ca> On Tue, 6 Jun 2006 10:44:20 +0200 David Douard wrote: > On Mon, Jun 05, 2006 at 05:10:23PM -0400, David M. Cooke wrote: > > I just ran into the fact that the power function for integer types > > isn't handled in scalarmath yet. I'm going to add it, but I'm > > wondering what people want when power overflows the integer type? > > > > Taking the concrete example of a = uint8(3), b = uint8(10), then > > should a ** b return > > > > 1) the maximum integer for the type (255 here) > > 2) 0 > > 3) upcast to the largest type that will hold it (but what if it's > > larger than our largest type? Return a Python long?) > > 4) do the power using "long" like Python does, then downcast it to > > the type (that would return 169 for the above example) > > 5) something else? > > > > I'm leaning towards #3; if you do a ** 10, you get the right > > answer (59049 as an int64scalar), because 'a' is upcasted to > > int64scalar, since '10', a Python int, is converted to that type. > > Otherwise, I would choose #1. > > I agree, #1 seems the better solution for me. 
> > BTW, I'm quite new on this list, and I don't know if this has already > been discussed (I guess it has): why is uint_n arithmetic done > in the ring Z/(2**n)Z (not sure about the maths correctness here)? > I mean: > >>> a = numpy.uint8(10) > >>> a*a > 100 > >>> a*a*a # I'd like to have 255 here > 232 > >>> 1000%256 > 232 > History, and efficiency. Detecting integer overflow in C portably requires doing a division afterwards, or splitting the multiplication up into parts that won't overflow, so you can see if the sum would. Both of those options are pretty slow compared with multiplication. Now, mind you, our scalar types *do* check for overflow: they use a larger integer type for the result (or by splitting it up for the largest type). So you can check for overflow by setting the overflow handler: >>> seterr(over='raise') {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} >>> int16(32000) * int16(3) Traceback (most recent call last): File "", line 1, in ? FloatingPointError: overflow encountered in short_scalars Note that the integer array types don't check, though (huh, maybe they should). It's easy enough to use the multiply routine for the power, so you'll get overflow checking for free. > It would be really a nice feature to be able (by means of a numpy > flag or so) to have bound-limited uint operations (especially when > doing image processing...). If you want to supply a patch ... :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
From Chris.Barker at noaa.gov Tue Jun 6 16:21:56 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 13:21:56 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485C3F7.503@enthought.com> References: <4485C3F7.503@enthought.com> Message-ID: <4485E3E4.4000402@noaa.gov> Travis N.
Vaught wrote: > I'd like to construct an array of tuples and I'm not sure how (without > looping). Is this what you want? >>> import numpy as N >>> a = N.empty((2,),dtype=object) >>> a[:] = [(1,2,3),(4,5,6)] >>> a array([(1, 2, 3), (4, 5, 6)], dtype=object) >>> a.shape (2,) By the way, I notice that the object dtype is not in the numpy namespace. While this makes sense, as it's part of python, I keep getting confused because I do need to use numpy-specific dtypes for other things. I never use import *, so it might be a good idea to put the standard object dtypes in the numpy namespace too. Or maybe not, just thinking out loud. Note: PyObject is there, but isn't that a deprecated Numeric name? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov
From stefan at sun.ac.za Tue Jun 6 17:01:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 6 Jun 2006 23:01:14 +0200 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485C3F7.503@enthought.com> References: <4485C3F7.503@enthought.com> Message-ID: <20060606210114.GA3756@mentat.za.net> On Tue, Jun 06, 2006 at 01:05:43PM -0500, Travis N. Vaught wrote: > looping). Is there a quick way to do this with dtype?
> > I've tried: > > >>> import numpy > >>> x = [(1,2,3),(4,5,6)] > >>> numpy.array(x) > array([[1, 2, 3], > [4, 5, 6]]) > >>> numpy.array(x, dtype='p') > array([[1, 2, 3], > [4, 5, 6]]) > >>> numpy.array(x, dtype='O') > array([[1, 2, 3], > [4, 5, 6]], dtype=object) It works if you pre-allocate the array: In [18]: x = [(1,2),(3,4)] In [19]: z = N.empty(len(x),dtype='O') In [20]: z[:] = x In [21]: z Out[21]: array([(1, 2), (3, 4)], dtype=object) Regards Stéfan
From chanley at stsci.edu Tue Jun 6 17:03:10 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Jun 2006 17:03:10 -0400 Subject: [Numpy-discussion] byte swap in place Message-ID: <4485ED8E.4020708@stsci.edu> Hi, Is there a way to byte swap an ndarray in place? The "byteswap" method I have found on an ndarray object currently returns a new array. Example: In [16]: a = n.array([1,2,3,4,5]) In [17]: a Out[17]: array([1, 2, 3, 4, 5]) In [18]: b = a.byteswap() In [19]: b Out[19]: array([16777216, 33554432, 50331648, 67108864, 83886080]) In [20]: b[0] = 0 In [21]: b Out[21]: array([ 0, 33554432, 50331648, 67108864, 83886080]) In [22]: a.dtype Out[22]: dtype('<i4')
From cookedm at physics.mcmaster.ca Tue Jun 6 17:07:05 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 17:07:05 -0400 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485E3E4.4000402@noaa.gov> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> Message-ID: <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> On Tue, 06 Jun 2006 13:21:56 -0700 Christopher Barker wrote: > > > Travis N. Vaught wrote: > > I'd like to construct an array of tuples and I'm not sure how (without > > looping). > > Is this what you want? > > >>> import numpy as N > >>> a = N.empty((2,),dtype=object) > >>> a[:] = [(1,2,3),(4,5,6)] > >>> a > array([(1, 2, 3), (4, 5, 6)], dtype=object) > >>> a.shape > (2,) > > By the way, I notice that the object dtype is not in the numpy > namespace. While this makes sense, as it's part of python, I keep > getting confused because I do need to use numpy-specific dtypes for > other things. I never use import *, so it might be a good idea to put > the standard object dtypes in the numpy namespace too.
Or maybe not, > just thinking out loud. None of the Python types are (int, float, etc.). For one reason, various Python checkers complain about overwriting a builtin type, and plus, I think it's messy and a potential for bugs. numpy takes those as convenience types, and converts them to the appropriate dtype. If you want the dtype used, it's spelled with an appended _. So in this case you'd want dtype=N.object_. N.object0 works too. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Tue Jun 6 17:15:14 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 14:15:14 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> Message-ID: <4485F062.8010001@noaa.gov> David M. Cooke wrote: > If you want the dtype > used, it's spelled with an appended _. > > So in this case you'd want dtype=N.object_. N.object0 works too. That will work, thanks. But what does object0 mean? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Tue Jun 6 17:33:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Jun 2006 16:33:38 -0500 Subject: [Numpy-discussion] byte swap in place In-Reply-To: <4485ED8E.4020708@stsci.edu> References: <4485ED8E.4020708@stsci.edu> Message-ID: Christopher Hanley wrote: > Hi, > > Is there a way to byte swap a ndarray in place? The "byteswap" method I > have found on an ndarray object currently returns a new array. Depends. 
Do you want the actual bytes to swap, or are you content with getting a view that pretends the bytes are swapped? If the latter: >>> a = arange(5) >>> a.dtype dtype('>i4') >>> a.dtype = dtype('<i4') >>> a array([ 0, 16777216, 33554432, 50331648, 67108864]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From Chris.Barker at noaa.gov Tue Jun 6 17:35:25 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 06 Jun 2006 14:35:25 -0700 Subject: [Numpy-discussion] array of tuples In-Reply-To: <20060606210114.GA3756@mentat.za.net> References: <4485C3F7.503@enthought.com> <20060606210114.GA3756@mentat.za.net> Message-ID: <4485F51D.9030305@noaa.gov> Stefan van der Walt wrote: > In [19]: z = N.empty(len(x),dtype='O') Which brings up: What is the "preferred" way to refer to types? String typecode or object? -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov
From robert.kern at gmail.com Tue Jun 6 17:37:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Jun 2006 16:37:10 -0500 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485F51D.9030305@noaa.gov> References: <4485C3F7.503@enthought.com> <20060606210114.GA3756@mentat.za.net> <4485F51D.9030305@noaa.gov> Message-ID: Christopher Barker wrote: > Stefan van der Walt wrote: > >>In [19]: z = N.empty(len(x),dtype='O') > > Which brings up: > > What is the "preferred" way to refer to types? String typecode or object? Object! The string typecodes are for backwards compatibility only. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From cookedm at physics.mcmaster.ca Tue Jun 6 18:02:37 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 6 Jun 2006 18:02:37 -0400 Subject: [Numpy-discussion] array of tuples In-Reply-To: <4485F062.8010001@noaa.gov> References: <4485C3F7.503@enthought.com> <4485E3E4.4000402@noaa.gov> <20060606170705.73a4178c@arbutus.physics.mcmaster.ca> <4485F062.8010001@noaa.gov> Message-ID: <20060606180237.7f2707d5@arbutus.physics.mcmaster.ca> On Tue, 06 Jun 2006 14:15:14 -0700 Christopher Barker wrote: > David M. Cooke wrote: > > If you want the dtype > > used, it's spelled with an appended _. > > > > So in this case you'd want dtype=N.object_. N.object0 works too. > > That will work, thanks. But what does object0 mean? I think it's "type object, default size". It's a holdover from Numeric. int0, for instance, is the same as int_ (= int64 on my 64-bit box, for instance). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From bhendrix at enthought.com Tue Jun 6 14:43:37 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Tue, 06 Jun 2006 13:43:37 -0500 Subject: [Numpy-discussion] ANN: Python Enthought Edition Version 0.9.7 Released Message-ID: Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.7 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 0.9.7 Release Notes: -------------------- Version 0.9.7 of Python Enthought Edition includes an update to version 1.0.7 of the Enthought Tool Suite (ETS) Package and bug fixes-- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.7.html About Python Enthought Edition: ------------------------------- Python 2.3.5, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com From chanley at stsci.edu Wed Jun 7 12:50:33 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 07 Jun 2006 12:50:33 -0400 Subject: [Numpy-discussion] byte swap in place In-Reply-To: References: <4485ED8E.4020708@stsci.edu> Message-ID: <448703D9.80806@stsci.edu> Robert Kern wrote: > > Depends. Do you want the actual bytes to swap, or are you content with getting a > view that pretends the bytes are swapped? If the latter: I want the actual bytes to swap.
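[Editor's note: an in-place swap of the kind Chris asks for can be sketched with ndarray.byteswap; this is a sketch against current NumPy, shown for contrast with the view-based approach Robert describes above.]

```python
import numpy as np

a = np.arange(5, dtype=np.int32)

# byteswap(True) reverses the actual bytes of each element in place;
# the dtype is left untouched, so the values now read back differently.
a.byteswap(True)
print(a[1])  # 16777216 (0x01000000): the bytes of 1, reversed

# Flipping the dtype's byte-order flag as well restores the original
# values without touching memory again.
b = a.view(a.dtype.newbyteorder())
print(b[1])  # 1
```

Combining byteswap with a newbyteorder view is the usual way to change the stored byte order while keeping the logical values intact.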
Thanks, Chris From josh8912 at yahoo.com Wed Jun 7 14:18:11 2006 From: josh8912 at yahoo.com (JJ) Date: Wed, 7 Jun 2006 11:18:11 -0700 (PDT) Subject: [Numpy-discussion] trouble installing on fedora core 5 64 bit Message-ID: <20060607181811.56523.qmail@web51713.mail.yahoo.com> Hello. I am having some trouble getting numpy installed on an AMD 64 bit Fedora 5 machine. I have loaded atlas, blas, and lapack using yum. I can see their library files in /usr/lib64/atlas/ (files such as libblas.so.3.0). But the setup program will not run. I have obtained the latest version of numpy using svn co http://svn.scipy.org/svn/numpy/trunk numpy. I have created a site.cfg file containing:

[atlas]
library_dirs = /usr/lib64
atlas_libs = lapack, blas, cblas, atlas

But when I try to run python setup.py install it appears that none of the libraries are seen. I get the following error messages and output. Can anyone offer help? Thanks. [root at fedora-newamd numpy]# python setup.py install Running from numpy source directory.
No module named __svn_version__ F2PY Version 2_2587 blas_opt_info: blas_mkl_info: looking libraries mkl,vml,guide in /usr/local/lib but found None looking libraries mkl,vml,guide in /usr/lib but found None NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None NOT AVAILABLE atlas_blas_info: looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/lib64/atlas but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None looking libraries lapack,blas,cblas,atlas in /usr/lib but found None NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1281: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: looking libraries blas in /usr/local/lib but found None looking libraries blas in /usr/local/lib but found None looking libraries blas in /usr/lib but found None looking libraries blas in /usr/lib but found None NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1290: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From jh at oobleck.astro.cornell.edu Wed Jun 7 16:52:33 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 7 Jun 2006 16:52:33 -0400 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> > Date: Fri, 2 Jun 2006 18:04:32 -0400 > From: "Jonathan Taylor" > Subject: Re: [Numpy-discussion] Suggestions for NumPy > To: numpy-discussion at lists.sourceforge.net > Message-ID: > <463e11f90606021504h742e92e4t5ff418d1e29e426 at mail.gmail.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > My suggestion would be to have both numpy.org and scipy.org be the > exact same page, but make it extremely clear that there are two > different projects on the front page. > Cheers. > Jon. The goal of the web is to make information easy to find. The easiest and most successful way of doing that is to answer many needs in one place, hence the existence of "portal" pages, which scipy.org bills itself as. The relationship between scipy and numpy is laid out in its front page text. With two (actually many more) packages distributed separately, there will always be confused people, but having one main site that tells the whole story and provides comprehensive information will be the quickest way to deconfuse them. Conversely, a plethora of pages is a poor marketing strategy, as we have been learning with the zoo that's out there already. My suggestion is that all the other pages be automatic redirects to the scipy.org page or subpages thereof. 
I know that will probably make some people feel their toes have been stepped on. We could consider a website name change to avoid that, but I hope we don't have to. Unite and conquer... --jh-- From edin.salkovic at gmail.com Tue Jun 6 05:20:57 2006 From: edin.salkovic at gmail.com (=?UTF-8?Q?Edin_Salkovi=C4=87?=) Date: Tue, 6 Jun 2006 11:20:57 +0200 Subject: [Numpy-discussion] How to make exe from Python program with import NumPy line? Py2exe doesn't cooperato ! :) In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C01311355@MAIL02.win.vipnet.hr> Message-ID: <63eb7fa90606060220v5c7848c7t4b96c47ca44ff5d@mail.gmail.com> Also see these links, if you haven't already done so: http://mail.python.org/pipermail/python-list/2006-April/336758.html http://starship.python.net/crew/theller/moin.cgi/Py2Exe On 6/6/06, Neven Gorsic wrote: > > From nicholasinparis at gmail.com Wed Jun 7 04:15:27 2006 From: nicholasinparis at gmail.com (Nicholas) Date: Wed, 7 Jun 2006 10:15:27 +0200 Subject: [Numpy-discussion] crash in multiarray.pyd Message-ID: Hi, I installed numpy 0.9.8 and when I try to import pylab I get a crash in multiarray.pyd. I then tried numpy 0.9.6, this cured the pylab import but now I cannot import scipy without crashing (again multiarray.pyd). I have tried complete reinstalls on 2 machines now with the same behaviour, so I don't believe it is some system-dependent gremlin. Any suggestions? XP, Python 2.4.3, Matplotlib 87.2, Scipy 0.4.9 Nicholas -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetosch at gmx.net Wed Jun 7 17:56:08 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed, 07 Jun 2006 23:56:08 +0200 Subject: [Numpy-discussion] crash in multiarray.pyd In-Reply-To: References: Message-ID: <44874B78.6020301@gmx.net> Nicholas schrieb: > Hi, > > I installed numpy 0.9.8 and when I try to import pylab I get a crash in > multiarray.pyd.
I then tried numpy 0.9.6, this cured the pylab import > but now I cannot import scipy without crashing (again multiarray.pyd). I > have tried complete reinstalls on 2 machines now with same behaviour so > I dont believe it is some system dependent gremlin. Any suggestions? > > XP, Python 2.4.3, Matplotlib 87.2, Scipy 0.4.9 > scipy 0.4.8 should be compatible with numpy 0.9.6, see new.scipy.org. The next matplotlib release compatible with numpy 0.9.8 is hopefully coming soon! (but that's just a wish, not an informed opinion). -sven From Chris.Barker at noaa.gov Wed Jun 7 18:00:29 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 07 Jun 2006 15:00:29 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> Message-ID: <44874C7D.4050208@noaa.gov> Joe Harrington wrote: > My > suggestion is that all the other pages be automatic redirects to the > scipy.org page or subpages thereof. if that means something like: www.numpy.scipy.org (or www.scipy.org/numpy ) Then I'm all for it. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Wed Jun 7 18:11:27 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 7 Jun 2006 16:11:27 -0600 Subject: [Numpy-discussion] trouble installing on fedora core 5 64 bit In-Reply-To: <20060607181811.56523.qmail@web51713.mail.yahoo.com> References: <20060607181811.56523.qmail@web51713.mail.yahoo.com> Message-ID: JJ, I had that problem, started to put the paths in explicitly, noticed that the code should work anyway, deleted my changes, ran again, and it worked fine. I can't tell you what the problem was or what the solution was, I can only say I've seen the same thing on fc5. 
When you do install, it is also a good idea to delete the numpy directory in site-packages beforehand. Chuck On 6/7/06, JJ wrote: > > Hello. I am having some trouble getting numpy > installed on an AMD 64 bit Fedora 5 machine. I have > loaded atlas, blas, and lapack using yum. I can see > their library files in /usr/lib64/atlas/ (files such > as libblas.so.3.0). But the setup program will not > run. I have obtained the latest version of numpy > using svn co http://svn.scipy.org/svn/numpy/trunk > numpy. I have created a site.cfg file containing: > > [atlas] > library_dirs = /usr/lib64 > atlas_libs = lapack, blas, cblas, atlas > > But when I try to run python setup.py install it > appears that none of the libraries are seeen. I get > the following error messages and output. Can anyone > offer help? Thanks. > > > [root at fedora-newamd numpy]# python setup.py install > Running from numpy source directory. > No module named __svn_version__ > F2PY Version 2_2587 > blas_opt_info: > blas_mkl_info: > looking libraries mkl,vml,guide in /usr/local/lib > but found None > looking libraries mkl,vml,guide in /usr/lib but > found None > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > NOT AVAILABLE > > atlas_blas_info: > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib64/atlas but found None > looking libraries lapack,blas,cblas,atlas in > /usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > 
/usr/local/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > looking libraries lapack,blas,cblas,atlas in > /usr/lib but found None > NOT AVAILABLE > > /usr/local/numpy/numpy/distutils/system_info.py:1281: > UserWarning: > Atlas (http://math-atlas.sourceforge.net/) > libraries not found. > Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [atlas]) or > by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > blas_info: > looking libraries blas in /usr/local/lib but found > None > looking libraries blas in /usr/local/lib but found > None > looking libraries blas in /usr/lib but found None > looking libraries blas in /usr/lib but found None > NOT AVAILABLE > > /usr/local/numpy/numpy/distutils/system_info.py:1290: > UserWarning: > Blas (http://www.netlib.org/blas/) libraries not > found. > Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [blas]) or > by setting > the BLAS environment variable. > warnings.warn(BlasNotFoundError.__doc__) > blas_src_info: > NOT AVAILABLE > > > > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Wed Jun 7 18:22:27 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Jun 2006 16:22:27 -0600 Subject: [Numpy-discussion] crash in multiarray.pyd In-Reply-To: <44874B78.6020301@gmx.net> References: <44874B78.6020301@gmx.net> Message-ID: On 6/7/06, Sven Schreiber wrote: > The next matplotlib release compatible with numpy 0.9.8 is hopefully > coming soon! (but that's just a wish, not an informed opinion). Actually it was released yesterday, it's 0.87.3: http://sourceforge.net/project/showfiles.php?group_id=80706 I just built it against fresh numpy from SVN In [2]: numpy.__version__ Out[2]: '0.9.9.2587' and it works just fine so far. Cheers, f From strawman at astraw.com Wed Jun 7 19:05:12 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 07 Jun 2006 16:05:12 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <44874C7D.4050208@noaa.gov> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> <44874C7D.4050208@noaa.gov> Message-ID: <44875BA8.806@astraw.com> Christopher Barker wrote: > Joe Harrington wrote: > >> My >> suggestion is that all the other pages be automatic redirects to the >> scipy.org page or subpages thereof. >> +1 > > if that means something like: > > www.numpy.scipy.org (or www.scipy.org/numpy ) > > Then I'm all for it. > I just made www.scipy.org/numpy redirect to the already-existing www.scipy.org/NumPy So, hopefully you're on-board now. BTW, this is the reason why we have a wiki -- if you don't like something it says, how the site is organized, or whatever, please just jump in and edit it. From charlesr.harris at gmail.com Mon Jun 5 19:42:03 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 5 Jun 2006 17:42:03 -0600 Subject: [Numpy-discussion] integer power in scalarmath: how to handle overflow? 
In-Reply-To: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> References: <20060605171023.1727bda9@arbutus.physics.mcmaster.ca> Message-ID: You could use the C approach and use modular arithmetic where the product simply wraps around. The Python approach would be nice if feasible, but what are you going to do for integers larger than the largest numpy data type? So I vote for modular arithmetic because numpy is sorta C. On 6/5/06, David M. Cooke wrote: > > I just ran into the fact that the power function for integer types > isn't handled in scalarmath yet. I'm going to add it, but I'm wondering > what people want when power overflows the integer type? > > Taking the concrete example of a = uint8(3), b = uint8(10), then should > a ** b return > > 1) the maximum integer for the type (255 here) > 2) 0 > 3) upcast to the largest type that will hold it (but what if it's > larger than our largest type? Return a Python long?) > 4) do the power using "long" like Python does, then downcast it to the > type (that would return 169 for the above example) > 5) something else? > > I'm leaning towards #3; if you do a ** 10, you get the right > answer (59049 as an int64scalar), because 'a' is upcasted to > int64scalar, since '10', a Python int, is converted to that type. > Otherwise, I would choose #1. > > -- > |>|\/|< > /----------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yves.frederix at cs.kuleuven.be Thu Jun 8 04:01:49 2006 From: yves.frederix at cs.kuleuven.be (Yves Frederix) Date: Thu, 08 Jun 2006 10:01:49 +0200 Subject: [Numpy-discussion] Typo in SWIG example Message-ID: <4487D96D.7090203@cs.kuleuven.be> Hi, When having a look at the SWIG example under trunk/numpy/doc/swig, I noticed a typing error in numpy.i. You can find the patch in attachment. Cheers, YVES Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.i.patch Type: text/x-patch Size: 510 bytes Desc: not available URL: From strawman at astraw.com Thu Jun 8 04:54:40 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 01:54:40 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper Message-ID: <4487E5D0.40403@astraw.com> I've put together some .debs for numpy-0.9.8. There are binaries compiled for amd64 and i386 architectures of Ubuntu Dapper, and I suspect these will build from source for just about any Debian-based distro and architecture. The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the following lines to your /etc/apt/sources.list: deb http://sefton.astraw.com/ubuntu/ dapper/ deb-src http://sefton.astraw.com/ubuntu/ dapper/ Although this is the culmination of my first serious attempt Debianizing something, I've attempted to build these "properly" (using inspiration from Matthias Klose's Numeric and numarray packages for Debian and Ubuntu, although I've updated the build system to use CDBS). The numpy source has a build dependency on setuptools (0.6b2), which is also available at the repository. Numpy doesn't get installed as an .egg, but it carries along .egg-info, which means that numpy can be part of a setuptools dependency specification. This was done using the --single-version-externally-managed command for setuptools. 
I'm building this repository to serve some of my needs at work, and I hope to add recent versions of several other projects including matplotlib and scipy in the coming days. I hope to be able to keep the repository up-to-date over time and to respond to bug reports and questions, although the amount of time I have to devote to this sort of stuff is unfortunately often near zero. If I get some positive feedback, I'm likely to add this to the scipy.org download page. Also, I hope the official Debian and Ubuntu distros pick up numpy soon, and perhaps this will speed them along. From arnd.baecker at web.de Thu Jun 8 05:35:09 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Jun 2006 11:35:09 +0200 (CEST) Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: Hi Andrew, first thanks a lot for your effort - I am certain it will be very much appreciated! On Thu, 8 Jun 2006, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > > The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the > following lines to your /etc/apt/sources.list: > deb http://sefton.astraw.com/ubuntu/ dapper/ > deb-src http://sefton.astraw.com/ubuntu/ dapper/ > > Although this is the culmination of my first serious attempt Debianizing > something, I've attempted to build these "properly" (using inspiration > from Matthias Klose's Numeric and numarray packages for Debian and > Ubuntu, although I've updated the build system to use CDBS). > > The numpy source has a build dependency on setuptools (0.6b2), which is > also available at the repository. 
Numpy doesn't get installed as an > .egg, but it carries along .egg-info, which means that numpy can be part > of a setuptools dependency specification. This was done using the > --single-version-externally-managed command for setuptools. > > I'm building this repository to serve some of my needs at work, and I > hope to add recent versions of several other projects including > matplotlib and scipy in the coming days. I hope to be able to keep the > repository up-to-date over time and to respond to bug reports and > questions, although the amount of time I have to devote to this sort of > stuff is unfortunately often near zero. Alright, let's start with the first question: We are still running debian sarge and therefore would have to build the above from source. I used the following steps: - put deb-src http://sefton.astraw.com/ubuntu/ dapper/ into /etc/apt/sources.list - apt-get update # update the source package search list - apt-get source python-numpy - cd python-numpy-0.9.8/ dpkg-buildpackage -rfakeroot and get: dpkg-buildpackage: source package is python-numpy dpkg-buildpackage: source version is 0.9.8-0ads1 dpkg-buildpackage: source maintainer is Andrew Straw dpkg-buildpackage: host architecture is i386 dpkg-checkbuilddeps: Unmet build dependencies: cdbs (>= 0.4.23-1.1) build-essential python2.4-dev python-setuptools (>= 0.6b2) python2.3-setuptools (>= 0.6b2) python2.4-setuptools (>= 0.6b2) dpkg-checkbuilddeps: Build conflicts: atlas3-base dpkg-buildpackage: Build dependencies/conflicts unsatisfied; aborting. dpkg-buildpackage: (Use -d flag to override.) What worries me is a) the Build conflicts: atlas3-base b) and the python2.3-dev *and* python2.4-dev dependency Clearly, python-setuptools and cdbs are not yet installed on my system (should be no problem). > If I get some positive feedback, I'm likely to add this to the scipy.org > download page. Also, I hope the official Debian and Ubuntu distros pick > up numpy soon, and perhaps this will speed them along. 
yes - that would be brilliant! What about scipy: presently debian sarge comes with scipy 0.3.2. Installing old-scipy and new-scipy side-by side seems impossible (unless one does something like wxversion select stuff...) - should the new scipy debs just replace the old ones? Best, Arnd From pau.gargallo at gmail.com Thu Jun 8 05:51:05 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 8 Jun 2006 11:51:05 +0200 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> On 6/8/06, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > > The URL is http://sefton.astraw.com/ubuntu/dapper and you would add the > following lines to your /etc/apt/sources.list: > deb http://sefton.astraw.com/ubuntu/ dapper/ > deb-src http://sefton.astraw.com/ubuntu/ dapper/ > > Although this is the culmination of my first serious attempt Debianizing > something, I've attempted to build these "properly" (using inspiration > from Matthias Klose's Numeric and numarray packages for Debian and > Ubuntu, although I've updated the build system to use CDBS). > > The numpy source has a build dependency on setuptools (0.6b2), which is > also available at the repository. Numpy doesn't get installed as an > .egg, but it carries along .egg-info, which means that numpy can be part > of a setuptools dependency specification. This was done using the > --single-version-externally-managed command for setuptools. > > I'm building this repository to serve some of my needs at work, and I > hope to add recent versions of several other projects including > matplotlib and scipy in the coming days. 
I hope to be able to keep the > repository up-to-date over time and to respond to bug reports and > questions, although the amount of time I have to devote to this sort of > stuff is unfortunately often near zero. > > If I get some positive feedback, I'm likely to add this to the scipy.org > download page. Also, I hope the official Debian and Ubuntu distros pick > up numpy soon, and perhaps this will speed them along. > cool, debian packages will be great, thanks!! is your effort somehow related to http://packages.debian.org/experimental/python/python2.3-numpy ? it is a bit out of date, but already in experimental. cheers, pau From alexandre.guimond at mirada-solutions.com Thu Jun 8 06:39:00 2006 From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond) Date: Thu, 8 Jun 2006 11:39:00 +0100 Subject: [Numpy-discussion] ndarray of matrices Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> Hi all. I work mainly with "volume" (3d) images, and numpy.ndarray answers most of my needs (addition of images, etc.). The problem I'm faced with now is that I have images of matrices and vectors, and I would like image_of_matrices * image_of_vector to take the dot product of each of my matrices with each of my vectors, and image_of_matrices.mean() to give me the mean matrix. Basically, I want the same functionality that is currently provided for scalars, but applied to matrices. It seems that a nice way of doing this would be to have an ndarray of numpy.matrix, but this isn't supported, it seems. Can anyone recommend a good way of implementing this? I'm new to numpy and I'm not sure if subclassing ndarray is a good idea, since I'll have to overload all the operators and I don't believe this will result in a very fast implementation, but I might be mistaken. Another possibility may be to create a new dtype for numpy.matrix, but I don't know if this is possible. Anyone have recommendations? Thx for any help. Alex.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From pau.gargallo at gmail.com Thu Jun 8 08:42:47 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 8 Jun 2006 14:42:47 +0200 Subject: [Numpy-discussion] ndarray of matrices In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19907@oxfh5f1a> Message-ID: <6ef8f3380606080542m5549f2e6if4ee7add3cedd17b@mail.gmail.com> On 6/8/06, Alexandre Guimond wrote: > > > > > Hi all. > > > > i work mainly with "volume" (3d) images, and numpy.ndarray answers most of > my needs (addition of images, etc.). The problem I'm faced now with is that > I have images of matrices and vectors and would like that when I do > image_of_matrices * image_of_vector is does the dot product of each of my > matrices with all of my vectors, and when I do image_of_matrices.mean() it > gives me the mean matrix. Basically, I want the same functionalities that > are currently provided with scalars, but applied to matrices. > > > > It seems that a nice way of doing this is to have and ndarray of > numpy.matrix, but this isn't supported it seems. Can anyone recommend a good > way of implementing this?
I'm new with the numpy thing > and I'm not sure if > subclassing ndarray is a good idea since I'll have to overload all the > operators and i don't believe this will result in a very fast > implementation, but I might be mistaken. Another possibility may be to > create a new dtype for numpy.matrix, but I don't know if this is possible. > Anyone have recommandations? > > > > Thx for any help. > Several of us have been wondering about the best way to do this kind of thing. We were discussing this before (http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3130104), and some solutions were proposed, but we still don't have a definitive answer. Building arrays of matrix objects would be too inefficient. For me the best thing would be to have n-dimensional universal functions, but those don't exist yet. Meanwhile, I am using the following code (which is not *the* solution):

from numpy import *

nz,ny,nx = 1,1,1
im_of_mat = rand( nz, ny, nx, 3,3 )
im_of_vec = rand( nz, ny, nx, 3 )
im_of_products = ( im_of_mat * im_of_vec[...,newaxis,:] ).sum(axis=-1)

# test that everything is ok
for m,v,p in zip(im_of_mat.reshape(-1,3,3), im_of_vec.reshape(-1,3), im_of_products.reshape(-1,3)):
    assert allclose( dot(m,v), p )

pau From cimrman3 at ntc.zcu.cz Thu Jun 8 08:44:49 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 14:44:49 +0200 Subject: [Numpy-discussion] argsort question Message-ID: <44881BC1.3090102@ntc.zcu.cz> Hi all, I have just lost some time finding a bug related to the fact that argsort does not preserve the order of an array that is already sorted; see the example below. For me, it would be sufficient to mention this fact in the docstring, although having an order-preserving argsort is also an option :). What do the developers think?

In [33]:a = nm.zeros( 10000 )
In [34]:b = nm.arange( 10000 )
In [35]:nm.alltrue( nm.argsort( a ) == b )
Out[35]:False

r.
From oliphant.travis at ieee.org Thu Jun 8 11:15:38 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Jun 2006 09:15:38 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: <44883F1A.9010403@ieee.org> Robert Cimrman wrote: > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? > > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > You want a "stable" sorting algorithm like the "mergesort". Use the argsort method with the mergesort kind option: a.argsort(kind='merge') -Travis From cimrman3 at ntc.zcu.cz Thu Jun 8 11:38:30 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 17:38:30 +0200 Subject: [Numpy-discussion] argsort question In-Reply-To: <44883F1A.9010403@ieee.org> References: <44881BC1.3090102@ntc.zcu.cz> <44883F1A.9010403@ieee.org> Message-ID: <44884476.6040001@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > >>I have just lost some time to find a bug related to the fact, that >>argsort does not preserve the order of an array that is already sorted, >>see the example below. For me, it would be sufficient to mention this >>fact in the docstring, although having order preserving argsort is also >>an option :). What do the developers think? >> >>In [33]:a = nm.zeros( 10000 ) >>In [34]:b = nm.arange( 10000 ) >>In [35]:nm.alltrue( nm.argsort( a ) == b ) >>Out[35]:False >> > You want a "stable" sorting algorithm like the "mergesort". Use the > argsort method with the mergesort kind option: > > a.argsort(kind='merge') Thank you, Travis.
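[The behaviour Travis describes can be checked directly. A minimal sketch — note that current NumPy spells the kind out in full as 'mergesort'; the abbreviated 'merge' used in the thread worked in the NumPy of the day:]

```python
import numpy as np

a = np.zeros(10000, dtype=int)

# The default quicksort is not stable: for an array of all-equal
# elements, the returned indices need not be in original order
idx_default = np.argsort(a)

# Mergesort is stable, so equal elements keep their original order
idx_stable = np.argsort(a, kind='mergesort')

assert (idx_stable == np.arange(10000)).all()
```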
Now I see that the function argsort in oldnumeric.py has a different docstring than the array method argsort, which mentions the 'kind' keyword argument. Is the argsort function going to be deprecated? If not, is it possible to synchronize the docstrings? Also, a note (in the docs) about which algorithm is stable would be handy. regards, r. From charlesr.harris at gmail.com Thu Jun 8 11:34:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jun 2006 09:34:05 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: Robert, Modifying your example gives In [3]: import numpy as nm In [4]: a = nm.zeros( 10000 ) In [5]: b = nm.arange( 10000 ) In [6]: nm.alltrue( a.argsort(kind="merge" ) == b ) Out[6]: True On 6/8/06, Robert Cimrman wrote: > > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? > > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > r. > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cimrman3 at ntc.zcu.cz Thu Jun 8 11:42:22 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Jun 2006 17:42:22 +0200 Subject: [Numpy-discussion] argsort question In-Reply-To: References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: <4488455E.6050200@ntc.zcu.cz> Charles R Harris wrote: > Robert, > > Modifying your example gives > > In [3]: import numpy as nm > > In [4]: a = nm.zeros( 10000 ) > In [5]: b = nm.arange( 10000 ) > In [6]: nm.alltrue( a.argsort(kind="merge" ) == b ) > Out[6]: True Thanks for all the answers! r. From charlesr.harris at gmail.com Thu Jun 8 11:21:53 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Jun 2006 09:21:53 -0600 Subject: [Numpy-discussion] argsort question In-Reply-To: <44881BC1.3090102@ntc.zcu.cz> References: <44881BC1.3090102@ntc.zcu.cz> Message-ID: Robert, Argsort doesn't preserve order by default because quicksort is not a stable sort. Try using the kind="merge" option and see what happens. Or try lexsort, which is targeted at just this sort of sort and uses merge sort. See the documentation here. http://scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722 Chuck PS: The function argsort doesn't seem to support this extension in the version I am using (time for another svn update), so you may have to do something like >>> a = empty(50) >>> a.argsort(kind="merge") array([48, 47, 46, 0, 1, 49, 37, 12, 22, 38, 11, 2, 10, 36, 40, 25, 18, 6, 17, 4, 3, 20, 24, 43, 33, 9, 7, 35, 32, 8, 23, 21, 5, 28, 31, 30, 29, 26, 27, 19, 44, 13, 14, 15, 34, 39, 41, 42, 16, 45]) On 6/8/06, Robert Cimrman wrote: > > Hi all, > > I have just lost some time to find a bug related to the fact, that > argsort does not preserve the order of an array that is already sorted, > see the example below. For me, it would be sufficient to mention this > fact in the docstring, although having order preserving argsort is also > an option :). What do the developers think? 
> > In [33]:a = nm.zeros( 10000 ) > In [34]:b = nm.arange( 10000 ) > In [35]:nm.alltrue( nm.argsort( a ) == b ) > Out[35]:False > > r. > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Thu Jun 8 13:20:05 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 10:20:05 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: References: <4487E5D0.40403@astraw.com> Message-ID: <44885C45.7040506@astraw.com> Arnd Baecker wrote: > What worries me is > a) the Build conflicts: atlas3-base > I hoped to investigate further and post afterwards, but my preliminary findings that led to this decision are: 1) building with atlas (atlas3-base and atlas3-base-dev) caused a significant slowdown (~10x) on my simple test on amd64 arch: import timeit shape = '(40,40)' t2 = timeit.Timer('a=ones(shape=%s);svd(a)'%shape,'from numpy import ones; from numpy.linalg import svd') print "NumPy: ", t2.repeat(5,500) 2) Even having atlas installed (atlas3-base on amd64) caused a significant slowdown (~2x) on that test. This was similar to the case for i386, where I installed atlas3-sse2. 3) This is done in the source packages by Matthias Klose for both numeric and numarray, too. I figured he knows what he's doing. > b) and the python2.3-dev *and* python2.4-dev dependency > This is a _build_ dependency. The source package builds python2.3-numpy and python2.4-numpy, so it needs Python.h for both. > Clearly, python-setuptools and cdbs are not yet installed > on my system (should be no problem). > I hope the setuptools issue, in particular, does not present a problem.
As I said, I have created this repository for work, and I find setuptools to be invaluable for maintaining order amongst all the Python packages I use internally. In any case, this is again only a build dependency -- all it does is creates a numpy-0.9.8-py2.x.egg-info directory in site-packages alongside numpy. Let me be clear, since there's a lot of trepidation regarding setuptools: there is no use of setuptools (or even installation of setuptools) required to use these packages. Setuptools is required only to build from source. >> If I get some positive feedback, I'm likely to add this to the scipy.org >> download page. Also, I hope the official Debian and Ubuntu distros pick >> up numpy soon, and perhaps this will speed them along. >> > > yes - that would be brilliant! > OK, I'll wait a couple of days for some positive confirmation that this stuff works, (even from the various systems I'm setting up this repository for), and then I'll post it on the website. > What about scipy: presently debian sarge comes with > scipy 0.3.2. Installing old-scipy and new-scipy side-by side > seems impossible (unless one does something like wxversion select > stuff...) - should the new scipy debs just replace the old ones? > Unless you do some apt-pinning, I think any new scipy (0.4.x) in any repository in your sources list will automatically override the old (0.3.x) simply via the versioning mechanisms of apt-get. I like the idea of a wxversion-alike, but I've shifted all my code to use numpy and the new scipy, so I don't have any motivation to do any implementation. 
From strawman at astraw.com Thu Jun 8 13:33:58 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 10:33:58 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> References: <4487E5D0.40403@astraw.com> <6ef8f3380606080251i70694910td399b86708ba1061@mail.gmail.com> Message-ID: <44885F86.2010503@astraw.com> Pau Gargallo wrote: > is your effort somehow related to > http://packages.debian.org/experimental/python/python2.3-numpy > ? > > it is a bit out of date, but already in experimental. > I did have a look at their packaging infrastructure. It was breaking for me with numpy-0.9.8, so I started my debian/rules from scratch (and tried several methods along the way -- both debhelper and cdbs based). Now, upon re-looking at their debian/rules which is also cdbs based, I can see they have some nice code I should use (regarding installation of documentation and f2py). I'll try to integrate their changes into my next release. At that point I may simply be maintaining a more up-to-date version of theirs. They also package new scipy. I'll see if I can leverage their efforts when I try to package that. 
From svetosch at gmx.net Thu Jun 8 13:56:57 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Thu, 08 Jun 2006 19:56:57 +0200 Subject: [Numpy-discussion] 5 bugs in numpy 0.9.8 (was: remaining matrix-non-preserving functions) In-Reply-To: <44819FFB.3050507@gmx.net> References: <44819FFB.3050507@gmx.net> Message-ID: <448864E9.9010007@gmx.net> Well as I got no replies it seems my earlier title wasn't drastic enough ;-) And mere mortals like me can't seem to file new tickets anymore, so I'm re-posting a summary here: affected functions: diff vstack hstack outer msort symptom: given numpy-matrices as inputs, these functions still return numpy-arrays (as opposed to the applicable rest of numpy's functions) Cheers, Sven Sven Schreiber schrieb: > Hi all, > > I just discovered that the diff function returns a numpy-array even for > matrix inputs. Since I'm a card-carrying matrix fanatic, I hope that > behavior qualifies as a bug. > > Then I went through some (most?) other functions/methods for which IMO > it's best to return matrices if the input is also a matrix-type. I found > that the following functions share the problem of diff (see below for > illustrations): > > vstack and hstack (although I always use r_ and c_ and they work fine > with matrices) > > outer > > msort > > > Should I open new tickets? (Or has this already been fixed since 0.9.8, > which I used because this time building the svn version failed for me?) 
> > Cheers, > Sven > >>>> n.__version__ > '0.9.8' >>>> a > matrix([[1, 0, 0], > [0, 1, 0], > [0, 0, 1]]) >>>> b > matrix([[0, 0, 0], > [0, 0, 0]]) >>>> n.diff(a) > array([[-1, 0], > [ 1, -1], > [ 0, 1]]) >>>> n.outer(a,b) > array([[0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0], > [0, 0, 0, 0, 0, 0]]) >>>> n.msort(a) > array([[0, 0, 0], > [0, 0, 0], > [1, 1, 1]]) >>>> n.vstack([a,b]) > array([[1, 0, 0], > [0, 1, 0], > [0, 0, 1], > [0, 0, 0], > [0, 0, 0]]) >>>> n.hstack([a,b.T]) > array([[1, 0, 0, 0, 0], > [0, 1, 0, 0, 0], > [0, 0, 1, 0, 0]]) > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From robert.kern at gmail.com Thu Jun 8 14:37:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 13:37:01 -0500 Subject: [Numpy-discussion] 5 bugs in numpy 0.9.8 In-Reply-To: <448864E9.9010007@gmx.net> References: <44819FFB.3050507@gmx.net> <448864E9.9010007@gmx.net> Message-ID: Sven Schreiber wrote: > Well as I got no replies it seems my earlier title wasn't drastic enough ;-) > And mere mortals like me can't seem to file new tickets anymore, so I'm > re-posting a summary here: Of course you can file new tickets. You just have to register an account. Click on the "Register" link in the upper right-hand corner of the Trac page. We had to disallow unauthenticated ticket creation and wiki editing because we were getting hit daily by spammers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From hetland at tamu.edu Thu Jun 8 15:31:31 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu, 8 Jun 2006 14:31:31 -0500 Subject: [Numpy-discussion] eig hangs Message-ID: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that lapack_lite would take over. For the moment, I am not concerned about speed -- I just want something that will work with small matricies. I installed numpy, and it passes all of the tests OK, but it hangs when doing eig: u, v = linalg.eig(rand(10,10)) # ....lots of nothing.... Do you *need* the linear algebra libraries for eig? BTW, inverse seems to work fine. -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From cookedm at physics.mcmaster.ca Thu Jun 8 16:23:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 16:23:26 -0400 Subject: [Numpy-discussion] eig hangs In-Reply-To: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> Message-ID: <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 14:31:31 -0500 Robert Hetland wrote: > > I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that > lapack_lite would take over. For the moment, I am not concerned > about speed -- I just want something that will work with small > matricies. I installed numpy, and it passes all of the tests OK, but > it hangs when doing eig: > > u, v = linalg.eig(rand(10,10)) > # ....lots of nothing.... > > Do you *need* the linear algebra libraries for eig? BTW, inverse > seems to work fine. It should work. Can you give us a specific matrix where it fails? What platform are you running on? Lapack_lite probably doesn't get much testing from the developers, because we probably all have optimized versions of blas and lapack. 
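[When chasing a hang like this, it helps to first confirm which BLAS/LAPACK numpy was actually built against. A small sketch — numpy.__config__.show() prints the build configuration (empty sections suggest the bundled lapack_lite fallback), and a tiny eig call on a well-conditioned matrix should return immediately on a healthy build:]

```python
import numpy as np

# Show the BLAS/LAPACK libraries this numpy build was linked against
np.__config__.show()

# A tiny eigendecomposition that should complete instantly;
# the eigenvalues of the identity matrix are all 1
w, v = np.linalg.eig(np.eye(4))
assert np.allclose(w, 1.0)
```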
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Thu Jun 8 16:26:57 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 14:26:57 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 Message-ID: <44888811.1080703@ee.byu.edu> One of the hopes for the Summer of Code project involving getting the multidimensional array object into Python 2.6 is advertisement of the array protocol or array interface. I think one way to simplify the array protocol is simply have only one attribute that is looked to to provide access to the protocol. I would like to deprecate all the array protocol attributes except for __array_struct__ (perhaps we could call this __array_interface__ but I'm happy keeping the name the same too.) If __array_struct__ is a CObject then it behaves as it does now. If __array_struct__ is a tuple then each entry in the tuple is one of the items currently obtained by an additional attribute access (except the first item is always an integer indicating the version of the protocol --- unused entries are None). This should simplify the array interface and allow easier future changes. It should also simplify NumPy so that it doesn't have to check for multiple attributes on arbitrary objects. I would like to eliminate all the other array protocol attributes before NumPy 1.0 (and re-label those such as __array_data__ that are useful in other contexts --- like ctypes). Comments? 
-Travis From arnd.baecker at web.de Thu Jun 8 16:28:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Jun 2006 22:28:06 +0200 (CEST) Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <44885C45.7040506@astraw.com> References: <4487E5D0.40403@astraw.com> <44885C45.7040506@astraw.com> Message-ID: On Thu, 8 Jun 2006, Andrew Straw wrote: > Arnd Baecker wrote: > > What worries me is > > a) the Build conflicts: atlas3-base > > > I hoped to investigate further and post afterwards, but my preliminary > findings that led to this decision are: > > 1) building with atlas (atlas3-base and atlas3-base-dev) caused a > significant slowdown (~10x) on my simple test on amd64 arch: > > import timeit > shape = '(40,40)' > timeit.Timer('a=ones(shape=%s);svd(a)'%shape,'from numpy import ones; > from numpy.linalg import svd') > print "NumPy: ", t2.repeat(5,500) > > 2) Even having atlas installed (atlas3-base on amd64) caused a > significant slowdown (~2x) on that test. This was similar to the case > for i386, where I installed atlas3-sse2. > 3) This is done in the source packages by Matthias Klose for both > numeric and numarray, too. I figured he knows what he's doing. Alright, this ATLAS stuff always puzzled me and I thought that one has to have atlas3-base and atlas3-base-dev atlas3-headers installed to use atlas3 during compilation. I assumed that installing additionally (even afterwards) atlas3-sse2 should give optimal performance on the corresponding machine. (Thinking about this, it is not clear why then atlas3-sse2-dev, so the previous statement must be wrong ...) OTOH, `apt-cache rdepends atlas3-base` shows a pretty long list, including python2.3-scipy, python2.3-numeric-ext, python2.3-numarray-ext OK, obviously I haven't understood the ATLAS setup of debian and better shut up now and leave this for the experts .... 
;-) Tomorrow I will remove the atlas3-base stuff before building and see how things work (I don't need that urgently as building from source seems easier, but the benefit of having proper debian packages pays off very quickly in the longer run ...) > > b) and the python2.3-dev *and* python2.4-dev dependency > > > This is a _build_ dependency. The source package builds python > python2.3-numpy and python2.4-numpy, so it needs Python.h for both. Alright, so no problem here - thanks for the clarification. [...] > > What about scipy: presently debian sarge comes with > > scipy 0.3.2. Installing old-scipy and new-scipy side-by side > > seems impossible (unless one does something like wxversion select > > stuff...) - should the new scipy debs just replace the old ones? > > > Unless you do some apt-pinning, I think any new scipy (0.4.x) in any > repository in your sources list will automatically override the old > (0.3.x) simply via the versioning mechanisms of apt-get. I like the idea > of a wxversion-alike, but I've shifted all my code to use numpy and the > new scipy, so I don't have any motivation to do any implementation. Also, it might not be completely trivial to set up and there is still a lot of other stuff which has to be done ... Best, Arnd From schofield at ftw.at Thu Jun 8 16:47:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 8 Jun 2006 22:47:15 +0200 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <06313EA6-9E1B-4BD3-9719-19F334FA746B@ftw.at> On 08/06/2006, at 10:54 AM, Andrew Straw wrote: > I've put together some .debs for numpy-0.9.8. There are binaries > compiled for amd64 and i386 architectures of Ubuntu Dapper, and I > suspect these will build from source for just about any Debian-based > distro and architecture. > ... Great! 
I posted an offer earlier this week to debian-science to help work on numpy packages (but got no response). NumPy might be adopted much more rapidly once it has official packages in Debian and Ubuntu. I'm glad you're in control of the situation; now I can now quietly withdraw my offer ;) No, seriously ... I'd be happy to help out if I can :) -- Ed From ndarray at mac.com Thu Jun 8 17:07:55 2006 From: ndarray at mac.com (Sasha) Date: Thu, 8 Jun 2006 17:07:55 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44888811.1080703@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> Message-ID: On 6/8/06, Travis Oliphant wrote: > ... > __array_struct__ (perhaps we could call this __array_interface__ but > I'm happy keeping the name the same too.) +0 on the name change and consider making it a method rather than an attribute. > > If __array_struct__ is a CObject then it behaves as it does now. > > If __array_struct__ is a tuple then each entry in the tuple is one of > the items currently obtained by an additional attribute access (except > the first item is always an integer indicating the version of the > protocol --- unused entries are None). > -1 This will complicate the use of array interface. I would propose creating a subtype of CObject that has the necessary attributes so that one can do a.__array_interface__.shape, for example. I did not check if CObject is subclassable in 2.5, but if not, we can propose to make it subclassable for 2.6. > ... > > I would like to eliminate all the other array protocol attributes before > NumPy 1.0 (and re-label those such as __array_data__ that are useful in > other contexts --- like ctypes). +1 From cookedm at physics.mcmaster.ca Thu Jun 8 17:29:51 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 8 Jun 2006 17:29:51 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> Message-ID: <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 17:07:55 -0400 Sasha wrote: > On 6/8/06, Travis Oliphant wrote: > > ... > > __array_struct__ (perhaps we could call this __array_interface__ but > > I'm happy keeping the name the same too.) > > +0 on the name change and consider making it a method rather than an > attribute. +0 for name change; I'm happy with it as an attribute. > > If __array_struct__ is a CObject then it behaves as it does now. > > > > If __array_struct__ is a tuple then each entry in the tuple is one of > > the items currently obtained by an additional attribute access (except > > the first item is always an integer indicating the version of the > > protocol --- unused entries are None). > > > > -1 > > This will complicate the use of array interface. I would propose > creating a subtype of CObject that has the necessary attributes so > that one can do a.__array_interface__.shape, for example. I did not > check if CObject is subclassable in 2.5, but if not, we can propose to > make it subclassable for 2.6. The idea behind the array interface was to have 0 external dependencies: any array-like object from any package could add the interface, without requiring a 3rd-party module. That's why the C version uses a CObject. Subclasses of CObject start getting into 3rd-party requirements. How about a dict instead of a tuple? With keys matching the attributes it's replacing: "shapes", "typestr", "descr", "data", "strides", "mask", and "offset". The problem with a tuple from my point of view is I can never remember which order things go (this is why in the standard library the result of os.stat() and time.localtime() are now "tuple-like" classes with attributes). We still need __array_descr__, as the C struct doesn't provide all the info that this does. 
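[For illustration: the dict form sketched here is close to what eventually shipped as the version-3 __array_interface__ protocol. A hedged producer/consumer sketch — the class name and key choices are illustrative, with 'data' being the (pointer, read-only) pair David mentions:]

```python
import numpy as np

class Exposer:
    """Toy object advertising its buffer via a dict-style array interface."""
    def __init__(self):
        # Keep a reference so the underlying buffer outlives any view of it
        self._backing = np.arange(6, dtype=np.float64).reshape(2, 3)

    @property
    def __array_interface__(self):
        src = self._backing.__array_interface__
        return {
            'version': 3,               # protocol version tag
            'shape': src['shape'],      # (2, 3)
            'typestr': src['typestr'],  # e.g. '<f8' on little-endian
            'data': src['data'],        # (pointer, read-only flag)
        }

e = Exposer()
a = np.asarray(e)  # NumPy consumes the interface; no third-party glue needed
```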
> > I would like to eliminate all the other array protocol attributes before > > NumPy 1.0 (and re-label those such as __array_data__ that are useful in > > other contexts --- like ctypes). > +1 +1 also -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From gnchen at cortechs.net Thu Jun 8 17:57:02 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 8 Jun 2006 14:57:02 -0700 Subject: [Numpy-discussion] Intel OSX test failure Message-ID: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> Hi! I just got an MacBook Pro and tried to install numpy+scipy on that. I successfully installed ipython+matplotlib+python 2.4 through darwinports. Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + gfortran) seems working fine for numpy. After I installed it and run numpy.test() in ipython, it failed. And the error is: In [4]: numpy.test() Found 3 tests for numpy.lib.getlimits Found 30 tests for numpy.core.numerictypes Found 13 tests for numpy.core.umath Found 3 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 95 tests for numpy.core.multiarray Found 3 tests for numpy.dft.helper Found 36 tests for numpy.core.ma Found 2 tests for numpy.core.oldnumeric Found 9 tests for numpy.lib.twodim_base Found 9 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 35 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 6 tests for numpy.core.records Found 19 tests for numpy.core.numeric Found 5 tests for numpy.distutils.misc_util Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ..............................................F......................... ........................................................................ 
........................................................................ ........................................................................ ........................................................................ .......... ====================================================================== FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ test_scalarmath.py", line 42, in check_large_types assert b == 6765201, "error with %r: got %r" % (t,b) AssertionError: error with : got 0.0 ---------------------------------------------------------------------- Ran 370 tests in 0.510s FAILED (failures=1) Out[4]: Anyone has any idea?? or Anyone ever successfully did that? Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net From tim.hochberg at cox.net Thu Jun 8 17:57:29 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 08 Jun 2006 14:57:29 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> Message-ID: <44889D49.1020209@cox.net> Sasha wrote: >On 6/8/06, Travis Oliphant wrote: > > >>... >>__array_struct__ (perhaps we could call this __array_interface__ but >>I'm happy keeping the name the same too.) >> >> > >+0 on the name change and consider making it a method rather than an attribute. > > I'm not thrilled with either name, nor do I have a better one, so put me down as undecided on name. I marginally prefer an attribute to a name here. I'm +1 on narrowing the interface though. >>If __array_struct__ is a CObject then it behaves as it does now. 
>> >>If __array_struct__ is a tuple then each entry in the tuple is one of >>the items currently obtained by an additional attribute access (except >>the first item is always an integer indicating the version of the >>protocol --- unused entries are None). >> >> >> > >-1 > >This will complicate the use of array interface. > I concur. >I would propose >creating a subtype of CObject that has the necessary attributes so >that one can do a.__array_interface__.shape, for example. I did not >check if CObject is subclassable in 2.5, but if not, we can propose to >make it subclassable for 2.6. > > Alternatively, if this proves to be a hassle, a function, unpack_interface or some such, could be provided that takes an __array_interface__ object and spits out the appropriate tuple or, perhaps better, and object with the appropriate field. > > >>... >> >>I would like to eliminate all the other array protocol attributes before >>NumPy 1.0 (and re-label those such as __array_data__ that are useful in >>other contexts --- like ctypes). >> >> >+1 > > +1. -tim From cookedm at physics.mcmaster.ca Thu Jun 8 18:11:57 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 18:11:57 -0400 Subject: [Numpy-discussion] Intel OSX test failure In-Reply-To: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> References: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> Message-ID: <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 14:57:02 -0700 Gennan Chen wrote: > Hi! > > I just got an MacBook Pro and tried to install numpy+scipy on that. > I successfully installed ipython+matplotlib+python 2.4 through > darwinports. > Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + > gfortran) seems working fine for numpy. After I installed it and run > numpy.test() in ipython, it failed. 
And the error is: > > In [4]: numpy.test() > Found 3 tests for numpy.lib.getlimits > Found 30 tests for numpy.core.numerictypes > Found 13 tests for numpy.core.umath > Found 3 tests for numpy.core.scalarmath > Found 8 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 95 tests for numpy.core.multiarray > Found 3 tests for numpy.dft.helper > Found 36 tests for numpy.core.ma > Found 2 tests for numpy.core.oldnumeric > Found 9 tests for numpy.lib.twodim_base > Found 9 tests for numpy.core.defmatrix > Found 1 tests for numpy.lib.ufunclike > Found 35 tests for numpy.lib.function_base > Found 1 tests for numpy.lib.polynomial > Found 6 tests for numpy.core.records > Found 19 tests for numpy.core.numeric > Found 5 tests for numpy.distutils.misc_util > Found 4 tests for numpy.lib.index_tricks > Found 46 tests for numpy.lib.shape_base > Found 0 tests for __main__ > ..............................................F......................... > ........................................................................ > ........................................................................ > ........................................................................ > ........................................................................ > .......... > ====================================================================== > FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ > test_scalarmath.py", line 42, in check_large_types > assert b == 6765201, "error with %r: got %r" % (t,b) > AssertionError: error with : got 0.0 > > ---------------------------------------------------------------------- > Ran 370 tests in 0.510s > > FAILED (failures=1) > Out[4]: > > > Anyone has any idea?? or Anyone ever successfully did that? 
It's new; something's missing in the new power code I added for the scalartypes. It'll get fixed when I get around to it :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From gnchen at cortechs.net Thu Jun 8 18:18:37 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 8 Jun 2006 15:18:37 -0700 Subject: [Numpy-discussion] Intel OSX test failure In-Reply-To: <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> References: <72977C23-9C4E-49DB-B7DF-44EA5363795E@cortechs.net> <20060608181157.7bec579e@arbutus.physics.mcmaster.ca> Message-ID: <407C3307-8C19-40F1-B2C9-82637633C15E@cortechs.net> Got you. BTW, I did manage to compile ATLAS 3.7 version into .a. Any chance I can use that? Or only shared object can be used?? Gen On Jun 8, 2006, at 3:11 PM, David M. Cooke wrote: > On Thu, 8 Jun 2006 14:57:02 -0700 > Gennan Chen wrote: > >> Hi! >> >> I just got an MacBook Pro and tried to install numpy+scipy on that. >> I successfully installed ipython+matplotlib+python 2.4 through >> darwinports. >> Then I svn co a copy of numpy +scipy. Compilation (gcc 4.0.1 + >> gfortran) seems working fine for numpy. After I installed it and run >> numpy.test() in ipython, it failed. 
And the error is: >> >> In [4]: numpy.test() >> Found 3 tests for numpy.lib.getlimits >> Found 30 tests for numpy.core.numerictypes >> Found 13 tests for numpy.core.umath >> Found 3 tests for numpy.core.scalarmath >> Found 8 tests for numpy.lib.arraysetops >> Found 42 tests for numpy.lib.type_check >> Found 95 tests for numpy.core.multiarray >> Found 3 tests for numpy.dft.helper >> Found 36 tests for numpy.core.ma >> Found 2 tests for numpy.core.oldnumeric >> Found 9 tests for numpy.lib.twodim_base >> Found 9 tests for numpy.core.defmatrix >> Found 1 tests for numpy.lib.ufunclike >> Found 35 tests for numpy.lib.function_base >> Found 1 tests for numpy.lib.polynomial >> Found 6 tests for numpy.core.records >> Found 19 tests for numpy.core.numeric >> Found 5 tests for numpy.distutils.misc_util >> Found 4 tests for numpy.lib.index_tricks >> Found 46 tests for numpy.lib.shape_base >> Found 0 tests for __main__ >> ..............................................F...................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> ..................................................................... >> ... >> .......... >> ===================================================================== >> = >> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) >> --------------------------------------------------------------------- >> - >> Traceback (most recent call last): >> File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/ >> test_scalarmath.py", line 42, in check_large_types >> assert b == 6765201, "error with %r: got %r" % (t,b) >> AssertionError: error with : got 0.0 >> >> --------------------------------------------------------------------- >> - >> Ran 370 tests in 0.510s >> >> FAILED (failures=1) >> Out[4]: >> >> >> Anyone has any idea?? 
or Anyone ever successfully did that? > > It's new; something's missing in the new power code I added for the > scalartypes. It'll get fixed when I get around to it :-) > > -- > |>|\/|< > /--------------------------------------------------------------------- > -----\ > |David M. Cooke http:// > arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From strawman at astraw.com Thu Jun 8 18:19:19 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 08 Jun 2006 15:19:19 -0700 Subject: [Numpy-discussion] .debs of numpy-0.9.8 available for Ubuntu Dapper In-Reply-To: <4487E5D0.40403@astraw.com> References: <4487E5D0.40403@astraw.com> Message-ID: <4488A267.2000901@astraw.com> Andrew Straw wrote: >I've put together some .debs for numpy-0.9.8. There are binaries >compiled for amd64 and i386 architectures of Ubuntu Dapper, and I >suspect these will build from source for just about any Debian-based >distro and architecture. > > As usually happens when I try to release packages in the middle of the night, the cold light of morning brings some glaring problems. The biggest one is that the .diff.gz that was generated wasn't showing the changes against numpy that I had to make. I'm surprised that my own tests with apt-get source showed that it still built from source. I've uploaded a new version, 0.9.8-0ads2 (note the 2 at the end). You can check your installed version by doing the following: dpkg-query -l *numpy* Anyhow, here's the debian/changelog for 0.9.8-0ads2: * Fixed .orig.tar.gz so that .diff.gz includes modifications made to source. * Relax build-depend on setuptools to work with any version * Don't import setuptools in numpy.distutils.command.install unless it's already in sys.modules. I would like to merge with the package in debian experimental by Jose Fonseca and Marco Presi, but their package uses a lot of makefile wizardry that bombs out on me without any apparently informative error message. 
(I will be the first to admit that I know very little about Makefiles.)

On the other hand, the main advantage their package currently has is
installation of manpages for f2py, installation of the existing free
documentation, and tweaks to script (f2py) permissions and naming. The
latter of these issues seems to be solved by the build-dependency on
setuptools, which is smart about installing scripts with the right
permissions and names (it appends "2.4" to the python2.4 version of f2py,
and so on).

There have been a couple of offers of help from Ed and Ryan. I think in
the long run, the best thing to do would be to invest these efforts in
communicating with the Debian developers to get a more up-to-date version
into their repository. (My repository will only ever be an unofficial one
with the primary purpose of serving our needs at work, which hopefully
overlaps substantially with usefulness to others.) This should have a
trickle-down effect on the mainline Ubuntu repository, also. I doubt that
the Debian developers will want to start their python-numpy package from
scratch, so I suggest trying to submit patches to their system. You can
check out their source at svn://svn.debian.org/deb-scipy . Unfortunately,
that's about the only guidance I can provide, because, like I said above,
I can't get their Makefile wizardry to work on a newer version of numpy.

Arnd, I would like to get to the bottom of these atlas issues myself, and
I've followed a similar chain of logic as you. It's possible that the svd
routine (dgesdd, IIRC) is somehow just a bad one to benchmark on. It is a
real workhorse for me, and so it's really the one that counts for me. I'll
put together a few timeit routines that test svd() and dot() and do some
more experimentation, although I can't promise when. Let's keep everyone
informed of any progress we make. Cheers!
Andrew

From oliphant at ee.byu.edu Thu Jun 8 18:22:47 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 08 Jun 2006 16:22:47 -0600
Subject: [Numpy-discussion] Array Interface
Message-ID: <4488A337.9000407@ee.byu.edu>

Thanks for the continuing discussion on the array interface. I'm thinking
about this right now, because I just spent several hours trying to figure
out if it is possible to add additional "object-behavior" pointers to a
type by creating a metatype that sub-types from the Python PyType_Type
(this is the object that has all the function pointers to implement
mapping behavior, buffer behavior, etc.).

I found some emails from 2002 where Guido indicates that it is not
possible to sub-type the PyType_Type object and add new function pointers
at the end without major re-writing of Python. The suggested mechanism is
to add a CObject to the tp_dict of the type object itself. As far as I can
tell, this is equivalent to what we are doing with adding the array
interface as an attribute look-up.

In trying to sell the array interface to the wider Python community (and
get it into Python 2.6), we need to simplify the interface though. I no
longer think having all of these attributes off the object itself is a
good idea (I think this is a case where flat *is not* better than nested).

It turns out that the __array_struct__ interface is the really important
one (it's the one that numarray, NumPy, and Numeric are all using). So,
one approach is to simply toss out support for the other part of the
interface in NumPy and "let it die." Is this what people who oppose using
the __array_struct__ attribute in a dualistic way are suggesting?

Clearly some of the attributes will need to survive (like __array_descr__,
which gives information that __array_struct__ doesn't even provide). A big
part of the push for multidimensional arrays in Python is the addition of
the PyArray_Descr * object into Python (or something similar).
This would allow a way to describe data in a generic way and could change the use of __array_descr__. But, currently the __array_struct__ attribute approach does not support field-descriptions, so __array_descr__ is the only way. Please continue offering your suggestions... -Travis From fperez.net at gmail.com Thu Jun 8 18:48:27 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 16:48:27 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... Message-ID: Hi all, I'm starting the transition of a large code from Numeric to numpy, so I am now doing a fresh build with a lot more care than before, actually reading all the intermediate messages. I am a bit puzzled and could use some help. This is all on an ubuntu dapper box with the atlas-sse2 packages (and everything else recommended installed). By running as suggested in the scipy readme: python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py I get the following message at some point: ==================================== atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) looking libraries f77blas,cblas,atlas in /usr/local/lib but found None looking libraries f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None ( paths: /usr/lib/sse2/libf77blas.so ) ( paths: /usr/lib/sse2/libcblas.so ) ( paths: /usr/lib/sse2/libatlas.so ) ( paths: /usr/lib/sse2/liblapack_atlas.so ) looking libraries lapack in /usr/lib/sse2 but found None looking libraries lapack in /usr/lib/sse2 but found None looking libraries f77blas,cblas,atlas in /usr/lib but 
found None looking libraries f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None system_info.atlas_info ( include_dirs = /usr/local/include:/usr/include ) ( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h ) /usr/local/installers/src/scipy/numpy/numpy/distutils/system_info.py:870: UserWarning: ********************************************************************* Could not find lapack library within the ATLAS installation. ********************************************************************* warnings.warn(message) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_WITHOUT_LAPACK', None)] ==================================== What I find very puzzling here is that later on, the following goes by: lapack_atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas in /usr/local/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None looking libraries lapack_atlas in /usr/lib/atlas but found None ( paths: /usr/lib/sse2/liblapack_atlas.so ) ( paths: /usr/lib/sse2/libf77blas.so ) ( paths: /usr/lib/sse2/libcblas.so ) ( paths: /usr/lib/sse2/libatlas.so ) ( paths: /usr/lib/sse2/liblapack_atlas.so ) looking libraries lapack in /usr/lib/sse2 but found None looking libraries 
lapack in /usr/lib/sse2 but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None looking libraries lapack_atlas in /usr/lib but found None system_info.lapack_atlas_info ( include_dirs = /usr/local/include:/usr/include ) ( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h ) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/lib/atlas,/usr/lib/sse2 ) FOUND: libraries = ['lapack_atlas', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_WITH_LAPACK_ATLAS', None)] ============================================== Does the second mean that it /is/ finding the right libraries? Since the first search in atlas_info is also printing ( paths: /usr/lib/sse2/liblapack_atlas.so ) I don't quite understand why it then reports the warning. For reference, here's the content of the relevant directories on my system: ============================================== longs[sse2]> ls /usr/lib/sse2 libatlas.a libcblas.a libf77blas.a liblapack_atlas.a libatlas.so@ libcblas.so@ libf77blas.so@ liblapack_atlas.so@ libatlas.so.3@ libcblas.so.3@ libf77blas.so.3@ liblapack_atlas.so.3@ libatlas.so.3.0 libcblas.so.3.0 libf77blas.so.3.0 liblapack_atlas.so.3.0 longs[sse2]> ls /usr/lib/atlas/sse2/ libblas.a libblas.so.3@ liblapack.a liblapack.so.3@ libblas.so@ libblas.so.3.0 liblapack.so@ liblapack.so.3.0 ============================================== In summary, I don't really know if this is actually finding what it wants or not, given the two messages. Cheers, f ps - it's worth mentioning that the sequence: python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py gets itself into a nasty recursion where it fires the interactive session 3 times in a row. 
And in doing so, it splits its own output in a funny way: [...] blas_opt_info: ======================================================================== Starting interactive session ------------------------------------------------------------------------ Tasks: i - Show python/platform/machine information ie - Show environment information c - Show C compilers information c - Set C compiler (current:None) f - Show Fortran compilers information f - Set Fortran compiler (current:None) e - Edit proposed sys.argv[1:]. Task aliases: 0 - Configure 1 - Build 2 - Install 2 - Install with prefix. 3 - Inplace build 4 - Source distribution 5 - Binary distribution Proposed sys.argv = ['/home/fperez/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py'] Choose a task (^D to quit, Enter to continue with setup): ##### msg: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('NO_ATLAS_INFO', 2)] ================= I tried to fix it, but the call sequence in that code is convoluted enough that after a few 'import traceback;traceback.print_stack()' tries I sort of gave up. That code is rather (how can I say this nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I won't be able to contribute a cleanup here. I think this tool should run by default in a mode with NO attempt to fire a command-line subsystem of its own, so users can simply run python /path/to/system_info > system_info.log for further analysis. From cookedm at physics.mcmaster.ca Thu Jun 8 19:06:42 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 19:06:42 -0400 Subject: [Numpy-discussion] Build questions, atlas, lapack... 
In-Reply-To: References: Message-ID: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> On Thu, 8 Jun 2006 16:48:27 -0600 "Fernando Perez" wrote: [snip] > I tried to fix it, but the call sequence in that code is convoluted > enough that after a few 'import traceback;traceback.print_stack()' > tries I sort of gave up. That code is rather (how can I say this > nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I > won't be able to contribute a cleanup here. I think the whole numpy.distutils could use a good cleanup ... > I think this tool should run by default in a mode with NO attempt to > fire a command-line subsystem of its own, so users can simply run > > python /path/to/system_info > system_info.log > > for further analysis. Agree; I'll look at it. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Thu Jun 8 19:11:58 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 17:11:58 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> References: <20060608190642.3b402d4c@arbutus.physics.mcmaster.ca> Message-ID: On 6/8/06, David M. Cooke wrote: > Agree; I'll look at it. Many thanks. I'm sorry not to help, but I have a really big fish to fry right now, and can't commit to the diversion this would mean. Cheers, f From dd55 at cornell.edu Thu Jun 8 09:43:07 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 8 Jun 2006 09:43:07 -0400 Subject: [Numpy-discussion] Fortran 95 compiler (from gcc 4.1.1) is not recognized by scipy In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E8C8EFC6@exchange2k.envision.co.il> Message-ID: <200606080943.07515.dd55@cornell.edu> On Thursday 01 June 2006 12:46, Robert Kern wrote: > Nadav Horesh wrote: > > I recently upgraded to gcc4.1.1. 
When I tried to compile scipy from > > today's svn repository it halts with the following message: > > > > Traceback (most recent call last): > > File "setup.py", line 50, in ? > > setup_package() > > File "setup.py", line 42, in setup_package > > configuration=configuration ) > > File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line > > 170, in setup > > return old_setup(**new_attr) > > File "/usr/lib/python2.4/distutils/core.py", line 149, in setup > > dist.run_commands() > > File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands > > self.run_command(cmd) > > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run > > self.run_command(cmd_name) > > File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command > > self.distribution.run_command(command) > > File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File > > "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > > line 109, in run > > self.build_extensions() > > File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in > > build_e xtensions > > self.build_extension(ext) > > File > > "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", > > line 301, in build_extension > > link = self.fcompiler.link_shared_object > > AttributeError: 'NoneType' object has no attribute 'link_shared_object' > > > > ---- > > > > The output of gfortran --version: > > > > GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1) > > Hmm. The usual suspect (not finding the version) doesn't seem to be the > problem here. > > >>> from numpy.distutils.ccompiler import simple_version_match > >>> m = simple_version_match(start='GNU Fortran 95') > >>> m(None, 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)') > > '4.1.1' > > > I have also the old g77 compiler installed (g77-3.4.6). 
Is there a way to > > force numpy/scipy to use it? > > Sure. > > python setup.py config_fc --fcompiler=gnu build_src build_clib build_ext > build I am able to build numpy/scipy on a 64bit Athlon with gentoo and gcc-4.1.1. I get one error with scipy 0.5.0.1940: ============================================== FAIL: check_random_complex_overdet (scipy.linalg.tests.test_basic.test_lstsq) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 413, in check_random_complex_overdet assert_array_almost_equal(x,direct_lstsq(a,b),3) File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 233, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 77.7777777778%): Array 1: [[-0.0137+0.0173j 0.0037-0.0173j -0.0114+0.0119j] [ 0.0029-0.0356j 0.0086-0.034j 0.033 -0.0879j] [ 0.0041-0.0097j ... Array 2: [[-0.016 +0.0162j 0.003 -0.0171j -0.0148+0.009j ] [-0.0017-0.0405j 0.003 -0.036j 0.0256-0.0977j] [ 0.0038-0.0112j ... ---------------------------------------------------------------------- Also, there may be a minor bug in numpy/distutils. I get error messages during the build: customize GnuFCompiler Couldn't match compiler version for 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)\nCopyright (C) 2006 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License. 
\nFor more information about these matters, see the file named COPYING\n' customize CompaqFCompiler customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler This error message is returned because the fc_exe executable defined in GnuFCompiler returns a successful exit status to GnuFCompiler.get_version, but GnuFCompiler explicitly forbids identifying Fortran 95. I only bring it up because the build yields an error message that might confuse people. Darren From listservs at mac.com Thu Jun 8 19:43:57 2006 From: listservs at mac.com (listservs at mac.com) Date: Thu, 8 Jun 2006 19:43:57 -0400 Subject: [Numpy-discussion] Building statically-linked Numpy causes problems with f2py extensions Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Because of complaints of linking errors from some OS X users, I am trying to build and distribute statically-linked versions. To do this, I have taken the important libraries (e.g. freetype, libg2c), and put them in a directory called staticlibs, then built numpy by: python setup.py build_clib build_ext -L../staticlibs build bdist_mpkg It builds, installs and runs fine. 
However, when I go to build and run f2py extensions, I now get the following (from my PyMC code): /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/PyMC/MCMC.py 37 _randint = random.randint 38 rexponential = random.exponential - ---> 39 from flib import categor as _categorical global flib = undefined global categor = undefined global as = undefined _categorical = undefined 40 from flib import rcat as rcategorical 41 from flib import binomial as _binomial ImportError: Loaded module does not contain symbol _initflib Here, flib is the f2py extension that is built in the PyMC setup file according to: from numpy.distutils.core import setup, Extension flib = Extension(name='PyMC.flib',sources=['PyMC/flib.f']) version = "1.0" distrib = setup( version=version, author="Chris Fonnesbeck", author_email="fonnesbeck at mac.com", description="Version %s of PyMC" % version, license="Academic Free License", name="PyMC", url="pymc.sourceforge.net", packages=["PyMC"], ext_modules = [flib] ) This worked fine before my attempts to statically link numpy. Any ideas regarding a solution? Thanks, Chris - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFEiLY+keka2iCbE4wRAi1/AJ90K7LIkF7Y+ti65cVxLB1KCA+MNgCggj2p I1jzals7IoBeYX0cWfmlbcI= =bY3a -----END PGP SIGNATURE----- From jdc at uwo.ca Thu Jun 8 21:23:11 2006 From: jdc at uwo.ca (Dan Christensen) Date: Thu, 08 Jun 2006 21:23:11 -0400 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: References: Message-ID: <878xo75dhc.fsf@uwo.ca> I don't know if it's related, but I've found on my Debian system that whenever I want to compile something that uses the atlas library, I need to put -L/usr/lib/sse2 on the gcc line, even though everything seems to indicate that the linker has been told to look there already. 
It could be that Ubuntu has a similar issue, and that it is affecting
your build.

Dan

From fperez.net at gmail.com Thu Jun 8 21:39:44 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 8 Jun 2006 19:39:44 -0600
Subject: [Numpy-discussion] Build questions, atlas, lapack...
In-Reply-To: <878xo75dhc.fsf@uwo.ca>
References: <878xo75dhc.fsf@uwo.ca>
Message-ID:

On 6/8/06, Dan Christensen wrote:
> I don't know if it's related, but I've found on my Debian system that
> whenever I want to compile something that uses the atlas library, I
> need to put -L/usr/lib/sse2 on the gcc line, even though everything
> seems to indicate that the linker has been told to look there already.
> It could be that Ubuntu has a similar issue, and that it is affecting
> your build.

mmh, given how green I am in the ubuntu world, you may well be right.
But my original question arose before any linking happens, since I was
just posting the messages from numpy's system_info, which doesn't attempt
to link anything; it just does a static filesystem analysis. So perhaps
there is more than one issue here. I'm just trying to clarify, from the
given messages (which I found a bit confusing), whether all the
atlas/sse2 stuff is actually being picked up or not, at least as far as
numpy thinks it is.

Cheers,

f

From simon at arrowtheory.com Thu Jun 8 22:09:19 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Fri, 9 Jun 2006 12:09:19 +1000
Subject: [Numpy-discussion] Build questions, atlas, lapack...
In-Reply-To:
References:
Message-ID: <20060609120919.6c50d6f1.simon@arrowtheory.com>

On Thu, 8 Jun 2006 16:48:27 -0600
"Fernando Perez" wrote:

> In summary, I don't really know if this is actually finding what it
> wants or not, given the two messages.

I just went through this on Debian sarge, which is similar.

I put this in site.cfg:

[atlas]
library_dirs = /usr/lib/atlas/
atlas_libs = lapack, blas

Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2.
$ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4 Python 2.4.3 (#4, Jun 5 2006, 19:07:06) [GCC 3.4.1 (Debian 3.4.1-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> [1]+ Stopped env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4 Look in /proc/PID/maps for the relevant libs: $ ps -a|grep python ... 16953 pts/64 00:00:00 python2.4 $ grep atlas /proc/16953/maps b6fa7000-b750e000 r-xp 00000000 00:0c 1185402 /usr/lib/atlas/sse2/libblas.so.3.0 b750e000-b7513000 rwxp 00567000 00:0c 1185402 /usr/lib/atlas/sse2/libblas.so.3.0 b7513000-b7a58000 r-xp 00000000 00:0c 1185401 /usr/lib/atlas/sse2/liblapack.so.3.0 b7a58000-b7a5b000 rwxp 00545000 00:0c 1185401 /usr/lib/atlas/sse2/liblapack.so.3.0 $ But to really test this is working I ran python under gdb and set a break point on cblas_dgemm. Then a call to numpy.dot should break inside the sse2/liblapack.so.3.0. (also it's a lot faster with the sse2 dgemm) $ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4 GNU gdb 6.1-debian Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-linux"...Using host libthread_db library "/lib/tls/libthread_db.so.1". (gdb) break cblas_dgemm Function "cblas_dgemm" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Breakpoint 1 (cblas_dgemm) pending. (gdb) run Starting program: /home/users/simonb/bin/python2.4 [Thread debugging using libthread_db enabled] [New Thread -1210476000 (LWP 17557)] Python 2.4.3 (#4, Jun 5 2006, 19:07:06) [GCC 3.4.1 (Debian 3.4.1-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
Breakpoint 2 at 0xb7549db0 Pending breakpoint "cblas_dgemm" resolved <------- import numpy is in my pythonstartup >>> a=numpy.empty((1024,1024),'d') >>> b=numpy.empty((1024,1024),'d') >>> numpy.dot(a,b) [Switching to Thread -1210476000 (LWP 17557)] Breakpoint 2, 0xb7549db0 in cblas_dgemm () from /usr/lib/atlas/sse2/liblapack.so.3 (gdb) bingo. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From fperez.net at gmail.com Thu Jun 8 22:25:59 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 20:25:59 -0600 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: <20060609120919.6c50d6f1.simon@arrowtheory.com> References: <20060609120919.6c50d6f1.simon@arrowtheory.com> Message-ID: On 6/8/06, Simon Burton wrote: > On Thu, 8 Jun 2006 16:48:27 -0600 > "Fernando Perez" wrote: > > > > > In summary, I don't really know if this is actually finding what it > > wants or not, given the two messages. > > I just went through this on debian sarge which is similar. > > I put this in site.cgf: > > [atlas] > library_dirs = /usr/lib/atlas/ > atlas_libs = lapack, blas > > Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2. [...] > But to really test this is working I ran python under gdb and set > a break point on cblas_dgemm. Then a call to numpy.dot should > break inside the sse2/liblapack.so.3.0. > > (also it's a lot faster with the sse2 dgemm) > > $ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4 OK, thanks a LOT for that gdb trick: it provides a very nice way to understand what's actually going on. self.note("really, learn better use of gdb") Using that, though, it would then seem as if the build DID successfully find everything without any further action on my part: longs[dist]> gdb python GNU gdb 6.4-debian ... (gdb) break cblas_dgemm Function "cblas_dgemm" not defined. Make breakpoint pending on future shared library load? 
(y or [n]) y Breakpoint 1 (cblas_dgemm) pending. (gdb) run Starting program: /usr/bin/python ... Python 2.4.3 (#2, Apr 27 2006, 14:43:58) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. (no debugging symbols found) >>> import numpy Breakpoint 2 at 0x40429860 Pending breakpoint "cblas_dgemm" resolved >>> a=numpy.empty((1024,1024),'d') >>> b=numpy.empty((1024,1024),'d') >>> numpy.dot(a,b) [Switching to Thread 1075428416 (LWP 3919)] Breakpoint 2, 0x40429860 in cblas_dgemm () from /usr/lib/sse2/libcblas.so.3 ====================================================== Note that on my system, LD_LIBRARY_PATH does NOT contain that dir: longs[dist]> env | grep LD_LIB LD_LIBRARY_PATH=/usr/local/lf9560/lib:/usr/local/intel/mkl/8.0.2/lib/32:/usr/local/intel/compiler90/lib:/home/fperez/usr/lib:/home/fperez/usr/local/lib: and I built everything with a plain setup.py install --prefix=~/tmp/local without /any/ tweaks to site.cfg, no LD_LIBRARY_PATH modifications or anything else. I just installed atlas-sse2* and lapack3*, but NOT refblas3*. Basically it seems that the build process does the right thing out of the box, and the warning is spurious. Since I was being extra-careful in this build, I didn't want to let any warning of that kind go unchecked. It might still be worth fixing that warning to prevent others from going on a similar wild goose chase, but I'm not comfortable touching that code (I don't know if anyone besides Pearu is). Thanks for the help! Cheers, f From ndarray at mac.com Thu Jun 8 22:52:53 2006 From: ndarray at mac.com (Sasha) Date: Thu, 8 Jun 2006 22:52:53 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> Message-ID: On 6/8/06, David M. Cooke wrote: > ... > +0 for name change; I'm happy with it as an attribute. 
> My rule of thumb for choosing between an attribute and a method is that attribute access should not create new objects. In addition, to me __array_interface__ feels like a generalization of the __array__ method, so I personally expected it to be a method the first time I tried to use it. >... > The idea behind the array interface was to have 0 external dependencies: any > array-like object from any package could add the interface, without requiring > a 3rd-party module. That's why the C version uses a CObject. Subclasses of > CObject start getting into 3rd-party requirements. > Not necessarily. Different packages don't need to share the subclass, but subclassing CObject is probably a bad idea for the reasons I will explain below. > How about a dict instead of a tuple? With keys matching the attributes it's > replacing: "shape", "typestr", "descr", "data", "strides", "mask", and > "offset". The problem with a tuple from my point of view is I can never > remember which order things go in (this is why in the standard library the > result of os.stat() and time.localtime() are now "tuple-like" classes with > attributes). > My problem with __array_struct__ returning either a tuple or a CObject is that the array protocol should really provide both. CObject is useless for interoperability at the Python level and a tuple (or dict) is inefficient at the C level. Thus a good array-like object should really provide both __array_struct__ for use by C modules and __array_tuple__ (or whatever) for use by Python modules. On the other hand, making both required attributes/methods will put an extra burden on package writers. Moreover, a pure Python implementation of an array-like object will not be able to provide __array_struct__ at all. One possible solution would be an array protocol metaclass that adds __array_struct__ to a class with __array_tuple__ and __array_tuple__ to a class with __array_struct__ (yet another argument to make both methods).
> We still need __array_descr__, as the C struct doesn't provide all the info > that this does. > What do you have in mind? From fperez.net at gmail.com Fri Jun 9 01:28:04 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Jun 2006 23:28:04 -0600 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: References: Message-ID: Hi all, the following warning about strict-prototypes in weave drives me crazy: longs[~]> python wbuild.py cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++ since I use weave on auto-generated code, I get it lots of times and I find spurious warnings to be very distracting. Anyone object to this patch against current numpy SVN to get rid of this thing? (tracking where the hell that thing was coming from was all kinds of fun) Index: ccompiler.py =================================================================== --- ccompiler.py (revision 2588) +++ ccompiler.py (working copy) @@ -191,6 +191,19 @@ log.info('customize %s' % (self.__class__.__name__)) customize_compiler(self) if need_cxx: + # In general, distutils uses -Wstrict-prototypes, but this option is + # not valid for C++ code, only for C. Remove it if it's there to + # avoid a spurious warning on every compilation. All the default + # options used by distutils can be extracted with: + + # from distutils import sysconfig + # sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', + # 'CCSHARED', 'LDSHARED', 'SO') + try: + self.compiler_so.remove('-Wstrict-prototypes') + except ValueError: + pass + if hasattr(self,'compiler') and self.compiler[0].find('gcc')>=0: if sys.version[:3]>='2.3': if not self.compiler_cxx: ### EOF Cheers, f From cookedm at physics.mcmaster.ca Fri Jun 9 04:01:52 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Fri, 9 Jun 2006 04:01:52 -0400 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: References: Message-ID: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > Hi all, > > the following warning about strict-prototypes in weave drives me crazy: > > longs[~]> python wbuild.py > > cc1plus: warning: command line option "-Wstrict-prototypes" is valid > for Ada/C/ObjC but not for C++ > > since I use weave on auto-generated code, I get it lots of times and I > find spurious warnings to be very distracting. > > Anyone object to this patch against current numpy SVN to get rid of > this thing? (tracking where the hell that thing was coming from was > all kinds of fun) Go ahead. I'm against random messages being printed out anyways -- I'd get rid of the '' too. There's a bunch of code in scipy with 'print' statements that I don't think belong in a library. (Now, if we defined a logging framework, that'd be ok with me!) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From st at sigmasquared.net Fri Jun 9 04:06:24 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri, 09 Jun 2006 10:06:24 +0200 Subject: [Numpy-discussion] Build questions, atlas, lapack... In-Reply-To: References: Message-ID: <44892C00.4060603@sigmasquared.net> > ==================================== > atlas_info: > ( library_dirs = /usr/local/lib:/usr/lib ) > ( paths: /usr/lib/atlas,/usr/lib/sse2 ) > looking libraries f77blas,cblas,atlas in /usr/local/lib but found None > looking libraries f77blas,cblas,atlas in /usr/local/lib but found None (.. more of these...) Some of these and similar spurious warnings can be eliminated by replacing the calls to check_libs in system_info.py with calls to check_libs2. 
Currently these warnings are generated for each file extension that is tested (".so", ".a"...) Alternatively, the warnings could be made more informative. Many of the other warnings could be eliminated by consolidating the various BLAS/LAPACK options. If anyone is manipulating the build system, could he please apply the patch from #114 fixing the Windows build? > I tried to fix it, but the call sequence in that code is convoluted > enough that after a few 'import traceback;traceback.print_stack()' > tries I sort of gave up. That code is rather (how can I say this > nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I > won't be able to contribute a cleanup here. Even if you spent enough time to understand the existing code, you probably wouldn't have a chance to clean up the code because any small change could break some obscure platform/compiler/library combination. Moreover, changes could break the build of scipy and other libraries depending on Numpy-distutils. If you really wanted to rewrite the build code, you'd need to specify a minimum set of supported platform and library combinations, have each of them available for testing and deliberately risk breaking any other platform. Regards, Stephan From fullung at gmail.com Fri Jun 9 05:54:25 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 9 Jun 2006 11:54:25 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44888811.1080703@ee.byu.edu> Message-ID: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Travis Oliphant > Sent: 08 June 2006 22:27 > To: numpy-discussion > Subject: [Numpy-discussion] Array Protocol change for Python 2.6 > > ... 
> > I would like to eliminate all the other array protocol attributes before > NumPy 1.0 (and re-label those such as __array_data__ that are useful in > other contexts --- like ctypes). Just out of curiosity: In [1]: x = N.array([]) In [2]: x.__array_data__ Out[2]: ('0x01C23EE0', False) Is there a reason why the __array_data__ tuple stores the address as a hex string? I would guess that this representation of the address isn't the most useful one for most applications. Regards, Albert From fullung at gmail.com Fri Jun 9 06:02:56 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 9 Jun 2006 12:02:56 +0200 Subject: [Numpy-discussion] Building shared libraries with numpy.distutils Message-ID: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za> Hello all For my Summer of Code project, I'm adding Support Vector Machine code to SciPy. Underneath, I'm currently using libsvm. Thus far, I've been compiling libsvm as a shared library (DLL on Windows) using SCons and doing the wrapping with ctypes. Now, I would like to integrate my code into the SciPy build. Unfortunately, it doesn't seem as if numpy.distutils or distutils proper knows about building shared libraries. Building shared libraries across multiple platforms is tricky to say the least so I don't know if implementing this functionality again is something worth doing. The alternative -- never using shared libraries, doesn't seem very appealing either. Is anybody building shared libraries? Any code or comments? 
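For concreteness, the ctypes side of such a wrapper is only a few lines once the shared library exists. The sketch below uses the system math library as a stand-in for a home-built libsvm shared object (the svm_* names mentioned in comments are hypothetical):

```python
import ctypes
import ctypes.util

# Load a shared library by name; on a POSIX system the C math library
# plays the role that a home-built libsvm.so would play.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the prototype before calling, just as one would for the
# (hypothetical) svm_train/svm_predict entry points.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # -> 3.0
```

The hard part, of course, is not this call site but getting the shared library built portably in the first place.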
Regards, Albert From faltet at carabos.com Fri Jun 9 06:06:00 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 9 Jun 2006 12:06:00 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Message-ID: <200606091206.00322.faltet@carabos.com> On Friday 09 June 2006 11:54, Albert Strasheim wrote: > Just out of curiosity: > > In [1]: x = N.array([]) > > In [2]: x.__array_data__ > Out[2]: ('0x01C23EE0', False) > > Is there a reason why the __array_data__ tuple stores the address as a hex > string? I would guess that this representation of the address isn't the > most useful one for most applications. Good point. I hit this before and forgot to send a message about this. I agree that an integer would be better. Although, now that I think about this, I suppose that the issue would be the difference in the representation of longs on 32-bit and 64-bit platforms, isn't it? Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. "Enjoy Data" "-" From tim.hochberg at cox.net Fri Jun 9 12:04:09 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 09:04:09 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> Message-ID: <44899BF9.9000002@cox.net> Sasha wrote: >On 6/8/06, David M. Cooke wrote: > > >>... >>+0 for name change; I'm happy with it as an attribute. >> >> >> >My rule of thumb for choosing between an attribute and a method is >that attribute access should not create new objects. > Conceptually at least, couldn't there be a single __array_interface__ object associated with a given array? In that sense, it doesn't really feel like creating a new object.
> In addition, to >me __array_interface__ feels like a generalization of __array__ >method, so I personally expected it to be a method the first time I >tried to use it. > > > >>... >>The idea behind the array interface was to have 0 external dependencies: any >>array-like object from any package could add the interface, without requiring >>a 3rd-party module. That's why the C version uses a CObject. Subclasses of >>CObject start getting into 3rd-party requirements. >> >> >> > >Not necessarily. Different packages don't need to share the subclass, >but subclassing CObject is probably a bad idea for the reasons I will >explain below. > > > >>How about a dict instead of a tuple? With keys matching the attributes it's >>replacing: "shape", "typestr", "descr", "data", "strides", "mask", and >>"offset". The problem with a tuple from my point of view is I can never >>remember which order things go (this is why in the standard library the >>result of os.stat() and time.localtime() are now "tuple-like" classes with >>attributes). >> >> >> >My problem with __array_struct__ returning either a tuple or a CObject >is that the array protocol should really provide both. CObject is >useless for interoperability at python level and a tuple (or dict) is >inefficient at the C level. Thus a good array-like object should >really provide both __array_struct__ for use by C modules and >__array_tuple__ (or whatever) for use by python modules. On the other >hand, making both required attributes/methods will put an extra burden >on package writers. Moreover, a pure python implementation of an >array-like object will not be able to provide __array_struct__ at all. > One possible solution would be an array protocol metaclass that adds >__array_struct__ to a class with __array_tuple__ and __array_tuple__ >to a class with __array_struct__ (yet another argument to make both >methods). > > I don't understand this.
I don't see how bringing in a metaclass is going to help a pure python type provide a sensible __array_struct__. That seems like a hopeless task. Shouldn't pure python implementations just provide __array__? A single attribute seems pretty appealing to me, I don't see much use for anything else. >>We still need __array_descr__, as the C struct doesn't provide all the info >>that this does. >> >> >> >What do you have in mind? > > Is there any prospect of merging this data into the C struct? It would be cleaner if all of the information could be embedded into the C struct, but I can see how that might be a backward compatibility nightmare. -tim > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From robert.kern at gmail.com Fri Jun 9 12:30:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Jun 2006 11:30:20 -0500 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <200606091206.00322.faltet@carabos.com> References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: Francesc Altet wrote: > On Friday 09 June 2006 11:54, Albert Strasheim wrote: > >>Just out of curiosity: >> >>In [1]: x = N.array([]) >> >>In [2]: x.__array_data__ >>Out[2]: ('0x01C23EE0', False) >> >>Is there a reason why the __array_data__ tuple stores the address as a hex >>string? I would guess that this representation of the address isn't the >>most useful one for most applications. > > Good point. I hit this before and forgot to send a message about this. I agree > that an integer would be better. Although, now that I think about this, I > suppose that the issue would be the difference in the representation of longs on > 32-bit and 64-bit platforms, isn't it? Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's signedness.
Please don't use Python ints to encode pointers. Holding arbitrary pointers is the job of CObjects. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Fri Jun 9 12:50:16 2006 From: ndarray at mac.com (Sasha) Date: Fri, 9 Jun 2006 12:50:16 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: On 6/9/06, Tim Hochberg wrote: > Sasha wrote: > ... > >> > >My rule of thumb for choosing between an attribute and a method is > >that attribute access should not create new objects. > > > Conceptually at least, couldn't there be a single __array_interface__ > object associated with a given array? In that sense, it doesn't really > feel like creating a new object. > In my view, conceptually, __array_interface__ creates an adaptor to the array-like object. What are the advantages of it being an attribute? It is never settable, so the most common advantage of packing get/set methods in a single attribute can be ruled out. Saving typing of '()' cannot be taken seriously when the name contains a pair of double underscores :-). There was a similar issue discussed on the python-3000 mailing list with respect to the __hash__ method. > .... > >> > >My problem with __array_struct__ returning either a tuple or a CObject > >is that the array protocol should really provide both. CObject is > >useless for interoperability at python level and a tuple (or dict) is > >inefficient at the C level. Thus a good array-like object should > >really provide both __array_struct__ for use by C modules and > >__array_tuple__ (or whatever) for use by python modules.
On the other > >hand, making both required attributes/methods will put an extra burden > >on package writers. Moreover, a pure python implementation of an > >array-like object will not be able to provide __array_struct__ at all. > > One possible solution would be an array protocol metaclass that adds > >__array_struct__ to a class with __array_tuple__ and __array_tuple__ > >to a class with __array_struct__ (yet another argument to make both > >methods). > > > I don't understand this. I don't see how bringing in a metaclass is > going to help a pure python type provide a sensible __array_struct__. > That seems like a hopeless task. Shouldn't pure python implementations > just provide __array__? > My metaclass idea is very similar to your unpack_interface suggestion. A metaclass can automatically add def __array_tuple__(self): return unpack_interface(self.__array_interface__()) or def __array_interface__(self): return pack_interface(self.__array_tuple__()) to a class that implements only one of the two required methods. > A single attribute seems pretty appealing to me, I don't see much use for anything else. I don't mind just having __array_struct__ that must return a CObject. My main objection was against a method/attribute that may return either CObject or something else. That felt like shifting the burden from package writer to the package user. From ndarray at mac.com Fri Jun 9 12:53:19 2006 From: ndarray at mac.com (Sasha) Date: Fri, 9 Jun 2006 12:53:19 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: On 6/9/06, Tim Hochberg wrote: > Shouldn't pure python implementations > just provide __array__? >
From oliphant at ee.byu.edu Fri Jun 9 13:50:00 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 11:50:00 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Message-ID: <4489B4C8.3050606@ee.byu.edu> Albert Strasheim wrote: >Hello all > > > >>-----Original Message----- >>From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- >>discussion-bounces at lists.sourceforge.net] On Behalf Of Travis Oliphant >>Sent: 08 June 2006 22:27 >>To: numpy-discussion >>Subject: [Numpy-discussion] Array Protocol change for Python 2.6 >> >>... >> >>I would like to eliminate all the other array protocol attributes before >>NumPy 1.0 (and re-label those such as __array_data__ that are useful in >>other contexts --- like ctypes). >> >> > >Just out of curiosity: > >In [1]: x = N.array([]) > >In [2]: x.__array_data__ >Out[2]: ('0x01C23EE0', False) > >Is there a reason why the __array_data__ tuple stores the address as a hex >string? I would guess that this representation of the address isn't the most >useful one for most applications. > > I suppose we could have stored it as a Python Long integer. But, storing it as a string was probably inspired by SWIG. -Travis From tim.hochberg at cox.net Fri Jun 9 13:54:36 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 10:54:36 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: <4489B5DC.1080505@cox.net> Sasha wrote: >On 6/9/06, Tim Hochberg wrote: > > >>Sasha wrote: >>... >> >> >>>My rule of thumb for choosing between an attribute and a method is >>>that attribute access should not create new objects. 
>>> >>> >>> >>Conceptually at least, couldn't there be a single __array_interface__ >>object associated with a given array? In that sense, it doesn't really >>feel like creating a new object. >> >> >> >In my view, conceptually, __array_interface__ creates an adaptor to the >array-like object. What are the advantages of it being an attribute? >It is never settable, so the most common advantage of packing get/set >methods in a single attribute can be ruled out. Saving typing of >'()' cannot be taken seriously when the name contains a pair of >double underscores :-). > >There was a similar issue discussed on the python-3000 mailing list >with respect to the __hash__ method. > > Isn't __array_interface__ always O(1)? By the criteria in that thread, that would make it a good candidate for being an attribute. [Stare at __array_interface__ spec...think..stare...] OK, I think I'm coming around to making it a function. Presumably, in: >>> a = arange(6) >>> ai1 = a.__array_interface__() >>> a.shape = [3, 2] >>> ai2 = a.__array_interface__() ai1 and ai2 will be different objects, pointing to structs with different shape and stride attributes. So, in that sense it's not conceptually constant and should be a function. What happens if I then delete or resize a? Hmmm. It looks like that's probably OK since CObject grabs a reference to a. FWIW, at this point, I marginally prefer array_struct to array_interface. > > >>.... >> >> >>>My problem with __array_struct__ returning either a tuple or a CObject >>>is that the array protocol should really provide both. CObject is >>>useless for interoperability at python level and a tuple (or dict) is >>>inefficient at the C level. Thus a good array-like object should >>>really provide both __array_struct__ for use by C modules and >>>__array_tuple__ (or whatever) for use by python modules. On the other >>>hand, making both required attributes/methods will put an extra burden >>>on package writers.
Moreover, a pure python implementation of an >>>array-like object will not be able to provide __array_struct__ at all. >>>One possible solution would be an array protocol metaclass that adds >>>__array_struct__ to a class with __array_tuple__ and __array_tuple__ >>>to a class with __array_struct__ (yet another argument to make both >>>methods). >>> >>> >>> >>> >>I don't understand this. I don't see how bringing in a metaclass is >>going to help a pure python type provide a sensible __array_struct__. >>That seems like a hopeless task. Shouldn't pure python implementations >>just provide __array__? >> >> >> > >My metaclass idea is very similar to your unpack_interface suggestion. > A metaclass can automatically add > >def __array_tuple__(self): > return unpack_interface(self.__array_interface__()) > > >or > >def __array_interface__(self): > return pack_interface(self.__array_tuple__()) > >to a class that implements only one of the two required methods. > > It seems like 99% of the people will never care about this at the Python level, so adding an extra attribute is mostly clutter. For those few who do care a function seems preferable. To be honest, I don't actually see a need for anything other than the basic __array_struct__. >>A single attribute seems pretty appealing to me, I don't see much use >>for anything else. >> >> > >I don't mind just having __array_struct__ that must return a CObject. >My main objection was against a method/attribute that may return >either CObject or something else. That felt like shifting the burden >from package writer to the package user. > > I concur.
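The pack/unpack idea above can be sketched in a few lines. Everything in this snippet is hypothetical -- pack_interface, unpack_interface and the decorator are toy stand-ins, not anything that exists in NumPy -- but it shows how a helper could supply whichever of the two accessors a class forgot to define:

```python
# Toy stand-ins for the real pack/unpack routines discussed in the thread.
def pack_interface(t):
    return {"packed": t}

def unpack_interface(s):
    return s["packed"]

def array_protocol(cls):
    """Hypothetical helper (a class decorator rather than a metaclass):
    derive the missing accessor from the one the class does define."""
    if hasattr(cls, "__array_tuple__") and not hasattr(cls, "__array_struct__"):
        cls.__array_struct__ = lambda self: pack_interface(self.__array_tuple__())
    elif hasattr(cls, "__array_struct__") and not hasattr(cls, "__array_tuple__"):
        cls.__array_tuple__ = lambda self: unpack_interface(self.__array_struct__())
    return cls

@array_protocol
class MyArray:
    # Only the Python-level accessor is defined by hand...
    def __array_tuple__(self):
        return ((3,), "<f8")

m = MyArray()
# ...but the decorator has filled in the other one.
print(m.__array_struct__())  # -> {'packed': ((3,), '<f8')}
```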
> >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From oliphant at ee.byu.edu Fri Jun 9 14:08:51 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 12:08:51 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <44899BF9.9000002@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> Message-ID: <4489B933.4080003@ee.byu.edu> Tim Hochberg wrote: > Sasha wrote: > >> On 6/8/06, David M. Cooke wrote: >> >> >>> ... >>> +0 for name change; I'm happy with it as an attribute. >>> >>> >> >> My rule of thumb for choosing between an attribute and a method is >> that attribute access should not create new objects. >> Interesting rule. In NumPy this is not quite the rule followed. Basically attributes are used when getting or setting intrinsic "properties" of the array. Attributes are used for properties that are important in defining what an array *is*. The flags attribute, for example, is an important intrinsic property of the array but it returns a flags object when it is accessed. The flat attribute also returns a new object (it is arguable whether it should have been a method or an attribute but it is enough of an intrinsic property --- setting the flat attribute sets elements of the array -- that with historical precedence it was left as an attribute). By this measure, the array interface should be an attribute. >>> >> >> My problem with __array_struct__ returning either a tuple or a CObject >> is that the array protocol should really provide both. > This is a convincing argument. Yes, the array protocol should provide both. Thus, we can't over-ride the usage of the same name unless that name produces an object through which both interfaces can be obtained. Is that Sasha's suggestion?
> > A single attribute seems pretty appealing to me, I don't see much > use for anything else. > > >>> We still need __array_descr__, as the C struct doesn't provide all >>> the info >>> that this does. >>> >>> >> >> What do you have in mind? >> >> > Is there any prospect of merging this data into the C struct? It would > be cleaner if all of the information could be embedded into the C > struct, but I can see how that might be a backward compatibility > nightmare. I do think it should be merged into the C struct. The simplest thing to do is to have an additional PyObject * as part of the C struct which could be NULL (or unassigned). The backward compatibility is a concern but when thinking about what Python 2.6 should support we should not be too crippled by it. Perhaps we should just keep __array_struct__ and compress all the other array_interface methods into the __array_interface__ attribute which returns a dictionary from which the Python-side interface can be produced. Keep in mind there are two different (but related) issues at play here. 1) What goes in to NumPy 1.0 2) What we propose should go into Python 2.6 I think for #1 we should compress the Python-side array protocol into a single __array_interface__ attribute that returns a dictionary. We should also expand the C-struct to contain what _array_descr_ currently provides. -Travis From alexander.belopolsky at gmail.com Fri Jun 9 14:55:07 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 9 Jun 2006 14:55:07 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: On 6/9/06, Travis Oliphant wrote: > ... In NumPy this is not quite the rule followed. > Basically attributes are used when getting or setting intrinsic > "properties" of the array.
Attributes are used for properties that are > important in defining what an array *is*. The flags attribute, for > example, is an important intrinsic property of the array but it returns > a flags object when it is accessed. The flat attribute also returns a > new object (it is arguable whether it should have been a method or an > attribute but it is enough of an intrinsic property --- setting the flat > attribute sets elements of the array -- that with historical precedence > it was left as an attribute). > > By this measure, the array interface should be an attribute. > Array interface is not an intrinsic property of the array, but rather an alternative representation of the array itself. Flags are properly an attribute because they are settable. Something like >>> x.flags()['WRITEABLE'] = False although technically possible, would be quite ugly. Similarly, the shape attribute, although it fails my rule of thumb by creating a new object, >>> x.shape is x.shape False is justifiably an attribute because otherwise two methods: get_shape and set_shape would be required. I don't think "flat" should be an attribute, however. I could not find the reference, but I remember a discussion of why __iter__ should not be an attribute and IIRC the answer was because an iterator has a mutable state that is not reflected in the underlying object: >>> x = arange(5) >>> i = x.flat >>> list(i) [0, 1, 2, 3, 4] >>> list(i) [] >>> list(x.flat) [0, 1, 2, 3, 4] > >> My problem with __array_struct__ returning either a tuple or a CObject > >> is that the array protocol should really provide both. > > > This is a convincing argument. Yes, the array protocol should provide > both. Thus, we can't over-ride the usage of the same name unless that > name produces an object through which both interfaces can be obtained. > > Is that Sasha's suggestion? > It was, but I quickly retracted it in favor of a mechanism to unpack the CObject.
FWIW, I am also now -0 on the name change from __array_struct__ to __array_interface__ if what it provides is just a struct wrapped in a CObject. From strawman at astraw.com Fri Jun 9 15:26:33 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 12:26:33 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <4489CB69.7080702@astraw.com> On the one hand, I feel we should keep __array_struct__ behaving exactly as it is now. There's already lots of code that uses it, and it's tremendously useful despite (because of?) its simplicity. For these use cases, the __array_descr__ information has already proven unnecessary. I must say that I, and probably others, thought that __array_struct__ would be future-proof. Although the magnitude of the proposed change to add this information to the C-struct PyArrayInterface is minor, it still breaks code in the wild. On the other hand, I'm only beginning to grasp the power of the __array_descr__ information.
So perhaps bumping the PyArrayInterface.version to 3 (2 is the current, and as far as I can tell, original version) and going forward would be justified. Perhaps there's a way towards backwards compatibility -- the various array consumers could presumably support _reading_ both v2 and v3 nearly forever, but could spit out warnings when reading v2. It seems v3 would be a simple superset of v2, so implementation of this wouldn't be hard. The challenge will be when an implementor returns a v3 __array_struct__ to something that reads only v2. For this reason, maybe it's better to break backwards compatibility now before even more code is written to read v2. Is it clear what would need to be done to provide a C-struct giving the _array_descr_ information? What's the problem with keeping __array_descr__ access available only at the Python level? Your original email suggested limiting the number of attributes, which I agree with, but I don't think we need to go to the logical extreme. Does simply keeping __array_descr__ as part of the Python array interface avoid these issues? At what cost? Cheers! Andrew Travis Oliphant wrote: >Keep in mind there are two different (but related) issues at play here. > >1) What goes in to NumPy 1.0 >2) What we propose should go into Python 2.6 > > >I think for #1 we should compress the Python-side array protocol into a >single __array_interface__ attribute that returns a dictionary. We >should also expand the C-struct to contain what _array_descr_ currently >provides.
> > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From tim.hochberg at cox.net Fri Jun 9 15:52:38 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 12:52:38 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <4489D186.3020605@cox.net> Sasha wrote: >On 6/9/06, Travis Oliphant wrote: > > >>... In NumPy this is not quite the rule followed. >>Basically attributes are used when getting or setting intrinsic >>"properties" of the array. Attributes are used for properties that are >>important in defining what an array *is*. The flags attribute, for >>example, is an important intrinsic property of the array but it returns >>a flags object when it is accessed. The flat attribute also returns a >>new object (it is arguable whether it should have been a method or an >>attribute but it is enough of an intrinsic property --- setting the flat >>attribute sets elements of the array -- that with historical precedent >>it was left as an attribute). >> >>By this measure, the array interface should be an attribute. >> >> >> > >Array interface is not an intrinsic property of the array, but rather >an alternative representation of the array itself. > > I was going to say that it may help to think of array_interface as returning a *view*, since that seems to be the semantics that could probably be implemented safely without too much trouble. However, it looks like that's not what happens. array_interface->shape and strides point to the raw shape and strides for the array. That looks like it's a problem.
Isn't: >>> ai = a.__array_interface__ >>> a.shape = newshape going to result in ai having stale pointers to shape and strides that no longer exist? Potentially resulting in a segfault? It seems the safe approach is to give array_interface its own shape and strides data. An implementation shortcut could be to actually generate a new view in array_struct_get and then pass that to PyCObject_FromVoidPtrAndDesc. Thus the CObject would have the only handle to the new view and it couldn't be corrupted. [SNIP] -tim From oliphant at ee.byu.edu Fri Jun 9 16:05:50 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 14:05:50 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D186.3020605@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> Message-ID: <4489D49E.3090401@ee.byu.edu> Tim Hochberg wrote: >I was going to say that it may help to think of array_interface as >returning a *view*, since that seems to be the semantics that could >probably be implemented safely without too much trouble. However, it >looks like that's not what happens. array_interface->shape and strides >point to the raw shape and strides for the array. That looks like it's a >problem. Isn't: > > >>> ai = a.__array_interface__ > >>> a.shape = newshape > >going to result in ai having stale pointers to shape and strides that >no longer exist? > This is an implementation detail. I'm still trying to gather some kind of consensus on what to actually do here. There is no such __array_interface__ attribute at this point.
-Travis From strawman at astraw.com Fri Jun 9 16:51:57 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 13:51:57 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D3E8.4060108@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489CB69.7080702@astraw.com> <4489D3E8.4060108@ee.byu.edu> Message-ID: <4489DF6D.8010407@astraw.com> Travis Oliphant wrote: > Andrew Straw wrote: > >> On the one hand, I feel we should keep __array_struct__ behaving >> exactly as it is now. There's already lots of code that uses it, and >> it's tremendously useful despite (because of?) its simplicity. For >> these use cases, the __array_descr__ information has already >> proven unnecessary. I must say that I, and probably others, thought >> that __array_struct__ would be future-proof. Although the magnitude >> of the proposed change to add this information to the C-struct >> PyArrayInterface is minor, it still breaks code in the wild. >> > I don't see how it breaks any code in the wild to add an additional > member to the C-struct. We could easily handle it in new code with a > flag setting (like Python uses). The only possible problem is > looking for it when it is not there. Ahh, thanks for clarifying. Let me paraphrase to make sure I got it right: given a C-struct "inter" of type PyArrayInterface, if and only if ((inter.flags & HAS_ARRAY_DESCR) == HAS_ARRAY_DESCR) inter could safely be cast as PyArrayInterfaceWithArrayDescr and thus expose a new member. This does seem to avoid all the issues and maintain backwards compatibility. I guess the only potential complaint is that it's a little C trick which might be unpalatable to the core Python devs, but it doesn't seem egregious to me. If I do understand this issue, I'm +1 for the above scheme provided the core Python devs don't mind. Cheers!
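[The flag-check trick paraphrased above can be mocked up from Python with ctypes — a toy sketch only: the struct layout is drastically simplified and the HAS_ARRAY_DESCR bit value is hypothetical, but the consumer-side logic mirrors the C idea.]

```python
import ctypes

HAS_ARRAY_DESCR = 0x0800  # hypothetical flag bit, for illustration only

class PyArrayInterface(ctypes.Structure):
    # Drastically simplified stand-in for the v2 C struct
    _fields_ = [("two", ctypes.c_int),
                ("nd", ctypes.c_int),
                ("flags", ctypes.c_int)]

class PyArrayInterfaceWithArrayDescr(ctypes.Structure):
    # Identical leading layout, plus one trailing member
    _fields_ = PyArrayInterface._fields_ + [("descr", ctypes.py_object)]

def read_descr(void_ptr):
    """Consumer side: only cast to the extended struct if the flag is set."""
    base = ctypes.cast(void_ptr, ctypes.POINTER(PyArrayInterface)).contents
    if base.flags & HAS_ARRAY_DESCR:
        ext = ctypes.cast(
            void_ptr, ctypes.POINTER(PyArrayInterfaceWithArrayDescr)).contents
        return ext.descr
    return None  # old producer: no descr member present
```

Because the new member sits strictly after the old layout, old consumers that ignore the flag keep working unchanged, which is the backwards-compatibility point being made here.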
Andrew From cookedm at physics.mcmaster.ca Fri Jun 9 17:04:09 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 17:04:09 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489B933.4080003@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> Message-ID: <20060609170409.4a68fa81@arbutus.physics.mcmaster.ca> On Fri, 09 Jun 2006 12:08:51 -0600 Travis Oliphant wrote: > Tim Hochberg wrote: > > > Sasha wrote: > > > >> On 6/8/06, David M. Cooke wrote: > > >>> > >> > >> My problem with __array_struct__ returning either a tuple or a CObject > >> is that array protocol should really provide both. > > > This is a convincing argument. Yes, the array protocol should provide > both. Thus, we can't override the usage of the same name unless that > name produces an object through which both interfaces can be obtained. True, didn't think about that. +1. > >>> We still need __array_descr__, as the C struct doesn't provide all > >>> the info > >>> that this does. > >> > >> What do you have in mind? > >> > > Is there any prospect of merging this data into the C struct? It would > > be cleaner if all of the information could be embedded into the C > > struct, but I can see how that might be a backward compatibility > > nightmare. > > I do think it should be merged into the C struct. The simplest thing > to do is to have an additional PyObject * as part of the C struct which > could be NULL (or unassigned). The backward compatibility is a concern > but when thinking about what Python 2.6 should support we should not be > too crippled by it. > > Perhaps we should just keep __array_struct__ and compress all the other > array_interface methods into the __array_interface__ attribute which > returns a dictionary from which the Python-side interface can be produced. +1.
I'm ok with two attributes: __array_struct__ (for C), and __array_interface__ (as a dict for Python). For __array_descr__, I would require everything that provides an __array_struct__ must also provide an __array_interface__, then __array_descr__ can become a 'descr' key in __array_interface__. Requiring that would also mean that any array-like object can be introspected from Python or C. I think that the array_descr is complicated enough that keeping it as a Python object is ok: you don't have to reinvent routines to make tuple-like objects, and handle memory for strings, etc. If you're using the array interface, you've got Python available: use it. If you *do* want a C-level version, I'd make it simple, and concatenate the typestr descriptions of each field together, like '>i2>f8', and forget the names (you can grab them out of __array_interface__['descr'] if you need them). That's simple enough to be parseable with sscanf. > Keep in mind there are two different (but related) issues at play here. > > 1) What goes in to NumPy 1.0 > 2) What we propose should go into Python 2.6 > > > I think for #1 we should compress the Python-side array protocol into a > single __array_interface__ attribute that returns a dictionary. We > should also expand the C-struct to contain what _array_descr_ currently > provides. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Fri Jun 9 17:08:32 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 14:08:32 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D49E.3090401@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> Message-ID: <4489E350.2070500@cox.net> Travis Oliphant wrote: >Tim Hochberg wrote: > > > >>I was going to say that it may help to think of array_interface as >>returning a *view*, since that seems to be the semantics that could >>probably be implemented safely without too much trouble. However, it >>looks like that's not what happens. array_interface->shape and strides >>point to the raw shape and strides for the array. That looks like it's a >>problem. Isn't: >> >> >> >>>>>ai = a.__array_interface__ >>>>>a.shape = newshape >>>>> >>>>> >>going to result in ai having stale pointers to shape and strides that >>no longer exist? >> >> >> >This is an implementation detail. I'm still trying to gather some kind >of consensus on what to actually do here. > There were three things mixed together in my post: 1. The current implementation of __array_struct__ looks buggy. Should I go ahead and file a bug report so that this behaviour doesn't get blindly copied over from __array_struct__ to whatever the final doohickey is called, or is that going to be totally rewritten in any case? 2. Whether __array_struct__ or __array_interface__ or whatever it gets called returns something that's kind of like a view (has its own copies of shape and strides mainly) versus an alias for the original array (somehow tries to track the original array's shape and strides) is a semantic difference, not an implementation detail.
I suspect that no one really cares that much about this and we'll end up doing what's easiest to get right; I'm pretty certain that is view semantics. It may be helpful to pronounce on that now, since it's possible the semantics might influence the name chosen, but I don't think it's critical. 3. The implementation details I provided were, uh, implementation details. -tim > There is no such >__array_interface__ attribute at this point. > > >-Travis > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From fperez.net at gmail.com Fri Jun 9 17:19:14 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Jun 2006 15:19:14 -0600 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> Message-ID: On 6/9/06, David M. Cooke wrote: > On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > > Anyone object to this patch against current numpy SVN to get rid of > > this thing? (tracking where the hell that thing was coming from was > > all kinds of fun) > > Go ahead. > > I'm against random messages being printed out anyways -- I'd get > rid of the '' too. There's a bunch of code in scipy > with 'print' statements that I don't think belong in a library. (Now, > if we defined a logging framework, that'd be ok with me!) Before I commit anything, let's decide on that one. Weave used to print 'None' whenever it compiled anything, I changed it a while ago to the current 'weave:compiling'. I'm also of the opinion that libraries should operate quietly, but with weave I've always wanted that message in there. The reason is that when weave compiles (esp. with blitz in the picture), the execution takes a long time.
The same function goes from milliseconds to 30 seconds of run time depending on whether compilation is happening or not. This difference is so dramatic that I think a message is justified (absent a proper logging framework). It's helpful to know that the time is going into c++ compilation, and not your code hanging for 30 seconds. Opinions? f From cookedm at physics.mcmaster.ca Fri Jun 9 17:45:28 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 17:45:28 -0400 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> Message-ID: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> On Fri, 9 Jun 2006 15:19:14 -0600 "Fernando Perez" wrote: > On 6/9/06, David M. Cooke wrote: > > On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: > > > > Anyone object to this patch against current numpy SVN to get rid of > > > this thing? (tracking where the hell that thing was coming from was > > > all kinds of fun) > > > > Go ahead. > > > > I'm against random messages being printed out anyways -- I'd get > > rid of the '' too. There's a bunch of code in scipy > > with 'print' statements that I don't think belong in a library. (Now, > > if we defined a logging framework, that'd be ok with me!) > > Before I commit anything, let's decide on that one. Weave used to > print 'None' whenever it compiled anything, I changed it a while ago > to the current 'weave:compiling'. I'm also of the opinion that > libraries should operate quietly, but with weave I've always wanted > that message in there. The reason is that when weave compiles (esp. > with blitz in the picture), the execution takes a long time. The same > function goes from milliseconds to 30 seconds of run time depending on > whether compilation is happening or not. > > This difference is so dramatic that I think a message is justified > (absent a proper logging framework).
It's helpful to know that the > time is going into c++ compilation, and not your code hanging for 30 > seconds. Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm concerned about. inline_tools.compile_function takes a verbose flag, though, which eventually gets passed to build_tools.build_extension (which I believe does all the compiling for weave). It's probably more reasonable to have inline_tools.compile_function default to verbose=1 instead of 0, then build_extension will print 'Compiling code...' (that should be changed to mention weave). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Fri Jun 9 17:55:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 14:55:53 -0700 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> Message-ID: <4489EE69.6050509@cox.net> David M. Cooke wrote: >On Fri, 9 Jun 2006 15:19:14 -0600 >"Fernando Perez" wrote: > > > >>On 6/9/06, David M. Cooke wrote: >> >> >>>On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote: >>> >>> >>>>Anyone object to this patch against current numpy SVN to get rid of >>>>this thing? (tracking where the hell that thing was coming from was >>>>all kinds of fun) >>>> >>>> >>>Go ahead. >>> >>>I'm against random messages being printed out anyways -- I'd get >>>rid of the '' too. There's a bunch of code in scipy >>>with 'print' statements that I don't think belong in a library. (Now, >>>if we defined a logging framework, that'd be ok with me!) >>> >>> >>Before I commit anything, let's decide on that one. 
Weave used to >>print 'None' whenever it compiled anything, I changed it a while ago >>to the current 'weave:compiling'. I'm also of the opinion that >>libraries should operate quietly, but with weave I've always wanted >>that message in there. The reason is that when weave compiles (esp. >>with blitz in the picture), the execution takes a long time. The same >>function goes from milliseconds to 30 seconds of run time depending on >>whether compilation is happening or not. >> >>This difference is so dramatic that I think a message is justified >>(absent a proper logging framework). It's helpful to know that the >>time is going into c++ compilation, and not your code hanging for 30 >>seconds. >> >> > >Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm >concerned about. > >inline_tools.compile_function takes a verbose flag, though, which eventually >gets passed to build_tools.build_extension (which I believe does all the >compiling for weave). It's probably more reasonable to have >inline_tools.compile_function default to verbose=1 instead of 0, then >build_extension will print 'Compiling code...' (that should be changed to >mention weave). > > Assuming inline_tools doesn't already use logging, might it be advantageous to have it use Python's logging module? >>> logging.getLogger("scipy.weave").warning("compiling -- this may take some time") WARNING:scipy.weave:compiling -- this may take some time [I think warning is the lowest level that gets displayed by default] -tim From fperez.net at gmail.com Fri Jun 9 18:21:00 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Jun 2006 16:21:00 -0600 Subject: [Numpy-discussion] Getting rid of annoying weave nag In-Reply-To: <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> References: <20060609080152.GA7023@arbutus.physics.mcmaster.ca> <20060609174528.1ece4bb7@arbutus.physics.mcmaster.ca> Message-ID: On 6/9/06, David M.
Cooke wrote: > > This difference is so dramatic that I think a message is justified > > (absent a proper logging framework). It's helpful to know that the > > time is going into c++ compilation, and not your code hanging for 30 > > seconds. > > Ok, I'll give you that one :-) It's the other 1000 uses of print that I'm > concerned about. > > inline_tools.compile_function takes a verbose flag, though, which eventually > gets passed to build_tools.build_extension (which I believe does all the > compiling for weave). It's probably more reasonable to have > inline_tools.compile_function default to verbose=1 instead of 0, then > build_extension will print 'Compiling code...' (that should be changed to > mention weave). I failed to mention that I agree with you: the proper solution is to use logging for this. For now I'll commit the strict-prototypes fix, and if I find myself with a lot of spare time, I'll try to clean things up a little bit to use logging (there's already a logger instance running in there). Cheers, f From Chris.Barker at noaa.gov Fri Jun 9 18:50:21 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 09 Jun 2006 15:50:21 -0700 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <44875BA8.806@astraw.com> References: <200606072052.k57KqXJ2015269@oobleck.astro.cornell.edu> <44874C7D.4050208@noaa.gov> <44875BA8.806@astraw.com> Message-ID: <4489FB2D.4000500@noaa.gov> Andrew Straw wrote: > Christopher Barker wrote: >> Joe Harrington wrote: >>> My >>> suggestion is that all the other pages be automatic redirects to the >>> scipy.org page or subpages thereof. >> if that means something like: >> >> www.numpy.scipy.org (or www.scipy.org/numpy ) >> Then I'm all for it. >> > I just made www.scipy.org/numpy redirect to the already-existing > www.scipy.org/NumPy > > So, hopefully you're on-board now. 
BTW, this is the reason why we have a > wiki -- if you don't like something it says, how the site is organized, > or whatever, please just jump in and edit it. Thanks for that, but I wasn't taking issue with capitalization. Now that you've done it, though, the easier it is to find, the better. As I understood it, Joe's suggestion about "all other pages" referred to pages that are NOT hosted at scipy.org. Those I can't change. My comment referred to an earlier suggestion that other pages about Numpy be redirected to www.scipy.org, and I was simply suggesting that any non-scipy page that refers to numpy should refer to a page specifically about numpy, like www.scipy.org/NumPy, rather than the main scipy page. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at cox.net Fri Jun 9 18:49:12 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 15:49:12 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489D49E.3090401@ee.byu.edu> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> Message-ID: <4489FAE8.7060605@cox.net> Which of the following should we require for an object to be "supporting the array interface"? Here a producer is something that supplies array_struct or array_interface (where the latter is the Python level version of the former as per recent messages). Consumers do something with the results. 1. Producers can supply either array_struct (if implemented in C) or array_interface (if implemented in Python). Consumers must accept both. 2. Producers must supply both array_struct and array_interface. Consumers may accept either. 3.
Producers must supply both array_struct and array_interface. Consumers must accept both as well. A possibly related point, array_interface['data'] should be required to be a buffer object; a 2-tuple of address/read-only should not be allowed as that's a simple way to crash the interpreter. I see some reasonable arguments for either 1 or 2. 3 seems like excess work. -tim From strawman at astraw.com Fri Jun 9 19:03:32 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 09 Jun 2006 16:03:32 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489FAE8.7060605@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> Message-ID: <4489FE44.4090804@astraw.com> Tim Hochberg wrote: >Which of the following should we require for an object to be "supporting >the array interface"? Here a producer is something that supplies >array_struct or array_interface (where the latter is the Python level >version of the former as per recent messages). Consumers do something >with the results. > > 1. Producers can supply either array_struct (if implemented in C) or > array_interface (if implemented in Python). Consumers must accept > both. > 2. Producers must supply both array_struct and array_interface. > Consumers may accept either. > 3. Producers must supply both array_struct and array_interface. > Consumers must accept both as well. > > I haven't been following as closely as I could, but is the following a possibility? 4. Producers can supply either array_struct or array_interface. Consumers may accept either.
The intermediate is a small, standalone (does not depend on NumPy) extension module that does automatic translation if necessary by providing 2 functions: as_array_struct() (which returns a CObject) and as_array_interface() (which returns a tuple/dict/whatever). From cookedm at physics.mcmaster.ca Fri Jun 9 19:30:57 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 9 Jun 2006 19:30:57 -0400 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <4489FE44.4090804@astraw.com> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> Message-ID: <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> On Fri, 09 Jun 2006 16:03:32 -0700 Andrew Straw wrote: > Tim Hochberg wrote: > > >Which of the following should we require for an object to be "supporting > >the array interface"? Here a producer is something that supplies > >array_struct or array_interface (where the latter is the Python level > >version of the former as per recent messages). Consumers do something > >with the results. > > > > 1. Producers can supply either array_struct (if implemented in C) or > > array_interface (if implemented in Python). Consumers must accept > > both. > > 2. Producers must supply both array_struct and array_interface. > > Consumers may accept either. > > 3. Producers must supply both array_struct and array_interface. > > Consumers must accept both as well. > > > > > I haven't been following as closely as I could, but is the following a > possibility? > 4. Producers can supply either array_struct or array_interface. > Consumers may accept either.
The intermediate is a small, standalone > (does not depend on NumPy) extension module that does automatic > translation if necessary by providing 2 functions: as_array_struct() > (which returns a CObject) and as_array_interface() (which returns a > tuple/dict/whatever). For something to go in the Python standard library this is certainly possible. Heck, if it's in the standard library we can have one attribute which is a special ArrayInterface object, which can be queried from both Python and C efficiently. For something like numpy (where we don't require a special object: the "producer" and "consumers" in Tim's terminology could be Numeric and numarray, for instance), we don't want a 3rd-party dependence. There's one case that I mentioned in another email: 5. Producers must supply array_interface, and may supply array_struct. Consumers can use either. Requiring array_struct means that Python-only modules can't play along, so I think it should be optional (of course, if you're concerned about speed, you would provide it). Or maybe we should revisit the "no external dependencies". Perhaps one module would make everything easier, with helper functions and consistent handling of special cases. Packages wouldn't need it if they don't interact: you could conditionally import it when __array_interface__ is requested, and fail if you don't have it. It would just be required if you want to do sharing. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Fri Jun 9 19:57:46 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Jun 2006 17:57:46 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: <448A0AFA.1090700@ee.byu.edu> Thanks for your response to the questionnaire.
>>3) Please, explain your reason(s) for not making the switch. (if >>you answered No to #2) >> >> > >Lack of time. Some of the changes from Numeric are subtle and require >a careful analysis of the code, and then careful testing. For big >applications, that's a lot of work. There are also modules (I am >thinking of RNG) that have been replaced by something completely >different that needs to be evaluated first. > > You may be interested to note that I just added the RNG interface to numpy for backwards compatibility. It can be accessed and used by replacing import RNG with import numpy.random.oldrng as RNG Best regards, -Travis From stephenemslie at gmail.com Fri Jun 9 21:34:36 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Sat, 10 Jun 2006 02:34:36 +0100 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel Message-ID: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> I'm just starting with numpy (via scipy) and I'm wanting to perform adaptive thresholding (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. Basically that means that I need to get a threshold for each pixel by examining the pixels around it. In numpy this translates to finding the adjacent cells for each cell (not including the value of the cell we are examining) and getting the mean, or median, of those cells. I've written something that works, but is terribly slow. How would someone with more experience get the adjacent cells for each cell minus the cell being examined?
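[One vectorized answer to the question above — an illustrative sketch in modern numpy, not part of the original thread; the function names are made up: pad the image once and sum eight shifted views instead of looping over pixels.]

```python
import numpy as np

def neighbor_mean(img):
    """Mean of the 8 adjacent cells of every pixel, excluding the pixel itself.

    Edges are handled by reflect-padding; other border policies work the same way.
    """
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='reflect')
    total = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the center cell
            total += p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
    return total / 8.0

def adaptive_threshold(img, offset=0.0):
    # A pixel is "on" when it exceeds its local mean minus a constant offset
    return np.asarray(img, dtype=float) > neighbor_mean(img) - offset
```

The loop runs only 8 times regardless of image size, so the per-pixel work stays inside numpy.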
Thanks Stephen Emslie From robert.kern at gmail.com Fri Jun 9 22:12:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Jun 2006 21:12:02 -0500 Subject: [Numpy-discussion] Building shared libraries with numpy.distutils In-Reply-To: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za> References: <001d01c68bab$e142d5c0$01eaa8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Hello all > > For my Summer of Code project, I'm adding Support Vector Machine code to > SciPy. Underneath, I'm currently using libsvm. Thus far, I've been compiling > libsvm as a shared library (DLL on Windows) using SCons and doing the > wrapping with ctypes. > > Now, I would like to integrate my code into the SciPy build. Unfortunately, > it doesn't seem as if numpy.distutils or distutils proper knows about > building shared libraries. > > Building shared libraries across multiple platforms is tricky to say the > least so I don't know if implementing this functionality again is something > worth doing. The alternative -- never using shared libraries, doesn't seem > very appealing either. > > Is anybody building shared libraries? Any code or comments? Ed Schofield worked out a way: http://www.scipy.net/pipermail/scipy-dev/2006-April/005708.html You'll have some experimenting to do, but the basics are there. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From tim.hochberg at cox.net Fri Jun 9 23:58:50 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 09 Jun 2006 20:58:50 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> Message-ID: <448A437A.1030903@cox.net>

David M. Cooke wrote:
>On Fri, 09 Jun 2006 16:03:32 -0700 >Andrew Straw wrote:
>
>>Tim Hochberg wrote:
>>
>>>Which of the following should we require for an object to be "supporting >>>the array interface"? Here a producer is something that supplies >>>array_struct or array_interface (where the latter is the Python level >>>version of the former as per recent messages). Consumers do something >>>with the results.
>>>
>>> 1. Producers can supply either array_struct (if implemented in C) or >>> array_interface (if implemented in Python). Consumers must accept >>> both.
>>> 2. Producers must supply both array_struct and array_interface. >>> Consumers may accept either.
>>> 3. Producers must supply both array_struct and array_interface. >>> Consumers must accept both as well.
>>>
>>I haven't been following as closely as I could, but is the following a >>possibility?
>> 4. Producers can supply either array_struct or array_interface. >>Consumers may accept either. The intermediate is a small, standalone >>(does not depend on NumPy) extension module that does automatic >>translation if necessary by provides 2 functions: as_array_struct() >>(which returns a CObject) and as_array_interface() (which returns a >>tuple/dict/whatever).
>>
>For something to go in the Python standard library this is certainly >possible.
Heck, if it's in the standard library we can have one attribute >which is a special ArrayInterface object, which can be queried from both >Python and C efficiently.
>
>For something like numpy (where we don't require a special object: the >"producer" and "consumers" in Tim's terminology could be Numeric and >numarray, for instance), we don't want a 3rd-party dependence. There's one >case that I mentioned in another email:
>
>5. Producers must supply array_interface, and may supply array_struct. >Consumers can use either.
>
>Requiring array_struct means that Python-only modules can't play along, so I >think it should be optional (of course, if you're concerned about speed, you >would provide it).
>
>Or maybe we should revisit the "no external dependencies". Perhaps one module >would make everything easier, with helper functions and consistent handling >of special cases. Packages wouldn't need it if they don't interact: you could >conditionally import it when __array_interface__ is requested, and fail if >you don't have it. It would just be required if you want to do sharing.
>
Here's another idea: move array_struct *into* array_interface. That is, array_interface becomes a dictionary with the following items:

    shape : sequence specifying the shape
    typestr : the typestring
    descr : you get the idea
    strides : ...
    shape : ...
    mask : ...
    offset : ...
    data : A buffer object
    struct : the array_struct or None

The downside is that you have to do two lookups to get the array_struct, and that should be the fast path. A partial solution is to instead have array_interface be a super_tuple similar to the result of os.stat. This should be faster since tuple is quite fast to index if you know what index you want. An advantage of having one module that you need to import is that we could use something other than CObject, which would allow us to bullet proof the array interface at the python level.
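Modern numpy did in fact settle on a dict-valued __array_interface__ attribute much like the one proposed here. A minimal Python-only producer can be sketched as follows (the Producer class is invented for illustration; it describes memory that happens to be owned by a numpy array):

```python
import numpy as np

class Producer:
    """Pure-Python producer: publishes its buffer through
    __array_interface__ so consumers can wrap it without copying."""
    def __init__(self):
        self._data = np.arange(6, dtype=np.int32)  # backing storage
        self.__array_interface__ = {
            'version': 3,
            'shape': (2, 3),
            'typestr': self._data.dtype.str,   # e.g. '<i4'
            # (address, read-only flag) of the underlying buffer
            'data': (self._data.__array_interface__['data'][0], False),
        }

# consumer side: numpy wraps the described memory, no copy
view = np.asarray(Producer())
```

The returned array keeps a reference to the producer object, so the backing buffer stays alive as long as the view does.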
One nit with using a CObject is that I can pass an object that doesn't refer to a PyArrayInterface with unpleasant results.

-tim

From filip at ftv.pl Sat Jun 10 04:13:42 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Sat, 10 Jun 2006 10:13:42 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID: <44144430.20060610101342@gmail.com>

Hi,

> I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. > Basically that means that I need to get a threshold for each pixel by > examining the pixels around it. In numpy this translates to finding > the adjacent cells for each cell (not including the value of the cell > we are examining) and getting the mean, or median of those cells. > I've written something that works, but is terribly slow. How would > someone with more experience get the adjacent cells for each cell > minus the cell being examined?

You can get the mean value of surrounding cells by filtering:

    import numpy
    from scipy import signal

    im = numpy.ones((10,10), dtype='d') * range(10)

    fi = numpy.ones((3,3), dtype='d') / 8
    fi[1,1] = 0
    print fi
    #[[ 0.125  0.125  0.125]
    # [ 0.125  0.     0.125]
    # [ 0.125  0.125  0.125]]

    signal.convolve2d(im, fi, mode='same', boundary='symm')
    # or correlate2d in this case

Also check help(signal.convolve2d) for information on various parameters this function takes.
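The same neighbourhood mean can also be had with numpy alone, by summing eight shifted views of a padded image, and the thresholding step itself is then a single comparison. This is a modern restatement (np.pad was added to numpy well after this thread), not what was available in 2006:

```python
import numpy as np

def neighbor_mean(im):
    # mirror the border, matching boundary='symm' above
    p = np.pad(im, 1, mode='symmetric')
    total = sum(p[di:di + im.shape[0], dj:dj + im.shape[1]]
                for di in range(3) for dj in range(3)
                if (di, dj) != (1, 1))          # skip the centre pixel
    return total / 8.0

im = np.ones((10, 10)) * np.arange(10)          # same test image as above
mask = im > neighbor_mean(im)                   # the adaptive threshold
```

On the interior of this gradient image each pixel equals its neighbour mean, so the mask is False there; only border effects flip it.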
cheers, fw

From a.u.r.e.l.i.a.n at gmx.net Sat Jun 10 04:19:43 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Sat, 10 Jun 2006 10:19:43 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID: <448A809F.7080009@gmx.net>

Hi,

> I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image. > Basically that means that I need to get a threshold for each pixel by > examining the pixels around it. In numpy this translates to finding > the adjacent cells for each cell (not including the value of the cell > we are examining) and getting the mean, or median of those cells. > > I've written something that works, but is terribly slow. How would > someone with more experience get the adjacent cells for each cell > minus the cell being examined?

Regarding the mean value, you can take a look at scipy.signal.convolve2d. If you convolve with an array like this:

    [[0.125 0.125 0.125]
     [0.125 0.0   0.125]
     [0.125 0.125 0.125]]

you get the 3x3 mean value (btw why leave out the center pixel?). For the median, I cannot think of any good method right now. Another method springs to my mind (just subtract the top row and add a new bottom row to the averaging window), but I have no idea how to do this in an efficient way. Generally, always try to find a way to process the whole array as one. If you perform anything on an array elementwise, it will be dead slow.
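For the median case, one numpy-only option (again a modern sketch, not something proposed in the thread) is to stack the eight shifted neighbour views and take np.median along the new axis. It is memory-hungry, since it materializes eight copies of the image, but fully vectorized:

```python
import numpy as np

def neighbor_median(im):
    p = np.pad(im, 1, mode='edge')              # replicate the border
    shifts = [p[di:di + im.shape[0], dj:dj + im.shape[1]]
              for di in range(3) for dj in range(3)
              if (di, dj) != (1, 1)]            # centre pixel excluded
    return np.median(np.stack(shifts), axis=0)
```

For a 3x3 image of 0..8, the centre pixel's neighbours are 0,1,2,3,5,6,7,8, whose median is 4.0.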
Best regards, Johannes

From aisaac at american.edu Sat Jun 10 09:48:11 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 10 Jun 2006 09:48:11 -0400 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel In-Reply-To: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> References: <51f97e530606091834t443e5bafy47049915522ee196@mail.gmail.com> Message-ID:

On Sat, 10 Jun 2006, stephen emslie apparently wrote:
> I'm just starting with numpy (via scipy) and I'm wanting to perform > adaptive thresholding > (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image.

The ability to define a function on a neighborhood, where the neighborhood is defined by relative coordinates, is useful other places too. (E.g., agent based modeling. Here the output should be a new array of the same dimension with each element replaced by the value of the function on the neighborhood.) I am also interested in learning how people handle this.
Cheers, Alan Isaac

From alex.liberzon at gmail.com Sat Jun 10 13:19:15 2006 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Sat, 10 Jun 2006 19:19:15 +0200 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel Message-ID: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com>

Not sure, but my Google desktop search of "medfilt" (the name of the Matlab function) brought me to:

    info_signal.py - N-dimensional order filter. medfilt - N-dimensional median filter

If it's true, then it is the 2D median filter.

Regarding the neighbouring cells, I found the iterator on 2D ranges on the O'Reilly Cookbook by Simon Wittber very useful for my PyPIV (Particle Image Velocimetry, which works by correlation of 2D blocks of two successive images):

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971

    def blocks(size, box=(1,1)):
        """
        Iterate over a 2D range in 2D increments.
        Returns a 4 element tuple of top left and bottom right coordinates.
        """
        box = list(box)
        pos = [0,0]
        yield tuple(pos + box)
        while True:
            if pos[0] >= size[0]-box[0]:
                pos[0] = 0
                pos[1] += box[1]
                if pos[1] >= size[1]:
                    raise StopIteration
            else:
                pos[0] += box[0]
            topleft = pos
            bottomright = [min(x[1]+x[0],x[2]) for x in zip(pos,box,size)]
            yield tuple(topleft + bottomright)

    if __name__ == "__main__":
        for c in blocks((100,100),(99,10)):
            print c
        for c in blocks((10,10)):
            print c

HIH, Alex

From stephenemslie at gmail.com Sat Jun 10 15:33:25 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Sat, 10 Jun 2006 20:33:25 +0100 Subject: [Numpy-discussion] adaptive thresholding: get adacent cells for each pixel In-Reply-To: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com> References: <775f17a80606101019x1bb4652es6cfa758726030086@mail.gmail.com> Message-ID: <51f97e530606101233r6a1f2e6bo700240b4c99ea86b@mail.gmail.com>

Thanks for all the help! Convolving looks like a great way to do this, and I think that mean will be just fine for my purposes.
That iterator also looks fantastic and is actually the sort of thing that I was looking for at first. I haven't tried it yet though. Any idea how fast it would be?

Stephen

On 6/10/06, Alex Liberzon wrote:
>
> Not sure, but my Google desktop search of "medfilt" (the name of > Matlab function) brought me to:
>
> info_signal.py - N-dimensional order filter. medfilt -N-dimensional > median filter
>
> If it's true, then it is the 2D median filter.
>
> Regarding the neighbouring cells, I found the iterator on 2D ranges on > the O'Reily Cookbook by Simon Wittber very useful for my PyPIV > (Particle Image Velocimetry, which works by correlation of 2D blocks > of two successive images):
>
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971
>
> def blocks(size, box=(1,1)):
>     """
>     Iterate over a 2D range in 2D increments.
>     Returns a 4 element tuple of top left and bottom right coordinates.
>     """
>     box = list(box)
>     pos = [0,0]
>     yield tuple(pos + box)
>     while True:
>         if pos[0] >= size[0]-box[0]:
>             pos[0] = 0
>             pos[1] += box[1]
>             if pos[1] >= size[1]:
>                 raise StopIteration
>         else:
>             pos[0] += box[0]
>         topleft = pos
>         bottomright = [min(x[1]+x[0],x[2]) for x in zip(pos,box,size)]
>         yield tuple(topleft + bottomright)
>
> if __name__ == "__main__":
>     for c in blocks((100,100),(99,10)):
>         print c
>     for c in blocks((10,10)):
>         print c
>
> HIH,
> Alex
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tim.hochberg at cox.net Sat Jun 10 16:18:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 10 Jun 2006 13:18:05 -0700 Subject: [Numpy-discussion] fromiter Message-ID: <448B28FD.7040309@cox.net>

I finally got around to cleaning up and checking in fromiter. As Travis suggested, this version does not require that you specify count.
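A quick usage sketch with modern numpy (itertools.chain.from_iterable appeared in Python 2.6, after this thread, as the stdlib answer to flattening an iterable of tuples):

```python
import itertools
import numpy as np

# a 1d array straight from a generator, no intermediate list
squares = np.fromiter((i * i for i in range(5)), dtype=np.int64)

# passing count pre-allocates the exact size (the faster path)
squares2 = np.fromiter((i * i for i in range(5)), dtype=np.int64, count=5)

# an iterable of tuples: flatten, then reshape
tuples = ((x, x + 1, x + 2) for x in range(4))
arr = np.fromiter(itertools.chain.from_iterable(tuples),
                  dtype=int).reshape(-1, 3)
```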
From the docstring:

    fromiter(...)
    fromiter(iterable, dtype, count=-1) returns a new 1d array
    initialized from iterable. If count is nonnegative, the new array
    will have count elements, otherwise its size is determined by the
    generator.

If count is specified, it allocates the full array ahead of time. If it is not, it periodically reallocates space for the array, allocating 50% extra space each time and reallocating back to the final size at the end (to give realloc a chance to reclaim any extra space).

Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as "array(list(iterable),dtype=dtype)". Omitting count slows things down by about 15%; still much faster than using "array(list(...))". It also is going to chew up more memory than if you include count, at least temporarily, but still should typically use much less than the "array(list(...))" approach.

-tim

From strawman at astraw.com Sat Jun 10 17:23:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 10 Jun 2006 14:23:16 -0700 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: <448A437A.1030903@cox.net> References: <44888811.1080703@ee.byu.edu> <20060608172951.3c8e0886@arbutus.physics.mcmaster.ca> <44899BF9.9000002@cox.net> <4489B933.4080003@ee.byu.edu> <4489D186.3020605@cox.net> <4489D49E.3090401@ee.byu.edu> <4489FAE8.7060605@cox.net> <4489FE44.4090804@astraw.com> <20060609193057.54a1d113@arbutus.physics.mcmaster.ca> <448A437A.1030903@cox.net> Message-ID: <448B3844.3060101@astraw.com>

OK, here's another (semi-crazy) idea: __array_struct__ is the interface. ctypes lets us use it in "pure" Python. We provide a "reference implementation" so that newbies don't get segfaults.

From cookedm at physics.mcmaster.ca Sat Jun 10 17:42:03 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Sat, 10 Jun 2006 17:42:03 -0400 Subject: [Numpy-discussion] fromiter In-Reply-To: <448B28FD.7040309@cox.net> References: <448B28FD.7040309@cox.net> Message-ID: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote: > > I finally got around to cleaning up and checking in fromiter. As Travis > suggested, this version does not require that you specify count. From > the docstring: > > fromiter(...) > fromiter(iterable, dtype, count=-1) returns a new 1d array > initialized from iterable. If count is nonegative, the new array > will have count elements, otherwise it's size is determined by the > generator. > > If count is specified, it allocates the full array ahead of time. If it > is not, it periodically reallocates space for the array, allocating 50% > extra space each time and reallocating back to the final size at the end > (to give realloc a chance to reclaim any extra space). > > Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as > "array(list(iterable),dtype=dtype)". Omitting count slows things down by > about 15%; still much faster than using "array(list(...))". It also is > going to chew up more memory than if you include count, at least > temporarily, but still should typically use much less than the > "array(list(...))" approach. Can this be integrated into array() so that array(iterable, dtype=dtype) does the expected thing? Can you try to find the length of the iterable, with PySequence_Size() on the original object? This gets a bit iffy, as that might not be correct (but it could be used as a hint). What about iterables that return, say, tuples? Maybe add a shape argument, so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements from iterable that can be turned into arrays of shape (3,)? That could replace count, too. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From robert.kern at gmail.com Sat Jun 10 18:05:18 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Jun 2006 17:05:18 -0500 Subject: [Numpy-discussion] fromiter In-Reply-To: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> References: <448B28FD.7040309@cox.net> <20060610214203.GA24355@arbutus.physics.mcmaster.ca> Message-ID:

David M. Cooke wrote:
> Can this be integrated into array() so that array(iterable, dtype=dtype) > does the expected thing?

That was rejected early on because array() is so incredibly overloaded as it is.

http://article.gmane.org/gmane.comp.python.numeric.general/5756

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From josh8912 at yahoo.com Sat Jun 10 18:15:07 2006 From: josh8912 at yahoo.com (JJ) Date: Sat, 10 Jun 2006 15:15:07 -0700 (PDT) Subject: [Numpy-discussion] speed of numpy vs matlab on dot product Message-ID: <20060610221507.30644.qmail@web51701.mail.yahoo.com>

Hello. I am a new user to scipy, thinking about crossing over from Matlab. I have a new AMD 64 machine and just installed fedora 5 and scipy. It is a dual boot machine with windows XP. I did a small test to compare the speed of matlab (in 32 bit windows, Matlab student v14) to the speed of scipy (in fedora, 64 bit). I generated two random matrices of 10,000 by 2,000 elements and then took their dot product. The scipy code was:

    python
    import numpy
    import scipy
    a = scipy.random.normal(0,1,[10000,2000])
    b = scipy.random.normal(0,1,[10000,2000])
    c = scipy.dot(a,scipy.transpose(b))

I timed the last line of the code and compared it to the equivalent code in Matlab. The results were that Matlab took 3.3 minutes and scipy took 11.5 minutes. That's a factor of three.
I am surprised by the difference and am wondering if there is anything I can do to speed up scipy. I installed scipy, blas, atlas, numpy and lapack from source, just as the instructions on the scipy web site suggested (or as close to the instructions as I could). The only thing odd was that when installing numpy, I received messages that the atlas libraries could not be found. However, it did locate the lapack libraries. I don't know why it could not find the atlas libraries, as I told it exactly where to find them. It did not give the message that it was using the slower default libraries. I also tried compiling after an export ATLAS = statement, but that did not make a difference. Wherever I could, I compiled it specifically for the 64 bit machine. I used the current gcc compiler. The ATLAS notes suggested that the speed problems with the 2.9+ compilers had been fixed.

Any ideas on where to look for a speedup? If the problem is that it could not locate the atlas libraries, how might I assure that numpy finds the atlas libraries? I can recompile and send along the results if it would help. Thanks.

John

PS. I first sent this to the scipy mailing list, but it didn't seem to make it there.

__________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

From tim.hochberg at cox.net Sat Jun 10 18:28:55 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sat, 10 Jun 2006 15:28:55 -0700 Subject: [Numpy-discussion] fromiter In-Reply-To: <20060610214203.GA24355@arbutus.physics.mcmaster.ca> References: <448B28FD.7040309@cox.net> <20060610214203.GA24355@arbutus.physics.mcmaster.ca> Message-ID: <448B47A7.30308@cox.net>

David M. Cooke wrote:
>On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote:
>
>>I finally got around to cleaning up and checking in fromiter. As Travis >>suggested, this version does not require that you specify count. From >>the docstring:
>>
>> fromiter(...)
>> fromiter(iterable, dtype, count=-1) returns a new 1d array >> initialized from iterable. If count is nonegative, the new array >> will have count elements, otherwise it's size is determined by the >> generator.
>>
>>If count is specified, it allocates the full array ahead of time. If it >>is not, it periodically reallocates space for the array, allocating 50% >>extra space each time and reallocating back to the final size at the end >>(to give realloc a chance to reclaim any extra space).
>>
>>Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as >>"array(list(iterable),dtype=dtype)". Omitting count slows things down by >>about 15%; still much faster than using "array(list(...))". It also is >>going to chew up more memory than if you include count, at least >>temporarily, but still should typically use much less than the >>"array(list(...))" approach.
>>
>
>Can this be integrated into array() so that array(iterable, dtype=dtype) >does the expected thing?
>
It gets a little sticky since the expected thing is probably that array([iterable, iterable, iterable], dtype=dtype) work and produce an array of shape [3, N]. That looks like that would be hard to do efficiently.

>Can you try to find the length of the iterable, with PySequence_Size() on >the original object? This gets a bit iffy, as that might not be correct >(but it could be used as a hint).
>
The way the code is set up, a hint could be made use of with little additional complexity. Allegedly, some objects in 2.5 will grow __length_hint__, which could be made use of as well. I'm not very motivated to mess with this at the moment though as the benefit is relatively small.

>What about iterables that return, say, tuples? Maybe add a shape argument, >so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements >from iterable that can be turned into arrays of shape (3,)? That could >replace count, too.
I expect that this would double (or more) the complexity of the current code (which is nice and simple at present). I'm inclined to leave it as it is and advocate solutions of this type:

    >>> import numpy
    >>> tupleiter = ((x, x+1, x+2) for x in range(10)) # Just for example
    >>> def flatten(x):
    ...     for y in x:
    ...         for z in y:
    ...             yield z
    >>> numpy.fromiter(flatten(tupleiter), int).reshape(-1, 3)
    array([[ 0,  1,  2],
           [ 1,  2,  3],
           [ 2,  3,  4],
           [ 3,  4,  5],
           [ 4,  5,  6],
           [ 5,  6,  7],
           [ 6,  7,  8],
           [ 7,  8,  9],
           [ 8,  9, 10],
           [ 9, 10, 11]])

[As a side note, I'm quite surprised that there isn't a way to flatten stuff already in itertools, but if there is, I can't find it].

-tim

From robert.kern at gmail.com Sat Jun 10 18:31:49 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Jun 2006 17:31:49 -0500 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID:

JJ wrote:
> Any ideas on where to look for a speedup? If the > problem is that it could not locate the atlas > ibraries, how might I assure that numpy finds the > atlas libraries. I can recompile and send along the > results if it would help.

Run ldd(1) on the file lapack_lite.so . It should show you what dynamic libraries it is linked against.

> PS. I first sent this to the scipy mailing list, but > it didnt seem to make it there.

That's okay. This is actually the right place. All of the functions you used are numpy functions, not scipy.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had
-- Umberto Eco From charlesr.harris at gmail.com Sun Jun 11 00:47:28 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 10 Jun 2006 22:47:28 -0600 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: Hmm, I just tried this and it took so long on my machine (Athlon64, fc5_x86_64), that I ctrl-c'd out of it. Running ldd on lapack_lite.so shows libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaaace2000) libc.so.6 => /lib64/libc.so.6 (0x00002aaaaadfa000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) So apparently the Atlas library present in /usr/lib64/atlas was not linked in. I built numpy from the svn repository two days ago. I expect JJ's version is linked with atlas 'cause mine sure didn't run in 11 seconds. Chuck On 6/10/06, Robert Kern wrote: > > JJ wrote: > > Any ideas on where to look for a speedup? If the > > problem is that it could not locate the atlas > > ibraries, how might I assure that numpy finds the > > atlas libraries. I can recompile and send along the > > results if it would help. > > Run ldd(1) on the file lapack_lite.so . It should show you what dynamic > libraries it is linked against. > > > PS. I first sent this to the scipy mailing list, but > > it didnt seem to make it there. > > That's okay. This is actually the right place. All of the functions you > used are > numpy functions, not scipy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From rob at hooft.net Sun Jun 11 04:31:26 2006 From: rob at hooft.net (Rob Hooft) Date: Sun, 11 Jun 2006 10:31:26 +0200 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: <448BD4DE.4020002@hooft.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

JJ wrote:
> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))

Hi, My experience with the old Numeric tells me that the first thing I would try to speed this up is to copy the transposed b into a fresh array. It might be that the memory access in dot is very inefficient due to the transposed (and hence large-stride) array. Of course I may be completely wrong.

Rob

- -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEi9TdH7J/Cv8rb3QRAgXYAJ9EcJtfUeX3H0ZWf22AapOvC3dgTwCgtF5r
QW6si4kqTjCvifCfTc/ShC0=
=uuUY
-----END PGP SIGNATURE-----

From pjssilva at ime.usp.br Sun Jun 11 19:03:18 2006 From: pjssilva at ime.usp.br (Paulo Jose da Silva e Silva) Date: Sun, 11 Jun 2006 20:03:18 -0300 Subject: [Numpy-discussion] speed of numpy vs matlab on dot product In-Reply-To: <20060610221507.30644.qmail@web51701.mail.yahoo.com> References: <20060610221507.30644.qmail@web51701.mail.yahoo.com> Message-ID: <1150066998.31143.5.camel@localhost.localdomain>

Em Sáb, 2006-06-10 às 15:15 -0700, JJ escreveu:
> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))

Interestingly enough, I may have found "the reason". I am using only numpy (as I don't have scipy compiled and it is not necessary to the code above).
The problem is probably memory consumption. Let me explain. After creating a, ipython reports 160Mb of memory usage. After creating b, 330Mb. But when I run the last line, the memory footprint jumps to 1.2 GB! This is four times the original memory consumption. On my computer the result is swapping and the calculation would take forever. Why is the memory usage getting so high?

Paulo

Obs: As a side note: if you decrease the matrix sizes (like for example 2000x2000), numpy and matlab spend basically the same time. If the transpose imposes some penalty for numpy, it imposes the same penalty for matlab (version 6.5, R13).

From nwagner at iam.uni-stuttgart.de Mon Jun 12 03:02:54 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 12 Jun 2006 09:02:54 +0200 Subject: [Numpy-discussion] ImportError: cannot import name inverse_fft Message-ID: <448D119E.9090709@iam.uni-stuttgart.de>
Message-ID: <448D20A6.4040303@itc.it> Hi, I've tried to send a message twice to scipy-user since friday without success (messages don't come back to me but I don't receive any message from scipy-user too and they don't appear in archives). Note that since friday there are no new messages from that list. Is scipy-user working? TIA Emanuele From bblais at bryant.edu Mon Jun 12 08:56:51 2006 From: bblais at bryant.edu (Brian Blais) Date: Mon, 12 Jun 2006 08:56:51 -0400 Subject: [Numpy-discussion] scipy.io.loadmat can't handle structs from octave Message-ID: <448D6493.8050909@bryant.edu> Hello, I am trying to load some .mat files in python, that were saved with octave. I get some weird things with strings, and structs fail altogether. Am I doing something wrong? Python 2.4, Scipy '0.4.9.1906', numpy 0.9.8, octave 2.1.71, running Linux. thanks, Brian Blais here is what I tried: Numbers are ok: ========OCTAVE========== >> a=rand(4) a = 0.617860 0.884195 0.032998 0.217922 0.207970 0.753992 0.333966 0.905661 0.048432 0.290895 0.353919 0.958442 0.697213 0.616851 0.426595 0.371364 >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [13]:d=io.loadmat('pythonfile.mat') In [14]:d Out[14]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:23:54 UTC', '__version__': '1.0', 'a': array([[ 0.61785957, 0.88419484, 0.03299807, 0.21792207], [ 0.20796989, 0.75399171, 0.33396634, 0.90566095], [ 0.04843219, 0.29089527, 0.35391921, 0.95844178], [ 0.69721313, 0.61685075, 0.42659485, 0.37136358]])} Strings are weird (turns to all 1's) ========OCTAVE========== >> a='hello' a = hello >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [15]:d=io.loadmat('pythonfile.mat') In [16]:d Out[16]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:13 UTC', '__version__': '1.0', 'a': '11111'} Cell arrays are fine (except for strings): ========OCTAVE========== >> a={5 [1,2,3] 'this'} a = { [1,1] = 5 [1,2] = 1 2 3 [1,3] 
= this } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [17]:d=io.loadmat('pythonfile.mat') In [18]:d Out[18]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:51 UTC', '__version__': '1.0', 'a': array([5.0, [ 1. 2. 3.], 1111], dtype=object)} Structs crash: ========OCTAVE========== >> clear a >> a.hello=5 a = { hello = 5 } >> a.this=[1,2,3] a = { hello = 5 this = 1 2 3 } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [19]:d=io.loadmat('pythonfile.mat') --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/bblais/octave/work/mouse/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 751 if not (0 in test_vals): # MATLAB version 5 format 752 fid.rewind() --> 753 thisdict = _loadv5(fid,basename) 754 if dict is not None: 755 dict.update(thisdict) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename) 688 try: 689 var = var + 1 --> 690 el, varname = _get_element(fid) 691 if varname is None: 692 varname = '%s_%04d' % (basename,var) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid) 676 677 # handle miMatrix type --> 678 el, name = _parse_mimatrix(fid,numbytes) 679 return el, name 680 /usr/lib/python2.4/site-packages/scipy/io/mio.py in _parse_mimatrix(fid, bytes) 597 result[i].__dict__[element] = val 598 result = squeeze(transpose(reshape(result,tupdims))) --> 599 if rank(result)==0: result = result.item() 600 601 # object is like a structure with but with a class name AttributeError: mat_struct instance has no attribute 'item' -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais From a.u.r.e.l.i.a.n at gmx.net Mon Jun 12 10:03:06 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon, 12 Jun 2006 16:03:06 +0200 Subject: [Numpy-discussion] [OT] scipy-user not working? 
In-Reply-To: <448D20A6.4040303@itc.it> References: <448D20A6.4040303@itc.it> Message-ID: <200606121603.06328.a.u.r.e.l.i.a.n@gmx.net> > I've tried to send a message twice to scipy-user since friday without > success (messages don't come back to me but I don't receive any message > from scipy-user too and they don't appear in archives). > Note that since friday there are no new messages from that list. > > Is scipy-user working? Hm, scipy-dev seems to be offline as well. Johannes
From hetland at tamu.edu Thu Jun 8 16:42:04 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu, 8 Jun 2006 15:42:04 -0500 Subject: [Numpy-discussion] eig hangs In-Reply-To: <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> <20060608162326.2c3bec0b@arbutus.physics.mcmaster.ca> Message-ID: <5764DB7F-1C87-4798-88E6-55F0CC612D01@tamu.edu> On Jun 8, 2006, at 3:23 PM, David M. Cooke wrote: > > Lapack_lite probably doesn't get much testing from the developers, > because we > probably all have optimized versions of blas and lapack. This is precisely my suspicion... I tried a variety of random, square matrices (like rand(10, 10), rand(100, 100), etc.), and none work. And it just hangs forever, so there is really no output to debug. It is the most recent svn version of numpy (which BTW, works on my Mac, with AltiVec there..) -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL:
From oliphant at ee.byu.edu Mon Jun 12 16:17:48 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 12 Jun 2006 14:17:48 -0600 Subject: [Numpy-discussion] eig hangs In-Reply-To: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> Message-ID: <448DCBEC.7010407@ee.byu.edu> Robert Hetland wrote: >I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that >lapack_lite would take over. For the moment, I am not concerned >about speed -- I just want something that will work with small >matrices. I installed numpy, and it passes all of the tests OK, but >it hangs when doing eig: > >u, v = linalg.eig(rand(10,10)) ># ....lots of nothing.... > >Do you *need* the linear algebra libraries for eig? BTW, inverse >seems to work fine. > >-Rob > > > From ticket #5 >Greg Landrum pointed out that it may be a gcc 4.0 related >problem and proposed a workaround -- to add the option '-ffloat-store' to CFLAGS. Works for me ! > > > Are you using gcc 4.0? -Travis
From haase at msg.ucsf.edu Mon Jun 12 17:32:12 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 12 Jun 2006 14:32:12 -0700 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) Message-ID: <200606121432.12896.haase@msg.ucsf.edu> Hi, I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to do a non linear minimization. It uses the "old" Numeric module. But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down to >>> Numeric.exp(-760.) Traceback (most recent call last): File "", line 1, in ? OverflowError: math range error From numarray I'm used to getting this: >>> na.exp(-760) 0.0 Mostly I'm confused because my code worked before I upgraded to version 24.2. Thanks for any hints on how I could revive my code...
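Sebastian's underflow problem is exactly where numpy's configurable error handling differs from Numeric; a short sketch of the behaviour (assuming a numpy recent enough to provide seterr, which the old Numeric never had):

```python
import numpy as np

# Underflow quietly returns 0.0 under numpy's default error settings.
tiny = np.exp(-760.0)

# Overflow returns inf by default, but numpy can be told to raise,
# which is roughly what the old Numeric did unconditionally.
old_settings = np.seterr(over='raise')
try:
    np.exp(np.float64(760.0))
    raised = False
except FloatingPointError:
    raised = True
np.seterr(**old_settings)  # restore the previous error state
```

With over='raise' numpy mimics Numeric's hard failure on overflow, while underflow still quietly gives 0.0.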
-Sebastian Haase
From ndarray at mac.com Mon Jun 12 18:15:15 2006 From: ndarray at mac.com (Sasha) Date: Mon, 12 Jun 2006 18:15:15 -0400 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: <200606121432.12896.haase@msg.ucsf.edu> References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: I don't know about numarray, but the difference between Numeric and the python math module stems from the fact that the math module ignores errno set by the C library and only checks for infinity. Numeric relies on errno exclusively; numpy ignores errors by default: >>> import numpy,math,Numeric >>> numpy.exp(-760) 0.0 >>> math.exp(-760) 0.0 >>> Numeric.exp(-760) Traceback (most recent call last): File "", line 1, in ? OverflowError: math range error >>> numpy.exp(760) inf >>> math.exp(760) Traceback (most recent call last): File "", line 1, in ? OverflowError: math range error >>> Numeric.exp(760) Traceback (most recent call last): File "", line 1, in ? OverflowError: math range error I would say it's a bug in Numeric, so you are out of luck. Unfortunately, even MA.exp(-760) does not work, but this is easy to fix: >>> exp = MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-100,100)) >>> exp(-760).filled() 0 You would need to replace -100,100 with the bounds appropriate for your system. On 6/12/06, Sebastian Haase wrote: > Hi, > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to > do a non linear minimization. It uses the "old" Numeric module. > But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down > to > >>> Numeric.exp(-760.) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > > >From numarray I'm used to getting this: > >>> na.exp(-760) > 0.0 > > Mostly I'm confused because my code worked before I upgraded to version 24.2. > > Thanks for any hints on how I could revive my code...
> -Sebastian Haase > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Mon Jun 12 18:19:19 2006 From: ndarray at mac.com (Sasha) Date: Mon, 12 Jun 2006 18:19:19 -0400 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: BTW, here is the relevant explanation from mathmodule.c: /* ANSI C generally requires libm functions to set ERANGE * on overflow, but also generally *allows* them to set * ERANGE on underflow too. There's no consistency about * the latter across platforms. * Alas, C99 never requires that errno be set. * Here we suppress the underflow errors (libm functions * should return a zero on underflow, and +- HUGE_VAL on * overflow, so testing the result for zero suffices to * distinguish the cases). */ On 6/12/06, Sasha wrote: > I don't know about numarray, but the difference between Numeric and > python math module stems from the fact that the math module ignores > errno set by C library and only checks for infinity. Numeric relies > on errno exclusively, numpy ignores errors by default: > > >>> import numpy,math,Numeric > >>> numpy.exp(-760) > 0.0 > >>> math.exp(-760) > 0.0 > >>> Numeric.exp(-760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > >>> numpy.exp(760) > inf > >>> math.exp(760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > >>> Numeric.exp(760) > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > > I would say it's a bug in Numeric, so you are out of luck. 
> > Unfortunately, even MA.exp(-760) does not work, but this is easy to fix: > > >>> exp = MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-100,100)) > >>> exp(-760).filled() > 0 > > You would need to replace -100,100 with the bounds appropriate for your system. > > > > > On 6/12/06, Sebastian Haase wrote: > > Hi, > > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to > > do a non linear minimization. It uses the "old" Numeric module. > > But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down > > to > > >>> Numeric.exp(-760.) > > Traceback (most recent call last): > > File "", line 1, in ? > > OverflowError: math range error > > > > >From numarray I'm used to getting this: > > >>> na.exp(-760) > > 0.0 > > > > Mostly I'm confused because my code worked before I upgraded to version 24.2. > > > > Thanks for any hints on how I could revive my code... > > -Sebastian Haase
From elcorto at gmx.net Mon Jun 12 18:19:54 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 00:19:54 +0200 Subject: [Numpy-discussion] svn build fails Message-ID: <448DE88A.7010308@gmx.net> The latest svn build fails. ==================================================================================== elcorto at ramrod:~/install/python/scipy/svn$ make build cd numpy; python setup.py build Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg' No module named __svn_version__ F2PY Version 2_2607 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not find in /usr/local/lib libraries mkl,vml,guide not find in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not find in /usr/local/lib libraries ptf77blas,ptcblas,atlas not find in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not find in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not find in /usr/local/lib libraries f77blas,cblas,atlas not find in /usr/lib/atlas FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib -lf77blas -lcblas -latlas -o _configtest _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 84, in ? 
setup_package() File "setup.py", line 77, in setup_package configuration=configuration ) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 140, in setup config = configuration() File "setup.py", line 43, in configuration config.add_subpackage('numpy') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./numpy/setup.py", line 9, in configuration config.add_subpackage('core') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 207, in configuration blas_info = get_info('blas_opt',0) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 397, in get_info self.calc_info() File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info atlas_version = get_atlas_version(**version_info) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1085, in get_atlas_version library_dirs=config.get('library_dirs', []), File 
"/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", line 121, in get_output return exitcode, output UnboundLocalError: local variable 'exitcode' referenced before assignment ==================================================================================== I removed the old /build dir and even did a complete fresh checkout but it still fails to build. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Mon Jun 12 18:29:47 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:29:47 -0400 Subject: [Numpy-discussion] svn build fails In-Reply-To: <448DE88A.7010308@gmx.net> References: <448DE88A.7010308@gmx.net> Message-ID: <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 00:19:54 +0200 Steve Schmerler wrote: > The latest svn build fails. > > [snip] > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1224, in calc_info > atlas_version = get_atlas_version(**version_info) > File > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1085, in get_atlas_version > library_dirs=config.get('library_dirs', []), > File > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", > line 121, in get_output > return exitcode, output > UnboundLocalError: local variable 'exitcode' referenced before assignment > ==================================================================================== > > I removed the old /build dir and even did a complete fresh checkout but > it still fails to build. > > cheers, > steve > Sorry about that; I noticed and fixed it last night, but forgot to check it in. It should work now. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Mon Jun 12 18:33:44 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:33:44 -0400 Subject: [Numpy-discussion] ImportError: cannot import name inverse_fft In-Reply-To: <448D119E.9090709@iam.uni-stuttgart.de> References: <448D119E.9090709@iam.uni-stuttgart.de> Message-ID: <20060612183344.6d345a1f@arbutus.physics.mcmaster.ca> On Mon, 12 Jun 2006 09:02:54 +0200 Nils Wagner wrote: > matplotlib data path /usr/lib64/python2.4/site-packages/matplotlib/mpl-data > $HOME=/home/nwagner > loaded rc file /home/nwagner/matplotlibrc > matplotlib version 0.87.3 > verbose.level helpful > interactive is False > platform is linux2 > numerix numpy 0.9.9.2603 > Traceback (most recent call last): > File "cascade.py", line 3, in ? > from pylab import plot, show, xlim, ylim, subplot, xlabel, ylabel, > title, legend,savefig,clf,scatter > File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ? > from matplotlib.pylab import * > File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line > 198, in ? > import mlab #so I can override hist, psd, etc... > File "/usr/lib64/python2.4/site-packages/matplotlib/mlab.py", line 74, > in ? > from numerix.fft import fft, inverse_fft > ImportError: cannot import name inverse_fft It's a bug in matplotlib: it should use ifft for numpy. We cleaned up the namespace a while back to not have two names for things. (Admittedly, I'm not sure why we went with the short names instead of the self-descriptive long ones. It's in the archives somewhere.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From elcorto at gmx.net Mon Jun 12 18:42:06 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 00:42:06 +0200 Subject: [Numpy-discussion] svn build fails In-Reply-To: <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> Message-ID: <448DEDBE.4050100@gmx.net> David M. Cooke wrote: > > Sorry about that; I noticed and fixed it last night, but forgot to check it > in. It should work now. > Thanks for the fast answer. Now there's another one .... :) [...] /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib -lf77blas -lcblas -latlas -o _configtest _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status _configtest.o: In function `main': /home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 84, in ? 
setup_package() File "setup.py", line 77, in setup_package configuration=configuration ) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 140, in setup config = configuration() File "setup.py", line 43, in configuration config.add_subpackage('numpy') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./numpy/setup.py", line 9, in configuration config.add_subpackage('core') File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 207, in configuration blas_info = get_info('blas_opt',0) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 397, in get_info self.calc_info() File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info atlas_version = get_atlas_version(**version_info) File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1097, in get_atlas_version log.info('Command: %s',' '.join(cmd)) NameError: global name 'cmd' is not defined -- Random number generation is the art 
of producing pure gibberish as quickly as possible.
From cookedm at physics.mcmaster.ca Mon Jun 12 18:56:43 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 12 Jun 2006 18:56:43 -0400 Subject: [Numpy-discussion] svn build fails In-Reply-To: <448DEDBE.4050100@gmx.net> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> <448DEDBE.4050100@gmx.net> Message-ID: <20060612185643.215e4358@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 00:42:06 +0200 Steve Schmerler wrote: > David M. Cooke wrote: > > > > > Sorry about that; I noticed and fixed it last night, but forgot to check > > it in. It should work now. > > > [...] > Thanks for the fast answer. > Now there's another one .... :) > > > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", > line 1097, in get_atlas_version > log.info('Command: %s',' '.join(cmd)) > NameError: global name 'cmd' is not defined Hmm, I had that one too :-) [Then I went and did some cutting up of system_info, which is why I just haven't checked the fixes in]. Should work *now* :D -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
From hetland at tamu.edu Mon Jun 12 19:03:36 2006 From: hetland at tamu.edu (Robert Hetland) Date: Mon, 12 Jun 2006 18:03:36 -0500 Subject: [Numpy-discussion] eig hangs In-Reply-To: <448DCBEC.7010407@ee.byu.edu> References: <00DF001D-0E0A-45B9-AF7E-E1253EF752B6@tamu.edu> <448DCBEC.7010407@ee.byu.edu> Message-ID: <8EB299FE-4A0C-4C97-9E8C-721EA2776A32@tamu.edu> On Jun 12, 2006, at 3:17 PM, Travis Oliphant wrote: > Robert Hetland wrote: > >> I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that >> lapack_lite would take over. For the moment, I am not concerned >> about speed -- I just want something that will work with small >> matrices.
I installed numpy, and it passes all of the tests OK, but >> it hangs when doing eig: >> >> u, v = linalg.eig(rand(10,10)) >> # ....lots of nothing.... >> >> Do you *need* the linear algebra libraries for eig? BTW, inverse >> seems to work fine. >> >> -Rob >> > From ticket #5 > >> Greg Landrum pointed out that it may be a gcc 4.0 related >> problem and proposed a workaround -- to add the option '-ffloat-store' to CFLAGS. Works for me ! >> > Are you using gcc 4.0? Well, gcc 4.1, I had forgotten to check that. The install is on a relatively new version of Fedora, FC5. (all the older redhats I have use gcc3..). $ uname -a Linux ---.----.--- 2.6.15-1.2054_FC5smp #1 SMP Tue Mar 14 16:05:46 EST 2006 i686 i686 i386 GNU/Linux $ gcc --version gcc (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3) That seems like the most likely cause of the bug. I will try with -ffloat-store, and with gcc 3.2.3, and let you know if I have the same problems. -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu
From myeates at jpl.nasa.gov Mon Jun 12 19:55:05 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Mon, 12 Jun 2006 16:55:05 -0700 Subject: [Numpy-discussion] dealing with large arrays Message-ID: <448DFED9.6000902@jpl.nasa.gov> Hi I typically deal with very large arrays that don't fit in memory. How does Numpy handle this? In Matlab I can use memory mapping but I would prefer caching as is done in The Gimp. Any pointers appreciated.
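Eric Firing's reply below points at numpy's memmap constructor for exactly this situation; a minimal sketch of the usual pattern (the file name and shape here are made up for illustration):

```python
import numpy as np

# Create a 1000x1000 float64 array backed by a scratch file on disk
# ('big.dat' is an invented name); only the touched pages need RAM.
a = np.memmap('big.dat', dtype='float64', mode='w+', shape=(1000, 1000))
a[0, :] = 1.0   # assignments write through to the file
a.flush()       # push buffered changes out to disk

# Later (or from another process) map the same file read-only,
# without loading the whole array into memory.
b = np.memmap('big.dat', dtype='float64', mode='r', shape=(1000, 1000))
```

mode='w+' creates (or overwrites) the backing file zero-filled; 'r+' opens an existing file for in-place updates.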
Mathew
From elcorto at gmx.net Mon Jun 12 20:00:38 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 13 Jun 2006 02:00:38 +0200 Subject: [Numpy-discussion] svn build fails In-Reply-To: <20060612185643.215e4358@arbutus.physics.mcmaster.ca> References: <448DE88A.7010308@gmx.net> <20060612182947.42bf5a00@arbutus.physics.mcmaster.ca> <448DEDBE.4050100@gmx.net> <20060612185643.215e4358@arbutus.physics.mcmaster.ca> Message-ID: <448E0026.6070508@gmx.net> David M. Cooke wrote: > > Hmm, I had that one too :-) [Then I went and did some cutting up of system_info, > which is why I just haven't checked the fixes in]. > > Should work *now* :D > That does it. Many thanks! cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible.
From stephenemslie at gmail.com Mon Jun 12 20:41:17 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Tue, 13 Jun 2006 01:41:17 +0100 Subject: [Numpy-discussion] finding connected areas? Message-ID: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> I have used adaptive thresholding to turn an image into a binary image so that I can locate a particularly large bright spot. However, now that I have the binary image I need to be able to group connected cells together and determine their relative sizes. Matlab has a function called bwlabel (http://tinyurl.com/fcnvd) that labels connected objects in a matrix. That seems like a good way to start, and I'm sure there is a way for me to do something similar in numpy, but how? Thanks Stephen Emslie
From efiring at hawaii.edu Mon Jun 12 21:07:45 2006 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 12 Jun 2006 15:07:45 -1000 Subject: [Numpy-discussion] dealing with large arrays In-Reply-To: <448DFED9.6000902@jpl.nasa.gov> References: <448DFED9.6000902@jpl.nasa.gov> Message-ID: <448E0FE1.5020901@hawaii.edu> Mathew Yeates wrote: > Hi > I typically deal with very large arrays that don't fit in memory. How > does Numpy handle this?
In Matlab I can use memory mapping but I would > prefer caching as is done in The Gimp. Numpy has a memmap array constructor; as it happens, I was using it for the first time today, and it is working fine. There doesn't seem to be a docstring, but in ipython if you do import numpy as N N.memmap?? you will see the python wrapper which will show you the arguments to the constructor. You can also look in Travis's book, but the arguments have changed slightly since the version of the book that I have. Eric
From charlesr.harris at gmail.com Tue Jun 13 01:17:44 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 12 Jun 2006 23:17:44 -0600 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> Message-ID: Stephen, I don't know of a data structure in numpy or scipy that does this. To do this myself I use a modified union/find (equivalence relation) algorithm interfaced to python using boost/python. The same algorithm is also useful for connecting points on the basis of equivalence relations other than distance. If there is much interest I could make a standard C version sometime, but the interface needs some thinking about. Chuck On 6/12/06, stephen emslie wrote: > > I have used adaptive thresholding to turn an image into a binary image > so that I can locate a particularly large bright spot. However, now > that I have the binary image I need to be able to group connected > cells together and determine their relative sizes.
Matlab has a > function called bwlabel (http://tinyurl.com/fcnvd) that labels > connected objects in a matrix. That seems like a good way to start, > and I'm sure there is a way for me to do something similar in numpy, > but how? > > Thanks > Stephen Emslie -------------- next part -------------- An HTML attachment was scrubbed... URL:
From alexandre.fayolle at logilab.fr Tue Jun 13 03:31:54 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Tue, 13 Jun 2006 09:31:54 +0200 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> Message-ID: <20060613073153.GB8675@crater.logilab.fr> On Tue, Jun 13, 2006 at 01:41:17AM +0100, stephen emslie wrote: > I have used adaptive thresholding to turn an image into a binary image > so that I can locate a particularly large bright spot. However, now > that I have the binary image I need to be able to group connected > cells together and determine their relative sizes. Matlab has a > function called bwlabel (http://tinyurl.com/fcnvd) that labels > connected objects in a matrix. That seems like a good way to start, > and I'm sure there is a way for me to do something similar in numpy, > but how? You will get this in numarray.nd_image, the function is called label. It is also available in recent versions of scipy, in module scipy.ndimage. -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science
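The scipy.ndimage route Alexandre points to can be sketched on a toy image (the array below is made up; label's default connectivity in 2-D is 4-connected):

```python
import numpy as np
from scipy import ndimage

# A toy binary image with two separate bright regions.
binary = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 0, 1],
                   [0, 0, 0, 1]])

# label() gives each connected region its own integer, like Matlab's bwlabel.
labels, num_regions = ndimage.label(binary)

# Relative sizes: count the pixels carrying each label value.
sizes = ndimage.sum(binary, labels, index=range(1, num_regions + 1))
```

Here num_regions comes out as 2, and sizes holds the pixel count of each labelled region.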
Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL:
From konrad.hinsen at laposte.net Tue Jun 13 09:00:02 2006 From: konrad.hinsen at laposte.net (konrad.hinsen at laposte.net) Date: Tue, 13 Jun 2006 15:00:02 +0200 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <448A0AFA.1090700@ee.byu.edu> References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: On 10.06.2006, at 01:57, Travis Oliphant wrote: > You may be interested to note that I just added the RNG interface > to numpy for backwards compatibility. It can be accessed and used > by replacing > > import RNG > > with > > import numpy.random.oldrng as RNG Thanks, that will facilitate the transition. Is this just a compatible interface, or actually the same algorithm as in the original RNG module? Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Centre de Biophysique Moléculaire, CNRS Orléans Synchrotron Soleil - Division Expériences Saint Aubin - BP 48 91192 Gif sur Yvette Cedex, France Tel.
+33-1 69 35 97 15 E-Mail: hinsen at cnrs-orleans.fr ---------------------------------------------------------------------
From robert.kern at gmail.com Tue Jun 13 12:48:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Jun 2006 11:48:24 -0500 Subject: [Numpy-discussion] Any Numeric or numarray users on this list?
In-Reply-To: References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: konrad.hinsen at laposte.net wrote: > On 10.06.2006, at 01:57, Travis Oliphant wrote: > >>You may be interested to note that I just added the RNG interface >>to numpy for backwards compatibility. It can be accessed and used >>by replacing >> >>import RNG >> >>with >> >>import numpy.random.oldrng as RNG > > Thanks, that will facilitate the transition. Is this just a > compatible interface, or actually the same algorithm as in the > original RNG module? Just the interface. Do you actually want to use the old algorithm, or are you primarily concerned about matching old test results? The old algorithms are not very good, so I really don't want to put them back into numpy. It should be easy to roll out a separate RNG module that simply uses numpy instead of Numeric, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From oliphant.travis at ieee.org Tue Jun 13 12:52:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 10:52:07 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> <448A0AFA.1090700@ee.byu.edu> Message-ID: <448EED37.2010009@ieee.org> konrad.hinsen at laposte.net wrote: > On 10.06.2006, at 01:57, Travis Oliphant wrote: > > >> You may be interested to note that I just added the RNG interface >> to numpy for backwards compatibility. It can be accessed and used >> by replacing >> >> import RNG >> >> with >> >> import numpy.random.oldrng as RNG >> > > Thanks, that will facilitate the transition. Is this just a > compatible interface, or actually the same algorithm as in the > original RNG module? > If I understand your question correctly, then it's just a compatibility interface.
I'm not sure which part of the original algorithm you are referring to. The random numbers are generated by the Mersenne Twister algorithm in mtrand. Each generator in numpy.random.oldrng creates a new RandomState for generation using that algorithm. The density function calculations were taken from RNG, but the random-number generators themselves are methods of the RandomState. -Travis
From tim.hochberg at cox.net Tue Jun 13 12:56:37 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 09:56:37 -0700 Subject: [Numpy-discussion] Back to numexpr Message-ID: <448EEE45.1040001@cox.net> I've finally got around to looking at numexpr again. Specifically, I'm looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing the two versions. Let me go through his list of enhancements and comment (my comments are dedented): - Addition of a boolean type. This allows better array copying times for large arrays (lightweight computations are typically bounded by memory bandwidth). Adding this to numexpr looks like a no-brainer. The behaviour of booleans is different from that of integers, so in addition to being more memory efficient, this enables boolean &, |, ~, etc. to work properly. - Enhanced performance for strided and unaligned data, especially for lightweight computations (e.g. 'a>10'). With this and the addition of the boolean type, we can get up to 2x better times than previous versions. Also, most of the supported computations go faster than with numpy or numarray, even the simplest one. Francesc, if you're out there, can you briefly describe what this support consists of? It's been long enough since I was messing with this that it's going to take me a while to untangle NumExpr_run, where I expect it's lurking, so any hints would be appreciated. - Addition of ~, & and | operators (a la numarray.where) Sounds good. - Support for both numpy and numarray (use the flag --force-numarray in setup.py).
At first glance this looks like it doesn't make things too messy, so I'm in favor of incorporating this. - Added a new benchmark for testing boolean expressions and strided/unaligned arrays: boolean_timing.py Benchmarks are always good. Things that I want to address in the future: - Add tests on strided and unaligned data (currently only tested manually) Yep! Tests are good. - Add types for int16, int64 (in 32-bit platforms), float32, complex64 (simple prec.) I have some specific ideas about how this should be accomplished. Basically, I don't think we want to support every type in the same way, since this is going to make the case statement blow up to an enormous size. This may slow things down and at a minimum it will make things less comprehensible. My thinking is that we only add casts for the extra types and do the computations at high precision. Thus adding two int16 numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then an OP_CAST_fF. The details are left as an exercise to the reader ;-). So, adding int16, float32, complex64 should only require the addition of 6 casting opcodes plus appropriate modifications to the compiler. For large arrays, this should have most of the benefits of giving each type its own opcode, since the memory bandwidth is still small, while keeping the interpreter relatively simple. Unfortunately, int64 doesn't fit under this scheme; is it used enough to matter? I hate to pile a whole pile of new opcodes on for something that's rarely used. Regards, -tim
From tim.hochberg at cox.net Tue Jun 13 13:03:54 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 10:03:54 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <448EEFFA.6000606@cox.net> Oops! Having just done an svn update, I now see that David appears to have done most of this about a week ago... I'm behind the times.
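Tim's casting scheme (add only cast opcodes for the narrow types and do the arithmetic at one high precision) can be mimicked outside the VM; the OP_* names in the comments come from his message, while the numpy code is only an assumed illustration of the idea, not numexpr's implementation:

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)
b = np.array([10, 20, 30], dtype=np.int16)

# OP_CAST_Ff equivalents: widen each int16 operand to the working type
af = a.astype(np.float64)
bf = b.astype(np.float64)

# OP_ADD_FFF equivalent: the only adder needed operates on the wide type
cf = af + bf

# OP_CAST_fF equivalent: narrow the result back to the requested dtype
c = cf.astype(np.int16)
```

Only the two widening casts and the final narrowing cast are new per type; the adder itself stays a single float opcode.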
-tim Tim Hochberg wrote: >I've finally got around to looking at numexpr again. Specifically, I'm >looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing >the two versions. Let me go through his list of enhancements and comment >(my comments are dedented): > > - Addition of a boolean type. This allows better array copying times > for large arrays (lightweight computations ara typically bounded by > memory bandwidth). > >Adding this to numexpr looks like a no brainer. Behaviour of booleans >are different than integers, so in addition to being more memory >efficient, this enables boolean &, |, ~, etc to work properly. > > - Enhanced performance for strided and unaligned data, specially for > lightweigth computations (e.g. 'a>10'). With this and the addition of > the boolean type, we can get up to 2x better times than previous > versions. Also, most of the supported computations goes faster than > with numpy or numarray, even the simplest one. > >Francesc, if you're out there, can you briefly describe what this >support consists of? It's been long enough since I was messing with this >that it's going to take me a while to untangle NumExpr_run, where I >expect it's lurking, so any hints would be appreciated. > > - Addition of ~, & and | operators (a la numarray.where) > >Sounds good. > > - Support for both numpy and numarray (use the flag --force-numarray > in setup.py). > >At first glance this looks like it doesn't make things to messy, so I'm >in favor of incorporating this. > > - Added a new benchmark for testing boolean expressions and > strided/unaligned arrays: boolean_timing.py > >Benchmarks are always good. > > Things that I want to address in the future: > > - Add tests on strided and unaligned data (currently only tested > manually) > >Yep! Tests are good. > > - Add types for int16, int64 (in 32-bit platforms), float32, > complex64 (simple prec.) > >I have some specific ideas about how this should be accomplished. 
>Basically, I don't think we want to support every type in the same way, >since this is going to make the case statement blow up to an enormous >size. This may slow things down and at a minimum it will make things >less comprehensible. My thinking is that we only add casts for the extra >types and do the computations at high precision. Thus adding two int16 >numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then >a OP_CAST_fF. The details are left as an excercise to the reader ;-). >So, adding int16, float32, complex64 should only require the addition of >6 casting opcodes plus appropriate modifications to the compiler. > >For large arrays, this should have most of the benfits of giving each >type it's own opcode, since the memory bandwidth is still small, while >keeping the interpreter relatively simple. > >Unfortunately, int64 doesn't fit under this scheme; is it used enough to >matter? I hate pile a whole pile of new opcodes on for something that's >rarely used. > > >Regards, > >-tim > > > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From cookedm at physics.mcmaster.ca Tue Jun 13 13:08:38 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 13 Jun 2006 13:08:38 -0400 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <20060613170838.GA28737@arbutus.physics.mcmaster.ca> On Tue, Jun 13, 2006 at 09:56:37AM -0700, Tim Hochberg wrote: > > I've finally got around to looking at numexpr again. Specifically, I'm > looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing > the two versions. Let me go through his list of enhancements and comment > (my comments are dedented): > > - Addition of a boolean type. 
This allows better array copying times > for large arrays (lightweight computations ara typically bounded by > memory bandwidth). > > Adding this to numexpr looks like a no brainer. Behaviour of booleans > are different than integers, so in addition to being more memory > efficient, this enables boolean &, |, ~, etc to work properly. > > - Enhanced performance for strided and unaligned data, specially for > lightweigth computations (e.g. 'a>10'). With this and the addition of > the boolean type, we can get up to 2x better times than previous > versions. Also, most of the supported computations goes faster than > with numpy or numarray, even the simplest one. > > Francesc, if you're out there, can you briefly describe what this > support consists of? It's been long enough since I was messing with this > that it's going to take me a while to untangle NumExpr_run, where I > expect it's lurking, so any hints would be appreciated. > > - Addition of ~, & and | operators (a la numarray.where) > > Sounds good. All the above is checked in already :-) > - Support for both numpy and numarray (use the flag --force-numarray > in setup.py). > > At first glance this looks like it doesn't make things to messy, so I'm > in favor of incorporating this. ... although I had ripped this all out. I'd rather have a numpy-compatible numarray layer (at the C level, this means defining macros like PyArray_DATA) than different code for each. > - Added a new benchmark for testing boolean expressions and > strided/unaligned arrays: boolean_timing.py > > Benchmarks are always good. Haven't checked that in yet. > > Things that I want to address in the future: > > - Add tests on strided and unaligned data (currently only tested > manually) > > Yep! Tests are good. > > - Add types for int16, int64 (in 32-bit platforms), float32, > complex64 (simple prec.) > > I have some specific ideas about how this should be accomplished. 
> Basically, I don't think we want to support every type in the same way, > since this is going to make the case statement blow up to an enormous > size. This may slow things down and at a minimum it will make things > less comprehensible. I've been thinking how to generate the virtual machine programmatically, specifically I've been looking at vmgen from gforth again. I've got other half-formed ideas too (separate scalar machine for reductions?) that I'm working on too. But yes, the # of types does make things harder to redo :-) > My thinking is that we only add casts for the extra > types and do the computations at high precision. Thus adding two int16 > numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then > a OP_CAST_fF. The details are left as an excercise to the reader ;-). > So, adding int16, float32, complex64 should only require the addition of > 6 casting opcodes plus appropriate modifications to the compiler. My thinking too. > For large arrays, this should have most of the benfits of giving each > type it's own opcode, since the memory bandwidth is still small, while > keeping the interpreter relatively simple. > > Unfortunately, int64 doesn't fit under this scheme; is it used enough to > matter? I hate pile a whole pile of new opcodes on for something that's > rarely used. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Tue Jun 13 13:27:40 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 10:27:40 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <20060613170838.GA28737@arbutus.physics.mcmaster.ca> References: <448EEE45.1040001@cox.net> <20060613170838.GA28737@arbutus.physics.mcmaster.ca> Message-ID: <448EF58C.4030706@cox.net> David M. 
Cooke wrote: >On Tue, Jun 13, 2006 at 09:56:37AM -0700, Tim Hochberg wrote: > > >>[SNIP] >> >> > >All the above is checked in already :-) > > So I see. Oops! > > >> - Support for both numpy and numarray (use the flag --force-numarray >> in setup.py). >> >>At first glance this looks like it doesn't make things too messy, so I'm >>in favor of incorporating this. >> >> > >... although I had ripped this all out. I'd rather have a numpy-compatible >numarray layer (at the C level, this means defining macros like PyArray_DATA) >than different code for each. > > Okey dokey. I don't feel strongly about this either way other than I'd rather have one version of numexpr around rather than two almost identical versions. Whatever makes that work would make me happy. > > >> - Added a new benchmark for testing boolean expressions and >> strided/unaligned arrays: boolean_timing.py >> >>Benchmarks are always good. >> >> > >Haven't checked that in yet. > > > >> Things that I want to address in the future: >> >> - Add tests on strided and unaligned data (currently only tested >> manually) >> >>Yep! Tests are good. >> >> - Add types for int16, int64 (in 32-bit platforms), float32, >> complex64 (simple prec.) >> >>I have some specific ideas about how this should be accomplished. >>Basically, I don't think we want to support every type in the same way, >>since this is going to make the case statement blow up to an enormous >>size. This may slow things down and at a minimum it will make things >>less comprehensible. >> >> > >I've been thinking how to generate the virtual machine programmatically, >specifically I've been looking at vmgen from gforth again. I've got other >half-formed ideas too (separate scalar machine for reductions?) that I'm >working on. > >But yes, the # of types does make things harder to redo :-) > > > >>My thinking is that we only add casts for the extra >>types and do the computations at high precision.
Thus adding two int16 >>numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then >>an OP_CAST_fF. The details are left as an exercise to the reader ;-). >>So, adding int16, float32, complex64 should only require the addition of >>6 casting opcodes plus appropriate modifications to the compiler. >> >> > >My thinking too. > > Yeah! Although I'm not in a hurry on this part. I'm remembering now that the next item on my agenda was to work on supporting broadcasting. I don't exactly know how this is going to work, although I recall having something of a plan at some point. Perhaps the easiest way to start out is to just test the shapes of the input arrays for compatibility. If they're compatible and don't require broadcasting, proceed as now. If they are incompatible, raise a "ValueError: shape mismatch: objects cannot be broadcast to a single shape" as numpy does. If they are compatible, but require broadcasting, raise a NotImplementedError. This should be relatively easy and makes numexpr considerably more congruent with numpy. I'm hoping that, while working on that, my plan will pop back into my head ;-) [SNIP] Regards, -tim
From faltet at carabos.com Tue Jun 13 13:47:35 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 19:47:35 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448EEE45.1040001@cox.net> References: <448EEE45.1040001@cox.net> Message-ID: <200606131947.37848.faltet@carabos.com> Hey, numexpr seems to be back, wow! :-D On Tuesday 13 June 2006 18:56, Tim Hochberg wrote: > I've finally got around to looking at numexpr again. Specifically, I'm > looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing > the two versions. Let me go through his list of enhancements and comment > (my comments are dedented): Well, as David already said, he committed most of my additions some days ago :-) > - Enhanced performance for strided and unaligned data, especially for > lightweight computations (e.g.
'a>10'). With this and the addition of > the boolean type, we can get up to 2x better times than previous > versions. Also, most of the supported computations go faster than > with numpy or numarray, even the simplest one. > > Francesc, if you're out there, can you briefly describe what this > support consists of? It's been long enough since I was messing with this > that it's going to take me a while to untangle NumExpr_run, where I > expect it's lurking, so any hints would be appreciated. This is easy. When dealing with strided or unaligned vectors, instead of copying them completely to well-behaved arrays, they are copied only when the virtual machine needs the appropriate blocks. With this, there is no need to write the well-behaved array back into main memory, which can bring an important bottleneck, especially when dealing with large arrays. This allows a better use of the processor caches because data is cached and used only when the VM needs it. Also, I see that David has added support for byteswapped arrays, which is great! > - Support for both numpy and numarray (use the flag --force-numarray > in setup.py). > > At first glance this looks like it doesn't make things too messy, so I'm > in favor of incorporating this. Yeah. I think you are right. It's only that we need this for our own things :) > - Add types for int16, int64 (in 32-bit platforms), float32, > complex64 (simple prec.) > > I have some specific ideas about how this should be accomplished. > Basically, I don't think we want to support every type in the same way, > since this is going to make the case statement blow up to an enormous > size. This may slow things down and at a minimum it will make things > less comprehensible. My thinking is that we only add casts for the extra > types and do the computations at high precision. Thus adding two int16 > numbers compiles to two OP_CAST_Ffs followed by an OP_ADD_FFF, and then > an OP_CAST_fF. The details are left as an exercise to the reader ;-).
> So, adding int16, float32, complex64 should only require the addition of > 6 casting opcodes plus appropriate modifications to the compiler. > > For large arrays, this should have most of the benefits of giving each > type its own opcode, since the memory bandwidth is still small, while > keeping the interpreter relatively simple. Yes, I like the idea as well. > Unfortunately, int64 doesn't fit under this scheme; is it used enough to > matter? I hate to pile a whole pile of new opcodes on for something that's > rarely used. Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for some users (especially in 32-bit platforms), is a type with the same rights as the others and we would like to give support for it in numexpr. In fact, Ivan Vilata already has implemented this support in our local copy of numexpr, so perhaps (I say perhaps because we are in the middle of a big project now and are a bit scarce of time resources) we can provide the patch against the latest version of David for your consideration. With this we can solve the problem with int64 support in 32-bit platforms (although admittedly, the VM gets a bit more complicated, I really think that this is worth the effort). Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V. Enjoy Data "-"
From faltet at carabos.com Tue Jun 13 14:21:43 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 20:21:43 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606131947.37848.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> Message-ID: <200606132021.44730.faltet@carabos.com> On Tuesday 13 June 2006 19:47, Francesc Altet wrote: > > - Support for both numpy and numarray (use the flag --force-numarray > > in setup.py). > > > > At first glance this looks like it doesn't make things too messy, so I'm > > in favor of incorporating this. > > Yeah. I think you are right.
It's only that we need this for our own things :) Oops! small correction here. I thought that you were saying that you were *not* in favour of supporting numarray as well, but you clearly were. Sorry about the misunderstanding. Anyway, if David's idea of providing a thin numpy-compatible numarray layer is easy to implement, then great. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Càrabos Coop. V. Enjoy Data "-"
From tim.hochberg at cox.net Tue Jun 13 14:46:15 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 11:46:15 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606131947.37848.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> Message-ID: <448F07F7.8030903@cox.net> Francesc Altet wrote: >Hey, numexpr seems to be back, wow! :-D > >On Tuesday 13 June 2006 18:56, Tim Hochberg wrote: > > >>I've finally got around to looking at numexpr again. Specifically, I'm >>looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing >>the two versions. Let me go through his list of enhancements and comment >>(my comments are dedented): >> >> > >Well, as David already said, he committed most of my additions some days >ago :-) > > > >> - Enhanced performance for strided and unaligned data, especially for >> lightweight computations (e.g. 'a>10'). With this and the addition of >> the boolean type, we can get up to 2x better times than previous >> versions. Also, most of the supported computations go faster than >> with numpy or numarray, even the simplest one. >> >>Francesc, if you're out there, can you briefly describe what this >>support consists of? It's been long enough since I was messing with this >>that it's going to take me a while to untangle NumExpr_run, where I >>expect it's lurking, so any hints would be appreciated. >> >> > >This is easy.
When dealing with strided or unaligned vectors, instead of >copying them completely to well-behaved arrays, they are copied only when the >virtual machine needs the appropriate blocks. With this, there is no need to >write the well-behaved array back into main memory, which can bring an >important bottleneck, especially when dealing with large arrays. This allows a >better use of the processor caches because data is cached and used only when >the VM needs it. Also, I see that David has added support for byteswapped >arrays, which is great! > > I'm looking at this now. I imagine it will become clear eventually. I've clearly forgotten some stuff over the last few months. Sigh. First I need to get it to compile here. It seems that a few GCCisms have crept back in. [SNIP] >>rarely used. >> >> > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >some users (especially in 32-bit platforms), is a type with the same rights >as the others and we would like to give support for it in numexpr. In fact, >Ivan Vilata already has implemented this support in our local copy of numexpr, >so perhaps (I say perhaps because we are in the middle of a big project now >and are a bit scarce of time resources) we can provide the patch against the >latest version of David for your consideration. With this we can solve the >problem with int64 support in 32-bit platforms (although admittedly, the VM >gets a bit more complicated, I really think that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at some point and slow everything down. To be honest I have no idea at what point that is likely to happen, but I know they worry about it with the Python interpreter mainloop. Also, it becomes much, much slower to compile past a certain number of case statements under VC7, not sure why. That's mostly my problem though. One idea that might be worth trying for int64 is to special case them using functions.
That is using OP_FUNC_LL and OP_FUNC_LLL and some casting opcodes. This could support int64 with relatively few new opcodes. There's obviously some extra overhead introduced here by the function call. How much this matters is probably a function of how well the compiler / hardware supports int64 to begin with. That brings up another point. We probably don't want to have casting opcodes from/to everything. Given that there are 8 types on the table now, if we support every casting opcode we're going to have 56 (8 types times 7 possible targets) opcodes just for casting. I imagine what we'll have to do is write a cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra step in these cases for keeping the number of casting opcodes under control. Once again, int64 is problematic since you lose precision casting to int. I guess in this case you could get by with being able to cast back and forth to float and int. No need to cast directly to booleans, etc., as two-stage casting should suffice for this. -tim
From faltet at carabos.com Tue Jun 13 15:30:41 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 13 Jun 2006 21:30:41 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448F07F7.8030903@cox.net> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> Message-ID: <200606132130.43128.faltet@carabos.com> On Tuesday 13 June 2006 20:46, Tim Hochberg wrote: > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for > >some users (especially in 32-bit platforms), is a type with the same rights > >as the others and we would like to give support for it in numexpr. In > > fact, Ivan Vilata already has implemented this support in our local copy > > of numexpr, so perhaps (I say perhaps because we are in the middle of a > > big project now and are a bit scarce of time resources) we can provide > > the patch against the latest version of David for your consideration.
> > With this we can solve the problem with int64 support in 32-bit platforms > > (although addmittedly, the VM gets a bit more complicated, I really think > > that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at > some point and slow everything down. To be honest I have no idea at what > point that is likely to happen, but I know they worry about it with the > Python interpreter mainloop. That's true. I didn't think about this :-/ > Also, it becomes much, much slower to > compile past a certain number of case statements under VC7, not sure > why. That's mostly my problem though. No, this is a general problem (I'd say even more so in GCC, because the optimizer runs so slooooow). However, this should only affect us poor developers, not users, and besides, we should find a solution for int64 on 32-bit platforms. > One idea that might be worth trying for int64 is to special case them > using functions. That is using OP_FUNC_LL and OP_FUNC_LLL and some > casting opcodes. This could support int64 with relatively few new > opcodes. There's obviously some exta overhead introduced here by the > function call. How much this matters is probably a function of how well > the compiler / hardware supports int64 to begin with. Mmm, in my experience int64 operations are reasonably well supported by modern 32-bit processors (IIRC they normally take about twice the time of int32 ops). The problem with using a long for representing ints in numexpr is that we have the duality of being represented differently on 32/64-bit platforms, and that could be a headache in the long term (int64 support on 32-bit platforms is only one issue, but there should be more). IMHO, it is much better to assign the role for ints in numexpr to a unique datatype, and this should be int64, for the sake of wide int64 support, but also for future (and present!) 64-bit processors.
The problem would be that operations with 32-bit ints in 32-bit processors can be slowed down by a factor of 2x (or more, because there is a casting now), but in exchange, we have fully portable code and int64 support. If we decide to go this way, we have two options here: keep the VM simple and advertise that int32 arithmetic in numexpr on 32-bit platforms will be sub-optimal, or, as we already have done, add the proper machinery to support both integer types separately (at the expense of making the VM more complex). Or perhaps David can come up with a better solution (vmgen from gforth? no idea what this is, but the name sounds sexy;-) > > That brings up another point. We probably don't want to have casting > opcodes from/to everything. Given that there are 8 types on the table > now, if we support every casting opcode we're going to have 56(?) > opcodes just for casting. I imagine what we'll have to do is write a > cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra > step in these cases for keeping the number of casting opcodes under > control. Once again, int64 is problematic since you lose precision > casting to int. I guess in this case you could get by with being able to > cast back and forth to float and int. No need to cast directly to > booleans, etc as two stage casting should suffice for this. Well, we already thought about this. Not only can you not safely cast an int64 to an int32 without losing precision, but what is worse, you can't even cast it to any other commonly available datatype (casting to a float64 will also lose precision). And, although you can afford losing precision when dealing with floating data in some scenarios (though certainly not with a general-purpose library like numexpr tries to be), it is by all means unacceptable to lose 'precision' in ints. So, to my mind, the only solution is completely avoiding casting int64 to any type. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
??Enjoy Data "-" From cookedm at physics.mcmaster.ca Tue Jun 13 15:44:13 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 13 Jun 2006 15:44:13 -0400 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606132130.43128.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <200606132130.43128.faltet@carabos.com> Message-ID: <20060613154413.42563300@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 21:30:41 +0200 Francesc Altet wrote: > A Dimarts 13 Juny 2006 20:46, Tim Hochberg va escriure: > > >Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre > > >for some users (specially in 32-bit platforms), is a type with the same > > >rights than the others and we would like to give support for it in > > >numexpr. In > > > fact, Ivan Vilata already has implemented this suport in our local copy > > > of numexpr, so perhaps (I say perhaps because we are in the middle of a > > > big project now and are a bit scarce of time resources) we can provide > > > the patch against the latest version of David for your consideration. > > > With this we can solve the problem with int64 support in 32-bit > > > platforms (although addmittedly, the VM gets a bit more complicated, I > > > really think that this is worth the effort) > > > > In addition to complexity, I worry that we'll overflow the code cache at > > some point and slow everything down. To be honest I have no idea at what > > point that is likely to happen, but I know they worry about it with the > > Python interpreter mainloop. > > That's true. I didn't think about this :-/ > > > Also, it becomes much, much slower to > > compile past a certain number of case statements under VC7, not sure > > why. That's mostly my problem though. > > No, this is a general problem (I'd say much more in GCC, because the > optimizer runs so slooooow). 
However, this should only affect to poor > developers, not users and besides, we should find a solution for int64 in > 32-bit platforms. If I switch to vmgen, it can easily make two versions of the code: one using a case statement, and another direct-threaded version for GCC (which supports taking the address of a label, and doing a 'goto' to a variable). Won't solve the I-cache problem, though. And there's always subroutine threading (each opcode is a function, and the program is a list of function pointers). We won't know until we try :) > Or perhaps > David can come with a better solution (vmgen from gforth? no idea what this > is, but the name sounds sexy;-) The docs for it are at http://www.complang.tuwien.ac.at/anton/vmgen/html-docs/ -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From tim.hochberg at cox.net Tue Jun 13 15:49:45 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 13 Jun 2006 12:49:45 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <200606132130.43128.faltet@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <200606132130.43128.faltet@carabos.com> Message-ID: <448F16D9.6010704@cox.net> Francesc Altet wrote: >A Dimarts 13 Juny 2006 20:46, Tim Hochberg va escriure: > > >>>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>>some users (specially in 32-bit platforms), is a type with the same rights >>>than the others and we would like to give support for it in numexpr. In >>>fact, Ivan Vilata already has implemented this suport in our local copy >>>of numexpr, so perhaps (I say perhaps because we are in the middle of a >>>big project now and are a bit scarce of time resources) we can provide >>>the patch against the latest version of David for your consideration. 
>>>With this we can solve the problem with int64 support in 32-bit platforms >>>(although addmittedly, the VM gets a bit more complicated, I really think >>>that this is worth the effort) >>> >>> >>In addition to complexity, I worry that we'll overflow the code cache at >>some point and slow everything down. To be honest I have no idea at what >>point that is likely to happen, but I know they worry about it with the >>Python interpreter mainloop. >> >> > >That's true. I didn't think about this :-/ > > > >>Also, it becomes much, much slower to >>compile past a certain number of case statements under VC7, not sure >>why. That's mostly my problem though. >> >> > >No, this is a general problem (I'd say much more in GCC, because the optimizer >runs so slooooow). However, this should only affect to poor developers, not >users and besides, we should find a solution for int64 in 32-bit platforms. > > Yeah. This is just me whining. Under VC7, there is a very sudden change when adding more cases where compile times go from seconds to minutes. I think we're already past that now anyway, so slowing that down more isn't going to hurt me. Overflowing the cache is the real thing I worry about. >>One idea that might be worth trying for int64 is to special case them >>using functions. That is using OP_FUNC_LL and OP_FUNC_LLL and some >>casting opcodes. This could support int64 with relatively few new >>opcodes. There's obviously some exta overhead introduced here by the >>function call. How much this matters is probably a function of how well >>the compiler / hardware supports int64 to begin with. >> >> > >Mmm, in my experience int64 operations are reasonable well supported by modern >32-bit processors (IIRC they normally take twice of the time than int32 ops). 
> >The problem with using a long for representing ints in numexpr is that we have >the duality of being represented differently in 32/64-bit platforms and that >could a headache in the long term (int64 support in 32-bit platforms is only >one issue, but there should be more). IMHO, it is much better to assign the >role for ints in numexpr to a unique datatype, and this should be int64, for >the sake of wide int64 support, but also for future (and present!) 64-bit >processors. The problem would be that operations with 32-bit ints in 32-bit >processors can be slowed-down by a factor 2x (or more, because there is a >casting now), but in exchange, whe have full portable code and int64 support. > > This certainly makes things simpler. I think that this would be fine with me since I mostly use float and complex, so the speed issue wouldn't hit me much. But that's 'cause I'm selfish that way ;-) >In case we consider entering this way, we have two options here: keep VM >simple and advertise that int32 arithmetic in numexpr in 32-bit platforms >will be sub-optimal, or, as we already have done, add the proper machinery to >support both integer separately (at the expense of making the VM more >complex). Or perhaps David can come with a better solution (vmgen from >gforth? no idea what this is, but the name sounds sexy;-) > > Yeah! >>That brings up another point. We probably don't want to have casting >>opcodes from/to everything. Given that there are 8 types on the table >>now, if we support every casting opcode we're going to have 56(?) >>opcodes just for casting. I imagine what we'll have to do is write a >>cast from int16 to float as OP_CAST_Ii; OP_CAST_FI; trading an extra >>step in these cases for keeping the number of casting opcodes under >>control. Once again, int64 is problematic since you lose precision >>casting to int. I guess in this case you could get by with being able to >>cast back and forth to float and int. 
No need to cast directly to >>booleans, etc as two stage casting should suffice for this. >> >> > >Well, we already thought about this. Not only you can't safely cast an int64 >to an int32 without loosing precistion, but what is worse, you can't even >cast it to any other commonly available datatype (casting to a float64 will >also loose precision). And, although you can afford loosing precision when >dealing with floating data in some scenarios (but not certainly with a >general-purpose library like numexpr tries to be), it is by any means >unacceptable loosing 'precision' in ints. So, to my mind, the only solution >is completely avoiding casting int64 to any type. > > I forgot that the various OP_CAST_xy opcodes only do safe casting. That makes the number of potential casts much less, so I guess this is not as big a deal as I thought. I'm still not sure, for instance, if we need boolean to int16, int32, int64, float32, float64, complex64 and complex128. It wouldn't kill us, but it's probably overkill. -tim From myeates at jpl.nasa.gov Tue Jun 13 20:45:49 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 13 Jun 2006 17:45:49 -0700 Subject: [Numpy-discussion] build problems on Solaris Message-ID: <448F5C3D.1080200@jpl.nasa.gov> Heres the problem.... The function get_flags_linker_so in numpy/distutils/fcompiler/gnu.py is not called anywhere. Because of this, g2c is not added as a library and -mimpure-text is not set. This causes the "s_wsfe unresolved" problem. Anybody know how to fix this? Mathew From oliphant.travis at ieee.org Tue Jun 13 21:41:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 19:41:36 -0600 Subject: [Numpy-discussion] Array Interface updated Message-ID: <448F6950.70600@ieee.org> I've updated the description of the array interface (array protocol). 
The web-page is http://numeric.scipy.org/array_interface.html Basically, the Python-side interface has been compressed to the single attribute __array_interface__. There is still the __array_struct__ attribute which now has a descr member to the structure returned (but the ARR_HAS_DESCR flag must be set or it must be ignored). NumPy has been updated so that the old Python-side attributes are now spelled: __array_<name>__ --> __array_interface__['<name>'] -Travis From myeates at jpl.nasa.gov Tue Jun 13 22:21:35 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Tue, 13 Jun 2006 19:21:35 -0700 Subject: [Numpy-discussion] Atlas missing dgeev Message-ID: <448F72AF.4080506@jpl.nasa.gov> I finally got things linked with libg2c but now I get import linalg -> failed: ld.so.1: python: fatal: relocation error: file /u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: symbol dgeev_: referenced symbol not found I looked all through my ATLAS source and I see no dgeev anywhere. No file of that name and no references to that function. Anybody know what's up with this? Mathew From robert.kern at gmail.com Tue Jun 13 23:22:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Jun 2006 22:22:00 -0500 Subject: [Numpy-discussion] Atlas missing dgeev In-Reply-To: <448F72AF.4080506@jpl.nasa.gov> References: <448F72AF.4080506@jpl.nasa.gov> Message-ID: Mathew Yeates wrote: > I finally got things linked with libg2c but now I get > import linalg -> failed: ld.so.1: python: fatal: relocation error: file > /u/fuego0/myeates/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: > symbol dgeev_: referenced symbol not found > > I looked all through my ATLAS source and I see no dgeev anywhere. No > file of that name and no references to that function. Anybody know what's > up with this? ATLAS itself only provides optimized versions of some LAPACK routines. You need to combine it with the full LAPACK to get full coverage.
Please read the ATLAS FAQ for instructions: http://math-atlas.sourceforge.net/errata.html#completelp -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From martin.wiechert at gmx.de Wed Jun 14 05:14:17 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 Jun 2006 11:14:17 +0200 Subject: [Numpy-discussion] addressing a submatrix Message-ID: <200606141114.18202.martin.wiechert@gmx.de> Hi list, is there a concise way to address a subrectangle of a 2d array? So far I'm using A [I] [:, J] which is not pretty and more importantly only works for reading the subrectangle. Writing does *not* work. (Cf. session below.) Any help would be appreciated.
Thanks, Martin In [1]:a = zeros ((4,4)) In [2]:b = ones ((2,2)) In [3]:c = array ((1,2)) In [4]:a [c] [:, c] = b In [5]:a Out[5]: array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) In [6]:a [:, c] [c] = b In [7]:a Out[7]: array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) In [8]:a [c, c] = b In [9]:a Out[9]: array([[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) In [10]:a [c] [:, c] Out[10]: array([[1, 0], [0, 1]]) In [11]: From simon at arrowtheory.com Wed Jun 14 14:25:55 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 14 Jun 2006 19:25:55 +0100 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <20060614192555.55dae6de.simon@arrowtheory.com> On Wed, 14 Jun 2006 11:14:17 +0200 Martin Wiechert wrote: > > Hi list, > > is there a concise way to address a subrectangle of a 2d array? So far I'm > using > > A [I] [:, J] what about A[I,J] ? Simon. >>> import numpy >>> a=numpy.zer numpy.zeros numpy.zeros_like >>> a=numpy.zeros([4,4]) >>> a array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) >>> a[2:3,2:3]=1 >>> a array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) >>> a[1:3,1:3]=1 >>> a array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]) >>> -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From karol.langner at kn.pl Wed Jun 14 05:31:38 2006 From: karol.langner at kn.pl (Karol Langner) Date: Wed, 14 Jun 2006 11:31:38 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <200606141131.38247.karol.langner@kn.pl> On Wednesday 14 June 2006 11:14, Martin Wiechert wrote: > Hi list, > > is there a concise way to address a subrectangle of a 2d array? 
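An aside on why the writes above silently vanish: indexing with an integer sequence (fancy indexing) returns a copy, so the second step of A [I] [:, J] = b assigns into a temporary that is immediately discarded, while slice indexing returns a view that writes back. A minimal sketch (assuming numpy; names are illustrative):

```python
import numpy as np

A = np.zeros((4, 4), dtype=int)
I = np.array([1, 2])

# Fancy indexing returns a copy ...
sub = A[I]
sub[:] = 7        # ... so this write never reaches A.

# Slices return views, so writes propagate back to A:
A[1:3, 1:3] = 1
```

After this, A holds the 2x2 block of ones but no sevens, which is exactly the behaviour seen in the session above.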
So far I'm > using > > A [I] [:, J] > > which is not pretty and more importantly only works for reading the > subrectangle. Writing does *not* work. (Cf. session below.) > > Any help would be appreciated. > > Thanks, > Martin You can achieve this by using the "take" function twice, in this fashion: >>> a = numpy.zeros((5,5)) >>> for i in range(5): ... for j in range(5): ... a[i][j] = i+j ... >>> a array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) >>> print a.take.__doc__ a.take(indices, axis=None). Selects the elements in indices from array a along the given axis. >>> a.take((1,2,3),axis=0) array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]) >>> a.take((1,2,3),axis=0).take((2,3),axis=1) array([[3, 4], [4, 5], [5, 6]]) Cheers, Karol -- written by Karol Langner śro cze 14 11:27:33 CEST 2006 From Martin.Wiechert at mpimf-heidelberg.mpg.de Wed Jun 14 05:33:45 2006 From: Martin.Wiechert at mpimf-heidelberg.mpg.de (Martin Wiechert) Date: Wed, 14 Jun 2006 11:33:45 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <20060614192555.55dae6de.simon@arrowtheory.com> References: <200606141114.18202.martin.wiechert@gmx.de> <20060614192555.55dae6de.simon@arrowtheory.com> Message-ID: <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> Hi Simon, thanks for your reply. A [I, J] seems to only work if the indices are *strides* as in your example. I need fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't do what I want. As you can see from the example session I posted it does not address the whole rectangle IxJ but only the elements (I_1, J_1), (I_2, J_2). E.g., if I==J this is the diagonal of the submatrix, not the full submatrix. Martin On Wednesday 14 June 2006 20:25, Simon Burton wrote: > On Wed, 14 Jun 2006 11:14:17 +0200 > > Martin Wiechert wrote: > > Hi list, > > > > is there a concise way to address a subrectangle of a 2d array?
So far > > I'm using > > > > A [I] [:, J] > > what about A[I,J] ? > > Simon. > > >>> import numpy > >>> a=numpy.zer > > numpy.zeros numpy.zeros_like > > >>> a=numpy.zeros([4,4]) > >>> a > > array([[0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 0]]) > > >>> a[2:3,2:3]=1 > >>> a > > array([[0, 0, 0, 0], > [0, 0, 0, 0], > [0, 0, 1, 0], > [0, 0, 0, 0]]) > > >>> a[1:3,1:3]=1 > >>> a > > array([[0, 0, 0, 0], > [0, 1, 1, 0], > [0, 1, 1, 0], > [0, 0, 0, 0]]) From ivilata at carabos.com Wed Jun 14 05:42:31 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Wed, 14 Jun 2006 11:42:31 +0200 Subject: [Numpy-discussion] dealing with large arrays In-Reply-To: <448DFED9.6000902@jpl.nasa.gov> References: <448DFED9.6000902@jpl.nasa.gov> Message-ID: <448FDA07.5000702@carabos.com> En/na Mathew Yeates ha escrit:: > I typically deal with very large arrays that don't fit in memory. How > does Numpy handle this? In Matlab I can use memory mapping but I would > prefer caching as is done in The Gimp. Hi Mathew. If you need to store large arrays on disk, you may have a look at PyTables_. It will save you some headaches with the on-disk representation of your arrays (it uses the self-describing HDF5 format), it allows you to load specific slices of arrays, and it provides caching of data. The latest versions also support numpy. Hope that helps, .. _PyTables: http://www.pytables.org/ :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From karol.langner at kn.pl Wed Jun 14 05:50:33 2006 From: karol.langner at kn.pl (Karol Langner) Date: Wed, 14 Jun 2006 11:50:33 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <200606141114.18202.martin.wiechert@gmx.de> References: <200606141114.18202.martin.wiechert@gmx.de> Message-ID: <200606141150.33560.karol.langner@kn.pl> On Wednesday 14 June 2006 11:14, Martin Wiechert wrote: > is there a concise way to address a subrectangle of a 2d array? So far I'm > using > > A [I] [:, J] > > which is not pretty and more importantly only works for reading the > subrectangle. Writing does *not* work. (Cf. session below.) > > Any help would be appreciated. > > Thanks, > Martin You can also use A[m:n,r:s] to reference a subarray. For instance: >>> a = numpy.zeros((5,5)) >>> b = numpy.ones((3,3)) >>> a[1:4,1:4] = b >>> a array([[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]) Cheers, Karol -- written by Karol Langner śro cze 14 11:49:35 CEST 2006 From pau.gargallo at gmail.com Wed Jun 14 06:02:06 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 14 Jun 2006 12:02:06 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> References: <200606141114.18202.martin.wiechert@gmx.de> <20060614192555.55dae6de.simon@arrowtheory.com> <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> Message-ID: <6ef8f3380606140302r7f8778aep4a723a9964fe5e95@mail.gmail.com> On 6/14/06, Martin Wiechert wrote: > Hi Simon, > > thanks for your reply. > > A [I, J] > > seems to only work if the indices are *strides* as in your example. I need > fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] won't do > what I want.
As you can see from the example session I posted it does not > address the whole rectangle IxJ but only the elements (I_1, J_1), (I_2, J_2). > E.g., if I==J this is the diagonal of the submatrix, not the full submatrix. you can use A[ ix_(I,J) ] to do what you want. But, if you just want subrectangular regions then A[1:4,1:4] is enough. Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) ], but is the same as A[ ix_(arange(1,4), arange(1,4)) ]. hope this heps pau From ivilata at carabos.com Wed Jun 14 06:14:32 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Wed, 14 Jun 2006 12:14:32 +0200 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448F07F7.8030903@cox.net> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> Message-ID: <448FE188.3010602@carabos.com> En/na Tim Hochberg ha escrit:: > Francesc Altet wrote: > [...] >>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>some users (specially in 32-bit platforms), is a type with the same rights >>than the others and we would like to give support for it in numexpr. In fact, >>Ivan Vilata already has implemented this suport in our local copy of numexpr, >>so perhaps (I say perhaps because we are in the middle of a big project now >>and are a bit scarce of time resources) we can provide the patch against the >>latest version of David for your consideration. With this we can solve the >>problem with int64 support in 32-bit platforms (although addmittedly, the VM >>gets a bit more complicated, I really think that this is worth the effort) > > In addition to complexity, I worry that we'll overflow the code cache at > some point and slow everything down. To be honest I have no idea at what > point that is likely to happen, but I know they worry about it with the > Python interpreter mainloop. 
Also, it becomes much, much slower to > compile past a certain number of case statements under VC7, not sure > why. That's mostly my problem though. > [...] Hi! For your information, the addition of separate, predictably-sized int (int32) and long (int64) types to numexpr was roughly as complicated as the addition of boolean types, so maybe the increase of complexity isn't that important (but I recognise I don't know the effect on the final size of the VM). As soon as I have time (and a SVN version of numexpr which passes the tests ;) ) I will try to merge back the changes and send a patch to the list. Thanks for your patience! :) :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ C?rabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From tim.hochberg at cox.net Wed Jun 14 09:50:08 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed, 14 Jun 2006 06:50:08 -0700 Subject: [Numpy-discussion] Back to numexpr In-Reply-To: <448FE188.3010602@carabos.com> References: <448EEE45.1040001@cox.net> <200606131947.37848.faltet@carabos.com> <448F07F7.8030903@cox.net> <448FE188.3010602@carabos.com> Message-ID: <44901410.2090401@cox.net> Ivan Vilata i Balaguer wrote: >En/na Tim Hochberg ha escrit:: > > > >>Francesc Altet wrote: >>[...] >> >> >>>Uh, I'm afraid that yes. In PyTables, int64, while being a bit bizarre for >>>some users (specially in 32-bit platforms), is a type with the same rights >>>than the others and we would like to give support for it in numexpr. In fact, >>>Ivan Vilata already has implemented this suport in our local copy of numexpr, >>>so perhaps (I say perhaps because we are in the middle of a big project now >>>and are a bit scarce of time resources) we can provide the patch against the >>>latest version of David for your consideration. 
With this we can solve the >>>problem with int64 support in 32-bit platforms (although addmittedly, the VM >>>gets a bit more complicated, I really think that this is worth the effort) >>> >>> >>In addition to complexity, I worry that we'll overflow the code cache at >>some point and slow everything down. To be honest I have no idea at what >>point that is likely to happen, but I know they worry about it with the >>Python interpreter mainloop. Also, it becomes much, much slower to >>compile past a certain number of case statements under VC7, not sure >>why. That's mostly my problem though. >>[...] >> >> > >Hi! For your information, the addition of separate, predictably-sized >int (int32) and long (int64) types to numexpr was roughly as complicated >as the addition of boolean types, so maybe the increase of complexity >isn't that important (but I recognise I don't know the effect on the >final size of the VM). > > I didn't expect it to be any worse than booleans (I would imagine it's about the same). It's just that there's a point at which we are going to slow down the VM due to sheer size. I don't know where that point is, so I'm cautious. Booleans seem like they need to be supported directly in the interpreter, while only one each (the largest one) of ints, floats and complexes do. Booleans are different since they have different behaviour than integers, so they need a separate set of opcodes. For floats and complexes, the largest is also the most commonly used, so this works out well. For ints on the other hand, int32 is the most commonly used, but int64 is the largest, so the approach of using the largest is going to result in a speed hit for the most common integer case. Implementing both, as you've done, solves that, but as I say, I worry about making the interpreter core too big. I expect that you've timed things before and after the addition of int64 and not gotten a noticeable slowdown.
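The layout Tim describes — opcodes only for the widest member of each numeric kind, plus a separate boolean set because boolean semantics differ — can be mimicked with a toy dispatch table (pure Python and purely illustrative; the real numexpr VM is a C switch statement):

```python
# Hypothetical opcode table: only the widest of each kind gets
# arithmetic entries, while booleans get their own, because for
# booleans "add" means logical OR rather than arithmetic +.
OPCODES = {
    ("add", "int64"): lambda a, b: a + b,
    ("add", "float64"): lambda a, b: a + b,
    ("add", "complex128"): lambda a, b: a + b,
    ("add", "bool"): lambda a, b: a or b,
}

def run_op(name, kind, a, b):
    # Dict lookup standing in for the interpreter's switch dispatch.
    return OPCODES[(name, kind)](a, b)
```

Every extra (operation, type) pair is one more entry, which is precisely why supporting int32 and int64 side by side grows the interpreter core.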
That's good, although it doesn't entirely mean we're out of the woods, since I expect that more opcodes that we just need to add will show up, and at some point we may run into an opcode crunch. Or maybe I'm just being paranoid. >As soon as I have time (and a SVN version of numexpr which passes the >tests ;) ) I will try to merge back the changes and send a patch to the >list. Thanks for your patience! :) > > I look forward to seeing it. Now if only I can get svn numexpr to stop segfaulting under windows I'll be able to do something useful... -tim >:: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > From martin.wiechert at gmx.de Wed Jun 14 10:19:35 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 Jun 2006 16:19:35 +0200 Subject: [Numpy-discussion] addressing a submatrix In-Reply-To: <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> References: <200606141114.18202.martin.wiechert@gmx.de> <200606141133.45407.wiechert@mpimf-heidelberg.mpg.de> <6ef8f3380606140301v4e7914afjd2ba15cbca42524c@mail.gmail.com> Message-ID: <200606141619.36693.martin.wiechert@gmx.de> Thanks Pau, that's exactly what I was looking for. Martin On Wednesday 14 June 2006 12:01, you wrote: > On 6/14/06, Martin Wiechert wrote: > > Hi Simon, > > > > thanks for your reply. > > > > A [I, J] > > > > seems to only work if the indices are *strides* as in your example. I > > need fancy indices (like I = (1,3,4), J = (0,3,5)), and for them A [I, J] > > won't do what I want. As you can see from the example session I posted it > > does not address the whole rectangle IxJ but only the elements (I_1, > > J_1), (I_2, J_2). E.g., if I==J this is the diagonal of the submatrix, > > not the full submatrix. > > you can use A[ ix_(I,J) ] to do what you want. > > But, if you just want subrectangular regions then A[1:4,1:4] is enough.
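Pau's distinction between pairwise fancy indexing and the open mesh built by ix_ can be made concrete (a small sketch assuming numpy, not taken from the thread):

```python
import numpy as np

A = np.arange(16).reshape(4, 4)
I, J = [1, 3], [0, 2]

pairwise = A[I, J]        # only the elements (1, 0) and (3, 2)
block = A[np.ix_(I, J)]   # the full 2x2 subrectangle I x J

# Unlike A[I][:, J], the ix_ form indexes A itself rather than a
# temporary copy, so assignment into the subrectangle works too:
A[np.ix_(I, J)] = -1
```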
> Please note that A[1:4,1:4] is not the same as A[ arange(1,4), arange(1,4) > ], but is the same as A[ ix_(arange(1,4), arange(1,4)) ]. > > hope this helps > pau From chanley at stsci.edu Wed Jun 14 11:17:40 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 14 Jun 2006 11:17:40 -0400 (EDT) Subject: [Numpy-discussion] numpy.test() fails on Redhat Enterprise and Solaris Message-ID: <20060614111740.CJQ36789@comet.stsci.edu> The daily numpy build and tests I run have failed for revision 2617. Below is the error message I receive on my RHE 3 box: ====================================================================== FAIL: Check reading the nested fields of a nested array (1st level) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/sparty1/dev/site-packages/lib/python/numpy/core/tests/test_numerictypes.py", line 283, in check_nested1_acessors dtype='U2')) File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 139, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 215, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 207, in assert_array_compare assert cond, msg AssertionError: Arrays are not equal (mismatch 100.0%) x: array([u'NN', u'OO'], dtype=' Hi list, does anybody know why maximum.reduce (()) does not return -inf? Looks very natural to me and as a byproduct maximum.reduce would ignore nans, thereby removing the need for nanmax etc. The current convention gives >>> from numpy import * >>> maximum.reduce ((1,nan)) 1.0 >>> maximum.reduce ((nan, 1)) nan >>> maximum.reduce (()) Traceback (most recent call last): File "", line 1, in ?
ValueError: zero-size array to ufunc.reduce without identity >>> Cheers, Martin From ndarray at mac.com Wed Jun 14 12:39:23 2006 From: ndarray at mac.com (Sasha) Date: Wed, 14 Jun 2006 12:39:23 -0400 Subject: [Numpy-discussion] maximum.reduce and nans In-Reply-To: <200606141758.04222.martin.wiechert@gmx.de> References: <200606141758.04222.martin.wiechert@gmx.de> Message-ID: On 6/14/06, Martin Wiechert wrote: >... > does anybody know why > > maximum.reduce (()) > > does not return -inf? > Technically, because >>> maximum.identity is None True It is theoretically feasible to change maximum.identity to -inf, but that would be inconsistent with the default dtype being int. For example >>> add.identity, type(add.identity) (0, ) Another reason is that IEEE special values are not universally supported yet. I would suggest adding an 'initial' keyword to reduce. If this is done, the type of 'initial' may also supply the default for the 'dtype' argument of reduce that was added in numpy. Another suggestion in this area is to change the identity attribute of ufuncs from a scalar to a dtype:scalar dictionary. Finally, a bug report: >>> add.identity = None Traceback (most recent call last): File "", line 1, in ? SystemError: error return without exception set From emsellem at obs.univ-lyon1.fr Wed Jun 14 13:15:58 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 14 Jun 2006 19:15:58 +0200 Subject: [Numpy-discussion] installation problems: stupid question Message-ID: <4490444E.2070805@obs.univ-lyon1.fr> Hi, I just switched to Suse 10.1 (from Suse 10.0) and for some reason the newly installed modules do not go under /usr/lib/python2.4/site-packages/ as usual but under /usr/local/lib/python2.4/site-packages/ (the "local" is the difference). How can I go back to the normal setting? Thanks a lot for any input there.
Eric P.S.: I seem to then have problem with lapack_lite.so (undefined symbol: s_cat) and it may be linked From robert.kern at gmail.com Wed Jun 14 13:54:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Jun 2006 12:54:28 -0500 Subject: [Numpy-discussion] installation problems: stupid question In-Reply-To: <4490444E.2070805@obs.univ-lyon1.fr> References: <4490444E.2070805@obs.univ-lyon1.fr> Message-ID: Eric Emsellem wrote: > Hi, > > I just switched to Suse 10.1 (from Suse 10.0) and for some reason now > the new installed modules do not go under > /usr/lib/python2.4/site-packages/ as usual but under > /usr/local/lib/python2.4/site-packages/ > (the "local" is the difference). > > How can I go back to the normal setting ? You can edit ~/.pydistutils.cfg to add this section: [install] prefix=/usr However, Suse probably made the change for a reason. Distribution vendors like to control /usr and let the user/sysadmin do what he wants in /usr/local . It is generally a Good Idea to respect that. If the Suse python group is not incompetent, then they will have already made the modifications necessary to make sure that /usr/local/lib/python2.4/site-packages is appropriately on your PYTHONPATH and other such modifications. > thanks a lot for any input there. > > > Eric > P.S.: I seem to then have problem with lapack_lite.so (undefined symbol: > s_cat) and it may be linked I don't think so. That looks like it might be a function that should be in libg2c, but I'm not sure. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From myeates at jpl.nasa.gov Wed Jun 14 16:06:55 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 14 Jun 2006 13:06:55 -0700 Subject: [Numpy-discussion] core dump when running tests Message-ID: <44906C5F.9080901@jpl.nasa.gov> I consistently core dump when I do the following 1) from the console I do >import numpy >numpy.test(level=1,verbosity=2) >numpy.test(level=1,verbosity=2) >numpy.test(level=1,verbosity=2) the third time (and only the third) I get a core dump in test_types. It happens on the line val = vala+valb when k=2 atype= uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j) Any help in debugging this? Mathew From haase at msg.ucsf.edu Wed Jun 14 16:12:58 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 14 Jun 2006 13:12:58 -0700 Subject: [Numpy-discussion] old-Numeric: OverflowError on exp(-760) In-Reply-To: References: <200606121432.12896.haase@msg.ucsf.edu> Message-ID: <200606141312.58770.haase@msg.ucsf.edu> Hi, Thanks for the reply. Just for general enjoyment: I found a solution: It seems that replacing N.exp(-700) with N.e ** -700 changes the behaviour for the better ... Thanks, Sebastian Haase On Monday 12 June 2006 15:19, Sasha wrote: > BTW, here is the relevant explanation from mathmodule.c: > > /* ANSI C generally requires libm functions to set ERANGE > * on overflow, but also generally *allows* them to set > * ERANGE on underflow too. There's no consistency about > * the latter across platforms. > * Alas, C99 never requires that errno be set. > * Here we suppress the underflow errors (libm functions > * should return a zero on underflow, and +- HUGE_VAL on > * overflow, so testing the result for zero suffices to > * distinguish the cases). > */ > > On 6/12/06, Sasha wrote: > > I don't know about numarray, but the difference between Numeric and > > the python math module stems from the fact that the math module ignores > > errno set by the C library and only checks for infinity.
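[For reference, a minimal sketch of the behaviours being compared, runnable against modern numpy and the stdlib math module; old Numeric is omitted since it is no longer installable:]

```python
import math
import numpy as np

# numpy silently flushes underflow to zero by default
print(np.exp(-760.0))    # 0.0

# math.exp also returns 0.0 on underflow: CPython suppresses the ERANGE
# that libm may set for underflow and only treats overflow as an error
print(math.exp(-760.0))  # 0.0

# overflow, by contrast, raises in the math module...
try:
    math.exp(760.0)
except OverflowError:
    print("math.exp(760) raises OverflowError")

# ...while numpy returns inf (here silencing its overflow warning)
with np.errstate(over='ignore'):
    print(np.exp(760.0))  # inf
```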
Numeric relies > > on errno exclusively, numpy ignores errors by default: > > >>> import numpy,math,Numeric > > >>> numpy.exp(-760) > > 0.0 > > >>> math.exp(-760) > > 0.0 > > >>> Numeric.exp(-760) > > Traceback (most recent call last): > > File "", line 1, in ? > > OverflowError: math range error > > >>> numpy.exp(760) > > inf > > >>> math.exp(760) > > Traceback (most recent call last): > > File "", line 1, in ? > > OverflowError: math range error > > >>> Numeric.exp(760) > > Traceback (most recent call last): > > File "", line 1, in ? > > OverflowError: math range error > > I would say it's a bug in Numeric, so you are out of luck. > > Unfortunately, even MA.exp(-760) does not work, but this is easy to fix: > > >>> exp = > > >>> MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-1 > > >>>00,100)) exp(-760).filled() > > 0 > > You would need to replace -100,100 with the bounds appropriate for your > > system. > > On 6/12/06, Sebastian Haase wrote: > > > Hi, > > > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient > > > way to do a non-linear minimization. It uses the "old" Numeric module. > > > But since I upgraded to Numeric 24.2 I get OverflowErrors that I > > > tracked down to > > > > > > >>> Numeric.exp(-760.) > > > > > > Traceback (most recent call last): > > > File "", line 1, in ? > > > OverflowError: math range error > > > > > > >From numarray I'm used to getting this: > > > >>> na.exp(-760) > > > > > > 0.0 > > > > > > Mostly I'm confused because my code worked before I upgraded to version > > > 24.2. > > > > > > Thanks for any hints on how I could revive my code...
> > > -Sebastian Haase From myeates at jpl.nasa.gov Wed Jun 14 17:06:13 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 14 Jun 2006 14:06:13 -0700 Subject: [Numpy-discussion] core dump when running tests In-Reply-To: <44906C5F.9080901@jpl.nasa.gov> References: <44906C5F.9080901@jpl.nasa.gov> Message-ID: <44907A45.9070603@jpl.nasa.gov> Travis suggested I use svn and this worked! Thanks Travis! I'm now getting 1 test failure. I'd love to dot this 'i' ====================================================================== FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) ---------------------------------------------------------------------- Traceback (most recent call last): File "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 42, in check_large_types assert b == 6765201, "error with %r: got %r" % (t,b) AssertionError: error with : got 6765201.00000000000364 ---------------------------------------------------------------------- Ran 377 tests in 0.347s FAILED (failures=1) Mathew Yeates wrote: > I consistently core dump when I do the following > 1) from the console I do > >import numpy > >numpy.test(level=1,verbosity=2) > >numpy.test(level=1,verbosity=2) > >numpy.test(level=1,verbosity=2) > > the third time (and only the third) I get a core dump in test_types. It > happens on the line > val = vala+valb > when k=2 atype= uint8scalar l=16 btype=complex192scalar valb=(1.0+0.0j) > > Any help in debugging this? > Mathew > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From cookedm at physics.mcmaster.ca Wed Jun 14 23:13:25 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Wed, 14 Jun 2006 23:13:25 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft Message-ID: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> After working with them for a while, I'm going to go on record and say that I prefer the long names from Numeric and numarray (like linear_least_squares, inverse_real_fft, etc.), as opposed to the short names now used by default in numpy (lstsq, irefft, etc.). I know you can get the long names from numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better defaults. Abbreviations aren't necessarily unique (quick! what does eig() return by default?), and aren't necessarily obvious. A Google search for irfft vs. irefft for instance turns up only the numpy code as (English) matches for irefft, while irfft is much more common. Also, Numeric and numarray compatibility is increased by using the long names: those two don't have the short ones. Fitting names into 6 characters went out of style decades ago. (I think MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) My 2 cents... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sransom at nrao.edu Wed Jun 14 23:20:54 2006 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 14 Jun 2006 23:20:54 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060615032054.GA19076@ssh.cv.nrao.edu> I'll add my 2 cents to this and agree with David. Arguments about how short names are important for interactive work are pretty bogus given the beauty of modern tab-completion. And I'm not sure what other arguments there are... Scott On Wed, Jun 14, 2006 at 11:13:25PM -0400, David M.
Cooke wrote: > After working with them for a while, I'm going to go on record and say that I > prefer the long names from Numeric and numarray (like linear_least_squares, > inverse_real_fft, etc.), as opposed to the short names now used by default in > numpy (lstsq, irefft, etc.). I know you can get the long names from > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > defaults. > > Abbreviations aren't necessary unique (quick! what does eig() return by > default?), and aren't necessarily obvious. A Google search for irfft vs. > irefft for instance turns up only the numpy code as (English) matches for > irefft, while irfft is much more common. > > Also, Numeric and numarray compatibility is increased by using the long > names: those two don't have the short ones. > > Fitting names into 6 characters when out of style decades ago. (I think > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > > My 2 cents... > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From ndarray at mac.com Wed Jun 14 23:46:27 2006 From: ndarray at mac.com (Sasha) Date: Wed, 14 Jun 2006 23:46:27 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: On 6/14/06, David M. 
Cooke wrote: > After working with them for a while, I'm going to go on record and say that I > prefer the long names from Numeric and numarray (like linear_least_squares, > inverse_real_fft, etc.), as opposed to the short names now used by default in > numpy (lstsq, irefft, etc.). I know you can get the long names from > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > defaults. > I agree in spirit, but note that inverse_real_fft is still short for inverse_real_fast_fourier_transform. Presumably, fft is a proper noun in many people's vocabularies, but so may be lstsq depending on who you ask. > Abbreviations aren't necessarily unique (quick! what does eig() return by > default?), and aren't necessarily obvious. A Google search for irfft vs. > irefft for instance turns up only the numpy code as (English) matches for > irefft, while irfft is much more common. > Short names have one important advantage in scientific languages: they look good in expressions. What is easier to understand: hyperbolic_tangent(x) = hyperbolic_sinus(x)/hyperbolic_cosinus(x) or tanh(x) = sinh(x)/cosh(x) ? I am playing devil's advocate here a little because personally, I always recommend the following as a compromise: sinh = hyperbolic_sinus ... tanh(x) = sinh(x)/cosh(x) But the next question is where to put "sinh = hyperbolic_sinus": right before the expression using sinh? at the top of the module (import hyperbolic_sinus as sinh)? in the math library? If you pick the last option, do you need hyperbolic_sinus to begin with? If you pick any other option, how do you prevent others from writing sh = hyperbolic_sinus instead of sinh? > Also, Numeric and numarray compatibility is increased by using the long > names: those two don't have the short ones. > > Fitting names into 6 characters went out of style decades ago. (I think > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > Short names are still popular in scientific programming: .
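[The aliasing compromise above is a one-liner in Python. A sketch using the spellings modern numpy eventually settled on (rfft/irfft) rather than what 2006 numpy shipped:]

```python
import numpy as np

# Bind a long, self-documenting name to the established short one, so the
# short form stays available for use inside expressions.
inverse_real_fft = np.fft.irfft

spectrum = np.fft.rfft([0.0, 1.0, 0.0, -1.0])
signal = inverse_real_fft(spectrum)   # round-trips to [0, 1, 0, -1]
```

Whichever direction the default goes, the other spelling is always one assignment (or one `import ... as ...`) away.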
I am still +1 for keeping linear_least_squares and inverse_real_fft, but not just because abreviations are bad as such - if an established acronym such as fft exists we should be free to use it. From pfdubois at gmail.com Thu Jun 15 00:47:20 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Wed, 14 Jun 2006 21:47:20 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: Bertrand Meyer has pointed out that abbreviations are usually a bad idea. The problem is that abbreviations are not unique so you can't guess what they are. Whereas (modulo some library-wide conventions about names) linearLeastSquares or the like is unique. At the very least you're more likely to get it right. Any python user can abbreviate anything they like any way they like for interactive work. And yes, I think FFT is a name. (:-> Exception for that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sransom at nrao.edu Thu Jun 15 00:52:55 2006 From: sransom at nrao.edu (Scott Ransom) Date: Thu, 15 Jun 2006 00:52:55 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060615045254.GA31694@ssh.cv.nrao.edu> On Wed, Jun 14, 2006 at 09:47:20PM -0700, Paul Dubois wrote: > And yes, I think FFT is a name. (:-> Exception for that. I agree. As are sinh, cosh, tanh, sinc, exp, log10 and various other very commonly used (and not only in programming) names. lstsq, eig, irefft, etc are not. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. 
email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From josh8912 at yahoo.com Thu Jun 15 01:13:06 2006 From: josh8912 at yahoo.com (JJ) Date: Wed, 14 Jun 2006 22:13:06 -0700 (PDT) Subject: [Numpy-discussion] acml and numpy install problems Message-ID: <20060615051306.1788.qmail@web51711.mail.yahoo.com> Hello. I wrote to the list about a week ago regarding the slow speed of numpy relative to matlab. I'm fairly sure that my installation of numpy had problems. So I am trying this time with the acml libraries for my AMD Athlon 64-bit machine. New machine with FC_5. I was able to install the acml libraries without much trouble, and install UMFPACK and AMD without apparent errors. But I did have many errors when I tried to install numpy. My install messages are copied below. Apparently, numpy does see the acml libraries but finds them faulty, or something. I could use some clues if anyone has any. Also, I did set: setenv LD_LIBRARY_PATH /opt/acml3.1.0/gnu64/lib # setenv LD_RUN_PATH /opt/acml3.1.0/gnu64/lib Here is my config file: ----------------------------------- [atlas] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [blas] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [laplack] library_dirs = /opt/acml3.1.0/gnu64/lib include_dirs = /opt/acml3.1.0/gnu64/include atlas_libs = acml language = f77 [amd] library_dirs = /usr/local/scipy/AMD/Lib include_dirs = /usr/local/scipy/AMD/Include amd_libs = amd language = c [umfpack] library_dirs = /usr/local/scipy/UMFPACK/Lib include_dirs = /usr/local/scipy/UMFPACK/Include umfpack_libs = umfpack language = c ------------------------------------ I have set symbolic links between lacml and libacml.
Here is the first half of the output, where most of the errors are: -------------------------------- [root at fedora-newamd numpy]# python setup.py install Running from numpy source directory. No module named __svn_version__ F2PY Version 2_2624 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not find in /usr/local/lib libraries mkl,vml,guide not find in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/opt/acml3.1.0/gnu64/lib -lacml -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `z_abs' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined ... ... 
reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' ... ... reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Status: 255 Output: FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c define_macros = [('NO_ATLAS_INFO', 2)] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not find in /usr/local/lib libraries mkl,vml,guide not find in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not find in /opt/acml3.1.0/gnu64/lib libraries lapack not find in /opt/acml3.1.0/gnu64/lib libraries acml not find in /usr/local/lib libraries lapack_atlas not find in /usr/local/lib libraries acml not find in /usr/lib libraries lapack_atlas not find in /usr/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS /usr/local/numpy/numpy/distutils/system_info.py:881: UserWarning: ********************************************************************* Could not find lapack library within the ATLAS installation. 
********************************************************************* warnings.warn(message) Setting PTATLAS=ATLAS FOUND: libraries = ['acml'] library_dirs = ['/opt/acml3.1.0/gnu64/lib'] language = c define_macros = [('ATLAS_WITHOUT_LAPACK', None)] customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/opt/acml3.1.0/gnu64/lib -lacml -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `do_lio' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `e_wsfe' ... ... /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `acos' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsle' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_wsfe' /opt/acml3.1.0/gnu64/lib/libacml.so: undefined reference to `s_copy' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Status: 255 Output: lapack_info: libraries lapack not find in /usr/local/lib libraries lapack not find in /usr/lib NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1163: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. 
warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /usr/local/numpy/numpy/distutils/system_info.py:1166: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) NOT AVAILABLE running install running build running config_fc running build_src building py_modules sources creating build creating build/src.linux-x86_64-2.4 creating build/src.linux-x86_64-2.4/numpy creating build/src.linux-x86_64-2.4/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.linux-x86_64-2.4/numpy/core Generating build/src.linux-x86_64-2.4/numpy/core/config.h customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-I/usr/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ‘main’: _configtest.c:50: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ _configtest.c:57: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ _configtest.c:72: warning: format ‘%d’ expects type ‘int’, but argument 4 has type ‘long unsigned int’ gcc -pthread _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest /usr/bin/ld: skipping incompatible /usr/lib/libpthread.so when searching for -lpthread /usr/bin/ld: skipping incompatible /usr/lib/libpthread.a when searching for -lpthread /usr/bin/ld: skipping incompatible /usr/lib/libc.so when searching for -lc /usr/bin/ld: skipping incompatible /usr/lib/libc.a when searching for -lc _configtest success!
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -o _configtest _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status _configtest.o: In function `main': /usr/local/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -lm -o _configtest _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D _FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function ?main?: _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest
adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources.
executing numpy/core/code_generators/generate_array_api.py
adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources.
creating build/src.linux-x86_64-2.4/numpy/core/src
conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc
adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs.
conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h']
building extension "numpy.core.umath" sources
adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources.
executing numpy/core/code_generators/generate_ufunc_api.py
adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources.
conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/umathmodule.c
adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs.
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h']
building extension "numpy.core._sort" sources
adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources.
adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources.
conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/_sortmodule.c
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h']
building extension "numpy.core.scalarmath" sources
adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources.
adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources.
adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources.
conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalarmathmodule.c
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h']
building extension "numpy.core._dotblas" sources
adding 'numpy/core/blasdot/_dotblas.c' to sources.
building extension "numpy.lib._compiled_base" sources
building extension "numpy.dft.fftpack_lite" sources
building extension "numpy.linalg.lapack_lite" sources
creating build/src.linux-x86_64-2.4/numpy/linalg
### Warning: Using unoptimized lapack ###
---------------------------------------------
Any ideas? I am still a novice and could use some suggestions. Thanks much. JJ
__________________________________________________
Do You Yahoo!? Tired of spam? Yahoo!
Mail has the best spam protection around http://mail.yahoo.com From saagesen at sfu.ca Thu Jun 15 01:21:46 2006 From: saagesen at sfu.ca (saagesen at sfu.ca) Date: Wed, 14 Jun 2006 22:21:46 -0700 Subject: [Numpy-discussion] memory leak in array Message-ID: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> An embedded and charset-unspecified text was scrubbed... Name: not available URL: From djm at mindrot.org Thu Jun 15 01:22:57 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 15:22:57 +1000 (EST) Subject: [Numpy-discussion] numpy segv on OpenBSD Message-ID: Hi, I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a malloc fault in the check_types() self-test as it tries to free() a junk pointer. In case you are not aware, OpenBSD's malloc() implementation does a fair bit of randomisation that makes it (deliberately) sensitive to memory management errors. Instrumenting the check_types test and scalartypes.inc.src's gen_dealloc() and gen_alloc() functions I noticed that the error occurs after calling gen_dealloc() on a complex128scalar that was created as check_types's "valb" variable as it is GC'd. The check_types tests work fine on the complex64scalar type and all the other preceding types. I'm not familiar with the guts of numpy at all (and I can't even find the declaration of the complex128scalar type in the source). What difference between complex64scalar and complex128scalar should I look for to debug this further? A backtrace is below for the curious.
-d

(gdb) bt
#0  0x0ff49975 in kill () from /usr/lib/libc.so.39.1
#1  0x0ff822c3 in abort () at /usr/src/lib/libc/stdlib/abort.c:65
#2  0x0ff69649 in wrterror (p=0x2ff18460 "free_pages: pointer to wrong page") at /usr/src/lib/libc/stdlib/malloc.c:434
#3  0x0ff6970b in wrtwarning (p=0x2ff18460 "free_pages: pointer to wrong page") at /usr/src/lib/libc/stdlib/malloc.c:444
#4  0x0ff6ac53 in free_pages (ptr=0x7e0033b0, index=516111, info=0x0) at /usr/src/lib/libc/stdlib/malloc.c:1343
#5  0x0ff6a6f4 in ifree (ptr=0x7e0033b0) at /usr/src/lib/libc/stdlib/malloc.c:1770
#6  0x0ff6a8d1 in free (ptr=0x7e0033b0) at /usr/src/lib/libc/stdlib/malloc.c:1838
#7  0x0d259117 in gentype_dealloc (v=0x7e0033b0) at scalartypes.inc.src:283
#8  0x0c5fc778 in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#9  0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#10 0x0c60072f in fast_function () from /usr/local/lib/libpython2.4.so.0.0
#11 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0
#12 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#13 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#14 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0
#15 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#16 0x0c600c6b in ext_do_call () from /usr/local/lib/libpython2.4.so.0.0
#17 0x0c5fe83c in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#18 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#19 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0
#20 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#21 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0
#22 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#23 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0
#24 0x0c6002fa in call_function () from /usr/local/lib/libpython2.4.so.0.0
#25 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#26 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#27 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0
#28 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#29 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0
#30 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#31 0x0c5e5c9f in slot_tp_call () from /usr/local/lib/libpython2.4.so.0.0
#32 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#33 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0
#34 0x0c6002fa in call_function () from /usr/local/lib/libpython2.4.so.0.0
#35 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#36 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#37 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0
#38 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#39 0x0c600c6b in ext_do_call () from /usr/local/lib/libpython2.4.so.0.0
#40 0x0c5fe83c in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#41 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#42 0x0c5bf2f2 in function_call () from /usr/local/lib/libpython2.4.so.0.0
#43 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#44 0x0c5b2bd4 in instancemethod_call () from /usr/local/lib/libpython2.4.so.0.0
#45 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#46 0x0c5e5c9f in slot_tp_call () from /usr/local/lib/libpython2.4.so.0.0
#47 0x0c5abe40 in PyObject_Call () from /usr/local/lib/libpython2.4.so.0.0
#48 0x0c600aa1 in do_call () from /usr/local/lib/libpython2.4.so.0.0
#49 0x0c6002fa in call_function () from /usr/local/lib/libpython2.4.so.0.0
#50 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#51 0x0c6007b0 in fast_function () from /usr/local/lib/libpython2.4.so.0.0
#52 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0
#53 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#54 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#55 0x0c60072f in fast_function () from /usr/local/lib/libpython2.4.so.0.0
#56 0x0c60036d in call_function () from /usr/local/lib/libpython2.4.so.0.0
#57 0x0c5fe42f in PyEval_EvalFrame () from /usr/local/lib/libpython2.4.so.0.0
#58 0x0c5feeb6 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.4.so.0.0
#59 0x0c5fc1a7 in PyEval_EvalCode () from /usr/local/lib/libpython2.4.so.0.0
#60 0x0c61d060 in run_node () from /usr/local/lib/libpython2.4.so.0.0
#61 0x0c61c0b1 in PyRun_SimpleFileExFlags () from /usr/local/lib/libpython2.4.so.0.0
#62 0x0c61ba49 in PyRun_AnyFileExFlags () from /usr/local/lib/libpython2.4.so.0.0
#63 0x0c622bab in Py_Main () from /usr/local/lib/libpython2.4.so.0.0
#64 0x1c000d60 in main ()

From djm at mindrot.org Thu Jun 15 01:24:08 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 15:24:08 +1000 (EST) Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: On Thu, 15 Jun 2006, Damien Miller wrote:

> Hi,
>
> I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a

bah, I'm actually using numpy-0.9.8 (not 0.9.5).
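[Editorial aside, not part of the thread: David Cooke's clarification further down — a complex128 scalar is two float64 components, a complex64 two float32s, the suffix counting total bits — can be checked directly in numpy. The values below are purely illustrative.]

```python
import numpy as np

# complex64 = two float32 components (8 bytes total);
# complex128 = two float64 components (16 bytes total).
a = np.complex64(1 + 2j)
b = np.complex128(1 + 2j)

print(a.itemsize, b.itemsize)      # 8 16
print(a.real.dtype, b.real.dtype)  # float32 float64
```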
-d From robert.kern at gmail.com Thu Jun 15 01:38:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Jun 2006 00:38:41 -0500 Subject: [Numpy-discussion] memory leak in array In-Reply-To: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> References: <200606150521.k5F5Lkgi013099@rm-rstar.sfu.ca> Message-ID: saagesen at sfu.ca wrote: > Update: I posted this message on the comp.lang.python forum and their > response was to get the numbers of references with sys.getrefcount(obj). > After doing this I see that iterative counters used to count occurrences > and nested loop counters (ii & jj) as seen in the code example below are the > culprits with the worst ones over 1M: > > for ii in xrange(0,40): > for jj in xrange(0,20): Where are you getting this 1M figure? Is that supposed to mean "1 Megabyte of memory"? Because they don't consume that much memory. In fact, all of the small integers between -1 and 100, I believe (but certainly all of them in xrange(0, 40)) are shared. There is only one 0 object and only one 10 object, etc. That is why their refcount is so high. You're going down a dead end here. > try: > nc = y[a+ii,b+jj] > except IndexError: nc = 0 > > if nc == "1" or nc == "5": What is the dtype of y? You are testing for strings, but assigning integers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Thu Jun 15 01:44:54 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 15 Jun 2006 01:44:54 -0400 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> On Thu, 15 Jun 2006 15:22:57 +1000 (EST) Damien Miller wrote: > Hi, > > I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a > malloc fault in the check_types() self-test as it tries to free() a junk > pointer. In case you are not aware, OpenBSD's malloc() implementation > does a fair bit of randomisation that makes it (deliberately) sensitive > to memory management errors. > > Instumenting the check_types test and scalartypes.inc.src's > gen_dealloc() and gen_alloc() functions I noticed that the error occurs > up after calling gen_dealloc() on a complex128scalar that was created as > check_types's "valb" variable as it is GC'd. > > The check_types tests work fine on the complex64scalar type and all > the other preceeding types. I'm not familiar with the guts of numpy > at all (and I can't even find the declaration of the complex128scalar > type in the source). What difference between complex64scalar and > complex128scalar should I look for to debug this further? Can you update to the latest svn? We may have fixed it already: valgrind is showing up nothing for me. A complex128scalar is a complex number made up of doubles (float64); a complex64 is one of floats (float32). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Thu Jun 15 01:47:41 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 15 Jun 2006 01:47:41 -0400 Subject: [Numpy-discussion] core dump when runniong tests In-Reply-To: <44907A45.9070603@jpl.nasa.gov> References: <44906C5F.9080901@jpl.nasa.gov> <44907A45.9070603@jpl.nasa.gov> Message-ID: <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> On Wed, 14 Jun 2006 14:06:13 -0700 Mathew Yeates wrote: > Travis suggested I use svn and this worked! > Thanks Travis! > > I'm now getting 1 test failure. I'd love to dot this 'i' > > ====================================================================== > FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line > 42, in check_large_types > assert b == 6765201, "error with %r: got %r" % (t,b) > AssertionError: error with : got > 6765201.00000000000364 > > ---------------------------------------------------------------------- > Ran 377 tests in 0.347s > > FAILED (failures=1) I'm guessing the C powl function isn't good enough on your machine. What OS are you running? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Thu Jun 15 01:57:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 14 Jun 2006 23:57:08 -0600 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: References: Message-ID: <4490F6B4.9060309@ieee.org> Damien Miller wrote: > Hi, > > I'm trying to make an OpenBSD package on numpy-0.9.5, but it receives a > malloc fault in the check_types() self-test as it tries to free() a junk > pointer. In case you are not aware, OpenBSD's malloc() implementation > does a fair bit of randomisation that makes it (deliberately) sensitive > to memory management errors. 
> This problem has been worked around in NumPy SVN. It is a problem with Python that has been fixed in Python SVN as well. You can either comment-out the test or update to latest SVN. -Travis From djm at mindrot.org Thu Jun 15 04:56:29 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 18:56:29 +1000 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> Message-ID: <449120BD.2070601@mindrot.org> David M. Cooke wrote: > Can you update to the latest svn? We may have fixed it already: valgrind is > showing up nothing for me. Ok, but dumb question: how do I check out the SVN trunk? Sourceforge lists details for CVS only... (unless I'm missing something) -d From arnd.baecker at web.de Thu Jun 15 05:03:20 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Jun 2006 11:03:20 +0200 (CEST) Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <449120BD.2070601@mindrot.org> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> <449120BD.2070601@mindrot.org> Message-ID: On Thu, 15 Jun 2006, Damien Miller wrote: > David M. Cooke wrote: > > Can you update to the latest svn? We may have fixed it already: valgrind is > > showing up nothing for me. > > Ok, but dumb question: how do I check out the SVN trunk? Sourceforge > lists details for CVS only... (unless I'm missing something) See "Bleeding-edge repository access" under http://www.scipy.org/Download I.e. 
for numpy: svn co http://svn.scipy.org/svn/numpy/trunk numpy Best, Arnd From fperez.net at gmail.com Thu Jun 15 05:03:25 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Jun 2006 03:03:25 -0600 Subject: [Numpy-discussion] numpy segv on OpenBSD In-Reply-To: <449120BD.2070601@mindrot.org> References: <20060615014454.53e523c6@arbutus.physics.mcmaster.ca> <449120BD.2070601@mindrot.org> Message-ID: On 6/15/06, Damien Miller wrote: > David M. Cooke wrote: > > Can you update to the latest svn? We may have fixed it already: valgrind is > > showing up nothing for me. > > Ok, but dumb question: how do I check out the SVN trunk? Sourceforge > lists details for CVS only... (unless I'm missing something) http://scipy.org/Developer_Zone Cheers, f From djm at mindrot.org Thu Jun 15 05:13:53 2006 From: djm at mindrot.org (Damien Miller) Date: Thu, 15 Jun 2006 19:13:53 +1000 Subject: [Numpy-discussion] Disable linking against external libs Message-ID: <449124D1.7020504@mindrot.org> Hi, What is the intended way to disable linking against installed libraries (blas, lapack, etc) in site.cfg? I know I can do: [blas] blah_libs = XXXnonexistXXX but that strikes me as less than elegant. FYI I want to do this to make package building deterministic; not varying based on what the package builder happens to have installed on his/her machine -d From chanley at stsci.edu Thu Jun 15 08:53:30 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 15 Jun 2006 08:53:30 -0400 Subject: [Numpy-discussion] numpy.test() fails on Redhat Enterprise and Solaris In-Reply-To: <4490C741.9000009@ieee.org> References: <20060614111740.CJQ36789@comet.stsci.edu> <4490C741.9000009@ieee.org> Message-ID: <4491584A.7090301@stsci.edu> The last successful run was with revision 2613. However, revision 2624 appears to have corrected the problem on Solaris. Thanks, Chris Travis Oliphant wrote: > Christopher Hanley wrote: > >> The daily numpy build and tests I run have failed for revision 2617. 
>> Below is the error message I receive on my RHE 3 box:
>>
>> ======================================================================
>> FAIL: Check reading the nested fields of a nested array (1st level)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>> File "/data/sparty1/dev/site-packages/lib/python/numpy/core/tests/test_numerictypes.py", line 283, in check_nested1_acessors dtype='U2'))
>> File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 139, in assert_equal return assert_array_equal(actual, desired, err_msg)
>> File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 215, in assert_array_equal verbose=verbose, header='Arrays are not equal')
>> File "/data/sparty1/dev/site-packages/lib/python/numpy/testing/utils.py", line 207, in assert_array_compare assert cond, msg
>> AssertionError: Arrays are not equal
>> (mismatch 100.0%) x: array([u'NN', u'OO'], dtype='<U2') y: array([u'NN', u'OO'], dtype='<U2')
>>
>> On my Solaris 8 box this same test causes a bus error:
>>
>> Check creation of single-dimensional objects ... ok
>> Check creation of 0-dimensional objects ... ok
>> Check creation of multi-dimensional objects ... ok
>> Check creation of single-dimensional objects ... ok
>> Check reading the top fields of a nested array ... ok
>> Check reading the nested fields of a nested array (1st level)Bus Error (core dumped)
>>

> Do you know when was the last successful run? I think I know what may
> be causing this, but the change was introduced several weeks ago...
> > -Travis
>

From alexander.belopolsky at gmail.com Thu Jun 15 09:15:55 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 15 Jun 2006 09:15:55 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: On 6/15/06, Paul Dubois wrote: > And yes, I think FFT is a name. (:-> Exception for that. There are more exceptions that Numeric is not taking advantage of: equal, less, greater, ... -> eq, lt, gt, ... inverse, generalized_inverse -> inv, pinv In my view it is more important that code is easy to read than easy to write. Interactive users will disagree, but in programming you write once and read/edit forever :). Again, there is no defense for abbreviating linear_least_squares because it is unlikely to appear in an expression and waste valuable horizontal space. Contracting generalized_inverse is appropriate and numpy does the right thing in this case. The eig.., svd and cholesky choice of names is unfortunate because three different abbreviation schemes are used: first syllable, acronym and first word. I would say: when in doubt spell it in full.

From emsellem at obs.univ-lyon1.fr Thu Jun 15 09:35:20 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 15 Jun 2006 15:35:20 +0200 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question Message-ID: <44916218.9060100@obs.univ-lyon1.fr> Hi, I have written a number of small modules where I now systematically use numpy. I have in principle used the latest versions of the different array/Science modules (scipy, numpy, ..) but still at some point during a selection, it crashes on numpy because it seems that the arrays correspond to "numarray" arrays. e.g.:
##################################
selection = (rell >= 1.) * (rell < ES0.maxEFFR[indgal])
##################################
### rell is an array of reals and ES0.maxEFFR[indgal] is a real number.
gives the error: ========== /usr/local/lib/python2.4/site-packages/numarray/numarraycore.py:376: UserWarning: __array__ returned non-NumArray instance _warnings.warn("__array__ returned non-NumArray instance") /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _cache_miss2(self, n1, n2, out) 919 (in1, in2), inform, scalar = _inputcheck(n1, n2) 920 --> 921 mode, win1, win2, wout, cfunc, ufargs = \ 922 self._setup(in1, in2, inform, out) 923 /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _setup(self, in1, in2, inform, out) 965 if out is None: wout = in2.new(outtypes[0]) 966 if inform == "vv": --> 967 intypes = (in1._type, in2._type) 968 inarr1, inarr2 = in1._dualbroadcast(in2) 969 fform, convtypes, outtypes, cfunc = self._typematch_N(intypes, inform) AttributeError: 'numpy.ndarray' object has no attribute '_type' ================================================ QUESTION 1: Any hint on where numarray could still be appearing? QUESTION 2: how would you make a selection using "and" and "or" such as: selection = (condition 1) "and" (condition2 "or" condition3) so that "selection" contains 0 and 1 according to the right hand side. Thanks, Eric P.S.: my config is: matplotlib version 0.87.3 verbose.level helpful interactive is False platform is linux2 numerix numpy 0.9.9.2624 font search path ['/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data'] backend GTKAgg version 2.8.2 Python 2.4.2 (#1, May 2 2006, 08:13:46) IPython 0.7.2 -- An enhanced Interactive Python. I am using numerix = numpy in matplotlibrc. I am also using NUMERIX = numpy when building pyfits. -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. 
Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From Glen.Mabey at swri.org Thu Jun 15 10:04:27 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 15 Jun 2006 09:04:27 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org Message-ID: <20060615140427.GA26421@bams.swri.edu> Hello, I am attempting to use the svn versions of numpy and scipy, but apparently (according to http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a less-than-agreeable web proxy, because I get $ svn co http://svn.scipy.org/svn/numpy/trunk numpy svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request (http://svn.scipy.org) The solution suggested in the above URL is to use https instead, however, when I attempt this $ svn co https://svn.scipy.org/svn/numpy/trunk numpy svn: PROPFIND request failed on '/svn/numpy/trunk' svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed (https://svn.scipy.org) it appears that svn.scipy.org is not setup to employ SSL. Is this an easy thing to do? Please forgive me if this is just an issue of svn-ignorance on my part. Thanks, Glen Mabey From jstrunk at enthought.com Thu Jun 15 12:58:55 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 15 Jun 2006 11:58:55 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <20060615140427.GA26421@bams.swri.edu> References: <20060615140427.GA26421@bams.swri.edu> Message-ID: <200606151158.55856.jstrunk@enthought.com> Hi Glen, I'll see about enabling SSL for svn on svn.scipy.org. Jeff Strunk IT Administrator Enthought, Inc. On Thursday 15 June 2006 9:04 am, Glen W. 
Mabey wrote: > Hello, > > I am attempting to use the svn versions of numpy and scipy, but > apparently (according to > http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a > less-than-agreeable web proxy, because I get > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request > (http://svn.scipy.org) > > The solution suggested in the above URL is to use https instead, > however, when I attempt this > > $ svn co https://svn.scipy.org/svn/numpy/trunk numpy > svn: PROPFIND request failed on '/svn/numpy/trunk' > svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed > (https://svn.scipy.org) > > it appears that svn.scipy.org is not setup to employ SSL. Is this an > easy thing to do? > > Please forgive me if this is just an issue of svn-ignorance on my part. > > Thanks, > Glen Mabey > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From jstrunk at enthought.com Thu Jun 15 13:02:42 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 15 Jun 2006 12:02:42 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <200606151158.55856.jstrunk@enthought.com> References: <20060615140427.GA26421@bams.swri.edu> <200606151158.55856.jstrunk@enthought.com> Message-ID: <200606151202.42999.jstrunk@enthought.com> svn over https works now. Jeff Strunk IT Administrator Enthought, Inc On Thursday 15 June 2006 11:58 am, Jeff Strunk wrote: > Hi Glen, > > I'll see about enabling SSL for svn on svn.scipy.org. > > Jeff Strunk > IT Administrator > Enthought, Inc. > > On Thursday 15 June 2006 9:04 am, Glen W. 
Mabey wrote: > > Hello, > > > > I am attempting to use the svn versions of numpy and scipy, but > > apparently (according to > > http://www.sipfoundry.org/tools/svn-tips.html#proxy ) I am behind a > > less-than-agreeable web proxy, because I get > > > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > > svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request > > (http://svn.scipy.org) > > > > The solution suggested in the above URL is to use https instead, > > however, when I attempt this > > > > $ svn co https://svn.scipy.org/svn/numpy/trunk numpy > > svn: PROPFIND request failed on '/svn/numpy/trunk' > > svn: PROPFIND of '/svn/numpy/trunk': 405 Method Not Allowed > > (https://svn.scipy.org) > > > > it appears that svn.scipy.org is not setup to employ SSL. Is this an > > easy thing to do? > > > > Please forgive me if this is just an issue of svn-ignorance on my part. > > > > Thanks, > > Glen Mabey > > > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From Glen.Mabey at swri.org Thu Jun 15 13:06:06 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 15 Jun 2006 12:06:06 -0500 Subject: [Numpy-discussion] https access to svn.scipy.org In-Reply-To: <200606151202.42999.jstrunk@enthought.com> References: <20060615140427.GA26421@bams.swri.edu> <200606151158.55856.jstrunk@enthought.com> <200606151202.42999.jstrunk@enthought.com> Message-ID: <20060615170606.GA26475@bams.swri.edu> On Thu, Jun 15, 2006 at 12:02:42PM -0500, Jeff Strunk wrote: > svn over https works now. Thanks Jeff -- that solved my svn woes. 
Glen

From fperez.net at gmail.com Thu Jun 15 13:25:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Jun 2006 11:25:08 -0600 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: <44916218.9060100@obs.univ-lyon1.fr> References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: On 6/15/06, Eric Emsellem wrote: > Hi, > > I have written a number of small modules where I now systematically use > numpy. > > I have in principle used the latest versions of the different > array/Science modules (scipy, numpy, ..) but still at some point during > a selection, it crashes on numpy because it seems that the array > correspond to "numarray" arrays. [...] > QUESTION 1: Any hint on where numarray could still be appearing? Not a final answer, but I've had the same thing happen to me recently (I'm making the transition right now) with extension modules which were built against Numeric (in my case). They return old Numeric arrays (I had 23.7, without the array interface) and numpy is not happy. Rebuilding all my extensions against numpy fixed the problem. Cheers, f

From bhendrix at enthought.com Thu Jun 15 13:41:10 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Thu, 15 Jun 2006 12:41:10 -0500 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: <44919BB6.6050901@enthought.com> We've had the same problem many times. There were a few causes:
* Our clean scripts don't delete c++ files, so generated code was often not re-generated when we switched to numpy
* Files to generate code had numeric arrays hardcoded
* we were using numerix, and the env var was not set for part of the build
How I generally detect the problem is by deleting the numeric/numarray package directories, then running python with the verbose flag.
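[Editorial aside, not part of Bryce's message: a complementary runtime check is to ask each suspect object which package defined its type — a mixed numpy/Numeric/numarray environment shows up immediately. `array_origin` is a hypothetical helper name for illustration, not a numpy API.]

```python
import numpy as np

def array_origin(obj):
    """Return the top-level package that defines obj's type,
    e.g. 'numpy', 'numarray', or 'Numeric'."""
    return type(obj).__module__.split('.')[0]

a = np.arange(5)
print(array_origin(a))  # numpy
```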
Bryce Fernando Perez wrote: > On 6/15/06, Eric Emsellem wrote: > >> Hi, >> >> I have written a number of small modules where I now systematically use >> numpy. >> >> I have in principle used the latest versions of the different >> array/Science modules (scipy, numpy, ..) but still at some point during >> a selection, it crashes on numpy because it seems that the array >> correspond to "numarray" arrays. >> > > [...] > > >> QUESTION 1: Any hint on where numarray could still be appearing? >> > > Not a final answer, but I've had the same thing happen to me recently > (I'm making the transition right now) with extension modules which > were built against Numeric (in my case). They return old Numeric > arrays (I had 23.7, without the array interface) and numpy is not > happy. > > Rebuilding all my extensions against numpy fixed the problem. > > Cheers, > > f > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myeates at jpl.nasa.gov Thu Jun 15 15:17:16 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 15 Jun 2006 12:17:16 -0700 Subject: [Numpy-discussion] core dump when runniong tests In-Reply-To: <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> References: <44906C5F.9080901@jpl.nasa.gov> <44907A45.9070603@jpl.nasa.gov> <20060615014741.2ed9eecb@arbutus.physics.mcmaster.ca> Message-ID: <4491B23C.2040303@jpl.nasa.gov> SunOS 5.10 Generic_118844-20 i86pc i386 i86pcSystem = SunOS David M. Cooke wrote: > On Wed, 14 Jun 2006 14:06:13 -0700 > Mathew Yeates wrote: > > >> Travis suggested I use svn and this worked! >> Thanks Travis! >> >> I'm now getting 1 test failure. 
I'd love to dot this 'i' >> >> ====================================================================== >> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line >> 42, in check_large_types >> assert b == 6765201, "error with %r: got %r" % (t,b) >> AssertionError: error with : got >> 6765201.00000000000364 >> >> ---------------------------------------------------------------------- >> Ran 377 tests in 0.347s >> >> FAILED (failures=1) >> > > I'm guessing the C powl function isn't good enough on your machine. > > What OS are you running? > > From humufr at yahoo.fr Thu Jun 15 17:06:14 2006 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Thu, 15 Jun 2006 14:06:14 -0700 Subject: [Numpy-discussion] problem with numpy.. sometimes using numarray? and selection question In-Reply-To: <44916218.9060100@obs.univ-lyon1.fr> References: <44916218.9060100@obs.univ-lyon1.fr> Message-ID: <200606151406.14939.humufr@yahoo.fr> Just a guess, you're reading some fits file with pyfits but you didn't declare the variable NUMERIX for numpy (with the beta version of pyfits) or you script are calling another script who are using numarray. I had both problem last week. Pyfits with a mix of numarray/numpy and a script to read some data and return it like an array. N. Le jeudi 15 juin 2006 06:35, Eric Emsellem a écrit : > Hi, > > I have written a number of small modules where I now systematically use > numpy. > > I have in principle used the latest versions of the different > array/Science modules (scipy, numpy, ..) but still at some point during > a selection, it crashes on numpy because it seems that the array > correspond to "numarray" arrays. > > e.g.: > ################################## > selection = (rell >= 1.)
* (rell < ES0.maxEFFR[indgal]) > ################################## > ### rell is an array of reals and ES0.maxEFFR[indgal] is a real number. > > gives the error: > ========== > /usr/local/lib/python2.4/site-packages/numarray/numarraycore.py:376: > UserWarning: __array__ returned non-NumArray instance > _warnings.warn("__array__ returned non-NumArray instance") > /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in > _cache_miss2(self, n1, n2, out) > 919 (in1, in2), inform, scalar = _inputcheck(n1, n2) > 920 > --> 921 mode, win1, win2, wout, cfunc, ufargs = \ > 922 self._setup(in1, in2, inform, out) > 923 > > /usr/local/lib/python2.4/site-packages/numarray/ufunc.py in _setup(self, > in1, in2, inform, out) > 965 if out is None: wout = in2.new(outtypes[0]) > 966 if inform == "vv": > --> 967 intypes = (in1._type, in2._type) > 968 inarr1, inarr2 = in1._dualbroadcast(in2) > 969 fform, convtypes, outtypes, cfunc = > self._typematch_N(intypes, inform) > > AttributeError: 'numpy.ndarray' object has no attribute '_type' > ================================================ > > QUESTION 1: Any hint on where numarray could still be appearing? > > QUESTION 2: how would you make a selection using "and" and "or" such as: > selection = (condition 1) "and" (condition2 "or" > condition3) so that "selection" contains 0 and 1 according to the right > hand side. > > Thanks, > > Eric > P.S.: > my config is: > > matplotlib version 0.87.3 > verbose.level helpful > interactive is False > platform is linux2 > numerix numpy 0.9.9.2624 > font search path > ['/usr/local/lib/python2.4/site-packages/matplotlib/mpl-data'] > backend GTKAgg version 2.8.2 > Python 2.4.2 (#1, May 2 2006, 08:13:46) > IPython 0.7.2 -- An enhanced Interactive Python. > > I am using numerix = numpy in matplotlibrc. I am also using NUMERIX = > numpy when building pyfits. 
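Eric's QUESTION 2 is not answered directly in the replies; in numpy, element-wise "and"/"or" are spelled `&` and `|`, and each comparison needs its own parentheses (Python's own `and`/`or` do not work element-wise on arrays). A minimal sketch with made-up values, not Eric's data:

```python
import numpy as np

rell = np.array([0.5, 1.2, 2.0, 3.5])  # made-up values for illustration

# (condition1) and (condition2 or condition3), element-wise; the
# parentheses are required because & and | bind tighter than >= and <.
selection = (rell >= 1.0) & ((rell < 3.0) | (rell > 3.4))
print(selection.astype(int))  # -> [0 1 1 1]
```

Casting the boolean mask with `astype(int)` gives the 0/1 array Eric asks for.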
From haley at ucar.edu Thu Jun 15 17:38:02 2006 From: haley at ucar.edu (Mary Haley) Date: Thu, 15 Jun 2006 15:38:02 -0600 (MDT) Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of a module Message-ID: Hi all, We are getting ready to release some Python software that supports both NumPy and Numeric. As we have it now, if somebody wanted to use our software with NumPY, they would have to download the binary distribution that was built with NumPy and install that. Otherwise, they have to download the binary distribution that was built with Numeric and install that. We are using Python's distutils, and I'm trying to figure out if there's a way in which I can have both distributions installed to one package directory, and then the __init__.py file would try to figure out which one to import on behalf of the user (i.e. it would try to figure out if the user had already imported NumPy, and if so, import the NumPy version of the module; otherwise, it will import the Numeric version of the module). This is turning out to be a bigger pain than I expected, so I'm turning to this group to see if anybody has a better idea, or should I just give up and release these two distributions separately? Thanks, --Mary From josh8912 at yahoo.com Thu Jun 15 18:56:56 2006 From: josh8912 at yahoo.com (JJ) Date: Thu, 15 Jun 2006 15:56:56 -0700 (PDT) Subject: [Numpy-discussion] syntax for obtaining rank of two columns? Message-ID: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Hello. I am a matlab user learning the syntax of numpy. Id like to check that I am not missing some easy steps on column selection and concatenation. The example task is to determine if two columns selected out of an array are of full rank (rank 2). Lets say we have an array d that is size (10,10) and we select the ith and jth columns to test their rank. In matlab the command is quite simple: rank([d(:,i),d(:,j)]) In numpy, the best I have thought of so far is: linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \ ones((shape(transpose(vstack((d[:,i],d[:,j])))) \ [0],1),'d'))[2] Im thinking there must be a less awkward way. Any ideas? JJ __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From tim.hochberg at cox.net Thu Jun 15 20:27:42 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 15 Jun 2006 17:27:42 -0700 Subject: [Numpy-discussion] syntax for obtaining rank of two columns? In-Reply-To: <20060615225656.7187.qmail@web51715.mail.yahoo.com> References: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Message-ID: <4491FAFE.4080901@cox.net> JJ wrote: >Hello. I am a matlab user learning the syntax of >numpy. Id like to check that I am not missing some >easy steps on column selection and concatenation. The >example task is to determine if two columns selected >out of an array are of full rank (rank 2).
Lets say >we have an array d that is size (10,10) and we select >the ith and jth columns to test their rank. In matlab >the command is quite simple: > >rank([d(:,i),d(:,j)]) > >In numpy, the best I have thought of so far is: > >linalg.lstsq(transpose(vstack((d[:,i],d[:,j]))), \ >ones((shape(transpose(vstack((d[:,i],d[:,j])))) \ >[0],1),'d'))[2] > >Im thinking there must be a less awkward way. Any >ideas? > > This isn't really my field, so this could be wrong, but try: linalg.lstsq(d[:,[i,j]], ones_like(d[:,[i,j]]))[2] and see if that works for you. -tim >JJ > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! Mail has the best spam protection around >http://mail.yahoo.com > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From simon at arrowtheory.com Fri Jun 16 05:40:47 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 16 Jun 2006 10:40:47 +0100 Subject: [Numpy-discussion] syntax for obtaining rank of two columns? In-Reply-To: <20060615225656.7187.qmail@web51715.mail.yahoo.com> References: <20060615225656.7187.qmail@web51715.mail.yahoo.com> Message-ID: <20060616104047.488dd098.simon@arrowtheory.com> On Thu, 15 Jun 2006 15:56:56 -0700 (PDT) JJ wrote: > In matlab > the command is quite simple: > > rank([d(:,i),d(:,j)]) you could use the cauchy-schwartz inequality, which becomes an equality iff the rank above is 1: http://planetmath.org/encyclopedia/CauchySchwarzInequality.html Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From strawman at astraw.com Thu Jun 15 22:22:05 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 15 Jun 2006 19:22:05 -0700 Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of a module In-Reply-To: References: Message-ID: <449215CD.4030800@astraw.com> Dear Mary, I suggest using numpy and at the boundaries use numpy.asarray(yourinput), which will be a quick way to view the data as a numpy array, regardless of its original type. Otherwise, you could look at the matplotlib distribution to see how it's done to really support multiple array packages simultaneously. Mary Haley wrote: > Hi all, > > We are getting ready to release some Python software that supports > both NumPy and Numeric. > > As we have it now, if somebody wanted to use our software with NumPY, > they would have to download the binary distribution that was built > with NumPy and install that. Otherwise, they have to download the > binary distribution that was built with Numeric and install that. > > We are using Python's distutils, and I'm trying to figure out if > there's a way in which I can have both distributions installed to one > package directory, and then the __init__.py file would try to figure > out which one to import on behalf of the user (i.e. it would try to > figure out if the user had already imported NumPy, and if so, import > the NumPy version of the module; otherwise, it will import the Numeric > version of the module). > > This is turning out to be a bigger pain than I expected, so I'm > turning to this group to see if anybody has a better idea, or should I > just give up and release these two distributions separately? 
> > Thanks, > > --Mary > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ted.horst at earthlink.net Thu Jun 15 22:39:58 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Thu, 15 Jun 2006 21:39:58 -0500 Subject: [Numpy-discussion] deprecated function throwing readonly attribute Message-ID: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> The depreacted function in numpy.lib.utils is throwing a readonly attribute exception in the latest svn (2627). This is on the Mac OSX (10.4.6) using the builtin python (2.3.5) during the import of fftpack. I'm guessing its a 2.3/2.4 difference. Ted From sebastian.beca at gmail.com Fri Jun 16 00:32:38 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 00:32:38 -0400 Subject: [Numpy-discussion] TEst post Message-ID: Test post. Something isn't working.... From cookedm at physics.mcmaster.ca Fri Jun 16 01:28:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 16 Jun 2006 01:28:40 -0400 Subject: [Numpy-discussion] deprecated function throwing readonly attribute In-Reply-To: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> References: <5B1B8428-52A0-4B1E-9FA5-25FFFC550C43@earthlink.net> Message-ID: <20060616052840.GA16044@arbutus.physics.mcmaster.ca> On Thu, Jun 15, 2006 at 09:39:58PM -0500, Ted Horst wrote: > The depreacted function in numpy.lib.utils is throwing a readonly > attribute exception in the latest svn (2627). This is on the Mac OSX > (10.4.6) using the builtin python (2.3.5) during the import of > fftpack. I'm guessing its a 2.3/2.4 difference. > > Ted Who gets the award for "breaks the build most often"? That'd be me! Sorry, I hardly ever test with 2.3. But, I fixed it (and found a generator that had snuck in :) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri Jun 16 01:54:39 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 16 Jun 2006 01:54:39 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <20060616055439.GB16044@arbutus.physics.mcmaster.ca> On Wed, Jun 14, 2006 at 11:46:27PM -0400, Sasha wrote: > On 6/14/06, David M. Cooke wrote: > > After working with them for a while, I'm going to go on record and say that I > > prefer the long names from Numeric and numarray (like linear_least_squares, > > inverse_real_fft, etc.), as opposed to the short names now used by default in > > numpy (lstsq, irefft, etc.). I know you can get the long names from > > numpy.dft.old, numpy.linalg.old, etc., but I think the long names are better > > defaults. > > > > I agree in spirit, but note that inverse_real_fft is still short for > inverse_real_fast_fourier_transform. Presumably, fft is a proper noun > in many people vocabularies, but so may be lstsq depending who you > ask. I say "FFT", but I don't say "lstsq". I can find "FFT" in the index of a book of algorithms, but not "lstsq" (unless it was a specific implementation). Those are my two guiding ideas for what makes a good name here. > I am playing devil's advocate here a little because personally, I > always recommend the following as a compromize: > > sinh = hyperbolic_sinus > ... > tanh(x) = sinh(x)/cosh(x) > > But the next question is where to put "sinh = hyperbolic_sinus": right > before the expression using sinh? at the top of the module (import > hyperbolic_sinus as sinh)? in the math library? If you pick the last > option, do you need hyperbolic_sinus to begin with? If you pick any > other option, how do you prevent others from writing sh = > hyperbolic_sinus instead of sinh? Pish. 
By the same reasoning, we don't need the number 2: we can write it as the successor of the successor of the additive identity :-) > > Also, Numeric and numarray compatibility is increased by using the long > > names: those two don't have the short ones. > > > > Fitting names into 6 characters when out of style decades ago. (I think > > MS-BASIC running under CP/M on my Rainbow 100 had a restriction like that!) > > > Short names are still popular in scientific programming: > . That's 11 years old. The web was only a few years old at that time! There's been much work done on what makes a good programming style (Steve McConnell's "Code Complete" for instance is a good start). > I am still +1 for keeping linear_least_squares and inverse_real_fft, > but not just because abreviations are bad as such - if an established > acronym such as fft exists we should be free to use it. Ok, in summary, I'm seeing a bunch of "yes, long names please", but only your devil's advocate stance for no (and +1 for real). I see that Travis fixed the real fft names back to 'irfft' and friends. So, concrete proposal time: - go back to the long names in numpy.linalg (linear_least_squares, eigenvalues, etc. -- those defined in numpy.linalg.old) - of the new names, I could see keeping 'det' and 'svd': those are commonly used, although maybe 'SVD' instead? - anybody got a better name than Heigenvalues? That H looks weird at the beginning. - for numpy.dft, use the old names again. I could probably be persuaded that 'rfft' is ok. 'hfft' for the Hermite FFT is right out. - numpy.random is other "old package replacement", but's fine (and better). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sebastian.beca at gmail.com Thu Jun 15 19:08:21 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Thu, 15 Jun 2006 19:08:21 -0400 Subject: [Numpy-discussion] distance matrix speed Message-ID: Hi, I'm working with NumPy/SciPy on some algorithms and i've run into some important speed differences wrt Matlab 7. I've narrowed the main speed problem down to the operation of finding the euclidean distance between two matrices that share one dimension rank (dist in Matlab): Python: def dtest(): A = random( [4,2]) B = random( [1000,2]) d = zeros([4, 1000], dtype='f') for i in range(4): for j in range(1000): d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) return d Matlab: A = rand( [4,2]) B = rand( [1000,2]) d = dist(A, B') Running both of these 100 times, I've found the python version to run between 10-20 times slower. My question is if there is a faster way to do this? Perhaps I'm not using the correct functions/structures? Or this is as good as it gets? Thanks on beforehand, Sebastian Beca Department of Computer Science Engineering University of Chile PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have ATLAS, BLAS and LAPACK all installed, but I havn't confirmed that. From michael.sorich at gmail.com Fri Jun 16 02:26:37 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 16 Jun 2006 15:56:37 +0930 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <16761e100606152326r1b99e525j868ea5d694fc8465@mail.gmail.com> Hi Sebastian, I am not sure if there is a function already defined in numpy, but something like this may be what you are after def distance(a1, a2): return sqrt(sum((a1[:,newaxis,:] - a2[newaxis,:,:])**2, axis=2)) The general idea is to avoid loops if you want the code to execute fast. I hope this helps. 
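Michael's broadcasting one-liner can be checked against the explicit double loop from the original post. A self-contained comparison (smaller B than in the thread for speed, and written against a current NumPy, so some spellings differ slightly from the 2006 code):

```python
import numpy as np

def dist_loops(A, B):
    # The explicit double loop from the original post.
    d = np.zeros((len(A), len(B)))
    for i in range(len(A)):
        for j in range(len(B)):
            d[i, j] = np.sqrt(np.sum((A[i] - B[j]) ** 2))
    return d

def dist_broadcast(A, B):
    # Michael's version: A[:,newaxis,:] - B[newaxis,:,:] has shape
    # (len(A), len(B), ndim), so the whole distance matrix is built
    # without any Python-level loops.
    diff = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

rng = np.random.default_rng(0)
A = rng.random((4, 2))
B = rng.random((50, 2))
assert np.allclose(dist_loops(A, B), dist_broadcast(A, B))
```

The two agree to floating-point precision; the broadcasting version is the one that closes the gap with Matlab's `dist`.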
Mike On 6/16/06, Sebastian Beca wrote: > Hi, > I'm working with NumPy/SciPy on some algorithms and i've run into some > important speed differences wrt Matlab 7. I've narrowed the main speed > problem down to the operation of finding the euclidean distance > between two matrices that share one dimension rank (dist in Matlab): > > Python: > def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > d = zeros([4, 1000], dtype='f') > for i in range(4): > for j in range(1000): > d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) > return d > > Matlab: > A = rand( [4,2]) > B = rand( [1000,2]) > d = dist(A, B') > > Running both of these 100 times, I've found the python version to run > between 10-20 times slower. My question is if there is a faster way to > do this? Perhaps I'm not using the correct functions/structures? Or > this is as good as it gets? > > Thanks on beforehand, > > Sebastian Beca > Department of Computer Science Engineering > University of Chile > > PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have > ATLAS, BLAS and LAPACK all installed, but I havn't confirmed that. > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From a.u.r.e.l.i.a.n at gmx.net Fri Jun 16 02:28:18 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Fri, 16 Jun 2006 08:28:18 +0200 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> Hi, def dtest(): A = random( [4,2])
B = random( [1000,2]) # drawback: memory usage temporarily doubled # solution see below d = A[:, newaxis, :] - B[newaxis, :, :] # written as 3 expressions for more clarity d = sqrt((d**2).sum(axis=2)) return d def dtest_lowmem(): A = random( [4,2]) B = random( [1000,2]) d = zeros([4, 1000], dtype='f') # stores result for i in range(len(A)): # the loop should not impose much loss in speed dtemp = A[i, newaxis, :] - B[:, :] dtemp = sqrt((dtemp**2).sum(axis=1)) d[i] = dtemp return d (both functions untested....) HTH, Johannes From konrad.hinsen at laposte.net Fri Jun 16 02:53:48 2006 From: konrad.hinsen at laposte.net (Konrad Hinsen) Date: Fri, 16 Jun 2006 08:53:48 +0200 Subject: [Numpy-discussion] Supporting both NumPy and Numeric versions of amodule References: Message-ID: <009c01c69111$9f05d930$0880fea9@CPQ18791205981> > We are using Python's distutils, and I'm trying to figure out if > there's a way in which I can have both distributions installed to one > package directory, and then the __init__.py file would try to figure > out which one to import on behalf of the user (i.e. it would try to > figure out if the user had already imported NumPy, and if so, import > the NumPy version of the module; otherwise, it will import the Numeric > version of the module). > > This is turning out to be a bigger pain than I expected, so I'm > turning to this group to see if anybody has a better idea, or should I > just give up and release these two distributions separately? I think that what you are aiming at can be done, but I'd rather not do it. Imagine a user who has both Numeric and NumPy installed, plus additional packages that use either one, without the user necessarily being aware of who imports what. For such a user, your package would appear to behave randomly, returning different array types depending on the order of imports of seemingly unrelated modules. 
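One concrete shape for the explicit selection Konrad goes on to suggest; everything here is hypothetical (the package layout and the `MYPKG_ARRAY` variable name are invented for illustration, not Mary's actual package):

```python
import os

def select_backend(name=None):
    # Pick the array backend explicitly by name; defaults to a
    # (hypothetical) MYPKG_ARRAY environment variable, then to numpy.
    # No guessing from what the user happens to have imported already.
    if name is None:
        name = os.environ.get('MYPKG_ARRAY', 'numpy')
    if name == 'numpy':
        import numpy as mod
    elif name == 'Numeric':
        import Numeric as mod  # only importable where Numeric is installed
    else:
        raise ImportError('unknown array backend: %r' % (name,))
    return mod

print(select_backend('numpy').__name__)  # -> numpy
```

A package `__init__.py` could call this once and re-export the chosen module, so the behaviour is deterministic and visible to the user.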
If you think it is useful to have both versions available at the same time, a better selection method would be the use of a suitable environment variable. Konrad. From david.douard at logilab.fr Fri Jun 16 03:53:37 2006 From: david.douard at logilab.fr (David Douard) Date: Fri, 16 Jun 2006 09:53:37 +0200 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> References: <200606160828.18346.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <20060616075337.GA1059@logilab.fr> Hi, On Fri, Jun 16, 2006 at 08:28:18AM +0200, Johannes Loehnert wrote: > Hi, > > def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > # drawback: memory usage temporarily doubled > # solution see below > d = A[:, newaxis, :] - B[newaxis, :, :] Unless I'm wrong, one can simplify a (very) little bit this line: d = A[:, newaxis, :] - B > # written as 3 expressions for more clarity > d = sqrt((d**2).sum(axis=2)) > return d > -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From svetosch at gmx.net Fri Jun 16 04:43:42 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 16 Jun 2006 10:43:42 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> Message-ID: <44926F3E.6090908@gmx.net> Alexander Belopolsky schrieb: > In my view it is more important that code is easy to read rather than > easy to write. Interactive users will disagree, but in programming you > write once and read/edit forever :).
The insight about this disagreement imho suggests a compromise (or call it a dual solution): Have verbose names, but also have good default abbreviations for those who prefer them. It would be unfortunate if numpy users were required to cook up their own abbreviations if they wanted to, because 1. it adds overhead, and 2. it would make other people's code more difficult to read. > > Again, there is no defense for abbreviating linear_least_squares > because it is unlikely to appear in an expression and waste valuable > horisontal space. not true imho; btw, I would suggest "ols" (ordinary least squares), which is in every textbook. Cheers, Sven
From sebastian.beca at gmail.com Fri Jun 16 00:36:45 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 00:36:45 -0400 Subject: [Numpy-discussion] Test post - ignore Message-ID: Please ignore if you recieve this. From pbdr at cmp.uea.ac.uk Fri Jun 16 05:20:18 2006 From: pbdr at cmp.uea.ac.uk (Pierre Barbier de Reuille) Date: Fri, 16 Jun 2006 10:20:18 +0100 Subject: [Numpy-discussion] ImportError while creating a Python module using NumPy Message-ID: <449277D2.9060904@cmp.uea.ac.uk> Hi, I have an extension library which I wanted to interface with NumPy ... So I added the import_array() and all the needed stuff so that it now compiles. However, when I load the library I obtain : ImportError: No module named core.multiarray I didn't find anything on the net about it, what could be the problem ? Thanks, Pierre From alexandre.fayolle at logilab.fr Fri Jun 16 08:11:52 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Fri, 16 Jun 2006 14:11:52 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <44926F3E.6090908@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> Message-ID: <20060616121152.GC32083@crater.logilab.fr> On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote: > > Again, there is no defense for abbreviating linear_least_squares > > because it is unlikely to appear in an expression and waste valuable > > horisontal space. > > not true imho; btw, I would suggest "ols" (ordinary least squares), > which is in every textbook. Please, keep the zen of python in mind : Explicit is better than implicit. -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations D?veloppement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From svetosch at gmx.net Fri Jun 16 08:48:58 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 16 Jun 2006 14:48:58 +0200 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <20060616121152.GC32083@crater.logilab.fr> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> Message-ID: <4492A8BA.1090103@gmx.net> Alexandre Fayolle schrieb: > On Fri, Jun 16, 2006 at 10:43:42AM +0200, Sven Schreiber wrote: >>> Again, there is no defense for abbreviating linear_least_squares >>> because it is unlikely to appear in an expression and waste valuable >>> horisontal space. >> not true imho; btw, I would suggest "ols" (ordinary least squares), >> which is in every textbook. > > Please, keep the zen of python in mind : Explicit is better than > implicit. > > True, but horizontal space *is* valuable (copied from above), and some of the suggested long names were a bit too long for my taste. Abbreviations will emerge anyway, the question is merely: Will numpy provide/recommend them (in addition to having long names maybe), or will it have to be done by somebody else, possibly resulting in many different sets of abbreviations for the same purpose. Thanks, Sven From tim.hochberg at cox.net Fri Jun 16 08:59:49 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 05:59:49 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: <4492AB45.6080204@cox.net> I don't have anything constructive to add at the moment, so I'll just throw out an unelucidated opinion: +1 for longish names. -1 for two sets of names. 
-tim From hyclak at math.ohiou.edu Thu Jun 15 13:45:38 2006 From: hyclak at math.ohiou.edu (Matt Hyclak) Date: Thu, 15 Jun 2006 13:45:38 -0400 Subject: [Numpy-discussion] Numpy svn not installing headers Message-ID: <20060615174537.GD29604@math.ohiou.edu> I was trying to build matplotlib after installing the latest svn version of numpy (r2426), and compilation bailed on missing headers. It seems that the headers from build/src.linux*/numpy/core/ are not properly being installed during setup.py's install phase to $PYTHON_SITE_LIB/site-packages/numpy/core/include/numpy Have I stumbled upon a bug, or do I need to do something other than "setup.py install"? The files that do make it in are: arrayobject.h arrayscalars.h ufuncobject.h The files that do not make it in are: config.h __multiarray_api.h __ufunc_api.h The compilation problem was that arrayobject.h includes both config.h and __multiarray_api.h, but the files were not in place. Thanks, Matt -- Matt Hyclak Department of Mathematics Department of Social Work Ohio University (740) 593-1263 From tim.hochberg at cox.net Fri Jun 16 09:17:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 06:17:53 -0700 Subject: [Numpy-discussion] distance matrix speed In-Reply-To: References: Message-ID: <4492AF81.804@cox.net> Sebastian Beca wrote: >Hi, >I'm working with NumPy/SciPy on some algorithms and i've run into some >important speed differences wrt Matlab 7. I've narrowed the main speed >problem down to the operation of finding the euclidean distance >between two matrices that share one dimension rank (dist in Matlab): > >Python: >def dtest(): > A = random( [4,2]) > B = random( [1000,2]) > > d = zeros([4, 1000], dtype='f') > for i in range(4): > for j in range(1000): > d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) ) > return d > >Matlab: > A = rand( [4,2]) > B = rand( [1000,2]) > d = dist(A, B') > >Running both of these 100 times, I've found the python version to run >between 10-20 times slower. 
My question is if there is a faster way to
>do this? Perhaps I'm not using the correct functions/structures? Or
>this is as good as it gets?
>

Here's one faster way.

from numpy import *
import timeit

A = random.random( [4,2])
B = random.random( [1000,2])

def d1():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        for j in range(1000):
            d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
    return d

def d2():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = hypot(xy[:,0], xy[:,1])
    return d

if __name__ == "__main__":
    t1 = timeit.Timer('d1()', 'from scratch import d1').timeit(100)
    t2 = timeit.Timer('d2()', 'from scratch import d2').timeit(100)
    print t1, t2, t1 / t2

In this case, d2 is 50x faster than d1 on my box. Making some extremely dubious assumptions about transitivity of measurements, that would imply that d2 is twice as fast as matlab. Oh, and I didn't actually test that the output is correct.... -tim

>Thanks on beforehand,
>
>Sebastian Beca
>Department of Computer Science Engineering
>University of Chile
>
>PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have
>ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
>
>
>_______________________________________________
>Numpy-discussion mailing list
>Numpy-discussion at lists.sourceforge.net
>https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From ndarray at mac.com Fri Jun 16 09:48:11 2006 From: ndarray at mac.com (Sasha) Date: Fri, 16 Jun 2006 09:48:11 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: On 6/16/06, Sven Schreiber wrote: > ....
> Abbreviations will emerge anyway, the question is merely: Will numpy > provide/recommend them (in addition to having long names maybe), or will > it have to be done by somebody else, possibly resulting in many > different sets of abbreviations for the same purpose. > This is a valid point. In my experience ad hoc abbreviations are more popular among scientists who are not used to writing large programs. They use numpy either interactively or write short throw-away scripts that are rarely reused. Programmers who write reusable code almost universally hate ad hoc abbreviations. (There are exceptions: .) If numpy is going to compete with MATLAB, we should not ignore the non-programmer user base. I like the idea of providing recommended abbreviations. There is a precedent for doing that: GNU command line utilities provide long/short alternatives for most options. Long options are recommended for use in scripts while short are indispensable at the command line. I would like to suggest the following guidelines:

1. Numpy should never invent abbreviations, but may reuse abbreviations used in the art.

2. When alternative names are made available, there should be one simple rule for reducing the long name to short. For example, use of acronyms may provide one such rule: singular_value_decomposition -> svd. Unfortunately that would mean linear_least_squares -> lls, not ols and conflict with rule #1 (rename lstsq -> ordinary_least_squares?).

The second guideline may be hard to follow, but it is very important. Without a rule like this, there will be confusion on whether linear_least_squares and lstsq are the same or just "similar".

From bsouthey at gmail.com Fri Jun 16 10:20:40 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 16 Jun 2006 09:20:40 -0500 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: Message-ID: Hi, Please run the exact same code in Matlab that you are running in NumPy.
Many of Matlab's functions are very highly optimized, so these are provided as binary functions. I think that you are running into this, so you are not doing the correct comparison. So the ways around it are to write an extension in C or Fortran, use Psyco etc. if possible, and vectorize your algorithm to remove the loops (especially the inner one). Bruce

On 6/14/06, Sebastian Beca wrote:
> Hi,
> I'm working with NumPy/SciPy on some algorithms and i've run into some
> important speed differences wrt Matlab 7. I've narrowed the main speed
> problem down to the operation of finding the euclidean distance
> between two matrices that share one dimension rank (dist in Matlab):
>
> Python:
> def dtest():
>     A = random( [4,2])
>     B = random( [1000,2])
>
>     d = zeros([4, 1000], dtype='f')
>     for i in range(4):
>         for j in range(1000):
>             d[i, j] = sqrt( sum( (A[i] - B[j])**2 ) )
>     return d
>
> Matlab:
>     A = rand( [4,2])
>     B = rand( [1000,2])
>     d = dist(A, B')
>
> Running both of these 100 times, I've found the python version to run
> between 10-20 times slower. My question is if there is a faster way to
> do this? Perhaps I'm not using the correct functions/structures? Or
> this is as good as it gets?
>
> Thanks on beforehand,
>
> Sebastian Beca
> Department of Computer Science Engineering
> University of Chile
>
> PD: I'm using NumPy 0.9.8, SciPy 0.4.8. I also understand I have
> ATLAS, BLAS and LAPACK all installed, but I haven't confirmed that.
> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Fri Jun 16 11:37:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 16 Jun 2006 11:37:10 -0400 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: <4492A8BA.1090103@gmx.net> References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net><20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: On Fri, 16 Jun 2006, Sven Schreiber apparently wrote: > Abbreviations will emerge anyway, the question is merely: > Will numpy provide/recommend them (in addition to having > long names maybe), or will it have to be done by somebody > else, possibly resulting in many different sets of > abbreviations for the same purpose. Agreed. Cheers, Alan Isaac From tim.hochberg at cox.net Fri Jun 16 12:23:10 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 09:23:10 -0700 Subject: [Numpy-discussion] Don't like the short names like lstsq and irefft In-Reply-To: References: <20060614231325.30c89444@arbutus.physics.mcmaster.ca> <44926F3E.6090908@gmx.net> <20060616121152.GC32083@crater.logilab.fr> <4492A8BA.1090103@gmx.net> Message-ID: <4492DAEE.408@cox.net> Sasha wrote: >On 6/16/06, Sven Schreiber wrote: > > >>.... >>Abbreviations will emerge anyway, the question is merely: Will numpy >>provide/recommend them (in addition to having long names maybe), or will >>it have to be done by somebody else, possibly resulting in many >>different sets of abbreviations for the same purpose. >> >> >> >This is a valid point. In my experience ad hoc abbreviations are more >popular among scientists who are not used to writing large programs. >They use numpy either interactively or write short throw-away scripts >that are rarely reused. 
Programmers who write reusable code almost >universally hate ad hoc abbreviations. (There are exceptions: >.) > >If numpy is going to compete with MATLAB, we should not ignore >non-programmer user base. I like the idea of providing recommended >abbreviations. There is a precedent for doing that: GNU command line >utilities provide long/short alternatives for most options. Long >options are recommended for use in scripts while short are >indispensable at the command line. > > Unless the abbreviations are obvious, adding a second set of names will make it more difficult to read others' code. In particular, it will make it harder to answer questions on the newsgroup. Particularly since I suspect that most of the more experienced users will end up using long names while the new users coming from MATLAB or whatever will use the shorter names. >I would like to suggest the following guidelines: > >1. Numpy should never invent abbreviations, but may reuse >abbreviations used in the art. > > Let me add a couple of cents here. There are widespread terms of the art and there are terms of art that are specific to a certain field. At the top level, I would like to see only widespread terms of the art. Thus, 'cos', 'sin', 'exp', etc are perfectly fine. However, something like 'dft' is not so good. Perversely, I consider 'fft' a widespread term of the art, but the more general 'dft' is somehow not. These narrower terms would be perfectly fine if segregated into appropriate packages. For example, I would consider it more sensible to have the current package 'dft' renamed to 'fourier' and the routine 'fft' renamed to 'dft' (since that's what it is). As another example, linear_algebra.svd is perfectly clear, but numpy.svd would be opaque. >2. When alternative names are made available, there should be one >simple rule for reducing the long name to short. For example, use of >acronyms may provide one such rule: singular_value_decomposition -> >svd.
Svd is already a term of the art I believe, so linalg.svd seems like a fine name for singular_value_decomposition. > Unfortunately that would mean linear_least_squares -> lls, not >ols and conflict with rule #1 (rename lstsq -> >ordinary_least_squares?). > > Before you consider this I suggest that you google 'linear algebra lls' and 'linear algebra ols'. The results may surprise you... While you're at it, google 'linear algebra svd'. >The second guideline may be hard to follow, but it is very important. >Without a rule like this, there will be confusion on whether >linear_least_squares and lsltsq are the same or just "similar". > > Can I just reiterate a hearty blech! for having two sets of names. The horizontal space argument is mostly bogus in my opinion -- functions that tend to be used in complicated expressions already have short, widely used abbreviations that we can steal. The typing argument is also mostly bogus: a decent editor will do tab completion (I use a pretty much no frills editor, SciTe, and even it does tab completion) and there's IPython if you want tab completion in interactive mode. -tim

From Glen.Mabey at swri.org Fri Jun 16 12:23:58 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Fri, 16 Jun 2006 11:23:58 -0500 Subject: [Numpy-discussion] Segfault with simplest operation on extension module using numpy Message-ID: <20060616162357.GB7192@bams.swri.edu> Hello, I am writing a python extension module to create an interface to some C code, and am using numpy array as the object type for transferring data back and forth. Using either the numpy svn from yesterday, or 0.9.6 or 0.9.8, with or without optimized ATLAS installation, I get a segfault at what should be the most straightforward of all operations: PyArray_Check() on the input argument. That is, when I run: import DFALG DFALG.bsvmdf( 3 ) after compiling the below code, it always segfaults, regardless of the type of the argument given.
Just as a sanity check (it's been a little while since I have written an extension module for Python) I changed the line containing PyArray_Check() to one that calls PyInt_Check(), which does perform exactly how I would expect it to. Is there something I'm missing? Thank you! Glen Mabey

#include <Python.h>
#include <arrayobject.h>

static PyObject * DFALG_bsvmdf(PyObject *self, PyObject *args);

static PyMethodDef DFALGMethods[] = {
    {"bsvmdf", DFALG_bsvmdf, METH_VARARGS, "This should be a docstring, really."},
    {NULL, NULL, 0, NULL} /* Sentinel */
};

PyMODINIT_FUNC
initDFALG(void)
{
    (void) Py_InitModule("DFALG", DFALGMethods);
}

static PyObject *
DFALG_bsvmdf(PyObject *self, PyObject *args)
{
    PyObject *inputarray;

    //printf( "Hello, Python!" );
    //Py_INCREF(Py_None);
    //return Py_None;
    if ( !PyArg_ParseTuple( args, "O", &inputarray ) )
        return NULL;
    if ( PyArray_Check( inputarray ) ) {
    //if ( PyInt_Check( inputarray ) ) {
        printf( "DFALG_bsvmdf() was passed a PyArray.()\n" );
    } else {
        printf( "DFALG_bsvmdf() was NOT passed a PyArray.()\n" );
    }
    return Py_BuildValue( "ss", "Thing 1", "Thing 2" );
}

From robert.kern at gmail.com Fri Jun 16 12:44:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jun 2006 11:44:53 -0500 Subject: [Numpy-discussion] Segfault with simplest operation on extension module using numpy In-Reply-To: <20060616162357.GB7192@bams.swri.edu> References: <20060616162357.GB7192@bams.swri.edu> Message-ID: Glen W. Mabey wrote:
> That is, when I run:
> import DFALG
> DFALG.bsvmdf( 3 )
> after compiling the below code, it always segfaults, regardless of the
> type of the argument given. Just as a sanity check (it's been a little
> while since I have written an extension module for Python) I changed the
> line containing PyArray_Check() to one that calls PyInt_Check(), which
> does perform exactly how I would expect it to.
>
> Is there something I'm missing?

Yes!
> #include <Python.h>
> #include <arrayobject.h>

This should be "numpy/arrayobject.h" for consistency with every other numpy-using extension.

> PyMODINIT_FUNC
> initDFALG(void)
> {
>     (void) Py_InitModule("DFALG", DFALGMethods);
> }

You need to call import_array() in this function. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From Chris.Barker at noaa.gov Fri Jun 16 13:05:33 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 16 Jun 2006 10:05:33 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: Message-ID: <4492E4DD.3010400@noaa.gov> Bruce Southey wrote:
> Please run the exact same code in Matlab that you are running in
> NumPy. Many of Matlab's functions are very highly optimized so these are
> provided as binary functions. I think that you are running into this
> so you are not doing the correct comparison

He is doing the correct comparison: if Matlab has some built-in compiled utility functions that numpy doesn't -- it really is faster! It looks like others' suggestions show that well-written numpy code is plenty fast, however. One more suggestion I don't think I've seen: numpy provides a built-in compiled utility function: hypot()

>>> x = N.arange(5)
>>> y = N.arange(5)
>>> N.hypot(x,y)
array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425])
>>> N.sqrt(x**2 + y**2)
array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425])

Timings:

>>> timeit.Timer('N.sqrt(x**2 + y**2)','import numpy as N; x = N.arange(5000); y = N.arange(5000)').timeit(100)
0.49785208702087402
>>> timeit.Timer('N.hypot(x,y)','import numpy as N; x = N.arange(5000); y = N.arange(5000)').timeit(100)
0.081479072570800781

A factor of 6 improvement. -Chris -- Christopher Barker, Ph.D.
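The hypot() trick still leaves a Python loop over the rows of A in the distance-matrix case discussed earlier in the thread. For completeness, the whole matrix can be computed in a single loop-free expression with broadcasting. This is a sketch, not code from the thread: the shapes follow the earlier example, and it uses the present-day `import numpy as np` spelling rather than the 2006 idiom.

```python
import numpy as np

# Hypothetical sample data matching the shapes in the thread:
# A is (4, 2), B is (1000, 2).
A = np.random.random((4, 2))
B = np.random.random((1000, 2))

# A[:, None, :] has shape (4, 1, 2) and B[None, :, :] has shape
# (1, 1000, 2); subtracting broadcasts them to (4, 1000, 2).
diff = A[:, None, :] - B[None, :, :]

# Sum the squared coordinate differences along the last axis and take
# the square root, giving the (4, 1000) euclidean distance matrix.
d = np.sqrt((diff ** 2).sum(axis=-1))
```

With no Python-level loop at all, this generalizes to any number of coordinate columns, at the cost of materializing the intermediate (4, 1000, 2) array.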
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tim.hochberg at cox.net Fri Jun 16 13:48:49 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 16 Jun 2006 10:48:49 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4492E4DD.3010400@noaa.gov> References: <4492E4DD.3010400@noaa.gov> Message-ID: <4492EF01.10307@cox.net> Christopher Barker wrote: >Bruce Southey wrote: > > >>Please run the exact same code in Matlab that you are running in >>NumPy. Many of Matlab functions are very highly optimized so these are >>provided as binary functions. I think that you are running into this >>so you are not doing the correct comparison >> >> > >He is doing the correct comparison: if Matlab has some built-in compiled >utility functions that numpy doesn't -- it really is faster! > >It looks like other's suggestions show that well written numpy code is >plenty fast, however. > >One more suggestion I don't think I've seen: numpy provides a built-in >compiled utility function: hypot() > > > >>> x = N.arange(5) > >>> y = N.arange(5) > >>> N.hypot(x,y) >array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) > >>> N.sqrt(x**2 + y**2) >array([ 0. , 1.41421356, 2.82842712, 4.24264069, 5.65685425]) > >Timings: > >>> timeit.Timer('N.sqrt(x**2 + y**2)','import numpy as N; x = >N.arange(5000); y = N.arange(5000)').timeit(100) >0.49785208702087402 > >>> timeit.Timer('N.hypot(x,y)','import numpy as N; x = N.arange(5000); >y = N.arange(5000)').timeit(100) >0.081479072570800781 > >A factor of 6 improvement. > > Here's another thing to note: much of the time distance**2 works as well as distance (for instance if you are looking for the nearest point). 
If you're in that situation, computing the square of the distance is much cheaper:

def d_2():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = xy[:,0]**2 + xy[:,1]**2
    return d

This is something like 250 times as fast as the naive Python solution; another five times faster than the fastest distance-computing version that I could come up with (using hypot). -tim

From perrot at shfj.cea.fr Fri Jun 16 14:01:31 2006 From: perrot at shfj.cea.fr (Matthieu Perrot) Date: Fri, 16 Jun 2006 20:01:31 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). Message-ID: <200606162001.31342.perrot@shfj.cea.fr>

hi, I need to handle strings shaped by a numpy array whose data are owned by a C structure. There are several possible answers to this problem:

1) use a numpy array of strings (PyArray_STRING) and so a (char *) object in C. It works as is, but you need to define a maximum size for your strings because your set of strings is contiguous in memory.

2) use a numpy array of objects (PyArray_OBJECT), and wrap each "C string" with a python object, using PyStringObject for example. Then our problem is that there are as many wrappers as data elements, and I believe data can't be shared when you create a PyStringObject from a (char *), via PyString_AsStringAndSize for example.

Now, I will expose a third way, which allows you to use strings that are not size-limited (as they are in solution 1) and to avoid creating wrappers before you really need them (on demand/access). First, for convenience, we will use the (char **) type in C to build an array of string pointers (as was suggested in solution 2). Now, the game is to make it work with the numpy API, and use it in python through a numpy array. Basically, I want a very similar behaviour to arrays of PyObject, where data are not contiguous, only their addresses are.
So, the idea is to create a new array descr based on PyArray_OBJECT and change its getitem/setitem functions to deal with my own data. I expected numpy to work with this convenient array descr, but it fails because PyArray_Scalar (arrayobject.c) doesn't call the descriptor's getitem function (in the PyArray_OBJECT case) but instead inlines 2 lines which have been copy/pasted from the OBJECT_getitem function. Here is my small patch: replace (arrayobject.c:983-984):

Py_INCREF(*((PyObject **)data));
return *((PyObject **)data);

by:

return descr->f->getitem(data, base);

I played a lot with my new numpy array after this change and noticed that a lot of uses work:

>>> a = myArray()
array([["plop", "blups"]], dtype=object)
>>> print a
[["plop", "blups"]]
>>> a[0, 0] = "youpiiii"
>>> print a
[["youpiiii", "blups"]]
>>> s = a[0, 0]
>>> print s
"youpiiii"
>>> b = a[:] #data was shared with 'a' (similar behaviour to an array of objects)
>>>
>>> numpy.zeros(1, dtype = a.dtype)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: fields with object members not yet supported.
>>> numpy.array(a)
segmentation fault

Finally, I found a forgotten check in multiarraymodule.c (_array_fromobject function); after label finish (line 4661), add:

if (!ret) {
    Py_INCREF(Py_None);
    return Py_None;
}

After this change, I obtained (when I was not in interactive mode):

# numpy.array(a)
Exception exceptions.TypeError: 'fields with object members not yet supported.' in 'garbage collection' ignored
Fatal Python error: unexpected exception during garbage collection
Abandon

But strangely, when I was in interactive mode, one time it failed and raised an exception (good behaviour), and the next time it only returned None.

>>> numpy.array(myArray())
TypeError: fields with object members not yet supported.
>>> a=numpy.array(myArray()); print a
None

A bug remains (I will explore it later), but it is better than before.
This mail shows how to map a (char **) onto a numpy array, but it's easy to use the same idea to handle any type (your_object **). I'll be pleased to discuss any comments on the proposed solution or any others you can find. -- Matthieu Perrot Tel: +33 1 69 86 78 21 CEA - SHFJ Fax: +33 1 69 86 77 86 4, place du General Leclerc 91401 Orsay Cedex France

From fullung at gmail.com Fri Jun 16 14:04:38 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 16 Jun 2006 20:04:38 +0200 Subject: [Numpy-discussion] Segfault with simplest operation on extensionmodule using numpy In-Reply-To: <20060616162357.GB7192@bams.swri.edu> Message-ID: <00f501c6916f$559c6cb0$01eaa8c0@dsp.sun.ac.za> Hey Glen http://www.scipy.org/Cookbook/C_Extensions covers most of the boilerplate you need to get started with extension modules. Regards, Albert > -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Glen W. Mabey > Sent: 16 June 2006 18:24 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Segfault with simplest operation on > extensionmodule using numpy > > Hello, > > I am writing a python extension module to create an interface to some C > code, and am using numpy array as the object type for transferring data > back and forth.
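Putting Robert Kern's two fixes together with the cookbook boilerplate, a minimal corrected skeleton of the module init might look like the following. This is a sketch against the Python 2 / early-NumPy C API used in the thread (Py_InitModule and an old-style init function), not a drop-in replacement for Glen's file:

```c
#include <Python.h>
#include <numpy/arrayobject.h>  /* the include path Robert Kern suggests */

static PyObject *
DFALG_bsvmdf(PyObject *self, PyObject *args)
{
    PyObject *inputarray;
    if (!PyArg_ParseTuple(args, "O", &inputarray))
        return NULL;
    /* Safe now: the PyArray_API function table was filled in by
       import_array() below. */
    if (PyArray_Check(inputarray))
        printf("DFALG_bsvmdf() was passed a PyArray.\n");
    else
        printf("DFALG_bsvmdf() was NOT passed a PyArray.\n");
    Py_RETURN_NONE;
}

static PyMethodDef DFALGMethods[] = {
    {"bsvmdf", DFALG_bsvmdf, METH_VARARGS, "Report whether the argument is an ndarray."},
    {NULL, NULL, 0, NULL} /* Sentinel */
};

PyMODINIT_FUNC
initDFALG(void)
{
    (void) Py_InitModule("DFALG", DFALGMethods);
    /* Without this call, PyArray_Check dereferences the uninitialized
       PyArray_API pointer table -- hence the segfault. */
    import_array();
}
```

The segfault Glen saw is exactly what happens when any PyArray_* call runs before import_array(): the macros all go through a per-module API table that import_array() fills in at module load time.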
From theller at python.net Fri Jun 16 15:25:52 2006 From: theller at python.net (Thomas Heller) Date: Fri, 16 Jun 2006 21:25:52 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: Robert Kern wrote: > Francesc Altet wrote: >> A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure: >> >>>Just out of curiosity: >>> >>>In [1]: x = N.array([]) >>> >>>In [2]: x.__array_data__ >>>Out[2]: ('0x01C23EE0', False) >>> >>>Is there a reason why the __array_data__ tuple stores the address as a hex >>>string? I would guess that this representation of the address isn't the >>>most useful one for most applications. >> >> Good point. I hit this before and forgot to send a message about this. I agree >> that a integer would be better. Although, now that I think about this, I >> suppose that the issue should be the difference of representation of longs in >> 32-bit and 64-bit platforms, isn't it? > > Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's > signedness. Please don't use Python ints to encode pointers. Holding arbitrary > pointers is the job of CObjects. > (Sorry, I'm late in reading this thread. I didn't know there were so many numeric groups) Python has functions to convert pointers to int/long and vice versa: PyInt_FromVoidPtr() and PyInt_AsVoidPtr(). ctypes uses them, ctypes also represents addresses as ints/longs. Thomas From faltet at carabos.com Fri Jun 16 15:35:24 2006 From: faltet at carabos.com (Francesc Altet) Date: Fri, 16 Jun 2006 21:35:24 +0200 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> Message-ID: <200606162135.24936.faltet@carabos.com> A Divendres 16 Juny 2006 21:25, Thomas Heller va escriure: > Robert Kern wrote: > > Like how Win64 uses 32-bit longs and 64-bit pointers. 
And then there's > > signedness. Please don't use Python ints to encode pointers. Holding > > arbitrary pointers is the job of CObjects. > > (Sorry, I'm late in reading this thread. I didn't know there were so many > numeric groups) > > Python has functions to convert pointers to int/long and vice versa: > PyInt_FromVoidPtr() and PyInt_AsVoidPtr(). ctypes uses them, ctypes also > represents addresses as ints/longs. Very interesting. So, may I suggest using this capability to represent addresses? I think this would simplify things (especially as it avoids ascii/pointer conversions, which are ugly to my mind). Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"

From theller at python.net Fri Jun 16 15:49:33 2006 From: theller at python.net (Thomas Heller) Date: Fri, 16 Jun 2006 21:49:33 +0200 Subject: [Numpy-discussion] Array Interface In-Reply-To: <4488A337.9000407@ee.byu.edu> References: <4488A337.9000407@ee.byu.edu> Message-ID: Travis Oliphant wrote: > Thanks for the continuing discussion on the array interface. > > I'm thinking about this right now, because I just spent several hours > trying to figure out if it is possible to add additional > "object-behavior" pointers to a type by creating a metatype that > sub-types from the Python PyType_Type (this is the object that has all > the function pointers to implement mapping behavior, buffer behavior, > etc.). I found some emails from 2002 where Guido indicates that it is > not possible to sub-type the PyType_Type object and add new function > pointers at the end without major re-writing of Python. Yes, but I remember an email from Christian Tismer saying that it *is* possible. Although I've never tried that.
What I do in ctypes is to replace the type object's (the subclass of PyType_Type) dictionary with a subclass of PyDict_Type (in ctypes it is named StgDictObject - storage dict object, a very poor name I know) that has additional structure fields describing the C data type it represents. Thomas

From esheldon at kicp.uchicago.edu Fri Jun 16 17:10:43 2006 From: esheldon at kicp.uchicago.edu (Erin Sheldon) Date: Fri, 16 Jun 2006 16:10:43 -0500 Subject: [Numpy-discussion] Recarray attributes writeable Message-ID: <20060616161043.A29191@cfcp.uchicago.edu> Hi everyone - (this is my fourth try in the last 24 hours to post this. Apparently, the gmail smtp server is in the blacklist!! this is bad). Anyway - Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed for read only; one cannot alter the data through them. Yet they are writeable:

>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>> tr.field('ra')[:] = 0.0
>>> tr.ra
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> tr.ra = 3
>>> tr.ra
3
>>> tr.field('ra')
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts? Erin

From erin.sheldon at gmail.com Fri Jun 16 11:27:54 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 16 Jun 2006 11:27:54 -0400 Subject: [Numpy-discussion] Recarray attributes writeable (3rd try) Message-ID: <331116dc0606160827o4f529164y996395cc4d0d20ee@mail.gmail.com> Hi everyone - (this is my third try in the last 24 hours to post this. For some reason it hasn't been making it through) Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed for read only; one cannot alter the data through them.
Yet they are writeable:

>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>> tr.field('ra')[:] = 0.0
>>> tr.ra
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> tr.ra = 3
>>> tr.ra
3
>>> tr.field('ra')
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts? Erin

From robert.kern at gmail.com Fri Jun 16 17:33:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Jun 2006 16:33:05 -0500 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <20060616161043.A29191@cfcp.uchicago.edu> References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: Erin Sheldon wrote: > Hi everyone - > > (this is my fourth try in the last 24 hours to post this. > Apparently, the gmail smtp server is in the blacklist!! > this is bad). I doubt it since that's where my email goes through. Sourceforge is frequently slow, so please have patience if your mail does not show up. I can see your 3rd try now. Possibly the others will be showing up, too.
However, *this* is a real email to numpy-discussion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Fri Jun 16 17:44:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Jun 2006 15:44:33 -0600 Subject: [Numpy-discussion] Array Protocol change for Python 2.6 In-Reply-To: References: <001c01c68baa$b0ba5320$01eaa8c0@dsp.sun.ac.za> <200606091206.00322.faltet@carabos.com> Message-ID: <44932641.80005@ieee.org> Thomas Heller wrote: > Robert Kern wrote: > >> Francesc Altet wrote: >> >>> A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure: >>> >>> >>>> Just out of curiosity: >>>> >>>> In [1]: x = N.array([]) >>>> >>>> In [2]: x.__array_data__ >>>> Out[2]: ('0x01C23EE0', False) >>>> >>>> Is there a reason why the __array_data__ tuple stores the address as a hex >>>> string? I would guess that this representation of the address isn't the >>>> most useful one for most applications. >>>> >>> Good point. I hit this before and forgot to send a message about this. I agree >>> that a integer would be better. Although, now that I think about this, I >>> suppose that the issue should be the difference of representation of longs in >>> 32-bit and 64-bit platforms, isn't it? >>> >> Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's >> signedness. Please don't use Python ints to encode pointers. Holding arbitrary >> pointers is the job of CObjects. >> >> > > (Sorry, I'm late in reading this thread. I didn't know there were so many > numeric groups) > > Python has functions to convert pointers to int/long and vice versa: PyInt_FromVoidPtr() > and PyInt_AsVoidPtr(). ctypes uses them, ctypes also represents addresses as ints/longs. > The function calls are PyLong_FromVoidPtr() and PyLong_AsVoidPtr() though, right? 
I'm happy representing pointers as Python integers (Python long integers on curious platforms like Win64). -Travis

From strawman at astraw.com Fri Jun 16 17:46:19 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 16 Jun 2006 14:46:19 -0700 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <20060616161043.A29191@cfcp.uchicago.edu> References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: <449326AB.4000306@astraw.com> Erin Sheldon wrote:
>Anyway - Recarrays have convenience attributes such that
>fields may be accessed through "." in addition to
>the "field()" method. These attributes are designed for
>read only; one cannot alter the data through them.
>Yet they are writeable:
>
>>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>>>tr.field('ra')[:] = 0.0
>>>>tr.ra
>array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>>>tr.ra = 3
>>>>tr.ra
>3
>>>>tr.field('ra')
>array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>
>I feel this should raise an exception, just as with trying to write
>to the "size" attribute. Any thoughts?
>

I have not used recarrays much, so take this with the appropriate measure of salt. I'd prefer to drop the entire pseudo-attribute thing completely before it gets entrenched. (Perhaps it's too late.) I've used a similar system in pytables; although it is convenient in the short term and for interactive use, there are corner cases that result in long-term headaches. I think you point out one such issue for recarrays. There will be more.
For example:

In [1]: import numpy
In [2]: tr = numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
In [3]: tr.field('ra')[:] = 0.0
In [4]: tr.ra
Out[4]: array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [5]: del tr.ra
---------------------------------------------------------------------------
exceptions.AttributeError    Traceback (most recent call last)
/home/astraw/
AttributeError: 'recarray' object has no attribute 'ra'

The above seems completely counterintuitive -- an attribute error for something I just accessed? Yes, I know what's going on, but it certainly makes life more confusing than it need be, IMO. Another issue is that it is possible to have field names that are not valid Python identifier strings.

From erin.sheldon at gmail.com Fri Jun 16 18:18:25 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 16 Jun 2006 18:18:25 -0400 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: References: <20060616161043.A29191@cfcp.uchicago.edu> Message-ID: <331116dc0606161518h6f2e056cxb58a98479ab6c06f@mail.gmail.com>

The initial bounces actually say, and I quote:

Technical details of temporary failure:
TEMP_FAILURE: SMTP Error (state 8): 550-"rejected because your SMTP server, 66.249.92.170, is in the Spamcop RBL. 550 See http://www.spamcop.net/bl.shtml for more information."

On 6/16/06, Robert Kern wrote:
> Erin Sheldon wrote:
> > Hi everyone -
> >
> > (this is my fourth try in the last 24 hours to post this.
> > Apparently, the gmail smtp server is in the blacklist!!
> > this is bad).
>
> I doubt it since that's where my email goes through. Sourceforge is frequently
> slow, so please have patience if your mail does not show up. I can see your 3rd
> try now. Possibly the others will be showing up, too.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> that is made terrible by our own mad attempt to interpret it as though it had
> an underlying truth."
> -- Umberto Eco
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From jchock at keck.hawaii.edu Fri Jun 16 18:37:28 2006 From: jchock at keck.hawaii.edu (Jon Chock) Date: Fri, 16 Jun 2006 12:37:28 -1000 Subject: [Numpy-discussion] installing numpy and removing numeric-24. Message-ID: <2E92CD375D420941846C591D3A278A0DB6D4AD@ws03.keck.hawaii.edu>

Hi folks! I'd like to install numpy and remove Numeric; are there instructions to remove numeric-24.1? Thanks. JC

From jchock at keck.hawaii.edu Fri Jun 16 18:39:27 2006 From: jchock at keck.hawaii.edu (Jon Chock) Date: Fri, 16 Jun 2006 12:39:27 -1000 Subject: [Numpy-discussion] installing numpy and removing numeric-24.1 Message-ID: <2E92CD375D420941846C591D3A278A0DB6D4AE@ws03.keck.hawaii.edu>

Sorry, I forgot to mention that I'm working on a Solaris system and installed it in /usr/local/gcc3xbuilt instead of /usr/local. Thanks. JC

From oliphant.travis at ieee.org Fri Jun 16 19:46:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Jun 2006 17:46:40 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 Message-ID: <449342E0.5090004@ieee.org>

I just updated the array interface page to emphasize that we now have version 3. NumPy still supports objects that expose (the C side of) version 2 of the array interface, though.

The new interface is basically the same except (mostly) for aesthetics. The differences are listed at the bottom of http://numeric.scipy.org/array_interface.html

There is talk of ctypes supporting the new interface, which is a worthy development. Please encourage that if you can.

Please voice concerns now if you have any.
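Travis's announcement concerns the C-level struct, but the Python-side dict carries the same version number and can be inspected directly. A quick sketch against a current NumPy (the example array is mine):

```python
import numpy as np

a = np.arange(6, dtype=np.int32).reshape(2, 3)
ai = a.__array_interface__   # the Python-side dict form of the interface

# The protocol reports its own version, the one announced here.
assert ai['version'] == 3
assert ai['shape'] == (2, 3)

# 'data' is (address, read-only flag); the address is a plain integer,
# not the hex string Albert asked about earlier in the thread.
data_ptr, read_only = ai['data']
assert isinstance(data_ptr, int)
assert not read_only
```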
-Travis

From fperez.net at gmail.com Fri Jun 16 19:54:17 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 16 Jun 2006 17:54:17 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID:

On 6/16/06, Travis Oliphant wrote:
> There is talk of ctypes supporting the new interface which is a worthy
> development. Please encourage that if you can.

That would certainly be excellent, especially given how ctypes is slated to be officially part of Python 2.5. I think it would greatly improve the interoperability landscape for Python if the out-of-the-box toolset had proper access to numpy arrays.

Cheers, f

From strawman at astraw.com Fri Jun 16 21:10:49 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 16 Jun 2006 18:10:49 -0700 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <44935699.1040104@astraw.com>

I noticed in your note labeled 'June 16, 2006' that you refer to the "desc" field. However, in the struct description above, there is only a field named "descr". Also, I suggest that you update the information in the comments of the descr field of the structure description to mention that inter.descr is a reference to a tuple equal to ("PyArrayInterface Version #",new_tuple_with_array_interface). What is currently there seems out of date given the new information. Finally, in the comment section describing this field, I strongly suggest noting that this field is present *if and only if* the ARR_HAS_DESCR flag is set. It will be clearer if it's there rather than in the text underneath. Is the "#" in the string meant to be replaced with "3"? If so, why not write 3? Also, in your note, you should explain whether "dummy" (renamed from "version") should still be checked as a sanity check or whether it should now be ignored.
I think we could call the field "two" and keep the sanity check for backwards compatibility. I agree it is confusing to have two different version numbers in the same struct, so I don't mind having the official name of the field being something other than "version", but if we keep it as a required sanity check (in which case it probably shouldn't be named "dummy"), the whole thing will remain backwards compatible with all current code.

Anyhow, I'm very excited about this array interface, and I await the outcome of the Summer of Code project on the 'micro-array' implementation based on it!

Cheers!
Andrew

Travis Oliphant wrote:
>I just updated the array interface page to emphasize we now have version
>3. NumPy still supports objects that expose (the C-side) of version 2
>of the array interface, though.
>
>The new interface is basically the same except (mostly) for asthetics:
>The differences are listed at the bottom of
>
>http://numeric.scipy.org/array_interface.html
>
>There is talk of ctypes supporting the new interface which is a worthy
>development. Please encourage that if you can.
>
>Please voice concerns now if you have any.
>
>-Travis

From sebastian.beca at gmail.com Fri Jun 16 19:01:44 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 19:01:44 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4492EF01.10307@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID:

Thanks! Avoiding the inner loop is MUCH faster (~20-300 times than the original). Nevertheless I don't think I can use hypot as it only works for two dimensions. The general problem I have is:

A = random( [C, K] )
B = random( [N, K] )
C ~ 1-10
N ~ Large (thousands, millions.. i.e.
my dataset)
K ~ 2-100 (dimensions of my problem, i.e. not fixed a priori.)

I adapted your proposed version to this for K dimensions:

def d4():
    d = zeros([4, 1000], dtype=float)
    for i in range(4):
        xy = A[i] - B
        d[i] = sqrt( sum(xy**2, axis=1) )
    return d

Maybe there's another alternative to d4? Thanks again,

Sebastian.

> def d_2():
>     d = zeros([4, 10000], dtype=float)
>     for i in range(4):
>         xy = A[i] - B
>         d[i] = xy[:,0]**2 + xy[:,1]**2
>     return d
>
> This is something like 250 times as fast as the naive Python solution;
> another five times faster than the fastest distance computing version
> that I could come up with (using hypot).
>
> -tim

From sebastian.beca at gmail.com Fri Jun 16 19:04:00 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Fri, 16 Jun 2006 19:04:00 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID:

Please replace:

C = 4
N = 1000
> d = zeros([C, N], dtype=float)

BK.

From a.u.r.e.l.i.a.n at gmx.net Sat Jun 17 02:47:24 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Sat, 17 Jun 2006 08:47:24 +0200 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> Message-ID: <4493A57C.1030904@gmx.net>

Hi,

> def d4():
>     d = zeros([4, 1000], dtype=float)
>     for i in range(4):
>         xy = A[i] - B
>         d[i] = sqrt( sum(xy**2, axis=1) )
>     return d
>
> Maybe there's another alternative to d4?
> Thanks again,

I think this is the fastest you can get. Maybe it would be nicer to use the .sum() method instead of the sum function, but that is just my personal opinion. I am curious how this compares to the matlab version.
:)

Johannes

From erin.sheldon at gmail.com Thu Jun 15 13:37:16 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Thu, 15 Jun 2006 13:37:16 -0400 Subject: [Numpy-discussion] Recarray attributes writeable Message-ID: <331116dc0606151037x2023b0beu9c4c995f40b34890@mail.gmail.com>

Hi everyone -

Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed to be read-only; one cannot alter the data through them. Yet they are writeable:

>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>> tr.field('ra')[:] = 0.0
>>> tr.ra
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> tr.ra = 3
>>> tr.ra
3
>>> tr.field('ra')
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts?

Erin

From erin.sheldon at gmail.com Thu Jun 15 10:21:07 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Thu, 15 Jun 2006 10:21:07 -0400 Subject: [Numpy-discussion] Recarray attributes writable Message-ID: <331116dc0606150721y67d6228bs577fc44b59de1c45@mail.gmail.com>

Hi everyone -

Recarrays have convenience attributes such that fields may be accessed through "." in addition to the "field()" method. These attributes are designed to be read-only; one cannot alter the data through them. Yet they are writeable:

>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
>>> tr.field('ra')[:] = 0.0
>>> tr.ra
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> tr.ra = 3
>>> tr.ra
3
>>> tr.field('ra')
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

I feel this should raise an exception, just as with trying to write to the "size" attribute. Any thoughts?
Erin

From faltet at carabos.com Sat Jun 17 04:17:28 2006 From: faltet at carabos.com (Francesc Altet) Date: Sat, 17 Jun 2006 10:17:28 +0200 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <449326AB.4000306@astraw.com> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> Message-ID: <1150532248.3928.29.camel@localhost.localdomain>

On Friday, 16 June 2006 at 14:46 -0700, Andrew Straw wrote:
> Erin Sheldon wrote:
>
> >Anyway - Recarrays have convenience attributes such that
> >fields may be accessed through "." in addition to
> >the "field()" method. These attributes are designed for
> >read only; one cannot alter the data through them.
> >Yet they are writeable:
> >
> >>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec')
> >>>>tr.field('ra')[:] = 0.0
> >>>>tr.ra
> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
> >
> >>>>tr.ra = 3
> >>>>tr.ra
> >3
> >
> >>>>tr.field('ra')
> >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
> >
> >I feel this should raise an exception, just as with trying to write
> >to the "size" attribute. Any thoughts?
>
> I have not used recarrays much, so take this with the appropriate
> measure of salt.
>
> I'd prefer to drop the entire pseudo-attribute thing completely before
> it gets entrenched. (Perhaps it's too late.)

However, I think that this has its utility, especially when accessing nested fields (see later). In addition, I'd suggest introducing a special accessor called, say, 'fields' in order to access the fields themselves and not the attributes.
For example, if you want to access the 'strides' attribute, you can do it in the usual way:

>>> import numpy
>>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides')
>>> tr.strides
(20,)

but, if you want to access the *field* 'strides' you could do it by issuing:

>>> tr.fields.strides
>>> tr.fields.strides[:]
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

We have several advantages in adopting this approach:

1. You don't mix (nor pollute) the namespaces for attributes and fields.

2. You have a clear idea of when you are accessing a variable or a field.

3. Accessing nested columns would still be very easy: tr.field('nested1').field('nested2').field('nested3') vs tr.fields.nested1.nested2.nested3

4. You can also define a proper __getitem__ for accessing fields: tr.fields['nested1']['nested2']['nested3']. In the same way, elements of the 'nested2' field could be accessed by: tr.fields['nested1']['nested2'][2:10:2].

5. Finally, you can even prevent setting or deleting columns by disabling __setattr__ and __delattr__.

PyTables has adopted a similar scheme for accessing nested columns, except for 4, where we decided not to accept both strings and slices for the __getitem__() method (you know the mantra: "there should preferably be just one way of doing things", although maybe we've been a bit too strict in this case), and I think it works reasonably well. In any case, the idea is to decouple the attributes and fields so that they don't get mixed. Implementing this shouldn't be complicated at all, but I'm afraid that I can't do it right now :-(

--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V.
Enjoy Data "-"

From fullung at gmail.com Sat Jun 17 07:30:43 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 17 Jun 2006 13:30:43 +0200 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <20060617113043.GA910@dogbert.sdsl.sun.ac.za>

Hello all

On Fri, 16 Jun 2006, Travis Oliphant wrote:
> I just updated the array interface page to emphasize we now have version
> 3. NumPy still supports objects that expose (the C-side) of version 2
> of the array interface, though.
> Please voice concerns now if you have any.

In the documentation for the data attribute you say: "A reference to the object with this attribute must be stored by the new object if the memory area is to be secured." Does that mean a reference to the __array_interface__ or a reference to the object containing the __array_interface__?

Regards, Albert

From fperez.net at gmail.com Sat Jun 17 11:27:42 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 17 Jun 2006 09:27:42 -0600 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <1150532248.3928.29.camel@localhost.localdomain> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> <1150532248.3928.29.camel@localhost.localdomain> Message-ID:

On 6/17/06, Francesc Altet wrote:
> However, I think that this has its utility, specially when accessing to
> nested fields (see later). In addition, I'd suggest introducing a
> special accessor called, say, 'fields' in order to access the fields
> themselves and not the attributes.
> For example, if you want to access
> the 'strides' attribute, you can do it in the usual way:
>
> >>> import numpy
> >>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides')
> >>> tr.strides
> (20,)
>
> but, if you want to access *field* 'strides' you could do it by issuing:
>
> >>> tr.fields.strides
> >>> tr.fields.strides[:]
> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

[...]

+1

I meant to write exactly the same thing, but was too lazy to do it :)

Cheers, f

From acannon at gmail.com Sat Jun 17 17:41:15 2006 From: acannon at gmail.com (Alex Cannon) Date: Sat, 17 Jun 2006 14:41:15 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4493A57C.1030904@gmx.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com>

How about this?

def d5():
    return add.outer(sum(A*A, axis=1), sum(B*B, axis=1)) - \
        2.*dot(A, transpose(B))

From robert.kern at gmail.com Sat Jun 17 17:49:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Jun 2006 16:49:16 -0500 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <6b04cd0f0606171441l3537fa15h11edccef250acbca@mail.gmail.com> Message-ID:

Alex Cannon wrote:
> How about this?
>
> def d5():
>     return add.outer(sum(A*A, axis=1), sum(B*B, axis=1)) - \
>         2.*dot(A, transpose(B))

You might lose some precision with that approach, so the OP should compare results and timings to look at the tradeoffs.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From erin.sheldon at gmail.com Sat Jun 17 09:40:20 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Sat, 17 Jun 2006 09:40:20 -0400 Subject: [Numpy-discussion] Recarray attributes writeable In-Reply-To: <1150532248.3928.29.camel@localhost.localdomain> References: <20060616161043.A29191@cfcp.uchicago.edu> <449326AB.4000306@astraw.com> <1150532248.3928.29.camel@localhost.localdomain> Message-ID: <331116dc0606170640g3a862eeeh15aa19f96bccb842@mail.gmail.com> This reply sent 9:36 AM, Jun 17 (because it may not show up for a day or so from my gmail account, if it shows up at all) On 6/17/06, Francesc Altet wrote: > El dv 16 de 06 del 2006 a les 14:46 -0700, en/na Andrew Straw va > escriure: > > Erin Sheldon wrote: > > > > >Anyway - Recarrays have convenience attributes such that > > >fields may be accessed through "." in additioin to > > >the "field()" method. These attributes are designed for > > >read only; one cannot alter the data through them. > > >Yet they are writeable: > > > > > > > > > > > >>>>tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,dec') > > >>>>tr.field('ra')[:] = 0.0 > > >>>>tr.ra > > >>>> > > >>>> > > >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > > > > > > > > > > >>>>tr.ra = 3 > > >>>>tr.ra > > >>>> > > >>>> > > >3 > > > > > > > > >>>>tr.field('ra') > > >>>> > > >>>> > > >array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > > > > >I feel this should raise an exception, just as with trying to write > > >to the "size" attribute. Any thoughts? > > > > > > > > I have not used recarrays much, so take this with the appropriate > > measure of salt. > > > > I'd prefer to drop the entire pseudo-attribute thing completely before > > it gets entrenched. (Perhaps it's too late.) > > > I think that initially I would concur to drop them. I am new to numpy, however, so they are not entrenched for me. Anyway, see below. > However, I think that this has its utility, specially when accessing to > nested fields (see later). 
In addition, I'd suggest introducing a > special accessor called, say, 'fields' in order to access the fields > themselves and not the attributes. For example, if you want to access > the 'strides' attribute, you can do it in the usual way: > > >>> import numpy > >>> tr=numpy.recarray(10, formats='i4,f8,f8', names='id,ra,strides') > >>> tr.strides > (20,) > > but, if you want to access *field* 'strides' you could do it by issuing: > > >>> tr.fields.strides > > >>> tr.fields.strides[:] > array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > > We have several advantages in adopting the previous approach: > > 1. You don't mix (nor pollute) the namespaces for attributes and fields. > > 2. You have a clear idea when you are accessing a variable or a field. > > 3. Accessing nested columns would still be very easy: > tr.field('nested1').field('nested2').field('nested3') vs > tr.fields.nested1.nested2.nested3 > > 4. You can also define a proper __getitem__ for accessing fields: > tr.fields['nested1']['nested2']['nested3']. > In the same way, elements of 'nested2' field could be accessed by: > tr.fields['nested1']['nested2'][2:10:2]. > > 5. Finally, you can even prevent setting or deleting columns by > disabling the __setattr__ and __delattr__. This is interesting, and I would add a 6th to this: 6. The .fields by itself could return the names of the fields, which are currently not accessible in any simple way. I always think that these should be methods (.fields(),.size(), etc) but if we are going down the attribute route, this might be a simple fix. > > PyTables has adopted a similar schema for accessing nested columns, > except for 4, where we decided not to accept both strings and slices for > the __getitem__() method (you know the mantra: "there should preferably > be just one way of doing things", although maybe we've been a bit too > much strict in this case), and I think it works reasonably well. 
In any
> case, the idea is to decouple the attributes and fields so that they
> doesn't get mixed.

Strings or fieldnum access greatly improves the scriptability, but this can always be done through the .field() access.

Erin
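Francesc's proposed `.fields` accessor, together with Erin's point 6, can be prototyped in a few lines. This is only an illustration of the idea under discussion, not NumPy's actual API; the class name `FieldsAccessor` and the example dtype are mine:

```python
import numpy as np

class FieldsAccessor:
    """Sketch of the proposed .fields namespace: attribute and item access
    both resolve to record *fields*, and assignment is refused outright."""
    def __init__(self, arr):
        object.__setattr__(self, '_arr', arr)
    def __getattr__(self, name):
        return self._arr[name]          # always a field lookup, never an attribute
    def __getitem__(self, name):
        return self._arr[name]
    def __setattr__(self, name, value):
        raise AttributeError("fields are read-only through .fields")
    @property
    def names(self):
        return self._arr.dtype.names    # Erin's point 6: expose the field names

tr = np.rec.array([(0, 0.0), (1, 0.0)],
                  dtype=[('id', 'i4'), ('strides', 'f8')])
fields = FieldsAccessor(tr)

print(tr.strides)         # the genuine ndarray attribute
print(fields.strides)     # the 'strides' *field* of the records
print(fields['strides'])  # item access resolves identically
```

Because `__getattr__` is only consulted when normal attribute lookup fails, the accessor never shadows real ndarray attributes, which is exactly the namespace separation the proposal is after.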
From sebastian.beca at gmail.com Sun Jun 18 18:49:27 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Sun, 18 Jun 2006 18:49:27 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4493A57C.1030904@gmx.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID:

I checked the matlab version's code and it does the same as discussed here.
The only thing to check is to make sure you loop around the shorter dimension of the output array. Speed-wise the Matlab code still runs about twice as fast for large sets of data (by just taking time by hand and comparing), but nevertheless the improvement over calculating each value as in d1 is significant (10-300 times) and enough for my needs. Thanks to all. Sebastian Beca

PD: I also tried the d5 version Alex sent but the results are not the same, so I couldn't compare. My final version was:

K = 10
C = 3
N = 2500 # One could switch around C and N now.
A = random.random( [N, K])
B = random.random( [C, K])

def dist():
    d = zeros([N, C], dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i,:] = sqrt(sum(xy**2, axis=1))
        return d
    else:
        for j in range(C):
            xy = A - B[j]
            d[:,j] = sqrt(sum(xy**2, axis=1))
        return d

On 6/17/06, Johannes Loehnert wrote:
> Hi,
>
> > def d4():
> >     d = zeros([4, 1000], dtype=float)
> >     for i in range(4):
> >         xy = A[i] - B
> >         d[i] = sqrt( sum(xy**2, axis=1) )
> >     return d
> >
> > Maybe there's another alternative to d4?
> > Thanks again,
>
> I think this is the fastest you can get. Maybe it would be nicer to use
> the .sum() method instead of the sum function, but that is just my personal
> opinion.
>
> I am curious how this compares to the matlab version.
:)
>
> Johannes
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From aisaac at american.edu Sun Jun 18 22:05:51 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 18 Jun 2006 22:05:51 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
> def dist():
>     d = zeros([N, C], dtype=float)
>     if N < C:
>         for i in range(N):
>             xy = A[i] - B
>             d[i,:] = sqrt(sum(xy**2, axis=1))
>         return d
>     else:
>         for j in range(C):
>             xy = A - B[j]
>             d[:,j] = sqrt(sum(xy**2, axis=1))
>         return d

But that is 50% slower than Johannes's version:

def dist_loehner1():
    d = A[:, newaxis, :] - B[newaxis, :, :]
    d = sqrt((d**2).sum(axis=2))
    return d

Cheers, Alan Isaac

From tim.hochberg at cox.net Sun Jun 18 23:18:23 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Sun, 18 Jun 2006 20:18:23 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> Message-ID: <4496177F.7010809@cox.net> Alan G Isaac wrote:
>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
>
>>def dist():
>>    d = zeros([N, C], dtype=float)
>>    if N < C:
>>        for i in range(N):
>>            xy = A[i] - B
>>            d[i,:] = sqrt(sum(xy**2, axis=1))
>>        return d
>>    else:
>>        for j in range(C):
>>            xy = A - B[j]
>>            d[:,j] = sqrt(sum(xy**2, axis=1))
>>        return d
>
>But that is 50% slower than Johannes's version:
>
>def dist_loehner1():
>    d = A[:, newaxis, :] - B[newaxis, :, :]
>    d = sqrt((d**2).sum(axis=2))
>    return d

Are you sure about that? I just ran it through timeit, using Sebastian's array sizes and I get Sebastian's version being 150% *faster*.
This could well be cache size dependent, so may vary from box to box, but I'd expect Sebastian's current version to scale better in general.

-tim

From aisaac at american.edu Mon Jun 19 00:30:12 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 19 Jun 2006 00:30:12 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <4496177F.7010809@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: On Sun, 18 Jun 2006, Tim Hochberg apparently wrote:
> Alan G Isaac wrote:
>> On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
>>> def dist():
>>>     d = zeros([N, C], dtype=float)
>>>     if N < C:
>>>         for i in range(N):
>>>             xy = A[i] - B
>>>             d[i,:] = sqrt(sum(xy**2, axis=1))
>>>         return d
>>>     else:
>>>         for j in range(C):
>>>             xy = A - B[j]
>>>             d[:,j] = sqrt(sum(xy**2, axis=1))
>>>         return d
>> But that is 50% slower than Johannes's version:
>> def dist_loehner1():
>>     d = A[:, newaxis, :] - B[newaxis, :, :]
>>     d = sqrt((d**2).sum(axis=2))
>>     return d
> Are you sure about that? I just ran it through timeit, using Sebastian's
> array sizes and I get Sebastian's version being 150% faster. This
> could well be cache size dependent, so may vary from box to box, but I'd
> expect Sebastian's current version to scale better in general.

No, I'm not sure. Script attached bottom. Most recent output follows: for reasons I have not determined, it doesn't match my previous runs ...
Alan

>>> execfile(r'c:\temp\temp.py')
dist_beca    :   3.042277
dist_loehner1:   3.170026

#################################
#THE SCRIPT
import sys
sys.path.append("c:\\temp")
import numpy
from numpy import *
import timeit

K = 10
C = 2500
N = 3 # One could switch around C and N now.
A = numpy.random.random( [N, K] )
B = numpy.random.random( [C, K] )

# beca
def dist_beca():
    d = zeros([N, C], dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i,:] = sqrt(sum(xy**2, axis=1))
        return d
    else:
        for j in range(C):
            xy = A - B[j]
            d[:,j] = sqrt(sum(xy**2, axis=1))
        return d

#loehnert
def dist_loehner1():
    # drawback: memory usage temporarily doubled
    # solution see below
    d = A[:, newaxis, :] - B[newaxis, :, :]
    # written as 3 expressions for more clarity
    d = sqrt((d**2).sum(axis=2))
    return d

if __name__ == "__main__":
    t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100)
    t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100)
    fmt="%-10s:\t"+"%10.6f"
    print fmt%('dist_beca', t1)
    print fmt%('dist_loehner1', t8)

From alexandre.fayolle at logilab.fr Mon Jun 19 04:02:34 2006 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Mon, 19 Jun 2006 10:02:34 +0200 Subject: [Numpy-discussion] finding connected areas? In-Reply-To: <51f97e530606181601l3f788fd9n57ac6ce4d4af43a6@mail.gmail.com> References: <51f97e530606121741s1cad6b20ne559ea4852cc94be@mail.gmail.com> <20060613073153.GB8675@crater.logilab.fr> <51f97e530606181601l3f788fd9n57ac6ce4d4af43a6@mail.gmail.com> Message-ID: <20060619080234.GE8946@crater.logilab.fr> I'm bringing back the discussion on list. On Mon, Jun 19, 2006 at 12:01:27AM +0100, stephen emslie wrote:
> >You will get this in numarray.nd_image, the function is
> >called label. It is also available in recent versions of scipy, in
> >module scipy.ndimage.
>
> Thanks for pointing me in the right direction.
> I've been playing around with this and I'm getting along with my problem,
> which is to find the areas of the connected components in the binary image.
> ndimage.label has been a great help in identifying and locating each shape
> in my image, but I am not quite sure how to interpret the results. I would
> like to be able to calculate the area of each slice returned by
> ndimage.labels. Is there a simple way to do this?

Yes, you will get an example in http://stsdas.stsci.edu/numarray/numarray-1.5.html/node98.html

> Also, being very new to scipy I don't fully understand how the slice objects
> returned by label actually work. Is there some documentation on this module
> that I could look at?

http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html

-- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations Développement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science

From gnurser at googlemail.com Mon Jun 19 07:42:22 2006 From: gnurser at googlemail.com (George Nurser) Date: Mon, 19 Jun 2006 12:42:22 +0100 Subject: [Numpy-discussion] f2py produces so.so Message-ID: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> I have run into a strange problem with the current numpy/f2py (f2py 2_2631, numpy 2631). I have a file [Wright.f] which contains 5 different fortran subroutines. Arguments have been specified as input or output by adding cf2py intent (in), (out) etc.
Doing

f2py -c Wright.f -m Wright.so

does not produce Wright.so. Instead it produces a *directory* Wright containing a library so.so. This actually works fine once it is put onto the python path, but if it is renamed it cannot be successfully imported, so this will cause problems if it happens to a second file.

George.

-------------- next part -------------- A non-text attachment was scrubbed... Name: Wright.f Type: application/octet-stream Size: 11459 bytes Desc: not available URL:

From benjamin at decideur.info Mon Jun 19 07:46:38 2006 From: benjamin at decideur.info (Benjamin Thyreau) Date: Mon, 19 Jun 2006 13:46:38 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <200606162001.31342.perrot@shfj.cea.fr> References: <200606162001.31342.perrot@shfj.cea.fr> Message-ID: <200606191346.38538.benjamin@decideur.info> On Friday 16 June 2006 at 20:01, Matthieu Perrot wrote:
> hi,
>
> I need to handle strings shaped by a numpy array whose data belong to a C (...)
> a new array descr based on PyArray_OBJECT and change its getitem/setitem
> --
> Matthieu Perrot Tel: +33 1 69 86 78 21
> CEA - SHFJ Fax: +33 1 69 86 77 86
> 4, place du General Leclerc
> 91401 Orsay Cedex France

Hi, It seems I had a similar problem when I tried to use numpy to map STL C++ vectors (which are contiguous structures). I actually tried to overload the getitem() field of my own dtype to build python wrappers at runtime around the allocated C objects array (i.e. NOT an array of Python objects). Actually your suggested modification seems to work for me; I don't know if it's the right solution, though. Are there any plans to update the trunk with something similar?
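[Editorial aside: the use case in this thread — viewing a contiguous C buffer through numpy without copying the data — can be sketched with a structured dtype and frombuffer. This is a generic illustration, not Matthieu's patch; the field names and record layout are invented for the example.]

```python
import struct
import numpy as np

# Pretend this 36-byte buffer arrived from C code: three packed
# (int32, float64) records, little-endian, no padding.
raw = b''.join(struct.pack('<id', i, i * 0.5) for i in range(3))

# A structured dtype lets numpy interpret the foreign layout in place,
# without copying each record into Python objects.
dt = np.dtype([('idx', '<i4'), ('val', '<f8')])
arr = np.frombuffer(raw, dtype=dt)

assert arr['idx'].tolist() == [0, 1, 2]
assert arr['val'].tolist() == [0.0, 0.5, 1.0]
```

For genuinely opaque C objects (as in the thread) this is not enough, which is why the original post reaches for a custom descr with its own getitem/setitem.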
-- Benjamin Thyreau decideur.info

From strawman at astraw.com Mon Jun 19 12:32:44 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 Jun 2006 09:32:44 -0700 Subject: [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online Message-ID: <4496D1AC.8030100@astraw.com> I have updated the apt repository I maintain for Ubuntu's Dapper, which now includes: numpy matplotlib scipy

Each package is from a recent SVN checkout and should thus be regarded as "bleeding edge". The repository has a new URL: http://debs.astraw.com/dapper/ I intend to keep this repository online for an extended duration. If you want to put this repository in your sources list, you need to add the following lines to /etc/apt/sources.list::

deb http://debs.astraw.com/ dapper/
deb-src http://debs.astraw.com/ dapper/

I have not yet investigated the use of ATLAS in building or using the numpy binaries, so if performance is critical for you, please evaluate speed before using it. I intend to visit this issue, but I cannot say when. The Debian source packages were generated using stdeb, [ http://stdeb.python-hosting.com/ ] a Python-to-Debian source package conversion utility I wrote. stdeb does not yet build packages that follow the Debian Python Policy, so the packages here may be slightly unusual compared to Python packages in the official Debian or Ubuntu repositories. For example, example scripts do not get installed, and no documentation is installed. Future releases of stdeb may resolve these issues. As always, feedback is very appreciated. Cheers! Andrew
From bhoel at despammed.com Mon Jun 19 15:00:18 2006 From: bhoel at despammed.com (Berthold Höllmann) Date: Mon, 19 Jun 2006 21:00:18 +0200 Subject: [Numpy-discussion] f2py produces so.so References: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> Message-ID: "George Nurser" writes:
> I have run into a strange problem with the current numpy/f2py (f2py
> 2_2631, numpy 2631).
> I have a file [Wright.f] which contains 5 different fortran
> subroutines. Arguments have been specified as input or output by
> adding cf2py intent (in), (out) etc.
>
> Doing
> f2py -c Wright.f -m Wright.so

Simply try

f2py -c Wright.f -m Wright

instead. Python extension modules require an exported routine named init<modulename> (initWright in this case). But you told f2py to generate an extension module named "so" in a package named "Wright", so the generated function is named initso. The *.so file cannot simply be renamed, because then there is no longer a matching init function.
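[Editorial aside: the naming rule Berthold describes can be made concrete with a tiny helper. `expected_init_symbol` is illustrative only — it is not part of f2py or Python — but it encodes the Python 2 convention that loading `<name>.so` calls the exported C function `init<name>`.]

```python
def expected_init_symbol(filename):
    # Python 2 derives the init entry point from the file's base name:
    # "<name>.so" must export a C function called "init<name>".
    base = filename.rsplit('/', 1)[-1]   # drop any directory part
    modname = base.split('.', 1)[0]      # drop the .so extension
    return 'init' + modname

# "f2py -c Wright.f -m Wright" generates initWright -> importable as Wright:
assert expected_init_symbol('Wright.so') == 'initWright'

# "f2py -c Wright.f -m Wright.so" asks for a module literally named "so"
# inside a Wright/ package, so the entry point is initso; renaming the
# resulting so.so afterwards cannot repair the mismatch:
assert expected_init_symbol('Wright/so.so') == 'initso'
```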
Regards Berthold -- berthold at xn--hllmanns-n4a.de / bhoel at web.de /

From sebastian.beca at gmail.com Mon Jun 19 16:04:31 2006 From: sebastian.beca at gmail.com (Sebastian Beca) Date: Mon, 19 Jun 2006 16:04:31 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: I just ran Alan's script and I don't get consistent results for 100 repetitions. I boosted it to 1000 and ran it several times. The faster one varied a lot, but both came within a ~ +-1.5% difference.

When it comes to scaling, for my problem (fuzzy clustering), N is the size of the dataset, which should span from thousands to millions. C is the number of clusters, usually less than 10, and K, the number of features (the dimension I want to sum over), is also usually less than 100. So mainly I'm concerned with scaling across N. I tried C=3, K=4, N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results were:

dist_beca: 1.1, 4.5, 16, 28, 37
dist_loehner1: 1.7, 6.5, 22, 35, 47

I also tried scaling across K, with C=3, N=2500, and K=5-50. I couldn't get any consistent results for small K, but both tend to perform as well (+-2%) for large K (K>15). I'm not sure how these work in the backend, so I can't argue as to why one should scale better than the other.

Regards,

Sebastian.
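[Editorial aside: the two contenders in this thread can be compared side by side. The sketch below shows only the N >= C branch of the thread's dist(), uses illustrative sizes, omits the timeit harness, and simply checks that the loop and broadcasting versions agree numerically.]

```python
import numpy as np

N, C, K = 200, 3, 4  # dataset size, clusters, features (illustrative)
rng = np.random.RandomState(0)
A = rng.rand(N, K)
B = rng.rand(C, K)

def dist_beca(A, B):
    # Loop over the shorter output dimension (C); temporaries stay N x K.
    n, c = A.shape[0], B.shape[0]
    d = np.zeros((n, c))
    for j in range(c):
        xy = A - B[j]                       # N x K temporary
        d[:, j] = np.sqrt((xy ** 2).sum(axis=1))
    return d

def dist_loehner1(A, B):
    # Broadcasting version: a single N x C x K temporary, more memory traffic.
    d = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((d ** 2).sum(axis=2))

assert np.allclose(dist_beca(A, B), dist_loehner1(A, B))
```

The memory argument made later in the thread falls out of the comments: the loop version never materializes the N x C x K difference array, which is why it tends to win once that array outgrows the cache.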
On 6/19/06, Alan G Isaac wrote: > On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: > > > Alan G Isaac wrote: > > >> On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: > > >>> def dist(): > >>> d = zeros([N, C], dtype=float) > >>> if N < C: for i in range(N): > >>> xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) > >>> return d > >>> else: > >>> for j in range(C): > >>> xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) > >>> return d > > >> But that is 50% slower than Johannes's version: > > >> def dist_loehner1(): > >> d = A[:, newaxis, :] - B[newaxis, :, :] > >> d = sqrt((d**2).sum(axis=2)) > >> return d > > > Are you sure about that? I just ran it through timeit, using Sebastian's > > array sizes and I get Sebastian's version being 150% faster. This > > could well be cache size dependant, so may vary from box to box, but I'd > > expect Sebastian's current version to scale better in general. > > No, I'm not sure. > Script attached bottom. > Most recent output follows: > for reasons I have not determined, > it doesn't match my previous runs ... > Alan > > >>> execfile(r'c:\temp\temp.py') > dist_beca : 3.042277 > dist_loehner1: 3.170026 > > > ################################# > #THE SCRIPT > import sys > sys.path.append("c:\\temp") > import numpy > from numpy import * > import timeit > > > K = 10 > C = 2500 > N = 3 # One could switch around C and N now. 
> A = numpy.random.random( [N, K] ) > B = numpy.random.random( [C, K] ) > > # beca > def dist_beca(): > d = zeros([N, C], dtype=float) > if N < C: > for i in range(N): > xy = A[i] - B > d[i,:] = sqrt(sum(xy**2, axis=1)) > return d > else: > for j in range(C): > xy = A - B[j] > d[:,j] = sqrt(sum(xy**2, axis=1)) > return d > > #loehnert > def dist_loehner1(): > # drawback: memory usage temporarily doubled > # solution see below > d = A[:, newaxis, :] - B[newaxis, :, :] > # written as 3 expressions for more clarity > d = sqrt((d**2).sum(axis=2)) > return d > > > if __name__ == "__main__": > t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) > t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) > fmt="%-10s:\t"+"%10.6f" > print fmt%('dist_beca', t1) > print fmt%('dist_loehner1', t8) > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From tim.hochberg at cox.net Mon Jun 19 16:28:53 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon, 19 Jun 2006 13:28:53 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> Message-ID: <44970905.4080005@cox.net> Sebastian Beca wrote: >I just ran Alan's script and I don't get consistent results for 100 >repetitions. I boosted it to 1000, and ran it several times. The >faster one varied alot, but both came into a ~ +-1.5% difference. > >When it comes to scaling, for my problem(fuzzy clustering), N is the >size of the dataset, which should span from thousands to millions. C >is the amount of clusters, usually less than 10, and K the amount of >features (the dimension I want to sum over) is also usually less than >100. So mainly I'm concerned with scaling across N. 
I tried C=3, K=4,
>N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results
>were:
>dist_beca: 1.1, 4.5, 16, 28, 37
>dist_loehner1: 1.7, 6.5, 22, 35, 47
>
>I also tried scaling across K, with C=3, N=2500, and K=5-50. I
>couldn't get any consistent results for small K, but both tend to
>perform as well (+-2%) for large K (K>15).
>
>I'm not sure how these work in the backend so I can't argue as to
>why one should scale better than the other.
>
The reason I suspect that dist_beca should scale better is that dist_loehner1 generates an intermediate array of size NxCxK, while dist_beca produces intermediate matrices that are only NxK or CxK. For large problems, allocating that extra memory and fetching it into and out of the cache can be a bottleneck.

Here's another version that allocates even less in the way of temporaries at the expense of being borderline incomprehensible. It still allocates an NxK temporary array, but it allocates it once ahead of time and then reuses it for all subsequent calculations. You're welcome to use it, but I'm not sure I'd recommend it unless this function is really a speed bottleneck, as it could end up being hard to read later (I left implementing the N < C case as an exercise).

I have another idea that might reduce the memory overhead still further; if I get a chance I'll try it out and let you know if it results in a further speed up.

-tim

def dist2(A, B):
    d = zeros([N, C], dtype=float)
    if N < C:
        raise NotImplemented
    else:
        tmp = empty([N, K], float)
        tmp0 = tmp[:,0]
        rangek = range(1,K)
        for j in range(C):
            subtract(A, B[j], tmp)
            tmp *= tmp
            for k in rangek:
                tmp0 += tmp[:,k]
            sqrt(tmp0, d[:,j])
        return d

>Regards,
>
>Sebastian.
>
>On 6/19/06, Alan G Isaac wrote:
>
>>On Sun, 18 Jun 2006, Tim Hochberg apparently wrote:
>>
>>>Alan G Isaac wrote:
>>>
>>>>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
>>>>
>>>>>def dist():
>>>>>    d = zeros([N, C], dtype=float)
>>>>>    if N < C:
>>>>>        for i in range(N):
>>>>>            xy = A[i] - B
>>>>>            d[i,:] = sqrt(sum(xy**2, axis=1))
>>>>>        return d
>>>>>    else:
>>>>>        for j in range(C):
>>>>>            xy = A - B[j]
>>>>>            d[:,j] = sqrt(sum(xy**2, axis=1))
>>>>>        return d
>>>>
>>>>But that is 50% slower than Johannes's version:
>>>>
>>>>def dist_loehner1():
>>>>    d = A[:, newaxis, :] - B[newaxis, :, :]
>>>>    d = sqrt((d**2).sum(axis=2))
>>>>    return d
>>>
>>>Are you sure about that?
I just ran it through timeit, using Sebastian's >>>array sizes and I get Sebastian's version being 150% faster. This >>>could well be cache size dependant, so may vary from box to box, but I'd >>>expect Sebastian's current version to scale better in general. >>> >>> >>No, I'm not sure. >>Script attached bottom. >>Most recent output follows: >>for reasons I have not determined, >>it doesn't match my previous runs ... >>Alan >> >> >> >>>>>execfile(r'c:\temp\temp.py') >>>>> >>>>> >>dist_beca : 3.042277 >>dist_loehner1: 3.170026 >> >> >>################################# >>#THE SCRIPT >>import sys >>sys.path.append("c:\\temp") >>import numpy >>from numpy import * >>import timeit >> >> >>K = 10 >>C = 2500 >>N = 3 # One could switch around C and N now. >>A = numpy.random.random( [N, K] ) >>B = numpy.random.random( [C, K] ) >> >># beca >>def dist_beca(): >> d = zeros([N, C], dtype=float) >> if N < C: >> for i in range(N): >> xy = A[i] - B >> d[i,:] = sqrt(sum(xy**2, axis=1)) >> return d >> else: >> for j in range(C): >> xy = A - B[j] >> d[:,j] = sqrt(sum(xy**2, axis=1)) >> return d >> >>#loehnert >>def dist_loehner1(): >> # drawback: memory usage temporarily doubled >> # solution see below >> d = A[:, newaxis, :] - B[newaxis, :, :] >> # written as 3 expressions for more clarity >> d = sqrt((d**2).sum(axis=2)) >> return d >> >> >>if __name__ == "__main__": >> t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) >> t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) >> fmt="%-10s:\t"+"%10.6f" >> print fmt%('dist_beca', t1) >> print fmt%('dist_loehner1', t8) >> >> >> >> >>_______________________________________________ >>Numpy-discussion mailing list >>Numpy-discussion at lists.sourceforge.net >>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net 
>https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From tim.hochberg at cox.net Mon Jun 19 17:39:14 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon, 19 Jun 2006 14:39:14 -0700 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <44970905.4080005@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> <44970905.4080005@cox.net> Message-ID: <44971982.3090800@cox.net> Tim Hochberg wrote: >Sebastian Beca wrote: > > > >>I just ran Alan's script and I don't get consistent results for 100 >>repetitions. I boosted it to 1000, and ran it several times. The >>faster one varied alot, but both came into a ~ +-1.5% difference. >> >>When it comes to scaling, for my problem(fuzzy clustering), N is the >>size of the dataset, which should span from thousands to millions. C >>is the amount of clusters, usually less than 10, and K the amount of >>features (the dimension I want to sum over) is also usually less than >>100. So mainly I'm concerned with scaling across N. I tried C=3, K=4, >>N=1000, 2500, 5000, 7500, 10000. Also using 1000 runs, the results >>were: >>dist_beca: 1.1, 4.5, 16, 28, 37 >>dist_loehner1: 1.7, 6.5, 22, 35, 47 >> >>I also tried scaling across K, with C=3, N=2500, and K=5-50. I >>couldn't get any consistent results for small K, but both tend to >>perform as well (+-2%) for large K (K>15). >> >>I'm not sure how these work in the backend so I can't argument as to >>why one should scale better than the other. >> >> >> >> >The reason I suspect that dist_beca should scale better is that >dist_loehner1 generates an intermediate array of size NxCxK, while >dist_beca produces intermediate matrices that are only NxK or CxK. For >large problems, allocating that extra memory and fetching it into and >out of the cache can be a bottleneck. 
>Here's another version that allocates even less in the way of
>temporaries at the expense of being borderline incomprehensible. It
>still allocates an NxK temporary array, but it allocates it once ahead
>of time and then reuses it for all subsequent calculations. You're welcome
>to use it, but I'm not sure I'd recommend it unless this function is
>really a speed bottleneck as it could end up being hard to read later (I
>left implementing the N < C case as an exercise).
>
>I have another idea that might reduce the memory overhead still further;
>if I get a chance I'll try it out and let you know if it results in a
>further speed up.
>
>-tim
>
> def dist2(A, B):
>     d = zeros([N, C], dtype=float)
>     if N < C:
>         raise NotImplemented
>     else:
>         tmp = empty([N, K], float)
>         tmp0 = tmp[:,0]
>         rangek = range(1,K)
>         for j in range(C):
>             subtract(A, B[j], tmp)
>             tmp *= tmp
>             for k in rangek:
>                 tmp0 += tmp[:,k]
>             sqrt(tmp0, d[:,j])
>         return d

Speaking of scaling: I tried this with K=25000 (10 x greater than Sebastian's original numbers). Much to my surprise it performed somewhat worse than Sebastian's dist() with large K. Below is a modified dist2 that performs about the same (marginally better here) for large K, as well as a dist3 that performs about 50% better at both K=2500 and K=25000.

-tim

def dist2(A, B):
    d = empty([N, C], dtype=float)
    if N < C:
        raise NotImplemented
    else:
        tmp = empty([N, K], float)
        tmp0 = tmp[:,0]
        for j in range(C):
            subtract(A, B[j], tmp)
            tmp **= 2
            d[:,j] = sum(tmp, axis=1)
            sqrt(d[:,j], d[:,j])
        return d

def dist3(A, B):
    d = zeros([N, C], dtype=float)
    rangek = range(K)
    if N < C:
        raise NotImplemented
    else:
        tmp = empty([N], float)
        for j in range(C):
            for k in rangek:
                subtract(A[:,k], B[j,k], tmp)
                tmp **= 2
                d[:,j] += tmp
            sqrt(d[:,j], d[:,j])
        return d

>>Regards,
>>
>>Sebastian.
>> >>On 6/19/06, Alan G Isaac wrote: >> >> >> >> >>>On Sun, 18 Jun 2006, Tim Hochberg apparently wrote: >>> >>> >>> >>> >>> >>>>Alan G Isaac wrote: >>>> >>>> >>>> >>>> >>>>>On Sun, 18 Jun 2006, Sebastian Beca apparently wrote: >>>>> >>>>> >>>>> >>>>> >>>>>>def dist(): >>>>>>d = zeros([N, C], dtype=float) >>>>>>if N < C: for i in range(N): >>>>>>xy = A[i] - B d[i,:] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>>else: >>>>>>for j in range(C): >>>>>>xy = A - B[j] d[:,j] = sqrt(sum(xy**2, axis=1)) >>>>>>return d >>>>>> >>>>>> >>>>>> >>>>>> >>>>>But that is 50% slower than Johannes's version: >>>>> >>>>> >>>>>def dist_loehner1(): >>>>> d = A[:, newaxis, :] - B[newaxis, :, :] >>>>> d = sqrt((d**2).sum(axis=2)) >>>>> return d >>>>> >>>>> >>>>> >>>>> >>>>Are you sure about that? I just ran it through timeit, using Sebastian's >>>>array sizes and I get Sebastian's version being 150% faster. This >>>>could well be cache size dependant, so may vary from box to box, but I'd >>>>expect Sebastian's current version to scale better in general. >>>> >>>> >>>> >>>> >>>No, I'm not sure. >>>Script attached bottom. >>>Most recent output follows: >>>for reasons I have not determined, >>>it doesn't match my previous runs ... >>>Alan >>> >>> >>> >>> >>> >>>>>>execfile(r'c:\temp\temp.py') >>>>>> >>>>>> >>>>>> >>>>>> >>>dist_beca : 3.042277 >>>dist_loehner1: 3.170026 >>> >>> >>>################################# >>>#THE SCRIPT >>>import sys >>>sys.path.append("c:\\temp") >>>import numpy >>> >>> >>>from numpy import * >> >> >>>import timeit >>> >>> >>>K = 10 >>>C = 2500 >>>N = 3 # One could switch around C and N now. 
>>>A = numpy.random.random( [N, K] ) >>>B = numpy.random.random( [C, K] ) >>> >>># beca >>>def dist_beca(): >>> d = zeros([N, C], dtype=float) >>> if N < C: >>> for i in range(N): >>> xy = A[i] - B >>> d[i,:] = sqrt(sum(xy**2, axis=1)) >>> return d >>> else: >>> for j in range(C): >>> xy = A - B[j] >>> d[:,j] = sqrt(sum(xy**2, axis=1)) >>> return d >>> >>>#loehnert >>>def dist_loehner1(): >>> # drawback: memory usage temporarily doubled >>> # solution see below >>> d = A[:, newaxis, :] - B[newaxis, :, :] >>> # written as 3 expressions for more clarity >>> d = sqrt((d**2).sum(axis=2)) >>> return d >>> >>> >>>if __name__ == "__main__": >>> t1 = timeit.Timer('dist_beca()', 'from temp import dist_beca').timeit(100) >>> t8 = timeit.Timer('dist_loehner1()', 'from temp import dist_loehner1').timeit(100) >>> fmt="%-10s:\t"+"%10.6f" >>> print fmt%('dist_beca', t1) >>> print fmt%('dist_loehner1', t8) >>> >>> >>> >>> >>>_______________________________________________ >>>Numpy-discussion mailing list >>>Numpy-discussion at lists.sourceforge.net >>>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >>> >>> >>> >>> >>> >>_______________________________________________ >>Numpy-discussion mailing list >>Numpy-discussion at lists.sourceforge.net >>https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> >> >> > > > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From gnurser at googlemail.com Mon Jun 19 18:15:10 2006 From: gnurser at googlemail.com (George Nurser) Date: Mon, 19 Jun 2006 23:15:10 +0100 Subject: [Numpy-discussion] f2py produces so.so In-Reply-To: References: <1d1e6ea70606190442q5e504d26lec44982f47b69c80@mail.gmail.com> Message-ID: <1d1e6ea70606191515o23fbeaadt9084bf31ea435b6@mail.gmail.com> On 19/06/06, Berthold H?llmann wrote: > "George Nurser" writes: > > > I have run into a strange problem 
with the current numpy/f2py (f2py > > 2_2631, numpy 2631). > > I have a file [Wright.f] which contains 5 different fortran > > subroutines. Arguments have been specified as input or output by > > adding cf2py intent (in), (out) etc. > > > > Doing > > f2py -c Wright.f -m Wright.so > > simply try > > f2py -c Wright.f -m Wright > > instead. Python extension modules require an exported routine > named init<modulename> (initWright in this case). But you told f2py > to generate an extension module named "so" in a package named > "Wright", so the generated function is named initso. The *.so file > cannot be renamed, because then there is no longer > a matching init function. > > Regards > Berthold Stupid of me! Hit head against wall. Yes, I eventually worked out that f2py -c Wright.f -m Wright was OK. But many thanks for the explanation ....I see, what f2py was doing was perfectly logical. Regards, George. From david at ar.media.kyoto-u.ac.jp Tue Jun 20 00:26:34 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 20 Jun 2006 13:26:34 +0900 Subject: [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <4496D1AC.8030100@astraw.com> References: <4496D1AC.8030100@astraw.com> Message-ID: <449778FA.507@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > I have updated the apt repository I maintain for Ubuntu's Dapper, which > now includes: > > numpy > matplotlib > scipy > > Each package is from a recent SVN checkout and should thus be regarded > as "bleeding edge". The repository has a new URL: > http://debs.astraw.com/dapper/ I intend to keep this repository online for an extended duration.
If you want to put this repository in your > sources list, you need to add the following lines to /etc/apt/sources.list:: > deb http://debs.astraw.com/ dapper/ > deb-src http://debs.astraw.com/ dapper/ > > I have not yet investigated the use of ATLAS in building or using the > numpy binaries, and if performance is critical for you, please evaluate > speed before using it. I intend to visit this issue, but I cannot say when. > > The Debian source packages were generated using stdeb, [ > http://stdeb.python-hosting.com/ ] a Python to Debian source package > conversion utility I wrote. stdeb does not build packages that follow > the Debian Python Policy, so the packages here may be slightly unusual > compared to Python packages in the official Debian or Ubuntu > repositories. For example, example scripts do not get installed, and no > documentation is installed. Future releases of stdeb may resolve these > issues. > > As always, feedback is very appreciated. > > That's great.
Last week, I sended several messages to the list > regarding your messages about debian packages for numpy, but it looks > they were lost somewhere.... > > Right now, I use the experimental package of debian + svn sources for > numpy, and it works well. Is your approach based on this work, or is > it totally different (on debian/ubuntu, packaging numpy + atlas should > be easy, as the atlas+lapack library is compiled such as to be complete), > > David Hi David, I did get your email last week (sorry for not replying sooner). I'm actually using my own tool "stdeb" to build these at the moment -- the 'official' package in experimental is surely better than mine, and I will probably switch to it over stdeb sooner or later... Cheers! Andrew From aisaac at american.edu Tue Jun 20 01:18:16 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Jun 2006 01:18:16 -0400 Subject: [Numpy-discussion] strange bug Message-ID: I think there is a bug in the **= operator, for dtype=float. Alan Isaac ## Script: import numpy print "numpy.__version__: ", numpy.__version__ ''' Illustrate a strange bug: ''' y = numpy.arange(10,dtype=float) print "y: ",y y *= y print "y**2: ",y z = numpy.arange(10,dtype=float) print "z: ", z z **= 2 print "z**2: ", z ## Output: numpy.__version__: 0.9.8 y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 4.09600000e+03 6.56100000e+03] From a.u.r.e.l.i.a.n at gmx.net Tue Jun 20 02:08:50 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue, 20 Jun 2006 08:08:50 +0200 Subject: [Numpy-discussion] strange bug In-Reply-To: References: Message-ID: <449790F2.4070100@gmx.net> Hi, > ## Output: > numpy.__version__: 0.9.8 > y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] > z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 
> z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 > 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 > 4.09600000e+03 6.56100000e+03] obviously the last is z**4. dtypes are the same for y and z (float64). One addition: In [5]: z = arange(10, dtype=float) In [6]: z **= 1 In [7]: z zsh: 18263 segmentation fault ipython - Johannes From aisaac at american.edu Tue Jun 20 03:15:31 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Jun 2006 03:15:31 -0400 Subject: [Numpy-discussion] Distance Matrix speed In-Reply-To: <44971982.3090800@cox.net> References: <4492E4DD.3010400@noaa.gov> <4492EF01.10307@cox.net> <4493A57C.1030904@gmx.net> <4496177F.7010809@cox.net> <44970905.4080005@cox.net> <44971982.3090800@cox.net> Message-ID: I think the distance matrix version below is about as good as it gets with these basic strategies.
fwiw, Alan Isaac def dist(A,B): rowsA, rowsB = A.shape[0], B.shape[0] distanceAB = empty( [rowsA,rowsB] , dtype=float) if rowsA <= rowsB: temp = empty_like(B) for i in range(rowsA): #store A[i]-B in temp subtract( A[i], B, temp ) temp *= temp sqrt( temp.sum(axis=1), distanceAB[i,:]) else: temp = empty_like(A) for j in range(rowsB): #store A-B[j] in temp temp = subtract( A, B[j], temp ) temp *= temp sqrt( temp.sum(axis=1), distanceAB[:,j]) return distanceAB From oliphant.travis at ieee.org Tue Jun 20 05:06:11 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 03:06:11 -0600 Subject: [Numpy-discussion] C-API support for numarray added to NumPy Message-ID: <4497BA83.7060507@ieee.org> C-API support for numarray is now checked in to NumPy SVN. With this support you should be able to compile numarray extensions by changing the include line from numarray/libnumarray.h to numpy/libnumarray.h You will also need to change the include directories used in compiling by appending the directories returned by numpy.numarray.util.get_numarray_include_dirs() This is most easily done using a numpy.distutils.misc_util Configuration instance: config.add_numarray_include_dirs() The work is heavily based on numarray. I just grabbed the numarray sources and translated the relevant functions to use NumPy's ndarray's. Please report problems and post patches. -Travis From oliphant.travis at ieee.org Tue Jun 20 05:24:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 03:24:34 -0600 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <200606162001.31342.perrot@shfj.cea.fr> References: <200606162001.31342.perrot@shfj.cea.fr> Message-ID: <4497BED2.9090601@ieee.org> Matthieu Perrot wrote: > hi, > > I need to handle strings shaped by a numpy array whose data own to a C > structure. 
There is several possible answers to this problem : > 1) use a numpy array of strings (PyArray_STRING) and so a (char *) object > in C. It works as is, but you need to define a maximum size to your strings > because your set of strings is contiguous in memory. > 2) use a numpy array of objects (PyArray_OBJECT), and wrap each ?C string? > with a python object, using PyStringObject for example. Then our problem is > that there is as wrapper as data element and I believe data can't be shared > when your created PyStringObject using (char *) thanks to > PyString_AsStringAndSize by example. > > > Now, I will expose a third way, which allow you to use no size-limited strings > (as in solution 1.) and don't create wrappers before you really need it > (on demand/access). > > First, for convenience, we will use in C, (char **) type to build an array of > string pointers (as it was suggested in solution 2). Now, the game is to > make it works with numpy API, and use it in python through a python array. > Basically, I want a very similar behabiour than arrays of PyObject, where > data are not contiguous, only their address are. So, the idea is to create > a new array descr based on PyArray_OBJECT and change its getitem/setitem > functions to deals with my own data. > > I exepected numpy to work with this convenient array descr, but it fails > because PyArray_Scalar (arrayobject.c) don't call descriptor getitem function > (in PyArray_OBJECT case) but call 2 lines which have been copy/paste from > the OBJECT_getitem function). Here my small patch is : > replace (arrayobject.c:983-984): > Py_INCREF(*((PyObject **)data)); > return *((PyObject **)data); > by : > return descr->f->getitem(data, base); > > I play a lot with my new numpy array after this change and noticed that a lot > of uses works : > This is an interesting solution. I was not considering it, though, and so I'm not surprised you have problems. 
You can register new types but basing them off of PyArray_OBJECT can be problematic because of the special-casing that is done in several places to manage reference counting. You are supposed to register your own data-types and get your own typenumber. Then you can define all the functions for the entries as you wish. Riding on the back of PyArray_OBJECT may work if you are clever, but it may fail mysteriously as well because of a reference count snafu. Thanks for the tests and bug-reports. I have no problem changing the code as you suggest. -Travis From simon at arrowtheory.com Tue Jun 20 15:22:30 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue, 20 Jun 2006 20:22:30 +0100 Subject: [Numpy-discussion] what happened to numarray type names ? Message-ID: <20060620202230.07c3ae56.simon@arrowtheory.com> >>> import numpy >>> numpy.__version__ '0.9.9.2631' >>> numpy.Int32 Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'Int32' >>> This was working not so long ago. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From stefan at sun.ac.za Tue Jun 20 06:38:15 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 20 Jun 2006 12:38:15 +0200 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <20060620202230.07c3ae56.simon@arrowtheory.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> Message-ID: <20060620103815.GA23025@mentat.za.net> Hi Simon On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote: > > >>> import numpy > >>> numpy.__version__ > '0.9.9.2631' > >>> numpy.Int32 > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'module' object has no attribute 'Int32' > >>> > > This was working not so long ago. Int32, Float etc. are part of the old Numeric interface, that you can now access under the numpy.oldnumeric namespace. 
If I understand correctly, doing import numpy.oldnumeric as Numeric should provide you with a Numeric-compatible replacement. The same types can be accessed under numpy as int32 (lower case) and friends. Cheers St?fan From tim.hochberg at cox.net Tue Jun 20 08:28:28 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 20 Jun 2006 05:28:28 -0700 Subject: [Numpy-discussion] strange bug In-Reply-To: <449790F2.4070100@gmx.net> References: <449790F2.4070100@gmx.net> Message-ID: <4497E9EC.4090409@cox.net> Johannes Loehnert wrote: >Hi, > > > >>## Output: >>numpy.__version__: 0.9.8 >>y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] >>y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.] >>z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] >>z**2: [ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01 >> 2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03 >> 4.09600000e+03 6.56100000e+03] >> >> > >obviosly the last is z**4. dtypes are the same for y and z (float64). > > I ran into this yesterday and fixed it. It should be OK in SVN now. >One addition: > >In [5]: z = arange(10, dtype=float) > >In [6]: z **= 1 > >In [7]: z >zsh: 18263 segmentation fault ipython > > This one is still there however. I'll look at it. -tim > >- Johannes > > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From khalido at incesttaboo.com Tue Jun 20 10:03:39 2006 From: khalido at incesttaboo.com (Khalid Behan) Date: Tue, 20 Jun 2006 07:03:39 -0700 Subject: [Numpy-discussion] foyoh test Message-ID: <000001c69472$54d4c290$bc11a8c0@csg52> http://paliokertunga.com _____ march of Dale, coming from the North-East. But they cannot reach the Mountain unmarked, said Rac, and I fear lest there be battle in the valley. I do not call this counsel good. Though they are a grim folk, they are not likely to overcome the host that besets you; and even if they did so, what will you gain? 
Winter and snow is hastening behind them. How shall you be fed without the friendship and goodwill of the -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: roughness66.gif Type: image/gif Size: 2349 bytes Desc: not available URL: From christianson2 at llnl.gov Tue Jun 20 12:17:20 2006 From: christianson2 at llnl.gov (George Christianson) Date: Tue, 20 Jun 2006 09:17:20 -0700 Subject: [Numpy-discussion] Help for Windows Python, numpy and f2py Message-ID: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> Good morning, I used the Windows installer to install Python 2.4.3 on a late-model Dell PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from the Windows installers. Now I am trying to build a dll file for a Fortran 77 file and previously-generated (Linux) pyf file. I installed MinGW from the MinGW 5.0.2 Windows installer, and modified my Windows path to put the MinGW directory before a pre-existing Cygwin installation. However, both a setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the Windows command line fail with the message that the .NET Framework SDK has to be initialized or that the msvccompiler cannot be found. Any advice on what I'm missing would be much appreciated! Here is the message I get trying to run f2py: C:\projects\workspace\MARSFortran>C:\python2.4.3\python C:\python2.4.3\Scripts\f 2py.py -c --fcompiler=g77 mars.pyf mars.f>errors error: The .NET Framework SDK needs to be installed before building extensions f or Python. C:\projects\workspace\MARSFortran> C:\projects\workspace\MARSFortran>type errors Unknown vendor: "g77" running build running config_fc running build_src building extension "mars" sources creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh creating c:\docume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4 f2py options: [] f2py: mars.pyf Reading fortran codes... 
Reading file 'mars.pyf' (format:free) SNIP copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.c -> c:\do cume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4 copying C:\python2.4.3\lib\site-packages\numpy\f2py\src\fortranobject.h -> c:\do cume~1\christ~1\locals~1\temp\tmp2lu8bh\src.win32-2.4 running build_ext No module named msvccompiler in numpy.distutils, trying from distutils.. Thanks in advance, George Christianson From faltet at carabos.com Tue Jun 20 12:32:41 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue, 20 Jun 2006 18:32:41 +0200 Subject: [Numpy-discussion] Help for Windows Python, numpy and f2py In-Reply-To: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> References: <6.2.1.2.2.20060620085902.081c7cf0@mail.llnl.gov> Message-ID: <200606201832.42100.faltet@carabos.com> A Dimarts 20 Juny 2006 18:17, George Christianson va escriure: > Good morning, Thank you, but here the sun is about to set ;-) > I used the Windows installer to install Python 2.4.3 on a late-model Dell > PC running XP Pro. Then I installed numpy-0.9.8 and scipy-0.4.9, also from > the Windows installers. Now I am trying to build a dll file for a Fortran > 77 file and previously-generated (Linux) pyf file. I installed MinGW from > the MinGW 5.0.2 Windows installer, and modified my Windows path to put the > MinGW directory before a pre-existing Cygwin installation. However, both a > setup.py file and running the C:\python2.4.3\Scripts\f2py.py file in the > Windows command line fail with the message that the .NET Framework SDK has > to be initialized or that the msvccompiler cannot be found. > Any advice on what I'm missing would be much appreciated! Here is the > message I get trying to run f2py: > Mmm, perhaps you can try with putting: [build] compiler=mingw32 in your local distutils.cfg (see http://docs.python.org/inst/config-syntax.html) HTH, -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. 
??Enjoy Data "-" From theller at python.net Tue Jun 20 15:05:51 2006 From: theller at python.net (Thomas Heller) Date: Tue, 20 Jun 2006 21:05:51 +0200 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <449342E0.5090004@ieee.org> References: <449342E0.5090004@ieee.org> Message-ID: <4498470F.5040400@python.net> Travis Oliphant schrieb: > I just updated the array interface page to emphasize we now have version > 3. NumPy still supports objects that expose (the C-side) of version 2 > of the array interface, though. > > The new interface is basically the same except (mostly) for asthetics: > The differences are listed at the bottom of > > http://numeric.scipy.org/array_interface.html > > There is talk of ctypes supporting the new interface which is a worthy > development. Please encourage that if you can. > > Please voice concerns now if you have any. From http://numeric.scipy.org/array_interface.html: """ New since June 16, 2006: For safety-checking the return object from PyCObject_GetDesc(obj) should be a Python Tuple with the first object a Python string containing "PyArrayInterface Version 3" and whose second object is a reference to the object exposing the array interface (i.e. self). Older versions of the interface used the "desc" member of the PyCObject itself (do not confuse this with the "descr" member of the PyArrayInterface structure above --- they are two separate things) to hold the pointer to the object exposing the interface, thus you should make sure the object returned is a Tuple before assuming it is in a sanity check. In a sanity check it is recommended to only check for "PyArrayInterface Version" and not for the actual version number so that later versions will still be compatible. The old sanity check for the integer 2 in the first field is no longer necessary (but it is necessary to place the number 2 in that field so that objects reading the old version of the interface will still understand this one). 
""" I know that you changed that because of my suggestions, but I don't think it should stay like this. The idea was to have the "desc" member of the PyCObject a 'magic value' which can be used to determine that the PyCObjects "void *cobj" pointer really points to a PyArrayInterface structure. I have seen PyCObject uses before in this way, but I cannot find them any longer. If current implementations of the array interface use this pointer for other things (like keeping a reference to the array object), that's fine, and I don't think the specification should change. I think it is espscially dangerous to assume that the desc pointer is a PyObject pointer, Python will segfault if it is not. I suggest that you revert this change. Thomas From oliphant.travis at ieee.org Tue Jun 20 15:27:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 13:27:16 -0600 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <4498470F.5040400@python.net> References: <449342E0.5090004@ieee.org> <4498470F.5040400@python.net> Message-ID: <44984C14.90508@ieee.org> Thomas Heller wrote: > Travis Oliphant schrieb: >> I just updated the array interface page to emphasize we now have >> version 3. NumPy still > > If current implementations of the array interface use this pointer for > other things (like keeping a reference to the array object), that's > fine, and I don't think the specification should change. I think it is > espscially dangerous to assume that the desc pointer is a PyObject > pointer, Python will segfault if it is not. You make a good point. This is not a very safe sanity check and overly complicated for not providing safety. I've reverted it back but left in the convention that the 'desc' pointer contain a reference to the object exposing the interface as is the practice now. Thanks for the review. -Travis From cookedm at physics.mcmaster.ca Tue Jun 20 15:41:41 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 20 Jun 2006 15:41:41 -0400 Subject: [Numpy-discussion] Array interface updated to Version 3 In-Reply-To: <4498470F.5040400@python.net> References: <449342E0.5090004@ieee.org> <4498470F.5040400@python.net> Message-ID: <20060620154141.346457e8@arbutus.physics.mcmaster.ca> On Tue, 20 Jun 2006 21:05:51 +0200 Thomas Heller wrote: > Travis Oliphant schrieb: > > I just updated the array interface page to emphasize we now have version > > 3. NumPy still supports objects that expose (the C-side) of version 2 > > of the array interface, though. > > > > The new interface is basically the same except (mostly) for asthetics: > > The differences are listed at the bottom of > > > > http://numeric.scipy.org/array_interface.html > > > > There is talk of ctypes supporting the new interface which is a worthy > > development. Please encourage that if you can. > > > > Please voice concerns now if you have any. > > From http://numeric.scipy.org/array_interface.html: > """ > New since June 16, 2006: > For safety-checking the return object from PyCObject_GetDesc(obj) should > be a Python Tuple with the first object a Python string containing > "PyArrayInterface Version 3" and whose second object is a reference to > the object exposing the array interface (i.e. self). > > Older versions of the interface used the "desc" member of the PyCObject > itself (do not confuse this with the "descr" member of the > PyArrayInterface structure above --- they are two separate things) to > hold the pointer to the object exposing the interface, thus you should > make sure the object returned is a Tuple before assuming it is in a > sanity check. > > In a sanity check it is recommended to only check for "PyArrayInterface > Version" and not for the actual version number so that later versions > will still be compatible. 
The old sanity check for the integer 2 in the > first field is no longer necessary (but it is necessary to place the > number 2 in that field so that objects reading the old version of the > interface will still understand this one). > """ > > I know that you changed that because of my suggestions, but I don't > think it should stay like this. > > The idea was to have the "desc" member of the PyCObject a 'magic value' > which can be used to determine that the PyCObjects "void *cobj" pointer > really points to a PyArrayInterface structure. I have seen PyCObject > uses before in this way, but I cannot find them any longer. > > If current implementations of the array interface use this pointer for > other things (like keeping a reference to the array object), that's > fine, and I don't think the specification should change. I think it is > espscially dangerous to assume that the desc pointer is a PyObject > pointer, Python will segfault if it is not. > I suggest that you revert this change. When I initially proposed the C version of the array interface, I suggested using a magic number, like 0xDECAF (b/c it's lightweight :-) as the first member of the CObject. Currenty, we use a version number, but I believe that small integers would be more common in random CObjects than a magic number. We could do similiar, using 0xDECAF003 for version 3, for instance. That would keep most of the benefits of an explicit "this is an array interface" CObject token, but is lighter to check, and doesn't impose any constraints on implementers for their desc fields. One of the design goals for the C interface was speed; doing a check that the first member of a tuple begins with a certain string slows it down. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From efiring at hawaii.edu Tue Jun 20 16:33:57 2006 From: efiring at hawaii.edu (Eric Firing) Date: Tue, 20 Jun 2006 10:33:57 -1000 Subject: [Numpy-discussion] array creation speed comparison Message-ID: <44985BB5.1090000@hawaii.edu> In the course of trying to speed up matplotlib, I did a little experiment that may indicate a place where numpy can be sped up: the creation of a 2-D array from a list of tuples. Using the attached script, I find that numarray is roughly 5x faster than either numpy or Numeric: [efiring at manini tests]$ python test_array.py array size: 10000 2 number of loops: 100 numpy 10.89 numpy2 6.57 numarray 1.77 numarray2 0.76 Numeric 8.2 Numeric2 4.36 [efiring at manini tests]$ python test_array.py array size: 100 2 number of loops: 100 numpy 0.11 numpy2 0.06 numarray 0.03 numarray2 0.01 Numeric 0.08 Numeric2 0.05 The numarray advantage persists for relatively small arrays (100x2; second example) and larger ones (10000x2; first example). In each case, the second test for a given package (e.g., numpy2) is the result with the type of the array element specified in advance, and the first (e.g., numpy) is without such specification. The versions I used are: In [3]:Numeric.__version__ Out[3]:'24.0b2' In [5]:numarray.__version__ Out[5]:'1.4.1' In [7]:numpy.__version__ Out[7]:'0.9.9.2584' Eric -------------- next part -------------- A non-text attachment was scrubbed... Name: test_array.py Type: text/x-python Size: 890 bytes Desc: not available URL: From erin.sheldon at gmail.com Tue Jun 20 21:00:52 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 20 Jun 2006 21:00:52 -0400 Subject: [Numpy-discussion] what happened to numarray type names ? 
In-Reply-To: <20060620103815.GA23025@mentat.za.net> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> Message-ID: <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> The numpy example page still has dtype=Float and dtype=Int all over it. Is there a generic replacement for Float, Int or should these be changed to something more specific such as int32? Erin On 6/20/06, Stefan van der Walt wrote: > Hi Simon > > On Tue, Jun 20, 2006 at 08:22:30PM +0100, Simon Burton wrote: > > > > >>> import numpy > > >>> numpy.__version__ > > '0.9.9.2631' > > >>> numpy.Int32 > > Traceback (most recent call last): > > File "", line 1, in ? > > AttributeError: 'module' object has no attribute 'Int32' > > >>> > > > > This was working not so long ago. > > Int32, Float etc. are part of the old Numeric interface, that you can > now access under the numpy.oldnumeric namespace. If I understand > correctly, doing > > import numpy.oldnumeric as Numeric > > should provide you with a Numeric-compatible replacement. > > The same types can be accessed under numpy as int32 (lower case) and > friends. > > Cheers > St?fan > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From cookedm at physics.mcmaster.ca Tue Jun 20 22:00:20 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 20 Jun 2006 22:00:20 -0400 Subject: [Numpy-discussion] what happened to numarray type names ? 
In-Reply-To: <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> Message-ID: <20060621020020.GA6459@arbutus.physics.mcmaster.ca> On Tue, Jun 20, 2006 at 09:00:52PM -0400, Erin Sheldon wrote: > The numpy example page still has dtype=Float and dtype=Int > all over it. Is there a generic replacement for Float, Int or should > these be changed to something more specific such as int32? > Erin float and int (the Python types) are the generic 'float' and 'int' types. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From kwgoodman at gmail.com Tue Jun 20 23:04:24 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 20 Jun 2006 20:04:24 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix Message-ID: I have a matrix M and a vector (n by 1 matrix) V. I want to form a new matrix that contains the columns of M for which V > 0. One way to do that in Octave is M(:, find(V > 0)). How is it done in numpy? From wbaxter at gmail.com Tue Jun 20 23:33:43 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 12:33:43 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: I think that one's on the NumPy for Matlab users, no? http://www.scipy.org/NumPy_for_Matlab_Users >>> import numpy as num >>> a = num.arange (10).reshape(2,5) >>> a array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> v = num.rand(5) >>> v array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) >>> num.where(v>0.5) (array([1, 2, 4]),) >>> a[:,num.where(v>0.5)] array([[[1, 2, 4]], [[6, 7, 9]]]) Seems it grows an extra set of brackets for some reason. Squeeze will get rid of them. 
>>> a[:,num.where(v>0.5)].squeeze() array([[1, 2, 4], [6, 7, 9]]) Not sure why the squeeze is needed. Maybe there's a better way. --bb On 6/21/06, Keith Goodman wrote: > > I have a matrix M and a vector (n by 1 matrix) V. I want to form a new > matrix that contains the columns of M for which V > 0. > > One way to do that in Octave is M(:, find(V > 0)). How is it done in > numpy? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin.sheldon at gmail.com Tue Jun 20 22:30:26 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 20 Jun 2006 22:30:26 -0400 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <20060621020020.GA6459@arbutus.physics.mcmaster.ca> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> <20060621020020.GA6459@arbutus.physics.mcmaster.ca> Message-ID: <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> OK, I have changed all the examples that used dtype=Float or dtype=Int to float and int. Erin On 6/20/06, David M. Cooke wrote: > On Tue, Jun 20, 2006 at 09:00:52PM -0400, Erin Sheldon wrote: > > The numpy example page still has dtype=Float and dtype=Int > > all over it. Is there a generic replacement for Float, Int or should > > these be changed to something more specific such as int32? > > Erin > > float and int (the Python types) are the generic 'float' and 'int' > types. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From kwgoodman at gmail.com Tue Jun 20 23:49:26 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 20 Jun 2006 20:49:26 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On 6/20/06, Bill Baxter wrote: > I think that one's on the NumPy for Matlab users, no? > > http://www.scipy.org/NumPy_for_Matlab_Users > > >>> import numpy as num > >>> a = num.arange (10).reshape(2,5) > >>> a > array([[0, 1, 2, 3, 4], > [5, 6, 7, 8, 9]]) > >>> v = num.rand(5) > >>> v > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > >>> num.where(v>0.5) > (array([1, 2, 4]),) > >>> a[:,num.where(v>0.5)] > array([[[1, 2, 4]], > > [[6, 7, 9]]]) > > Seems it grows an extra set of brackets for some reason. Squeeze will get > rid of them. > > >>> a[:,num.where(v>0.5)].squeeze() > array([[1, 2, 4], > [6, 7, 9]]) > > Not sure why the squeeze is needed. Maybe there's a better way. Thank you. That works for arrays, but not matrices. So do I need to do asarray(a)[:, where(asarray(v)>0.5)].squeeze() ? From erin.sheldon at gmail.com Wed Jun 21 00:10:06 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 21 Jun 2006 00:10:06 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> On 6/20/06, Bill Baxter wrote: > I think that one's on the NumPy for Matlab users, no? > > http://www.scipy.org/NumPy_for_Matlab_Users > > >>> import numpy as num > >>> a = num.arange (10).reshape(2,5) > >>> a > array([[0, 1, 2, 3, 4], > [5, 6, 7, 8, 9]]) > >>> v = num.rand(5) > >>> v > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > >>> num.where(v>0.5) > (array([1, 2, 4]),) > >>> a[:,num.where(v>0.5)] > array([[[1, 2, 4]], > > [[6, 7, 9]]]) > > Seems it grows an extra set of brackets for some reason. Squeeze will get > rid of them. 
> > >>> a[:,num.where(v>0.5)].squeeze() > array([[1, 2, 4], > [6, 7, 9]]) > > Not sure why the squeeze is needed. Maybe there's a better way. where returns a tuple of arrays. This can have unexpected results so you need to grab what you want explicitly: >>> (w,) = num.where(v>0.5) >>> a[:,w] array([[1, 2, 4], [6, 7, 9]]) From wbaxter at gmail.com Wed Jun 21 00:48:48 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 13:48:48 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: On 6/21/06, Erin Sheldon wrote: > > On 6/20/06, Bill Baxter wrote: > > I think that one's on the NumPy for Matlab users, no? > > > > http://www.scipy.org/NumPy_for_Matlab_Users > > > > >>> import numpy as num > > >>> a = num.arange (10).reshape(2,5) > > >>> a > > array([[0, 1, 2, 3, 4], > > [5, 6, 7, 8, 9]]) > > >>> v = num.rand(5) > > >>> v > > array([ 0.10934855, 0.55719644, 0.7044047 , 0.19250088, 0.94636972]) > > >>> num.where(v>0.5) > > (array([1, 2, 4]),) > > >>> a[:,num.where(v>0.5)] > > array([[[1, 2, 4]], > > > > [[6, 7, 9]]]) > > > > Seems it grows an extra set of brackets for some reason. Squeeze will > get > > rid of them. > > > > >>> a[:,num.where(v>0.5)].squeeze() > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > Not sure why the squeeze is needed. Maybe there's a better way. > > where returns a tuple of arrays. This can have unexpected results > so you need to grab what you want explicitly: > > >>> (w,) = num.where(v>0.5) > >>> a[:,w] > array([[1, 2, 4], > [6, 7, 9]]) > Ah, yeh, that makes sense. Thanks for the explanation. So to turn it back into a one-liner you just need: >>> a[:,num.where(v>0.5)[0]] array([[1, 2, 4], [6, 7, 9]]) I'll put that up on the Matlab->Numpy page. --bb -------------- next part -------------- An HTML attachment was scrubbed... 
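Erin's point — that `where` returns a tuple of index arrays, one per dimension — still holds in current NumPy. A minimal self-contained sketch of the column selection being discussed (the concrete vector values here are illustrative, not from the original posts):

```python
import numpy as np

a = np.arange(10).reshape(2, 5)
v = np.array([0.1, 0.6, 0.7, 0.2, 0.9])   # stand-in for the random vector

# where() on a 1-D condition returns a 1-tuple of index arrays...
idx = np.where(v > 0.5)

# ...so unpack it (or take element 0) before using it as a column index,
# which avoids the extra bracket level that squeeze() was papering over.
(w,) = idx
cols = a[:, w]
print(cols)   # [[1 2 4]
              #  [6 7 9]]
```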
URL: From simon at arrowtheory.com Wed Jun 21 01:23:49 2006 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 21 Jun 2006 15:23:49 +1000 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: <20060621152349.157974f4.simon@arrowtheory.com> On Wed, 21 Jun 2006 13:48:48 +0900 "Bill Baxter" wrote: > > >>> a[:,num.where(v>0.5)[0]] > array([[1, 2, 4], > [6, 7, 9]]) > > I'll put that up on the Matlab->Numpy page. oh, yuck. What about this: >>> a[:,num.nonzero(v>0.5)] array([[0, 1, 3], [5, 6, 8]]) >>> Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From wbaxter at gmail.com Wed Jun 21 03:16:46 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 21 Jun 2006 16:16:46 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <20060621152349.157974f4.simon@arrowtheory.com> References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> <20060621152349.157974f4.simon@arrowtheory.com> Message-ID: On 6/21/06, Simon Burton wrote: > > On Wed, 21 Jun 2006 13:48:48 +0900 > "Bill Baxter" wrote: > > > > > >>> a[:,num.where(v>0.5)[0]] > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > I'll put that up on the Matlab->Numpy page. > > oh, yuck. What about this: > > >>> a[:,num.nonzero(v>0.5)] > array([[0, 1, 3], > [5, 6, 8]]) > >>> The nonzero() function seems like kind of an anomaly in and of itself. It doesn't behave like other index-returning numpy functions, or even like the method version, v.nonzero(), which returns the typical tuple of array. So my feeling is ... ew to numpy.nonzero. --Bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aisaac at american.edu Wed Jun 21 04:48:15 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 04:48:15 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On Tue, 20 Jun 2006, Keith Goodman apparently wrote: > I have a matrix M and a vector (n by 1 matrix) V. I want to form a new > matrix that contains the columns of M for which V > 0. > One way to do that in Octave is M(:, find(V > 0)). How is it done in numpy? M.transpose()[V>0] If you want the columns as columns, you can transpose again. hth, Alan Isaac From michael.sorich at gmail.com Wed Jun 21 04:46:19 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Wed, 21 Jun 2006 18:16:19 +0930 Subject: [Numpy-discussion] MA bug or feature? Message-ID: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> When transposing a masked array of dtype ' References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com><20060621152349.157974f4.simon@arrowtheory.com> Message-ID: On Wed, 21 Jun 2006, Bill Baxter apparently wrote: > ew to numpy.nonzero I agree that having the method and function behave so differently is awkward; this was discussed before on this list. It does allow Simon's nicer solution, however. I'm not sure why bool arrays cannot be used as indices. The "natural" solution to the original problem seemed to be: M[:,V>0] but this is not allowed. Cheers, Alan Isaac From faltet at carabos.com Wed Jun 21 05:14:58 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed, 21 Jun 2006 11:14:58 +0200 Subject: [Numpy-discussion] ANN: PyTables (a hierarchical database) 1.3.2 released Message-ID: <200606211115.02727.faltet@carabos.com> =========================== Announcing PyTables 1.3.2 =========================== This is a new minor release of PyTables. 
There you will find, among other things, improved support for NumPy strings and the ability to create indexes of NumPy-flavored tables (this capability was broken in earlier versions). *Important note*: one of the fixes addresses an important bug that shows when browsing files with lots of nodes, making PyTables to crash. Because of this, an upgrade is encouraged. Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed. Changes more in depth ===================== Bug fixes: - Changed the nodes in the lru cache heap from Pyrex to pure Python ones. This fixes a problem that can appear in certain situations (mainly, when navigating back and forth along lots of Node objects). While this fix is sub-optimal, at least it leads to well behaviour until the faster approach will eventually get back. - Due to different conventions in padding chars, it has been added a special case when converting from numarray strings into numpy ones so that these different conventions are handled correctly. Fixes ticket #13 and other strange numpy string quirks (thanks to Pepe Barbe). - Solved an issue that appeared when indexing Table columns with flavor 'numpy'. Now, tables that are 'numpy' flavored can be indexed as well. - Solved an issue when saving string atoms with ``VLArray`` with a flavor different from "python". The problem was that the item sizes of the original strings were not checked, so rubish was put on-disk. Now, if an item size of the input is different from the item size of the atom, a conversion is forced. Added tests to check for these situations. - Fixed a problem with removing a table with indexed columns under certain situations. Thanks to Andrew Straw for reporting it. - Fixed a small glitch in the ``ptdump`` utility that prevented dumping ``EArray`` data with an enlargeable dimension different from the first one. - Make parent node unreference child node when creation fails. 
Fixes ticket #12 (thanks to Eilif). - Saving zero-length strings in Array objects used to raise a ZeroDivisionError. Now, it returns a more sensible NotImplementedError until this is supported. Backward-incompatible changes: - Please, see ``RELEASE-NOTES.txt`` file. Deprecated features: - None Important note for Windows users ================================ If you are willing to use PyTables with Python 2.4 in Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and numarray (but NumPy and Numeric are also supported) package for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing doing selections in tables exceeding one billion of rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. 
For other platforms, chances are that the code can be easily compiled and run without further issues. Please, contact us in case you are experiencing problems. Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to various the users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for a (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From pgmdevlist at mailcan.com Wed Jun 21 06:12:09 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Wed, 21 Jun 2006 06:12:09 -0400 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> Message-ID: <200606210612.09374.pgmdevlist@mailcan.com> On Wednesday 21 June 2006 04:46, Michael Sorich wrote: > When transposing a masked array of dtype ' ndarray of dtype '|O4' was returned. OK, I see where the problem is: When your fill_value has a type that cannot be converted to the type of your data, the `filled` method (used internally in many functions, such as `transpose`) raises a TypeError, which is caught and your array is converted to 'O'. 
That's what happen here: your fill_value is a string, your data are integer, the types don't match, hence the conversion. So, no, I don't think that's a bug. Why filling when you don't have any masked values, then ? Well, there's a subtle difference between a boolean mask and a mask of booleans. When the mask is boolean (mask=nomask=False), there's no masked value, and `filled` returns the data. Now, when your mask is an array of boolean (your first case), MA doesn't check whether mask.any()==False to determine whether there are some missing data or not, it just processes the whole array of boolean. I agree that's a bit confusing here, and there might be some room for improvement (for example, changing the current `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing mask to nomask if mask.any()==False). But I don;t think that qualifies as bug. In short: when you have an array of numbers, don't try to fill it with characters. From Sheldon.Johnston at smhi.se Wed Jun 21 09:31:23 2006 From: Sheldon.Johnston at smhi.se (Johnston Sheldon) Date: Wed, 21 Jun 2006 15:31:23 +0200 Subject: [Numpy-discussion] LittleEndian Message-ID: <575A94F91D20704387D1C69A913E95EE035816@CORRE.ad.smhi.se> Hi, Can someone give a brief example of the Numeric function LittleEndian? I have written two separate functions to read binary data that can be either LittleEndian or BigEndian (using byteswapped() ) but it would be great with just one function. Much obliged, Sheldon -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.u.r.e.l.i.a.n at gmx.net Wed Jun 21 09:36:10 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 21 Jun 2006 15:36:10 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> Hi, > I'm not sure why bool arrays cannot be used as indices. 
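For what it's worth, the fill-driven dtype conversion Pierre describes above belongs to the old `MA` package; a sketch of the same operations with the current `numpy.ma` module (values are illustrative) shows the dtype surviving a transpose even when the mask is an all-False array of booleans:

```python
import numpy as np
import numpy.ma as ma

# Integer data with an explicit (all-False) mask of booleans
x = ma.array([[1, 2], [3, 4]], mask=[[False, False], [False, False]])

t = x.transpose()
# the dtype is preserved; no silent conversion to object
assert t.dtype == x.dtype

# filled() only substitutes where the mask is True
y = ma.array([1, 2, 3], mask=[False, True, False])
print(y.filled(-1))   # [ 1 -1  3]
```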
> The "natural" solution to the original problem seemed to be: > M[:,V>0] > but this is not allowed. I started a thread on this earlier this year. Try searching the archive for "boolean indexing" (if it comes back online somewhen). Travis had some reason for not implementing this, but unfortunately I do not remember what it was. The corresponding message might still linger on my home PC, which I can access this evening.... Johannes From fullung at gmail.com Wed Jun 21 09:58:28 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 21 Jun 2006 15:58:28 +0200 Subject: [Numpy-discussion] LittleEndian In-Reply-To: <575A94F91D20704387D1C69A913E95EE035816@CORRE.ad.smhi.se> Message-ID: <007901c6953a$c5ab7db0$01eaa8c0@dsp.sun.ac.za> Hey Sheldon With NumPy you can use dtype's newbyteorder method to convert any dtype's byte order to an order you specify: In [1]: import numpy as N In [2]: x = N.array([1],dtype='i4') In [4]: xle = N.asarray(x, dtype=x.dtype.newbyteorder('<')) In [5]: yle = N.asarray(y, dtype=y.dtype.newbyteorder('<')) In [6]: x.dtype Out[6]: dtype('i4') In [8]: xle.dtype Out[8]: dtype(' -----Original Message----- > From: numpy-discussion-bounces at lists.sourceforge.net [mailto:numpy- > discussion-bounces at lists.sourceforge.net] On Behalf Of Johnston Sheldon > Sent: 21 June 2006 15:31 > To: Numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] LittleEndian > > Hi, > > Can someone give a brief example of the Numeric function LittleEndian? > > I have written two separate functions to read binary data that can be > either LittleEndian or BigEndian (using byteswapped() ) but it would be > great with just one function. 
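Albert's interactive session above lost its byte-order characters to the archiver; a reconstruction with current NumPy — the `'<i4'`/`'>i4'` dtypes (little- and big-endian 32-bit ints) are assumed, as is the creation of `y` — would read:

```python
import numpy as np

x = np.array([1], dtype='<i4')   # explicitly little-endian int32
y = np.array([1], dtype='>i4')   # explicitly big-endian int32

# newbyteorder('<') yields a little-endian variant of any dtype;
# asarray then converts, byte-swapping the data where necessary.
xle = np.asarray(x, dtype=x.dtype.newbyteorder('<'))
yle = np.asarray(y, dtype=y.dtype.newbyteorder('<'))

print(xle.dtype, yle.dtype)   # both little-endian int32
```

This gives Sheldon a single read path: normalize whatever comes off disk to one byte order instead of branching on `byteswapped()`.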
From kwgoodman at gmail.com Wed Jun 21 10:13:54 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 07:13:54 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> Message-ID: On 6/20/06, Bill Baxter wrote: > >>> a[:,num.where(v>0.5)[0]] > array([[1, 2, 4], > [6, 7, 9]]) > > I'll put that up on the Matlab->Numpy page. That's a great addition to the Matlab to Numpy page. But it only works if v is a column vector. If v is a row vector, then where(v.A > 0.5)[0] will return all zeros. So for row vectors it should be where(v.A > 0.5)[1]. Or, in general, where(v.flatten(1).A > 0.5)[1] From kwgoodman at gmail.com Wed Jun 21 10:56:42 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 07:56:42 -0700 Subject: [Numpy-discussion] Selecting columns of a matrix Message-ID: Alan G Isaac wrote: > M.transpose()[V>0] > If you want the columns as columns, > you can transpose again. I can't get that to work when M is a n by m matrix: >> M = asmatrix(rand(3,4)) >> M matrix([[ 0.78970407, 0.78681448, 0.79167808, 0.57857822], [ 0.44567836, 0.23985597, 0.49392248, 0.0282004 ], [ 0.7044725 , 0.4090776 , 0.12035218, 0.71365101]]) >> V = asmatrix(rand(4,1)) >> V matrix([[ 0.61638738], [ 0.76928157], [ 0.3882811 ], [ 0.68979661]]) >> M.transpose()[V > 0.5] matrix([[ 0.78970407, 0.78681448, 0.57857822]]) The answer should be a 3 by 3 matrix. From travis at enthought.com Wed Jun 21 11:20:38 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 21 Jun 2006 10:20:38 -0500 Subject: [Numpy-discussion] SciPy 2006 Tutorials Message-ID: <449963C6.3070203@enthought.com> All, As part of this year's SciPy 2006 Conference, we've planned Coding Sprints on Monday and Tuesday (August 14-15) and a Tutorial Day Wednesday (August 16)--the normal conference presentations follow on Thursday and Friday (August 17-18). 
For this year at least, the Tutorials (and Sprints) are no additional charge (you're on your own for food on those days, though). With regard to Tutorial topics, we've settled on the following: "3D visualization in Python using tvtk and MayaVi" "Scientific Data Analysis and Visualization using IPython and Matplotlib." "Building Scientific Applications using the Enthought Tool Suite (Envisage, Traits, Chaco, etc.)" "NumPy (migration from Numarray & Numeric, overview of NumPy)" These will be in two tracks with two three hour sessions in each track. If you plan to attend, please send an email to tutorials at scipy.org with the two sessions you'd most like to hear and we'll build the schedule with a minimum of conflict. We'll post the schedule of the tracks on the Wiki here: http://www.scipy.org/SciPy2006/TutorialSessions Also, if you haven't registered already, the deadline for early registration is July 14. The abstract submission deadline is July 7. More information is here: http://www.scipy.org/SciPy2006 Thanks, Travis From oliphant.travis at ieee.org Wed Jun 21 11:52:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 09:52:24 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: <331116dc0606202110v3ddaa7ddp725c43842956f1c7@mail.gmail.com> <20060621152349.157974f4.simon@arrowtheory.com> Message-ID: <44996B38.808@ieee.org> Bill Baxter wrote: > On 6/21/06, *Simon Burton* > wrote: > > On Wed, 21 Jun 2006 13:48:48 +0900 > "Bill Baxter" > wrote: > > > > > >>> a[:,num.where(v>0.5)[0]] > > array([[1, 2, 4], > > [6, 7, 9]]) > > > > I'll put that up on the Matlab->Numpy page. > > oh, yuck. What about this: > > >>> a[:,num.nonzero(v>0.5)] > array([[0, 1, 3], > [5, 6, 8]]) > >>> > > > The nonzero() function seems like kind of an anomaly in and of > itself. It doesn't behave like other index-returning numpy > functions, or even like the method version, v.nonzero(), which returns > the typical tuple of array. 
So my feeling is ... ew to numpy.nonzero. How about we add the ability so that a[:, ] gets translated to a[:, nonzero()] ? -Travis From perrot at shfj.cea.fr Wed Jun 21 12:15:20 2006 From: perrot at shfj.cea.fr (Matthieu Perrot) Date: Wed, 21 Jun 2006 18:15:20 +0200 Subject: [Numpy-discussion] tiny patch + Playing with strings and my own array descr (PyArray_STRING, PyArray_OBJECT). In-Reply-To: <4497BED2.9090601@ieee.org> References: <200606162001.31342.perrot@shfj.cea.fr> <4497BED2.9090601@ieee.org> Message-ID: <200606211815.20053.perrot@shfj.cea.fr> Le Mardi 20 Juin 2006 11:24, Travis Oliphant a ?crit?: > Matthieu Perrot wrote: > > hi, > > > > I need to handle strings shaped by a numpy array whose data own to a C > > structure. There is several possible answers to this problem : > > 1) use a numpy array of strings (PyArray_STRING) and so a (char *) > > object in C. It works as is, but you need to define a maximum size to > > your strings because your set of strings is contiguous in memory. > > 2) use a numpy array of objects (PyArray_OBJECT), and wrap each ?C > > string? with a python object, using PyStringObject for example. Then our > > problem is that there is as wrapper as data element and I believe data > > can't be shared when your created PyStringObject using (char *) thanks to > > PyString_AsStringAndSize by example. > > > > > > Now, I will expose a third way, which allow you to use no size-limited > > strings (as in solution 1.) and don't create wrappers before you really > > need it (on demand/access). > > > > First, for convenience, we will use in C, (char **) type to build an > > array of string pointers (as it was suggested in solution 2). Now, the > > game is to make it works with numpy API, and use it in python through a > > python array. Basically, I want a very similar behabiour than arrays of > > PyObject, where data are not contiguous, only their address are. 
So, the > > idea is to create a new array descr based on PyArray_OBJECT and change > > its getitem/setitem functions to deals with my own data. > > > > I exepected numpy to work with this convenient array descr, but it fails > > because PyArray_Scalar (arrayobject.c) don't call descriptor getitem > > function (in PyArray_OBJECT case) but call 2 lines which have been > > copy/paste from the OBJECT_getitem function). Here my small patch is : > > replace (arrayobject.c:983-984): > > Py_INCREF(*((PyObject **)data)); > > return *((PyObject **)data); > > by : > > return descr->f->getitem(data, base); > > > > I play a lot with my new numpy array after this change and noticed that a > > lot of uses works : > > This is an interesting solution. I was not considering it, though, and > so I'm not surprised you have problems. You can register new types but > basing them off of PyArray_OBJECT can be problematic because of the > special-casing that is done in several places to manage reference counting. > > You are supposed to register your own data-types and get your own > typenumber. Then you can define all the functions for the entries as > you wish. > > Riding on the back of PyArray_OBJECT may work if you are clever, but it > may fail mysteriously as well because of a reference count snafu. > > Thanks for the tests and bug-reports. I have no problem changing the > code as you suggest. > > -Travis Thanks for applying my suggestions. I think, you suggest this kind of declaration : PyArray_Descr *descr = PyArray_DescrNewFromType(PyArray_VOID); descr->f->getitem = (PyArray_GetItemFunc *) my_getitem; descr->f->setitem = (PyArray_SetItemFunc *) my_setitem; descr->elsize = sizeof(char *); PyArray_RegisterDataType(descr); Without the last line, you are right it works and it follows the C-API way. But if I register this array descr, the typenumber is bigger than what PyTypeNum_ISFLEXIBLE function considers to be a flexible type. So the returned scalar object is badly-formed. 
Then, I get a segmentation fault later, because the created voidscalar has a null descr pointer. -- Matthieu Perrot From cloomis at astro.princeton.edu Wed Jun 21 12:41:14 2006 From: cloomis at astro.princeton.edu (Craig Loomis) Date: Wed, 21 Jun 2006 12:41:14 -0400 Subject: [Numpy-discussion] Bug with cumsum(dtype='f8')? Message-ID: Not sure if this one has been addressed. There appears to be a problem with cumsum(dtype=), with reasonably small numbers. Both PPC and x86 Macs. ======== import numpy print "numpy version:", numpy.__version__ v = numpy.arange(10002) # 10001 is OK, larger is "worse" print "ok: ", v.cumsum() print "not ok: ", v.cumsum(dtype=numpy.float64) print "ok: ", numpy.arange(10002,dtype=numpy.float64).cumsum() ========= ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:14) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> print "numpy version:", numpy.__version__ numpy version: 0.9.9.2549 >>> v = numpy.arange(10002) # 10001 is OK, larger is "worse" >>> print "ok: ", v.cumsum() ok: [ 0 1 3 ..., 49995000 50005000 50015001] >>> print "not ok: ", v.cumsum(dtype=numpy.float64) not ok: [ 0.00000000e+00 1.00010000e+04 3.00000000e+00 ..., 4.99950000e+07 5.00050000e+07 0.00000000e+00] >>> print "ok: ", numpy.arange(10002,dtype=numpy.float64).cumsum() ok: [ 0.00000000e+00 1.00000000e+00 3.00000000e+00 ..., 4.99950000e+07 5.00050000e+07 5.00150010e+07] >>> - craig From oliphant.travis at ieee.org Wed Jun 21 12:50:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 10:50:26 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <449978D2.1090000@ieee.org> Johannes Loehnert wrote: > Hi, > > >> I'm not sure why bool arrays cannot be used as indices. 
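Craig's `cumsum(dtype=...)` report above concerned the accumulation loop of that era's NumPy; a quick regression check against a current NumPy — which should pass, since sum(0..10001) = 10001·10002/2 = 50015001 — looks like:

```python
import numpy as np

v = np.arange(10002)

c_default = v.cumsum()
c_f64 = v.cumsum(dtype=np.float64)                    # the case that used to go wrong
c_from_f64 = np.arange(10002, dtype=np.float64).cumsum()

# all three accumulations should agree element-wise
assert np.all(c_f64 == c_default)
assert np.all(c_f64 == c_from_f64)
print(c_f64[-1])   # 50015001.0
```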
>> The "natural" solution to the original problem seemed to be: >> M[:,V>0] >> but this is not allowed. >> > > I started a thread on this earlier this year. Try searching the archive for > "boolean indexing" (if it comes back online somewhen). > > Travis had some reason for not implementing this, but unfortunately I do not > remember what it was. The corresponding message might still linger on my home > > PC, which I can access this evening.... > I suspect my reason was just not being sure if it could be explained consistently. But, after seeing this come up again. I decided it was easy enough to implement. So, in SVN NumPy, you will be able to do a[:,V>0] a[V>0,:] The V>0 will be replaced with integer arrays as if nonzero(V>0) had been called. -Travis From pau.gargallo at gmail.com Wed Jun 21 13:09:50 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 21 Jun 2006 19:09:50 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <449978D2.1090000@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> Message-ID: <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> On 6/21/06, Travis Oliphant wrote: > Johannes Loehnert wrote: > > Hi, > > > > > >> I'm not sure why bool arrays cannot be used as indices. > >> The "natural" solution to the original problem seemed to be: > >> M[:,V>0] > >> but this is not allowed. > >> > > > > I started a thread on this earlier this year. Try searching the archive for > > "boolean indexing" (if it comes back online somewhen). > > > > Travis had some reason for not implementing this, but unfortunately I do not > > remember what it was. The corresponding message might still linger on my home > > > > PC, which I can access this evening.... > > > > I suspect my reason was just not being sure if it could be explained > consistently. But, after seeing this come up again. I decided it was > easy enough to implement. 
> > So, in SVN NumPy, you will be able to do > > a[:,V>0] > a[V>0,:] > > The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > called. > does it work for a[,] ? what about a[ix_( nonzero(), nonzero() )] ? maybe the to nonzero() conversion would be more coherently done by the ix_ function than by the [] pau From kwgoodman at gmail.com Wed Jun 21 13:16:44 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 10:16:44 -0700 Subject: [Numpy-discussion] Element-by-element matrix multiplication Message-ID: The NumPy for Matlab Users page suggests mat(a.A * b.A) for element-by-element matrix multiplication. I think it would be helpful to also include multiply(a, b). a.*b mat(a.A * b.A) or multiply(a, b) From robert.kern at gmail.com Wed Jun 21 13:21:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 12:21:42 -0500 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: Keith Goodman wrote: > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > element-by-element matrix multiplication. I think it would be helpful > to also include multiply(a, b). > > a.*b > > mat(a.A * b.A) or > multiply(a, b) It is a wiki page. You may edit it yourself without needing to ask permission. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Wed Jun 21 13:22:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 11:22:04 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> Message-ID: <4499803C.1010302@ieee.org> Pau Gargallo wrote: > On 6/21/06, Travis Oliphant wrote: > >> Johannes Loehnert wrote: >> >>> Hi, >>> >>> >>> >>>> I'm not sure why bool arrays cannot be used as indices. >>>> The "natural" solution to the original problem seemed to be: >>>> M[:,V>0] >>>> but this is not allowed. >>>> >>>> >>> I started a thread on this earlier this year. Try searching the archive for >>> "boolean indexing" (if it comes back online somewhen). >>> >>> Travis had some reason for not implementing this, but unfortunately I do not >>> remember what it was. The corresponding message might still linger on my home >>> >>> PC, which I can access this evening.... >>> >>> >> I suspect my reason was just not being sure if it could be explained >> consistently. But, after seeing this come up again. I decided it was >> easy enough to implement. >> >> So, in SVN NumPy, you will be able to do >> >> a[:,V>0] >> a[V>0,:] >> >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been >> called. >> >> > > does it work for a[,] ? > Sure, it will work. Basically all boolean arrays will be interpreted as nonzero(V>0), everywhere. > what about a[ix_( nonzero(), nonzero() )] ? > > maybe the to nonzero() conversion would be more > coherently done by the ix_ function than by the [] > > I've just added support for inside ix_ so that the nonzero will be done automatically as well. So a[ix_(,)] will give the cross-product selection. 
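A small sketch of the announced behaviour, checked against a NumPy where this feature has landed; the arrays M and V here are invented for illustration:

```python
import numpy as np

# Boolean indexing along one axis of a 2-d array, as described above.
M = np.arange(12).reshape(3, 4)
V = np.array([-1.0, 2.0, 0.5, -3.0])

cols = M[:, V > 0]                    # boolean index on the second axis
same = M[:, np.nonzero(V > 0)[0]]     # explicit integer-index equivalent
assert (cols == same).all()
assert cols.shape == (3, 2)
```

As Travis says, the boolean array behaves exactly as if `nonzero()` had been called on it first.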
-Travis From webb.sprague at gmail.com Wed Jun 21 13:27:53 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Wed, 21 Jun 2006 10:27:53 -0700 Subject: [Numpy-discussion] Problem installing numpy on Gentoo Message-ID: I am trying to install numpy on Gentoo (see my info below for version etc). It all seems to go fine, but when I try to import it and run the tests, I get the following error (in ipython): In [1]: import numpy import linalg -> failed: libg2c.so.0: cannot open shared object file: No such file or directory I have gfortran on my system, but libg2c is not part of the gcc-4.1.1 distribution anymore (maybe that is a bug with Gentoo?). I also get the same error when I run f2py from the command line. Here is the bug I filed: http://bugs.gentoo.org/show_bug.cgi?id=136988 Info that might help: cowboy ~ # ls /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/ crtbegin.o libgcc.a libgfortran.so.1 libobjc.so.1.0.0 crtbeginS.o libgcc_eh.a libgfortran.so.1.0.0 libstdc++.a crtbeginT.o libgcc_s.so libgfortranbegin.a libstdc++.so crtend.o libgcc_s.so.1 libgfortranbegin.la libstdc++.so.6 crtendS.o libgcov.a libobjc.a libstdc++.so.6.0.8 crtfastmath.o libgfortran.a libobjc.la libstdc++_pic.a include libgfortran.la libobjc.so libsupc++.a install-tools libgfortran.so libobjc.so.1 libsupc++.la cowboy ~ # ls /usr/lib/gcc/i686-pc-linux-gnu/3.4.6/ SYSCALLS.c.X libffi.la libobjc.la crtbegin.o libffi.so libobjc.so crtbeginS.o libfrtbegin.a libobjc.so.1 crtbeginT.o libg2c.a libobjc.so.1.0.0 crtend.o libg2c.la libstdc++.a crtendS.o libg2c.so libstdc++.la hardened.specs libg2c.so.0 libstdc++.so hardenednopie.specs libg2c.so.0.0.0 libstdc++.so.6 hardenednopiessp.specs libgcc.a libstdc++.so.6.0.3 hardenednossp.specs libgcc_eh.a libstdc++_pic.a include libgcc_s.so libsupc++.a install-tools libgcc_s.so.1 libsupc++.la libffi-2.00-beta.so libgcov.a specs libffi.a libobjc.a vanilla.specs cowboy ~ # emerge --info Portage 2.1.1_pre1-r1 (default-linux/x86/2006.0, gcc-4.1.1/vanilla, glibc-2.4-r3, 
2.6.11-gentoo-r9 i686) ================================================================= System uname: 2.6.11-gentoo-r9 i686 AMD Athlon(tm) Processor Gentoo Base System version 1.12.1 distcc 2.18.3 i686-pc-linux-gnu (protocols 1 and 2) (default port 3632) [disabled] ccache version 2.4 [enabled] dev-lang/python: 2.4.3-r1 dev-python/pycrypto: 2.0.1-r5 dev-util/ccache: 2.4-r2 dev-util/confcache: [Not Present] sys-apps/sandbox: 1.2.18.1 sys-devel/autoconf: 2.13, 2.59-r7 sys-devel/automake: 1.4_p6, 1.5, 1.6.3, 1.7.9-r1, 1.8.5-r3, 1.9.6-r2 sys-devel/binutils: 2.16.1-r2 sys-devel/gcc-config: 2.0.0_rc1 sys-devel/libtool: 1.5.22 virtual/os-headers: 2.6.11-r5 ACCEPT_KEYWORDS="x86 ~x86" AUTOCLEAN="yes" CBUILD="i686-pc-linux-gnu" CFLAGS=" -march=athlon -O2 -pipe -fomit-frame-pointer" CHOST="i686-pc-linux-gnu" CONFIG_PROTECT="/etc /usr/share/X11/xkb" CONFIG_PROTECT_MASK="/etc/env.d /etc/eselect/compiler /etc/gconf /etc/revdep-rebuild /etc/terminfo /etc/texmf/web2c" CXXFLAGS=" -march=athlon -O2 -pipe -fomit-frame-pointer" DISTDIR="/usr/portage/distfiles" FEATURES="autoconfig ccache distlocks metadata-transfer sandbox sfperms" GENTOO_MIRRORS="http://distfiles.gentoo.org http://distro.ibiblio.org/pub/linux/distributions/gentoo" PKGDIR="/usr/portage/packages" PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --delete-after --stats --timeout=180 --exclude='/distfiles' --exclude='/local' --exclude='/packages'" PORTAGE_TMPDIR="/var/tmp" PORTDIR="/usr/portage" PORTDIR_OVERLAY="/usr/local/portage" SYNC="rsync://rsync.gentoo.org/gentoo-portage" USE="x86 X alsa apache2 apm arts avi berkdb bitmap-fonts blas cli crypt cups dba dri eds emacs emboss encode esd f77 fftw foomaticdb fortran g77 gdbm gif gnome gpm gstreamer gtk gtk2 imlib ipv6 isdnlog jpeg lapack libg++ libwww mad mikmod mime mmap motif mp3 mpeg ncurses nls nptl nptlonly objc ogg opengl oss pam pcre pdflib perl png postgres pppd python quicktime readline reflection sdl 
session spell spl ssl svg tcltk tcpd tidy truetype truetype-fonts type1-fonts udev unicode vorbis xml xmms xorg xv zlib elibc_glibc kernel_linux userland_GNU" Unset: CTARGET, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LANG, LC_ALL, LDFLAGS, LINGUAS, MAKEOPTS, PORTAGE_RSYNC_EXTRA_OPTS cowboy ~ # gcc --version i686-pc-linux-gnu-gcc (GCC) 4.1.1 (Gentoo 4.1.1) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. From pau.gargallo at gmail.com Wed Jun 21 13:31:48 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 21 Jun 2006 19:31:48 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <4499803C.1010302@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> <4499803C.1010302@ieee.org> Message-ID: <6ef8f3380606211031sd7395d5k3cef4838efd2e96c@mail.gmail.com> On 6/21/06, Travis Oliphant wrote: > Pau Gargallo wrote: > > On 6/21/06, Travis Oliphant wrote: > > > >> Johannes Loehnert wrote: > >> > >>> Hi, > >>> > >>> > >>> > >>>> I'm not sure why bool arrays cannot be used as indices. > >>>> The "natural" solution to the original problem seemed to be: > >>>> M[:,V>0] > >>>> but this is not allowed. > >>>> > >>>> > >>> I started a thread on this earlier this year. Try searching the archive for > >>> "boolean indexing" (if it comes back online somewhen). > >>> > >>> Travis had some reason for not implementing this, but unfortunately I do not > >>> remember what it was. The corresponding message might still linger on my home > >>> > >>> PC, which I can access this evening.... > >>> > >>> > >> I suspect my reason was just not being sure if it could be explained > >> consistently. But, after seeing this come up again. I decided it was > >> easy enough to implement. 
> >> > >> So, in SVN NumPy, you will be able to do > >> > >> a[:,V>0] > >> a[V>0,:] > >> > >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > >> called. > >> > >> > > > > does it work for a[,] ? > > > Sure, it will work. Basically all boolean arrays will be interpreted as > nonzero(V>0), everywhere. > > what about a[ix_( nonzero(), nonzero() )] ? > > > > maybe the to nonzero() conversion would be more > > coherently done by the ix_ function than by the [] > > > > > I've just added support for inside ix_ so that the nonzero > will be done automatically as well. > > So > > a[ix_(,)] will give the cross-product selection. > ok so: a[ b1, b2 ] will be different than a[ ix_(b1,b2) ] just like with integer indices. Make sense to me. also, a[b] will be as before (a[where(b)]) ? maybe a trailing coma could lunch the new behaviour? a[b] -> a[where(b)] a[b,] -> a[b,...] -> a[nonzero(b)] Thanks, pau From kwgoodman at gmail.com Wed Jun 21 13:45:54 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 Jun 2006 10:45:54 -0700 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: On 6/21/06, Robert Kern wrote: > Keith Goodman wrote: > > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > > element-by-element matrix multiplication. I think it would be helpful > > to also include multiply(a, b). > > > > a.*b > > > > mat(a.A * b.A) or > > multiply(a, b) > > It is a wiki page. You may edit it yourself without needing to ask permission. OK. Done. I also added a notice about SciPy's PayPal account being suspended. 
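For reference, a quick check that the two element-by-element recipes from this thread agree; the matrices here are invented examples:

```python
import numpy as np

# Invented example matrices for comparing the two recipes.
a = np.matrix([[1.0, 2.0], [3.0, 4.0]])
b = np.matrix([[5.0, 6.0], [7.0, 8.0]])

r1 = np.matrix(a.A * b.A)     # the "mat(a.A * b.A)" recipe
r2 = np.multiply(a, b)        # the "multiply(a, b)" recipe

assert (r1 == r2).all()
assert isinstance(r2, np.matrix)   # multiply preserves the matrix subclass
```

The `multiply` form avoids the round-trip through plain arrays, which is the point Keith raises above.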
From oliphant.travis at ieee.org Wed Jun 21 14:22:51 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 12:22:51 -0600 Subject: [Numpy-discussion] memory leak in array In-Reply-To: <200606211618.k5LGIYWw008784@rm-rstar.sfu.ca> References: <200606211618.k5LGIYWw008784@rm-rstar.sfu.ca> Message-ID: <44998E7B.50409@ieee.org> saagesen at sfu.ca wrote: > Hi Travis > > Not sure if you've had a chance to look at the previous code I sent or not, > but I was able to reduce the code (see below) to its smallest size and still > have the problem, albeit at a slower rate. The problem appears to come from > changing values in the array. Does this create another reference to the > array, which can't be released? If this problem does not have a work-around > or "fix", please let me know. > This is now fixed in SVN. -Travis From faltet at carabos.com Wed Jun 21 05:14:58 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed, 21 Jun 2006 11:14:58 +0200 Subject: [Numpy-discussion] ANN: PyTables (a hierarchical database) 1.3.2 released Message-ID: <200606211115.02727.faltet@carabos.com> =========================== Announcing PyTables 1.3.2 =========================== This is a new minor release of PyTables. There you will find, among other things, improved support for NumPy strings and the ability to create indexes of NumPy-flavored tables (this capability was broken in earlier versions). *Important note*: one of the fixes addresses an important bug that shows when browsing files with lots of nodes, making PyTables to crash. Because of this, an upgrade is encouraged. Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed. Changes more in depth ===================== Bug fixes: - Changed the nodes in the lru cache heap from Pyrex to pure Python ones. 
This fixes a problem that can appear in certain situations (mainly, when navigating back and forth along lots of Node objects). While this fix is sub-optimal, at least it behaves well until the faster approach eventually comes back. - Due to different conventions in padding chars, a special case has been added when converting from numarray strings into numpy ones so that these different conventions are handled correctly. Fixes ticket #13 and other strange numpy string quirks (thanks to Pepe Barbe). - Solved an issue that appeared when indexing Table columns with flavor 'numpy'. Now, tables that are 'numpy' flavored can be indexed as well. - Solved an issue when saving string atoms with ``VLArray`` with a flavor different from "python". The problem was that the item sizes of the original strings were not checked, so rubbish was put on-disk. Now, if an item size of the input is different from the item size of the atom, a conversion is forced. Added tests to check for these situations. - Fixed a problem with removing a table with indexed columns under certain situations. Thanks to Andrew Straw for reporting it. - Fixed a small glitch in the ``ptdump`` utility that prevented dumping ``EArray`` data with an enlargeable dimension different from the first one. - Make parent node unreference child node when creation fails. Fixes ticket #12 (thanks to Eilif). - Saving zero-length strings in Array objects used to raise a ZeroDivisionError. Now, it returns a more sensible NotImplementedError until this is supported. Backward-incompatible changes: - Please, see ``RELEASE-NOTES.txt`` file. Deprecated features: - None Important note for Windows users ================================ If you want to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003.
It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and numarray (but NumPy and Numeric are also supported) package for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing doing selections in tables exceeding one billion of rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please, contact us in case you are experiencing problems. 
Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html From tim.hochberg at cox.net Wed Jun 21 15:02:27 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed, 21 Jun 2006 12:02:27 -0700 Subject: [Numpy-discussion] Numexpr does broadcasting. Message-ID: <449997C3.2000905@cox.net> Numexpr can now handle broadcasting. As an example, check out this implementation of the distance-in-a-bunch-of-dimensions function that's been going around. This is 80% faster than the most recent one posted on my box and considerably easier to read. expr = numexpr("(a - b)**2", [('a', float), ('b', float)]) def dist_numexpr(A, B): return sqrt(sum(expr(A[:,newaxis], B[newaxis,:]), axis=2)) Now, if we just could do 'sum' inside the numexpr, I bet that this would really scream. This is something that David has talked about adding at various points.
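For readers without numexpr handy, the same broadcast pattern can be written in plain NumPy; this is only a sketch of the idea, with made-up array shapes:

```python
import numpy as np

# Plain-NumPy version of the broadcast used by dist_numexpr above:
# (5,1,3) - (1,4,3) broadcasts to (5,4,3); summing squared differences
# over the last axis gives all pairwise Euclidean distances at once.
A = np.random.rand(5, 3)
B = np.random.rand(4, 3)

D = np.sqrt(((A[:, np.newaxis, :] - B[np.newaxis, :, :]) ** 2).sum(axis=2))
assert D.shape == (5, 4)

# Spot-check one entry against a direct computation.
assert np.allclose(D[0, 0], np.sqrt(((A[0] - B[0]) ** 2).sum()))
```

The numexpr version simply moves the `(a - b)**2` part of this expression into compiled code.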
I just made his life a bit harder by supporting broadcasting, but I still don't think it would be all that hard to add reduction operations like sum and product as long as they were done at the outermost level of the expression. That is, "sum(x*2 + 5)" should be doable, but "5 + sum(x**2)" would likely be difficult. Anyway, I thought that was cool, so I figured I'd share ;-) [Bizzarely, numexpr seems to run faster on my box when compiled with "-O1" than when compiled with "-O2" or "-O2 -funroll-all-loops". Go figure.] -tim From wbaxter at gmail.com Wed Jun 21 18:40:47 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 07:40:47 +0900 Subject: [Numpy-discussion] Element-by-element matrix multiplication In-Reply-To: References: Message-ID: Actually I think using mat() (just an alias for the matrix constructor) is a bad way to do it. That mat() (and most others on that page) should probably be replaced with asmatrix() to avoid the copy. --bb On 6/22/06, Keith Goodman wrote: > > On 6/21/06, Robert Kern wrote: > > > Keith Goodman wrote: > > > The NumPy for Matlab Users page suggests mat(a.A * b.A) for > > > element-by-element matrix multiplication.
I think it would be helpful > > > to also include multiply(a, b). > > > > > > a.*b > > > > > > mat(a.A * b.A) or > > > multiply(a, b) > > > > It is a wiki page. You may edit it yourself without needing to ask > permission. > > OK. Done. I also added a notice about SciPy's PayPal account being > suspended. > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Jun 21 22:08:50 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 22:08:50 -0400 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: > Alan G Isaac wrote: >> M.transpose()[V>0] >> If you want the columns as columns, >> you can transpose again. On Wed, 21 Jun 2006, Keith Goodman apparently wrote: > I can't get that to work when M is a n by m matrix: The problem is not M being a matrix. You made V a matrix (i.e., 2d). So you need to ravel() it first. >> M.transpose()[V.ravel()>0] hth, Alan Isaac From aisaac at american.edu Wed Jun 21 22:08:52 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 Jun 2006 22:08:52 -0400 Subject: [Numpy-discussion] flatiter and inequality comparison Message-ID: I do not understand how to think about this: >>> x=arange(3).flat >>> x >>> x>2 True >>> x>10 True Why? 
(I realize this behaves like xrange, so this may not be a numpy question, but I do not understand that behavior either.) What I expected: that a flatiter object would iterate through its values and return either - a flatiter of the resulting comparisons, or - an array of the resulting comparisons Thank you, Alan Isaac From michael.sorich at gmail.com Wed Jun 21 22:01:59 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Thu, 22 Jun 2006 11:31:59 +0930 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606210612.09374.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> Message-ID: <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> I was setting the fill_value as 'NA' when constructing the array so the masked values would be printed as 'NA'. It is not a big deal to avoid doing this. Nevertheless, the differences between a masked array with a boolean mask and a mask of booleans have caused me trouble before. Especially when there are hidden in-place conversions of a mask which is a array of False to a mask which is False. e.g. import numpy print numpy.version.version ma1 = numpy.ma.array(((1.,2,3),(4,5,6)), mask=((0,0,0),(0,0,0))) print ma1.mask a1 = numpy.asarray(ma1) print ma1.mask ---------------------- 0.9.9.2538 [[False False False] [False False False]] False On 6/21/06, Pierre GM wrote: > On Wednesday 21 June 2006 04:46, Michael Sorich wrote: > > When transposing a masked array of dtype ' > ndarray of dtype '|O4' was returned. > > > OK, I see where the problem is: > When your fill_value has a type that cannot be converted to the type of your > data, the `filled` method (used internally in many functions, such as > `transpose`) raises a TypeError, which is caught and your array is converted > to 'O'. > > That's what happen here: your fill_value is a string, your data are integer, > the types don't match, hence the conversion. 
So, no, I don't think that's a > bug. > > Why filling when you don't have any masked values, then ? Well, there's a > subtle difference between a boolean mask and a mask of booleans. > When the mask is boolean (mask=nomask=False), there's no masked value, and > `filled` returns the data. > Now, when your mask is an array of boolean (your first case), MA doesn't check > whether mask.any()==False to determine whether there are some missing data or > not, it just processes the whole array of boolean. > > I agree that's a bit confusing here, and there might be some room for > improvement (for example, changing the current > `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing > mask to nomask if mask.any()==False). But I don't think that qualifies as a > bug. > > In short: > when you have an array of numbers, don't try to fill it with characters. > From simon at arrowtheory.com Thu Jun 22 07:19:05 2006 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 22 Jun 2006 12:19:05 +0100 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <449978D2.1090000@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> Message-ID: <20060622121905.2d65372d.simon@arrowtheory.com> On Wed, 21 Jun 2006 10:50:26 -0600 Travis Oliphant wrote: > > So, in SVN NumPy, you will be able to do > > a[:,V>0] > a[V>0,:] > > The V>0 will be replaced with integer arrays as if nonzero(V>0) had been > called. OK. But just for the record, we should note how to do the operation that this used to do, eg. >>> a=numpy.array([1,2]) >>> a[[numpy.bool_(1)]] array([2]) >>> This could be a way of, say, mapping a large boolean array onto some other values (1 or 2 in the above case). So, with the new implementation, is it possible to cast the bool array to an integer type without incurring a copy overhead ?
It seems that as Travis overloads it more and more it might then slow down in some cases. I must admit my vision is blurring and head is spinning as numpy goes through these growing pains. I hope it's over soon. Not because I have trouble keeping up (although I do) but it's my matlab/R/numarray entrenched co-workers who cannot be exposed to this unstable development (they will run screaming to the woods). cheers, Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From wbaxter at gmail.com Wed Jun 21 23:23:38 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 12:23:38 +0900 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: References: Message-ID: On 6/22/06, Alan G Isaac wrote: > > > Alan G Isaac wrote: > >> M.transpose()[V>0] > >> If you want the columns as columns, > >> you can transpose again. > > > On Wed, 21 Jun 2006, Keith Goodman apparently wrote: > > I can't get that to work when M is an n by m matrix: > > The problem is not M being a matrix. > You made V a matrix (i.e., 2d). > So you need to ravel() it first. > >> M.transpose()[V.ravel()>0] No dice, V.ravel() returns a matrix still. Looks like you'll need M.T[V.A.ravel()>0].T Just lovely. Is the new bool conversion thingy going to help make the syntax more reasonable for matrices, too? Seems like it will still require M[:,V.A.ravel() > 0] or M[:, V.A.squeeze() > 0] or M[:,V.A[:,0]>0] Anyway, this seems to me just more evidence that one is better off getting used to the 'array' way of doing things rather than clinging to Matlab ways by using 'matrix'. Is it worth dealing with the extra A's and asmatrix()'s and squeeze()'s that seem to crop up just to be able to write A*B instead of dot(A,B) (*)? --Bill (*) Ok, there's also the bit about being able to tell column vectors from row vectors and getting useful errors when you try to use a row that should have been a column.
And then there's also the .T, .I, .H convenience factor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fitz at astron.berkeley.edu Thu Jun 22 00:39:51 2006 From: fitz at astron.berkeley.edu (Michael Fitzgerald) Date: Wed, 21 Jun 2006 21:39:51 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays Message-ID: <200606212139.52511.fitz@astron.berkeley.edu> Hello all, I'm encountering some (relatively new?) behavior with masked arrays that strikes me as bizarre. Raising zero to a floating-point value is triggering a mask to be set, even though the result should be well-defined. When using fixed-point integers for powers, everything works as expected. I'm seeing this with both numarray and numpy. Take the case of 0**1, illustrated below: >>> import numarray as n1 >>> import numarray.ma as n1ma >>> n1.array(0.)**1 array(0.0) >>> n1.array(0.)**1. array(0.0) >>> n1ma.array(0.)**1 array(0.0) >>> n1ma.array(0.)**1. array(data = [1.0000000200408773e+20], mask = 1, fill_value=[ 1.00000002e+20]) >>> import numpy as n2 >>> import numpy.core.ma as n2ma >>> n2.array(0.)**1 array(0.0) >>> n2.array(0.)**1. array(0.0) >>> n2ma.array(0.)**1 array(0.0) >>> n2ma.array(0.)**1. array(data = 1e+20, mask = True, fill_value=1e+20) I've been using python v2.3.5 & v.2.4.3, numarray v1.5.1, and numpy v0.9.8, and tested this on an x86 Debian box and a PPC OSX box. It may be the case that this issue has manifested in the past several months, as it's causing a new problem with some of my older code. Any thoughts? 
Thanks in advance, Mike From oliphant.travis at ieee.org Thu Jun 22 01:58:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 21 Jun 2006 23:58:52 -0600 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <20060622121905.2d65372d.simon@arrowtheory.com> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <20060622121905.2d65372d.simon@arrowtheory.com> Message-ID: <449A319C.6030008@ieee.org> Simon Burton wrote: > On Wed, 21 Jun 2006 10:50:26 -0600 > Travis Oliphant wrote: > > >> So, in SVN NumPy, you will be able to do >> >> a[:,V>0] >> a[V>0,:] >> >> The V>0 will be replaced with integer arrays as if nonzero(V>0) had been >> called. >> > > OK. > But just for the record, we should note how to > do the operation that this used to do, eg. > > >>>> a=numpy.array([1,2]) >>>> a[[numpy.bool_(1)]] >>>> > array([2] > This behavior hasn't changed... All that's changed is that what used to raise an error (boolean arrays in a tuple) now works in the same way that boolean arrays worked before. > > So, with the new implementation, is it possible to cast > the bool array to an integer type without incurring a copy overhead ? > I'm not sure what you mean. What copy overhead? There is still copying going on. The way it's been implemented, the boolean arrays get replaced with integer index arrays under the hood so it is really nearly identical to replacing the boolean array with nonzero(). > And finally, is someone keeping track of the performance > of array getitem ? It seems that as travis overloads it more and > more it might then slow down in some cases. > Actually, I'm very concientious of the overhead of getitem in code that I add. I just today found a memory leak in code that was added that I did not review carefully that was also slowing down all accesses of arrays > 1d that resulted in array scalars. I added an optimization that should speed that up. 
But, it would be great if others could watch the speed changes for basic operations. > I must admit my vision is blurring and head is spining as numpy > goes through these growing pains The 1.0 beta release is coming shortly. I would like to see the first beta by the first of July. The final 1.0 release won't occur, though, until after SciPy 2006. Thanks for your patience. We've been doing a lot of house-cleaning lately to separate the "old but compatible" interface from the "new." This has resulted in some confusion, to be sure. Please don't hesitate to voice your concerns. -Travis From schofield at ftw.at Thu Jun 22 03:53:44 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 22 Jun 2006 09:53:44 +0200 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: Message-ID: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > Actually I think using mat() (just an alias for the matrix > constructor) is a bad way to do it. That mat() (and most others on > that page) should probably be replaced with asmatrix() to avoid the > copy. Perhaps the 'mat' function should become an alias for 'asmatrix'. I've thought this for a while. Then code and documentation like this page could remain short and simple without incurring the performance penalty. Go on, shoot me down! :) -- Ed From stefan at sun.ac.za Sun Jun 18 21:14:44 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 19 Jun 2006 03:14:44 +0200 Subject: [Numpy-discussion] Numexpr does broadcasting. In-Reply-To: <449997C3.2000905@cox.net> References: <449997C3.2000905@cox.net> Message-ID: <20060619011444.GA17434@mentat.za.net> Hi Tim On Wed, Jun 21, 2006 at 12:02:27PM -0700, Tim Hochberg wrote: > > Numexpr can now handle broadcasting. As an example, check out this > implementation of the distance-in-a-bunch-of-dimenstions function that's > been going around. 
This is 80% faster than the most recent one posted on > my box and considerably easier to read. This looks really cool. However, it does seem to break scalar operation: a = 3. b = 4. expr = numexpr("2*a+3*b",[('a',float),('b',float)]) expr.run(a,b) Out[41]: array(-7.1680117685147315e-39) I haven't used numexpr before, so I could be doing something silly (although I did verify that the above works on r1986). Cheers Stéfan From pau.gargallo at gmail.com Thu Jun 22 06:26:18 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 22 Jun 2006 12:26:18 +0200 Subject: [Numpy-discussion] Selecting columns of a matrix In-Reply-To: <4499803C.1010302@ieee.org> References: <200606211536.10745.a.u.r.e.l.i.a.n@gmx.net> <449978D2.1090000@ieee.org> <6ef8f3380606211009i5e225282n4a4e8e4dc7adbad1@mail.gmail.com> <4499803C.1010302@ieee.org> Message-ID: <6ef8f3380606220326p1631cc90j755550f91b6bc1b2@mail.gmail.com> ''' The following mail is a bit long and tedious to read, sorry about that. Here is the abstract: "I would like boolean indexing to work like slices and not like arrays of indices" ''' hi, I'm _really_ sorry to insist, but I have been thinking on it and I don't feel like replacing with nonzero() is what we want. For me this is a bad trick equivalent to replacing slices to arrays of indices with r_[]: - it works only if you do that for a single axis.
Let me explain: if i have an array, >>> from numpy import * >>> a = arange(12).reshape(3,4) i can slice it: >>> a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) i can define boolean arrays 'equivalent' to this slices >>> b1 = array([False,True,True]) # equivalent to 1:3 >>> b2 = array([True,True,True,False]) # equivalent to 0:3 now if i use one of this boolean arrays for indexing, all work like with slices: >>> a[b1,:] #same as a[1:3,:] array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> a[:,b2] # same as a[:,0:3] array([[ 0, 1, 2], [ 4, 5, 6], [ 8, 9, 10]]) but if I use both at the same time: >>> a[b1,b2] # not equivalent to a[1:3,0:3] but to a[r_[1:3],r_[0:3]] Traceback (most recent call last): File "", line 1, in ? ValueError: shape mismatch: objects cannot be broadcast to a single shape it doesn't work because nonzero(b1) and nonzero(b2) have different shapes. if I want the equivalent to a[1:3,0:3], i can do >>> a[ix_(b1,b2)] array([[ 4, 5, 6], [ 8, 9, 10]]) I can not see when the current behaviour of a[b1,b2] would be used. From my (probably naive) point of view, boolean indices should not be converted to nonzero(), but to some kind of slicing object. In that way boolean indexing could work like slices and not like arrays of integers, which will be more intuitive for me. Converting slices to arrays of indices is a trick that only works for one axis: >>> a[r_[1:3],0:3] #same as a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) >>> a[1:3,r_[0:3]] #same as a[1:3,0:3] array([[ 4, 5, 6], [ 8, 9, 10]]) >>> a[r_[1:3],r_[0:3]] # NOT same as a[1:3,0:3] Traceback (most recent call last): File "", line 1, in ? ValueError: shape mismatch: objects cannot be broadcast to a single shape am I completely wrong?? maybe the current behaviour (only useful for one axis) is enough?? sorry for asking things and not giving solutions and thanks for everything.
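The ix_() remedy Pau mentions combines boolean axes the way slices do; a minimal sketch (assuming the same a, b1 and b2 as above, in modern NumPy):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b1 = np.array([False, True, True])        # plays the role of 1:3
b2 = np.array([True, True, True, False])  # plays the role of 0:3

# ix_ builds an open mesh, so the two boolean axes combine like
# slices (a cross product) instead of being paired element-by-element;
# this selects rows 1:3 and columns 0:3.
print(a[np.ix_(b1, b2)])
```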
pau PD: I noticed that the following code works >>> a[a>4,:,:,:,:,1:2:3,...,4:5:6] array([ 5, 6, 7, 8, 9, 10, 11]) From konrad.hinsen at laposte.net Thu Jun 22 06:39:47 2006 From: konrad.hinsen at laposte.net (Konrad Hinsen) Date: Thu, 22 Jun 2006 12:39:47 +0200 Subject: [Numpy-discussion] Numeric and Python 2.5b1 Message-ID: Those who try out Python 2.5b1 and add Numeric might be annoyed by the warning message that Python issues when Numeric is imported the first time. This is due to the fact that Numeric lives inside a directory called "Numeric" without being a package - Numeric has been around for longer than packages in Python. You can get rid of this warning by adding the following lines to sitecustomize.py: import warnings try: warnings.filterwarnings("ignore", category=ImportWarning) except NameError: pass del warnings The try statement ensures that the code will work for older Python releases as well. Konrad. -- --------------------------------------------------------------------- Konrad Hinsen Centre de Biophysique Moléculaire, CNRS Orléans Synchrotron Soleil - Division Expériences Saint Aubin - BP 48 91192 Gif sur Yvette Cedex, France Tel. +33-1 69 35 97 15 E-Mail: hinsen ?t cnrs-orleans.fr --------------------------------------------------------------------- From wbaxter at gmail.com Thu Jun 22 04:54:04 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 22 Jun 2006 17:54:04 +0900 Subject: [Numpy-discussion] Matrix construction In-Reply-To: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/22/06, Ed Schofield wrote: > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > Actually I think using mat() (just an alias for the matrix > > constructor) is a bad way to do it. That mat() (and most others on > > that page) should probably be replaced with asmatrix() to avoid the > > copy. > > Perhaps the 'mat' function should become an alias for 'asmatrix'.
> I've thought this for a while. That makes sense to me. As far as I know, asmatrix() defaults to calling the constructor if it can't snarf the memory of the object being passed in. So, go on, shoot Ed and me down! :-) --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From j-renner at northwestern.edu Thu Jun 22 12:24:57 2006 From: j-renner at northwestern.edu (Jocelyn E. Renner) Date: Thu, 22 Jun 2006 10:24:57 -0600 Subject: [Numpy-discussion] Failure to install Message-ID: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> Hello! I am attempting to install numarray on my Mac OX 10.3, and I successfully downloaded it. Since I am attempting to use this with Cantera, I followed their recommendations as to installing which included typing: python setup.py install when I was in the numarray directory. When I executed this, I received the following error message: error: could not create '/System/Library/Frameworks/Python.framework/Versions/2.3/include/ python2.3/numarray': Permission denied I have tried to unlock this folder with little to no luck (I must confess I am not the most computer savvy person ever). If anyone could give me some advice as to how to get this to install properly, I'd appreciate it! If it does not need to be in this folder, is there anyway to bypass this? Thanks so much! Jocelyn Jocelyn Renner Mechanical Engineering, Northwestern University ------------------------------------------------------------------------ ----------------- No man is an island, entire of itself...any man's death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee. 
---John Donne Meditation XVII From david.huard at gmail.com Thu Jun 22 12:26:52 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 22 Jun 2006 12:26:52 -0400 Subject: [Numpy-discussion] unique() should return a sorted array Message-ID: <91cf711d0606220926m48c6857cr78b4484f4a137a2@mail.gmail.com> Hi, Numpy's unique(x) returns an array x with repetitions removed. However, since it returns asarray(dict.keys()), the resulting array is not sorted, worse, the original order may not be conserved. I think that unique() should return a sorted array, like its matlab homonym. Regards, David Huard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Thu Jun 22 12:33:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 09:33:10 -0700 Subject: [Numpy-discussion] Failure to install In-Reply-To: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> References: <06b92fe0f01ca061c3928fcf4740c78f@northwestern.edu> Message-ID: On 6/22/06, Jocelyn E. Renner wrote: > python setup.py install > > when I was in the numarray directory. When I executed this, I received > the following error message: > error: could not create > '/System/Library/Frameworks/Python.framework/Versions/2.3/include/ > python2.3/numarray': Permission denied > > I have tried to unlock this folder with little to no luck (I must > confess I am not the most computer savvy person ever). Try sudo python setup.py install From kwgoodman at gmail.com Thu Jun 22 12:47:12 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 09:47:12 -0700 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/22/06, Bill Baxter wrote: > On 6/22/06, Ed Schofield wrote: > > > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > > > Actually I think using mat() (just an alias for the matrix > > > constructor) is a bad way to do it. 
That mat() (and most others on > > > that page) should probably be replaced with asmatrix() to avoid the > > > copy. > > > > Perhaps the 'mat' function should become an alias for 'asmatrix'. > > I've thought this for a while. > > > That makes sense to me. As far as I know, asmatrix() defaults to calling > the constructor if it can't snarf the memory of the object being passed in. > > So, go on, shoot Ed and me down! :-) I can anticipate one problem: the Pirates will want their three-letter abbreviation for asarray. Will functions like rand and eye always return arrays? Or will there be a day when you can tell numpy that you are working with matrices and then it will return matrices when you call rand, eye, etc? From oliphant.travis at ieee.org Thu Jun 22 14:57:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 12:57:27 -0600 Subject: [Numpy-discussion] Recent SVN of NumPy has issues with SciPy Message-ID: <449AE817.1020700@ieee.org> There are still some issues with my recent check-in for NumPy (r2663). But, it does build and run the numpy.tests cleanly. (It's failing on SciPy tests...) You may want to hold off for a few hours until I can straighten it out. -Travis From wbaxter at gmail.com Thu Jun 22 15:11:11 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 23 Jun 2006 04:11:11 +0900 Subject: [Numpy-discussion] Matrix construction In-Reply-To: References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: On 6/23/06, Keith Goodman wrote: > > On 6/22/06, Bill Baxter wrote: > > On 6/22/06, Ed Schofield wrote: > > > > > > > > On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > > > > > Actually I think using mat() (just an alias for the matrix > > > > constructor) is a bad way to do it. That mat() (and most others on > > > > that page) should probably be replaced with asmatrix() to avoid the > > > > copy. > > > > > > Perhaps the 'mat' function should become an alias for 'asmatrix'. > > > I've thought this for a while. 
> > > > > > That makes sense to me. As far as I know, asmatrix() defaults to > calling > > the constructor if it can't snarf the memory of the object being passed > in. > > > > So, go on, shoot Ed and me down! :-) > > I can anticipate one problem: the Pirates will want their three-letter > abbreviation for asarray. arr() me maties! Will functions like rand and eye always return arrays? Or will there > be a day when you can tell numpy that you are working with matrices > and then it will return matrices when you call rand, eye, etc? > I don't disagree there's a need, but you can always make your own: def mrand(*vargs): return asmatrix(rand(*vargs)) def meye(N, **kwargs): return asmatrix(eye(N,**kwargs)) --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at mailcan.com Thu Jun 22 15:15:15 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Thu, 22 Jun 2006 15:15:15 -0400 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> Message-ID: <200606221515.16089.pgmdevlist@mailcan.com> On Wednesday 21 June 2006 22:01, Michael Sorich wrote: > I was setting the fill_value as 'NA' when constructing the array so > the masked values would be printed as 'NA'. It is not a big deal to > avoid doing this. You can use masked_print_option, as illustrated below, without using a fill_value incompatible with your data type. >>>import numpy.core.ma as MA >>>X = MA.array([1,2,3],mask=[0,1,0]) >>>print X [1 -- 3] >>>MA.masked_print_option=MA._MaskedPrintOption('N/A') >>>print X [1 N/A 3] > Nevertheless, the differences between a masked array with a boolean > mask and a mask of booleans have caused me trouble before.
Especially > when there are hidden in-place conversions of a mask which is an array > of False to a mask which is False. e.g. OK, I'm still using 0.9.8 and I can't help you with this one. In that version, N.asarray transforms the MA into an ndarray, so you lose the mask. But I wonder: if none of your values are masked, the natural behavior would be to have `data.mask==nomask`, which speeds up things a bit. This gain of time is why I was suggesting that `mask` would be forced to `nomask` at the creation, if `mask.any()==False`. Could you give me some examples of cases where you need the mask to stay as an array of False ? If you need to access the mask as an array, you can always use MA.getmaskarray. From kwgoodman at gmail.com Thu Jun 22 15:25:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 12:25:16 -0700 Subject: [Numpy-discussion] How do I seed the random number generator? Message-ID: How do I seed rand and randn? From chanley at stsci.edu Thu Jun 22 15:32:40 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 22 Jun 2006 15:32:40 -0400 (EDT) Subject: [Numpy-discussion] C-API support for numarray added to NumPy Message-ID: <20060622153240.CJT13983@comet.stsci.edu> >You will also need to change the include directories used in compiling >by appending the directories returned by >numpy.numarray.util.get_numarray_include_dirs() > Hi Travis, I believe that there is a problem with this function. When executing interactively with numpy version 0.9.9.2660 I get the following result: Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] Type "copyright", "credits" or "license" for more information. In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.9.2660' In [3]: import numpy.numarray.util as nnu In [4]: nnu.get_numarray_include_dirs() Out[4]: ['C:\\Python24\\lib\\site-packages\\numpy\\numarray'] Unfortunately this does not have the appropriate (or any) header files.
Chris From robert.kern at gmail.com Thu Jun 22 15:33:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 14:33:43 -0500 Subject: [Numpy-discussion] How do I seed the random number generator? In-Reply-To: References: Message-ID: Keith Goodman wrote: > How do I seed rand and randn? If you can, please use the .rand() and .randn() methods on a RandomState object which you can initialize with whatever seed you like. In [1]: import numpy as np In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) In [3]: rs.rand(5) Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) In [5]: np.random.RandomState.seed? Type: method_descriptor Base Class: String Form: Namespace: Interactive Docstring: Seed the generator. seed(seed=None) seed can be an integer, an array (or other sequence) of integers of any length, or None. If seed is None, then RandomState will try to read data from /dev/urandom (or the Windows analogue) if available or seed from the clock otherwise. The rand() and randn() "functions" are actually references to methods on a global instance of RandomState. The .seed() method on that object is also similarly exposed as numpy.random.seed(). If you are writing new code, please explicitly use a RandomState object. Only use numpy.random.seed() if you must control code that uses the global rand() and randn() "functions" and you can't modify it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Thu Jun 22 15:45:15 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 22 Jun 2006 12:45:15 -0700 Subject: [Numpy-discussion] How do I seed the random number generator? In-Reply-To: References: Message-ID: On 6/22/06, Robert Kern wrote: > Keith Goodman wrote: > > How do I seed rand and randn?
> > If you can, please use the .rand() and .randn() methods on a RandomState object > which you can initialize with whatever seed you like. > > In [1]: import numpy as np > > In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) > > In [3]: rs.rand(5) > Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) Perfect! Thank you. From saagesen at sfu.ca Thu Jun 22 15:46:41 2006 From: saagesen at sfu.ca (saagesen at sfu.ca) Date: Thu, 22 Jun 2006 12:46:41 -0700 Subject: [Numpy-discussion] problem building NumPy Message-ID: <200606221946.k5MJkfo7009521@rm-rstar.sfu.ca> An embedded and charset-unspecified text was scrubbed... Name: not available URL: From pfdubois at gmail.com Thu Jun 22 16:26:04 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Thu, 22 Jun 2006 13:26:04 -0700 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606210612.09374.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> Message-ID: Pierre wrote: > I agree that's a bit confusing here, and there might be some room for > improvement (for example, changing the current > `if m is nomask` to `if m is nomask or m.any()==False`, or better, forcing > mask to nomask if mask.any()==False). But I don't think that qualifies as a > bug. In the original MA in Numeric, I decided that to constantly check for masks that didn't actually mask anything was not a good idea. It punishes normal use with a very expensive check that is rarely going to be true. If you are in a setting where you do not want this behavior, but instead want masks removed whenever possible, you may wish to wrap or replace things like masked_array so that they call make_mask with flag = 1: y = masked_array(data, make_mask(maskdata, flag=1)) y will have no mask if maskdata is all false. Thanks to Pierre for pointing out about masked_print_option.
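Paul's make_mask(..., flag=1) is the old Numeric MA spelling; in today's numpy.ma the same collapse-to-nomask behavior is controlled by the shrink keyword (on by default). A small sketch of that behavior:

```python
import numpy as np
import numpy.ma as ma

maskdata = np.array([[False, False], [False, False]])

# shrink=True (the default) collapses an all-False mask to nomask,
# the role played by flag=1 in the old Numeric MA.
print(ma.make_mask(maskdata) is ma.nomask)                # True
print(ma.make_mask(maskdata, shrink=False) is ma.nomask)  # False
```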
Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Jun 22 16:36:41 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 22 Jun 2006 22:36:41 +0200 Subject: [Numpy-discussion] sourceforge advertising Message-ID: <20060622203641.GB28648@mentat.za.net> Hi, I noticed that sourceforge now adds another 8 lines of advertisement to the bottom of every email sent to the list. Am I the only one who finds this annoying? Is there any reason why the numpy list can't run on scipy.org? Regards Stéfan From robert.kern at gmail.com Thu Jun 22 17:46:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 16:46:59 -0500 Subject: [Numpy-discussion] sourceforge advertising In-Reply-To: <20060622203641.GB28648@mentat.za.net> References: <20060622203641.GB28648@mentat.za.net> Message-ID: Stefan van der Walt wrote: > Hi, > > I noticed that sourceforge now adds another 8 lines of advertisement > to the bottom of every email sent to the list. Am I the only one who > finds this annoying? Is there any reason why the numpy list can't run > on scipy.org? We'd be happy to move it to scipy.org. However moving a mailing list is always a hassle for subscribers, so we were not going to bother until there was a compelling reason. This may be one, though. For all subscribers: If you have an opinion over whether to move the list or to keep it on Sourceforge, please email me *offlist*. If enough people want to move and few people want to stay, we'll set up a new mailing list on scipy.org. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rowen at cesmail.net Thu Jun 22 18:45:01 2006 From: rowen at cesmail.net (Russell E.
Owen) Date: Thu, 22 Jun 2006 15:45:01 -0700 Subject: [Numpy-discussion] problem building Numeric on python 2.5 Message-ID: I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get Numeric 24.2 installed. It seems to build fine (no obvious error messages), but when I try to import it I get: Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import Numeric __main__:1: ImportWarning: Not importing directory '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/Numeric': missing __init__.py >>> Any ideas? Is it somehow incompatible with python 2.5b1? For what it's worth, numarray builds and installs fine. I've not tried numpy or any other packages yet. -- Russell From robert.kern at gmail.com Thu Jun 22 18:51:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 17:51:06 -0500 Subject: [Numpy-discussion] problem building Numeric on python 2.5 In-Reply-To: References: Message-ID: Russell E. Owen wrote: > I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get > Numeric 24.2 installed. It seems to build fine (no obvious error > messages), but when I try to import it I get: > Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import Numeric > __main__:1: ImportWarning: Not importing directory > '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/Numeric': missing __init__.py > > Any ideas? Is it somehow incompatible with python 2.5b1? > > For what it's worth, numarray builds and installs fine. I've not tried > numpy or any other packages yet. See Konrad Hinsen's post earlier today "Numeric and Python 2.5b1" for a description of the issue and a way to silence the warnings.
It's just a warning, though, not an error. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.hochberg at cox.net Thu Jun 22 18:52:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 22 Jun 2006 15:52:05 -0700 Subject: [Numpy-discussion] problem building Numeric on python 2.5 In-Reply-To: References: Message-ID: <449B1F15.1020501@cox.net> Russell E. Owen wrote: > I just installed python 2.5b1 on my Mac (10.4 ppc) and can't seem to get > Numeric 24.2 installed. It seems to build fine (no obvious error > messages), but when I try to import it I get: > Python 2.5b1 (r25b1:47038M, Jun 20 2006, 16:17:55) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>>> import Numeric >>>> > __main__:1: ImportWarning: Not importing directory > '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/Numeric': missing __init__.py > > > Any ideas? Is it somehow incompatible with python 2.5b1? > Import warning is a new 'feature' of 2.5. It warns if there are directories on sys.path that are *not* packages. I'll refer you to the py-dev archives if you want to figure out the motivation for that. So, if everything seems to work, there's a good chance that nothing's wrong, but that you're just seeing a complaint due to this new behaviour. If you check recent messages on Python-dev someone just posted a recipe for suppressing this warning. -tim > For what it's worth, numarray builds and installs fine. I've not tried > numpy or any other packages yet. > > -- Russell > > > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From michael.sorich at gmail.com Thu Jun 22 19:41:27 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 23 Jun 2006 09:11:27 +0930 Subject: [Numpy-discussion] MA bug or feature? In-Reply-To: <200606221515.16089.pgmdevlist@mailcan.com> References: <16761e100606210146q7683c94bu5bd2699caa6b95cf@mail.gmail.com> <200606210612.09374.pgmdevlist@mailcan.com> <16761e100606211901l70c1eeadl71fd19186da8cc6d@mail.gmail.com> <200606221515.16089.pgmdevlist@mailcan.com> Message-ID: <16761e100606221641u1dfcfaa8ne5a1ebdb606c7992@mail.gmail.com> On 6/23/06, Pierre GM wrote: > On Wednesday 21 June 2006 22:01, Michael Sorich wrote: > > Nevertheless, the differences between a masked array with a boolean > > mask and a mask of booleans have caused me trouble before. Especially > > when there are hidden in-place conversions of a mask which is a array > > of False to a mask which is False. e.g. > > OK, I'm still using 0.9.8 and I can't help you with this one. In that version, > N.asarray transforms the MA into a ndarray, so you lose the mask. No, the mask of ma1 is converted in place to False. ma1 remains a MaskedArray import numpy ma1 = numpy.ma.array(((1.,2,3),(4,5,6)), mask=((0,0,0),(0,0,0))) print ma1.mask, type(ma1) numpy.asarray(ma1) print ma1.mask, type(ma1) --output-- [[False False False] [False False False]] False > But I wonder: if none of your values are masked, the natural behavior would be > to have `data.mask==nomask`, which speeds up things a bit. 
This gain of time > is why I was suggesting that `mask` would be forced to `nomask` at the > creation, if `mask.any()==False`. > > Could you give me some examples of cases where you need the mask to stay as an > array of False ? > If you need to access the mask as an array, you can always use > MA.getmaskarray. If it did not sometimes effect the behaviour of the masked array, I would not be worried about automatic conversions between the two forms of the mask. Is it agreed that there should not be any differences in the behavior of the two forms of masked array e.g. with a mask of [[False,False],[False,False]] vs False? It is frustrating to track down exceptions when the array has one behavior, then there is a implicit conversion of the mask which changes the behaviour of the array. Mike From oliphant.travis at ieee.org Thu Jun 22 19:46:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 17:46:29 -0600 Subject: [Numpy-discussion] Recent SVN of NumPy has issues with SciPy In-Reply-To: <449AE817.1020700@ieee.org> References: <449AE817.1020700@ieee.org> Message-ID: <449B2BD5.1000401@ieee.org> Travis Oliphant wrote: > There are still some issues with my recent check-in for NumPy (r2663). > But, it does build and run the numpy.tests cleanly. (It's failing on > SciPy tests...) > These issues are now fixed (it was a brain-dead optimization that just doesn't work and was only exposed when converting between C- and Fortran- arrays during a cast.. Feel free to use SVN again... I do like to keep SVN so that it works. -Travis From myeates at jpl.nasa.gov Thu Jun 22 21:46:49 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 22 Jun 2006 18:46:49 -0700 Subject: [Numpy-discussion] fromfile croaking on windows Message-ID: <449B4809.9000701@jpl.nasa.gov> when I try and load a file with numpy.fromfile I keep getting a message .... 7245092 items requested but only 3899 read. Its always the same number read. 
I've checked and I'm giving the correct filename and its the correct size. Any idea whats going on? This is with 0.9.8 Mathew From myeates at jpl.nasa.gov Thu Jun 22 22:00:40 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Thu, 22 Jun 2006 19:00:40 -0700 Subject: [Numpy-discussion] p.s. Re: fromfile croaking on windows In-Reply-To: <449B4809.9000701@jpl.nasa.gov> References: <449B4809.9000701@jpl.nasa.gov> Message-ID: <449B4B48.2060902@jpl.nasa.gov> When I specify count=-1 I get the exact same error. So, numpy was able to determine the filesize. It just can't read it. Mathew Mathew Yeates wrote: > when I try and load a file with numpy.fromfile I keep getting a message .... > 7245092 items requested but only 3899 read. Its always the same number read. > > I've checked and I'm giving the correct filename and its the correct > size. Any idea whats going on? > This is with 0.9.8 > > Mathew > > From oliphant.travis at ieee.org Fri Jun 23 01:49:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 22 Jun 2006 23:49:49 -0600 Subject: [Numpy-discussion] fromfile croaking on windows In-Reply-To: <449B4809.9000701@jpl.nasa.gov> References: <449B4809.9000701@jpl.nasa.gov> Message-ID: <449B80FD.7080409@ieee.org> Mathew Yeates wrote: > when I try and load a file with numpy.fromfile I keep getting a message .... > 7245092 items requested but only 3899 read. Its always the same number read. > > Which platform are you on?
Could you show exactly how you are calling the function. There were some reports of strange behavior on Windows that may be related to file-locking. I'm just not sure at this point. -Travis From svetosch at gmx.net Fri Jun 23 04:54:49 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 10:54:49 +0200 Subject: [Numpy-discussion] eye and identity: why both? Message-ID: <449BAC59.4090505@gmx.net> identity seems to be a "crippled" version of eye without any value added, apart from backwards-compatibility; So from a user point of view, which one does numpy recommend? And from a developer point of view (which doesn't really apply to me, of course), should identity maybe become an alias for eye(n, dtype=...)? Or is there a subtle (or not so subtle...) difference I am missing? I am aware this question is not really that important since everything works, but when I read that there will be a 1.0beta soon I thought maybe this is the right time to ask those kind of questions. Here are the help-strings: eye(N, M=None, k=0, dtype=) eye returns a N-by-M 2-d array where the k-th diagonal is all ones, and everything else is zeros. identity(n, dtype=) identity(n) returns the identity 2-d array of shape n x n. Cheers, Sven From fullung at gmail.com Fri Jun 23 09:42:40 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 23 Jun 2006 15:42:40 +0200 Subject: [Numpy-discussion] fromfile croaking on windows In-Reply-To: <449B80FD.7080409@ieee.org> Message-ID: <003b01c696ca$e5842240$01eaa8c0@dsp.sun.ac.za> Hello all Travis Oliphant wrote: > Mathew Yeates wrote: > > when I try and load a file with numpy.fromfile I keep getting a message > .... > > 7245092 items requested but only 3899 read. Its always the same number > read. > > > > > Which platform are you on? Could you show exactly how you are calling > the function. > > There were some reports of strange behavior on Windows that may be > related to file-locking. I'm just not sure at this point. 
I did some experiments. With my test file, this always fails: y = N.fromfile('temp.dat', dtype=N.float64) This works: y = N.fromfile(file('temp.dat','rb'), dtype=N.float64) More details in this ticket: http://projects.scipy.org/scipy/numpy/ticket/103 I don't quite understand how file-locking can be causing these problems. Travis, care to elaborate on what you think might be causing these problems? Cheers, Albert From kwgoodman at gmail.com Fri Jun 23 10:18:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 07:18:10 -0700 Subject: [Numpy-discussion] How do I make a diagonal matrix? Message-ID: How do I make a NxN diagonal matrix with a Nx1 column vector x along the diagonal? From svetosch at gmx.net Fri Jun 23 10:34:14 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 16:34:14 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: Message-ID: <449BFBE6.1050401@gmx.net> Keith Goodman schrieb: > How do I make a NxN diagonal matrix with a Nx1 column vector x along > the diagonal? > >>> help(n.diag) Help on function diag in module numpy.lib.twodim_base: diag(v, k=0) returns the k-th diagonal if v is a array or returns a array with v as the k-th diagonal if v is a vector. From joris at ster.kuleuven.be Fri Jun 23 10:40:33 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 23 Jun 2006 16:40:33 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: <200606231640.33722.joris@ster.kuleuven.be> On Friday 23 June 2006 16:34, Sven Schreiber wrote: [SS]: Keith Goodman schrieb: [SS]: > How do I make a NxN diagonal matrix with a Nx1 column vector x along [SS]: > the diagonal? 
[SS]: > [SS]: [SS]: >>> help(n.diag) [SS]: Help on function diag in module numpy.lib.twodim_base: [SS]: [SS]: diag(v, k=0) [SS]: returns the k-th diagonal if v is a array or returns a array [SS]: with v as the k-th diagonal if v is a vector. See also the Numpy Example List for a few examples: http://www.scipy.org/Numpy_Example_List J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From kwgoodman at gmail.com Fri Jun 23 10:55:47 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 07:55:47 -0700 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: On 6/23/06, Sven Schreiber wrote: > Keith Goodman schrieb: > > How do I make a NxN diagonal matrix with a Nx1 column vector x along > > the diagonal? > > > > >>> help(n.diag) > Help on function diag in module numpy.lib.twodim_base: > > diag(v, k=0) > returns the k-th diagonal if v is a array or returns a array > with v as the k-th diagonal if v is a vector. I tried >> x = rand(3,1) >> diag(x) array([ 0.87113114]) Isn't rand(3,1) a vector? Off list I was given the example: x=rand(3) diag(3) That works. But my x is a Nx1 matrix. I can't get it to work with matrices. Joris: The Numpy Example List looks good. I hadn't come across that before. From svetosch at gmx.net Fri Jun 23 11:07:32 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 23 Jun 2006 17:07:32 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <449C03B4.2070708@gmx.net> Keith Goodman schrieb: > > Isn't rand(3,1) a vector? afaik not in numpy's terms, because two numbers are given for the dimensions -- I also struggle with that, because I'm a matrix guy like you ;-) > > Off list I was given the example: > x=rand(3) > diag(3) > > That works. But my x is a Nx1 matrix. I can't get it to work with matrices. 
> ok, good point; with your x then diag(x.A[:,0]) should work, although it's not very pretty. Maybe there are better ways, but I agree it would be nice to be able to use matrices directly. -sven From aisaac at american.edu Fri Jun 23 11:50:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 11:50:13 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449BFBE6.1050401@gmx.net> References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Sven Schreiber apparently wrote: >>>> help(n.diag) > Help on function diag in module numpy.lib.twodim_base: > diag(v, k=0) > returns the k-th diagonal if v is a array or returns a array > with v as the k-th diagonal if v is a vector. That is pretty damn obscure. Apparently Travis's new doc string did not survive? The Numpy book says: diag (v, k=0) Return the kth diagonal if v is a 2-d array, or returns an array with v as the kth diagonal if v is a 1-d array. That is better but not great. I think what is wanted is: diag (v, k=0) If v is a 2-d array: return a copy of the kth diagonal of v (as a 1-d array). If v is a 1-d array: return a 2-d array with a copy of v as the kth diagonal (and zeros elsewhere). fwiw, Alan Isaac PS As a response to the question, it might be worth noting the following. >>> y=N.zeros((5,5)) >>> values=N.arange(1,6) >>> indices=slice(0,25,6) >>> y.flat[indices]=values >>> y array([[1, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0], [0, 0, 0, 4, 0], [0, 0, 0, 0, 5]]) Generalizing we end up with the following (from pyGAUSS): def diagrv(x,v,copy=True): if copy: x = numpy.matrix( x, copy=True ) else: x = numpy.matrix( x, copy=False ) stride = 1 + x.shape[1] x.flat[ slice(0,x.size,stride) ] = v return x From aisaac at american.edu Fri Jun 23 12:03:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 12:03:13 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? 
In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Keith Goodman apparently wrote: > my x is a Nx1 matrix. I can't get it to work with matrices. Hmm. One would think that diag() would accept a flatiter object, but it does not. Shouldn't it?? But anyway, you can squeeze x: >>> x matrix([[ 0.46474951], [ 0.0688041 ], [ 0.61141623]]) >>> y=N.diag(N.squeeze(x.A)) >>> y array([[ 0.46474951, 0. , 0. ], [ 0. , 0.0688041 , 0. ], [ 0. , 0. , 0.61141623]]) hth, Alan Isaac From david.douard at logilab.fr Fri Jun 23 11:08:26 2006 From: david.douard at logilab.fr (David Douard) Date: Fri, 23 Jun 2006 17:08:26 +0200 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <20060623150826.GC1032@logilab.fr> On Fri, Jun 23, 2006 at 07:55:47AM -0700, Keith Goodman wrote: > On 6/23/06, Sven Schreiber wrote: > > Keith Goodman schrieb: > > > How do I make a NxN diagonal matrix with a Nx1 column vector x along > > > the diagonal? > > > > > > > >>> help(n.diag) > > Help on function diag in module numpy.lib.twodim_base: > > > > diag(v, k=0) > > returns the k-th diagonal if v is a array or returns a array > > with v as the k-th diagonal if v is a vector. > > I tried > > >> x = rand(3,1) > > >> diag(x) > array([ 0.87113114]) > > Isn't rand(3,1) a vector? No: In [13]: rand(3).shape Out[13]: (3,) In [14]: rand(3,1).shape Out[14]: (3, 1) A "vector" is an array with only one dimension. Here, you have a 3x1 "matrix"... > > Off list I was given the example: > x=rand(3) > diag(3) So you've got the solution! > That works. But my x is a Nx1 matrix. I can't get it to work with matrices. ??? Don't understand what you cannot make work, here. In [15]: x=rand(3,1) In [18]: diag(x[:,0]) Out[18]: array([[ 0.2287158 , 0. , 0. ], [ 0. , 0.50571537, 0. ], [ 0. , 0. , 0.72304857]]) What else would you like? David > Joris: The Numpy Example List looks good. I hadn't come across that before. 
> David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science From aisaac at american.edu Fri Jun 23 12:21:44 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 12:21:44 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: On Fri, 23 Jun 2006, Alan G Isaac apparently wrote: > you can squeeze x True, but a silly solution. Alan From oliphant at ee.byu.edu Fri Jun 23 13:11:38 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 11:11:38 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> Message-ID: <449C20CA.8090300@ee.byu.edu> Alan G Isaac wrote: >On Fri, 23 Jun 2006, Keith Goodman apparently wrote: > > >>my x is a Nx1 matrix. I can't get it to work with matrices. >> >> > >Hmm. One would think that diag() would accept a flatiter >object, but it does not. Shouldn't it?? > > It doesn't? try: a = rand(3,4) diag(a.flat).shape which prints (12,12) for me.
Also: >>> a = ones((2,3)) >>> diag(a.flat) array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]]) From oliphant at ee.byu.edu Fri Jun 23 13:14:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 11:14:26 -0600 Subject: [Numpy-discussion] Matrix construction In-Reply-To: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> References: <7A30C7E5-94CD-46AD-90FD-27FAA919624C@ftw.at> Message-ID: <449C2172.9070200@ee.byu.edu> Ed Schofield wrote: >On 22/06/2006, at 12:40 AM, Bill Baxter wrote: > > > >>Actually I think using mat() (just an alias for the matrix >>constructor) is a bad way to do it. That mat() (and most others on >>that page) should probably be replaced with asmatrix() to avoid the >>copy. >> >> > >Perhaps the 'mat' function should become an alias for 'asmatrix'. >I've thought this for a while. Then code and documentation like this >page could remain short and simple without incurring the performance >penalty. > > I wanted this too a while back but when I tried it a lot of code broke because there were quite a few places (in SciPy and NumPy) that were using the fact that mat returned a copy of the array. -Travis From myeates at jpl.nasa.gov Fri Jun 23 13:56:21 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Fri, 23 Jun 2006 10:56:21 -0700 Subject: [Numpy-discussion] matlab translation Message-ID: <449C2B45.9030101@jpl.nasa.gov> This is probably in an FAQ somewhere but ..... Is there a tool out there for translating Matlab to Numeric? I found a 1999 posting by Travis asking the same thing! It doesn't seem like it would be all THAT difficult to write. 
Mathew From kwgoodman at gmail.com Fri Jun 23 14:01:28 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 11:01:28 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C2B45.9030101@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> Message-ID: On 6/23/06, Mathew Yeates wrote: > This is probably in an FAQ somewhere but ..... > > Is there a tool out there for translating Matlab to Numeric? I found a > 1999 posting by Travis asking the same thing! It doesn't seem like it > would be all THAT difficult to write. I'm porting by hand. It does not seem easy to me. And even if it were easy, both Matlab and NumPy are moving targets. So it would be difficult to maintain. From aisaac at american.edu Fri Jun 23 14:29:11 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 14:29:11 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C20CA.8090300@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> Hmm. One would think that diag() would accept a flatiter >> object, but it does not. Shouldn't it?? On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > It doesn't? > try: > a = rand(3,4) > diag(a.flat).shape OK, but then try: >>> a=N.mat(a) >>> N.diag(a.flat).shape (1,) Why is a.flat not the same as a.A.flat? Alan Isaac From oliphant at ee.byu.edu Fri Jun 23 15:19:36 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 13:19:36 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> Message-ID: <449C3EC8.4000805@ee.byu.edu> Alan G Isaac wrote: >>Alan G Isaac wrote: >> >> >>>Hmm. One would think that diag() would accept a flatiter >>>object, but it does not. Shouldn't it?? >>> >>> > > >On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > > >>It doesn't?
>>try: >>a = rand(3,4) >>diag(a.flat).shape >> >> > >OK, but then try: > > >>>>a=N.mat(a) >>>>N.diag(a.flat).shape >>>> >>>> >(1,) > >Why is a.flat not the same as a.A.flat? > > It is the same object except for the pointer to the underlying array. When asarray(a.flat) gets called it looks to the underlying array to get the sub-class and constructs that sub-class (and matrices can never be 1-d). Thus, it's a "feature" -Travis From myeates at jpl.nasa.gov Fri Jun 23 16:22:08 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Fri, 23 Jun 2006 13:22:08 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> Message-ID: <449C4D70.4080102@jpl.nasa.gov> > > I'm porting by hand. It does not seem easy to me. And even if it were > Ah. Do I detect a dare? Could start first by using Octave's Matlab parser. From kwgoodman at gmail.com Fri Jun 23 16:42:16 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 23 Jun 2006 13:42:16 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: On 6/23/06, Mathew Yeates wrote: > > > > > I'm porting by hand. It does not seem easy to me. And even if it were > Ah. Do I detect a dare? Could start first by using Octave's Matlab parser. (Let me help you recruit people to do the work) "There is no way in the world that this will work!" From aisaac at american.edu Fri Jun 23 17:00:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 17:00:29 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C3EC8.4000805@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> Why is a.flat not the same as a.A.flat?
On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > It is the same object except for the pointer to the > underlying array. When asarray(a.flat) get's called it > looks to the underlying array to get the sub-class and > constructs that sub-class (and matrices can never be 1-d). > Thus, it's a "feature" I doubt I will prove the only one to stumble over this. I can roughly understand why a.ravel() returns a matrix; but is there a good reason to forbid truly flattening the matrix? My instincts are that a flatiter object should not have this hidden "feature": flatiter objects should produce a consistent behavior in all settings, regardless of the underlying array. Anything else will prove too surprising. fwiw, Alan From oliphant at ee.byu.edu Fri Jun 23 17:01:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:01:09 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> Message-ID: <449C5695.8000106@ee.byu.edu> Alan G Isaac wrote: >>Alan G Isaac wrote: >> >> >>>Why is a.flat not the same as a.A.flat? >>> >>> > > >On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > > >>It is the same object except for the pointer to the >>underlying array. When asarray(a.flat) get's called it >>looks to the underlying array to get the sub-class and >>constructs that sub-class (and matrices can never be 1-d). >>Thus, it's a "feature" >> >> > > >I doubt I will prove the only one to stumble over this. > >I can roughly understand why a.ravel() returns a matrix; >but is there a good reason to forbid truly flattening the matrix? > > Because matrices are never 1-d. This is actually pretty consistent behavior. >My instincts are that a flatiter object should not have this >hidden "feature": flatiter objects should produce >a consistent behavior in all settings, regardless of the >underlying array. 
Anything else will prove too surprising. > > I think you are right that this is a bug, though. Because __array__() (which is where the behavior comes from) should return a base-class array (not a sub-class). -Travis From oliphant at ee.byu.edu Fri Jun 23 17:08:25 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:08:25 -0600 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C5695.8000106@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> <449C5695.8000106@ee.byu.edu> Message-ID: <449C5849.8010608@ee.byu.edu> Travis Oliphant wrote: >Alan G Isaac wrote: > > >> >> >I think you are right that this is a bug, though. Because __array__() >(which is where the behavior comes from) should return a base-class >array (not a sub-class). > > This is fixed in SVN. -Travis From mpfitz at calmail.berkeley.edu Fri Jun 23 17:15:58 2006 From: mpfitz at calmail.berkeley.edu (Michael Fitzgerald) Date: Fri, 23 Jun 2006 14:15:58 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: <200606212139.52511.fitz@astron.berkeley.edu> References: <200606212139.52511.fitz@astron.berkeley.edu> Message-ID: Ping! Is anyone else seeing this? It should be easy to test. If so, I think it's a bug. Best, Mike On Jun 21, 2006, at 9:39 PM, Michael Fitzgerald wrote: > > Hello all, > > I'm encountering some (relatively new?) behavior with masked arrays > that > strikes me as bizarre. Raising zero to a floating-point value is > triggering > a mask to be set, even though the result should be well-defined. > When using > fixed-point integers for powers, everything works as expected. I'm > seeing > this with both numarray and numpy. Take the case of 0**1, > illustrated below: > >>>> import numarray as n1 >>>> import numarray.ma as n1ma >>>> n1.array(0.)**1 > array(0.0) >>>> n1.array(0.)**1. > array(0.0) >>>> n1ma.array(0.)**1 > array(0.0) >>>> n1ma.array(0.)**1. 
> array(data = > [1.0000000200408773e+20], > mask = > 1, > fill_value=[ 1.00000002e+20]) > >>>> import numpy as n2 >>>> import numpy.core.ma as n2ma >>>> n2.array(0.)**1 > array(0.0) >>>> n2.array(0.)**1. > array(0.0) >>>> n2ma.array(0.)**1 > array(0.0) >>>> n2ma.array(0.)**1. > array(data = > 1e+20, > mask = > True, > fill_value=1e+20) > > I've been using python v2.3.5 & v.2.4.3, numarray v1.5.1, and numpy > v0.9.8, > and tested this on an x86 Debian box and a PPC OSX box. It may be > the case > that this issue has manifested in the past several months, as it's > causing a > new problem with some of my older code. Any thoughts? > > Thanks in advance, > Mike > > > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant at ee.byu.edu Fri Jun 23 17:19:13 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 15:19:13 -0600 Subject: [Numpy-discussion] flatiter and inequality comparison In-Reply-To: References: Message-ID: <449C5AD1.40201@ee.byu.edu> Alan G Isaac wrote: >I do not understand how to think about this: > > >>> x=arange(3).flat > >>> x > > >>> x>2 > True > >>> x>10 > True > >Why? (I realize this behaves like xrange, >so this may not be a numpy question, >but I do not understand that behavior either.) > > The flatiter object didn't have comparisons implemented so I guess it was using some default implementation. This is quite confusing and option 2 does make sense (an array of resulting comparisons is returned).
Thus now: >> x=arange(3).flat >>> x>2 array([False, False, False], dtype=bool) >>> x>1 array([False, False, True], dtype=bool) >>> x>0 array([False, True, True], dtype=bool) -Travis From aisaac at american.edu Fri Jun 23 17:34:26 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 23 Jun 2006 17:34:26 -0400 Subject: [Numpy-discussion] How do I make a diagonal matrix? In-Reply-To: <449C5695.8000106@ee.byu.edu> References: <449BFBE6.1050401@gmx.net> <449C20CA.8090300@ee.byu.edu> <449C3EC8.4000805@ee.byu.edu> <449C5695.8000106@ee.byu.edu> Message-ID: > Alan G Isaac wrote: >> I can roughly understand why a.ravel() returns a matrix; >> but is there a good reason to forbid truly flattening the matrix? On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > Because matrices are never 1-d. This is actually pretty > consistent behavior. Yes; that's why I can understand ravel. But I was referring to flat with the question. On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > I think you are right that this is a bug, though. Because > __array__() (which is where the behavior comes from) > should return a base-class array (not a sub-class). Thanks for fixing this!! Alan From oliphant at ee.byu.edu Fri Jun 23 18:18:11 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Jun 2006 16:18:11 -0600 Subject: [Numpy-discussion] Current copy In-Reply-To: <1151100330.449c65aaad7ad@astrosun2.astro.cornell.edu> References: <446F913A.3050207@ieee.org> <1151007625.449afb89201c7@astrosun2.astro.cornell.edu> <449B186B.9060500@ieee.org> <1151100330.449c65aaad7ad@astrosun2.astro.cornell.edu> Message-ID: <449C68A3.3040908@ee.byu.edu> Tom Loredo wrote: >Hi Travis, > > > >>I'm actually preparing the 1.0 release version, instead. >> >>Here's the latest, though... >> >> > >Thanks! > >I hate to be a nuisance about this, but what's the status >of the latest releases of numpy/scipy? Numpy 0.9.8 gives >a segfault on my FC3 box. > NumPy 0.9.8 should be fine except for one test. 
That test gives a segfault because of a problem with Python that was fixed a while ago. As long as you don't create the new complex array scalars (i.e. using cdouble(10), complex128(3), etc.) you should be fine with all code running NumPy 0.9.8. Just delete the file site-packages/numpy/core/tests/test_scalarmath.py to get the tests to run. >I waited till today to try the >SVN version (per your scipy-dev post) and just installed >rev 2669. It passes the numpy tests--good!---but when I >followed it with an install of scipy-0.4.9, importing >scipy gives an error: > >import linsolve.umfpack -> failed: cannot import name ArrayType > >When you mentioned that the SVN numpy now worked with scipy, > > >was it only with SVN scipy? > > Yes. You need to re-compile scipy to work with SVN NumPy. Usually Ed Schofield has been helping release SciPy for each new NumPy release to make installation easier. >I'm asking all this, partly for my own info, but also because >last week at an astrostatistics conference I was given a long >slot of time where I gave a pretty hard sell of numpy/scipy. >I'm imagining all these people going home and installing the >latest releases and cursing me under their breaths! > >Is it just my FC3 box having issues with the current releases? >If not, I think something should be said on the download page >(e.g., maybe encourage people to use SVN for certain platforms). > > It's just the one test that's a problem (my system was more forgiving and didn't segfault so I didn't catch the problem). I doubt people are using the construct that is causing the problems much anyway -- it's a subtle bug that was in Python when a C-type inherited from the Python complex type. I'd probably recommend using SVN NumPy/SciPy if you are comfortable with compilation because it's the quickest way to get bug-fixes. But, some like installed packages. That's why we are pushing to get 1.0 done as quickly as is reasonable.
From ryanlists at gmail.com Fri Jun 23 19:45:55 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 23 Jun 2006 19:45:55 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: If people could post lines of Matlab code and proposed numpy code, we could try some regexp's that could do some of this. Ryan On 6/23/06, Keith Goodman wrote: > On 6/23/06, Mathew Yeates wrote: > > > > > > > > I'm porting by hand. It does not seem easy to me. And even if it were > > Ah. Do I detect a dare? Could start first by using Octave's Matlab parser. > > (Let me help you recruit people to do the work) > > "There is no way in the world that this will work!" > From parejkoj at speakeasy.net Fri Jun 23 21:04:49 2006 From: parejkoj at speakeasy.net (John Parejko) Date: Fri, 23 Jun 2006 21:04:49 -0400 Subject: [Numpy-discussion] record iteration (convert a 0-d array, iteration over non-sequence) Message-ID: <449C8FB1.6070208@speakeasy.net> Greetings! I'm having trouble using records. I'm not sure whether to report this as a bug, but it certainly isn't a feature! I would like to be able to iterate over the individual rows in a record array, like so: >>> import numpy.core.records as rec >>> x=rec.array([[1,1.1,'1.0'],[2,2.2,'2.0']], formats='i4,f8,a4',names=['i','f','s']) >>> type(x[0]) >>> x[0].tolist() Traceback (most recent call last): File "<stdin>", line 1, in ?
ValueError: can't convert a 0-d array to a list >>> [i for i in x[0]] Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: iteration over non-sequence Am I going about this wrong? I would think I should be able to loop over an individual row in a record array, or turn it into a list. For the latter, I wrote my own thing, but tolist() should work by itself. Note that in rec2list, I need to use range(len(line)) because the list comprehension doesn't work correctly: def rec2list(line): """Turns a single element record array into a list.""" return [line[i] for i in xrange(len(line))] #... I will file a bug, unless someone tells me I'm going about this the wrong way. Thanks for your help John -- ************************* John Parejko Department of Physics and Astronomy Drexel University Philadelphia, PA ************************** From oliphant.travis at ieee.org Fri Jun 23 23:07:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 23 Jun 2006 21:07:45 -0600 Subject: [Numpy-discussion] record iteration (convert a 0-d array, iteration over non-sequence) In-Reply-To: <449C8FB1.6070208@speakeasy.net> References: <449C8FB1.6070208@speakeasy.net> Message-ID: <449CAC81.9030608@ieee.org> John Parejko wrote: > Greetings! I'm having trouble using records. I'm not sure whether to > report this as a bug, but it certainly isn't a feature! I would like to be > able to iterate over the individual rows in a record array, like so: > That is probably reasonable, but as yet is unsupported. You can do x[0].item() to get a tuple that can be iterated over.
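Travis's x[0].item() suggestion can be sketched as follows (a minimal example against a current NumPy, where rec.array rows are given as tuples rather than the nested lists in John's original post; the field names and values simply mirror his):

```python
import numpy as np

# A small record array with an int, a float, and a 4-byte string field.
x = np.rec.array([(1, 1.1, '1.0'), (2, 2.2, '2.0')],
                 formats='i4,f8,a4', names=['i', 'f', 's'])

# A single row is a 0-d record; .item() turns it into a plain Python
# tuple, which can be iterated over or converted to a list.
row = x[0].item()
for field in row:
    print(field)

print(list(row))
```

Note that the 'a4' field comes back as bytes, so on Python 3 the last element prints as b'1.0'.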
-Travis From gruben at bigpond.net.au Fri Jun 23 23:43:52 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 24 Jun 2006 13:43:52 +1000 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: <449CB4F8.9030305@bigpond.net.au> One possible starting point for this would be Chris Stawarz's i2py translator which attempts to do this for IDL . It might be possible to build on this by getting it working for current numpy. The production rules for MATLAB might be gleaned from Octave. Gary R. Mathew Yeates wrote: >> I'm porting by hand. It does not seem easy to me. And even if it were > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. From robert.kern at gmail.com Sat Jun 24 00:03:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 23:03:56 -0500 Subject: [Numpy-discussion] matlab translation In-Reply-To: <449C4D70.4080102@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: Keith Goodman wrote: >> I'm porting by hand. It does not seem easy to me. And even if it were Mathew Yeates wrote: > Ah. Do I detect a dare? Could start first by using Octaves matlab parser. Let's just say that anyone coming to this list saying something like, "It doesn't seem like it would be all THAT difficult to write," gets an automatic, "Show me," from me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Sat Jun 24 00:11:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 23:11:14 -0500 Subject: [Numpy-discussion] Moving this mailing list to scipy.org Message-ID: Thanks to Sourceforge's new "feature" of ads on the bottom of all list emails, it has been suggested that we move this mailing list to scipy.org. I've gotten some feedback from several of you already, all in favor of moving the mailing list from Sourceforge to scipy.org. However, I know there are plenty more of you out there. I wanted to move this topic up to the top level to make sure people see this. If you care whether it moves or if it stays, please email me *offlist* stating your preference. If by Wednesday, June 28th, the response is still as positive as it has been, then we'll start moving the list. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From vinicius.lobosco at paperplat.com Sat Jun 24 04:56:11 2006 From: vinicius.lobosco at paperplat.com (Vinicius Lobosco) Date: Sat, 24 Jun 2006 10:56:11 +0200 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> Message-ID: <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Let's just let those who want to try to do that and give our support? I would be happy if I could get some parts of my old matlab programs translated to Scipy. On 6/24/06, Robert Kern wrote: > > Keith Goodman wrote: > >> I'm porting by hand. It does not seem easy to me. And even if it were > > Mathew Yeates wrote: > > Ah. Do I detect a dare? Could start first by using Octave's Matlab > parser.
> > Let's just say that anyone coming to this list saying something like, "It > doesn't seem like it would be all THAT difficult to write," gets an > automatic, > "Show me," from me. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > -- --------------------------------- Vinicius Lobosco, PhD www.paperplat.com +46 8 612 7803 +46 73 925 8476 Björnnäsvägen 21 SE-113 47 Stockholm, Sweden From robert.kern at gmail.com Sat Jun 24 05:05:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 04:05:56 -0500 Subject: [Numpy-discussion] matlab translation In-Reply-To: <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: Vinicius Lobosco wrote: > Let's just let those who want to try to do that and give our support? I > would be happy if I could get some parts of my old matlab programs > translated to Scipy. I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging Mathew to work towards that end. Sheesh.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Sat Jun 24 10:41:06 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 24 Jun 2006 10:41:06 -0400 Subject: [Numpy-discussion] flatiter and inequality comparison In-Reply-To: <449C5AD1.40201@ee.byu.edu> References: <449C5AD1.40201@ee.byu.edu> Message-ID: On Fri, 23 Jun 2006, Travis Oliphant apparently wrote: > option 2 does make sense (an array of resulting comparisons is returned). > Thus now: > >> x=arange(3).flat > >>> x>2 > array([False, False, False], dtype=bool) Thanks!! Alan From mtreiber at gmail.com Sat Jun 24 12:58:21 2006 From: mtreiber at gmail.com (Mark Treiber) Date: Sat, 24 Jun 2006 12:58:21 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: <27e04e910606240958v789c8701geb96eca97608fb5@mail.gmail.com> A couple of months ago I started something similar but unfortunately it has since stagnated. It's located at pym.python-hosting.com. With the exception of a commit a few weeks ago I haven't touched it for 4 months. That being said, I haven't completely abandoned it and the basic foundation is there; all that remains is most of the language rules. I left it halfway through implementing operator precedence according to http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_prog/f0-38155.html. Mark. On 6/24/06, Robert Kern wrote: > > Vinicius Lobosco wrote: > > Let's just let those who want to try to do that and give our support? I > > would be happy if I could get some parts of my old matlab programs > > translated to Scipy. > > I do believe that, "Show me," is an *encouragement*. I am explicitly > encouraging > Mathew to work towards that end. Sheesh. 
> > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Sat Jun 24 13:32:04 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sat, 24 Jun 2006 10:32:04 -0700 Subject: [Numpy-discussion] How do I seed the random number generator? In-Reply-To: References: Message-ID: On 6/22/06, Robert Kern wrote: > Keith Goodman wrote: > > How do I seed rand and randn? > > If you can, please use the .rand() and .randn() methods on a RandomState object > which you can initialize with whatever seed you like. > > In [1]: import numpy as np > > In [2]: rs = np.random.RandomState([12345678, 90123456, 78901234]) > > In [3]: rs.rand(5) > Out[3]: array([ 0.40355172, 0.27449337, 0.56989746, 0.34767024, 0.47185004]) Using the same seed sometimes gives different results: from numpy import random def rtest(): rs = random.RandomState([11,21,699,1]) a = rs.rand(100,1) b = rs.randn(100,1) return sum(a + b) >> mytest.rtest() array([ 41.11776129]) >> mytest.rtest() array([ 40.16631018]) >> numpy.__version__ '0.9.7.2416' I ran the test about 20 times before I got the 40.166 result. 
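[For reference, correct seeding means two generators built from the same seed must produce identical streams; the sketch below (using the seed list from Keith's example, otherwise illustrative) is the kind of check that exposes the bug:]

```python
import numpy as np

# Two RandomState objects built from the same seed should produce
# byte-identical streams once seeding works correctly.
seed = [11, 21, 699, 1]
rs1 = np.random.RandomState(seed)
rs2 = np.random.RandomState(seed)

a = rs1.rand(100, 1) + rs1.randn(100, 1)
b = rs2.rand(100, 1) + rs2.randn(100, 1)

print(np.array_equal(a, b))  # prints True with correct seeding
```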
From efiring at hawaii.edu Sat Jun 24 15:30:06 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 09:30:06 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? Message-ID: <449D92BE.3030900@hawaii.edu> It seems that the logical operators || and &&, corresponding to logical_or and logical_and are missing; one can do z = logical_and(x,y) but not z = x && y Is there an inherent reason, or is this a bug? z = (x == y) works, and a comment in umathmodule.c.src suggests that && and || should also: /**begin repeat #kind=greater, greater_equal, less, less_equal, equal, not_equal, logical_and, logical_or, bitwise_and, bitwise_or, bitwise_xor# #OP=>, >=, <, <=, ==, !=, &&, ||, &, |, ^# **/ My version is '0.9.9.2584'. Eric From pgmdevlist at mailcan.com Sat Jun 24 16:12:05 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Sat, 24 Jun 2006 16:12:05 -0400 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: References: <200606212139.52511.fitz@astron.berkeley.edu> Message-ID: <200606241612.07559.pgmdevlist@mailcan.com> Michael, > Is anyone else seeing this? It should be easy to test. If so, I > think it's a bug. Yeah, I see that as well. In MA.power(a,b), a temporary mask is created, True for values a<=0. (check L1577 of the sources, `md = make_mask(umath.less_equal (fa, 0), flag=1)`). The combination of this temp and the initial mask defines the final mask. This condition could probably be relaxed to `md = make_mask(umath.less(fa, 0), flag=1)` That way, the a=0 elements wouldn't be masked, and you'd get the proper result. I haven't really had time to double-check/create a patch, though. 
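[A self-contained sketch of the relaxation being described, written against today's `numpy.ma` names rather than the 2006 `MA` module; `power_relaxed` is a hypothetical helper, not the actual patch. Masking only strictly negative bases lets 0**1. come through unmasked:]

```python
import numpy as np
import numpy.ma as ma

def power_relaxed(a, b):
    # Mask only strictly negative bases (a < 0) rather than a <= 0,
    # so 0 ** 1.0 yields 0.0 instead of a masked value.
    am = ma.asarray(a, dtype=float)
    base = am.filled(1.0)                   # dummy base under the input mask
    bad = np.less(base, 0)                  # was: umath.less_equal(fa, 0)
    safe = np.where(bad, 1.0, base)         # avoid nan/inf where masked
    return ma.masked_array(safe ** b, mask=bad | ma.getmaskarray(am))

print(power_relaxed([0.0, 2.0, -1.0], 1.0))  # [0.0 2.0 --]
```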
Meanwhile, Michael, you could just modify your numpy/core/ma.py accordingly. From robert.kern at gmail.com Sat Jun 24 16:20:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:20:43 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449D92BE.3030900@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> Message-ID: Eric Firing wrote: > It seems that the logical operators || and &&, corresponding to > logical_or and logical_and are missing; one can do > > z = logical_and(x,y) > > but not > > z = x && y > > Is there an inherent reason, or is this a bug? Python does not have a && operator. It has an "and" keyword, but that cannot be overridden. If you know x and y to be boolean arrays, & and | work fine. > z = (x == y) > > works, and a comment in umathmodule.c.src suggests that && and || should > also: > > /**begin repeat > > #kind=greater, greater_equal, less, less_equal, equal, not_equal, > logical_and, logical_or, bitwise_and, bitwise_or, bitwise_xor# > #OP=>, >=, <, <=, ==, !=, &&, ||, &, |, ^# > **/ Those operators are the C versions that will be put in the appropriate places in the generated code. That is not a comment for documentation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 16:34:35 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 10:34:35 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> Message-ID: <449DA1DB.8000902@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>It seems that the logical operators || and &&, corresponding to >>logical_or and logical_and are missing; one can do >> >>z = logical_and(x,y) >> >>but not >> >>z = x && y >> >>Is there an inherent reason, or is this a bug? 
> > > Python does not have a && operator. It has an "and" keyword, but that cannot be > overridden. If you know x and y to be boolean arrays, & and | work fine. Out of curiosity, is there a simple explanation as to why "and" cannot be overridden but operators like "&" can? Is it a fundamental distinction between operators and keywords? In any case, it sounds like we are indeed stuck with an unfortunate wart on numpy, unless some changes in Python can be made. Maybe for Python3000... The NumPy for Matlab users wiki is misleading in this area; I will try to fix it. Eric From robert.kern at gmail.com Sat Jun 24 16:43:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:43:58 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DA1DB.8000902@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: Eric Firing wrote: > Robert Kern wrote: >> Eric Firing wrote: >> >>> It seems that the logical operators || and &&, corresponding to >>> logical_or and logical_and are missing; one can do >>> >>> z = logical_and(x,y) >>> >>> but not >>> >>> z = x && y >>> >>> Is there an inherent reason, or is this a bug? >> >> Python does not have a && operator. It has an "and" keyword, but that cannot be >> overridden. If you know x and y to be boolean arrays, & and | work fine. > > Out of curiosity, is there a simple explanation as to why "and" cannot > be overridden but operators like "&" can? Is it a fundamental > distinction between operators and keywords? Sort of. "and" and "or" short-circuit, that is they stop evaluating as soon as the right value to return is unambiguous. In [1]: def f(): ...: print "Shouldn't be here." ...: ...: In [2]: False and f() Out[2]: False In [3]: True or f() Out[3]: True Consequently, they must yield True and False only. > In any case, it sounds like we are indeed stuck with an unfortunate wart > on numpy, unless some changes in Python can be made. 
Maybe for > Python3000... > > The NumPy for Matlab users wiki is misleading in this area; I will try > to fix it. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Jun 24 16:56:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 15:56:05 -0500 Subject: [Numpy-discussion] How do I seed the random number generator? In-Reply-To: References: Message-ID: Keith Goodman wrote: > Using the same seed sometimes gives different results: > > from numpy import random > def rtest(): > rs = random.RandomState([11,21,699,1]) > a = rs.rand(100,1) > b = rs.randn(100,1) > return sum(a + b) > >>> mytest.rtest() > array([ 41.11776129]) > >>> mytest.rtest() > array([ 40.16631018]) Fixed in SVN. Thank you. http://projects.scipy.org/scipy/numpy/ticket/155 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 16:57:19 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 10:57:19 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: <449DA72F.6060805@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>Robert Kern wrote: >> >>>Eric Firing wrote: >>> >>> >>>>It seems that the logical operators || and &&, corresponding to >>>>logical_or and logical_and are missing; one can do >>>> >>>>z = logical_and(x,y) >>>> >>>>but not >>>> >>>>z = x && y >>>> >>>>Is there an inherent reason, or is this a bug? >>> >>>Python does not have a && operator. It has an "and" keyword, but that cannot be >>>overridden. 
If you know x and y to be boolean arrays, & and | work fine. >> >>Out of curiosity, is there a simple explanation as to why "and" cannot >>be overridden but operators like "&" can? Is it a fundamental >>distinction between operators and keywords? > > > Sort of. "and" and "or" short-circuit, that is they stop evaluating as soon as > the right value to return is unambiguous. > > In [1]: def f(): > ...: print "Shouldn't be here." > ...: > ...: > > In [2]: False and f() > Out[2]: False > > In [3]: True or f() > Out[3]: True > > Consequently, they must yield True and False only. That makes sense, and implies that the real solution would be the introduction of operators && and || into Python, or a facility that would allow extensions to add operators. I guess it would be a matter of having hooks into the parser. I have no idea whether either of these is a reasonable goal--but it certainly would be a big plus for Numpy. Eric From robert.kern at gmail.com Sat Jun 24 17:32:16 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 16:32:16 -0500 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DA72F.6060805@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> Message-ID: Eric Firing wrote: > That makes sense, and implies that the real solution would be the > introduction of operators && and || into Python, or a facility that > would allow extensions to add operators. I guess it would be a matter > of having hooks into the parser. I have no idea whether either of these > is a reasonable goal--but it certainly would be a big plus for Numpy. I don't really see how. We already have the & and | operators. The only difference between them and the && and || operators would be that the latter would automatically coerce to boolean arrays. But you can do that explicitly, now. 
a.astype(bool) | b.astype(bool) Of course, it's highly likely that you are applying & and | to arrays that are already boolean. Consequently, I don't see a real need for more operators. But if you'd like to play around with the grammar: http://www.fiber-space.de/EasyExtend/doc/EE.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From efiring at hawaii.edu Sat Jun 24 19:08:00 2006 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 24 Jun 2006 13:08:00 -1000 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> Message-ID: <449DC5D0.9060704@hawaii.edu> Robert Kern wrote: > Eric Firing wrote: > >>That makes sense, and implies that the real solution would be the >>introduction of operators && and || into Python, or a facility that >>would allow extensions to add operators. I guess it would be a matter >>of having hooks into the parser. I have no idea whether either of these >>is a reasonable goal--but it certainly would be a big plus for Numpy. > > > I don't really see how. We already have the & and | operators. The only > difference between them and the && and || operators would be that the latter > would automatically coerce to boolean arrays. But you can do that explicitly, now. > > a.astype(bool) | b.astype(bool) > Another difference pointed out in the Wiki is precedence, which requires one to be more careful about parentheses when using the bitwise operators. This arises because although the bitwise operators effectively do the right thing, given boolean arguments, there really is a difference between & and &&--that is why C, for example, has both. 
Using & when one means && is a hack that obscures the meaning of the code, and using logical_and is clear but cluttered--a significant step away from the goal of having code be clear, concise and readable. I suspect that many other people will trip over the lack of && in the same way that I have, and will similarly consider it an irritant that we work around because we have to, not because it is good. > Of course, it's highly likely that you are applying & and | to arrays that are > already boolean. Consequently, I don't see a real need for more operators. > > But if you'd like to play around with the grammar: > > http://www.fiber-space.de/EasyExtend/doc/EE.html > Interesting, thanks--but I will back off now. Eric From aisaac at american.edu Sat Jun 24 20:38:39 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 24 Jun 2006 20:38:39 -0400 Subject: [Numpy-discussion] logical_and operator, &&, is missing? In-Reply-To: <449DC5D0.9060704@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> <449DA72F.6060805@hawaii.edu> <449DC5D0.9060704@hawaii.edu> Message-ID: On Sat, 24 Jun 2006, Eric Firing apparently wrote: > I suspect that many other people will trip over the lack > of && in the same way that I have, and will similarly > consider it an irritant that we work around because we > have to, not because it is good. I agree with this. In addition, turning to & when && is wanted will likely cause occasional stumbles over operator precedence. (At least I've been bitten that way.) But I do not see this changing unless Python grants the ability to define new operators, in which case I'm sure the wish lists will come out ... 
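[The precedence stumble is easy to reproduce; a small sketch (an illustrative example, not taken from the thread): `&` binds tighter than the comparison operators, so unparenthesized conditions misparse.]

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([3, 2, 1])

# Correct: parenthesize, since & binds tighter than > or ==.
ok = (x > 1) & (y > 1)
print(ok)  # [False  True False]

# Without parentheses, x > 1 & y > 1 parses as the chained comparison
# x > (1 & y) > 1, whose implicit "and" raises ValueError on arrays.
try:
    x > 1 & y > 1
except ValueError:
    print("ambiguous truth value")
```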
Cheers, Alan Isaac From karol.langner at kn.pl Sun Jun 25 13:38:40 2006 From: karol.langner at kn.pl (Karol Langner) Date: Sun, 25 Jun 2006 19:38:40 +0200 Subject: [Numpy-discussion] basearray Message-ID: <200606251938.40507.karol.langner@kn.pl> Dear all, Some of you might be aware that a project has been granted to me for this year's Google Summer of Code, which aims at preparing a base multidimensional array type for Python. While I had a late start at it, I would like to go through with the project. The focus is on preparing a minimal type that basically only defines how memory is allocated for the array, and which can be used by other, more sophisticated types. Later during the project, the type may be enhanced, depending on how using it in practice (also part of the project) works out. Wiki page about the project: http://scipy.org/BaseArray SVN repository: http://svn.scipy.org/svn/PEP/ In order to make this a potential success, I definitely need feedback from all of you out there interested in pushing such a base type towards Python core. So any comments and opinions are welcome! I will keep you informed on my progress and ask about things that may need consensus (although I'm not sure which lists will be the most interested in this). Please note that I am still in the phase of completing the minimal type, so the svn repository does not contain a working example, yet. Regards, Karol Langner -- written by Karol Langner Sun Jun 25 19:18:45 CEST 2006 From fperez.net at gmail.com Sun Jun 25 14:27:39 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 25 Jun 2006 12:27:39 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. Sorry for not replying before, I wanted a more complete picture before answering. 
> 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? The day this email came in, I had just started to look into porting our major research code. I actually did the work 2 weeks ago, and it went remarkably well. It took a single (marathon) day, about 14 hours of solid work, to go through the old codebase and convert it. This project had a mix of legacy Fortran wrapped via f2py, hand-written C extensions using Numeric, a fair bit of weave.inline() and pure python. It uses matplotlib, PyX and Mayavi for various visualization tasks. There are some 40k loc in the Fortran sources (2/3 of that auto-generated in python from Mathematica computations), and about 13k loc in the C and python sources. This codebase is heavily unit-tested, which was critical for the port. For this kind of effort, unittests make an enormous difference, as they guide you directly to all the problematic spots. Without unittests, this kind of port would have been a nightmare, and I would have never known whether things were actually finished or not. Most of my changes had to do with explicit uses of 'typecode=' which became dtype, and uses of .flat, which used to return a normal array and is now an iterator. I haven't benchmarked things right away, because I expect the numpy-based code to take quite a hit. In this code, I've heavily abused arrays for very trivial 2 and 3-element arithmetic operations, but that means that I create literally millions of extremely small arrays. Even with Numeric, this overhead was already measurable, and I imagine it will get worse with numpy. But since this was silly anyway, and I need to use these little arrays as dictionary keys, instead of doing lots of tuple(array()) all the time, I'm using David Cooke's Vector as a template for a hand-written mini-array class that will do exactly what I need with as little overhead as possible. 
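[As a toy illustration of that kind of hand-written mini-array (the class below is a hypothetical sketch, not David Cooke's actual Vector): subclassing tuple keeps it hashable for dict keys, avoiding the tuple(array()) dance, while providing the handful of elementwise operations needed.]

```python
import math

class Vec3(tuple):
    """Toy fixed-size vector: hashable like a tuple, so usable directly
    as a dict key, with cheap elementwise arithmetic."""
    __slots__ = ()

    def __new__(cls, x, y, z):
        return tuple.__new__(cls, (x, y, z))

    def __add__(self, other):
        return Vec3(self[0] + other[0], self[1] + other[1], self[2] + other[2])

    def __sub__(self, other):
        return Vec3(self[0] - other[0], self[1] - other[1], self[2] - other[2])

    def norm(self):
        return math.sqrt(self[0] ** 2 + self[1] ** 2 + self[2] ** 2)

sites = {Vec3(0, 0, 0): "origin"}     # hashable: works as a dict key
print(Vec3(1, 2, 3) + Vec3(4, 5, 6))  # (5, 7, 9)
print(Vec3(3, 4, 0).norm())           # 5.0
```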
If for any reason you do want to see actual benchmarks, I can try to run some with the codebases immediately before and after the Numeric->numpy change and report back. > 2) Will you transition within the next 6 months? (if you answered No to #1) That's it: by now we've moved all of our code and it doesn't really work with Numeric anymore, so we're committed :) Again, many thanks for the major improvements that numpy brings! Cheers, f From fperez.net at gmail.com Sun Jun 25 14:55:35 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 25 Jun 2006 12:55:35 -0600 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/30/06, Travis Oliphant wrote: > 4) Please provide any suggestions for improving NumPy. Well, if I can beg for one thing, it would be fixing dot(): http://projects.scipy.org/scipy/numpy/ticket/156 This bug is currently stalling us pretty badly, since dot() is at the core of everything we do. While the codebase I alluded to in my previous message is fine, a project that sits on top of it is blocked from moving on due to this particular problem. If it's a problem on our side, I'll gladly correct it, but it does seem like a bug to me (esp. with Stefan's test of r2651 which passes). If there's any extra info that you need from me, by all means let me know an I'll be happy to provide it. If you have a feel for where the problem may be but don't have time to fix it right now, I can look into it myself, if you can point me in the right direction. Cheers, f From ndarray at mac.com Sun Jun 25 16:22:02 2006 From: ndarray at mac.com (Sasha) Date: Sun, 25 Jun 2006 16:22:02 -0400 Subject: [Numpy-discussion] logical_and operator, &&, is missing? 
In-Reply-To: <449DA1DB.8000902@hawaii.edu> References: <449D92BE.3030900@hawaii.edu> <449DA1DB.8000902@hawaii.edu> Message-ID: On 6/24/06, Eric Firing wrote: > Out of curiosity, is there a simple explanation as to why "and" cannot > be overridden but operators like "&" can? Is it a fundamental > distinction between operators and keywords? > There is no fundamental reason. In fact overloadable boolean operators were proposed for python: 
From mpfitz at berkeley.edu Sun Jun 25 21:07:55 2006 From: mpfitz at berkeley.edu (Michael Fitzgerald) Date: Sun, 25 Jun 2006 18:07:55 -0700 Subject: [Numpy-discussion] f.p. powers and masked arrays In-Reply-To: <200606241612.07559.pgmdevlist@mailcan.com> References: <200606212139.52511.fitz@astron.berkeley.edu> <200606241612.07559.pgmdevlist@mailcan.com> Message-ID: <200606251807.55956.mpfitz@berkeley.edu> On Saturday 24 June 2006 13:12, Pierre GM wrote: > I haven't really had time to double-check/create a patch, though. Meanwhile, > Michael, you could just modify your numpy/core/ma.py accordingly. Hi Pierre, Thank you for the fix. I checked it out and numpy now behaves correctly for 0**1. in masked arrays. Attached are the diffs for numpy (scipy.org SVN) and numarray (sf.net CVS). 
Mike -------------- next part -------------- A non-text attachment was scrubbed... Name: numarray.diff Type: text/x-diff Size: 705 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.diff Type: text/x-diff Size: 506 bytes Desc: not available URL:
From chanley at stsci.edu Mon Jun 26 08:53:59 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 26 Jun 2006 08:53:59 -0400 Subject: [Numpy-discussion] numpy revision 2680 causes segfault on Solaris Message-ID: <449FD8E7.6030609@stsci.edu> Greetings, Numpy revision 2680 causes a segfault in the unit tests on the Solaris 8 OS. The unit tests fail at the following test: check_vecobject (numpy.core.tests.test_numeric.test_dot)Segmentation Fault (core dumped) I can try to isolate what in the test is failing.
What I can tell you now is that revision 2677 built and tested with no issues, so the suspect change was made to one of the following files: U numpy/numpy/f2py/lib/typedecl_statements.py U numpy/numpy/f2py/lib/block_statements.py U numpy/numpy/f2py/lib/splitline.py U numpy/numpy/f2py/lib/parsefortran.py U numpy/numpy/f2py/lib/base_classes.py U numpy/numpy/f2py/lib/readfortran.py U numpy/numpy/f2py/lib/statements.py U numpy/numpy/core/src/arrayobject.c U numpy/numpy/core/tests/test_numeric.py Chris From oliphant.travis at ieee.org Mon Jun 26 11:37:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 09:37:06 -0600 Subject: [Numpy-discussion] numpy revision 2680 causes segfault on Solaris In-Reply-To: <449FD8E7.6030609@stsci.edu> References: <449FD8E7.6030609@stsci.edu> Message-ID: <449FFF22.5000208@ieee.org> Christopher Hanley wrote: > Greetings, > > Numpy revision 2680 causes a segfault in the unit tests on the Solaris 8 > OS. The unit tests fail at the following test: > > check_vecobject (numpy.core.tests.test_numeric.test_dot)Segmentation > Fault (core dumped) > > I can try to isolate what in the test is failing. > > What I can tell you now is that revision 2677 built and tested with no > issues so the suspect change was made to one of the following files: > This is a new test in 2680. It may be a problem that has been present but not tested against, or it may be a problem introduced with my recent changes to the copy and broadcast code (which are pretty fundamental pieces of code). If you can give a (gdb) traceback it would be helpful. Thanks, -Travis From kwgoodman at gmail.com Mon Jun 26 14:19:31 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 26 Jun 2006 11:19:31 -0700 Subject: [Numpy-discussion] Sour pickles Message-ID: Upgrading numpy and scipy from an April svn snapshot to yesterday's svn broke my code. To diagnose the problem I need to generate data in one version and load it in the other version.
I did a search on how to save data in Python and came up with pickle, or, actually, cPickle. But the format of the pickle differs between the two versions of numpy: I am unable to load in one version what I saved in the other. When I pickle, for example, numpy.asmatrix([1,2,3]) as ASCII, numpy 0.9.9.2677 adds I1\n in two places compared with numpy 0.9.7.2416. Any advice? From oliphant.travis at ieee.org Mon Jun 26 17:32:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 15:32:09 -0600 Subject: [Numpy-discussion] Sour pickles In-Reply-To: References: Message-ID: <44A05259.8080101@ieee.org> Keith Goodman wrote: > Upgrading numpy and scipy from an April svn snapshot to yesterday's > svn broke my code. > > To diagnose the problem I need to generate data in one version and > load it in the other version. > > I did a search on how to save data in Python and came up with pickle, > or, actually, cPickle. > > But the format of the pickle differs between the two versions of > numpy. I am unable to load in one version what I saved in the other > version. > > When I pickle, for example, numpy.asmatrix([1,2,3]) as ASCII, numpy > 0.9.9.2677 adds I1\n in two places compared with numpy 0.9.7.2416. > > Any advice? > The only thing that has changed in the pickling code is the addition of a version number to the pickle. This means that 0.9.7.2416 will not be able to read 0.9.9.2677 pickles, but 0.9.9.2677 will be able to read 0.9.7.2416 pickles. This will be generally true: you can expect to read old pickles with a new NumPy, but not necessarily new ones with an old version. The other option is to use fromfile() and arr.tofile(), which will read and write raw data. They are harder to use than pickle because no shape information is stored (it's just a raw binary file).
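The tofile()/fromfile() approach can be sketched by storing the dtype and shape in a small sidecar file alongside the raw bytes (the file names here are illustrative, not part of any NumPy convention):

```python
import os
import tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()
datafile = os.path.join(tmpdir, 'data.bin')
metafile = os.path.join(tmpdir, 'data.shape')

# Save: tofile() writes only raw bytes, so record dtype and shape separately.
a = np.arange(12, dtype=np.float64).reshape(3, 4)
a.tofile(datafile)
with open(metafile, 'w') as f:
    f.write('%s %s' % (a.dtype.str, ' '.join(str(n) for n in a.shape)))

# Load: fromfile() returns a flat array; restore the shape from the metadata.
with open(metafile) as f:
    fields = f.read().split()
dtype, shape = fields[0], tuple(int(n) for n in fields[1:])
b = np.fromfile(datafile, dtype=dtype).reshape(shape)
assert (a == b).all()
```

Unlike a pickle, the raw file carries no version information at all, which is exactly why it sidesteps the cross-version problem above.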
-Travis From oliphant.travis at ieee.org Mon Jun 26 23:00:18 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Jun 2006 21:00:18 -0600 Subject: [Numpy-discussion] Record-arrays can now hold objects Message-ID: <44A09F42.3070207@ieee.org> I've finished basic support for arrays with object fields. Thus, for example, you can have a data-type that is [('date', 'O'), ('values', 'f8')]. Objects can be inside any layer of a nested field as well. The work must still be considered alpha, because there may be places in the code that I've forgotten about that do not take an appropriately abstract view of the data-type. There is one unit test for the capability, but more testing is needed. Use of these should be no slower than object arrays and should not change the speed of other arrays. -Travis From nwagner at iam.uni-stuttgart.de Tue Jun 27 03:37:29 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jun 2006 09:37:29 +0200 Subject: [Numpy-discussion] numpy.linalg.pinv has no docstring Message-ID: <44A0E039.3000403@iam.uni-stuttgart.de> Hi Travis, Just now I saw that you have fixed the failing test. You used pinv (pseudo-inverse). Could you please add a docstring to numpy.linalg.pinv? Thanks in advance. Nils In [4]: numpy.linalg.pinv? Type: function Base Class: String Form: Namespace: Interactive File: /usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py Definition: numpy.linalg.pinv(a, rcond=1e-10) Docstring:
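Returning to Travis's record-array announcement above, a minimal sketch of an array with an object field next to a float field (the dates are made up for illustration):

```python
import datetime
import numpy as np

# A structured dtype with an object field and a float field, as in the
# announcement: [('date', 'O'), ('values', 'f8')].
dt = np.dtype([('date', 'O'), ('values', 'f8')])
a = np.array([(datetime.date(2006, 6, 26), 1.5),
              (datetime.date(2006, 6, 27), 2.5)], dtype=dt)

# Fields are accessed by name; the 'O' field holds arbitrary Python objects.
assert a['values'].sum() == 4.0
assert a['date'][0].year == 2006
```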
From joris at ster.kuleuven.ac.be Tue Jun 27 05:11:11 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue, 27 Jun 2006 11:11:11 +0200 Subject: [Numpy-discussion] numpy.linalg.pinv has no docstring Message-ID: <1151399471.44a0f62f1c7d2@webmail.ster.kuleuven.be> On Tuesday 27 June 2006 09:37, Nils Wagner wrote: [NW]: Please can you add a docstring to numpy.linalg.pinv. In case it might help, I added an example to the Numpy Example List (http://www.scipy.org/Numpy_Example_List) which illustrates the use of pinv(). J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
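Along the lines of the Numpy Example List entry mentioned above, a small pinv() illustration (the matrix values are arbitrary):

```python
import numpy as np

# pinv computes the Moore-Penrose pseudo-inverse via SVD. For a
# non-square A it satisfies the defining identities
# A pinv(A) A == A and pinv(A) A pinv(A) == pinv(A).
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Ap = np.linalg.pinv(A)
assert Ap.shape == (2, 3)
assert np.allclose(A.dot(Ap).dot(A), A)
assert np.allclose(Ap.dot(A).dot(Ap), Ap)
```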
From kwgoodman at gmail.com Tue Jun 27 12:45:57 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 09:45:57 -0700 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 Message-ID: This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683: Numpy 0.9.9.2683 x = asmatrix(zeros((3,2), float)) y = asmatrix(rand(3,1)) y matrix([[ 0.49865026], [ 0.82675808], [ 0.30285247]]) x[:,1] = y > 0.5 x matrix([[ 0., 0.], [ 0., 0.], <--- this should be one (?)
[ 0., 0.]]) But it worked in 0.9.7.2416: x = asmatrix(zeros((3,2), float)) y = asmatrix(rand(3,1)) y matrix([[ 0.35444501], [ 0.7032141 ], [ 0.0918561 ]]) x[:,1] = y > 0.5 x matrix([[ 0., 0.], [ 0., 1.], [ 0., 0.]]) From stefan at sun.ac.za Tue Jun 27 14:12:48 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 27 Jun 2006 20:12:48 +0200 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: References: Message-ID: <20060627181248.GA27056@mentat.za.net> On Tue, Jun 27, 2006 at 09:45:57AM -0700, Keith Goodman wrote: > This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683: > > Numpy 0.9.9.2683 > > x = asmatrix(zeros((3,2), float)) > y = asmatrix(rand(3,1)) > y > > matrix([[ 0.49865026], > [ 0.82675808], > [ 0.30285247]]) > > x[:,1] = y > 0.5 > x > > matrix([[ 0., 0.], > [ 0., 0.], <--- this should be one (?) > [ 0., 0.]]) With r2691 I see In [7]: x = N.asmatrix(N.zeros((3,2)),float) In [8]: y = N.asmatrix(N.rand(3,1)) In [12]: x[:,1] = y > 0.5 In [13]: x Out[13]: matrix([[ 0., 1.], [ 0., 1.], [ 0., 1.]]) Cheers St?fan From oliphant.travis at ieee.org Tue Jun 27 14:19:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 12:19:59 -0600 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: References: Message-ID: <44A176CF.6080302@ieee.org> Keith Goodman wrote: > This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683: > > Numpy 0.9.9.2683 > > x = asmatrix(zeros((3,2), float)) > y = asmatrix(rand(3,1)) > y > > matrix([[ 0.49865026], > [ 0.82675808], > [ 0.30285247]]) > > x[:,1] = y > 0.5 > x > > matrix([[ 0., 0.], > [ 0., 0.], <--- this should be one (?) > [ 0., 0.]]) > > This looks like a bug, probably introduced recently during the re-write of the copying and casting code. Try checking out the revisions r2662 and r2660 to see which one works for you. I'll look into this problem. 
-Travis From kwgoodman at gmail.com Tue Jun 27 14:44:06 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 11:44:06 -0700 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: <44A176CF.6080302@ieee.org> References: <44A176CF.6080302@ieee.org> Message-ID: On 6/27/06, Travis Oliphant wrote: > Keith Goodman wrote: > > This works in numpy 0.9.7.2416 but doesn't work in numpy 0.9.9.2683: > > > > Numpy 0.9.9.2683 > > > > x = asmatrix(zeros((3,2), float)) > > y = asmatrix(rand(3,1)) > > y > > > > matrix([[ 0.49865026], > > [ 0.82675808], > > [ 0.30285247]]) > > > > x[:,1] = y > 0.5 > > x > > > > matrix([[ 0., 0.], > > [ 0., 0.], <--- this should be one (?) > > [ 0., 0.]]) > > > > > > This looks like a bug, probably introduced recently during the re-write > of the copying and casting code. Try checking out the revisions r2662 > and r2660 to see which one works for you. I'll look into this problem. Thanks for the tip. I get some extra output with r2660. It prints out "Source array" and "Dest. array" like this: >> x = asmatrix(zeros((3,2), float)) >> x matrix([[ 0., 0.], [ 0., 0.], [ 0., 0.]]) >> y = asmatrix(rand(3,1)) >> y matrix([[ 0.60117193], [ 0.43883293], [ 0.01633154]]) >> x[:,1] = y > 0.5 Source array = (3 1) Dest. 
array = (1 3) >> x matrix([[ 0., 1.], [ 0., 0.], [ 0., 0.]]) From oliphant.travis at ieee.org Tue Jun 27 14:50:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 12:50:05 -0600 Subject: [Numpy-discussion] Upgrading from numpy 0.9.7.2416 to 0.9.9.2683 In-Reply-To: <20060627181248.GA27056@mentat.za.net> References: <20060627181248.GA27056@mentat.za.net> Message-ID: <44A17DDD.5090904@ieee.org> Stefan van der Walt wrote: > On Tue, Jun 27, 2006 at 09:45:57AM -0700, Keith Goodman wrote: > > With r2691 I see > > In [7]: x = N.asmatrix(N.zeros((3,2)),float) > > In [8]: y = N.asmatrix(N.rand(3,1)) > > In [12]: x[:,1] = y > 0.5 > > In [13]: x > Out[13]: > matrix([[ 0., 1.], > [ 0., 1.], > [ 0., 1.]]) > This was a bug, indirectly caused by the move to broadcasted copying and casting, and the use of a matrix here. Previously the shapes didn't matter as long as the total size was the same. Internally, x[:,1] was creating a (1,3) matrix referencing the last column of x (call it xp), while y>0.5 was a (3,1) matrix (call it yp). The casting code was therefore repeatedly filling in x with (y>0.5), so the last entry of (y>0.5) was the one that resulted. Previously this would have worked because the shapes of the arrays didn't matter, but now they do. The real culprit was not allowing the matrix's __getitem__ method to be called (which would have correctly obtained a (3,1) matrix from x[:,1] and thus avoided the strange result). Thus, in SVN, PyObject_GetItem is now used instead of the default ndarray getitem. The upshot is that this should now work --- and there is now a unit test to check for it. Thanks to Keith for exposing this bug.
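The fixed behaviour Travis describes can be checked with a small script; fixed values stand in for rand() so the expected column is known:

```python
import numpy as np

# x[:, 1] on a matrix yields a (3, 1) column; assigning a (3, 1) boolean
# result must broadcast element-by-element, not fill from a flat copy.
x = np.asmatrix(np.zeros((3, 2)))
y = np.asmatrix([[0.6], [0.4], [0.7]])  # fixed values instead of rand()
x[:, 1] = y > 0.5

assert np.asarray(x)[:, 1].tolist() == [1.0, 0.0, 1.0]
assert (np.asarray(x)[:, 0] == 0.0).all()
```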
-Travis From geneing at gmail.com Tue Jun 27 13:52:08 2006 From: geneing at gmail.com (EI) Date: Tue, 27 Jun 2006 10:52:08 -0700 Subject: [Numpy-discussion] int64 weirdness Message-ID: Hi, I'm running python 2.4 on a 64-bit linux and get strange results: (int(9))**2 is equal to 81, as it should, but (int64(9))**2 is equal to 0 Is it a bug or a feature? Eugene -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue Jun 27 15:38:29 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 13:38:29 -0600 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: References: Message-ID: <44A18935.1090702@ieee.org> EI wrote: > Hi, > > I'm running python 2.4 on a 64-bit linux and get strange results: > (int(9))**2 is equal to 81, as it should, but > (int64(9))**2 is equal to 0 Thanks for the bug report. Please provide the version of NumPy you are using so we can track it down, or suggest an upgrade. -Travis From tim.hochberg at cox.net Tue Jun 27 16:08:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue, 27 Jun 2006 13:08:13 -0700 Subject: [Numpy-discussion] numexpr does sum. Message-ID: <44A1902D.3060909@cox.net> I managed to get basic support for sum and prod into numexpr. I need to tie up some loose ends; for instance, only floats are currently supported, but these should be easy. To return to the recently posted multidimensional distance program, this now works:

expr = numexpr("sum((a - b)**2, axis=2)", [('a', float), ('b', float)])

def dist_numexpr(A, B):
    return sqrt(expr(A[:,newaxis], B[newaxis,:]))

It's also quite fast, although there's still room for improvement in the reduction code. Notice that it still needs to be in two parts, since sum/prod needs to surround the rest of the expression. Note also that it does support the axis keyword, although currently only nonnegative values (or None). I plan to fix that at some point though.
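For comparison, the same pairwise-distance computation in plain NumPy (without numexpr, whose early API shown above may since have changed):

```python
import numpy as np

# Pairwise Euclidean distances between rows of A (n, d) and B (m, d):
# broadcast (n, 1, d) against (1, m, d), then reduce over axis=2.
def dist_numpy(A, B):
    d = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((d ** 2).sum(axis=2))

A = np.array([[0.0, 0.0], [3.0, 4.0]])
B = np.array([[0.0, 0.0], [0.0, 4.0]])
D = dist_numpy(A, B)
assert D.shape == (2, 2)
assert np.allclose(D, [[0.0, 4.0], [5.0, 3.0]])
```

The intermediate array d has shape (n, m, d), which is exactly the temporary that numexpr's fused evaluation avoids allocating.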
-tim From geneing at gmail.com Tue Jun 27 16:26:38 2006 From: geneing at gmail.com (EI) Date: Tue, 27 Jun 2006 13:26:38 -0700 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: <44A18935.1090702@ieee.org> References: <44A18935.1090702@ieee.org> Message-ID: numpy.__version__ says 0.9.8. Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64). Thanks Travis, Eugene On 6/27/06, Travis Oliphant wrote: > > EI wrote: > > Hi, > > > > I'm running python 2.4 on a 64bit linux and get strange results: > > (int(9))**2 is equal to 81, as it should, but > > (int64(9))**2 is equal to 0 > > Thanks for the bug-report. Please provide the version of NumPy you are > using so we can track it down, or suggest an upgrade. > > -Travis > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Tue Jun 27 18:37:21 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 27 Jun 2006 15:37:21 -0700 Subject: [Numpy-discussion] int64 weirdness In-Reply-To: References: <44A18935.1090702@ieee.org> Message-ID: <44A1B321.2030102@astraw.com> An SVN checkout from a week or two ago looks OK on my amd64 machine:

astraw at hdmg:~$ python
Python 2.4.3 (#2, Apr 27 2006, 14:43:32)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'0.9.9.2631'
>>> numpy.int64(9)**2
81
>>>

EI wrote: > numpy.__version__ says 0.9.8. > > Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64). > > Thanks Travis, > Eugene > > On 6/27/06, *Travis Oliphant* < oliphant.travis at ieee.org > > wrote: > > EI wrote: > > Hi, > > > > I'm running python 2.4 on a 64bit linux and get strange results: > > (int(9))**2 is equal to 81, as it should, but > > (int64(9))**2 is equal to 0 > > Thanks for the bug-report. Please provide the version of NumPy > you are > using so we can track it down, or suggest an upgrade.
> > -Travis > > >------------------------------------------------------------------------ > >Using Tomcat but need to do more? Need to support web services, security? >Get stuff done quickly with pre-integrated technology to make your job easier >Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > >------------------------------------------------------------------------ > >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From dvp at MIT.EDU Tue Jun 27 19:38:44 2006 From: dvp at MIT.EDU (Dennis V. Perepelitsa) Date: Tue, 27 Jun 2006 19:38:44 -0400 (EDT) Subject: [Numpy-discussion] Numpy Benchmarking Message-ID: Hi, all. I've run some benchmarks comparing the performance of scipy, numpy, Numeric and numarray vs. MATLAB. There's also the beginnings of a benchmark framework included. The results are online at: http://web.mit.edu/jonas/www/bench/ They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz processor running Ubuntu Dapper Drake (6.06). All the languages/packages were built from source, and, in the case of numpy and scipy, linked to ATLAS. Each datapoint represents the arithmetic mean of ten trials. The results have some interesting implications. For example, numpy and scipy perform approximately the same except when it comes to matrix inversion, MATLAB beats out all the Python packages when it comes to matrix addition, and numpy seems to be beaten by its predecessors in some cases. Why is this the case? What are some other, additional benchmarks I could try? Dennis V. 
Perepelitsa MIT Class of 2008, Course VIII and XVIII-C Picower Institute for Learning and Memory From robert.kern at gmail.com Tue Jun 27 19:50:19 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 18:50:19 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: Dennis V. Perepelitsa wrote: > Hi, all. > > I've run some benchmarks comparing the performance of scipy, numpy, > Numeric and numarray vs. MATLAB. There's also the beginnings of a > benchmark framework included. The results are online at: > > http://web.mit.edu/jonas/www/bench/ > > They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz > processor running Ubuntu Dapper Drake (6.06). All the languages/packages > were built from source, and, in the case of numpy and scipy, linked to > ATLAS. Each datapoint represents the arithmetic mean of ten trials. I have two suggestions based on a two-second glance at this: 1) Use time.time() on UNIX and time.clock() on Windows. The usual snippet of code I use for this: import sys import time if sys.platform == 'win32': now = time.clock else: now = time.time t1 = now() ... t2 = now() 2) Never take the mean of repeated time trials. Take the minimum if you need to summarize a set of trials. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 19:55:53 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 16:55:53 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: On 6/27/06, Dennis V. Perepelitsa wrote: > I've run some benchmarks comparing the performance of scipy, numpy, > Numeric and numarray vs. MATLAB. I enjoyed looking at the results. The most interesting result, for me, was that inverting a matrix is much faster in scipy than numpy. How can that be? 
I would have guessed that numpy handled the inversion for scipy since numpy is the core. The two calls were scipy.linalg.inv(m) and numpy.linalg.inv(m). From oliphant.travis at ieee.org Tue Jun 27 20:24:23 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 18:24:23 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A1CC37.1050300@ieee.org> Dennis V. Perepelitsa wrote: > Hi, all. > > I've run some benchmarks comparing the performance of scipy, numpy, > Numeric and numarray vs. MATLAB. There's also the beginnings of a > benchmark framework included. The results are online at: > > http://web.mit.edu/jonas/www/bench/ > > They were produced on a Thinkpad T42 with an Intel Pentium M 1.7GHz > processor running Ubuntu Dapper Drake (6.06). All the languages/packages > were built from source, and, in the case of numpy and scipy, linked to > ATLAS. Each datapoint represents the arithmetic mean of ten trials. > I agree with Robert that a minimum would be a better way to aggregate results. > The results have some interesting implications. For example, numpy and > scipy perform approximately the same except when it comes to matrix > inversion, MATLAB beats out all the Python packages when it comes to > matrix addition, and numpy seems to be beaten by its predecessors in some > cases. Why is this the case? In terms of creating zeros matrices, you are creating double-precision matrices for NumPy but only single-precision for Numeric and numarray. Try using numpy.float32 or 'f' when creating numpy arrays. The float is the Python type-object and represents a double-precision number. Or, if you are trying to use double precision for all cases (say for comparison to MATLAB) then use 'd' in numarray and Numeric. For comparing numpy with numarray and Numeric there are some benchmarks in the SVN tree of NumPy under benchmarks. 
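The precision point above can be sketched in a few lines (NumPy only; 'f' and 'd' are the single- and double-precision type codes mentioned for Numeric and numarray):

```python
import numpy as np

# In NumPy the Python built-in `float` means double precision, so
# zeros(..., dtype=float) allocates 64-bit elements; pass numpy.float32
# (type code 'f') to get a single-precision array for a fair comparison.
a_double = np.zeros(3, dtype=float)
a_single = np.zeros(3, dtype=np.float32)

print(a_double.dtype, a_double.itemsize)  # float64 8
print(a_single.dtype, a_single.itemsize)  # float32 4
```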
These benchmarks have been helpful in the past in pointing out areas where we could improve the code of NumPy, so I'm grateful for your efforts. -Travis From oliphant.travis at ieee.org Tue Jun 27 20:26:46 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 18:26:46 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A1CCC6.9090506@ieee.org> Keith Goodman wrote: > On 6/27/06, Dennis V. Perepelitsa wrote: > > >> I've run some benchmarks comparing the performance of scipy, numpy, >> Numeric and numarray vs. MATLAB. >> > > I enjoyed looking at the results. > > The most interesting result, for me, was that inverting a matrix is > much faster in scipy than numpy. How can that be? I would have guessed > that numpy handled the inversion for scipy since numpy is the core. > > The two calls were scipy.linalg.inv(m) and numpy.linalg.inv(m). > NumPy uses Numeric's old wrapper to lapack algorithms. SciPy uses its own f2py-generated wrapper (it doesn't rely on the NumPy wrapper). The numpy.dual library exists so you can use the SciPy calls if the person has SciPy installed or the NumPy ones otherwise. It exists precisely for the purpose of seamlessly taking advantage of algorithms/interfaces that exist in NumPy but are improved in SciPy. -Travis From kwgoodman at gmail.com Tue Jun 27 21:13:37 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 18:13:37 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A1CCC6.9090506@ieee.org> References: <44A1CCC6.9090506@ieee.org> Message-ID: On 6/27/06, Travis Oliphant wrote: > The numpy.dual library exists so you can use the SciPy calls if the > person has SciPy installed or the NumPy ones otherwise. It exists > precisely for the purpose of seamlessly taking advantage of > algorithms/interfaces that exist in NumPy but are improved in SciPy. That sounds very interesting.
It would make a great addition to the scipy performance page: http://scipy.org/PerformanceTips So if I need any of the following functions I should import them from scipy or from numpy.dual? And all of them are faster? fft ifft fftn ifftn fft2 ifft2 norm inv svd solve det eig eigvals eigh eigvalsh lstsq pinv cholesky http://svn.scipy.org/svn/numpy/trunk/numpy/dual.py From kwgoodman at gmail.com Tue Jun 27 22:18:51 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 19:18:51 -0700 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <44A1CCC6.9090506@ieee.org> Message-ID: On 6/27/06, Keith Goodman wrote: > On 6/27/06, Travis Oliphant wrote: > > > The numpy.dual library exists so you can use the SciPy calls if the > > person has SciPy installed or the NumPy ones otherwise. It exists > > precisely for the purpose of seamlessly taking advantage of > > algorithms/interfaces that exist in NumPy but are improved in SciPy. > > That sounds very interesting. It would make a great addition to the > scipy performance page: > > http://scipy.org/PerformanceTips > > So if I need any of the following functions I should import them from > scipy or from numpy.dual? And all of them are faster? > > fft > ifft > fftn > ifftn > fft2 > ifft2 > norm > inv > svd > solve > det > eig > eigvals > eigh > eigvalsh > lstsq > pinv > cholesky > > http://svn.scipy.org/svn/numpy/trunk/numpy/dual.py > Scipy computes the inverse of a matrix faster than numpy (except if the dimensions of x are small). 
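The dispatch that numpy.dual performs amounts to this import-time fallback, written out by hand (a sketch of the pattern, not the actual numpy.dual source):

```python
import numpy as np

# Prefer the SciPy implementation when SciPy is installed, otherwise
# fall back to NumPy's -- the pattern numpy.dual automates.
try:
    from scipy.linalg import inv
except ImportError:
    from numpy.linalg import inv

m = np.array([[2.0, 0.0], [0.0, 4.0]])
print(inv(m))  # [[0.5, 0.], [0., 0.25]]
```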
But scipy is slower than numpy for eigh (I only checked for symmetric positive definite matrices): from numpy import asmatrix, randn from numpy.linalg import eigh as Neigh from scipy.linalg import eigh as Seigh import time def test(N): x = asmatrix(randn(N,2*N)) x = x * x.T t0 = time.time() eigval, eigvec = Neigh(x) t1 = time.time() t2 = time.time() eigval, eigvec = Seigh(x) t3 = time.time() print 'NumPy:', t1-t0, 'seconds' print 'SciPy:', t3-t2, 'seconds' >> dual.test(10) NumPy: 0.000217914581299 seconds SciPy: 0.000226020812988 seconds >> dual.test(100) NumPy: 0.0123109817505 seconds SciPy: 0.0321230888367 seconds >> dual.test(200) NumPy: 0.0793058872223 seconds SciPy: 0.082535982132 seconds >> dual.test(500) NumPy: 0.59161400795 seconds SciPy: 1.41600894928 seconds From robert.kern at gmail.com Tue Jun 27 22:40:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 21:40:46 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <44A1CCC6.9090506@ieee.org> Message-ID: Keith Goodman wrote: > Scipy computes the inverse of a matrix faster than numpy (except if > the dimensions of x are small). But scipy is slower than numpy for > eigh (I only checked for symmetric positive definite matrices): Looks like scipy uses *SYEV and numpy uses the better *SYEVD (the D stands for divide-and-conquer) routine. Both should probably be using the RRR versions (*SYEVR) if I'm reading the advice in the LUG correctly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 23:03:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 20:03:01 -0700 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? Message-ID: Isn't the Cholesky decomposition by convention an upper triangular matrix? 
I noticed, by porting Octave code, that linalg.cholesky returns the lower triangular matrix. References: http://mathworld.wolfram.com/CholeskyDecomposition.html http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html From robert.kern at gmail.com Tue Jun 27 23:18:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 22:18:04 -0500 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: References: Message-ID: Keith Goodman wrote: > Isn't the Cholesky decomposition by convention an upper triangular > matrix? I noticed, by porting Octave code, that linalg.cholesky > returns the lower triangular matrix. > > References: > > http://mathworld.wolfram.com/CholeskyDecomposition.html > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html Lower: http://en.wikipedia.org/wiki/Cholesky_decomposition http://www.math-linux.com/spip.php?article43 http://planetmath.org/?op=getobj&from=objects&id=1287 http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 http://www.riskglossary.com/link/cholesky_factorization.htm http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf If anything, the convention appears to be lower-triangular. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Tue Jun 27 23:25:08 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 20:25:08 -0700 Subject: [Numpy-discussion] Should cholesky return upper or lower triangular matrix? In-Reply-To: References: Message-ID: On 6/27/06, Robert Kern wrote: > Keith Goodman wrote: > > Isn't the Cholesky decomposition by convention an upper triangular > > matrix? I noticed, by porting Octave code, that linalg.cholesky > > returns the lower triangular matrix. 
> > > References: > > > http://mathworld.wolfram.com/CholeskyDecomposition.html > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html > > Lower: > http://en.wikipedia.org/wiki/Cholesky_decomposition > http://www.math-linux.com/spip.php?article43 > http://planetmath.org/?op=getobj&from=objects&id=1287 > http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 > http://www.riskglossary.com/link/cholesky_factorization.htm > http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf > > If anything, the convention appears to be lower-triangular. If you give me a second, I'll show you that the wikipedia supports my claim. OK. Lower it is. It will save me a transpose when I calculate joint random variables.
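The lower-triangular convention can be verified directly (a short check on a small symmetric positive-definite matrix chosen for illustration):

```python
import numpy as np

# linalg.cholesky returns the *lower* triangular factor L with A = L L^T.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(A)

print(L)                                 # zero above the diagonal
print(np.allclose(np.dot(L, L.T), A))    # True: L L^T reconstructs A
```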
From joris at ster.kuleuven.ac.be Wed Jun 28 04:14:41 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed, 28 Jun 2006 10:14:41 +0200 Subject: [Numpy-discussion] Numpy Benchmarking Message-ID: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Hi, [TO]: NumPy uses Numeric's old wrapper to lapack algorithms.
[TO]: [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the [TO]: NumPy wrapper). [TO]: [TO]: The numpy.dual library exists so you can use the SciPy calls if the [TO]: person has SciPy installed or the NumPy ones otherwise. It exists [TO]: precisely for the purpose of seamlessly taking advantage of [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. This strikes me as a little bit odd. Why not just provide the best-performing function to both SciPy and NumPy? Would NumPy be more difficult to install if the SciPy algorithm for inv() was incorporated? Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From robert.kern at gmail.com Wed Jun 28 04:22:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 03:22:28 -0500 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: joris at ster.kuleuven.ac.be wrote: > Hi, > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > [TO]: > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > [TO]: NumPy wrapper). > [TO]: > [TO]: The numpy.dual library exists so you can use the SciPy calls if the > [TO]: person has SciPy installed or the NumPy ones otherwise. It exists > [TO]: precisely for the purpose of seamlessly taking advantage of > [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. > > This strikes me as a little bit odd. Why not just provide the best-performing > function to both SciPy and NumPy? Would NumPy be more difficult to install > if the SciPy algorithm for inv() was incorporated? That's certainly the case for the FFT algorithms. Scipy wraps more (and more complicated) FFT libraries that are faster than FFTPACK. Most of the linalg functionality should probably be wrapping the same routines if an optimized LAPACK is available. 
However, changing the routine used in numpy in the absence of an optimized LAPACK would require reconstructing the f2c'ed lapack_lite library that we include with the numpy source. That hasn't been touched in so long that I would hesitate to do so. If you are willing to do the work and the testing to ensure that it still works everywhere, we'd probably accept the change. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wright at esrf.fr Wed Jun 28 04:55:36 2006 From: wright at esrf.fr (Jon Wright) Date: Wed, 28 Jun 2006 10:55:36 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <44A24408.9000305@esrf.fr> >>This strikes me as a little bit odd. Why not just provide the best-performing >>function to both SciPy and NumPy? Would NumPy be more difficult to install >>if the SciPy algorithm for inv() was incorporated? >> >> Having spent a few days recently trying out various different eigenvector routines in Lapack I would have greatly appreciated having a choice of which one to use from without having to create my own wrappers, compiling atlas and lapack under windows (ouch). I noted that Numeric (24.2) seemed to be converting Float32 to double meaning my problem no longer fits in memory, which was the motivation for the work. Poking around in the svn of numpy.linalg appears to find the same lapack routine as Numeric (dsyevd). Perhaps I miss something in the code logic? The divide and conquer (*evd) uses more memory than the (*ev), as well as a factor of 2 for float/double, hence my problem, and the reason why "best performing" is a hard choice. I thought matlab has a look at the matrix dimensions and problem before deciding what to do (eg: the \ operator). 
Jon From arnd.baecker at web.de Wed Jun 28 05:16:09 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 28 Jun 2006 11:16:09 +0200 (CEST) Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A24408.9000305@esrf.fr> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> <44A24408.9000305@esrf.fr> Message-ID: Hi, On Wed, 28 Jun 2006, Jon Wright wrote: > > >>This strikes me as a little bit odd. Why not just provide the best-performing > >>function to both SciPy and NumPy? Would NumPy be more difficult to install > >>if the SciPy algorithm for inv() was incorporated? > >> > >> > Having spent a few days recently trying out various different > eigenvector routines in Lapack I would have greatly appreciated having a > choice of which one to use which routine are you trying to use? > from without having to create my own > wrappers, compiling atlas and lapack under windows (ouch). I noted that > Numeric (24.2) seemed to be converting Float32 to double meaning my > problem no longer fits in memory, which was the motivation for the work. > Poking around in the svn of numpy.linalg appears to find the same lapack > routine as Numeric (dsyevd). Perhaps I miss something in the code logic? if you can convince the code to get ssyevd instead of dsyevd it might do what you want. > The divide and conquer (*evd) uses more memory than the (*ev), as well > as a factor of 2 for float/double, hence my problem, and the reason why > "best performing" is a hard choice. I thought matlab has a look at the > matrix dimensions and problem before deciding what to do (eg: the \ > operator). Hmm, this is a hard choice, which might be better left in the hands of the knowledgeable user. (e.g., aren't the divide and conquer routines substantially faster?)
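The single-precision route suggested above can be sketched like this (whether the wrapper actually dispatches to ssyevd rather than dsyevd depends on the build; the matrix here is random data for illustration):

```python
import numpy as np

# Keep the data in single precision so the single-precision LAPACK
# symmetric-eigensolver path can be used, halving the memory footprint
# relative to a double-precision matrix of the same size.
x = np.random.rand(50, 50).astype(np.float32)
a = (x + x.T) / 2            # symmetric float32 matrix
w, v = np.linalg.eigh(a)

print(w.dtype)  # stays float32 when no silent upcast to double occurs
```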
Best, Arnd From jensj at fysik.dtu.dk Wed Jun 28 06:44:05 2006 From: jensj at fysik.dtu.dk (=?ISO-8859-1?Q?Jens_J=F8rgen_Mortensen?=) Date: Wed, 28 Jun 2006 12:44:05 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: Message-ID: <44A25D75.8060402@servfys.fysik.dtu.dk> Dennis V. Perepelitsa wrote: >Hi, all. > >I've run some benchmarks comparing the performance of scipy, numpy, >Numeric and numarray vs. MATLAB. There's also the beginnings of a >benchmark framework included. The results are online at: > > http://web.mit.edu/jonas/www/bench/ > > It's a little hard to see the curves for small matrix size, N. How about plotting the time divided by the theoretical number of operations - which would be N^2 or N^3. Jens Jørgen From filip at ftv.pl Wed Jun 28 07:00:31 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Wed, 28 Jun 2006 13:00:31 +0200 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A25D75.8060402@servfys.fysik.dtu.dk> References: <44A25D75.8060402@servfys.fysik.dtu.dk> Message-ID: <1918158814.20060628130031@gmail.com> Jens wrote: > Dennis V. Perepelitsa wrote: >>Hi, all. >> >>I've run some benchmarks comparing the performance of scipy, numpy, >>Numeric and numarray vs. MATLAB. There's also the beginnings of a >>benchmark framework included. The results are online at: >> >> http://web.mit.edu/jonas/www/bench/ >> >> > It's a little hard to see the curves for small matrix size, N. How > about plotting the time divided by the theoretical number of operations > - which would be N^2 or N^3. Or use some logarithmic scale (one or both axes) where applicable.
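Jens's normalization is easy to apply to raw timings. The numbers below are invented purely to show the effect and are not measurements from the benchmark page; dividing by the theoretical N^3 operation count flattens the curve and exposes the fixed per-call overhead at small N:

```python
# Hypothetical wall-clock times (seconds) for an O(N^3) routine such as
# inv() or eig() -- illustrative values only.
sizes = [10, 50, 100, 500, 1000]
times = [2e-5, 9e-4, 6.5e-3, 0.71, 5.6]

# Per-operation cost: time divided by the theoretical operation count.
per_op = [t / n**3 for t, n in zip(times, sizes)]
for n, c in zip(sizes, per_op):
    print(f"N={n:5d}  time/N^3 = {c:.2e} s")

# Small-N entries are dominated by call overhead, so the normalized cost
# starts high and settles toward the asymptotic per-operation cost.
assert per_op[0] > per_op[-1]
```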
fw From schut at sarvision.nl Wed Jun 28 10:03:55 2006 From: schut at sarvision.nl (Vincent Schut) Date: Wed, 28 Jun 2006 16:03:55 +0200 Subject: [Numpy-discussion] int64 wierdness In-Reply-To: <44A1B321.2030102@astraw.com> References: <44A18935.1090702@ieee.org> <44A1B321.2030102@astraw.com> Message-ID: <44A28C4B.5080300@sarvision.nl> Andrew Straw wrote: > An SVN checkout from a week or two ago looks OK on my amd64 machine: > > astraw at hdmg:~$ python > Python 2.4.3 (#2, Apr 27 2006, 14:43:32) > [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> numpy.__version__ > '0.9.9.2631' > >>> numpy.int64(9)**2 > 81 > >>> > Confirmed to be fixed on my gentoo amd64 machine, numpy svn of couple of days ago: >>> numpy.int64(9)**2 81 >>> numpy.__version__ '0.9.9.2665' Cheers, Vincent. > > EI wrote: > > >> numpy.__version__ says 0.9.8. >> >> Python 2.4.2, GCC 4.1, OpenSuSE 10.1 (x86_64). >> >> Thanks Travis, >> Eugene >> >> On 6/27/06, *Travis Oliphant* < oliphant.travis at ieee.org >> > wrote: >> >> EI wrote: >> > Hi, >> > >> > I'm running python 2.4 on a 64bit linux and get strange results: >> > (int(9))**2 is equal to 81, as it should, but >> > (int64(9))**2 is equal to 0 >> >> Thanks for the bug-report. Please provide the version of NumPy >> you are >> using so we can track it down, or suggest an upgrade. >> >> -Travis >> >> >> ------------------------------------------------------------------------ >> >> Using Tomcat but need to do more? Need to support web services, security? 
>> Get stuff done quickly with pre-integrated technology to make your job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> From Glen.Mabey at swri.org Wed Jun 28 11:44:11 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Wed, 28 Jun 2006 10:44:11 -0500 Subject: [Numpy-discussion] fread codes versus numpy types Message-ID: <20060628154411.GE13024@bams.swri.edu> Hello, I see the following character codes defined in scipy (presumably) for use with scipy.io.fread() : In [20]:scipy.Complex Out[20]:'D' In [21]:scipy.Complex0 Out[21]:'D' In [22]:scipy.Complex128 Out[22]:'G' In [23]:scipy.Complex16 Out[23]:'F' In [24]:scipy.Complex32 Out[24]:'F' In [25]:scipy.Complex64 Out[25]:'D' In [26]:scipy.Complex8 Out[26]:'F' Then I see the following scalar types also defined: In [27]:scipy.complex64 Out[27]: In [28]:scipy.complex128 Out[28]: In [29]:scipy.complex256 Out[29]: which correspond to types that exist within the numpy module.
These names seem to conflict in that (unless I misunderstand what's going on) scipy.complex64 actually occupies 64 bits of data (a 32-bit float for each of {real, imag}) whereas scipy.Complex64 looks like it occupies 128 bits of data (a 64-bit double for each of {real, imag}). Is there something I'm missing, or is this a naming inconsistency? Glen From stefan at sun.ac.za Wed Jun 28 12:24:02 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 28 Jun 2006 18:24:02 +0200 Subject: [Numpy-discussion] matlab -> python translation Message-ID: <20060628162402.GA6089@mentat.za.net> Hi all, I recently saw discussions on the list regarding Matlab/Octave to Python translation. I brought this to John Eaton's attention (he is the original author of Octave) -- below is his response. Regards Stéfan ----- Forwarded message from "John W. Eaton" ----- From: "John W. Eaton" On 21-Jun-2006, Stefan van der Walt wrote: | I'd like to bring this thread under your attention, in case you want | to comment: | | http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3174978 Would you please pass along the following comments? Translating the syntax might not be too hard, but to have a really effective tool, you have to get all the details of the Matlab/Octave function calls the same as well. So would you do that by linking to Octave's run-time libraries as well? That could probably be made to work, but it would probably drag in a lot more code than some people would expect when they just want to translate and run a relatively small number of lines of Matlab code. Another semantic detail that would likely cause trouble is the (apparent) pass-by-value semantics of Matlab. How would you reconcile this with the mutable types of Python? Finally, I would encourage anyone who wants to work on a Matlab/Octave to Python translator using Octave's parser and run-time libraries to work on this in a way that can be integrated with Octave.
Please consider discussing your ideas about this project on the maintainers at octave.org mailing list. Thanks, jwe ----- End forwarded message ----- From robert.kern at gmail.com Wed Jun 28 12:25:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 11:25:37 -0500 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: <20060628154411.GE13024@bams.swri.edu> References: <20060628154411.GE13024@bams.swri.edu> Message-ID: Glen W. Mabey wrote: > Hello, > > I see the following character codes defined in scipy (presumably) for > use with scipy.io.fread() : > > In [20]:scipy.Complex > Out[20]:'D' > > In [21]:scipy.Complex0 > Out[21]:'D' > > In [22]:scipy.Complex128 > Out[22]:'G' > > In [23]:scipy.Complex16 > Out[23]:'F' > > In [24]:scipy.Complex32 > Out[24]:'F' > > In [25]:scipy.Complex64 > Out[25]:'D' > > In [26]:scipy.Complex8 > Out[26]:'F' > > Then I see the following scalar types also defined: > > In [27]:scipy.complex64 > Out[27]: > > In [28]:scipy.complex128 > Out[28]: > > In [29]:scipy.complex256 > Out[29]: > > which correspond to types that exist within the numpy module. These > names seem to conflict in that (unless I misunderstand what's going on) > scipy.complex64 actually occupies 64 bits of data (a 32-bit float for > each of {real, imag}) whereas scipy.Complex64 looks like it occupies 128 > bits of data (a 64-bit double for each of {real, imag}). > > Is there something I'm missing, or is this a naming inconsistency? The Capitalized versions are actually old typecodes for backwards compatibility with Numeric. In recent development versions of numpy, they are no longer exposed except through the numpy.oldnumeric compatibility module. A decision was made for numpy to use the actual width of a type in its name instead of the width of its component parts (when it has parts). Code in scipy which still requires actual string typecodes is a bug.
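The width convention Robert describes can be checked directly in any recent NumPy: the number in a type name is the total width in bits, and the one-character typecodes the old Numeric names mapped to still live on the dtypes:

```python
import numpy as np

# complex64 is two 32-bit floats: 8 bytes (64 bits) per element in total.
assert np.dtype(np.complex64).itemsize == 8
# complex128 is two 64-bit doubles: 16 bytes (128 bits) per element.
assert np.dtype(np.complex128).itemsize == 16

# Numeric-era typecodes: 'F' is single complex, 'D' is double complex,
# so old Complex64 ('D') corresponds to NumPy's complex128.
assert np.dtype(np.complex64).char == "F"
assert np.dtype(np.complex128).char == "D"
print("all width checks passed")
```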
Please report such cases on the Trac: http://projects.scipy.org/scipy/scipy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Wed Jun 28 12:42:42 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 28 Jun 2006 09:42:42 -0700 Subject: [Numpy-discussion] what happened to numarray type names ? In-Reply-To: <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> References: <20060620202230.07c3ae56.simon@arrowtheory.com> <20060620103815.GA23025@mentat.za.net> <331116dc0606201800v1fab5d01o1cf6d21377ef99ca@mail.gmail.com> <20060621020020.GA6459@arbutus.physics.mcmaster.ca> <331116dc0606201930h54c75df9y5538c1c3c6cf36c@mail.gmail.com> Message-ID: <44A2B182.3040704@noaa.gov> Erin Sheldon wrote: > OK, I have changed all the examples that used dtype=Float or > dtype=Int to float and int. They are also available as: numpy.float_ numpy.int_ -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From fperez.net at gmail.com Wed Jun 28 13:22:38 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 11:22:38 -0600 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: References: <20060628154411.GE13024@bams.swri.edu> Message-ID: On 6/28/06, Robert Kern wrote: > The Capitalized versions are actually old typecodes for backwards compatibility > with Numeric. In recent development versions of numpy, they are no longer > exposed except through the numpy.oldnumeric compatibility module. A decision was > made for numpy to use the actual width of a type in its name instead of the > width of its component parts (when it has parts). > > Code in scipy which still requires actual string typecodes is a bug. 
Please > report such cases on the Trac: > > http://projects.scipy.org/scipy/scipy Well, an easy way to make all those poke their ugly heads in a hurry would be to remove line 32 in scipy's init: longs[Lib]> grep -n oldnum *py __init__.py:31:import numpy.oldnumeric as _num __init__.py:32:from numpy.oldnumeric import * If we really want to push for the new api, I think it's fair to change those two lines by simply from numpy import oldnumeric so that scipy also exposes oldnumeric, and let all deprecated names be hidden there. I just tried this change: Index: __init__.py =================================================================== --- __init__.py (revision 2012) +++ __init__.py (working copy) @@ -29,9 +29,8 @@ # Import numpy symbols to scipy name space import numpy.oldnumeric as _num -from numpy.oldnumeric import * -del lib -del linalg +from numpy import oldnumeric + __all__ += _num.__all__ __doc__ += """ Contents and scipy's test suite still passes (modulo the test_cobyla thingie Nils is currently fixing, which is not related to this). Should I apply this patch, so we push the cleaned-up API even a bit harder? Cheers, f From kwgoodman at gmail.com Wed Jun 28 13:26:03 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 10:26:03 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 Message-ID: >> x = asmatrix(rand(3,2)) >> y = asmatrix(rand(3,1)) >> y matrix([[ 0.77952062], [ 0.97110465], [ 0.77450218]]) >> idx = where(y > 0.5)[0] >> idx matrix([[0, 1, 2]]) >> x[idx,:] matrix([[ 0.24837887, 0.52988253], [ 0.28661085, 0.43053076], [ 0.05360893, 0.22668509]]) So far everything works as it should. 
Now the problem: >> y[idx,:] --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in __getitem__(self, index) 120 121 def __getitem__(self, index): --> 122 out = N.ndarray.__getitem__(self, index) 123 # Need to swap if slice is on first index 124 retscal = False /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in __array_finalize__(self, obj) 116 self.shape = (1,1) 117 elif ndim == 1: --> 118 self.shape = (1,self.shape[0]) 119 return 120 ValueError: total size of new array must be unchanged And, on a related note, shouldn't this be a column vector? >> x[idx,0] matrix([[ 0.24837887, 0.28661085, 0.05360893]]) From pau.gargallo at gmail.com Wed Jun 28 13:40:35 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 28 Jun 2006 19:40:35 +0200 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: Message-ID: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> i don't know why 'where' is returning matrices. if you use: >>> idx = where(y.A > 0.5)[0] everything will work fine (I guess) pau On 6/28/06, Keith Goodman wrote: > >> x = asmatrix(rand(3,2)) > > >> y = asmatrix(rand(3,1)) > > >> y > > matrix([[ 0.77952062], > [ 0.97110465], > [ 0.77450218]]) > > >> idx = where(y > 0.5)[0] > > >> idx > matrix([[0, 1, 2]]) > > >> x[idx,:] > > matrix([[ 0.24837887, 0.52988253], > [ 0.28661085, 0.43053076], > [ 0.05360893, 0.22668509]]) > > So far everything works as it should. 
Now the problem: > > >> y[idx,:] > --------------------------------------------------------------------------- > exceptions.ValueError Traceback (most > recent call last) > > /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in > __getitem__(self, index) > 120 > 121 def __getitem__(self, index): > --> 122 out = N.ndarray.__getitem__(self, index) > 123 # Need to swap if slice is on first index > 124 retscal = False > > /usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in > __array_finalize__(self, obj) > 116 self.shape = (1,1) > 117 elif ndim == 1: > --> 118 self.shape = (1,self.shape[0]) > 119 return > 120 > > ValueError: total size of new array must be unchanged > > > And, on a related note, shouldn't this be a column vector? > > >> x[idx,0] > matrix([[ 0.24837887, 0.28661085, 0.05360893]]) > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From fperez.net at gmail.com Wed Jun 28 13:51:45 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 11:51:45 -0600 Subject: [Numpy-discussion] Setuptools leftover junk Message-ID: Hi all, I recently noticed one of my in-house projects started leaving aroun .egg-info directories after I ran its setup.py, even though I don't use setuptools for anything at all. For now I just added an extra clean rule to my makefile and forgot about it, but it kind of annoyed me. 
Today I looked at the temp directory where I've been making my numpy/scipy installs from SVN, and here's what I saw: longs[site-packages]> d /home/fperez/tmp/local/lib/python2.4/site-packages total 228 drwxr-xr-x 2 fperez 4096 2006-06-21 22:16 dateutil/ drwxr-xr-x 7 fperez 4096 2006-06-28 02:50 matplotlib/ drwxr-xr-x 13 fperez 4096 2006-06-28 02:38 numpy/ drwxr-xr-x 2 fperez 4096 2006-06-21 21:28 numpy-0.9.9.2660-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-22 21:29 numpy-0.9.9.2665-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 11:33 numpy-0.9.9.2674-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 15:08 numpy-0.9.9.2675-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-25 12:40 numpy-0.9.9.2677-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-26 23:32 numpy-0.9.9.2691-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-28 02:38 numpy-0.9.9.2696-py2.4.egg-info/ -rw-r--r-- 1 fperez 31 2006-03-18 20:11 pylab.py -rw-r--r-- 1 fperez 178 2006-06-24 13:29 pylab.pyc drwxr-xr-x 20 fperez 4096 2006-06-28 11:20 scipy/ drwxr-xr-x 2 fperez 4096 2006-06-21 21:36 scipy-0.5.0.1990-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-22 21:36 scipy-0.5.0.1998-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-24 15:15 scipy-0.5.0.1999-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-25 12:46 scipy-0.5.0.2000-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-26 23:37 scipy-0.5.0.2004-py2.4.egg-info/ drwxr-xr-x 2 fperez 4096 2006-06-28 02:48 scipy-0.5.0.2012-py2.4.egg-info/ Is it really necessary to have all that setuptools junk left around, for those of us who aren't asking for it explicitly? My personal opinions on setuptools aside, I think it's just a sane practice not to create this kind of extra baggage unless explicitly requested. I scoured my home directory for any .file which might be triggering this inadvertently, but I can't seem to find any, so I'm going to guess this is somehow being caused by numpy's own setup.
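A quick way to audit a site-packages directory for this metadata is to list directories by name. The sketch below runs in a scratch directory seeded with one fabricated entry; point `target` at a real install prefix to use it for real:

```python
import os
import shutil
import tempfile

# Scratch stand-in for site-packages, seeded with one fake leftover.
target = tempfile.mkdtemp()
os.makedirs(os.path.join(target, "numpy-0.9.9.2696-py2.4.egg-info"))
os.makedirs(os.path.join(target, "numpy"))

# The audit itself: top-level directories whose names end in .egg-info.
leftovers = sorted(d for d in os.listdir(target) if d.endswith(".egg-info"))
print(leftovers)

shutil.rmtree(target)  # clean up the scratch directory
```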
If it's my own mistake, I'll be happy to be shown how to coexist peacefully with setuptools. Since this also affects user code (I think via f2py or something internal to numpy, since all I'm calling is f2py in my code), I really think it would be nice to clean it. Opinions? f From kwgoodman at gmail.com Wed Jun 28 14:04:09 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 11:04:09 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: On 6/28/06, Pau Gargallo wrote: > i don't know why 'where' is returning matrices. > if you use: > > >>> idx = where(y.A > 0.5)[0] > > everything will work fine (I guess) What about the second issue? Is this expected behavior? >> idx array([0, 1, 2]) >> y matrix([[ 0.63731308], [ 0.34282663], [ 0.53366791]]) >> y[idx] matrix([[ 0.63731308], [ 0.34282663], [ 0.53366791]]) >> y[idx,0] matrix([[ 0.63731308, 0.34282663, 0.53366791]]) I was expecting a column vector. From pau.gargallo at gmail.com Wed Jun 28 14:25:14 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 28 Jun 2006 20:25:14 +0200 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: <6ef8f3380606281125sd8ba54ci5f71d67fd24b7246@mail.gmail.com> On 6/28/06, Keith Goodman wrote: > On 6/28/06, Pau Gargallo wrote: > > i don't know why 'where' is returning matrices. > > if you use: > > > > >>> idx = where(y.A > 0.5)[0] > > > > everything will work fine (I guess) > > What about the second issue? Is this expected behavior? 
> > >> idx > array([0, 1, 2]) > > >> y > > matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > >> y[idx] > > matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > >> y[idx,0] > matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > > I was expecting a column vector. > I have never played with matrices, but if y were an array, y[idx,0] would be an array of the same shape as idx. That is a 1d array. I guess that when y is a matrix, this 1d array is converted to a matrix and becomes a row vector. I don't know if this behaviour is wanted :-( cheers, pau From robert.kern at gmail.com Wed Jun 28 14:32:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Jun 2006 13:32:15 -0500 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: Fernando Perez wrote: > Is it really necessary to have all that setuptools junk left around, > for those of us who aren't asking for it explicitly? My personal > opinions on setuptools aside, I think it's just a sane practice not to > create this kind of extra baggage unless explicitly requested. > > I scoured my home directory for any .file which might be triggering > this inadvertently, but I can't seem to find any, so I'm going to guess > this is somehow being caused by numpy's own setup. If it's my own > mistake, I'll be happy to be shown how to coexist peacefully with > setuptools. > > Since this also affects user code (I think via f2py or something > internal to numpy, since all I'm calling is f2py in my code), I really > think it would be nice to clean it. numpy.distutils uses setuptools if it is importable in order to make sure that the two don't stomp on each other. That test could probably be done with Andrew Straw's method: if 'setuptools' in sys.modules: have_setuptools = True from setuptools import setup as old_setup else: have_setuptools = False from distutils.core import setup as old_setup Tested patches welcome.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Wed Jun 28 14:42:04 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:42:04 -0400 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <44A24408.9000305@esrf.fr> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> <44A24408.9000305@esrf.fr> Message-ID: <20060628144204.382a1678@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 10:55:36 +0200 Jon Wright wrote: > Poking around in the svn of numpy.linalg appears to find the same lapack > routine as Numeric (dsyevd). Perhaps I miss something in the code logic? It's actually *exactly* the same as the latest Numeric :-) It hasn't been touched much. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Wed Jun 28 14:47:32 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Jun 2006 12:47:32 -0600 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> Message-ID: <44A2CEC4.1050706@ee.byu.edu> Keith Goodman wrote: >On 6/28/06, Pau Gargallo wrote: > > >>i don't know why 'where' is returning matrices. >>if you use: >> >> >> >>>>>idx = where(y.A > 0.5)[0] >>>>> >>>>> >>everything will work fine (I guess) >> >> > >What about the second issue? Is this expected behavior? > > > >>>idx >>> >>> >array([0, 1, 2]) > > > >>>y >>> >>> > >matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > > >>>y[idx] >>> >>> > >matrix([[ 0.63731308], > [ 0.34282663], > [ 0.53366791]]) > > > >>>y[idx,0] >>> >>> >matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > >I was expecting a column vector. 
> > > This should be better behaved now in SVN. Thanks for the reports. -Travis From cookedm at physics.mcmaster.ca Wed Jun 28 14:48:31 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:48:31 -0400 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <20060628144831.474c8059@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 03:22:28 -0500 Robert Kern wrote: > joris at ster.kuleuven.ac.be wrote: > > Hi, > > > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > > [TO]: > > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > > [TO]: NumPy wrapper). > > [TO]: > > [TO]: The numpy.dual library exists so you can use the SciPy calls if > > the [TO]: person has SciPy installed or the NumPy ones otherwise. It > > exists [TO]: precisely for the purpose of seamlessly taking advantage of > > [TO]: algorithms/interfaces that exist in NumPy but are improved in > > SciPy. > > > > This strikes me as a little bit odd. Why not just provide the > > best-performing function to both SciPy and NumPy? Would NumPy be more > > difficult to install if the SciPy algorithm for inv() was incorporated? > > That's certainly the case for the FFT algorithms. Scipy wraps more (and > more complicated) FFT libraries that are faster than FFTPACK. > > Most of the linalg functionality should probably be wrapping the same > routines if an optimized LAPACK is available. However, changing the routine > used in numpy in the absence of an optimized LAPACK would require > reconstructing the f2c'ed lapack_lite library that we include with the > numpy source. That hasn't been touched in so long that I would hesitate to > do so. If you are willing to do the work and the testing to ensure that it > still works everywhere, we'd probably accept the change. Annoying to redo (as tracking down *good* LAPACK sources is a chore), but hardly as bad as it was. 
I added the scripts I used to generate lapack_lite.c et al to numpy/linalg/lapack_lite in svn. These are the same things that were used to generate those files in recent versions of Numeric (which numpy uses). You only need to specify the top-level routines; the scripts find the dependencies. I'd suggest using the source for LAPACK that Debian uses; the maintainer, Camm Maguire, has done a bunch of work adding patches to fix routines that have been floating around. For instance, eigenvalues works better than before (a lot fewer segfaults). With this, the hard part is writing the wrapper routines. If someone wants to wrap extra routines, I can do the lapack_lite generation for them. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:00:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:00:05 -0600 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> References: <20060628154411.GE13024@bams.swri.edu> <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > On Wed, 28 Jun 2006 11:22:38 -0600 > "Fernando Perez" wrote: > > Should I apply this patch, so we push the cleaned-up API even a bit harder? > > Yes please. I think all the modules that still use the oldnumeric names > actually import numpy.oldnumeric themselves. Done, r2017.
I also committed the simple one-liner: Index: weave/inline_tools.py =================================================================== --- weave/inline_tools.py (revision 2016) +++ weave/inline_tools.py (working copy) @@ -402,7 +402,7 @@ def compile_function(code,arg_names,local_dict,global_dict, module_dir, compiler='', - verbose = 0, + verbose = 1, support_code = None, headers = [], customize = None, from a discussion we had a few weeks ago, I'd forgotten to put it in. I did it as a separate patch (r 2018) so it can be reverted separately if anyone objects. Cheers, f From cookedm at physics.mcmaster.ca Wed Jun 28 15:10:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 15:10:40 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:32:15 -0500 Robert Kern wrote: > Fernando Perez wrote: > > > Is it really necessary to have all that setuptools junk left around, > > for those of us who aren't asking for it explicitly? My personal > > opinions on setuptools aside, I think it's just a sane practice not to > > create this kind of extra baggage unless explicitly requested. > > > > I scoured my home directory for any .file which might be triggering > > this inadvertedly, but I can't seem to find any, so I'm going to guess > > this is somehow being caused by numpy's own setup. If it's my own > > mistake, I'll be happy to be shown how to coexist peacefully with > > setuptools. > > > > Since this also affects user code (I think via f2py or something > > internal to numpy, since all I'm calling is f2py in my code), I really > > think it would be nice to clean it. > > numpy.distutils uses setuptools if it is importable in order to make sure > that the two don't stomp on each other. 
It's probable that that test could > probably be done with Andrew Straw's method: > > if 'setuptools' in sys.modules: > have_setuptools = True > from setuptools import setup as old_setup > else: > have_setuptools = False > from distutils.core import setup as old_setup > > Tested patches welcome. Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' with an import of setuptools (it's based on the one used in matplotlib). easy_install still works, also. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:11:36 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:11:36 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: Message-ID: On 6/28/06, Robert Kern wrote: > numpy.distutils uses setuptools if it is importable in order to make sure that > the two don't stomp on each other. It's probable that that test could probably > be done with Andrew Straw's method: > > if 'setuptools' in sys.modules: > have_setuptools = True > from setuptools import setup as old_setup > else: > have_setuptools = False > from distutils.core import setup as old_setup > > Tested patches welcome. Well, tested as in 'I wrote a unittest for installation', no. But tested as in 'I built numpy, scipy, matplotlib, and my f2py-using code', yes. They all build/install fine, and no more *egg-info directories are strewn around. 
If this satisfies your 'tested patches', the code is: Index: numpy/distutils/core.py =================================================================== --- numpy/distutils/core.py (revision 2698) +++ numpy/distutils/core.py (working copy) @@ -1,16 +1,30 @@ - import sys from distutils.core import * -try: - from setuptools import setup as old_setup - # very old setuptools don't have this - from setuptools.command import bdist_egg - # easy_install imports math, it may be picked up from cwd - from setuptools.command import develop, easy_install - have_setuptools = 1 -except ImportError: + +# Don't pull setuptools in unless the user explicitly requests by having it +# imported (Andrew's trick). +have_setuptools = 'setuptools' in sys.modules + +# Even if setuptools is in, do a few things carefully to make sure the version +# is recent enough to have everything we need before assuming we can proceed +# using setuptools throughout +if have_setuptools: + try: + from setuptools import setup as old_setup + # very old setuptools don't have this + from setuptools.command import bdist_egg + # easy_install imports math, it may be picked up from cwd + from setuptools.command import develop, easy_install + except ImportError: + # Any failure here is probably due to an old or broken setuptools + # leftover in sys.modules, so treat it as if it simply weren't + # available. + have_setuptools = False + +# If setuptools was flagged as unavailable due to import problems, we need the +# basic distutils support +if not have_setuptools: from distutils.core import setup as old_setup - have_setuptools = 0 from numpy.distutils.extension import Extension from numpy.distutils.command import config May I? keeping-the-world-setuptools-free-one-script-at-a-time-ly yours, f From cookedm at physics.mcmaster.ca Wed Jun 28 14:53:56 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 28 Jun 2006 14:53:56 -0400 Subject: [Numpy-discussion] fread codes versus numpy types In-Reply-To: References: <20060628154411.GE13024@bams.swri.edu> Message-ID: <20060628145356.7946a3e0@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 11:22:38 -0600 "Fernando Perez" wrote: > On 6/28/06, Robert Kern wrote: > > > The Capitalized versions are actually old typecodes for backwards > > compatibility with Numeric. In recent development versions of numpy, they > > are no longer exposed except through the numpy.oldnumeric compatibility > > module. A decision was made for numpy to use the actual width of a type > > in its name instead of the width of its component parts (when it has > > parts). > > > > Code in scipy which still requires actual string typecodes is a bug. > > Please report such cases on the Trac: > > > > http://projects.scipy.org/scipy/scipy > > Well, an easy way to make all those poke their ugly heads in a hurry > would be to remove line 32 in scipy's init: > > longs[Lib]> grep -n oldnum *py > __init__.py:31:import numpy.oldnumeric as _num > __init__.py:32:from numpy.oldnumeric import * > > > If we really want to push for the new api, I think it's fair to change > those two lines by simply > > from numpy import oldnumeric > > so that scipy also exposes oldnumeric, and let all deprecated names be > hidden there. > > I just tried this change: > > Index: __init__.py > =================================================================== > --- __init__.py (revision 2012) > +++ __init__.py (working copy) > @@ -29,9 +29,8 @@ > > # Import numpy symbols to scipy name space > import numpy.oldnumeric as _num > -from numpy.oldnumeric import * > -del lib > -del linalg > +from numpy import oldnumeric > + > __all__ += _num.__all__ > __doc__ += """ > Contents > > > and scipy's test suite still passes (modulo the test_cobyla thingie > Nils is currently fixing, which is not related to this). 
> > Should I apply this patch, so we push the cleaned-up API even a bit harder? Yes please. I think all the modules that still use the oldnumeric names actually import numpy.oldnumeric themselves. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:18:35 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:18:35 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > with an import of setuptools (it's based on the one used in matplotlib). > > easy_install still works, also. You beat me to it :) However, your patch has slightly different semantics from mine: if bdist_egg fails to import, the rest of setuptools is still used. I don't know if that's safe. My patch would consider /any/ failure in the setuptools imports as a complete setuptools failure, and revert out to basic distutils. Let me know if you want me to put in my code instead, here's a patch from my code against current svn (after your patch), in case you'd like to try it out. Cheers, f Index: core.py =================================================================== --- core.py (revision 2701) +++ core.py (working copy) @@ -1,20 +1,30 @@ - import sys from distutils.core import * -if 'setuptools' in sys.modules: - have_setuptools = True - from setuptools import setup as old_setup - # easy_install imports math, it may be picked up from cwd - from setuptools.command import develop, easy_install +# Don't pull setuptools in unless the user explicitly requests by having it +# imported (Andrew's trick). 
+have_setuptools = 'setuptools' in sys.modules + +# Even if setuptools is in, do a few things carefully to make sure the version +# is recent enough to have everything we need before assuming we can proceed +# using setuptools throughout +if have_setuptools: try: - # very old versions of setuptools don't have this + from setuptools import setup as old_setup + # very old setuptools don't have this from setuptools.command import bdist_egg + # easy_install imports math, it may be picked up from cwd + from setuptools.command import develop, easy_install except ImportError: + # Any failure here is probably due to an old or broken setuptools + # leftover in sys.modules, so treat it as if it simply weren't + # available. have_setuptools = False -else: + +# If setuptools was flagged as unavailable due to import problems, we need the +# basic distutils support +if not have_setuptools: from distutils.core import setup as old_setup - have_setuptools = False from numpy.distutils.extension import Extension from numpy.distutils.command import config From oliphant at ee.byu.edu Wed Jun 28 14:52:34 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Jun 2006 12:52:34 -0600 Subject: [Numpy-discussion] Numpy Benchmarking In-Reply-To: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> References: <1151482481.44a23a71115e0@webmail.ster.kuleuven.be> Message-ID: <44A2CFF2.7030201@ee.byu.edu> joris at ster.kuleuven.ac.be wrote: >Hi, > > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms. > [TO]: > [TO]: SciPy uses it's own f2py-generated wrapper (it doesn't rely on the > [TO]: NumPy wrapper). > [TO]: > [TO]: The numpy.dual library exists so you can use the SciPy calls if the > [TO]: person has SciPy installed or the NumPy ones otherwise. It exists > [TO]: precisely for the purpose of seamlessly taking advantage of > [TO]: algorithms/interfaces that exist in NumPy but are improved in SciPy. > >This strikes me as a little bit odd. 
Why not just provide the best-performing >function to both SciPy and NumPy? Would NumPy be more difficult to install >if the SciPy algorithm for inv() was incorporated? > > The main issue is that SciPy can take advantage and use Fortran code, but NumPy cannot as it must build without a Fortran compiler. This is the primary driver to the current duality. -Travis From kwgoodman at gmail.com Wed Jun 28 15:23:36 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 12:23:36 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <44A2CEC4.1050706@ee.byu.edu> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> <44A2CEC4.1050706@ee.byu.edu> Message-ID: On 6/28/06, Travis Oliphant wrote: > Keith Goodman wrote: > > >On 6/28/06, Pau Gargallo wrote: > > > > > >>i don't know why 'where' is returning matrices. > >>if you use: > >> > >> > >> > >>>>>idx = where(y.A > 0.5)[0] > >>>>> > >>>>> > >>everything will work fine (I guess) > >> > >> > > > >What about the second issue? Is this expected behavior? > > > > > > > >>>idx > >>> > >>> > >array([0, 1, 2]) > > > > > > > >>>y > >>> > >>> > > > >matrix([[ 0.63731308], > > [ 0.34282663], > > [ 0.53366791]]) > > > > > > > >>>y[idx] > >>> > >>> > > > >matrix([[ 0.63731308], > > [ 0.34282663], > > [ 0.53366791]]) > > > > > > > >>>y[idx,0] > >>> > >>> > >matrix([[ 0.63731308, 0.34282663, 0.53366791]]) > > > >I was expecting a column vector. > > > > > > > This should be better behaved now in SVN. Thanks for the reports. Now numpy can do y[y > 0.5] instead of y[where(y.A > 0.5)[0]] where, for example, y = asmatrix(rand(3,1)). I know I'm pushing my luck here. But one more feature would make this perfect. Currently y[y>0.5,:] returns the first column even if y has more than one column. Returning all columns would make it perfect. 
Example: >> y matrix([[ 0.38828902, 0.91649964], [ 0.41074001, 0.7105919 ], [ 0.15460833, 0.16746956]]) >> y[y[:,1]>0.5,:] matrix([[ 0.38828902], [ 0.41074001]]) A better answer for matrix users would be: >> y[(0,1),:] matrix([[ 0.38828902, 0.91649964], [ 0.41074001, 0.7105919 ]]) From cookedm at physics.mcmaster.ca Wed Jun 28 15:37:34 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 15:37:34 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> Message-ID: <20060628153734.7597800c@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:18:35 -0600 "Fernando Perez" wrote: > On 6/28/06, David M. Cooke wrote: > > > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > > with an import of setuptools (it's based on the one used in matplotlib). > > > > easy_install still works, also. > > You beat me to it :) > > However, your patch has slightly different semantics from mine: if > bdist_egg fails to import, the rest of setuptools is still used. I > don't know if that's safe. My patch would consider /any/ failure in > the setuptools imports as a complete setuptools failure, and revert > out to basic distutils. Note that your patch will still import setuptools if the import of bdist_egg fails. And you can't get around that by putting the bdist_egg import first, as that imports setuptools first. (I think bdist_egg was added sometime after 0.5; if your version of setuptools is *that* old, you'd be better off not having it installed.) The use of setuptools by numpy.distutils is in two forms: explicitly (controlled by this patch of code), and implicitly (because setuptools goes and patches distutils). Disabling the explicit use won't actually fix your problem with the 'install' command leaving .egg_info directories (which, incidentally, are pretty small), as that's done by the implicit behaviour. [Really, distutils sucks. 
I think (besides refactoring) it needs its API documented better, or at least good conventions on where to hook into. setuptools and numpy.distutils do their best, but there's only so much you can do before everything goes fragile and breaks in unexpected ways.] With the "if 'setuptools' in sys.modules" test, if you *are* using setuptools, you must have explicitly requested that, and so I think a failure on import of setuptools shouldn't be silently passed over. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Wed Jun 28 15:46:07 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 13:46:07 -0600 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060628153734.7597800c@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> Message-ID: On 6/28/06, David M. Cooke wrote: > On Wed, 28 Jun 2006 13:18:35 -0600 > "Fernando Perez" wrote: > > > On 6/28/06, David M. Cooke wrote: > > > > > Done. I've also added a 'setupegg.py' module that wraps running 'setup.py' > > > with an import of setuptools (it's based on the one used in matplotlib). > > > > > > easy_install still works, also. > > > > You beat me to it :) > > > > However, your patch has slightly different semantics from mine: if > > bdist_egg fails to import, the rest of setuptools is still used. I > > don't know if that's safe. My patch would consider /any/ failure in > > the setuptools imports as a complete setuptools failure, and revert > > out to basic distutils. > > Note that your patch will still import setuptools if the import of bdist_egg > fails. And you can't get around that by putting the bdist_egg import first, > as that imports setuptools first.
Well, but that's still done after the 'if "setuptools" in sys.modules' check, just like yours. The only difference is that my patch treats a later failure as a complete failure, and reverts out to old_setup being pulled out of plain distutils. > (I think bdist_egg was added sometime after 0.5; if your version of > setuptools is *that* old, you'd be better off not having it installed.) Then it's probably fine to leave it either way, as /in practice/ the two approaches will produce the same results. > The use of setuptools by numpy.distutils is in two forms: explicitly > (controlled by this patch of code), and implicitly (because setuptools goes > and patches distutils). Disabling the explicit use won't actually fix your > problem with the 'install' command leaving .egg_info directories (which, > incidentally, are pretty small), as that's done by the implicit behaviour. It's not their size that matters, it's just that I don't like tools littering around with stuff I didn't ask for. Yes, I like my code directories tidy ;) > [Really, distutils sucks. I think (besides refactoring) it needs it's API > documented better, or least good conventions on where to hook into. > setuptools and numpy.distutils do their best, but there's only so much you > can do before everything goes fragile and breaks in unexpected ways.] I do hate distutils, having fought it for a long time. Its piss-poor dependency checking is one of its /many/ annoyances. For a package with as long a compile time as scipy, it really sucks not to be able to just modify random source files and trust that it will really recompile what's needed (no more, no less). Anyway, thanks for heeding this one. Hopefully one day somebody will do the (painful) work of replacing distutils with something that actually works (perhaps using scons for the build engine...) 
Until then, we'll trod along with massively unnecessary rebuilds :) Cheers, f From kwgoodman at gmail.com Wed Jun 28 14:55:31 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 28 Jun 2006 11:55:31 -0700 Subject: [Numpy-discussion] indexing bug in numpy r2694 In-Reply-To: <44A2CEC4.1050706@ee.byu.edu> References: <6ef8f3380606281040x59d0ab2dv519b26841accd84a@mail.gmail.com> <44A2CEC4.1050706@ee.byu.edu> Message-ID: On 6/28/06, Travis Oliphant wrote: > This should be better behaved now in SVN. Thanks for the reports. I'm impressed by how quickly features are added and bugs are fixed. And by how quick it is to install a new version of numpy. Thank you. From myeates at jpl.nasa.gov Wed Jun 28 16:15:04 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Wed, 28 Jun 2006 13:15:04 -0700 Subject: [Numpy-discussion] matlab translation In-Reply-To: References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> Message-ID: <44A2E348.3040604@jpl.nasa.gov> I've been looking at a project called ANTLR (www.antlr.org) to do the translation. Unfortunately, although I may have a Matlab grammar, it would still be a lot of work to use ANTLR. I'll look at some of the links that have been posted. Mathew Robert Kern wrote: > Vinicius Lobosco wrote: > >> Let's just let those who want to try to do that and give our support? I >> would be happy if I could get some parts of my old matlab programs >> translated to Scipy. >> > > I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging > Mathew to work towards that end. Sheesh.
> > From erin.sheldon at gmail.com Wed Jun 28 17:15:53 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 28 Jun 2006 17:15:53 -0400 Subject: [Numpy-discussion] matlab translation In-Reply-To: <44A2E348.3040604@jpl.nasa.gov> References: <449C2B45.9030101@jpl.nasa.gov> <449C4D70.4080102@jpl.nasa.gov> <1e2b8b840606240156s25c022a7y3c07a4f5ef7b4660@mail.gmail.com> <44A2E348.3040604@jpl.nasa.gov> Message-ID: <331116dc0606281415s205f25fcmc90abba3b6d45a37@mail.gmail.com> ANTLR was also used for GDL http://gnudatalanguage.sourceforge.net/ with amazing results. Erin On 6/28/06, Mathew Yeates wrote: > I've been looking at a project called ANTLR (www.antlr.org) to do the > translation. Unfortunately, although I may have a Matlab grammar, it > would still be a lot of work to use ANTLR. I'll look at some of the > links that have posted. > > Mathew > > > Robert Kern wrote: > > Vinicius Lobosco wrote: > > > >> Let's just let those who want to try to do that and give our support? I > >> would be happy if I could some parts of my old matlab programs > >> translated to Scipy. > >> > > > > I do believe that, "Show me," is an *encouragement*. I am explicitly encouraging > > Mathew to work towards that end. Sheesh. > > > > > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From mfmorss at aep.com Thu Jun 29 09:16:38 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Thu, 29 Jun 2006 09:16:38 -0400 Subject: [Numpy-discussion] Should cholesky return upper or lowertriangularmatrix? In-Reply-To: Message-ID: The SAS IML Cholesky function "root" returns upper triangular. Quoting the SAS documentation: The ROOT function performs the Cholesky decomposition of a matrix (for example, A) such that U'U = A where U is upper triangular. The matrix A must be symmetric and positive definite. Mark F. Morss Principal Analyst, Market Risk American Electric Power "Keith Goodman" Sent by: numpy-discussion-bounces at lists.sourceforge.net 06/27/2006 11:25 PM To: "Robert Kern" cc: numpy-discussion at lists.sourceforge.net Subject: Re: [Numpy-discussion] Should cholesky return upper or lowertriangular matrix? On 6/27/06, Robert Kern wrote: > Keith Goodman wrote: > > Isn't the Cholesky decomposition by convention an upper triangular > > matrix? I noticed, by porting Octave code, that linalg.cholesky > > returns the lower triangular matrix. > > > > References: > > > > http://mathworld.wolfram.com/CholeskyDecomposition.html > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html > > Lower: > http://en.wikipedia.org/wiki/Cholesky_decomposition > http://www.math-linux.com/spip.php?article43 > http://planetmath.org/?op=getobj&from=objects&id=1287 > http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000 > http://www.riskglossary.com/link/cholesky_factorization.htm > http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf > > If anything, the convention appears to be lower-triangular. If you give me a second, I'll show you that the wikipedia supports my claim. OK. Lower it is. It will save me a transpose when I calculate joint random variables. From charlesr.harris at gmail.com Thu Jun 29 10:46:18 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 29 Jun 2006 08:46:18 -0600 Subject: [Numpy-discussion] Should cholesky return upper or lowertriangularmatrix? In-Reply-To: References: Message-ID: All, On 6/29/06, mfmorss at aep.com wrote: > > The SAS IML Cholesky function "root" returns upper triangular.
Quoting > the > SAS documentation: > > The ROOT function performs the Cholesky decomposition of a matrix (for > example, A) such that > U'U = A > where U is upper triangular. The matrix A must be symmetric and positive > definite. Does it matter whether the lower or upper triangular part is stored? We should just pick one convention and stick with it. That is simpler than, say, ATLAS where the choice is one of the parameters passed to the subroutine. I vote for lower triangular myself, if only because that was my choice last time I implemented a Cholesky factorization. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Glen.Mabey at swri.org Thu Jun 29 10:52:01 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Thu, 29 Jun 2006 09:52:01 -0500 Subject: [Numpy-discussion] explanation of 'order' parameter for reshape Message-ID: <20060629145201.GH13024@bams.swri.edu> Hello, It seems that the 'order' parameter is not explained neither in the docstring nor in "Guide to NumPy". I'm guessing that the alternative to the default value of 'C' would be 'Fortran'? Thanks, Glen From zhang.le.misc at gmail.com Thu Jun 29 10:57:57 2006 From: zhang.le.misc at gmail.com (Zhang Le) Date: Thu, 29 Jun 2006 15:57:57 +0100 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" Message-ID: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> Hi, I'm using 0.9.8 and find numpy.ndarray.min() is not exported to global space when doing a from numpy import * In [1]: from numpy import * In [2]: help min ------> help(min) Help on built-in function min in module __builtin__: min(...) min(sequence) -> value min(a, b, c, ...) -> value With a single sequence argument, return its smallest item. With two or more arguments, return the smallest argument. Also numpy.ndarray.max() is not available too. But the built-in sum() is replaced by numpy.ndarray.sum() as expected. 
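[Editorial note: the behaviour Zhang describes is easy to check with a modern numpy; what the 0.9.8-era star-import exported may differ, so this is a present-day sketch rather than a reproduction of the original report:]

```python
import numpy as np

a = np.array([3, 1, 2])

# min/max are available as ndarray methods, as discussed in the thread
assert a.min() == 1
assert a.max() == 3

# numpy ships its own sum(), which shadows the builtin after
# 'from numpy import *'; both agree on a 1-d integer array
assert np.sum(a) == 6
assert sum(a) == 6
```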
Is this a bug or just intended to do so and the user has to use numpy.ndarray.min() explicitly? Cheers, Zhang Le From skip at pobox.com Thu Jun 29 11:09:40 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 29 Jun 2006 10:09:40 -0500 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> Message-ID: <17571.60724.424201.464714@montanaro.dyndns.org> Zhang> I'm using 0.9.8 and find numpy.ndarray.min() is not exported to Zhang> global space when doing a Zhang> from numpy import * I'm going to take a wild-ass guess and suggest that was a conscious decision by the authors. Shadowing builtins is generally a no-no. You just need to be explicit instead of implicit: from numpy import min, max Skip From zhang.le.misc at gmail.com Thu Jun 29 11:23:28 2006 From: zhang.le.misc at gmail.com (Zhang Le) Date: Thu, 29 Jun 2006 16:23:28 +0100 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <17571.60724.424201.464714@montanaro.dyndns.org> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> Message-ID: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> > I'm going to take a wild-ass guess and suggest that was a conscious decision > by the authors. Shadowing builtins is generally a no-no. You just need to > be explicit instead of implicit: > > from numpy import min, max I see. But why is sum exported by default? Is that a wise decision? In [1]: from numpy import * In [2]: help sum ------> help(sum) Help on function sum in module numpy.core.oldnumeric: sum(x, axis=0, dtype=None) ...
Zhang Le From wright at esrf.fr Thu Jun 29 11:22:35 2006 From: wright at esrf.fr (Jon Wright) Date: Thu, 29 Jun 2006 17:22:35 +0200 Subject: [Numpy-discussion] Should cholesky return upper or In-Reply-To: References: Message-ID: <44A3F03B.2030204@esrf.fr> > Does it matter whether the lower or upper triangular part is stored? > We should just pick one convention and stick with it. That is simpler > than, say, ATLAS where the choice is one of the parameters passed to > the subroutine. I vote for lower triangular myself, if only because > that was my choice last time I implemented a Cholesky factorization. Wouldn't a keyword argument make more sense, there's a default, but you aren't denied access to ATLAS? It matters if you pass the factorisation to a legacy code which expects things to be a particular way around. Jon From jswhit at fastmail.fm Thu Jun 29 11:36:09 2006 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 29 Jun 2006 09:36:09 -0600 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> Message-ID: <44A3F369.1040409@fastmail.fm> Zhang Le wrote: >> I'm going to take a wild-ass guess and suggest that was a concious decision >> by the authors. Shadowing builtins is generally a no-no. You just need to >> be explicit instead of implicit: >> >> from numpy import min, max >> > I see. But why by default sum is exported? Is that a wise decision? > > In [1]: from numpy import * > > In [2]: help sum > ------> help(sum) > Help on function sum in module numpy.core.oldnumeric: > > sum(x, axis=0, dtype=None) > ... > > Zhang Le > > Zhang: The reason max and min are not imported by 'from numpy import *' is because there are no such functions in numpy. 
They are ndarray methods now (a.max(), a.min()); there are also maximum and minimum functions, which behave somewhat differently. There is still a sum function as you have discovered, and it will clobber the builtin. Another good reason not to use 'from numpy import *'. -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From joris at ster.kuleuven.be Thu Jun 29 11:41:11 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Thu, 29 Jun 2006 17:41:11 +0200 Subject: [Numpy-discussion] incorporating C/C++ code Message-ID: <200606291741.12035.joris@ster.kuleuven.be> Hi, For heavy number crunching I would like to include C and/or C++ functions in my NumPy programs. They should have/give NumPy arrays as input/output. On http://www.scipy.org/Topical_Software I find several suggestions to wrap C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me to have an idea which one I can/should use. So, a few questions: Any suggestion for which package I should use? Does this heavily depend on which purpose I want to use it for? Where can I find the docs for Weave? I find several links on the internet pointing to http://www.scipy.org/documentation/weave for more info, but there is nothing there anymore.
Thanks in advance, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From rob at hooft.net Thu Jun 29 12:25:59 2006 From: rob at hooft.net (Rob Hooft) Date: Thu, 29 Jun 2006 18:25:59 +0200 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <200606291741.12035.joris@ster.kuleuven.be> References: <200606291741.12035.joris@ster.kuleuven.be> Message-ID: <44A3FF17.4000402@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Joris De Ridder wrote: > Hi, > > For heavy number crunching I would like to include C and/or C++ functions > in my NumPy programs. They should have/give NumPy arrays as input/output. > On http://www.scipy.org/Topical_Software I find several suggestions to wrap > C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me > to have an idea which one I can/should use. > > So, a few questions: > > Any suggestion for which package I should use? Does this heavily depend > on which purpose I want to use it for? Wrapping C/C++ code is only necessary if the C/C++ code is pre-existing. I have thus far only incorporated C code into Numeric python programs by writing the code natively as a python extension. Any kind of wrapping will carry a penalty. If you write a python extension in C you have all the flexibility you need. Rob Hooft - -- Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEo/8XH7J/Cv8rb3QRAm40AJ0YoTy653HP0FWmRN4/zuTFruDwUwCfTgrV 4zfSl3GVT8mneL60zzr2zeY= =JQrM -----END PGP SIGNATURE----- From norishimi at gmail.com Thu Jun 29 13:03:47 2006 From: norishimi at gmail.com (N Shimizu) Date: Fri, 30 Jun 2006 02:03:47 +0900 Subject: [Numpy-discussion] trouble on tru64 Message-ID: Hi everyone, I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, but I encountered the compilation trouble. The error message is the following.
Do you have any suggestion? Thank you in advance. Shimizu.

numpy/core/src/umathmodule.c.src: In function 'nc_floor_quotl':
numpy/core/src/umathmodule.c.src:600: warning: implicit declaration of function 'floorl'
numpy/core/src/umathmodule.c.src:600: warning: incompatible implicit declaration of built-in function 'floorl'
....
numpy/core/src/umathmodule.c.src: In function 'LONGDOUBLE_floor_divide':
numpy/core/src/umathmodule.c.src:1050: warning: incompatible implicit declaration of built-in function 'floorl'
numpy/core/src/umathmodule.c.src: In function 'CLONGDOUBLE_absolute':
numpy/core/src/umathmodule.c.src:1319: warning: incompatible implicit declaration of built-in function 'sqrtl'
....
build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c: At top level:
build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c:15: error: 'acosl' undeclared here (not in a function)
build/src.osf1-V5.1-alpha-2.4/numpy/core/__umath_generated.c:18: error: 'acoshf' undeclared here (not in a function)
...

From oliphant.travis at ieee.org Thu Jun 29 13:28:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:28:05 -0600 Subject: [Numpy-discussion] trouble on tru64 In-Reply-To: References: Message-ID: <44A40DA5.1040805@ieee.org> N Shimizu wrote: > Hi everyone, > > I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, > > but I encountered the compilation trouble. > Thanks for the test. This looks like a configuration problem. Could you post the config.h file that is generated when you run python setup.py. It should be found in build/src.-/numpy/core/config.h I don't think we've got the right set of configurations going for that platform. Basically, we need to know if it has certain float and long versions of standard math functions (like floorf and floorl).
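[Editorial note: the configure step Travis describes is essentially a symbol probe. On a POSIX system the equivalent check can be mimicked from Python with ctypes; this is only a sketch of the idea, not numpy's actual configuration mechanism:]

```python
import ctypes

# dlopen(NULL): look up symbols already linked into this process.
# CPython on POSIX links against the C math library, so libm symbols
# are normally visible here.
proc = ctypes.CDLL(None)

for name in ('floor', 'floorf', 'floorl'):
    # attribute access performs a dlsym(); a missing symbol raises AttributeError
    print(name, 'found' if hasattr(proc, name) else 'missing')
```

A platform like the tru64 system above would report the `floorl`-style names missing, which is exactly what the config.h flags are meant to record.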
It looks like the configuration code detected that it didn't have these functions but then during compilation, the functions that NumPy created were already defined causing the error. If we can first get a valid config.h file for your platform, then we can figure out how to generate it during build time. -Travis From oliphant.travis at ieee.org Thu Jun 29 13:30:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:30:15 -0600 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> Message-ID: <44A40E27.7060103@ieee.org> Zhang Le wrote: >> I'm going to take a wild-ass guess and suggest that was a concious decision >> by the authors. Shadowing builtins is generally a no-no. You just need to >> be explicit instead of implicit: >> >> from numpy import min, max >> > I see. But why by default sum is exported? Is that a wise decision? > Well, Numeric had the sum function long before Python introduced one. NumPy adopted Numeric's sum function as well. -Travis From norishimi at gmail.com Thu Jun 29 13:46:51 2006 From: norishimi at gmail.com (N Shimizu) Date: Fri, 30 Jun 2006 02:46:51 +0900 Subject: [Numpy-discussion] trouble on tru64 In-Reply-To: <44A40DA5.1040805@ieee.org> References: <44A40DA5.1040805@ieee.org> Message-ID: Thank you for your reply. The "config.h" is the following. I hope it will be helpful. 
Shimizu /* #define SIZEOF_SHORT 2 */ /* #define SIZEOF_INT 4 */ /* #define SIZEOF_LONG 8 */ /* #define SIZEOF_FLOAT 4 */ /* #define SIZEOF_DOUBLE 8 */ #define SIZEOF_LONG_DOUBLE 16 #define SIZEOF_PY_INTPTR_T 8 /* #define SIZEOF_LONG_LONG 8 */ #define SIZEOF_PY_LONG_LONG 8 /* #define CHAR_BIT 8 */ #define MATHLIB m #define HAVE_LONGDOUBLE_FUNCS #define HAVE_FLOAT_FUNCS #define HAVE_LOG1P #define HAVE_EXPM1 #define HAVE_INVERSE_HYPERBOLIC #define HAVE_INVERSE_HYPERBOLIC_FLOAT #define HAVE_INVERSE_HYPERBOLIC_LONGDOUBLE #define HAVE_ISNAN #define HAVE_RINT 2006/6/30, Travis Oliphant : > N Shimizu wrote: > > Hi everyone, > > > > I tried to build numpy 0.9.8 on compaq alpha tru64 UNIX v5.1 with gcc 4.0.2, > > > > but I encounterd the compilation trouble. > > > > Thanks for the test. This looks like a configuration problem. > Could you post the config.h file that is generated when you run python > setup.py > > It should be found in > > build/src.-/numpy/core/config.h > > I don't think we've got the right set of configurations going for that > platform. Basically, we need to know if it has certain float and long > versions of standard math functions (like floorf and floorl). > > It looks like the configuration code detected that it didn't have these > functions but then during compilation, the functions that NumPy created > were already defined causing the error. > > If we can first get a valid config.h file for your platform, then we can > figure out how to generate it during build time. 
> > -Travis > > From oliphant.travis at ieee.org Thu Jun 29 13:48:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 11:48:21 -0600 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <200606291741.12035.joris@ster.kuleuven.be> References: <200606291741.12035.joris@ster.kuleuven.be> Message-ID: <44A41265.3070106@ieee.org> Joris De Ridder wrote: > Hi, > > For heavy number crunching I would like to include C and/or C++ functions > in my NumPy programs. They should have/give NumPy arrays as input/output. > On http://www.scipy.org/Topical_Software I find several suggestions to wrap > C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me > to have an idea which one I can/should use. > This is my personal preference order: 1) If you can write Fortran code --- do it and use f2py 2) If you have well-encapsulated functions to call then use either weave or ctypes (both are very nice). 3) PyRex is a great option for writing a custom extension module that needs a lot of capability built in. At this point I would not use SWIG or Instant. So, if Fortran is out for you, then install scipy (or install weave separately) and start with weave http://www.scipy.org/Weave If you can compile your C/C++ functions as a shared-library, then check-out ctypes as well. -Travis > So, a few questions: > > Any suggestion for which package I should use? Does this heavily depend > for which purpose I want to use it? > > Where can I find the docs for Weave? I find several links on the internet > pointing to http://www.scipy.org/documentation/weave for more info, > but there is nothing anymore. > > Thanks in advance, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From lcordier at point45.com Thu Jun 29 13:55:39 2006 From: lcordier at point45.com (Louis Cordier) Date: Thu, 29 Jun 2006 19:55:39 +0200 (SAST) Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A41265.3070106@ieee.org> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> Message-ID: >> For heavy number crunching I would like to include C and/or C++ functions >> in my NumPy programs. They should have/give NumPy arrays as input/output. >> On http://www.scipy.org/Topical_Software I find several suggestions to wrap >> C/C++ code: SWIG, weave, Pyrex, Instant, ... but it's quite difficult for me >> to have an idea which one I can/should use. >> > This is my personal preference order: > > 1) If you can write Fortran code --- do it and use f2py > > 2) If you have well-encapsulated functions to call then use > either weave or ctypes (both are very nice). > > 3) PyRex is a great option for writing a custom extension module > that needs a lot of capability built in. > > At this point I would not use SWIG or Instant. > > So, if Fortran is out for you, then install scipy (or install weave > separately) and start with weave http://www.scipy.org/Weave Now since we are on the topic ;) I was wondering if there were any issues with, say, using Psyco with NumPy? http://psyco.sourceforge.net/ Then that number-crunching code could still be in Python at least. Anyone have some benchmarks/comments? Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd.
http://www.point45.org From david.huard at gmail.com Thu Jun 29 14:42:51 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 29 Jun 2006 14:42:51 -0400 Subject: [Numpy-discussion] Bug in digitize function Message-ID: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> Hi, Here is something I noticed with digitize() that I guess would qualify as a small but annoying bug. In [165]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); print bin[0]; digitize(x,bin) 0.0925030184144 0.0925030184144 Out[165]: array([2, 9, 5, 9, 6, 1, 1, 1, 4, 5]) In [166]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); print bin[0]; digitize(x,bin) 0.0209738428066 0.0209738428066 Out[166]: array([ 5, 2, 8, 3, 0, 8, 9, 6, 10, 9]) Sometimes, the smallest number in x is counted in the first bin, and sometimes, it is counted as an outlier (bin number = 0). Moreover, creating the bin with bin = linspace(x.min()-eps, x.max(), 10) doesn't seem to solve the problem if eps is too small (i.e. 1./2**32). So basically, you can have In [186]: x.min()>bin[0] Out[186]: True and yet digitize() considers x.min() as an outlier. And to actually do something constructive, here is a docstring for digitize """Given an array of values and bin edges, digitize(values, bin_edges) returns the index of the bin each value falls into. The first bin has index 1, and the last bin has index n, where n is the number of bins. Values smaller than the lowest edge are assigned index 0, while values larger than the highest edge are assigned index n+1. """ Cheers, David P.S. Many mails I send don't make it to the list. Is it gmail-related? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Chris.Barker at noaa.gov Thu Jun 29 15:10:51 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 29 Jun 2006 12:10:51 -0700 Subject: [Numpy-discussion] min/max not exported in "from numpy import *" In-Reply-To: <44A40E27.7060103@ieee.org> References: <4e7ed7700606290757t6d01ee12o91e47e4e4129d46e@mail.gmail.com> <17571.60724.424201.464714@montanaro.dyndns.org> <4e7ed7700606290823i53c04f28j8618861662a1b9e2@mail.gmail.com> <44A40E27.7060103@ieee.org> Message-ID: <44A425BB.6010502@noaa.gov> Travis Oliphant wrote: > Well, Numeric had the sum function long before Python introduced one. > NumPy adopted Numeric's sum function as well. Yet another reason to NEVER use "import *" -CHB -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Jun 29 15:18:25 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 29 Jun 2006 12:18:25 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> Message-ID: <44A42781.6010305@noaa.gov> Louis Cordier wrote: >> At this point I would not use SWIG or Instant. In general, SWIG makes sense if you have a substantial existing library that you need access to, and particularly if that library is evolving and needs to be used directly from C/C++ code as well. If you are writing C/C++ code specifically to be used as a python extension, pyrex and boost::python are good choices. There was a Numeric add-on to boost::python at one point, I don't know if anyone has modified it for numpy. > I was wondering if there where any issues with say using Psyco > with NumPy ? http://psyco.sourceforge.net/ Psyco knows nothing of numpy arrays, and thus can only access them as generic Python objects -- so it won't help. 
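[Editorial note: Chris's point about generic Python objects can be made concrete. A tracing compiler of that era could only see numpy array elements as boxed Python floats, i.e. the element-by-element path below; the vectorized path stays entirely in C. This is an illustrative sketch, not a Psyco benchmark.]

```python
import numpy as np

a = np.arange(100000, dtype=np.float64)

def python_sum(arr):
    """Sum the array element by element. Each `x` is boxed into a Python
    float on every iteration -- the only view of the data a generic-object
    JIT like Psyco had."""
    total = 0.0
    for x in arr:
        total += x
    return total

# The vectorized path never leaves C, yet computes the same value
# (exactly, here: all partial sums are integers below 2**53):
print(python_sum(a) == a.sum())  # True
```

Timing the two with `timeit` shows an orders-of-magnitude gap, which is the overhead being discussed.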
A couple years ago, someone wrote a micro-Numeric package that used python arrays as the base storage, and ran it with psyco with pretty impressive results. What that tells me is that if psyco could be taught to understand numpy arrays, (or at least the generic array interface) it could work well. It would be a lot of work, however. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david.huard at gmail.com Thu Jun 29 15:27:57 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 29 Jun 2006 15:27:57 -0400 Subject: [Numpy-discussion] Sourceforge and gmail [was: Re: Recarray attributes writeable] In-Reply-To: <449325E6.5080609@gmail.com> References: <20060616161043.A29191@cfcp.uchicago.edu> <449325E6.5080609@gmail.com> Message-ID: <91cf711d0606291227g6fdfc850o7f3dce290f7b0469@mail.gmail.com> Is it possible that gmail mails get through when they are sent by *lists.sourceforge.net* while they are blocked when the outgoing server is gmail.com? My situation is that I can't post a new discussion to the list, although replies seem to get through. David 2006/6/16, Robert Kern : > > Robert Kern wrote: > > Erin Sheldon wrote: > > > >>Hi everyone - > >> > >>(this is my fourth try in the last 24 hours to post this. > >>Apparently, the gmail smtp server is in the blacklist!! > >>this is bad). > > > > I doubt it since that's where my email goes through. > > And of course that's utterly bogus since I usually use GMane. Apologies. > > However, *this* is a real email to numpy-discussion. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth."
> -- Umberto Eco > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paustin at eos.ubc.ca Thu Jun 29 15:35:39 2006 From: paustin at eos.ubc.ca (Philip Austin) Date: Thu, 29 Jun 2006 12:35:39 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A42781.6010305@noaa.gov> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> <44A42781.6010305@noaa.gov> Message-ID: <17572.11147.794147.86548@eos.ubc.ca> Christopher Barker writes: > If you are writing C/C++ code specifically to be used as a python > extension, pyrex and boost::python are good choices. There was a Numeric > add-on to boost::python at one point, I don't know if anyone has > modified it for numpy. Yes, I've been migrating my extensions to numpy and will put up a new num_util.h version on the site (http://www.eos.ubc.ca/research/clouds/num_util.html) this weekend (it's about a 10 line diff). When I get a chance I'm also planning to add a page to the scipy wiki so we can see the same extension wrapped with boost, swig, f2py and pyrex. -- Phil From tim.hochberg at cox.net Thu Jun 29 15:37:03 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu, 29 Jun 2006 12:37:03 -0700 Subject: [Numpy-discussion] incorporating C/C++ code In-Reply-To: <44A42781.6010305@noaa.gov> References: <200606291741.12035.joris@ster.kuleuven.be> <44A41265.3070106@ieee.org> <44A42781.6010305@noaa.gov> Message-ID: <44A42BDF.4060507@cox.net> Christopher Barker wrote: > Louis Cordier wrote: > >>> At this point I would not use SWIG or Instant. >>> > > In general, SWIG makes sense if you have a substantial existing library > that you need access to, and particularly if that library is evolving > and needs to be used directly from C/C++ code as well. 
> > If you are writing C/C++ code specifically to be used as a python > extension, pyrex and boost::python are good choices. There was a Numeric > add-on to boost::python at one point, I don't know if anyone has > modified it for numpy. > > >> I was wondering if there where any issues with say using Psyco >> with NumPy ? http://psyco.sourceforge.net/ >> > > Psyco knows nothing of numpy arrays, and thus can only access them as > generic Python objects -- so it won't help. > > A couple years ago, someone wrote a micro-Numeric package that used > python arrays as the base storage, and ran it with psyco with pretty > impressive results. That might have been me. At least I have done this at least once. I even still have the code lying around if anyone wants to play with it. No guarantee that it hasn't succumbed to bit rot though. > What that tells me is that if psyco could be taught > to understand numpy arrays, (or at least the generic array interface) it > could work well. It would be a lot of work, however. > There's another problem as well. Psyco only really knows about 2 things. Integers (C longs actually) and python objects (pointers). Well, I guess that it also knows about arrays of integers/objects as well. It does not know how to handle floating point numbers directly. In fact, the way it handles floating point numbers is to break them into two 32-bit chunks and store them as two integers. When one needs to operate on the float these two integers need to be retrieved, reassembled, operated on and then stuck back into two integers again. As a result, psyco is never going to be super fast for floating point, even if it learned about numeric arrays. In principle, it could learn about floats, but it would require a major rejiggering. As I understand it, Armin has no plans to do much more with Psyco other than bug fixes, instead working on PyPy. 
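[Editorial note: the two-32-bit-integer representation Tim describes is easy to sketch with `struct`. This is only an illustration of the pack/unpack round trip that every floating-point operation would pay under that scheme, not Psyco's actual internals.]

```python
import struct

def split_double(x):
    """View a C double as two 32-bit unsigned integers (low word, high word),
    the kind of representation Tim describes Psyco using."""
    return struct.unpack("<II", struct.pack("<d", x))

def join_double(lo, hi):
    """Reassemble the double from its two 32-bit halves."""
    return struct.unpack("<d", struct.pack("<II", lo, hi))[0]

pi = 3.141592653589793
lo, hi = split_double(pi)
# Every operation on the float pays this split/reassemble round trip:
print(join_double(lo, hi) == pi)  # True
```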
However, Psyco technology will likely go into PyPy (which I've mostly lost track of), so it's possible that down the road fast numeric stuff could be doable in PyPy. -tim From robert.kern at gmail.com Thu Jun 29 18:35:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Jun 2006 17:35:31 -0500 Subject: [Numpy-discussion] We *will* move the mailing list to scipy.org Message-ID: With a vote of 14 to 2 (and about 400 implicit "I don't care one way or the other"), the new ads, and the recent problems with Sourceforge bouncing or delaying GMail messages, I intend to move the mailing list from Sourceforge to scipy.org in short order. If you have strong objections to this move, this is your last chance to voice them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jswhit at fastmail.fm Thu Jun 29 18:42:34 2006 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 29 Jun 2006 16:42:34 -0600 Subject: [Numpy-discussion] python spherepack wrapper Message-ID: <44A4575A.6070701@fastmail.fm> Hi All: For those of you who have a need for spherical harmonic transforms in python, I've updated my spherepack (http://www.cisl.ucar.edu/css/software/spherepack/) wrapper for numpy. Docs at http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/spharm.html. If you have numpy and a Fortran compiler supported by numpy.f2py, all you need to do is run 'python setup.py install'. -Jeff -- Jeffrey S.
Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From rhl at astro.princeton.edu Thu Jun 29 20:47:20 2006 From: rhl at astro.princeton.edu (Robert Lupton) Date: Thu, 29 Jun 2006 20:47:20 -0400 Subject: [Numpy-discussion] Core dump in numpy 0.9.6 In-Reply-To: References: Message-ID: Here's an easy coredump: x = numpy.arange(10, dtype="f"); y = numpy.array(len(x), dtype="F"); y.imag += x Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 PyArray_CompareLists (l1=0x0, l2=0x1841618, n=1) at numpy/core/src/multiarraymodule.c:132 132 if (l1[i] != l2[i]) return 0; (gdb) where #0 PyArray_CompareLists (l1=0x0, l2=0x1841618, n=1) at numpy/core/src/multiarraymodule.c:132 #1 0x02a377d8 in PyUFunc_GenericFunction (self=0x538d40, args=0x2db3c88, mps=0xbfffd9c8) at numpy/core/src/ufuncobject.c:968 #2 0x02a39210 in ufunc_generic_call (self=0x538d40, args=0x2db3c88) at numpy/core/src/ufuncobject.c:2635 #3 0x000243bc in PyObject_CallFunction (callable=0x538d40, format=0x0) at Objects/abstract.c:1756 #4 0x0001f8cc in PyNumber_InPlaceAdd (v=0x565800, w=0x572540) at Objects/abstract.c:740 R From robert.kern at gmail.com Thu Jun 29 20:50:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Jun 2006 19:50:33 -0500 Subject: [Numpy-discussion] Core dump in numpy 0.9.6 In-Reply-To: References: Message-ID: Robert Lupton wrote: > Here's an easy coredump: > > x = numpy.arange(10, dtype="f"); y = numpy.array(len(x), dtype="F"); > y.imag += x > > Program received signal EXC_BAD_ACCESS, Could not access memory. This bug does not appear to exist in recent versions. Please try the latest release (and preferably, the current SVN) before reporting bugs.
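[Editorial note: the snippet as posted contains a likely typo -- `numpy.array(len(x), dtype="F")` builds a 0-d array holding the number 10, not a length-10 complex array, which is presumably part of why the old code crashed. With a correctly sized array, the in-place imaginary-part add is well-defined in later NumPy releases:]

```python
import numpy as np

x = np.arange(10, dtype="f")     # float32, length 10
y = np.zeros(len(x), dtype="F")  # complex64, length 10 (probably what was meant)

# y.imag is a writable float32 view of the imaginary parts:
y.imag += x

print(np.all(y.imag == x))  # True
```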
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Thu Jun 29 21:03:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 19:03:16 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 Message-ID: <44A47854.1050106@ieee.org> I think it's time for the first beta-release of NumPy 1.0. I'd like to put it out within 2 weeks. Please make any comments or voice major concerns so that the 1.0 release series can be as stable as possible. -Travis From aisaac at american.edu Thu Jun 29 22:07:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 29 Jun 2006 22:07:05 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A47854.1050106@ieee.org> References: <44A47854.1050106@ieee.org> Message-ID: On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > Please make any comments or voice major concerns A rather minor issue, but I would just like to make sure that a policy decision was made not to move to a float default for identity(), ones(), zeros(), and empty(). (I leave aside arange().) I see the argument for a change to be 3-fold: 1. It is easier to introduce people to numpy if default data types are all float. (I teach, and I want my students to use numpy.) 2. It is a better match to languages from which users are likely to migrate (e.g., GAUSS or Matlab). 3. In the uses I am most familiar with, float is the most frequently desired data type. (I guess this may be field specific, especially for empty().)
Cheers, Alan Isaac From kwgoodman at gmail.com Thu Jun 29 22:13:07 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 19:13:07 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) I vote float. From tim.leslie at gmail.com Thu Jun 29 22:26:28 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 30 Jun 2006 12:26:28 +1000 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/30/06, Keith Goodman wrote: > On 6/29/06, Alan G Isaac wrote: > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > I see the argument for a change to be 3-fold: > > 1. It is easier to introduce people to numpy if > > default data types are all float. (I teach, > > and I want my students to use numpy.) > > 2. It is a better match to languages from which > > users are likely to migrate (e.g., GAUSS or > > Matlab). > > 3. 
In the uses I am most familiar with, float is > > the most frequently desired data type. (I guess > > this may be field specific, especially for empty().) > > I vote float. +1 float Tim > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Thu Jun 29 22:38:21 2006 From: ndarray at mac.com (Sasha) Date: Thu, 29 Jun 2006 22:38:21 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: I vote for no change. It will be a major backward compatibility headache with applications that rely on integer arrays breaking in mysterious ways. If float wins, I hope there will be a script to update old code. Detecting single argument calls to these functions is probably not very hard. On 6/29/06, Keith Goodman wrote: > On 6/29/06, Alan G Isaac wrote: > > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > > Please make any comments or voice major concerns > > > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > I see the argument for a change to be 3-fold: > > 1. It is easier to introduce people to numpy if > > default data types are all float. (I teach, > > and I want my students to use numpy.) > > 2. It is a better match to languages from which > > users are likely to migrate (e.g., GAUSS or > > Matlab). > > 3. 
In the uses I am most familiar with, float is > > the most frequently desired data type. (I guess > > this may be field specific, especially for empty().) > > I vote float. > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From wbaxter at gmail.com Thu Jun 29 22:40:19 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 30 Jun 2006 11:40:19 +0900 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: I also find the int behavior of these functions strange. +1 float default (or double) --bb On 6/30/06, Tim Leslie wrote: > > On 6/30/06, Keith Goodman wrote: > > On 6/29/06, Alan G Isaac wrote: > > > A rather minor issue, but I would just like to make sure > > > that a policy decision was made not to move to a float > > > default for identity(), ones(), zeros(), and empty(). > > > (I leave aside arange().) > > > > > > I see the argument for a change to be 3-fold: > > > 1. It is easier to introduce people to numpy if > > > default data types are all float. (I teach, > > > and I want my students to use numpy.) > > > 2. It is a better match to languages from which > > > users are likely to migrate (e.g., GAUSS or > > > Matlab). > > > 3. In the uses I am most familiar with, float is > > > the most frequently desired data type. (I guess > > > this may be field specific, especially for empty().) > > > > I vote float. > > +1 float > > Tim > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwgoodman at gmail.com Thu Jun 29 23:09:57 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 20:09:57 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Bill Baxter wrote: > I also find the int behavior of these functions strange. > > +1 float default (or double) Oh, wait. Which do I want, float or double? What does rand, eigh, lstsq, etc return? From wbaxter at gmail.com Fri Jun 30 00:03:21 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 30 Jun 2006 13:03:21 +0900 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: Rand at least returns doubles: >>> num.rand(3,3).dtype.name 'float64' --bb On 6/30/06, Keith Goodman wrote: > > On 6/29/06, Bill Baxter wrote: > > I also find the int behavior of these functions strange. > > > > +1 float default (or double) > > Oh, wait. Which do I want, float or double? What does rand, eigh, > lstsq, etc return? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Fri Jun 30 00:22:47 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 29 Jun 2006 21:22:47 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Bill Baxter wrote: > Rand at least returns doubles: > > >>> num.rand(3,3).dtype.name > 'float64' Then I vote float64. 
>> linalg.eigh(asmatrix(1))[0].dtype.name 'float64' >> linalg.cholesky(asmatrix(1)).dtype.name 'float64' From arnd.baecker at web.de Fri Jun 30 02:49:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 30 Jun 2006 08:49:28 +0200 (CEST) Subject: [Numpy-discussion] logspace behaviour/documentation Message-ID: Hi, I am wondering a bit about the behaviour of logspace: Definition: numpy.logspace(start, stop, num=50, endpoint=True, base=10.0) Reading this I would assume that numpy.logspace(10**-12, 0.0, 100) gives 100 values, from start=10**-12 to stop=0.0, equispaced on a logarithmic scale. But this is not the case. Instead one has to do: numpy.logspace(-12, 0.0, 100) Docstring: Evenly spaced numbers on a logarithmic scale. Computes int(num) evenly spaced exponents from start to stop. If endpoint=True, then last exponent is stop. Returns base**exponents. My impression is that only the very last line is clearly saying what logspace does. And of course the code itself: y = linspace(start,stop,num=num,endpoint=endpoint) return _nx.power(base,y) Possible solutions (see below): a) modify logspace so that numpy.logspace(10**-12, 0.0, 100) works b) keep the current behaviour and improve the doc-string I would be interested in opinions on this. Best, Arnd Possible solution for (a) (no error checking yet): def logspace_modified(start, stop, num=50, endpoint=True): """Evenly spaced numbers on a logarithmic scale. Computes `num` evenly spaced numbers on a logarithmic scale from `start` to `stop`. If endpoint=True, the last sample is `stop`. """ lstart = log(start) lstop = log(stop) y = linspace(lstart, lstop, num=num, endpoint=endpoint) return exp(y) Possible improvement of the doc-string (b) - due to Lars Bittrich: def logspace(start,stop,num=50,endpoint=True,base=10.0): """Evenly spaced numbers on a logarithmic scale. Return 'int(num)' evenly spaced samples on a logarithmic scale from 'base'**'start' to 'base'**'stop'.
If 'endpoint' is True, the last sample is 'base'**'stop'.""" From st at sigmasquared.net Fri Jun 30 02:53:30 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri, 30 Jun 2006 08:53:30 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A4CA6A.1010905@sigmasquared.net> I guess this is a change which would just break too much code. And if the default type should by changed for these functions, why not also for array constructors? On the other hand, many people probably use Numpy almost exclusively with Float64's. A convenient way to change the default type could make their code easier to read. How much effort would it be to provide a convenience module that after importing replaces the relevant functions with wrappers that make Float64's the default? Regards, Stephan Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: >> Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) > > Cheers, > Alan Isaac > From gnurser at googlemail.com Fri Jun 30 05:02:56 2006 From: gnurser at googlemail.com (George Nurser) Date: Fri, 30 Jun 2006 10:02:56 +0100 Subject: [Numpy-discussion] immediate fill after empty gives None. 
Message-ID: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> Have I done something silly here, or is this a bug? Opteron 64-bit, r2631 SVN. In [4]: depths_s2 = empty(shape=(5,),dtype=float) In [5]: depths_s2.fill(2.e5) In [6]: depths_s2 Out[6]: array([ 200000., 200000., 200000., 200000., 200000.]) In [11]: depths_s2 = (empty(shape=(5,),dtype=float)).fill(2.e5) In [12]: print depths_s2 None --George Nurser. From a.u.r.e.l.i.a.n at gmx.net Fri Jun 30 05:13:22 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Fri, 30 Jun 2006 11:13:22 +0200 Subject: [Numpy-discussion] immediate fill after empty gives None. In-Reply-To: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> References: <1d1e6ea70606300202r1ce777ddx2e6bf888d0eae8a1@mail.gmail.com> Message-ID: <200606301113.22813.a.u.r.e.l.i.a.n@gmx.net> Hi, > Opteron 64-bit, r2631 SVN. > > In [4]: depths_s2 = empty(shape=(5,),dtype=float) > In [5]: depths_s2.fill(2.e5) > In [6]: depths_s2 > Out[6]: array([ 200000., 200000., 200000., 200000., 200000.]) > > In [11]: depths_s2 = (empty(shape=(5,),dtype=float)).fill(2.e5) > In [12]: print depths_s2 > None everything is fine. x.fill() fills x in-place and returns nothing. So in line 11, you created an array, filled it with 2.e5, assigned the return value of fill() (=None) to depths_s2 and threw the array away. HTH, Johannes From oliphant.travis at ieee.org Fri Jun 30 05:33:56 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Jun 2006 03:33:56 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A4F004.60809@ieee.org> Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > >> Please make any comments or voice major concerns >> > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). 
> (I leave aside arange().) > This was a policy decision made many months ago after discussion on this list and would need overwhelming pressure to change. > I see the argument for a change to be 3-fold: > I am, however, sympathetic to the arguments for wanting floating-point defaults. I wanted to change this originally but was convinced to not make such a major change for backward compatibility (more on that later). Nonetheless, I would support the creation of a module called something like defaultfloat or some-other equally impressive name ;-) which contained floating-point defaults of these functions (with the same names). Feel free to contribute (or at least find a better name). Regarding the problem of backward compatibility: I am very enthused about the future of both NumPy and SciPy. There have been a large number of newcomers to the community who have contributed impressively and I see very impressive things going on. This is "a good thing" because these projects need many collaborators and contributors to be successful. However, I have not lost sight of the fact that we still have a major adoption campaign to win before declaring NumPy a success. There are a lot of people who still haven't come over from Numeric and numarray. Consider these download numbers: Numeric-24.2 (released Nov. 11, 2005) 14275 py24.exe 2905 py23.exe 9144 tar.gz Numarray 1.5.1 (released Feb. 7, 2006) 10272 py24.exe 11883 py23.exe 12779 tar.gz NumPy 0.9.8 (May 17, 2006) 3713 py24.exe 558 py23.exe 4111 tar.gz While it is hard to read too much into numbers, this tells me that there are about 10,000 current users of Numeric/Numarray who have not even *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up significantly from its earlier releases. Why is that? It could be that many of the downloads are "casual" users who need it for some other application (in which case they wouldn't feel inclined to try NumPy).
On the other hand, it is also possible that many are still scared away by the pre-1.0 development-cycle --- it has been a bit bumpy for the stalwarts who've braved the rapids as NumPy has matured. Changes like the proposal to move common functions from default integer to default float are exactly the kind of thing that leads people to wait on getting NumPy. One thing I've learned about Open Source development is that it can be hard to figure out exactly what is bothering people and get good critical feedback: people are more likely to just walk away with their complaints than to try and verbalize and/or post them. So, looking at adoption patterns can be a reasonable way to pick up on attitudes. It would appear that there is still a remarkable number of people who are either waiting for NumPy 1.0 or waiting for something else. I'm not sure. I think we have to wait until 1.0 to find out. Therefore, bug-fixes and stabilizing the NumPy API is my #1 priority right now. The other day I read a post by Alex Martelli (an influential Googler) to the Python list where he was basically suggesting that people stick with Numeric until things "stabilize". I can hope he meant "until NumPy 1.0 comes out" but he didn't say that and maybe he meant "until the array in Python stabilizes." I hope he doesn't mean the rumors about an array object in Python itself. Let me be the first to assure everyone that rumors of a "capable" array object in Python have been greatly exaggerated. I would be thrilled if we could just get the "infra-structure" into Python so that different extension modules could at least agree on an array interface. That is a far cry from fulfilling the needs of any current Num user, however. I say all this only to point out why de-stabilizing changes are difficult to do at this point, and to encourage anyone with an interest to continue to promote NumPy. 
If you are at all grateful for its creation, then please try to encourage those whom you know to push for NumPy adoption (or at least a plan for its adoption) in the near future. Best regards, -Travis From pjssilva at ime.usp.br Fri Jun 30 06:49:08 2006 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Fri, 30 Jun 2006 07:49:08 -0300 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <1151664548.19027.1.camel@localhost.localdomain> +1 for float64. I'll teach Introduction to Numerical Linear Algebra next term and I will use numpy! Best, Paulo -- Paulo José da Silva e Silva Professor Assistente do Dep. de Ciência da Computação (Assistant Professor of the Computer Science Dept.) Universidade de São Paulo - Brazil e-mail: pjssilva at ime.usp.br Web: http://www.ime.usp.br/~pjssilva Teoria é o que não entendemos o (Theory is something we don't) suficiente para chamar de prática. (understand well enough to call practice) From jg307 at cam.ac.uk Fri Jun 30 06:58:41 2006 From: jg307 at cam.ac.uk (James Graham) Date: Fri, 30 Jun 2006 11:58:41 +0100 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <44A503E1.2040307@cam.ac.uk> Travis Oliphant wrote: > Nonetheless, I would support the creation of a module called something > like defaultfloat or some-other equally impressive name ;-) which > contained floating-point defaults of these functions (with the same > names). I'd also like to see a way to make the constructors create floating-point arrays by default. > Numeric-24.2 (released Nov.
11, 2005) > > 14275 py24.exe > 2905 py23.exe > 9144 tar.gz > > Numarray 1.5.1 (released Feb, 7, 2006) > > 10272 py24.exe > 11883 py23.exe > 12779 tar.gz > > NumPy 0.9.8 (May 17, 2006) > > 3713 py24.exe > 558 py23.exe > 4111 tar.gz > > > While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy). > > On the other hand, it is also possible that many are still scared away > by the pre-1.0 development-cycle --- it has been a bit bumpy for the > stalwarts who've braved the rapids as NumPy has matured. Changes like > the proposal to move common functions from default integer to default > float are exactly the kind of thing that leads people to wait on getting > NumPy. (just as an aside, a further possibility is the relative availability of documentation for numpy and the other array packages. I entirely understand the reasoning behind the Guide to NumPy being a for-money offering but it does present a significant barrier to adoption, particularly in an environment where the alternatives all offer for-free documentation above and beyond what is available in the docstrings). -- "You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes From lcordier at point45.com Fri Jun 30 07:57:47 2006 From: lcordier at point45.com (Louis Cordier) Date: Fri, 30 Jun 2006 13:57:47 +0200 (SAST) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 (fwd) Message-ID: > While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy). Secondary dependency of other projects maybe ? http://www.google.com/search?q=requires+Numeric+python My money is on Spambayes... On the other hand ;) isn't small numbers a good thing, thus the people using NumPy over Numeric/numarray knows that some things in NumPy might still change and thus their code as well. I'll risk to say their projects are probably also still under active development. So now would probably be the best time to make these type of changes.
Stated differently, how would we like NumPy to function 2 years from now ? With float64's or with int's ? Then we should rather change it now. Then again where is NumPy in a crossing the chasm (http://en.wikipedia.org/wiki/Crossing_the_Chasm) sense of way, visionary or pragmatist ? Just a few random thoughts. Regards, Louis. -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From stefan at sun.ac.za Fri Jun 30 08:24:58 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 30 Jun 2006 14:24:58 +0200 Subject: [Numpy-discussion] Bug in digitize function In-Reply-To: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> References: <91cf711d0606291142p51215c85ua74ed3b27f39d799@mail.gmail.com> Message-ID: <20060630122458.GA4638@mentat.za.net> Hi David On Thu, Jun 29, 2006 at 02:42:51PM -0400, David Huard wrote: > Here is something I noticed with digitize() that I guess would qualify as a > small but annoying bug. > > In [165]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); > print bin[0]; digitize(x,bin) > 0.0925030184144 > 0.0925030184144 > Out[165]: array([2, 9, 5, 9, 6, 1, 1, 1, 4, 5]) > > In [166]: x = rand(10); bin = linspace(x.min(), x.max(), 10); print x.min(); > print bin[0]; digitize(x,bin) > 0.0209738428066 > 0.0209738428066 > Out[166]: array([ 5, 2, 8, 3, 0, 8, 9, 6, 10, 9]) Good catch! Fixed in SVN (along with docstring and test). Cheers Stéfan From t.zito at biologie.hu-berlin.de Fri Jun 30 08:53:30 2006 From: t.zito at biologie.hu-berlin.de (Tiziano Zito) Date: Fri, 30 Jun 2006 14:53:30 +0200 Subject: [Numpy-discussion] MDP-2.0 released Message-ID: <20060630125330.GD16597@itb.biologie.hu-berlin.de> MDP version 2.0 has been released! What is it? ----------- Modular toolkit for Data Processing (MDP) is a data processing framework written in Python.
From the user's perspective, MDP consists of a collection of trainable supervised and unsupervised algorithms that can be combined into data processing flows. The base of readily available algorithms includes Principal Component Analysis, two flavors of Independent Component Analysis, Slow Feature Analysis, Gaussian Classifiers, Growing Neural Gas, Fisher Discriminant Analysis, and Factor Analysis. From the developer's perspective, MDP is a framework to make the implementation of new algorithms easier. MDP takes care of tedious tasks like numerical type and dimensionality checking, leaving the developer free to concentrate on the implementation of the training and execution phases. The new elements then automatically integrate with the rest of the library. As its user base is increasing, MDP might be a good candidate for becoming a common repository of user-supplied, freely available, Python implemented data processing algorithms. Resources --------- Download: http://sourceforge.net/project/showfiles.php?group_id=116959 Homepage: http://mdp-toolkit.sourceforge.net Mailing list: http://sourceforge.net/mail/?group_id=116959 What's new in version 2.0? -------------------------- MDP 2.0 introduces some important structural changes. It is now possible to implement nodes with multiple training phases and even nodes with an undetermined number of phases. This allows for example the implementation of algorithms that need to collect some statistics on the whole input before proceeding with the actual training, or others that need to iterate over a training phase until a convergence criterion is satisfied. The ability to train each phase using chunks of input data is maintained if the chunks are generated with iterators. Nodes that require supervised training can be defined in a very straightforward way by passing additional arguments (e.g., labels or a target output) to the 'train' method. 
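The multi-phase train/execute pattern described above can be sketched in a few lines of plain NumPy. This is an illustrative toy only — the class name, method details, and example data here are hypothetical, not MDP's actual API:

```python
import numpy as np

# A toy "node" in the spirit of the design described above: statistics are
# accumulated over chunks during the training phase, then frozen before
# execution.  Hypothetical illustration, not MDP's real interface.
class MeanRemovalNode:
    def __init__(self):
        self._sum = 0.0
        self._count = 0
        self.mean = None          # set once training is finished

    def train(self, x):
        """Accumulate statistics from one chunk of input data."""
        x = np.asarray(x, dtype=float)
        self._sum = self._sum + x.sum(axis=0)
        self._count += x.shape[0]

    def stop_training(self):
        """Close the training phase; afterwards the node only executes."""
        self.mean = self._sum / self._count

    def execute(self, x):
        """Apply the trained transformation (here: remove the mean)."""
        return np.asarray(x, dtype=float) - self.mean

node = MeanRemovalNode()
for chunk in (np.array([[1.0], [2.0]]), np.array([[3.0], [6.0]])):
    node.train(chunk)             # training can consume chunked input
node.stop_training()              # mean over all chunks: (1+2+3+6)/4 = 3.0
out = node.execute(np.array([[3.0], [4.0]]))
```

A real multi-phase node would additionally track which phase it is in and raise if `execute` is called before training is closed; the chunked `train` loop above is the part that mirrors the iterator-based training described in the announcement.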
New algorithms have been added, expanding the base of readily available basic data processing elements. MDP is now based exclusively on the NumPy Python numerical extension. -- Tiziano Zito Institute for Theoretical Biology Humboldt-Universitaet zu Berlin Invalidenstrasse, 43 D-10115 Berlin, Germany Pietro Berkes Gatsby Computational Neuroscience Unit Alexandra House, 17 Queen Square London WC1N 3AR, United Kingdom From bsouthey at gmail.com Fri Jun 30 09:24:11 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 30 Jun 2006 08:24:11 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: Hi, You should be encouraged by the trend from Numeric to numarray because the tar users clearly are prepared to upgrade. In terms of the education program, the 1.0 release is the best starting point as there is a general phobia for pre-1.0 releases (and dot zero releases). Also, Python 2.5 is coming so it is probably a good time to attempt to educate the exe users on numpy. One way is to provide numpy first (it may be a little too harsh to say only) so people see it when they upgrade. There are two key aspects, probably very much related, that need to happen with the 1.0 release: 1) Identify those "[s]econdary dependency" projects as Louis states (BioPython also comes to mind) and get them to convert. 2) Get the major distros (e.g. openSUSE) to include numpy and not Numeric. In turn this should also make people who build packages (like rpms) also use numpy. This may mean having to support both Numeric and numpy in the initial phase.
Regards Bruce On 6/30/06, Travis Oliphant wrote: > Alan G Isaac wrote: > > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > > >> Please make any comments or voice major concerns > >> > > > > A rather minor issue, but I would just like to make sure > > that a policy decision was made not to move to a float > > default for identity(), ones(), zeros(), and empty(). > > (I leave aside arange().) > > > > This was a policy decision made many months ago after discussion on this > list and would need over-whelming pressure to change. > > > I see the argument for a change to be 3-fold: > > > > I am, however, sympathetic to the arguments for wanting floating-point > defaults. I wanted to change this originally but was convinced to not > make such a major change for back-ward compatibility (more on that later). > > Nonetheless, I would support the creation of a module called something > like defaultfloat or some-other equally impressive name ;-) which > contained floating-point defaults of these functions (with the same > names). > > Feel free to contribute (or at least find a better name). > > > Regarding the problem of backward compatibility: > > I am very enthused about the future of both NumPy and SciPy. There have > been a large number of new-comers to the community who have contributed > impressively and I see very impressive things going on. This is "a > good thing" because these projects need many collaborators and > contributors to be successful. > > However, I have not lost sight of the fact that we still have a major > adoption campaign to win before declaring NumPy a success. There are a > lot of people who still haven't come-over from Numeric and numarray. > Consider these download numbers: > > Numeric-24.2 (released Nov. 
11, 2005) > > 14275 py24.exe > 2905 py23.exe > 9144 tar.gz > > Numarray 1.5.1 (released Feb, 7, 2006) > > 10272 py24.exe > 11883 py23.exe > 12779 tar.gz > > NumPy 0.9.8 (May 17, 2006) > > 3713 py24.exe > 558 py23.exe > 4111 tar.gz > > > While it is hard to read too much into numbers, this tells me that there > are about 10,000 current users of Numeric/Numarray who have not even > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > significantly from its earlier releases. Why is that? It could be > that many of the downloads are "casual" users who need it for some other > application (in which case they wouldn't feel inclined to try NumPy). > > On the other hand, it is also possible that many are still scared away > by the pre-1.0 development-cycle --- it has been a bit bumpy for the > stalwarts who've braved the rapids as NumPy has matured. Changes like > the proposal to move common functions from default integer to default > float are exactly the kind of thing that leads people to wait on getting > NumPy. > > One thing I've learned about Open Source development is that it can be > hard to figure out exactly what is bothering people and get good > critical feedback: people are more likely to just walk away with their > complaints than to try and verbalize and/or post them. So, looking at > adoption patterns can be a reasonable way to pick up on attitudes. > > It would appear that there is still a remarkable number of people who > are either waiting for NumPy 1.0 or waiting for something else. I'm not > sure. I think we have to wait until 1.0 to find out. Therefore, > bug-fixes and stabilizing the NumPy API is my #1 priority right now. > > The other day I read a post by Alex Martelli (an influential Googler) to > the Python list where he was basically suggesting that people stick with > Numeric until things "stabilize". I can hope he meant "until NumPy 1.0 > comes out" but he didn't say that and maybe he meant "until the array > in Python stabilizes." 
> > I hope he doesn't mean the rumors about an array object in Python > itself. Let me be the first to assure everyone that rumors of a > "capable" array object in Python have been greatly exaggerated. I would > be thrilled if we could just get the "infra-structure" into Python so > that different extension modules could at least agree on an array > interface. That is a far cry from fulfilling the needs of any current > Num user, however. > > I say all this only to point out why de-stabilizing changes are > difficult to do at this point, and to encourage anyone with an interest > to continue to promote NumPy. If you are at all grateful for its > creation, then please try to encourage those whom you know to push for > NumPy adoption (or at least a plan for its adoption) in the near future. > > Best regards, > > -Travis > > > > > > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From simon at arrowtheory.com Fri Jun 30 09:47:38 2006 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 30 Jun 2006 15:47:38 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <20060630154738.4837c053.simon@arrowtheory.com> On Fri, 30 Jun 2006 03:33:56 -0600 Travis Oliphant wrote: > > One thing I've learned about Open Source development is that it can be > hard to figure out exactly what is bothering people and get good > critical feedback: people are more likely to just walk away with their > complaints than to 
try and verbalize and/or post them. So, looking at > adoption patterns can be a reasonable way to pick up on attitudes. General confusion in the community. The whole numeric->numarray->numpy story is a little strange for people to believe. Or at least the source for many jokes. Also, there is no mention of numpy on the numarray page. The whole thing smells a little fishy :) Most of the (more casual) users of python for science that I talk to are quite confused about what is going on. It also "looks" like numpy is only a few months old. Personally, I am ready to evangelise numpy wherever I can. (e.g. Europython in 4 days' time :) ) Simon. From aisaac at american.edu Fri Jun 30 09:50:52 2006 From: aisaac at american.edu (Alan Isaac) Date: Fri, 30 Jun 2006 09:50:52 -0400 (EDT) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: On Fri, 30 Jun 2006, Travis Oliphant wrote: > I am, however, sympathetic to the arguments for wanting > floating-point defaults. I wanted to change this > originally but was convinced to not make such a major > change for backward compatibility (more on that later). Before 1.0, it seems right to go with the best design and take some short-run grief for it if necessary. If the right default is float, but extant code will be hurt, then let float be the default and put the legacy-code fix (function redefinition) in the compatibility module. One view ...
Alan Isaac From pebarrett at gmail.com Fri Jun 30 09:52:51 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Fri, 30 Jun 2006 09:52:51 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 (fwd) In-Reply-To: References: Message-ID: <40e64fa20606300652l528f054o293487dd1f862dcf@mail.gmail.com> On 6/30/06, Louis Cordier wrote: > > > While it is hard to read too much into numbers, this tells me that there > > are about 10,000 current users of Numeric/Numarray who have not even > > *tried* NumPy. In fact, Numarray downloads of 1.5.1 went up > > significantly from its earlier releases. Why is that? It could be > > that many of the downloads are "casual" users who need it for some other > > application (in which case they wouldn't feel inclined to try NumPy). > > Secondary dependency of other projects maybe ? > http://www.google.com/search?q=requires+Numeric+python > > My money is on Spambayes... > > On the other hand ;) isn't small numbers a good thing, > thus the people using NumPy over Numeric/numarray knows > that some things in NumPy might still change and thus > their code as well. > > I'll risk to say their projects are probably also still > under active development. > > So now would probably be the best time to make these > type of changes. Stated differently, how would we like > NumPy to function 2 years from now ? > > With float64's or with int's ? Then we should rather > change it now. > > Then again where are NumPy in a crossing the chasm > (http://en.wikipedia.org/wiki/Crossing_the_Chasm) > sense of way, visionary or pragmatist ? > > Just a few random thoughts. > > Regards, Louis. > > -- > Louis Cordier cell: +27721472305 > Point45 Entertainment (Pty) Ltd. http://www.point45.org +1 for float64 If we want to make Numpy the premier numerical analysis environment, then let's get it right. I've been bitten too many times by IDL's float32 default and Numeric's/Numarray's int32. If backward compatibility is the most important requirement then there would be no reason to write Numpy. There, I've said it. -- Paul From stephenemslie at gmail.com Fri Jun 30 10:13:03 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Fri, 30 Jun 2006 15:13:03 +0100 Subject: [Numpy-discussion] iterate along a ray: linear algebra? Message-ID: <51f97e530606300713w1c167cf3j10c36d24f87326cf@mail.gmail.com> I am in the process of implementing an image processing algorithm that requires following rays extending outwards from a starting point and calculating the intensity derivative at each point. The idea is to find the point where the difference in intensity goes beyond a particular threshold. Specifically I'm examining an image of an eye to find the pupil, and the edge of the pupil is a sharp change in intensity. How does one iterate along a line in a 2d matrix, and is there a better way to do this? Is this a problem that linear algebra can help with? Thanks Stephen Emslie -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kwgoodman at gmail.com Fri Jun 30 10:15:39 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 07:15:39 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: On 6/29/06, Alan G Isaac wrote: > On Thu, 29 Jun 2006, Travis Oliphant apparently wrote: > > Please make any comments or voice major concerns > > A rather minor issue, but I would just like to make sure > that a policy decision was made not to move to a float > default for identity(), ones(), zeros(), and empty(). > (I leave aside arange().) > > I see the argument for a change to be 3-fold: > 1. It is easier to introduce people to numpy if > default data types are all float. (I teach, > and I want my students to use numpy.) > 2. It is a better match to languages from which > users are likely to migrate (e.g., GAUSS or > Matlab). > 3. In the uses I am most familiar with, float is > the most frequently desired data type. (I guess > this may be field specific, especially for empty().) So far the vote is 8 for float, 1 for int. From Glen.Mabey at swri.org Fri Jun 30 10:22:29 2006 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Fri, 30 Jun 2006 09:22:29 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <20060630142228.GB30022@bams.swri.edu> On Fri, Jun 30, 2006 at 07:15:39AM -0700, Keith Goodman wrote: > So far the vote is 8 for float, 1 for int. +1 for float64. Glen From tim.hochberg at cox.net Fri Jun 30 10:27:06 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri, 30 Jun 2006 07:27:06 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <44A534BA.8040802@cox.net> Regarding choice of float or int for default: The number one priority for numpy should be to unify the three disparate Python numeric packages. 
Whatever choice of defaults facilitates that is what I support. Personally, given no other constraints, I would probably just get rid of the defaults altogether and make the user choose. -tim From erin.sheldon at gmail.com Fri Jun 30 10:29:06 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 30 Jun 2006 10:29:06 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630154738.4837c053.simon@arrowtheory.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> Message-ID: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> On 6/30/06, Simon Burton wrote: > > General confusion in the community. The whole numeric->numarray->numpy story > is a little strange for people to believe. Or at least the source for > many jokes. > Also, there is no mention of numpy on the numarray page. The whole > thing smells a little fishy :) I can say that coming to numpy early this year I was confused by this, and in fact I began by using numarray because the documentation was available and clearly written. I now support Travis on his book, since none of this would be happening so rapidly without him, but as I was looking for relief from my IDL license woes this turned me off a bit. From Googling, it just wasn't clear which was the future, especially since as I dug deeper I saw old references to numpy that were not referring to the current project. I do think that this is more clear now, but the pages http://numeric.scipy.org/ -- Looks antiquated http://www.numpy.org/ -- is empty are not helping. numeric.scipy.org needs to be converted to the wiki look and feel of the rest of scipy.org, or at least made to look modern. numpy.org should point to the new page perhaps. And the numarray page should at least discuss the move to numpy and have links.
Erin From dd55 at cornell.edu Fri Jun 30 10:29:42 2006 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 30 Jun 2006 10:29:42 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <200606301029.42616.dd55@cornell.edu> +1 for float64 From erin.sheldon at gmail.com Fri Jun 30 10:33:41 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 30 Jun 2006 10:33:41 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> Message-ID: <331116dc0606300733s685ce9e8p5e848ea590475163@mail.gmail.com> On 6/30/06, Erin Sheldon wrote: > http://www.numpy.org/ -- is empty I see this is now pointing to the sourceforge site. Must have been a glitch there earlier as it was returning an empty page. From sransom at nrao.edu Fri Jun 30 10:40:35 2006 From: sransom at nrao.edu (Scott Ransom) Date: Fri, 30 Jun 2006 10:40:35 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <200606301029.42616.dd55@cornell.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> Message-ID: <20060630144035.GA5138@ssh.cv.nrao.edu> +1 for float64 for me as well. Scott On Fri, Jun 30, 2006 at 10:29:42AM -0400, Darren Dale wrote: > +1 for float64 -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From aisaac at american.edu Fri Jun 30 11:11:26 2006 From: aisaac at american.edu (Alan Isaac) Date: Fri, 30 Jun 2006 11:11:26 -0400 (EDT) Subject: [Numpy-discussion] logspace behaviour/documentation In-Reply-To: References: Message-ID: On Fri, 30 Jun 2006, Arnd Baecker wrote: > I am wondering a bit about the behaviour of logspace: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/logspace.html fwiw, Alan Isaac From joris at ster.kuleuven.be Fri Jun 30 11:16:02 2006 From: joris at ster.kuleuven.be (Joris De Ridder) Date: Fri, 30 Jun 2006 17:16:02 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> References: <44A47854.1050106@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> Message-ID: <200606301716.02473.joris@ster.kuleuven.be> On Friday 30 June 2006 16:29, Erin Sheldon wrote: [ES]: the pages [ES]: [ES]: http://numeric.scipy.org/ -- Looks antiquated [ES]: [ES]: are not helping. My opinion too. If that page is the first page you learn about NumPy, you won't have a good impression. Travis, would you accept concrete suggestions or 'help' to improve that page?
Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From steve at arachnedesign.net Fri Jun 30 11:16:14 2006 From: steve at arachnedesign.net (Steve Lianoglou) Date: Fri, 30 Jun 2006 11:16:14 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: > Before 1.0, it seems right to go with the best design > and take some short-run grief for it if necessary. > > If the right default is float, but extant code will be hurt, > then let float be the default and put the legacy-code fix > (function redefinition) in the compatibility module +1 on this very idea. (sorry for sending this directly to you @ first, Alan) From fperez.net at gmail.com Fri Jun 30 11:25:20 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Jun 2006 09:25:20 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630144035.GA5138@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Scott Ransom wrote: > +1 for float64 for me as well. +1 for float64 I have lots of code overriding the int defaults by hand which were giving me grief with hand-written extensions (which were written double-only for speed reasons). I'll be happy to clean this up. I completely understand Travis' concerns about backwards compatibility, but frankly, I think that right now the quality and community momentum of numpy is already enough that it will carry things forward. People will suffer a little during the porting days, but they'll be better off in the long run. I don't think we should underestimate the value of eternal happiness :) Besides, decent unit tests will catch these problems. We all know that every scientific code in existence is unit tested to the smallest routine, so this shouldn't be a problem for anyone.
Cheers, f From ndarray at mac.com Fri Jun 30 12:35:35 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 12:35:35 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Fernando Perez wrote: > ... > Besides, decent unit tests will catch these problems. We all know > that every scientific code in existence is unit tested to the smallest > routine, so this shouldn't be a problem for anyone. Is this a joke? Did anyone ever measure the coverage of numpy unittests? I would be surprised if it was more than 10%. From travis at enthought.com Fri Jun 30 12:38:55 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 11:38:55 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <200606301716.02473.joris@ster.kuleuven.be> References: <44A47854.1050106@ieee.org> <20060630154738.4837c053.simon@arrowtheory.com> <331116dc0606300729y47b6f155k9208ce76daaa3eca@mail.gmail.com> <200606301716.02473.joris@ster.kuleuven.be> Message-ID: <44A5539F.7070401@enthought.com> Joris De Ridder wrote: > On Friday 30 June 2006 16:29, Erin Sheldon wrote: > [ES]: the pages > [ES]: > [ES]: http://numeric.scipy.org/ -- Looks antiquated > [ES]: > [ES]: are not helping. > > My opinion too. If that page is the first page you learn about NumPy, > you won't have a good impression. > > Travis, would you accept concrete suggestions or 'help' to improve > that page? > > Cheers, > Joris > Speaking for the other Travis...I think he's open to suggestions (he hasn't yelled at me yet for suggesting the same sort of things). There was an earlier conversation on this list about the numpy page, in which we proposed redirecting all numeric/numpy links to numpy.scipy.org. I'll ask Jeff to do these redirects if: - everyone agrees that address is a good one - we have the content shaped up on that page.
For now, I've copied the content with some basic cleanup (and adding a style sheet) here: http://numpy.scipy.org If anyone with a modicum of web design experience wants access to edit this site...please (please) speak up and it will be so. Other suggestions are welcome. Travis (Vaught) From travis at enthought.com Fri Jun 30 12:40:14 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 11:40:14 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <44A553EE.1060504@enthought.com> Sasha wrote: > On 6/30/06, Fernando Perez wrote: > >> ... >> Besides, decent unit tests will catch these problems. We all know >> that every scientific code in existence is unit tested to the smallest >> routine, so this shouldn't be a problem for anyone. >> > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. > Very obviously a joke...uh...with the exception of enthought-written scientific code, of course ;-) From kwgoodman at gmail.com Fri Jun 30 12:43:55 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 09:43:55 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. That's a conundrum. A joke is no longer a joke once you point out, yes it is a joke. 
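[Editorial note: Sasha's 10% figure above was a guess, but line coverage is measurable. Below is a toy sketch using only the standard-library `trace` module — `absval` is a made-up stand-in for library code, not anything from NumPy's suite, and a real project would reach for a dedicated coverage tool instead.]

```python
import trace

def absval(x):
    # Hypothetical library function standing in for code under test.
    if x >= 0:
        return x
    return -x

# A "test" that only exercises the positive branch.
tracer = trace.Trace(count=1, trace=0)
assert tracer.runfunc(absval, 3) == 3

# Distinct source lines that executed while tracing: the comparison and
# the first return ran, but the `return -x` branch was never hit.
executed = sorted({lineno for (filename, lineno) in tracer.results().counts})
```

The same idea scaled over a whole test run (`python -m trace --count run_tests.py`) yields per-line execution counts, which is exactly what answering Sasha's question would take.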
From jonas at mwl.mit.edu Fri Jun 30 10:36:06 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Fri, 30 Jun 2006 10:36:06 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> Message-ID: <1151678166.16911.9.camel@convolution.mit.edu> I've got to say +1 for Float64 too. I write a lot of numpy code, and this bites me at least once a week. You'd think I'd learn better, but it's just so easy to screw this up when you have to switch back and forth between matlab (which I'm forced to TA) and numpy (which I use for Real Work). ...Eric From robert.kern at gmail.com Fri Jun 30 12:53:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 11:53:02 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A534BA.8040802@cox.net> References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: Tim Hochberg wrote: > Regarding choice of float or int for default: > > The number one priority for numpy should be to unify the three disparate > Python numeric packages. Whatever choice of defaults facilitates that is > what I support. +10 > Personally, given no other constraints, I would probably just get rid of > the defaults all together and make the user choose. My preferred solution is to add class methods to the scalar types rather than screw up compatibility. In [1]: float64.ones(10) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kwgoodman at gmail.com Fri Jun 30 13:03:50 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 10:03:50 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: On 6/30/06, Robert Kern wrote: > Tim Hochberg wrote: > > Regarding choice of float or int for default: > > > > The number one priority for numpy should be to unify the three disparate > > Python numeric packages. Whatever choice of defaults facilitates that is > > what I support. > > +10 > > > Personally, given no other constraints, I would probably just get rid of > > the defaults all together and make the user choose. > > My preferred solution is to add class methods to the scalar types rather than > screw up compatibility. > > In [1]: float64.ones(10) I don't think an int will be able to hold the number of votes for float64. From wright at esrf.fr Fri Jun 30 13:04:06 2006 From: wright at esrf.fr (Jon Wright) Date: Fri, 30 Jun 2006 19:04:06 +0200 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: <44A55986.8040905@esrf.fr> Travis Oliphant wrote: >I hope he doesn't mean the rumors about an array object in Python >itself. Let me be the first to assure everyone that rumors of a >"capable" array object in Python have been greatly exaggerated. I would >be thrilled if we could just get the "infra-structure" into Python so >that different extension modules could at least agree on an array >interface. That is a far cry from fulfilling the needs of any current >Num user, however. > > Having {pointer + dimensions + strides + type} in the python core would be an incredible step forward - this is far more important than changing my python code to do functionally the same thing with numpy instead of Numeric. 
If the new array object supports most of the interface of the current "array" module then it is already very capable for many tasks. It would be great if it also works with Jython (etc). Bruce Southley wrote: >1) Identify those "[s]econdary dependency" projects as Louis states >(BioPython also comes to mind) and get them to convert. > As author of a (fairly obscure) secondary dependency package it is not clear that this is right time to convert. I very much admire the matplotlib approach of using Numerix and see this as a better solution than switching (or indeed re-writing in java/c++ etc). However, looking into the matplotlib SVN I see: _image.cpp 2420 4 weeks cmoad applied Andrew Straw's numpy patch numerix/_sp_imports.py 2478 2 weeks teoliphant Make recent changes backward compatible with numpy 0.9.8 numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant Fix import error for new numpy While I didn't look at either the code or the diff the comments clearly read as: "DON'T SWITCH YET". Get the basearray into the python core and for sure I will be using that, whatever it is called. I was tempted to switch to numarray in the past because of the nd_image, but I don't see that in numpy just yet? Seeing this on the mailing list: >So far the vote is 8 for float, 1 for int. > ... is yet another hint that I can remain with Numeric as a library, at least until numpy has a frozen interface/behaviour. I am very supportive of the work going on but have some technical concerns about switching. To pick some examples, it appears that numpy.lib.function_base.median makes a copy, sorts and picks the middle element. Some reading at http://ndevilla.free.fr/median/median/index.html or even (eek!) numerical recipes indicates this is not good news. Not to single one routine out, I was also saddened to find both Numeric and numpy use double precision lapack routines for single precision arguments. 
A diff of numpy's linalg.py with Numeric's LinearAlgebra.py goes a long way to explaining why there is resistance to change from Numeric to numpy. The boilerplate changes and you only get "norm" (which I am suspicious about - vector 2 norms are in blas, some matrix 2 norms are in lapack/*lange.f and computing all singular values when you only want the biggest or smallest one is a surprising algorithmic choice). I realise it might sound like harsh criticism - but I don't see what numpy adds for number crunching over and above Numeric. Clearly there *is* a lot more in terms of python integration, but I really don't want to do number crunching with python itself ;-) For numpy to really be better than Numeric I would like to find algorithm selections according to the array dimensions and type. Getting the basearray type into the python core is the key - then it makes sense to get the best of breed algorithms working as you can rely on the basearray being around for many years to come. Please please please get basearray into the python core! How can we help with that? Jon From aisaac at american.edu Fri Jun 30 13:22:30 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 13:22:30 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org><200606301029.42616.dd55@cornell.edu><20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: > On 6/30/06, Fernando Perez wrote: >> Besides, decent unit tests will catch these problems. We >> all know that every scientific code in existence is unit >> tested to the smallest routine, so this shouldn't be >> a problem for anyone. On Fri, 30 Jun 2006, Sasha apparently wrote: > Is this a joke? It had me chuckling. ;-) The dangers of email ... 
Cheers, Alan Isaac From fperez.net at gmail.com Fri Jun 30 13:25:06 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Jun 2006 11:25:06 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: On 6/30/06, Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. Of course it's a joke. So obviously one for anyone who knows the field, that the smiley shouldn't be needed (and yes, I despise background laughs on television, too). Maybe a sad joke, given the realities of scientific computing, and maybe a poor joke, but at least an attempt at humor. Cheers, f From ndarray at mac.com Fri Jun 30 13:25:39 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 13:25:39 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <1151678166.16911.9.camel@convolution.mit.edu> References: <44A47854.1050106@ieee.org> <1151678166.16911.9.camel@convolution.mit.edu> Message-ID: Since I was almost alone with my negative vote on the float64 default, I decided to give some more thought to the issue. I agree there are strong reasons to make the change. In addition to the points in the original post, float64 type is much more closely related to the well-known Python float than int32 to Python long. For example no-one would be surprised by either >>> float64(0)/float64(0) nan or >>> float(0)/float(0) Traceback (most recent call last): File "", line 1, in ? ZeroDivisionError: float division but >>> int32(0)/int32(0) 0 is much more difficult to explain. 
As is >>> int32(2)**32 0 compared to >>> int(2)**32 4294967296L In short, arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. Another argument that I find compelling is that we are in a now or never situation. No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. If we decide to change the default, let's do it everywhere including array constructors and arange. The latter is more controversial, but I still think it is worth doing (I will give reasons in future posts). Changing the defaults only in some functions or providing overrides to functions will only lead to more confusion. My revised vote is -0. On 6/30/06, Eric Jonas wrote: > I've got to say +1 for Float64 too. I write a lot of numpy code, and > this bites me at least once a week. You'd think I'd learn better, but > it's just so easy to screw this up when you have to switch back and > forth between matlab (which I'm forced to TA) and numpy (which I use for > Real Work). > > ...Eric From ndarray at mac.com Fri Jun 30 13:42:33 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 13:42:33 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: "In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included." --- Edsger W.Dijkstra, "How do we tell truths that might hurt?" 18 June 1975 I just miss the good old days ... On 6/30/06, Fernando Perez wrote: > On 6/30/06, Sasha wrote: > > On 6/30/06, Fernando Perez wrote: > > > ...
> > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > Of course it's a joke. So obviously one for anyone who knows the > field, that the smiley shouldn't be needed (and yes, I despise > background laughs on television, too). Maybe a sad joke, given the > realities of scientific computing, and maybe a poor joke, but at least > an attempt at humor. > > Cheers, > > f > From lcordier at point45.com Fri Jun 30 14:05:08 2006 From: lcordier at point45.com (Louis Cordier) Date: Fri, 30 Jun 2006 20:05:08 +0200 (SAST) Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A4F004.60809@ieee.org> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> Message-ID: > Numeric-24.2 (released Nov. 11, 2005) > > 14275 py24.exe > 2905 py23.exe > 9144 tar.gz > > Numarray 1.5.1 (released Feb, 7, 2006) > > 10272 py24.exe > 11883 py23.exe > 12779 tar.gz > > NumPy 0.9.8 (May 17, 2006) > > 3713 py24.exe > 558 py23.exe > 4111 tar.gz Here are some trends with a pretty picture. http://www.google.com/trends?q=numarray%2C+NumPy%2C+Numeric+Python Unfortunately, Numeric alone is too general a term to use. But I would say NumPy is looking good. ;) -- Louis Cordier cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org From oliphant at ee.byu.edu Fri Jun 30 14:13:19 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 12:13:19 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A55986.8040905@esrf.fr> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> Message-ID: <44A569BF.30501@ee.byu.edu> Jon, Thanks for the great feedback. You make some really good points.
> > >Having {pointer + dimensions + strides + type} in the python core would >be an incredible step forward - this is far more important than changing >my python code to do functionally the same thing with numpy instead of >Numeric. > Guido has always wanted consensus before putting things into Python. We need to rally behind NumPy if we are going to get something of its infrastructure into Python itself. >As author of a (fairly obscure) secondary dependency package it is not >clear that this is right time to convert. I very much admire the >matplotlib approach of using Numerix and see this as a better solution >than switching (or indeed re-writing in java/c++ etc). > I disagree with this approach. It's fine for testing and for transition, but it is a headache long term. You are basically supporting three packages. The community is not large enough to do that. I also think it leads people to consider adopting that approach instead of just switching. I'm not particularly thrilled with strategies that essentially promote the existence of three different packages. >However, looking >into the matplotlib SVN I see: >_image.cpp 2420 4 weeks cmoad applied Andrew Straw's >numpy patch >numerix/_sp_imports.py 2478 2 weeks teoliphant Make >recent changes backward compatible with numpy 0.9.8 >numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant > Fix import error for new numpy >While I didn't look at either the code or the diff the comments clearly >read as: "DON'T SWITCH YET". > I don't understand why you interpret it that way? When I moved old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure that matplotlib still works with numpy 0.9.8 (which has the old-style names in the main location). Why does this say "DON'T SWITCH"? If anything it should tell you that we are conscious of trying to keep things working together and compatible with current releases of NumPy.
>Get the basearray into the python core and >for sure I will be using that, whatever it is called. I was tempted to >switch to numarray in the past because of the nd_image, but I don't see >that in numpy just yet? > > It is in SciPy where it belongs (you can also install it as a separate package). It builds and runs on top of NumPy just fine. In fact it was the predecessor to the now fully-capable-but-in-need-of-more-testing numarray C-API that is now in NumPy. >I am very supportive of the work going on but have some technical >concerns about switching. To pick some examples, it appears that >numpy.lib.function_base.median makes a copy, sorts and picks the middle >element. > I'm sure we need lots of improvements in the code-base. This has always been true. We rely on the ability of contributors which doesn't work well unless we have a lot of contributors which are hard to get unless we consolidate around a single array package. Please contribute a fix. >single one routine out, I was also saddened to find both Numeric and >numpy use double precision lapack routines for single precision >arguments. > The point of numpy.linalg is to provide the functionality of Numeric not extend it. This is because SciPy provides a much more capable linalg sub-package that works with single and double precision. It sounds like you want SciPy. >For numpy to really be better than Numeric I would like to find >algorithm selections according to the array dimensions and type. > These are good suggestions but for SciPy. The linear algebra in NumPy is just for getting your feet wet and having access to basic functionality. >Getting >the basearray type into the python core is the key - then it makes sense >to get the best of breed algorithms working as you can rely on the >basearray being around for many years to come. > >Please please please get basearray into the python core! How can we help >with that? 
> > There is a PEP in SVN (see the array interface link at http://numeric.scipy.org). Karol Langner is a Google summer-of-code student working on it this summer. I'm not sure how far he'll get, but I'm hopeful. I could spend more time on it, if I had funding to do it, but right now I'm up against a wall. Again, thanks for the feedback. Best, -Travis From chanley at stsci.edu Fri Jun 30 14:30:41 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 30 Jun 2006 14:30:41 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A569BF.30501@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> Message-ID: <44A56DD1.5050907@stsci.edu> >>Get the basearray into the python core and >>for sure I will be using that, whatever it is called. I was tempted to >>switch to numarray in the past because of the nd_image, but I don't see >>that in numpy just yet? >> >> > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > Hi Travis, Where can one find and download nd_image separate from the rest of scipy? As for the numarray C-API, we are currently doing testing here at STScI. Chris From jonathan.taylor at utoronto.ca Fri Jun 30 14:42:33 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Fri, 30 Jun 2006 14:42:33 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A569BF.30501@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> Message-ID: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> +1 for some sort of float. I am a little confused as to why Float64 is a particularly good choice. Can someone explain in more detail?
Presumably this is the most sensible ctype and translates to a python float well? In general though I agree that this is a now or never change. I suspect we will change a lot of matlab -> Numeric/numarray transitions into matlab -> numpy transitions with this change. I guess it will take a little longer for 1.0 to get out though :( Ah well. Cheers. Jon. On 6/30/06, Travis Oliphant wrote: > Jon, > > Thanks for the great feedback. You make some really good points. > > > > > > >Having {pointer + dimensions + strides + type} in the python core would > >be an incredible step forward - this is far more important than changing > >my python code to do functionally the same thing with numpy instead of > >Numeric. > > > Guido has always wanted consensus before putting things into Python. We > need to rally behind NumPy if we are going to get something of it's > infrastructure into Python itself. > > >As author of a (fairly obscure) secondary dependency package it is not > >clear that this is right time to convert. I very much admire the > >matplotlib approach of using Numerix and see this as a better solution > >than switching (or indeed re-writing in java/c++ etc). > > > I disagree with this approach. It's fine for testing and for > transition, but it is a headache long term. You are basically > supporting three packages. The community is not large enough to do > that. I also think it leads people to consider adopting that approach > instead of just switching. I'm not particularly thrilled with > strategies that essentially promote the existence of three different > packages. 
> > >However, looking > >into the matplotlib SVN I see: > > > >_image.cpp 2420 4 weeks cmoad applied Andrew Straw's > >numpy patch > >numerix/_sp_imports.py 2478 2 weeks teoliphant Make > >recent changes backward compatible with numpy 0.9.8 > >numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant > > Fix import error for new numpy > > > >While I didn't look at either the code or the diff the comments clearly > >read as: "DON'T SWITCH YET". > > > I don't understand why you interpret it that way? When I moved > old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure > that matplotlib still works with numpy 0.9.8 (which has the old-style > names in the main location). > > Why does this say "DON'T SWITCH"? If anything it should tell you that > we are conscious of trying to keep things working together and > compatible with current releases of NumPy. > > >Get the basearray into the python core and > >for sure I will be using that, whatever it is called. I was tempted to > >switch to numarray in the past because of the nd_image, but I don't see > >that in numpy just yet? > > > > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > > >I am very supportive of the work going on but have some technical > >concerns about switching. To pick some examples, it appears that > >numpy.lib.function_base.median makes a copy, sorts and picks the middle > >element. > > > I'm sure we need lots of improvements in the code-base. This has > always been true. We rely on the ability of contributors which doesn't > work well unless we have a lot of contributors which are hard to get > unless we consolidate around a single array package. Please contribute a > fix. 
> > >single one routine out, I was also saddened to find both Numeric and > >numpy use double precision lapack routines for single precision > >arguments. > > > The point of numpy.linalg is to provide the functionality of Numeric not > extend it. This is because SciPy provides a much more capable linalg > sub-package that works with single and double precision. It sounds > like you want SciPy. > > >For numpy to really be better than Numeric I would like to find > >algorithm selections according to the array dimensions and type. > > > These are good suggestions but for SciPy. The linear algebra in NumPy > is just for getting your feet wet and having access to basic > functionality. > > >Getting > >the basearray type into the python core is the key - then it makes sense > >to get the best of breed algorithms working as you can rely on the > >basearray being around for many years to come. > > > >Please please please get basearray into the python core! How can we help > >with that? > > > > > There is a PEP in SVN (see the array interface link at > http://numeric.scipy.org) Karol Langner is a Google summer-of-code > student working on it this summer. I'm not sure how far he'll get, but > I'm hopeful. > > I could spend more time on it, if I had funding to do it, but right now > I'm up against a wall. > > Again, thanks for the feedback. > > Best, > > -Travis
From matthew.brett at gmail.com Fri Jun 30 14:48:06 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 30 Jun 2006 19:48:06 +0100 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <1e2af89e0606301148v16fef51bu8740ac7db09d2241@mail.gmail.com> Just one more vote for float, on the basis that Travis mentioned: all those first-timers downloading, trying, finding something they didn't expect that was rather confusing, and giving up. From aisaac at american.edu Fri Jun 30 15:02:47 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 15:02:47 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org><44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr><44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: On Fri, 30 Jun 2006, Jonathan Taylor apparently wrote: > In general though I agree that this is a now or never change. Sasha has also made that argument. I see one possible additional strategy. I think everyone agrees that the long view is important. Now even Sasha agrees that float64 is the best default. Suppose 1. float64 is the ideal default (I agree with this) 2.
there is substantial concern about the change of default on extant code for the unwary. One approach proposed is to include a different function definition in a compatibility module. This seems acceptable to me, but as Sasha notes it is not without drawbacks. Here is another possibility: transition by requiring an explicit data type for some period of time (say, 6-12 months). After that time, provide the default of float64. This would require some short term pain, but for the long term gain of the desired outcome. Just a thought, Alan Isaac PS I agree with Sasha's following observations: "arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. ... No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. ... If we decide to change the default, let's do it everywhere including array constructors and arange." From oliphant at ee.byu.edu Fri Jun 30 14:55:27 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 12:55:27 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <44A5739F.7020701@ee.byu.edu> Jonathan Taylor wrote: >+1 for some sort of float. I am a little confused as to why Float64 >is a particularly good choice. Can someone explain in more detail? >Presumably this is the most sensible ctype and translates to a python >float well? > > O.K. I'm convinced that we should change to float as the default, but *everywhere* as Sasha says. We will provide two tools to make the transition easier.
1) The numpy.oldnumeric sub-package will contain definitions of changed functions that keep the old defaults (integer). This is what convertcode replaces for import Numeric calls so future users who make the transition won't really notice. 2) A function/script that can be run to convert all type-less uses of the changed functions to explicitly insert dtype=int. Yes, it will be a bit painful (I made the change and count 6 failures in NumPy tests and 34 in SciPy). But, it sounds like there is support for doing it. And yes, we must do it prior to 1.0 if we do it at all. Comments? -Travis From cookedm at physics.mcmaster.ca Fri Jun 30 14:59:28 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 14:59:28 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> Message-ID: <20060630145928.3450b0b1@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 14:42:33 -0400 "Jonathan Taylor" wrote: > +1 for some sort of float. I am a little confused as to why Float64 > is a particularly good choice. Can someone explain in more detail? > Presumably this is the most sensible ctype and translates to a python > float well? It's "float64", btw. Float64 is the old Numeric name. Python's "float" type is a C "double" (just like Python's "int" is a C "long"). In practice, C doubles are 64-bit. 
In NumPy, these are the same type:

    float32 == single (32-bit float, which is a C float)
    float64 == double (64-bit float, which is a C double)

Also, some Python types have equivalent NumPy types (as in, they can be used interchangeably as dtype arguments):

    int == long (C long, could be int32 or int64)
    float == double
    complex == cdouble (also complex128)

Personally, I'd suggest using "single", "float", and "longdouble" in numpy code. [While we're on the subject, for portable code don't use float96 or float128: one or other or both probably won't exist; use longdouble]. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Fri Jun 30 15:11:18 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Jun 2006 15:11:18 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5739F.7020701@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org><44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu><463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: On Fri, 30 Jun 2006, Travis Oliphant apparently wrote: > I'm convinced that we should change to float as the > default, but everywhere as Sasha says. Even better! Cheers, Alan Isaac From robert.kern at gmail.com Fri Jun 30 15:02:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 14:02:23 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5739F.7020701@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: Travis Oliphant wrote: > Comments? Whatever else you do, leave arange() alone. It should never have accepted floats in the first place.
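[Archive editor's note: the type equivalences David Cooke lists above can be checked directly. A quick sanity check, assuming a NumPy installation — these particular aliases are platform-independent and still hold in modern NumPy, though the thread predates 1.0:]

```python
import numpy as np

# The dtype aliases from the list above:
assert np.dtype(np.single) == np.dtype(np.float32)   # C float, 32 bits
assert np.dtype(np.double) == np.dtype(np.float64)   # C double, 64 bits

# Python's float is a C double, so it maps to float64:
assert np.dtype(float) == np.dtype(np.double)
assert np.dtype(complex) == np.dtype(np.cdouble)     # also complex128

# int maps to the platform C long, so its size is platform-dependent:
print(np.dtype(int).itemsize * 8)  # 32 or 64 depending on platform/version
```

Note that the int == long equivalence is deliberately left unasserted here: as the email says, it "could be int32 or int64" depending on the platform.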
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Jun 30 15:17:11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 12:17:11 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A534BA.8040802@cox.net> References: <44A47854.1050106@ieee.org> <44A534BA.8040802@cox.net> Message-ID: <44A578B7.40004@noaa.gov> Tim Hochberg wrote: > The number one priority for numpy should be to unify the three disparate > Python numeric packages. I think the number one priority should be the best it can be. As someone said, two (or ten) years from now, there will be more new users than users migrating from the older packages. > Personally, given no other constraints, I would probably just get rid of > the defaults all together and make the user choose. I like that too, and it would keep the incompatibility from causing silent errors. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cookedm at physics.mcmaster.ca Fri Jun 30 15:19:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 15:19:26 -0400 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> Message-ID: <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 13:46:07 -0600 "Fernando Perez" wrote: > On 6/28/06, David M. Cooke wrote: > > > [Really, distutils sucks. I think (besides refactoring) it needs it's API > > documented better, or least good conventions on where to hook into. 
> > setuptools and numpy.distutils do their best, but there's only so much you > > can do before everything goes fragile and breaks in unexpected ways.] > > I do hate distutils, having fought it for a long time. Its piss-poor > dependency checking is one of its /many/ annoyances. For a package > with as long a compile time as scipy, it really sucks not to be able > to just modify random source files and trust that it will really > recompile what's needed (no more, no less). > > Anyway, thanks for heeding this one. Hopefully one day somebody will > do the (painful) work of replacing distutils with something that > actually works (perhaps using scons for the build engine...) Until > then, we'll trod along with massively unnecessary rebuilds :) I've tried using SCons -- still don't like it. It's python, but it's too unpythonic for me. (The build engine itself is probably fine, though.) A complete replacement for distutils isn't needed: bits and pieces can be replaced at a time (it gets harder if you've got two packages like setuptools and numpy.distutils trying to improve it, though). For instance, the CCompiler class could be replaced in whole with a rewrite, keeping what could be considered the public API. I've done this before with a version of UnixCCompiler that let me specify a "toolchain": which C compiler and C++ compiler worked together, which linker to use for them, and associated flags. I'm working (slowly) on a rewrite of commands/build_ext.py in numpy.distutils that should keep track of source dependencies better, for instance. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Fri Jun 30 15:23:57 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 12:23:57 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: <44A57A4D.3010605@noaa.gov> Robert Kern wrote: > Whatever else you do, leave arange() alone. It should never have accepted floats > in the first place. Just to make sure we're clear: Because one should use linspace() for that? If so, this would be the time to raise an error (or at least a deprecation warning) when arange() is called with Floats. I have a LOT of code that does that! In fact, I posted a question here recently and got a lot of answers and suggested code, and not one person suggested that I shouldn't use arange() with floats. Did Numeric have linspace() It doesn't look like it to me. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant at ee.byu.edu Fri Jun 30 15:25:23 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 13:25:23 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> Message-ID: <44A57AA3.1040405@ee.byu.edu> Robert Kern wrote: >Travis Oliphant wrote: > > > >>Comments? >> >> > >Whatever else you do, leave arange() alone. It should never have accepted floats >in the first place. > > Actually, Robert makes a good point. 
arange with floats is problematic. We should direct people to linspace instead of changing the default of arange. Most new users will probably expect arange to return a type similar to Python's range which is int. Also: Keeping arange as ints reduces the number of errors from the change in the unit tests to 2 in NumPy and 3 in SciPy. So, I think from both a pragmatic and idealized situation, arange should stay with the default of ints. People who want arange to return floats should be directed to linspace. -Travis From sransom at nrao.edu Fri Jun 30 15:44:38 2006 From: sransom at nrao.edu (Scott Ransom) Date: Fri, 30 Jun 2006 15:44:38 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A57AA3.1040405@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> Message-ID: <20060630194438.GA6065@ssh.cv.nrao.edu> On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: > Robert Kern wrote: > > >Whatever else you do, leave arange() alone. It should never have accepted floats > >in the first place. > > > Actually, Robert makes a good point. arange with floats is > problematic. We should direct people to linspace instead of changing > the default of arange. Most new users will probably expect arange to > return a type similar to Python's range which is int. ... > So, I think from both a pragmatic and idealized situation, arange > should stay with the default of ints. People who want arange to return > floats should be directed to linspace. I agree that arange with floats is problematic. However, if you want, for example, arange(10.0) (as I often do), you have to do: linspace(0.0, 9.0, 10), which is very un-pythonic and not at all what a new user would expect...
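[Archive editor's note: the problem the thread keeps circling — that a float step makes the element count unpredictable — comes down to binary rounding in the length rule an arange-style constructor has to use. A stdlib-only sketch of that rule (a simplification for illustration, not NumPy's actual code):]

```python
import math

def arange_len(start, stop, step):
    # The number of elements an arange-style half-open range produces:
    # ceil((stop - start) / step), computed in floating point.
    return int(math.ceil((stop - start) / step))

# Sometimes rounding works out the way people expect:
print(arange_len(0.0, 1.0, 0.1))   # 10

# And sometimes it does not: (1.3 - 1.0) / 0.1 evaluates to
# 3.0000000000000004 in binary floating point, so the ceiling is 4
# and the nominally excluded endpoint 1.3 sneaks into the result.
print(arange_len(1.0, 1.3, 0.1))   # 4, not 3
```

This is why the thread steers float ranges toward linspace: linspace takes the element count directly instead of deriving it from a step, so rounding cannot change the length of the result.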
I think of linspace as a convenience function, not as a replacement for arange with floats. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From jonas at MIT.EDU Fri Jun 30 15:45:38 2006 From: jonas at MIT.EDU (Eric Jonas) Date: Fri, 30 Jun 2006 15:45:38 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <1151696738.16911.12.camel@convolution.mit.edu> On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote: > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. Given the coverage is so low, how can people help by contributing unit tests? Are there obvious areas with poor coverage? Travis, do you have any opinions on this? ...Eric From robert.kern at gmail.com Fri Jun 30 15:54:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 14:54:30 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630194438.GA6065@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: Scott Ransom wrote: > On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: >> Robert Kern wrote: >> >>> Whatever else you do, leave arange() alone. It should never have accepted floats >>> in the first place. 
>>> >> Actually, Robert makes a good point. arange with floats is >> problematic. We should direct people to linspace instead of changing >> the default of arange. Most new users will probably expect arange to >> return a type similar to Python's range which is int. > ... >> So, I think from both a pragmatic and idealized situtation, arange >> should stay with the default of ints. People who want arange to return >> floats should be directed to linspace. > > I agree that arange with floats is problematic. However, > if you want, for example, arange(10.0) (as I often do), you have > to do: linspace(0.0, 9.0, 10), which is very un-pythonic and not > at all what a new user would expect... > > I think of linspace as a convenience function, not as a > replacement for arange with floats. I don't mind arange(10.0) so much, now that it exists. I would mind, a lot, about arange(10) returning a float64 array. Similarity to the builtin range() is much more important in my mind than an arbitrary "consistency" with ones() and zeros(). It's arange(0.0, 1.0, 0.1) that I think causes the most problems with arange and floats. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 30 16:02:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 15:02:28 -0500 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A57A4D.3010605@noaa.gov> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57A4D.3010605@noaa.gov> Message-ID: Christopher Barker wrote: > Robert Kern wrote: >> Whatever else you do, leave arange() alone. It should never have accepted floats >> in the first place. 
> > Just to make sure we're clear: Because one should use linspace() for that? More or less. Depending on the step and endpoint that you choose, it can be nearly impossible for the programmer to predict how many elements are going to be generated. > If so, this would be the time to raise an error (or at least a > deprecation warning) when arange() is called with Floats. > > I have a LOT of code that does that! In fact, I posted a question here > recently and got a lot of answers and suggested code, and not one person > suggested that I shouldn't use arange() with floats. I should have been more specific, but I did express disapproval in the code sample I gave: x = arange(minx, maxx+step, step) # oy. Since your question wasn't about that specifically, I used the technique that your original sample did. > Did Numeric have linspace() It doesn't look like it to me. It doesn't. It was originally contributed to Scipy by Fernando, IIRC. It's small, so it is easy to copy if you need to maintain support for Numeric, still. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bhendrix at enthought.com Fri Jun 30 16:06:58 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 30 Jun 2006 15:06:58 -0500 Subject: [Numpy-discussion] Setuptools leftover junk In-Reply-To: <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> References: <20060628151040.7af8ed7f@arbutus.physics.mcmaster.ca> <20060628153734.7597800c@arbutus.physics.mcmaster.ca> <20060630151926.7b84043e@arbutus.physics.mcmaster.ca> Message-ID: <44A58462.80902@enthought.com> David M. Cooke wrote: > >>> [Really, distutils sucks. I think (besides refactoring) it needs it's API >>> documented better, or least good conventions on where to hook into. 
>>> setuptools and numpy.distutils do their best, but there's only so much you >>> can do before everything goes fragile and breaks in unexpected ways.] >>> >> I do hate distutils, having fought it for a long time. Its piss-poor >> dependency checking is one of its /many/ annoyances. For a package >> with as long a compile time as scipy, it really sucks not to be able >> to just modify random source files and trust that it will really >> recompile what's needed (no more, no less). >> >> Anyway, thanks for heeding this one. Hopefully one day somebody will >> do the (painful) work of replacing distutils with something that >> actually works (perhaps using scons for the build engine...) Until >> then, we'll trod along with massively unnecessary rebuilds :) >> > > I've tried using SCons -- still don't like it. It's python, but it's too > unpythonic for me. (The build engine itself is probably fine, though.) > Agreed, last time I used it was almost a year ago, so it might have changed, but SCons does a quasi-2 stage build that is very unnatural. If you have python code nested between 2 build events, the python code is executed and the build events are queued. BUT- its dependency management is great. Distutils suffers from 2 major problems as far as I am concerned: setup.py files often contain way too much business logic and verb-age for casual python developers, and worst-in-class dependency checking. I've been considering moving all Enthought projects to SCons. If another large project, such as scipy were to go that way it would make my decision much simpler. Bryce -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Fri Jun 30 16:11:21 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 14:11:21 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630194438.GA6065@ssh.cv.nrao.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: <44A58569.9080504@ee.byu.edu> Scott Ransom wrote: >On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote: > > >>Robert Kern wrote: >> >> >> >>>Whatever else you do, leave arange() alone. It should never have accepted floats >>>in the first place. >>> >>> >>> >>Actually, Robert makes a good point. arange with floats is >>problematic. We should direct people to linspace instead of changing >>the default of arange. Most new users will probably expect arange to >>return a type similar to Python's range which is int. >> >> >... > > >>So, I think from both a pragmatic and idealized situtation, arange >>should stay with the default of ints. People who want arange to return >>floats should be directed to linspace. >> >> I should have worded this as: "People who want arange to return floats *as a default* should be directed to linspace" So, basically, arange is not going to change. Because of this, shifting over was a cinch. I still need to write the convert-script code that inserts dtype=int in routines that use old defaults: *plea* anybody want to write that?? -Travis From mark at mitre.org Fri Jun 30 16:16:46 2006 From: mark at mitre.org (Mark Heslep) Date: Fri, 30 Jun 2006 16:16:46 -0400 Subject: [Numpy-discussion] A. 
Martelli on Numeric/Numpy Message-ID: <44A586AE.5080803@mitre.org> FYI, posted Sunday on python: "...even if the hard-core numeric-python people are all evangelizing for migration to numpy (for reasons that are of course quite defensible!), I think it's quite OK to stick with good old Numeric for the moment (and that's exactly what I do for my own personal use!)" "...Numeric has pretty good documentation (numpy's is probably even better, but it is not available for free, so I don't know!), and if you don't find that documentation sufficient you might want to have a look to my book "Python in a Nutshell" which devotes a chapter to Numeric..." http://groups.google.com/group/comp.lang.python/tree/browse_frm/thread/e5479dac51b6e481/fc475de9fd1b9669?rnum=1&q=martelli&_done=%2Fgroup%2Fcomp.lang.python%2Fbrowse_frm%2Fthread%2Fe5479dac51b6e481%2Fe282e6e2c9d4fc77%3Fq%3Dmartelli%26rnum%3D6%26#doc_55e0c696cb4aea87 Mark From kwgoodman at gmail.com Fri Jun 30 16:37:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 13:37:01 -0700 Subject: [Numpy-discussion] Matrix print plea Message-ID: When an array is printed, the numbers line up in nice columns (if you're using a fixed-width font): array([[0, 0], [0, 0]]) But for matrices the columns do not line up: matrix([[0, 0], [0, 0]]) From cookedm at physics.mcmaster.ca Fri Jun 30 16:38:43 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 30 Jun 2006 16:38:43 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> Message-ID: <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 12:35:35 -0400 Sasha wrote: > On 6/30/06, Fernando Perez wrote: > > ... > > Besides, decent unit tests will catch these problems. 
We all know > that every scientific code in existence is unit tested to the smallest > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. A very quick application of the coverage module, available at http://www.garethrees.org/2001/12/04/python-coverage/ gives me 41%:

Name                            Stmts   Exec  Cover
---------------------------------------------------
numpy                              25     20    80%
numpy._import_tools               235    175    74%
numpy.add_newdocs                   2      2   100%
numpy.core                         28     26    92%
numpy.core.__svn_version__          1      1   100%
numpy.core._internal               99     48    48%
numpy.core.arrayprint             251     92    36%
numpy.core.defchararray           221     58    26%
numpy.core.defmatrix              259    186    71%
numpy.core.fromnumeric            319    153    47%
numpy.core.info                     3      3   100%
numpy.core.ma                    1612   1145    71%
numpy.core.memmap                  64     14    21%
numpy.core.numeric                323    138    42%
numpy.core.numerictypes           236    204    86%
numpy.core.records                272     32    11%
numpy.dft                           6      4    66%
numpy.dft.fftpack                 128     31    24%
numpy.dft.helper                   35     32    91%
numpy.dft.info                      3      3   100%
numpy.distutils                    13      9    69%
numpy.distutils.__version__         4      4   100%
numpy.distutils.ccompiler         296     49    16%
numpy.distutils.exec_command      409     27     6%
numpy.distutils.info                2      2   100%
numpy.distutils.log                37     18    48%
numpy.distutils.misc_util         945    174    18%
numpy.distutils.unixccompiler      34     11    32%
numpy.dual                         41     27    65%
numpy.f2py.info                     2      2   100%
numpy.lib                          30     28    93%
numpy.lib.arraysetops             121     59    48%
numpy.lib.function_base           501     70    13%
numpy.lib.getlimits                76     61    80%
numpy.lib.index_tricks            223     56    25%
numpy.lib.info                      4      4   100%
numpy.lib.machar                  174    154    88%
numpy.lib.polynomial              357     52    14%
numpy.lib.scimath                  51     19    37%
numpy.lib.shape_base              220     24    10%
numpy.lib.twodim_base              77     51    66%
numpy.lib.type_check              110     75    68%
numpy.lib.ufunclike                37     24    64%
numpy.lib.utils                    42     23    54%
numpy.linalg                        5      3    60%
numpy.linalg.info                   2      2   100%
numpy.linalg.linalg               440     71    16%
numpy.random                       10      6    60%
numpy.random.info                   4      4   100%
numpy.testing                       3      3   100%
numpy.testing.info                  2      2   100%
numpy.testing.numpytest           430    214    49%
numpy.testing.utils               151     62    41%
numpy.version                       7      7   100%
---------------------------------------------------
TOTAL                            8982   3764    41%

(I filtered out all the *.tests.* modules). Note that you have to import numpy after starting the coverage, because we use a lot of module-level code that wouldn't be caught otherwise. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Chris.Barker at noaa.gov Fri Jun 30 16:40:39 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 30 Jun 2006 13:40:39 -0700 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> Message-ID: <44A58C47.9080700@noaa.gov> Robert Kern wrote: > It's arange(0.0, 1.0, 0.1) that I think causes the most problems with arange and > floats. actually, much to my surprise:

>>> import numpy as N
>>> N.arange(0.0, 1.0, 0.1)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

But I'm sure there are other examples that don't work out. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cookedm at physics.mcmaster.ca Fri Jun 30 16:46:19 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 30 Jun 2006 16:46:19 -0400 Subject: [Numpy-discussion] Matrix print plea In-Reply-To: References: Message-ID: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> On Fri, 30 Jun 2006 13:37:01 -0700 "Keith Goodman" wrote: > When an array is printed, the numbers line up in nice columns (if > you're using a fixed-width font): > > array([[0, 0], > [0, 0]]) > > But for matrices the columns do not line up: > > matrix([[0, 0], > [0, 0]]) Fixed in SVN. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ndarray at mac.com Fri Jun 30 16:49:53 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 16:49:53 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: As soon as I sent out my 10% estimate, I realized that someone will challenge it with a python level coverage statistics. My main concern is not what fraction of numpy functions is called by unit tests, but what fraction of special cases in the C code is exercised. I am not sure that David's statistics even answers the first question - I would guess it only counts statements in the pure python methods and ignores methods implemented in C. Can someone post C-level statistics from gcov or a similar tool? On 6/30/06, David M. Cooke wrote: > On Fri, 30 Jun 2006 12:35:35 -0400 > Sasha wrote: > > > On 6/30/06, Fernando Perez wrote: > > > ... > > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? 
Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > A very quick application of the coverage module, available at > http://www.garethrees.org/2001/12/04/python-coverage/ > gives me 41%: > > Name Stmts Exec Cover > --------------------------------------------------- > numpy 25 20 80% > numpy._import_tools 235 175 74% > numpy.add_newdocs 2 2 100% > numpy.core 28 26 92% > numpy.core.__svn_version__ 1 1 100% > numpy.core._internal 99 48 48% > numpy.core.arrayprint 251 92 36% > numpy.core.defchararray 221 58 26% > numpy.core.defmatrix 259 186 71% > numpy.core.fromnumeric 319 153 47% > numpy.core.info 3 3 100% > numpy.core.ma 1612 1145 71% > numpy.core.memmap 64 14 21% > numpy.core.numeric 323 138 42% > numpy.core.numerictypes 236 204 86% > numpy.core.records 272 32 11% > numpy.dft 6 4 66% > numpy.dft.fftpack 128 31 24% > numpy.dft.helper 35 32 91% > numpy.dft.info 3 3 100% > numpy.distutils 13 9 69% > numpy.distutils.__version__ 4 4 100% > numpy.distutils.ccompiler 296 49 16% > numpy.distutils.exec_command 409 27 6% > numpy.distutils.info 2 2 100% > numpy.distutils.log 37 18 48% > numpy.distutils.misc_util 945 174 18% > numpy.distutils.unixccompiler 34 11 32% > numpy.dual 41 27 65% > numpy.f2py.info 2 2 100% > numpy.lib 30 28 93% > numpy.lib.arraysetops 121 59 48% > numpy.lib.function_base 501 70 13% > numpy.lib.getlimits 76 61 80% > numpy.lib.index_tricks 223 56 25% > numpy.lib.info 4 4 100% > numpy.lib.machar 174 154 88% > numpy.lib.polynomial 357 52 14% > numpy.lib.scimath 51 19 37% > numpy.lib.shape_base 220 24 10% > numpy.lib.twodim_base 77 51 66% > numpy.lib.type_check 110 75 68% > numpy.lib.ufunclike 37 24 64% > numpy.lib.utils 42 23 54% > numpy.linalg 5 3 60% > numpy.linalg.info 2 2 100% > numpy.linalg.linalg 440 71 16% > numpy.random 10 6 60% > numpy.random.info 4 4 100% > numpy.testing 3 3 100% > numpy.testing.info 2 2 100% > numpy.testing.numpytest 430 214 49% > numpy.testing.utils 151 62 41% > 
numpy.version 7 7 100% > --------------------------------------------------- > TOTAL 8982 3764 41% > > (I filtered out all the *.tests.* modules). Note that you have to import > numpy after starting the coverage, because we use a lot of module-level code > that wouldn't be caught otherwise. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > From kwgoodman at gmail.com Fri Jun 30 16:56:12 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 30 Jun 2006 13:56:12 -0700 Subject: [Numpy-discussion] Matrix print plea In-Reply-To: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> References: <20060630164619.098ec5aa@arbutus.physics.mcmaster.ca> Message-ID: On 6/30/06, David M. Cooke wrote: > On Fri, 30 Jun 2006 13:37:01 -0700 > "Keith Goodman" wrote: > > > When an array is printed, the numbers line up in nice columns (if > > you're using a fixed-width font): > > > > array([[0, 0], > > [0, 0]]) > > > > But for matrices the columns do not line up: > > > > matrix([[0, 0], > > [0, 0]]) > > Fixed in SVN. Thank you! All of the recent improvements to matrices will eventually bring many new numpy users. From travis at enthought.com Fri Jun 30 16:59:20 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 30 Jun 2006 15:59:20 -0500 Subject: [Numpy-discussion] ANN: SciPy 2006 Conference Reminder Message-ID: <44A590A8.5040705@enthought.com> The *SciPy 2006 Conference* is scheduled for Thursday and Friday, August 17-18, 2006 at CalTech with Sprints and Tutorials Monday-Wednesday, August 14-16. Conference details are at http://www.scipy.org/SciPy2006 The deadlines for submitting abstracts and early registration are approaching... 
Call for Presenters ------------------- If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 7, 2006. Papers and/or presentation slides are acceptable and are due by August 4, 2006. Registration: ------------- Early registration ($100.00) is still available through July 14. You may register online at http://www.enthought.com/scipy06. Registration includes breakfast and lunch Thursday & Friday and a very nice dinner Thursday night. After July 14, 2006, registration will cost $150.00. Tutorials and Sprints --------------------- This year the Sprints (Monday and Tuesday, August 14-15) and Tutorials (Wednesday, August 16) are no additional charge (you're on your own for food on those days, though). Remember to include these days in your travel plans. The following topics are presented as Tutorials Wednesday (more info here: http://www.scipy.org/SciPy2006/TutorialSessions): - "3D visualization in Python using tvtk and MayaVi" - "Scientific Data Analysis and Visualization using IPython and Matplotlib." - "Building Scientific Applications using the Enthought Tool Suite (Envisage, Traits, Chaco, etc.)" - "NumPy (migration from Numarray & Numeric, overview of NumPy)" The Sprint topics are under discussion here: http://www.scipy.org/SciPy2006/CodingSprints See you in August! Travis From ndarray at mac.com Fri Jun 30 18:10:05 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:10:05 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: It is not as bad as I thought, but there is certainly room for improvement. 
File `numpy/core/src/multiarraymodule.c' Lines executed:63.56% of 3290 File `numpy/core/src/arrayobject.c' Lines executed:59.70% of 5280 File `numpy/core/src/scalartypes.inc.src' Lines executed:31.67% of 963 File `numpy/core/src/arraytypes.inc.src' Lines executed:47.35% of 868 File `numpy/core/src/arraymethods.c' Lines executed:57.65% of 739 On 6/30/06, Sasha wrote: > As soon as I sent out my 10% estimate, I realized that someone will > challenge it with a python level coverage statistics. My main concern > is not what fraction of numpy functions is called by unit tests, but > what fraction of special cases in the C code is exercised. I am not > sure that David's statistics even answers the first question - I would > guess it only counts statements in the pure python methods and > ignores methods implemented in C. > > Can someone post C-level statistics from gcov > or a similar tool? > > On 6/30/06, David M. Cooke wrote: > > On Fri, 30 Jun 2006 12:35:35 -0400 > > Sasha wrote: > > > > > On 6/30/06, Fernando Perez wrote: > > > > ... > > > > Besides, decent unit tests will catch these problems. We all know > > > > that every scientific code in existence is unit tested to the smallest > > > > routine, so this shouldn't be a problem for anyone. > > > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > > unittests? I would be surprized if it was more than 10%. 
> > > > A very quick application of the coverage module, available at > > http://www.garethrees.org/2001/12/04/python-coverage/ > > gives me 41%: > > > > Name Stmts Exec Cover > > --------------------------------------------------- > > numpy 25 20 80% > > numpy._import_tools 235 175 74% > > numpy.add_newdocs 2 2 100% > > numpy.core 28 26 92% > > numpy.core.__svn_version__ 1 1 100% > > numpy.core._internal 99 48 48% > > numpy.core.arrayprint 251 92 36% > > numpy.core.defchararray 221 58 26% > > numpy.core.defmatrix 259 186 71% > > numpy.core.fromnumeric 319 153 47% > > numpy.core.info 3 3 100% > > numpy.core.ma 1612 1145 71% > > numpy.core.memmap 64 14 21% > > numpy.core.numeric 323 138 42% > > numpy.core.numerictypes 236 204 86% > > numpy.core.records 272 32 11% > > numpy.dft 6 4 66% > > numpy.dft.fftpack 128 31 24% > > numpy.dft.helper 35 32 91% > > numpy.dft.info 3 3 100% > > numpy.distutils 13 9 69% > > numpy.distutils.__version__ 4 4 100% > > numpy.distutils.ccompiler 296 49 16% > > numpy.distutils.exec_command 409 27 6% > > numpy.distutils.info 2 2 100% > > numpy.distutils.log 37 18 48% > > numpy.distutils.misc_util 945 174 18% > > numpy.distutils.unixccompiler 34 11 32% > > numpy.dual 41 27 65% > > numpy.f2py.info 2 2 100% > > numpy.lib 30 28 93% > > numpy.lib.arraysetops 121 59 48% > > numpy.lib.function_base 501 70 13% > > numpy.lib.getlimits 76 61 80% > > numpy.lib.index_tricks 223 56 25% > > numpy.lib.info 4 4 100% > > numpy.lib.machar 174 154 88% > > numpy.lib.polynomial 357 52 14% > > numpy.lib.scimath 51 19 37% > > numpy.lib.shape_base 220 24 10% > > numpy.lib.twodim_base 77 51 66% > > numpy.lib.type_check 110 75 68% > > numpy.lib.ufunclike 37 24 64% > > numpy.lib.utils 42 23 54% > > numpy.linalg 5 3 60% > > numpy.linalg.info 2 2 100% > > numpy.linalg.linalg 440 71 16% > > numpy.random 10 6 60% > > numpy.random.info 4 4 100% > > numpy.testing 3 3 100% > > numpy.testing.info 2 2 100% > > numpy.testing.numpytest 430 214 49% > > numpy.testing.utils 151 62 
41% > > numpy.version 7 7 100% > > --------------------------------------------------- > > TOTAL 8982 3764 41% > > > > (I filtered out all the *.tests.* modules). Note that you have to import > > numpy after starting the coverage, because we use a lot of module-level code > > that wouldn't be caught otherwise. > > > > -- > > |>|\/|< > > /--------------------------------------------------------------------------\ > > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > > |cookedm at physics.mcmaster.ca > > > From oliphant at ee.byu.edu Fri Jun 30 18:20:49 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 16:20:49 -0600 Subject: [Numpy-discussion] ***[Possible UCE]*** Re: Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: <44A5A3C1.70904@ee.byu.edu> Sasha wrote: >It is not as bad as I thought, but there is certainly room for improvement. > >File `numpy/core/src/multiarraymodule.c' >Lines executed:63.56% of 3290 > >File `numpy/core/src/arrayobject.c' >Lines executed:59.70% of 5280 > >File `numpy/core/src/scalartypes.inc.src' >Lines executed:31.67% of 963 > >File `numpy/core/src/arraytypes.inc.src' >Lines executed:47.35% of 868 > >File `numpy/core/src/arraymethods.c' >Lines executed:57.65% of 739 > > > > > This is great. How did you generate that? This is exactly the kind of thing we need to be doing for the beta release cycle. I would like these numbers very close to 100% by the time 1.0 final comes out at the end of August / first of September. But, we need help to write the unit tests. What happens if you run the scipy test suite? 
-Travis From ndarray at mac.com Fri Jun 30 18:21:21 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:21:21 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: "Software developers also use coverage testing in concert with testsuites, to make sure software is actually good enough for a release. " -- Gcov Manual I think if we can improve the test coverage, it will speak volumes about the quality of numpy. Does anyone know if it is possible to instrument numpy libraries without having to instrument python itself? It would be nice to make the coverage reports easily available either by including a generating script with the source distribution or by publishing the reports for the releases. On 6/30/06, Sasha wrote: > It is not as bad as I thought, but there is certainly room for improvement. > > File `numpy/core/src/multiarraymodule.c' > Lines executed:63.56% of 3290 > > File `numpy/core/src/arrayobject.c' > Lines executed:59.70% of 5280 > > File `numpy/core/src/scalartypes.inc.src' > Lines executed:31.67% of 963 > > File `numpy/core/src/arraytypes.inc.src' > Lines executed:47.35% of 868 > > File `numpy/core/src/arraymethods.c' > Lines executed:57.65% of 739 > > > > On 6/30/06, Sasha wrote: > > As soon as I sent out my 10% estimate, I realized that someone will > > challenge it with a python level coverage statistics. My main concern > > is not what fraction of numpy functions is called by unit tests, but > > what fraction of special cases in the C code is exercised. I am not > > sure that David's statistics even answers the first question - I would > > guess it only counts statements in the pure python methods and > > ignores methods implemented in C. > > > > Can someone post C-level statistics from gcov > > or a similar tool? 
> > > > On 6/30/06, David M. Cooke wrote: > > > On Fri, 30 Jun 2006 12:35:35 -0400 > > > Sasha wrote: > > > > > > > On 6/30/06, Fernando Perez wrote: > > > > > ... > > > > > Besides, decent unit tests will catch these problems. We all know > > > > > that every scientific code in existence is unit tested to the smallest > > > > > routine, so this shouldn't be a problem for anyone. > > > > > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > > > unittests? I would be surprized if it was more than 10%. > > > > > > A very quick application of the coverage module, available at > > > http://www.garethrees.org/2001/12/04/python-coverage/ > > > gives me 41%: > > > > > > Name Stmts Exec Cover > > > --------------------------------------------------- > > > numpy 25 20 80% > > > numpy._import_tools 235 175 74% > > > numpy.add_newdocs 2 2 100% > > > numpy.core 28 26 92% > > > numpy.core.__svn_version__ 1 1 100% > > > numpy.core._internal 99 48 48% > > > numpy.core.arrayprint 251 92 36% > > > numpy.core.defchararray 221 58 26% > > > numpy.core.defmatrix 259 186 71% > > > numpy.core.fromnumeric 319 153 47% > > > numpy.core.info 3 3 100% > > > numpy.core.ma 1612 1145 71% > > > numpy.core.memmap 64 14 21% > > > numpy.core.numeric 323 138 42% > > > numpy.core.numerictypes 236 204 86% > > > numpy.core.records 272 32 11% > > > numpy.dft 6 4 66% > > > numpy.dft.fftpack 128 31 24% > > > numpy.dft.helper 35 32 91% > > > numpy.dft.info 3 3 100% > > > numpy.distutils 13 9 69% > > > numpy.distutils.__version__ 4 4 100% > > > numpy.distutils.ccompiler 296 49 16% > > > numpy.distutils.exec_command 409 27 6% > > > numpy.distutils.info 2 2 100% > > > numpy.distutils.log 37 18 48% > > > numpy.distutils.misc_util 945 174 18% > > > numpy.distutils.unixccompiler 34 11 32% > > > numpy.dual 41 27 65% > > > numpy.f2py.info 2 2 100% > > > numpy.lib 30 28 93% > > > numpy.lib.arraysetops 121 59 48% > > > numpy.lib.function_base 501 70 13% > > > numpy.lib.getlimits 76 61 80% > > > 
numpy.lib.index_tricks 223 56 25% > > > numpy.lib.info 4 4 100% > > > numpy.lib.machar 174 154 88% > > > numpy.lib.polynomial 357 52 14% > > > numpy.lib.scimath 51 19 37% > > > numpy.lib.shape_base 220 24 10% > > > numpy.lib.twodim_base 77 51 66% > > > numpy.lib.type_check 110 75 68% > > > numpy.lib.ufunclike 37 24 64% > > > numpy.lib.utils 42 23 54% > > > numpy.linalg 5 3 60% > > > numpy.linalg.info 2 2 100% > > > numpy.linalg.linalg 440 71 16% > > > numpy.random 10 6 60% > > > numpy.random.info 4 4 100% > > > numpy.testing 3 3 100% > > > numpy.testing.info 2 2 100% > > > numpy.testing.numpytest 430 214 49% > > > numpy.testing.utils 151 62 41% > > > numpy.version 7 7 100% > > > --------------------------------------------------- > > > TOTAL 8982 3764 41% > > > > > > (I filtered out all the *.tests.* modules). Note that you have to import > > > numpy after starting the coverage, because we use a lot of module-level code > > > that wouldn't be caught otherwise. > > > > > > -- > > > |>|\/|< > > > /--------------------------------------------------------------------------\ > > > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > > > |cookedm at physics.mcmaster.ca > > > > > > From ndarray at mac.com Fri Jun 30 18:31:45 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:31:45 -0400 Subject: [Numpy-discussion] ***[Possible UCE]*** Re: Time for beta1 of NumPy 1.0 In-Reply-To: <44A5A3C1.70904@ee.byu.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5A3C1.70904@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > This is great. How did you generate [the coverage statistic]? > It was really a hack. I've configured python using $ ./configure --enable-debug CC="gcc -fprofile-arcs -ftest-coverage" CXX="c++ gcc -fprofile-arcs -ftest-coverage" (I hate distutils!) Then I installed numpy and ran numpy.test(). 
Some linalg related tests failed which should be fixed by figuring out how to pass -fprofile-arcs -ftest-coverage options to the fortran compiler. The only non-obvious step in using gcov was that I had to tell it where to find object files: $ gcov -o build/temp.linux-x86_64-2.4/numpy/core/src numpy/core/src/*.c > ... > What happens if you run the scipy test suite? I don't know because I don't use scipy. Sorry. From ndarray at mac.com Fri Jun 30 18:41:59 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 18:41:59 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A58569.9080504@ee.byu.edu> References: <44A47854.1050106@ieee.org> <44A4F004.60809@ieee.org> <44A55986.8040905@esrf.fr> <44A569BF.30501@ee.byu.edu> <463e11f90606301142v5351b76r39b1d730fde7faa8@mail.gmail.com> <44A5739F.7020701@ee.byu.edu> <44A57AA3.1040405@ee.byu.edu> <20060630194438.GA6065@ssh.cv.nrao.edu> <44A58569.9080504@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > ... I still need to write the > convert-script code that inserts dtype=int > in routines that use old defaults: *plea* anybody want to write that?? > I will try to do it at some time over the long weekend. I was bitten by the fact that the current convert-script changes anything that resembles an old typecode such as 'b' regardless of context. (I was unlucky to have database columns called 'b'!) Fixing that is very similar to the problem at hand. From jonathan.taylor at stanford.edu Fri Jun 30 18:46:04 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 30 Jun 2006 15:46:04 -0700 Subject: [Numpy-discussion] byteorder question Message-ID: <44A5A9AC.5070707@stanford.edu> In some earlier code (at least one of) the following worked fine. I just want to get a new type that is a byteswap of, say, float64 because I want to memmap an array with a non-native byte order. Any suggestions? 
Thanks, Jonathan ------------------------------------------ Python 2.4.3 (#2, Apr 27 2006, 14:43:58) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '0.9.9.2716' >>> d=numpy.float64 >>> swapped=d.newbyteorder('big') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: descriptor 'newbyteorder' requires a 'genericscalar' object but received a 'str' >>> swapped=d.newbyteorder('>') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: descriptor 'newbyteorder' requires a 'genericscalar' object but received a 'str' >>> -- ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From oliphant at ee.byu.edu Fri Jun 30 19:01:10 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:01:10 -0600 Subject: [Numpy-discussion] byteorder question In-Reply-To: <44A5A9AC.5070707@stanford.edu> References: <44A5A9AC.5070707@stanford.edu> Message-ID: <44A5AD36.7070906@ee.byu.edu> Jonathan Taylor wrote: > In some earlier code (at least one of) the following worked fine. I > just want > to get a new type that is a byteswap of, say, float64 because I want to > memmap an array with a non-native byte order. > > Any suggestions? Last year the array scalars (like float64) were confused with the data-type objects dtype('=i4'). This was fortunately changed many months ago so the two are now separate concepts. This may be why your old code worked.
You want to get a data-type object itself: d = numpy.dtype(numpy.float64) d = numpy.float64(1).dtype # you have to instantiate a float64 object to access its data-type. Then d.newbyteorder('>') or d.newbyteorder('big') will work. But, probably easier and clearer is just to use: dlittle = numpy.dtype('<f8') dbig = numpy.dtype('>f8') There are now full-fledged data-type objects in NumPy. These can be used everywhere old typecodes were used. In fact, all other aliases get converted to these data-type objects because they are what NumPy needs to construct the ndarray. These data-type objects are an important part of the basearray concept being introduced to Python, so education about them is very timely. They are an out-growth of the PyArray_Descr * structure that Numeric used to "represent" a data-type internally. Basically, the old PyArray_Descr * structure was enhanced and given an Object header. Even just getting these data-type objects into Python would be a useful first step to exchanging data. For NumPy, the data-type objects have enabled very sophisticated data-type specification and are key to record-array support in NumPy. Best, -Travis From alexander.belopolsky at gmail.com Fri Jun 30 19:01:46 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 30 Jun 2006 19:01:46 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: On 6/30/06, Sasha wrote: > File `numpy/core/src/arraytypes.inc.src' > Lines executed:47.35% of 868 This was an overly optimistic number.
More relevant is the following obtained by disabling the #line directives: File `build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc' Lines executed:26.71% of 5010 From ndarray at mac.com Fri Jun 30 19:02:19 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 19:02:19 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: ---------- Forwarded message ---------- From: Alexander Belopolsky Date: Jun 30, 2006 7:01 PM Subject: Re: [Numpy-discussion] Time for beta1 of NumPy 1.0 To: "David M. Cooke" Cc: Fernando Perez , numpy-discussion at lists.sourceforge.net On 6/30/06, Sasha wrote: > File `numpy/core/src/arraytypes.inc.src' > Lines executed:47.35% of 868 This was an overly optimistic number. More relevant is the following obtained by disabling the #line directives: File `build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc' Lines executed:26.71% of 5010 From oliphant at ee.byu.edu Fri Jun 30 19:04:42 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:04:42 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> Message-ID: <44A5AE0A.8080500@ee.byu.edu> Alexander Belopolsky wrote: >On 6/30/06, Sasha wrote: > > > >>File `numpy/core/src/arraytypes.inc.src' >>Lines executed:47.35% of 868 >> >> > >This was an overly optimistic number.
More relevant is the >following obtained by disabling the #line directives: > >File `build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc' >Lines executed:26.71% of 5010 > > Yes, this is true, but the auto-generation means that success for one instantiation increases the likelihood for success in the others. So, the 26.7% is probably too pessimistic. -Travis From ndarray at mac.com Fri Jun 30 19:16:27 2006 From: ndarray at mac.com (Sasha) Date: Fri, 30 Jun 2006 19:16:27 -0400 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <44A5AE0A.8080500@ee.byu.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5AE0A.8080500@ee.byu.edu> Message-ID: On 6/30/06, Travis Oliphant wrote: > ... > Yes, this is true, but the auto-generation means that success for one > instantiation increases the likelihood for success in the others. So, > the 26.7% is probably too pessimistic. Agree, but "increases the likelihood" != "guarantees". For example, relying on nan propagation is a fine strategy for the floating point case, but will not work for integer types. Similarly code relying on wrap on overflow will fail when type=float. The best solution would be to autogenerate test cases so that all types are tested where appropriate. From oliphant at ee.byu.edu Fri Jun 30 19:18:22 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 30 Jun 2006 17:18:22 -0600 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <20060630163843.43052fa3@arbutus.physics.mcmaster.ca> <44A5AE0A.8080500@ee.byu.edu> Message-ID: <44A5B13E.1060309@ee.byu.edu> Sasha wrote: > On 6/30/06, Travis Oliphant wrote: > >> ... 
>> Yes, this is true, but the auto-generation means that success for one >> instantiation increases the likelihood for success in the others. So, >> the 26.7% is probably too pessimistic. > > > Agree, but "increases the likelihood" != "guarantees". Definitely... > > The best solution would be to autogenerate test cases so that all > types are tested where appropriate. Right on again... Here's a chance for all the Python-only coders to jump in and make a splash.... -Travis From tim.leslie at gmail.com Fri Jun 30 20:42:13 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 1 Jul 2006 10:42:13 +1000 Subject: [Numpy-discussion] Time for beta1 of NumPy 1.0 In-Reply-To: <1151696738.16911.12.camel@convolution.mit.edu> References: <44A47854.1050106@ieee.org> <200606301029.42616.dd55@cornell.edu> <20060630144035.GA5138@ssh.cv.nrao.edu> <1151696738.16911.12.camel@convolution.mit.edu> Message-ID: On 7/1/06, Eric Jonas wrote: > On Fri, 2006-06-30 at 12:35 -0400, Sasha wrote: > > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > Given the coverage is so low, how can people help by contributing unit > tests? Are there obvious areas with poor coverage? Travis, do you have > any opinions on this? > ...Eric > > A handy tool for finding these things out is coverage.py. I've found it quite helpful in checking unittest coverage in the past. http://www.nedbatchelder.com/code/modules/coverage.html I don't think I'll have a chance in the immediate future to try it out with numpy, but if someone does, I'm sure it will give some answers to your questions Eric. Cheers, Tim Leslie > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion >
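The byte-order exchange between Jonathan Taylor and Travis Oliphant above condenses to a few lines of code. The following is a sketch against a reasonably recent numpy, not the 0.9.9 snapshot discussed in the thread; the names dlittle/dbig follow Travis's reply, and exact reprs can vary with numpy version and platform endianness:

```python
# Sketch of the dtype byte-order recipe from Travis's reply; assumes a
# reasonably recent numpy.  dlittle/dbig follow the names in the reply.
import struct
import numpy as np

# numpy.float64 is an array-scalar *type*, not a data-type object, which
# is why d.newbyteorder('big') failed in Jonathan's interactive session.
d = np.dtype(np.float64)          # native-order float64 data-type object

# Spell the byte order out directly...
dlittle = np.dtype('<f8')
dbig = np.dtype('>f8')

# ...or derive it from an existing dtype ('>'/'big', '<'/'little',
# '='/'native', or 'S' to swap whatever the current order is).
swapped = d.newbyteorder('>')
print(swapped.str)                # '>f8'
print(swapped == dbig)            # True

# The memmap use case: the dtype controls how the underlying bytes are
# interpreted, so a big-endian float64 array stores big-endian bytes.
a = np.array([1.0], dtype=dbig)
print(a.tobytes() == struct.pack('>d', 1.0))   # True
```

Such a non-native dtype can then be passed as the dtype argument of numpy.memmap to read a file written on a machine of the opposite endianness.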