From ralf.gommers at googlemail.com  Mon Feb  1 06:27:54 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 1 Feb 2010 19:27:54 +0800
Subject: [Numpy-discussion] which Python for OS X to build installers?
In-Reply-To: <5b8d13221001312033j1d03642aq8a60b5a9654b9bd5@mail.gmail.com>
References: <5b8d13221001312033j1d03642aq8a60b5a9654b9bd5@mail.gmail.com>
Message-ID: 

On Mon, Feb 1, 2010 at 12:33 PM, David Cournapeau wrote:

> On Sun, Jan 31, 2010 at 11:43 PM, Ralf Gommers wrote:
> > Hi,
> >
> > With only a few changes (see diff below) to pavement.py I managed to
> > build a dmg installer. For this I used the Python in the bootstrap
> > virtualenv however, instead of the one in
> > /Library/Frameworks/Python.framework/. Does this matter?
>
> Yes it does. The binary installers should target the python from
> python.org, nothing else.
>
> > For making releases, would I need the framework build? Do I need 32- and
> > 64-bit versions of Python 2.4, 2.5 and 2.6?
>
> The python from python.org do not support 64 bits (yet), so just build
> for ppc/x86. I never bothered with ppc64, and I think we can actually
> give up on ppc soon.

Thanks David, that's clear. Can anyone please confirm that the MD5 checksum
for the OS X installer for 2.6.4 at www.python.org/download/releases/ is
745494373683081a04cc71522f7c440e? I found an alternative download at
openlogic.com, but if it's not the same as the python.org version then I
won't bother with it.

Cheers,
Ralf

From eadrogue at gmx.net  Mon Feb  1 12:02:59 2010
From: eadrogue at gmx.net (Ernest Adrogué)
Date: Mon, 1 Feb 2010 18:02:59 +0100
Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array
Message-ID: <20100201170259.GA672@doriath.local>

Hello,

Consider the following code:

for j in range(5):
    f = np.bincount(x[y == j])

It fails with MemoryError whenever y == j is all False element-wise.
In [96]: np.bincount([])
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)

/home/ernest/ in ()

MemoryError:

In [97]: np.__version__
Out[97]: '1.3.0'

Is this a bug?

Bye.

From kwgoodman at gmail.com  Mon Feb  1 12:09:20 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Mon, 1 Feb 2010 09:09:20 -0800
Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array
In-Reply-To: <20100201170259.GA672@doriath.local>
References: <20100201170259.GA672@doriath.local>
Message-ID: 

2010/2/1 Ernest Adrogué:
> Hello,
>
> Consider the following code:
>
> for j in range(5):
>        f = np.bincount(x[y == j])
>
> It fails with MemoryError whenever y == j is all False element-wise.
>
>
> In [96]: np.bincount([])
> ---------------------------------------------------------------------------
> MemoryError                               Traceback (most recent call last)
>
> /home/ernest/ in ()
>
> MemoryError:
>
> In [97]: np.__version__
> Out[97]: '1.3.0'
>
> Is this a bug?
>
> Bye.

I get it to work sometimes:

$ ipython
>> import numpy as np
>> np.bincount([])
---------------------------------------------------------------------------
MemoryError:
>> np.bincount(())
   array([0])
>> np.bincount([])
   array([0])
>> np.bincount([])
---------------------------------------------------------------------------
MemoryError:
>> np.__version__
   '1.4.0rc2'

From josef.pktd at gmail.com  Mon Feb  1 16:55:43 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 1 Feb 2010 16:55:43 -0500
Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array
In-Reply-To: 
References: <20100201170259.GA672@doriath.local>
Message-ID: <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com>

On Mon, Feb 1, 2010 at 12:09 PM, Keith Goodman wrote:
> 2010/2/1 Ernest Adrogué:
>> Hello,
>>
>> Consider the following code:
>>
>> for j in range(5):
>>
?f = np.bincount(x[y == j]) >> >> It fails with MemoryError whenever y == j is all False element-wise. >> >> >> In [96]: np.bincount([]) >> --------------------------------------------------------------------------- >> MemoryError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) >> >> /home/ernest/ in () >> >> MemoryError: >> >> In [97]: np.__version__ >> Out[97]: '1.3.0' >> >> Is this a bug? >> >> Bye. > > I get it to work sometimes: > > $ ipython >>> import numpy as np >>> np.bincount([]) > --------------------------------------------------------------------------- > MemoryError: >>> np.bincount(()) > ? array([0]) >>> np.bincount([]) > ? array([0]) >>> np.bincount([]) > --------------------------------------------------------------------------- > MemoryError: >>> np.__version__ > ? '1.4.0rc2' > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > I don't get a memory error but the results are strange for empty >>> x=np.arange(5);np.bincount(x[x == 7]).shape (39672457,) >>> (np.bincount(x[x == 7])==0).all() True >>> x=np.arange(5);np.bincount(x[x == 2]).shape (3,) Josef From david at silveregg.co.jp Mon Feb 1 20:37:54 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 02 Feb 2010 10:37:54 +0900 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> Message-ID: <4B6781F2.5000806@silveregg.co.jp> josef.pktd at gmail.com wrote: > On Mon, Feb 1, 2010 at 12:09 PM, Keith Goodman wrote: >> 2010/2/1 Ernest Adrogu? : >>> Hello, >>> >>> Consider the following code: >>> >>> for j in range(5): >>> f = np.bincount(x[y == j]) >>> >>> It fails with MemoryError whenever y == j is all False element-wise. 
>>> >>> >>> In [96]: np.bincount([]) >>> --------------------------------------------------------------------------- >>> MemoryError Traceback (most recent call last) >>> >>> /home/ernest/ in () >>> >>> MemoryError: >>> >>> In [97]: np.__version__ >>> Out[97]: '1.3.0' >>> >>> Is this a bug? >>> >>> Bye. >> I get it to work sometimes: >> >> $ ipython >>>> import numpy as np >>>> np.bincount([]) >> --------------------------------------------------------------------------- >> MemoryError: >>>> np.bincount(()) >> array([0]) >>>> np.bincount([]) >> array([0]) >>>> np.bincount([]) >> --------------------------------------------------------------------------- >> MemoryError: >>>> np.__version__ >> '1.4.0rc2' >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > I don't get a memory error but the results are strange for empty That may just be because you have enough memory for the (bogus) result: the value is a random memory value interpreted as an intp value, hence most likely very big on 64 bits system. It should be easy to fix, but I am not sure what is the expected result. An empty array ? David From josef.pktd at gmail.com Mon Feb 1 23:05:22 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 1 Feb 2010 23:05:22 -0500 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <4B6781F2.5000806@silveregg.co.jp> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> Message-ID: <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> On Mon, Feb 1, 2010 at 8:37 PM, David Cournapeau wrote: > josef.pktd at gmail.com wrote: >> On Mon, Feb 1, 2010 at 12:09 PM, Keith Goodman wrote: >>> 2010/2/1 Ernest Adrogu? 
: >>>> Hello, >>>> >>>> Consider the following code: >>>> >>>> for j in range(5): >>>> ? ? ? ?f = np.bincount(x[y == j]) >>>> >>>> It fails with MemoryError whenever y == j is all False element-wise. >>>> >>>> >>>> In [96]: np.bincount([]) >>>> --------------------------------------------------------------------------- >>>> MemoryError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) >>>> >>>> /home/ernest/ in () >>>> >>>> MemoryError: >>>> >>>> In [97]: np.__version__ >>>> Out[97]: '1.3.0' >>>> >>>> Is this a bug? >>>> >>>> Bye. >>> I get it to work sometimes: >>> >>> $ ipython >>>>> import numpy as np >>>>> np.bincount([]) >>> --------------------------------------------------------------------------- >>> MemoryError: >>>>> np.bincount(()) >>> ? array([0]) >>>>> np.bincount([]) >>> ? array([0]) >>>>> np.bincount([]) >>> --------------------------------------------------------------------------- >>> MemoryError: >>>>> np.__version__ >>> ? '1.4.0rc2' >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> I don't get a memory error but the results are strange for empty > > That may just be because you have enough memory for the (bogus) result: > the value is a random memory value interpreted as an intp value, hence > most likely very big on 64 bits system. > > It should be easy to fix, but I am not sure what is the expected result. > An empty array ? >>> np.bincount([]) array([0, 0, 0, ..., 0, 0, 0]) >>> np.bincount(np.array([]).astype(int)) array([0, 0, 0, ..., 0, 0, 0]) >>> np.bincount(()) array([0, 0, 0, ..., 0, 0, 0]) >>> np.bincount(()).shape (41570297,) I think this could be considered as a correct answer, the count of any integer is zero. Returning an array with one zero, or the empty array or raising an exception? 
I don't see much of a pattern >>> x=np.arange(5);np.unique(x[x == 7]) array([], dtype=int32) >>> np.unique(x[x == 7], return_index=1) (array([], dtype=int32), array([], dtype=bool)) >>> np.unique(x[x == 7], return_inverse=1) (array([], dtype=int32), array([], dtype=bool)) >>> x=np.arange(5);np.histogram(x[x == 7]) Traceback (most recent call last): File "", line 1, in x=np.arange(5);np.histogram(x[x == 7]) File "C:\Programs\Python25\Lib\site-packages\numpy\lib\function_base.py", line 202, in histogram range = (a.min(), a.max()) ValueError: zero-size array to ufunc.reduce without identity >>> x=np.arange(5);np.digitize(x[x == 7],np.arange(6)) Traceback (most recent call last): File "", line 1, in x=np.arange(5);np.digitize(x[x == 7],np.arange(6)) ValueError: Both x and bins must have non-zero length the only meaningful test cases, I can think of, work both with array([0]) or empty array >>> np.sum(x[x == 7]) == np.bincount(x[x == 7]).sum() True >>> 1.*np.array([0]).astype(int) / np.sum(x[x == 7]) array([ NaN]) >>> 1.*np.array([]).astype(int) / np.sum(x[x == 7]) array([], dtype=float64) >>> count = np.bincount(x[x == 7]) >>> count[count > 0] array([], dtype=int32) I'm slightly in favor of returning an empty array rather than array([0]) as Keith got it. 
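The consistency checks above can be condensed into a short script. This is a sketch assuming a later NumPy release than the 1.3/1.4 versions shown, where `np.bincount` of an empty integer array returns an empty array instead of garbage or a MemoryError:

```python
import numpy as np

x = np.arange(5)
empty = x[x == 7]  # integer array with no elements

counts = np.bincount(empty)  # empty result in later NumPy releases
assert counts.size == 0

# The identity tested above still holds: both sums are zero
assert counts.sum() == np.sum(empty) == 0

# And count[count > 0] is empty either way
assert counts[counts > 0].size == 0
```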
Josef > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From cournape at gmail.com Mon Feb 1 23:36:08 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 2 Feb 2010 13:36:08 +0900 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> Message-ID: <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> On Tue, Feb 2, 2010 at 1:05 PM, wrote: > I think this could be considered as a correct answer, the count of any > integer is zero. Maybe, but this shape is random - it would be different in different conditions, as the length of the returned array is just some random memory location. > > Returning an array with one zero, or the empty array or raising an > exception? I don't see much of a pattern Since there is no obvious solution, the only rationale for not raising an exception I could see is to accommodate often-encountered special cases. I find returning [0] more confusing than returning empty arrays, though - maybe there is a usecase I don't know about. 
cheers, David From charlesr.harris at gmail.com Mon Feb 1 23:45:50 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 1 Feb 2010 21:45:50 -0700 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> Message-ID: On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau wrote: > On Tue, Feb 2, 2010 at 1:05 PM, wrote: > > > I think this could be considered as a correct answer, the count of any > > integer is zero. > > Maybe, but this shape is random - it would be different in different > conditions, as the length of the returned array is just some random > memory location. > > > > > Returning an array with one zero, or the empty array or raising an > > exception? I don't see much of a pattern > > Since there is no obvious solution, the only rationale for not raising > an exception I could see is to accommodate often-encountered special > cases. I find returning [0] more confusing than returning empty > arrays, though - maybe there is a usecase I don't know about. > > In this case I would expect an empty input to be a programming error and raising an error to be the right thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From josef.pktd at gmail.com  Tue Feb  2 00:02:37 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 2 Feb 2010 00:02:37 -0500
Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array
In-Reply-To: 
References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com>
Message-ID: <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com>

On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris wrote:
>
>
> On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau wrote:
>>
>> On Tue, Feb 2, 2010 at 1:05 PM, wrote:
>>
>> > I think this could be considered as a correct answer, the count of any
>> > integer is zero.
>>
>> Maybe, but this shape is random - it would be different in different
>> conditions, as the length of the returned array is just some random
>> memory location.
>>
>> >
>> > Returning an array with one zero, or the empty array or raising an
>> > exception? I don't see much of a pattern
>>
>> Since there is no obvious solution, the only rationale for not raising
>> an exception I could see is to accommodate often-encountered special
>> cases. I find returning [0] more confusing than returning empty
>> arrays, though - maybe there is a usecase I don't know about.
>>
>
> In this case I would expect an empty input to be a programming error and
> raising an error to be the right thing.

Not necessarily, if you run the bincount over groups in a dataset and
you're not sure if every group is actually observed. The main question
is whether the user needs or wants to check for empty groups before or
after the loop over bincount.

Like
>>> np.sum([])
0.0
>>> sum([])
0
the empty array or the array([0]) can be considered as the default
argument. In this case it is not really a programming error.
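The `np.sum([])` analogy can be made concrete: when the number of bins is known up front, the count for an empty group is simply all zeros. This sketch uses the `minlength` keyword, which was added to `np.bincount` in a later NumPy release than the versions under discussion:

```python
import numpy as np

data = np.array([7, 8, 9])
group = data[data > 100]  # an empty selection, like an unobserved group

# minlength fixes the output length, so an empty group yields all zeros,
# analogous to np.sum([]) == 0.0 rather than an error
counts = np.bincount(group, minlength=10)
assert counts.shape == (10,)
assert counts.sum() == 0
```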
Since bincount usually returns redundant zero count unless np.unique(data) = np.arange(data.max()+1), array([0]) would also make sense as a minimum answer >>> np.bincount([7,8,9]) array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1]) I use bincount quite a lot but only with fixed sized arrays, so I never actually used it in this way (yet). Josef > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From david at silveregg.co.jp Tue Feb 2 00:03:05 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 02 Feb 2010 14:03:05 +0900 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> Message-ID: <4B67B209.1080603@silveregg.co.jp> Charles R Harris wrote: > > In this case I would expect an empty input to be a programming error and > raising an error to be the right thing. Ok, I fixed the code in the trunk to raise a ValueError in that case. 
Changing to return an empty array would be easy, cheers, David From charlesr.harris at gmail.com Tue Feb 2 00:31:40 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 1 Feb 2010 22:31:40 -0700 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> Message-ID: On Mon, Feb 1, 2010 at 10:02 PM, wrote: > On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris > wrote: > > > > > > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau > wrote: > >> > >> On Tue, Feb 2, 2010 at 1:05 PM, wrote: > >> > >> > I think this could be considered as a correct answer, the count of any > >> > integer is zero. > >> > >> Maybe, but this shape is random - it would be different in different > >> conditions, as the length of the returned array is just some random > >> memory location. > >> > >> > > >> > Returning an array with one zero, or the empty array or raising an > >> > exception? I don't see much of a pattern > >> > >> Since there is no obvious solution, the only rationale for not raising > >> an exception I could see is to accommodate often-encountered special > >> cases. I find returning [0] more confusing than returning empty > >> arrays, though - maybe there is a usecase I don't know about. > >> > > > > In this case I would expect an empty input to be a programming error and > > raising an error to be the right thing. > > Not necessarily, if you run the bincount over groups in a dataset and > your not sure if every group is actually observed. 
The main question, > is whether the user needs or wants to check for empty groups before or > after the loop over bincount. > > How would they know which bin to check? This seems like an unlikely way to check for an empty input. > Like > >>> np.sum([]) > 0.0 > >>> sum([]) > 0 > the empty array or the array([0]) can be considered as the default > argument. In this case it is not really a programming error. > > I like that better than an empty array. > Since bincount usually returns redundant zero count unless > np.unique(data) = np.arange(data.max()+1), > array([0]) would also make sense as a minimum answer > >>> np.bincount([7,8,9]) > array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1]) > > I use bincount quite a lot but only with fixed sized arrays, so I > never actually used it in this way (yet). > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Feb 2 00:57:28 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Feb 2010 00:57:28 -0500 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> Message-ID: <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris wrote: > > > On Mon, Feb 1, 2010 at 10:02 PM, wrote: >> >> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris >> wrote: >> > >> > >> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau >> > wrote: >> >> >> >> On Tue, Feb 2, 2010 at 1:05 PM, ? wrote: >> >> >> >> > I think this could be considered as a correct answer, the count of >> >> > any >> >> > integer is zero. 
>> >> Maybe, but this shape is random - it would be different in different
>> >> conditions, as the length of the returned array is just some random
>> >> memory location.
>> >>
>> >> >
>> >> > Returning an array with one zero, or the empty array or raising an
>> >> > exception? I don't see much of a pattern
>> >>
>> >> Since there is no obvious solution, the only rationale for not raising
>> >> an exception I could see is to accommodate often-encountered special
>> >> cases. I find returning [0] more confusing than returning empty
>> >> arrays, though - maybe there is a usecase I don't know about.
>> >>
>> >
>> > In this case I would expect an empty input to be a programming error and
>> > raising an error to be the right thing.
>>
>> Not necessarily, if you run the bincount over groups in a dataset and
>> you're not sure if every group is actually observed. The main question
>> is whether the user needs or wants to check for empty groups before or
>> after the loop over bincount.
>>
>
> How would they know which bin to check? This seems like an unlikely way to
> check for an empty input.

# grade (e.g. SAT) distribution by school and race
for s in schools:
    for r in race:
        print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)])

All-white schools and all-black schools raise an exception.

I just made up the story; my first attempt was: all sectors, all
firm-size groups, bincount something, will have empty cells for some
size groups, e.g. nuclear power in family business.
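A runnable version of the grouped-count pattern sketched above (the data here is invented, and `minlength` — a later addition to `np.bincount` — keeps every cell's output the same length even when a school/race cell is empty):

```python
import numpy as np

rng = np.random.default_rng(0)        # made-up student data
sch = rng.integers(0, 3, size=50)     # school id per student
ra = rng.integers(0, 2, size=50)      # race code per student
grades = rng.integers(0, 5, size=50)  # coarse grade bins 0..4

nbins = grades.max() + 1
for s in range(3):
    for r in range(2):
        cell = grades[(sch == s) & (ra == r)]
        # minlength keeps the shape fixed even when the cell is empty
        counts = np.bincount(cell, minlength=nbins)
        print(s, r, counts)
```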
> >> >> Since bincount usually returns redundant zero count unless >> np.unique(data) = np.arange(data.max()+1), >> array([0]) would also make sense as a minimum answer >> >>> np.bincount([7,8,9]) >> array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1]) >> >> I use bincount quite a lot but only with fixed sized arrays, so I >> never actually used it in this way (yet). >> > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From josef.pktd at gmail.com Tue Feb 2 01:03:23 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Feb 2010 01:03:23 -0500 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> Message-ID: <1cd32cbb1002012203t56f45a7co29ec9de3c0cfbd4b@mail.gmail.com> On Tue, Feb 2, 2010 at 12:57 AM, wrote: > On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris > wrote: >> >> >> On Mon, Feb 1, 2010 at 10:02 PM, wrote: >>> >>> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris >>> wrote: >>> > >>> > >>> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau >>> > wrote: >>> >> >>> >> On Tue, Feb 2, 2010 at 1:05 PM, ? wrote: >>> >> >>> >> > I think this could be considered as a correct answer, the count of >>> >> > any >>> >> > integer is zero. >>> >> >>> >> Maybe, but this shape is random - it would be different in different >>> >> conditions, as the length of the returned array is just some random >>> >> memory location. 
>>> >> >>> >> > >>> >> > Returning an array with one zero, or the empty array or raising an >>> >> > exception? I don't see much of a pattern >>> >> >>> >> Since there is no obvious solution, the only rationale for not raising >>> >> an exception ?I could see is to accommodate often-encountered special >>> >> cases. I find returning [0] more confusing than returning empty >>> >> arrays, though - maybe there is a usecase I don't know about. >>> >> >>> > >>> > In this case I would expect an empty input to be a programming error and >>> > raising an error to be the right thing. >>> >>> Not necessarily, if you run the bincount over groups in a dataset and >>> your not sure if every group is actually observed. The main question, >>> is whether the user needs or wants to check for empty groups before or >>> after the loop over bincount. >>> >> >> How would they know which bin to check? This seems like an unlikely way to >> check for an empty input. > > # grade (e.g. SAT) distribution by school and race > for s in schools: > ? ?for r in race: > ? ? ?print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)]) a = np.bincount(allstudentgrades[(sch==s)*(ra==r)]) print s, r, 100.*a /a.sum() to get distribution with empty or nan > > allwhite schools and allblack schools raise an exception. > > I just made up the story, my first attempt was: all sectors, all > firmsize groups, bincount something, will have empty cells for some > size groups, e.g. nuclear power in family business. > > Josef > >> >>> >>> Like >>> >>> np.sum([]) >>> 0.0 >>> >>> sum([]) >>> 0 >>> the empty array or the array([0]) can be considered as the default >>> argument. In this case it is not really a programming error. >>> >> >> I like that better than an empty array. 
>> >>> >>> Since bincount usually returns redundant zero count unless >>> np.unique(data) = np.arange(data.max()+1), >>> array([0]) would also make sense as a minimum answer >>> >>> np.bincount([7,8,9]) >>> array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1]) >>> >>> I use bincount quite a lot but only with fixed sized arrays, so I >>> never actually used it in this way (yet). >>> >> >> Chuck >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > From charlesr.harris at gmail.com Tue Feb 2 02:01:50 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 2 Feb 2010 00:01:50 -0700 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> References: <20100201170259.GA672@doriath.local> <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> Message-ID: On Mon, Feb 1, 2010 at 10:57 PM, wrote: > On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris > wrote: > > > > > > On Mon, Feb 1, 2010 at 10:02 PM, wrote: > >> > >> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris > >> wrote: > >> > > >> > > >> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau > >> > wrote: > >> >> > >> >> On Tue, Feb 2, 2010 at 1:05 PM, wrote: > >> >> > >> >> > I think this could be considered as a correct answer, the count of > >> >> > any > >> >> > integer is zero. > >> >> > >> >> Maybe, but this shape is random - it would be different in different > >> >> conditions, as the length of the returned array is just some random > >> >> memory location. 
> >> >> > >> >> > > >> >> > Returning an array with one zero, or the empty array or raising an > >> >> > exception? I don't see much of a pattern > >> >> > >> >> Since there is no obvious solution, the only rationale for not > raising > >> >> an exception I could see is to accommodate often-encountered special > >> >> cases. I find returning [0] more confusing than returning empty > >> >> arrays, though - maybe there is a usecase I don't know about. > >> >> > >> > > >> > In this case I would expect an empty input to be a programming error > and > >> > raising an error to be the right thing. > >> > >> Not necessarily, if you run the bincount over groups in a dataset and > >> your not sure if every group is actually observed. The main question, > >> is whether the user needs or wants to check for empty groups before or > >> after the loop over bincount. > >> > > > > How would they know which bin to check? This seems like an unlikely way > to > > check for an empty input. > > # grade (e.g. SAT) distribution by school and race > for s in schools: > for r in race: > print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)]) > > allwhite schools and allblack schools raise an exception. > > I just made up the story, my first attempt was: all sectors, all > firmsize groups, bincount something, will have empty cells for some > size groups, e.g. nuclear power in family business. > > OK, point taken. What do you think would be the best thing to do? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Feb 2 03:11:46 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 2 Feb 2010 17:11:46 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? Message-ID: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> Hi, This is a follow-up of the discussion about ABI-breakage in Numpy 1.4.0. 
To sum it up, it is caused by the new datetime support, and it seems difficult to fix without removing datetime support altogether for the 1.4.x series. Both Chuck and myself are in favor of removing the datetime altogether for 1.4.x as a solution. At least in my case, it is mostly justified by the report from David Huard that the current datetime support is still a bit too experimental to be useful for people who rely on binaries. Are there any objections ? I know in particular Travis was against it when it was suggested, but it was not known at that time that the datetime support would break the ABI (I would have been more strongly against it if I knew at that time). The alternative is to just signal a breakage of the ABI in NumPy 1.4.1. I would like to solve this issue ASAP, as it is quite a burden for people who rely on binaries (changing the ABI would at least generate a useful message instead of weird failures / crashes). cheers, David From eadrogue at gmx.net Tue Feb 2 06:22:21 2010 From: eadrogue at gmx.net (Ernest =?iso-8859-1?Q?Adrogu=E9?=) Date: Tue, 2 Feb 2010 12:22:21 +0100 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: References: <1cd32cbb1002011355o39da413bm30c7c6db255987f0@mail.gmail.com> <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> Message-ID: <20100202112221.GA16049@doriath.local> 2/02/10 @ 00:01 (-0700), thus spake Charles R Harris: > On Mon, Feb 1, 2010 at 10:57 PM, wrote: > > > On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris > > wrote: > > > > > > > > > On Mon, Feb 1, 2010 at 10:02 PM, wrote: > > >> > > >> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris > > >> wrote: > > >> > > > >> > > > >> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau > > >> > wrote: 
> > >> >> > > >> >> On Tue, Feb 2, 2010 at 1:05 PM, wrote: > > >> >> > > >> >> > I think this could be considered as a correct answer, the count of > > >> >> > any > > >> >> > integer is zero. > > >> >> > > >> >> Maybe, but this shape is random - it would be different in different > > >> >> conditions, as the length of the returned array is just some random > > >> >> memory location. > > >> >> > > >> >> > > > >> >> > Returning an array with one zero, or the empty array or raising an > > >> >> > exception? I don't see much of a pattern > > >> >> > > >> >> Since there is no obvious solution, the only rationale for not > > raising > > >> >> an exception I could see is to accommodate often-encountered special > > >> >> cases. I find returning [0] more confusing than returning empty > > >> >> arrays, though - maybe there is a usecase I don't know about. > > >> >> > > >> > > > >> > In this case I would expect an empty input to be a programming error > > and > > >> > raising an error to be the right thing. > > >> > > >> Not necessarily, if you run the bincount over groups in a dataset and > > >> your not sure if every group is actually observed. The main question, > > >> is whether the user needs or wants to check for empty groups before or > > >> after the loop over bincount. > > >> > > > > > > How would they know which bin to check? This seems like an unlikely way > > to > > > check for an empty input. > > > > # grade (e.g. SAT) distribution by school and race > > for s in schools: > > for r in race: > > print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)]) > > > > allwhite schools and allblack schools raise an exception. > > > > I just made up the story, my first attempt was: all sectors, all > > firmsize groups, bincount something, will have empty cells for some > > size groups, e.g. nuclear power in family business. > > > > > OK, point taken. What do you think would be the best thing to do? 
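Whatever answer bincount eventually settles on, the group-counting pattern under discussion can be written so that empty groups are harmless. The sketch below is an editorial illustration of josef's school-style example: the helper name is invented, values are assumed to lie in [0, n_bins), and it relies on the minlength argument that later NumPy releases added to np.bincount (it does not exist in the 1.3/1.4 versions being debated here).

```python
import numpy as np

def group_counts(values, groups, n_groups, n_bins):
    """Count integer values per group, padding empty groups with zeros.

    Assumes every entry of `values` lies in [0, n_bins), so that
    np.bincount(..., minlength=n_bins) always returns exactly n_bins
    counts and the row assignment below cannot fail.
    """
    counts = np.zeros((n_groups, n_bins), dtype=np.intp)
    for g in range(n_groups):
        members = values[groups == g]
        # With minlength set, an empty selection yields a row of zeros
        # instead of failing, so no pre-loop emptiness check is needed.
        counts[g] = np.bincount(members, minlength=n_bins)
    return counts
```

With a fixed minlength, an empty group (the "nuclear power in family business" cell) is just a row of zeros, and the caller can decide afterwards how to treat it.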
In my opinion, returning an empty array makes more sense than array([0]). An empty array means "there are no bins", whereas an array of length 1 implies that there is one. Cheers. Ernest From tsagias at gmail.com Tue Feb 2 08:05:42 2010 From: tsagias at gmail.com (Manos Tsagias) Date: Tue, 2 Feb 2010 14:05:42 +0100 Subject: [Numpy-discussion] Normalized histogram for data ranges 0 .. 1 returns PDF > 1 Message-ID: <27def3c71002020505k6376e277j77dce4150403176a@mail.gmail.com> Hi all, I'm using numpy.histogram with normed=True on 1D data ranging 0 .. 1. The results return probabilities greater than 1. The trapezoidal integral returns 1, but I'm afraid this is due to the bin assigned values.
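A quick numeric check of the behaviour Manos describes (editorial sketch, not part of the original message; density=True is the spelling that later replaced normed=True for equal-width bins like these):

```python
import numpy as np

# The thread's example: ten equally spaced points on [0, 0.9].
a = np.arange(0, 1, 0.1)
heights, edges = np.histogram(a, density=True)

# Each bar is ~1.11 high but only 0.09 wide; the *areas* sum to one,
# which is why individual heights above 1 are perfectly legitimate.
widths = np.diff(edges)
total_area = np.sum(heights * widths)
```

Here every height is about 1.11, above 1, while total_area comes out to 1, matching the trapezoidal integral Manos computed.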
Example > follows: >>>> from numpy import * >>>> a = arange(0, 1, 0.1) >>>> histogram(a, normed=True) > (array([ 1.11111111, ?1.11111111, ?1.11111111, ?1.11111111, ?1.11111111, > ?? ? ? ?1.11111111, ?1.11111111, ?1.11111111, ?1.11111111, ?1.11111111]), > array([ 0. ?, ?0.09, ?0.18, ?0.27, ?0.36, ?0.45, ?0.54, ?0.63, ?0.72, > ?? ? ? ?0.81, ?0.9 ])) > ?Is that normal? If not, does anyone encountered that before? Ideas welcome! > ?Thanks, > ?Manos._ histogram with normed=True has the interpretation of a pdf of a continuous random variable not discrete. The pdf of a continuous distribution can be anything greater or equal zero. On [0,1] it has to have a part that is larger than 1 unless the distribution is uniform in order to integrate to 1. It's a sometimes-asked-question, there are more explanations on the mailing list. Josef > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From josef.pktd at gmail.com Tue Feb 2 08:53:48 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Feb 2010 08:53:48 -0500 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <20100202112221.GA16049@doriath.local> References: <4B6781F2.5000806@silveregg.co.jp> <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> <20100202112221.GA16049@doriath.local> Message-ID: <1cd32cbb1002020553v3c7e9629o7df2b8cd1ed2feb7@mail.gmail.com> 2010/2/2 Ernest Adrogu? 
: > ?2/02/10 @ 00:01 (-0700), thus spake Charles R Harris: >> On Mon, Feb 1, 2010 at 10:57 PM, wrote: >> >> > On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris >> > wrote: >> > > >> > > >> > > On Mon, Feb 1, 2010 at 10:02 PM, wrote: >> > >> >> > >> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris >> > >> wrote: >> > >> > >> > >> > >> > >> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau >> > >> > wrote: >> > >> >> >> > >> >> On Tue, Feb 2, 2010 at 1:05 PM, ? wrote: >> > >> >> >> > >> >> > I think this could be considered as a correct answer, the count of >> > >> >> > any >> > >> >> > integer is zero. >> > >> >> >> > >> >> Maybe, but this shape is random - it would be different in different >> > >> >> conditions, as the length of the returned array is just some random >> > >> >> memory location. >> > >> >> >> > >> >> > >> > >> >> > Returning an array with one zero, or the empty array or raising an >> > >> >> > exception? I don't see much of a pattern >> > >> >> >> > >> >> Since there is no obvious solution, the only rationale for not >> > raising >> > >> >> an exception ?I could see is to accommodate often-encountered special >> > >> >> cases. I find returning [0] more confusing than returning empty >> > >> >> arrays, though - maybe there is a usecase I don't know about. >> > >> >> >> > >> > >> > >> > In this case I would expect an empty input to be a programming error >> > and >> > >> > raising an error to be the right thing. >> > >> >> > >> Not necessarily, if you run the bincount over groups in a dataset and >> > >> your not sure if every group is actually observed. The main question, >> > >> is whether the user needs or wants to check for empty groups before or >> > >> after the loop over bincount. >> > >> >> > > >> > > How would they know which bin to check? This seems like an unlikely way >> > to >> > > check for an empty input. >> > >> > # grade (e.g. SAT) distribution by school and race >> > for s in schools: >> > ? ?for r in race: >> > ? ? 
?print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)]) >> > >> > allwhite schools and allblack schools raise an exception. >> > >> > I just made up the story, my first attempt was: all sectors, all >> > firmsize groups, bincount something, will have empty cells for some >> > size groups, e.g. nuclear power in family business. >> > >> > >> OK, point taken. What do you think would be the best thing to do? > > In my opinion, returning an empty array makes more sense than > array([0]). An empty arrays means "there are no bins", whereas > an array of length 1 implies that there is one. Since bincount returns sometimes zero count bins, the implication is not necessarily true. But now I'm also in favor of the empty array, as a least surprise solution, and the user can decide whether, when or how to handle empty arrays. just one more example, before discovering bincount, I used histogram to count integers >>> npx=np.arange(5);np.histogram(x[x == 7], bins=np.arange(7+1)) (array([0, 0, 0, 0, 0, 0, 0]), array([0, 1, 2, 3, 4, 5, 6, 7])) >>> npx=np.arange(5);np.histogram(x[x == 7], bins=[]) (array([], dtype=int32), array([], dtype=float64)) Josef > > Cheers. 
> > Ernest > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Tue Feb 2 08:55:43 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 2 Feb 2010 08:55:43 -0500 Subject: [Numpy-discussion] np.bincount raises MemoryError when given an empty array In-Reply-To: <1cd32cbb1002020553v3c7e9629o7df2b8cd1ed2feb7@mail.gmail.com> References: <1cd32cbb1002012005y42fc33fdmbc3a195b1fee2f9d@mail.gmail.com> <5b8d13221002012036q1095c06cv79923e38d558251e@mail.gmail.com> <1cd32cbb1002012102h720a78cbh5662db7a31bdc35b@mail.gmail.com> <1cd32cbb1002012157r5356f547yd057231ead32b7c@mail.gmail.com> <20100202112221.GA16049@doriath.local> <1cd32cbb1002020553v3c7e9629o7df2b8cd1ed2feb7@mail.gmail.com> Message-ID: <1cd32cbb1002020555s15804616i1634bf5d7fa1975f@mail.gmail.com> On Tue, Feb 2, 2010 at 8:53 AM, wrote: > 2010/2/2 Ernest Adrogu? : >> ?2/02/10 @ 00:01 (-0700), thus spake Charles R Harris: >>> On Mon, Feb 1, 2010 at 10:57 PM, wrote: >>> >>> > On Tue, Feb 2, 2010 at 12:31 AM, Charles R Harris >>> > wrote: >>> > > >>> > > >>> > > On Mon, Feb 1, 2010 at 10:02 PM, wrote: >>> > >> >>> > >> On Mon, Feb 1, 2010 at 11:45 PM, Charles R Harris >>> > >> wrote: >>> > >> > >>> > >> > >>> > >> > On Mon, Feb 1, 2010 at 9:36 PM, David Cournapeau >>> > >> > wrote: >>> > >> >> >>> > >> >> On Tue, Feb 2, 2010 at 1:05 PM, ? wrote: >>> > >> >> >>> > >> >> > I think this could be considered as a correct answer, the count of >>> > >> >> > any >>> > >> >> > integer is zero. >>> > >> >> >>> > >> >> Maybe, but this shape is random - it would be different in different >>> > >> >> conditions, as the length of the returned array is just some random >>> > >> >> memory location. >>> > >> >> >>> > >> >> > >>> > >> >> > Returning an array with one zero, or the empty array or raising an >>> > >> >> > exception? 
I don't see much of a pattern >>> > >> >> >>> > >> >> Since there is no obvious solution, the only rationale for not >>> > raising >>> > >> >> an exception ?I could see is to accommodate often-encountered special >>> > >> >> cases. I find returning [0] more confusing than returning empty >>> > >> >> arrays, though - maybe there is a usecase I don't know about. >>> > >> >> >>> > >> > >>> > >> > In this case I would expect an empty input to be a programming error >>> > and >>> > >> > raising an error to be the right thing. >>> > >> >>> > >> Not necessarily, if you run the bincount over groups in a dataset and >>> > >> your not sure if every group is actually observed. The main question, >>> > >> is whether the user needs or wants to check for empty groups before or >>> > >> after the loop over bincount. >>> > >> >>> > > >>> > > How would they know which bin to check? This seems like an unlikely way >>> > to >>> > > check for an empty input. >>> > >>> > # grade (e.g. SAT) distribution by school and race >>> > for s in schools: >>> > ? ?for r in race: >>> > ? ? ?print s, r, np.bincount(allstudentgrades[(sch==s)*(ra==r)]) >>> > >>> > allwhite schools and allblack schools raise an exception. >>> > >>> > I just made up the story, my first attempt was: all sectors, all >>> > firmsize groups, bincount something, will have empty cells for some >>> > size groups, e.g. nuclear power in family business. >>> > >>> > >>> OK, point taken. What do you think would be the best thing to do? >> >> In my opinion, returning an empty array makes more sense than >> array([0]). An empty arrays means "there are no bins", whereas >> an array of length 1 implies that there is one. > > Since bincount returns sometimes zero count bins, the implication is > not necessarily true. > > But now I'm also in favor of the empty array, as a least surprise > solution, and the user can decide whether, when or how to handle empty > arrays. 
> > > just one more example, before discovering bincount, I used histogram > to count integers > without typo: >>> x=np.arange(5); np.histogram(x[x == 7], bins=np.arange(7+1)) (array([0, 0, 0, 0, 0, 0, 0]), array([0, 1, 2, 3, 4, 5, 6, 7])) >>> x=np.arange(5); np.histogram(x[x == 7], bins=[]) (array([], dtype=int32), array([], dtype=float64)) > > Josef > > >> >> Cheers. >> >> Ernest >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > From bsouthey at gmail.com Tue Feb 2 09:26:38 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 02 Feb 2010 08:26:38 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> Message-ID: <4B68361E.4020801@gmail.com> On 02/02/2010 02:11 AM, David Cournapeau wrote: > Hi, > > This is a follow-up of the discussion about ABI-breakage in Numpy > 1.4.0. To sum it up, it is caused by the new datetime support, and it > seems difficult to fix without removing datetime support altogether > for the 1.4.x series. > > Both Chuck and myself are in favor of removing the datetime altogether > for 1.4.x as a solution. At least in my case, it is mostly justified > by the report from David Huard that the current datetime support is > still a bit too experimental to be useful for people who rely on > binaries. > +1 > Are there any objections ? I know in particular Travis was against it > when it was suggested, but it was not known at that time that the > datetime support would break the ABI (I would have been more strongly > against it if I knew at that time). The alternative is to just signal > a breakage of the ABI in NumPy 1.4.1. 
I would like to solve this issue > ASAP, as it is quite a burden for people who rely on binaries > (changing the ABI would at least generate a useful message instead of > weird failures / crashes). > > cheers, > > David > _______________________________________________ > I think a warning could be provided that the ABI will change in a forthcoming release although there has been a lot of traffic on this already. Bruce From charlesr.harris at gmail.com Tue Feb 2 12:20:44 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 2 Feb 2010 10:20:44 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> Message-ID: On Tue, Feb 2, 2010 at 1:11 AM, David Cournapeau wrote: > Hi, > > This is a follow-up of the discussion about ABI-breakage in Numpy > 1.4.0. To sum it up, it is caused by the new datetime support, and it > seems difficult to fix without removing datetime support altogether > for the 1.4.x series. > > Both Chuck and myself are in favor of removing the datetime altogether > for 1.4.x as a solution. At least in my case, it is mostly justified > by the report from David Huard that the current datetime support is > still a bit too experimental to be useful for people who rely on > binaries. > > Are there any objections ? I know in particular Travis was against it > when it was suggested, but it was not known at that time that the > datetime support would break the ABI (I would have been more strongly > against it if I knew at that time). The alternative is to just signal > a breakage of the ABI in NumPy 1.4.1. I would like to solve this issue > ASAP, as it is quite a burden for people who rely on binaries > (changing the ABI would at least generate a useful message instead of > weird failures / crashes). > > Removal would also fix the Cython problem, no? 
Having a release of Cython that fixes that problem is another argument for making the change in 1.5 rather than 1.4. The pyx files will still need to be reprocessed but at least it will be a one time deal and folks will have warning. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Tue Feb 2 15:20:58 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 2 Feb 2010 12:20:58 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> Message-ID: On Tue, Feb 2, 2010 at 12:11 AM, David Cournapeau wrote: > Both Chuck and myself are in favor of removing the datetime altogether > for 1.4.x as a solution. At least in my case, it is mostly justified > by the report from David Huard that the current datetime support is > still a bit too experimental to be useful for people who rely on > binaries. +1 I know some people at Berkeley looked into using the current datetime code and also felt it was too experimental to use at this point. I also agree with you that it is important to solve this issue ASAP. Thanks for all your hard work with the 1.4 release and your effort in tracking down this problem. Jarrod From peno at telenet.be Tue Feb 2 17:42:11 2010 From: peno at telenet.be (Peter Notebaert) Date: Tue, 2 Feb 2010 23:42:11 +0100 Subject: [Numpy-discussion] Determine if numpy is installed from an extension Message-ID: Hello, I have written a C-extension for python that uses arrays from python, does calculations on them and returns a result on that. I have now also added the possibility to provide numpy arrays. However this is not a requirement. Python arrays (lists) are still allowed also. I check in the C-code which kind of arrays are provided. That all works ok, but I have one problem. 
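The two input paths described above have a straightforward Python-level analogue: guard the numpy import and branch on the input type. The names below are invented for illustration, not taken from the extension in question; the C-level counterpart of the guarded import is attempting PyImport_ImportModule("numpy") before initialising numpy's C API.

```python
# Guarded import: use numpy when present, fall back to plain lists.
try:
    import numpy as np
    HAVE_NUMPY = True
except ImportError:
    np = None
    HAVE_NUMPY = False

def as_float_list(seq):
    """Accept a Python list or, when numpy is available, an ndarray."""
    if HAVE_NUMPY and isinstance(seq, np.ndarray):
        # Flatten so multi-dimensional arrays are handled uniformly.
        return [float(x) for x in seq.ravel()]
    return [float(x) for x in seq]
```

The extension can then run its calculations on the common representation, with numpy support switched on only when the import succeeded.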
In the initialisation function for the extension I call import_array to initialise the numpy library. The problem is that if numpy is not installed on the system, this call prints a message, raises an error, and aborts the rest of the routine. Via the source code of numpy I discovered that import_array is in fact a macro that calls _import_array and checks its return value to give the message. But even if I call _import_array myself and check its return code, after the initialisation routine has finished I still get the message 'No module named numpy.core.multiarray'. How can I test from the extension whether numpy is installed on the system, so that I can disable the numpy functionality and my extension remains usable, just without numpy support? I have searched the manual and documentation, and searched Google, but I have not found an answer to that question. From oliphant at enthought.com Tue Feb 2 19:26:32 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Tue, 2 Feb 2010 19:26:32 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> Message-ID: <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> On Feb 2, 2010, at 3:11 AM, David Cournapeau wrote: > Hi, > > This is a follow-up of the discussion about ABI-breakage in Numpy > 1.4.0. To sum it up, it is caused by the new datetime support, and it > seems difficult to fix without removing datetime support altogether > for the 1.4.x series. > > Both Chuck and myself are in favor of removing the datetime altogether > for 1.4.x as a solution. At least in my case, it is mostly justified > by the report from David Huard that the current datetime support is > still a bit too experimental to be useful for people who rely on > binaries.
> > Are there any objections ? I know in particular Travis was against it > when it was suggested, but it was not known at that time that the > datetime support would break the ABI (I would have been more strongly > against it if I knew at that time). I'm still pretty strongly against it. I was suspicious about claims that we didn't need to change the ABI to add the datetime support. I would have preferred to just change the ABI for NumPy 1.4 rather than try not to in the first place. I think we just signal the breakage in 1.4.1 and move forward. The datetime is useful as a place-holder for data. Math on date-time arrays just doesn't work yet. I don't think removing it is the right approach. It would be better to spend the time on fleshing out the ufuncs and conversion functions for date-time support. -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Tue Feb 2 19:27:57 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Tue, 2 Feb 2010 19:27:57 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B68361E.4020801@gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68361E.4020801@gmail.com> Message-ID: <884A8F92-6513-423F-BF25-1F40C79ABD75@enthought.com> On Feb 2, 2010, at 9:26 AM, Bruce Southey wrote: > On 02/02/2010 02:11 AM, David Cournapeau wrote: >> Hi, >> >> This is a follow-up of the discussion about ABI-breakage in Numpy >> 1.4.0. To sum it up, it is caused by the new datetime support, and it >> seems difficult to fix without removing datetime support altogether >> for the 1.4.x series. >> >> Both Chuck and myself are in favor of removing the datetime >> altogether >> for 1.4.x as a solution. At least in my case, it is mostly justified >> by the report from David Huard that the current datetime support is >> still a bit too experimental to be useful for people who rely on >> binaries. 
>> > > +1 > >> Are there any objections ? I know in particular Travis was against it >> when it was suggested, but it was not known at that time that the >> datetime support would break the ABI (I would have been more strongly >> against it if I knew at that time). The alternative is to just signal >> a breakage of the ABI in NumPy 1.4.1. I would like to solve this >> issue >> ASAP, as it is quite a burden for people who rely on binaries >> (changing the ABI would at least generate a useful message instead of >> weird failures / crashes). >> >> cheers, >> >> David >> _______________________________________________ >> > > I think a warning could be provided that the ABI will change in a > forthcoming release although there has been a lot of traffic on this > already. I think there are too many warnings already. I don't think adding a warning about an upcoming ABI change is very useful. -Travis -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com From david at silveregg.co.jp Tue Feb 2 19:34:55 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 09:34:55 +0900 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: References: Message-ID: <4B68C4AF.4030000@silveregg.co.jp> Peter Notebaert wrote: > How can I test if numpy is installed on the system from the extension so > that I do not active the numpy functionality and that it is still able > to use my extension, but then without numpy support? Is there some reason why you cannot try to import numpy first to check whether it is available ? cheers, David From Chris.Barker at noaa.gov Tue Feb 2 19:41:42 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 02 Feb 2010 16:41:42 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> Message-ID: <4B68C646.6060709@noaa.gov> Travis Oliphant wrote: > I'm still pretty strongly against it. Me too. I was close to posing a note today saying it was fine, but then I sat down with a developer I'm working with, and he happened to mention that he had rebuilt something or other to accommodate the numpy ABI change -- so that cat's out of the bag now anyway. Maybe it should have been called 1.5, but what's the difference, really? > The > datetime is useful as a place-holder for data. Math on date-time arrays > just doesn't work yet. I agree, though not at all complete, it's still nice to have, and IIUC, it's not just about datetime, but also a structure for making other custom data types -- wasn't someone working on a fixed-point type, for instance? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david at silveregg.co.jp Tue Feb 2 20:53:05 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 10:53:05 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> Message-ID: <4B68D701.4030003@silveregg.co.jp> Travis Oliphant wrote: > I think we just signal the breakage in 1.4.1 and move forward. The > datetime is useful as a place-holder for data. Math on date-time arrays > just doesn't work yet. I don't think removing it is the right > approach. It would be better to spend the time on fleshing out the > ufuncs and conversion functions for date-time support. 
Just so that there is no confusion: it is only about removing it for 1.4.x, not about removing datetime altogether. It seems that datetime in 1.4.x has few users, whereas breaking the ABI is a nuisance for many more people. In particular, people who update to NumPy 1.4.0 cannot use scipy or matplotlib unless they build those by themselves as well - we are talking about thousands of people at least, assuming the sourceforge numbers are accurate. More fundamentally though, what is your opinion about the ABI ? Am I right to understand you don't consider it as significant ? David From david at silveregg.co.jp Tue Feb 2 21:00:27 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 11:00:27 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
If we do this at every release of NumPy, we force people to update their code every time they update NumPy. I think this policy is unsustainable because NumPy is used as a basic library by so many other projects. What would you think if you had to recompile every single binary out there every time the libc version is updated ? cheers, David From nmb at wartburg.edu Tue Feb 2 21:23:23 2010 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Tue, 02 Feb 2010 20:23:23 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B68D701.4030003@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> Message-ID: <4B68DE1B.90601@wartburg.edu> On 2010-02-02 19:53 , David Cournapeau wrote: > Travis Oliphant wrote: > >> I think we just signal the breakage in 1.4.1 and move forward. The >> datetime is useful as a place-holder for data. Math on date-time arrays >> just doesn't work yet. I don't think removing it is the right >> approach. It would be better to spend the time on fleshing out the >> ufuncs and conversion functions for date-time support. > > Just so that there is no confusion: it is only about removing it for > 1.4.x, not about removing datetime altogether. It seems that datetime in > 1.4.x has few users, whereas breaking ABI is a nuisance for many more > people. In particular, people who update NumPy 1.4.0 cannot use scipy or > matplotlib unless they build it by themselves as well - we are talking > about thousand of people at least assuming sourceforge numbers are accurate. > > More fundamentally though, what is your opinion about ABI ? Am I right > to understand you don't consider is as significant ? 
In previous discussions about compatibility-breaking, particularly in instances where compatibility has been broken, we have heard vociferous complaints from users on this list about NumPy's inability to maintain compatibility within minor releases. The silent majority of people who just use NumPy and don't follow our development process are not here to express their displeasure. Even the small number of people who are reporting import errors after upgrading their NumPy installations should be an indication to the developers that this *is* in fact a problem. I don't understand Travis's comment that "datetime is just a place-holder for data". We have heard from a number of people that the current state of the datetime work is not sufficiently advanced to be useful for them. What is the place that needs holding here? What difference does it make if that code is simply developed on a branch which will be incorporated into an ABI-breaking x.y release when datetime support is at a useful point in its development? What's the particular benefit for NumPy users or developers in including a half-working feature in a release? If we simply want the feature to start getting exercised by developers, then we should make a long-lived publicly available branch for those who would like to try it out. (Insert distributed version control plug here.) NumPy has become in the past 3-5 years a critical low-level library that supports a large number of Python projects. As a library, the balance between compatibility and new features has to shift in favor of compatibility. This is a change from the days when Travis O. owned the NumPy source tree and features were added at will (and we are all glad that they were added). As a simple user, I vote in favor of considering 1.4.0 as a buggy release of NumPy, removing datetime support (it's just one 4000 line commit, right?) and releasing an ABI compatible 1.4.1. 
That should probably be accompanied by a roadmap hashed out at this year's SciPy conference that takes us up through adding datetime, Python 3 and a possible major rewrite (that will add the indirection necessary to make future ABI breaks unneccessary). -Neil From robert.kern at gmail.com Tue Feb 2 21:31:32 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 2 Feb 2010 20:31:32 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B68DE1B.90601@wartburg.edu> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> Message-ID: <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell wrote: > I don't understand Travis's comment that "datetime is just a > place-holder for data". That's not a direct quote and is a misinterpretation of what he said. In the course of adding the datetime support, we implemented it by adding a general feature to dtype objects such that they can hold arbitrary metadata. This is useful feature for more than just datetime support and should be complete and useful at this time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at silveregg.co.jp Tue Feb 2 22:08:33 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 12:08:33 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> Message-ID: <4B68E8B1.2090705@silveregg.co.jp> Robert Kern wrote: > On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell wrote: >This is a useful feature for more than just datetime > support and should be complete and useful at this time. Couldn't this be kept independently of the datetime support ? At least as far as the PyArray_ArrFuncs is concerned, it's the datetime type which broke the ABI, not the metadata.
Does the metadata support need > anything else besides the metadata pointer in the descriptor structure ? There are a number of other code changes beyond just the pointer, yes, but the PyArray_ArrFuncs is not one of them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmb at wartburg.edu Tue Feb 2 22:19:47 2010 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Tue, 02 Feb 2010 21:19:47 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> Message-ID: <4B68EB53.3020006@wartburg.edu> On 2010-02-02 20:31 , Robert Kern wrote: > On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell wrote: > >> I don't understand Travis's comment that "datetime is just a >> place-holder for data". > > That's not a direct quote and is a misinterpretation of what he said. > In the course of adding the datetime support, we implemented it by > adding a general feature to dtype objects such that they can hold > arbitrary metadata. This is a useful feature for more than just datetime > support and should be complete and useful at this time. My apologies for the misquote. As I said, I did not understand his comment. Thanks for the clarification. -Neil From david at silveregg.co.jp Tue Feb 2 22:23:36 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 12:23:36 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> Message-ID: <4B68EC38.7050602@silveregg.co.jp> Robert Kern wrote: > On Tue, Feb 2, 2010 at 21:08, David Cournapeau wrote: >> Robert Kern wrote: >>> On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell wrote: >>> This is a useful feature for more than just datetime >>> support and should be complete and useful at this time. >> Couldn't this be kept independently of the datetime support ? At least >> as far as the PyArray_ArrFuncs is concerned, it's the datetime type >> which broke the ABI, not the metadata. Does the metadata support need >> anything else besides the metadata pointer in the descriptor structure ? > > There are a number of other code changes beyond just the pointer, yes, > but the PyArray_ArrFuncs is not one of them. Sorry, my question was badly worded: besides the metadata pointer, is there any other change related to the metadata infrastructure which may potentially change the publicly exported structures ? I wonder whether the metadata infrastructure can be kept in 1.4.x independently of the datetime support without breaking the ABI (assuming it makes sense to keep the metadata stuff without datetime support in 1.4.x) cheers, David From david.huard at gmail.com Tue Feb 2 22:29:26 2010 From: david.huard at gmail.com (David Huard) Date: Tue, 2 Feb 2010 22:29:26 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <4B68DE1B.90601@wartburg.edu> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> Message-ID: <91cf711d1002021929r6d00b70dsb98737328ba4a0d8@mail.gmail.com> On Tue, Feb 2, 2010 at 9:23 PM, Neil Martinsen-Burrell wrote: > On 2010-02-02 19:53 , David Cournapeau wrote: >> Travis Oliphant wrote: >> >>> I think we just signal the breakage in 1.4.1 and move forward. The >>> datetime is useful as a place-holder for data. Math on date-time arrays >>> just doesn't work yet. I don't think removing it is the right >>> approach. It would be better to spend the time on fleshing out the >>> ufuncs and conversion functions for date-time support. >> >> Just so that there is no confusion: it is only about removing it for >> 1.4.x, not about removing datetime altogether. It seems that datetime in >> 1.4.x has few users, whereas breaking ABI is a nuisance for many more >> people. In particular, people who update NumPy 1.4.0 cannot use scipy or >> matplotlib unless they build it by themselves as well - we are talking >> about thousands of people at least, assuming sourceforge numbers are accurate. >> >> More fundamentally though, what is your opinion about ABI ? Am I right >> to understand you don't consider it as significant ? > > In previous discussions about compatibility-breaking, particularly in > instances where compatibility has been broken, we have heard vociferous > complaints from users on this list about NumPy's inability to maintain > compatibility within minor releases. The silent majority of people who > just use NumPy and don't follow our development process are not here to > express their displeasure. Even the small number of people who are > reporting import errors after upgrading their NumPy installations should > be an indication to the developers that this *is* in fact a problem.
> > I don't understand Travis's comment that "datetime is just a > place-holder for data". We have heard from a number of people that the > current state of the datetime work is not sufficiently advanced to be > useful for them. I'd like to clarify this bit since I don't think this is accurate. My view is that the state of the datetime code is perfectly acceptable for developers, who are able to get the source, compile the code and react appropriately to the small glitches that inevitably occur with new code. On the other hand, I don't see the documentation and the functionality as ready yet for distribution to a wider audience (read: binary distribution users), who are likely to feel frustrated by compilation and compatibility issues. In that sense, the proposition from David C. seems to strike a nice balance. David What is the place that needs holding here? What > difference does it make if that code is simply developed on a branch > which will be incorporated into an ABI-breaking x.y release when > datetime support is at a useful point in its development? What's the > particular benefit for NumPy users or developers in including a > half-working feature in a release? If we simply want the feature to > start getting exercised by developers, then we should make a long-lived > publicly available branch for those who would like to try it out. > (Insert distributed version control plug here.) > > NumPy has become in the past 3-5 years a critical low-level library that > supports a large number of Python projects. As a library, the balance > between compatibility and new features has to shift in favor of > compatibility. This is a change from the days when Travis O. owned the > NumPy source tree and features were added at will (and we are all glad > that they were added). > > As a simple user, I vote in favor of considering 1.4.0 as a buggy > release of NumPy, removing datetime support (it's just one 4000 line > commit, right?)
and releasing an ABI compatible 1.4.1. That should > probably be accompanied by a roadmap hashed out at this year's SciPy > conference that takes us up through adding datetime, Python 3 and a > possible major rewrite (that will add the indirection necessary to make > future ABI breaks unnecessary). > > -Neil > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From cournape at gmail.com Tue Feb 2 23:46:08 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 3 Feb 2010 13:46:08 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B68EC38.7050602@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> Message-ID: <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau wrote: > > Sorry, my question was badly worded: besides the metadata pointer, is > there any other change related to the metadata infrastructure which may > potentially change the publicly exported structures ? I wonder > whether the metadata infrastructure can be kept in 1.4.x independently > of the datetime support without breaking the ABI FWIW, keeping the metadata pointer, and only removing datetime-related things makes numpy 1.4.x backward compatible, at least as far as scipy is concerned. So it seems the PyArray_Funcs change is the only ABI-incompatible change.
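For illustration, the arbitrary dtype metadata being kept here can be seen from Python along these lines (a sketch: the `metadata` keyword and attribute shown are how later NumPy releases expose the feature, and may not match the 1.4-era spelling):

```python
import numpy as np

# Attach an arbitrary metadata dict to a dtype (the `metadata` keyword
# is how later NumPy releases expose the feature discussed here).
dt = np.dtype(np.float64, metadata={"unit": "seconds"})

# The metadata rides along on the dtype object itself.
print(dt.metadata["unit"])            # -> seconds

# A plain dtype carries no metadata.
print(np.dtype(np.float64).metadata)  # -> None
```

Note that the metadata lives on the descriptor, not the array, which is why only a pointer in the descriptor structure is needed to support it.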
David From robert.kern at gmail.com Tue Feb 2 23:55:09 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 2 Feb 2010 22:55:09 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> Message-ID: <3d375d731002022055g7779f700s76d7e813890843f9@mail.gmail.com> On Tue, Feb 2, 2010 at 22:46, David Cournapeau wrote: > On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau wrote: > >> >> Sorry, my question was badly worded: besides the metadata pointer, is >> there any other change related to the metadata infratructure which may >> potentially change changes the publicly exported structures ? I wonder >> whereas the metadata infrastructure can be kept in 1.4.x independently >> of the datetime support without breaking the ABI > > FWIW, keeping the metadata pointer, and only removing datetime-related > things makes numpy 1.4.x backward compatible, at least as far as scipy > is concerned. So it seems the PyArray_Funcs change is the only > ABI-incompatible change. Except for the Cython bit. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at silveregg.co.jp Wed Feb 3 00:03:33 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 14:03:33 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002022055g7779f700s76d7e813890843f9@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <3d375d731002022055g7779f700s76d7e813890843f9@mail.gmail.com> Message-ID: <4B6903A5.1010006@silveregg.co.jp> Robert Kern wrote: > On Tue, Feb 2, 2010 at 22:46, David Cournapeau wrote: >> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau wrote: >> >>> Sorry, my question was badly worded: besides the metadata pointer, is >>> there any other change related to the metadata infrastructure which may >>> potentially change the publicly exported structures ? I wonder >>> whether the metadata infrastructure can be kept in 1.4.x independently >>> of the datetime support without breaking the ABI >> FWIW, keeping the metadata pointer, and only removing datetime-related >> things makes numpy 1.4.x backward compatible, at least as far as scipy >> is concerned. So it seems the PyArray_Funcs change is the only >> ABI-incompatible change. > > Except for the Cython bit. Yep, but this one is easy to solve now (I regenerated the sources with cython 0.12.1). This means one can release a scipy 0.7.1.1 which works for both numpy 1.3.0 and numpy 1.4.0 instead of having a scipy which works only for numpy 1.4.0 (so that installing the latest numpy, scipy and mpl from binary installers gives a workable environment again) cheers, David From oliphant at enthought.com Wed Feb 3 00:45:42 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 3 Feb 2010 00:45:42 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <4B68D701.4030003@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> Message-ID: <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> On Feb 2, 2010, at 8:53 PM, David Cournapeau wrote: > Travis Oliphant wrote: > >> I think we just signal the breakage in 1.4.1 and move forward. The >> datetime is useful as a place-holder for data. Math on date-time >> arrays >> just doesn't work yet. I don't think removing it is the right >> approach. It would be better to spend the time on fleshing out the >> ufuncs and conversion functions for date-time support. > > Just so that there is no confusion: it is only about removing it for > 1.4.x, not about removing datetime altogether. It seems that > datetime in > 1.4.x has few users, whereas breaking ABI is a nuisance for many more > people. In particular, people who update NumPy 1.4.0 cannot use > scipy or > matplotlib unless they build it by themselves as well - we are talking > about thousands of people at least, assuming sourceforge numbers are > accurate. > > More fundamentally though, what is your opinion about ABI ? Am I right > to understand you don't consider it as significant ? I consider ABI a very significant thing. We should be very accurate about when a re-compile is required. I just don't believe that we should be promising ABI compatibility at .X releases. I never had that intention. I don't remember when it crept into the ethos. The ABI will change at some point. Having it change at 1.X releases seems reasonable (it certainly was my thought when 1.0 came out). Yes, it means distributors of packages that depend on NumPy will have to recompile against the new version, and I can see why some might want to avoid that. Pushing what is really a distribution problem back to the NumPy package to manage separately is not the approach I would take.
In my opinion, we should fix the problems that exist by changing the ABI number of NumPy 1.4.x to accurately reflect that a re-build of NumPy is necessary, and then spend time building SciPy and matplotlib binaries against it. If there is also a desire to make another release of NumPy 1.3.X which removes the date-time additions, but incorporates the other fixes, and somebody wants to spend the time doing that, then great. -Travis From oliphant at enthought.com Wed Feb 3 00:51:05 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 3 Feb 2010 00:51:05 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> Message-ID: <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: > On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau > wrote: > >> >> Sorry, my question was badly worded: besides the metadata pointer, is >> there any other change related to the metadata infratructure which >> may >> potentially change changes the publicly exported structures ? I >> wonder >> whereas the metadata infrastructure can be kept in 1.4.x >> independently >> of the datetime support without breaking the ABI > > FWIW, keeping the metadata pointer, and only removing datetime-related > things makes numpy 1.4.x backward compatible, at least as far as scipy > is concerned. So it seems the PyArray_Funcs change is the only > ABI-incompatible change. What do you mean by the "PyArray_Funcs change"? 
-Travis From david at silveregg.co.jp Wed Feb 3 00:59:56 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 14:59:56 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> Message-ID: <4B6910DC.7070809@silveregg.co.jp> Travis Oliphant wrote: > > On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: > >> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau >> > wrote: >> >>> >>> Sorry, my question was badly worded: besides the metadata pointer, is >>> there any other change related to the metadata infrastructure which may >>> potentially change the publicly exported structures ? I wonder >>> whether the metadata infrastructure can be kept in 1.4.x independently >>> of the datetime support without breaking the ABI >> >> FWIW, keeping the metadata pointer, and only removing datetime-related >> things makes numpy 1.4.x backward compatible, at least as far as scipy >> is concerned. So it seems the PyArray_Funcs change is the only >> ABI-incompatible change. > > What do you mean by the "PyArray_Funcs change"? The change that broke the ABI is in the PyArray_ArrFuncs structure (ndarrayobject.h):

struct {
        PyArray_VectorUnaryFunc *cast[NPY_NTYPES];
        ....

Because NPY_NTYPES is bigger after the datetime change.
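To see why growing that array is fatal for binary compatibility, the layout problem can be reproduced in isolation with `ctypes` (a standalone sketch with illustrative type counts, not NumPy's real headers): every member after the enlarged cast table shifts, so an extension compiled against the old layout reads the wrong memory.

```python
import ctypes

# Stand-ins for NPY_NTYPES before and after a new type is appended
# (illustrative counts only, not NumPy's actual type counts).
OLD_NTYPES, NEW_NTYPES = 21, 22

class OldArrFuncs(ctypes.Structure):
    # cast table sized by the type count, followed by another member
    _fields_ = [("cast", ctypes.c_void_p * OLD_NTYPES),
                ("flags", ctypes.c_int)]

class NewArrFuncs(ctypes.Structure):
    _fields_ = [("cast", ctypes.c_void_p * NEW_NTYPES),
                ("flags", ctypes.c_int)]

# Growing the array shifts the offset of every later member, so code
# compiled against the old struct reads into the cast table when it
# is handed the new struct.
print(OldArrFuncs.flags.offset)   # e.g. 168 on a 64-bit platform
print(NewArrFuncs.flags.offset)   # e.g. 176 on the same platform
print(OldArrFuncs.flags.offset != NewArrFuncs.flags.offset)  # True
```

The same argument applies to the struct's total size: anything that embeds it (rather than holding a pointer to it) also changes layout, which is the indirection point raised earlier in the thread.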
If there is a way to have the datetime type not expand NPY_NTYPES, then I think we can keep the ABI. I tried something with datetimes considered as user types, but did not go very far (most certainly because I have never used this part of the code before). David From charlesr.harris at gmail.com Wed Feb 3 01:30:38 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 2 Feb 2010 23:30:38 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> Message-ID: On Tue, Feb 2, 2010 at 10:45 PM, Travis Oliphant wrote: > > On Feb 2, 2010, at 8:53 PM, David Cournapeau wrote: > > > Travis Oliphant wrote: > > > >> I think we just signal the breakage in 1.4.1 and move forward. The > >> datetime is useful as a place-holder for data. Math on date-time > >> arrays > >> just doesn't work yet. I don't think removing it is the right > >> approach. It would be better to spend the time on fleshing out the > >> ufuncs and conversion functions for date-time support. > > > > Just so that there is no confusion: it is only about removing it for > > 1.4.x, not about removing datetime altogether. It seems that > > datetime in > > 1.4.x has few users, whereas breaking ABI is a nuisance for many more > > people. In particular, people who update NumPy 1.4.0 cannot use > > scipy or > > matplotlib unless they build it by themselves as well - we are talking > > about thousands of people at least, assuming sourceforge numbers are > > accurate. > > > > More fundamentally though, what is your opinion about ABI ? Am I right > > to understand you don't consider it as significant ? > > I consider ABI a very significant thing. We should be very accurate > about when a re-compile is required.
I just don't believe that we > should be promising ABI compatibility at .X releases. I never had > that intention. I don't remember when it crept into the ethos. > > About 1.2, after the discussion at SciPy. The general consensus was that breaking the ABI was a very bad thing, not to be taken lightly. We are currently bumping the .X number about twice a year, which is too frequent to allow changes at each iteration, IMHO. I would think changes to the ABI would be more a two/three year sort of thing and only under the pressure of necessity. At some point we need to do a major refactoring to hide the structures and make it easier to add types, but I don't see that in the near future. I don't think we should add any more types to the current code after datetime goes in, it's just too big a hassle the way things are now. I'm thinking numpy types should basically interface to the c-types, and new types should subclass or build new classes on top of that. That keeps things simple. Chuck From david at silveregg.co.jp Wed Feb 3 02:42:57 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 16:42:57 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> Message-ID: <4B692901.8060709@silveregg.co.jp> Travis Oliphant wrote: > On Feb 2, 2010, at 8:53 PM, David Cournapeau wrote: > >> Travis Oliphant wrote: >> >>> I think we just signal the breakage in 1.4.1 and move forward. The >>> datetime is useful as a place-holder for data. Math on date-time >>> arrays >>> just doesn't work yet. I don't think removing it is the right >>> approach.
It would be better to spend the time on fleshing out the >>> ufuncs and conversion functions for date-time support. >> Just so that there is no confusion: it is only about removing it for >> 1.4.x, not about removing datetime altogether. It seems that >> datetime in >> 1.4.x has few users, whereas breaking ABI is a nuisance for many more >> people. In particular, people who update NumPy 1.4.0 cannot use >> scipy or >> matplotlib unless they build it by themselves as well - we are talking >> about thousand of people at least assuming sourceforge numbers are >> accurate. >> >> More fundamentally though, what is your opinion about ABI ? Am I right >> to understand you don't consider is as significant ? > > I consider ABI a very significant think. We should be very accurate > about when a re-compile is required. I just don't believe that we > should be promising ABI compatibility at .X releases. I never had > that intention. Ok, thanks for clearing that up. > I don't remember when it crept in to the ethos. I don't know when it crept into the NumPy developers, but my own ethos is that that's a very fundamental feature of good libraries. > Yes, it means distributors of packages that depend on NumPy will have > to recompile against the new version, and I can see why some might > want to avoid that. Pushing what is really a distribution problem > back to the NumPy package to manage separately is not the approach I > would take. I don't think it is accurate to see ABI compatibility as a distribution issue. It is mostly an orthogonal issue: it is true that ABI incompatibility complicates distributions, but that's not the main issue. A more important scenario is as follows: let's assume we do allow breaking the ABI every 1.X release, meaning that an ABI incompatible change happens every ~ 6 months at the current pace (using the last 2-3 years as history). Now, let's say I have a package foo which depends on NumPy, and N other packages which also depend on NumPy. 
If any new version of one of those packages needs a new Numpy, you need to rebuild everything. If those other packages depend on other libraries as well which regularly break ABI, you get exponential breakage; the problem is intractable. It is especially hard for packages which may not be easily buildable - I think this is the case for many scientific experiments. I believe this is very detrimental for the whole scipy ecosystem: it is only bearable because only NumPy is doing it. If everybody did the same, it would be impossible to get anything stable. David From laurent.feron at free.fr Wed Feb 3 03:08:39 2010 From: laurent.feron at free.fr (laurent.feron at free.fr) Date: Wed, 3 Feb 2010 09:08:39 +0100 (CET) Subject: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!! In-Reply-To: <1876372911.5495721265184389421.JavaMail.root@zimbra3-e1.priv.proxad.net> Message-ID: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> Hello, if I multiply two matrices, one with a single row and the second one with a single column, I should get a scalar:

>>> line
matrix([[1, 3, 1]])
>>> col
matrix([[2],
        [2],
        [2]])
>>> line*col
matrix([[10]])

Matlab gives me a scalar, Numpy does not... Do you know why? Regards, Laurent From david at silveregg.co.jp Wed Feb 3 03:20:23 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 03 Feb 2010 17:20:23 +0900 Subject: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!!
In-Reply-To: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> References: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> Message-ID: <4B6931C7.7000106@silveregg.co.jp> laurent.feron at free.fr wrote: > Hello, > > if I multiply two matrices, one with a single row and the second one with a single column, I should get a scalar: >
> >>> line
> matrix([[1, 3, 1]])
> >>> col
> matrix([[2],
> [2],
> [2]])
> >>> line*col
> matrix([[10]])
> > Matlab gives me a scalar, Numpy does not... Matlab does not have the concept of a scalar: everything is a matrix (size(1) returns (1, 1) in matlab). NumPy matrix objects are more or less the same: every operation between two matrices returns a matrix. Although it is natural coming from a matlab background, I would advise against using matrices just because they look more familiar. I made the same mistake when I started using NumPy, and getting used to NumPy arrays took me more time. cheers, David From Nikolas.Tezak at gmx.de Wed Feb 3 03:21:17 2010 From: Nikolas.Tezak at gmx.de (Nikolas Tezak) Date: Wed, 3 Feb 2010 09:21:17 +0100 Subject: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!! In-Reply-To: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> References: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> Message-ID: <3AE9C0B3-C0F0-4A92-A3B2-3FEBE983CA6F@gmx.de> Hi Laurent, I'm not sure why this was implemented this way (probably due to numerical precision, because the numpy datatypes offer a greater variety than python), but in any case you can easily convert the result to a float:

>>> float(line*col)
10

(You probably knew this, but well...
:) ) Regards, Nikolas On 03.02.2010, at 09:08, laurent.feron at free.fr wrote: > Hello, > > if I multiply two matrices, one with a single row and the second one > with a single column, I should get a scalar: >
> >>> line
> matrix([[1, 3, 1]])
> >>> col
> matrix([[2],
> [2],
> [2]])
> >>> line*col
> matrix([[10]])
> > Matlab gives me a scalar, Numpy does not... > > Do you know why? > > Regards, > Laurent > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From faltet at pytables.org Wed Feb 3 03:22:23 2010 From: faltet at pytables.org (Francesc Alted) Date: Wed, 3 Feb 2010 09:22:23 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B692901.8060709@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <4B692901.8060709@silveregg.co.jp> Message-ID: <201002030922.23185.faltet@pytables.org> On Wednesday 03 February 2010 08:42:57, David Cournapeau wrote: > > Yes, it means distributors of packages that depend on NumPy will have > > to recompile against the new version, and I can see why some might > > want to avoid that. Pushing what is really a distribution problem > > back to the NumPy package to manage separately is not the approach I > > would take. > > I don't think it is accurate to see ABI compatibility as a distribution > issue. It is mostly an orthogonal issue: it is true that ABI > incompatibility complicates distributions, but that's not the main issue. > > A more important scenario is as follows: let's assume we do allow > breaking the ABI every 1.X release, meaning that an ABI incompatible > change happens every ~ 6 months at the current pace (using the last 2-3 > years as history). Now, let's say I have a package foo which depends on > NumPy, and N other packages which also depend on NumPy.
If any new > version of one of those packages needs a new Numpy, you need to rebuild > everything. If those other packages depend on other libraries as well > which regularly break ABI, you get exponential breakage; the problem is > intractable. It is especially hard for packages which may not be easily > buildable - I think this is the case for many scientific experiments. > > I believe this is very detrimental for the whole scipy ecosystem: it is > only bearable because only NumPy is doing it. If everybody did the same, > it would be impossible to get anything stable. I've been following this discussion with great interest, and I also think that the arguments that favor a stable ABI in NumPy are *very* compelling. So +1 for *not* changing the ABI in .X releases. -- Francesc Alted
From peno at telenet.be Wed Feb 3 03:38:33 2010 From: peno at telenet.be (Peter Notebaert) Date: Wed, 3 Feb 2010 09:38:33 +0100 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: <4B68C4AF.4030000@silveregg.co.jp> References: <4B68C4AF.4030000@silveregg.co.jp> Message-ID: <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> From an extension? How to import numpy from there and then test if that succeeded, and do that without any annoying message if possible... Thanks, Peter On Wed, Feb 3, 2010 at 1:34 AM, David Cournapeau wrote: > Peter Notebaert wrote: > > > How can I test if numpy is installed on the system from the extension so > > that I do not activate the numpy functionality and that it is still able > > to use my extension, but then without numpy support? > > Is there some reason why you cannot try to import numpy first to check > whether it is available? > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From cournape at gmail.com Wed Feb 3 04:41:54 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 3 Feb 2010 18:41:54 +0900 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> References: <4B68C4AF.4030000@silveregg.co.jp> <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> Message-ID: <5b8d13221002030141w1ea60bddnb1cc808a305ce158@mail.gmail.com> On Wed, Feb 3, 2010 at 5:38 PM, Peter Notebaert wrote: > From an extension? How to import numpy from there and then test if that > succeeded, and do that without any annoying message if possible... One obvious solution would be to simply call PyImport_Import, something like: #include <Python.h> PyMODINIT_FUNC initfoo(void) { PyObject *m, *mod; m = Py_InitModule("foo", NULL); if (m == NULL) { return; } mod = PyImport_ImportModule("numpy"); if (mod == NULL) { return; } Py_DECREF(mod); } But I am not sure whether it would cause some issues if you do this and then import the numpy C API (which is mandatory before using any C functions from numpy). I know the python import system has some dark areas, I don't know if that's one of them or not. cheers, David
From markus.proeller at ifm.com Wed Feb 3 07:43:17 2010 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Wed, 3 Feb 2010 13:43:17 +0100 Subject: [Numpy-discussion] numpy.left_shift with negative x2 Message-ID: Hello, the following operation seems strange to me >>> np.left_shift(2,-1) 0 I would have expected a right_shift by one. The documentation on http://docs.scipy.org/doc/numpy/reference/generated/numpy.left_shift.html#numpy.left_shift also says that the operation is equivalent to multiplying x1 by 2**x2. That's not the case! Markus -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at googlemail.com Wed Feb 3 07:58:49 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 3 Feb 2010 20:58:49 +0800 Subject: [Numpy-discussion] numpy.left_shift with negative x2 In-Reply-To: References: Message-ID: On Wed, Feb 3, 2010 at 8:43 PM, wrote: > > Hello, > > the following operation seems strange to me > > >>> np.left_shift(2,-1) > 0 > > I would have expected a right_shift by one. > I wouldn't expect anything, the behavior is simply not defined. Python returns an error: In [17]: 2 << -1 --------------------------------------------------------------------------- ValueError .... > > The documentation on > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.left_shift.html#numpy.left_shift > also says that the operation is equivalent to multiplying x1 by 2**x2. > That's not the case! > The line before that says "Bits are shifted to the left by appending `x2` 0s at the right of `x1`." What does it mean to append a negative number of zeros? The docs could explicitly mention that x2 has to be non-negative, if that would make it clearer. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.proeller at ifm.com Wed Feb 3 08:33:00 2010 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Wed, 3 Feb 2010 14:33:00 +0100 Subject: [Numpy-discussion] Antwort: Re: numpy.left_shift with negative x2 In-Reply-To: Message-ID: >> On Wed, Feb 3, 2010 at 8:43 PM, wrote: >> >> Hello, >> >> the following operation seems strange to me >> >> >>> np.left_shift(2,-1) >> 0 >> >> I would have expected a right_shift by one. > > I wouldn't expect anything, the behavior is simply not defined. But it would prevent a statement like if x2 > 0 then ... else ... Markus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Wed Feb 3 09:30:49 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 3 Feb 2010 22:30:49 +0800 Subject: [Numpy-discussion] Antwort: Re: numpy.left_shift with negative x2 In-Reply-To: References: Message-ID: On Wed, Feb 3, 2010 at 9:33 PM, wrote: > > >> On Wed, Feb 3, 2010 at 8:43 PM, wrote: > >> > >> Hello, > >> > >> the following operation seems strange to me > >> > >> >>> np.left_shift(2,-1) > >> 0 > >> > >> I would have expected a right_shift by one. > > > > I wouldn't expect anything, the behavior is simply not defined. > > But it would prevent a statement like > > if x2 > 0 then > ... > else > ... > > Right now I think the left_shift ufunc calls the Python C API, so it just does the same as Python. Which seems like the right thing to do. If you want a bit_shift function which is a combination of left and right shift, this is straightforward to do right? Something like: In [42]: x2 Out[42]: array([-2, -1, 0, 1, 2]) In [43]: def bit_shift(x1, x2): return np.choose(x2>0, [np.right_shift(x1, -x2), np.left_shift(x1, x2)]) ....: In [45]: bit_shift(2, x2) Out[45]: array([0, 1, 2, 4, 8]) Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Feb 3 09:36:07 2010 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 03 Feb 2010 09:36:07 -0500 Subject: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!! In-Reply-To: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> References: <1082798544.5496231265184519429.JavaMail.root@zimbra3-e1.priv.proxad.net> Message-ID: <4B6989D7.7070905@american.edu> On 2/3/2010 3:08 AM, laurent.feron at free.fr wrote: > if i multiply two matrix, one with a unique line and the second one > with a unique column, i should have a scalar What definition of matrix multiplication is that?? 
If you really want a scalar product, ask for it:: >>> import numpy as np >>> m1 = np.mat('0 1 2') >>> m2 = m1.T >>> np.dot(m1.flat,m2.flat) 5 Alan Isaac
From robert.kern at gmail.com Wed Feb 3 10:22:29 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 3 Feb 2010 09:22:29 -0600 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: <5b8d13221002030141w1ea60bddnb1cc808a305ce158@mail.gmail.com> References: <4B68C4AF.4030000@silveregg.co.jp> <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> <5b8d13221002030141w1ea60bddnb1cc808a305ce158@mail.gmail.com> Message-ID: <3d375d731002030722l7de7e9d8yfaeed15293b73aff@mail.gmail.com> On Wed, Feb 3, 2010 at 03:41, David Cournapeau wrote: > On Wed, Feb 3, 2010 at 5:38 PM, Peter Notebaert wrote: >> From an extension? How to import numpy from there and then test if that >> succeeded, and do that without any annoying message if possible... > > One obvious solution would be to simply call PyImport_Import, something like: > > #include <Python.h> > > PyMODINIT_FUNC initfoo(void) > { > PyObject *m, *mod; > > m = Py_InitModule("foo", NULL); > if (m == NULL) { > return; > } > > mod = PyImport_ImportModule("numpy"); > if (mod == NULL) { > return; > } > Py_DECREF(mod); Or rather, to recover from the failed import as the OP wants to do: mod = PyImport_ImportModule("numpy"); if (mod == NULL) { /* Clear the error state since we are handling the error. */ PyErr_Clear(); /* ... set up for the sans-numpy case. */ } else { Py_DECREF(mod); import_array(); /* ... set up for the with-numpy case. */ } -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco
From peno at telenet.be Wed Feb 3 10:57:35 2010 From: peno at telenet.be (Peter Notebaert) Date: Wed, 3 Feb 2010 16:57:35 +0100 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: <3d375d731002030722l7de7e9d8yfaeed15293b73aff@mail.gmail.com> References: <4B68C4AF.4030000@silveregg.co.jp> <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> <5b8d13221002030141w1ea60bddnb1cc808a305ce158@mail.gmail.com> <3d375d731002030722l7de7e9d8yfaeed15293b73aff@mail.gmail.com> Message-ID: <32a250a51002030757u72d59be3ya47e6b069b5b7080@mail.gmail.com> Ah, that is maybe the idea: if (_import_array() < 0) { /* Clear the error state since we are handling the error. */ PyErr_Clear(); /* ... set up for the sans-numpy case. */ } else { /* ... set up for the with-numpy case. */ } I did not call PyErr_Clear() when _import_array() < 0, so the error is probably still pending and then reported later. I will try this this evening. Thank you for the hints. Peter On Wed, Feb 3, 2010 at 4:22 PM, Robert Kern wrote: > On Wed, Feb 3, 2010 at 03:41, David Cournapeau wrote: > > On Wed, Feb 3, 2010 at 5:38 PM, Peter Notebaert wrote: > >> From an extension? How to import numpy from there and then test if that > >> succeeded, and do that without any annoying message if possible... > > > > One obvious solution would be to simply call PyImport_Import, something > like: > > > > #include <Python.h> > > > > PyMODINIT_FUNC initfoo(void) > > { > > PyObject *m, *mod; > > > > m = Py_InitModule("foo", NULL); > > if (m == NULL) { > > return; > > } > > > > mod = PyImport_ImportModule("numpy"); > > if (mod == NULL) { > > return; > > } > > Py_DECREF(mod); > > Or rather, to recover from the failed import as the OP wants to do: > > mod = PyImport_ImportModule("numpy"); > if (mod == NULL) { > /* Clear the error state since we are handling the error. */ > PyErr_Clear(); > /* ... set up for the sans-numpy case. */ > } > else { > Py_DECREF(mod); > import_array(); > /* ...
set up for the with-numpy case. */ > } > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From charlesr.harris at gmail.com Wed Feb 3 12:09:37 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 3 Feb 2010 10:09:37 -0700 Subject: [Numpy-discussion] numpy.left_shift with negative x2 In-Reply-To: References: Message-ID: On Wed, Feb 3, 2010 at 5:43 AM, wrote: > > Hello, > > the following operation seems strange to me > > >>> np.left_shift(2,-1) > 0 > > I would have expected a right_shift by one. > > The result of a shift by a negative number is undefined in the C language; the gcc compiler will issue a warning if it can determine that that is the case. Even so, the result in your example is normally 1. There is something else going on: In [26]: x = array([2]) In [27]: x << -2 Out[27]: array([-9223372036854775808]) In [28]: x << 62 Out[28]: array([-9223372036854775808]) In [29]: x << 63 Out[29]: array([0]) In [30]: x << 64 Out[30]: array([2]) This is for 64-bit integers. Looks almost like the shift is taken mod 64, which is a bit weird. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Chris.Barker at noaa.gov Wed Feb 3 12:16:33 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 03 Feb 2010 09:16:33 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <4B68D701.4030003@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> Message-ID: <4B69AF71.8080006@noaa.gov> David Cournapeau wrote: > Just so that there is no confusion: it is only about removing it for > 1.4.x, not about removing datetime altogether. It seems that datetime in > 1.4.x has few users, Of course it has few users -- it's brand new! > whereas breaking ABI is a nuisance for many more > people. In particular, people who update NumPy 1.4.0 cannot use scipy or > matplotlib unless they build it by themselves as well - we are talking > about thousands of people at least, assuming sourceforge numbers are accurate. Is it out of the question to make new builds of those? Anyway, ABI breakage will happen once in a while -- is it worse to do it now than any other time? Do people that don't want (or can't) upgrade scipy/mpl/whatever HAVE to upgrade numpy? For my part - I tried 1.4, found it broke a few things, so I downgraded. Then a bit later, we decided we needed to build a few things anyway, so have now gone to 1.4 and rebuilt scipy, and our own Cython extensions. Changing it back means that I'd have to do that again -- not a huge deal, but that's what I meant by "the cat's out of the bag" -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov
From josef.pktd at gmail.com Wed Feb 3 12:46:51 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 3 Feb 2010 12:46:51 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <4B69AF71.8080006@noaa.gov> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B69AF71.8080006@noaa.gov> Message-ID: <1cd32cbb1002030946i24477c5cn2f9baf7858ba50@mail.gmail.com> On Wed, Feb 3, 2010 at 12:16 PM, Christopher Barker wrote: > David Cournapeau wrote: >> Just so that there is no confusion: it is only about removing it for >> 1.4.x, not about removing datetime altogether. It seems that datetime in >> 1.4.x has few users, > > Of course it has few users -- it's brand new! > >> whereas breaking ABI is a nuisance for many more >> people. In particular, people who update NumPy 1.4.0 cannot use scipy or >> matplotlib unless they build it by themselves as well - we are talking >> about thousands of people at least, assuming sourceforge numbers are accurate. > > Is it out of the question to make new builds of those? > > Anyway, ABI breakage will happen once in a while -- is it worse to do it > now than any other time? > > Do people that don't want (or can't) upgrade scipy/mpl/whatever HAVE to > upgrade numpy? For example numpy 1.4 has improved nan-handling, which larry (a soon-to-be-released labeled array) relies on. I just asked Keith not to make numpy 1.4 a hard requirement because I'm recommending to windows users not to upgrade numpy. And we still don't have a warning on the front page. (The recommendation on the pymvpa list was to switch to Linux after a user tried numpy 1.4 with scipy 0.7.) > > For my part - I tried 1.4, found it broke a few things, so I downgraded. > Then a bit later, we decided we needed to build a few things anyway, so > have now gone to 1.4 and rebuilt scipy, and our own Cython extensions.
> > Changing it back means that I'd have to do that again -- not a huge > deal, but that's what I meant by "the cat's out of the bag" I also thought that if some packages start to be released for numpy 1.4.0, then it will be again messy if 1.4.1 is binary incompatible. However, as David said, the main problem is the large number of version combinations for which binary distributions should be made available: several versions of python, numpy, platforms. numpy just doubled the requirement. numpy is (or has become) too important for the entire ecosystem to accidentally break binary compatibility without warning and preparation. Josef > > -Chris > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion >
From robert.kern at gmail.com Wed Feb 3 12:58:43 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 3 Feb 2010 11:58:43 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> Message-ID: <3d375d731002030958q680c7fe2v9da4fcd29957c5ce@mail.gmail.com> On Tue, Feb 2, 2010 at 23:45, Travis Oliphant wrote: > I consider ABI a very significant thing. We should be very accurate > about when a re-compile is required. I just don't believe that we > should be promising ABI compatibility at .X releases. I never had > that intention. I don't remember when it crept into the ethos.
Please refer to your(!) message "Report from SciPy" dated 2008-08-23: """ Robert K, Chuck H, Stefan VdW, Jarrod M, David C, and I had a nice discussion about the future directions of NumPy. We resolved some things and would like community feedback on them if there are opinions. * we will be moving to time-based releases (at least 2 times a year -- November / May) with major changes not accepted about 4 weeks before the release. * The releases will be numbered major.minor.bugfix * There will be no ABI changes in minor releases * There will be no API changes in bugfix releases """ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From kwgoodman at gmail.com Wed Feb 3 16:24:29 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 3 Feb 2010 13:24:29 -0800 Subject: [Numpy-discussion] [ANN] New package for manipulating labeled arrays Message-ID: I am pleased to announce the first release of the la package, version 0.1. The main class of the la package is a labeled array, larry. A larry consists of a data array and a label list. The data array is stored as a NumPy array and the label list as a list of lists. larry has built-in methods such as movingsum, ranking, merge, shuffle, zscore, demean, lag as well as typical Numpy methods like sum, max, std, sign, clip. NaNs are treated as missing data. Alignment by label is automatic when you add (or subtract, multiply, divide) two larrys. larry adds the convenience of labels, provides many built-in methods, and lets you use your existing array functions.
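[Editor's note: the automatic label alignment described above can be sketched with a small pure-Python/NumPy toy. This is not la's actual implementation or API; the function name and the intersection semantics below are assumptions made purely for illustration.]

```python
import numpy as np

def align_add(data1, labels1, data2, labels2):
    """Toy label-aligned addition: keep only the labels present in both
    inputs, look up each side's values by label, then add them."""
    common = [lab for lab in labels1 if lab in labels2]
    idx1 = [labels1.index(lab) for lab in common]
    idx2 = [labels2.index(lab) for lab in common]
    return np.asarray(data1)[idx1] + np.asarray(data2)[idx2], common

# 'b' and 'c' are the shared labels, so only those positions are added,
# regardless of the order the labels appear in on each side.
total, labels = align_add([1.0, 2.0, 3.0], ['a', 'b', 'c'],
                          [10.0, 20.0], ['c', 'b'])
print(labels)  # ['b', 'c']
print(total)   # [22. 13.]
```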
Download: https://launchpad.net/larry/+download docs http://larry.sourceforge.net code https://launchpad.net/larry list http://groups.google.ca/group/pystatsmodels
From peno at telenet.be Wed Feb 3 17:14:06 2010 From: peno at telenet.be (Peter Notebaert) Date: Wed, 3 Feb 2010 23:14:06 +0100 Subject: [Numpy-discussion] Determine if numpy is installed from an extension In-Reply-To: <32a250a51002030757u72d59be3ya47e6b069b5b7080@mail.gmail.com> References: <4B68C4AF.4030000@silveregg.co.jp> <32a250a51002030038t2be29532ybeeda4b69eed29d3@mail.gmail.com> <5b8d13221002030141w1ea60bddnb1cc808a305ce158@mail.gmail.com> <3d375d731002030722l7de7e9d8yfaeed15293b73aff@mail.gmail.com> <32a250a51002030757u72d59be3ya47e6b069b5b7080@mail.gmail.com> Message-ID: <4ABC1E7DCA4F43499D1F2DC78D13A44B@penohomevista> Just to inform you that my approach works: if (_import_array() < 0) { /* Clear the error state since we are handling the error. */ PyErr_Clear(); /* ... set up for the sans-numpy case. */ } else { /* ... set up for the with-numpy case. */ } It is based on Robert's idea to call PyImport_ImportModule("numpy"), check if that succeeded, and clean up. In fact, _import_array() is doing this.
The code of _import_array() is in the header file __multiarray_api.h in the numpy folder of the distributed files: static int _import_array(void) { int st; PyObject *numpy = PyImport_ImportModule("numpy.core.multiarray"); PyObject *c_api = NULL; if (numpy == NULL) return -1; c_api = PyObject_GetAttrString(numpy, "_ARRAY_API"); if (c_api == NULL) {Py_DECREF(numpy); return -1;} if (PyCObject_Check(c_api)) { PyArray_API = (void **)PyCObject_AsVoidPtr(c_api); } Py_DECREF(c_api); Py_DECREF(numpy); if (PyArray_API == NULL) return -1; /* Perform runtime check of C API version */ if (NPY_VERSION != PyArray_GetNDArrayCVersion()) { PyErr_Format(PyExc_RuntimeError, "module compiled against "\ "ABI version %x but this version of numpy is %x", \ (int) NPY_VERSION, (int) PyArray_GetNDArrayCVersion()); return -1; } if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) { PyErr_Format(PyExc_RuntimeError, "module compiled against "\ "API version %x but this version of numpy is %x", \ (int) NPY_FEATURE_VERSION, (int) PyArray_GetNDArrayCFeatureVersion()); return -1; } /* * Perform runtime check of endianness and check it matches the one set by * the headers (npy_endian.h) as a safeguard */ st = PyArray_GetEndianness(); if (st == NPY_CPU_UNKNOWN_ENDIAN) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as unknown endian"); return -1; } #if NPY_BYTE_ORDER == NPY_BIG_ENDIAN if (st != NPY_CPU_BIG) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ "big endian, but detected different endianness at runtime"); return -1; } #elif NPY_BYTE_ORDER == NPY_LITTLE_ENDIAN if (st != NPY_CPU_LITTLE) { PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ "little endian, but detected different endianness at runtime"); return -1; } #endif return 0; } As you can see, this routine does the same thing at the beginning, with additional tests, and the return value indicates whether it succeeded or not. So I only had to call PyErr_Clear(); when it fails, and the problem is solved.
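[Editor's note: for readers working at the Python level rather than in a C extension, the optional-dependency pattern discussed in this thread has a well-known Python analogue, sketched below. This is an editorial aside, not code from the thread.]

```python
# Detect numpy at import time and remember the result, instead of
# failing hard when it is missing.
try:
    import numpy as np
    HAVE_NUMPY = True
except ImportError:
    np = None
    HAVE_NUMPY = False

def double(values):
    """Use numpy when it is available, fall back to plain lists otherwise."""
    if HAVE_NUMPY:
        return (np.asarray(values) * 2).tolist()
    return [v * 2 for v in values]

print(double([1, 2, 3]))  # [2, 4, 6]
```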
Thanks for your input. Peter From: Peter Notebaert Sent: Wednesday, February 03, 2010 16:57 To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Determine if numpy is installed from an extension Ah, that is maybe the idea: if (_import_array() < 0) { /* Clear the error state since we are handling the error. */ PyErr_Clear(); /* ... set up for the sans-numpy case. */ } else { /* ... set up for the with-numpy case. */ } I did not call PyErr_Clear() when _import_array() < 0, so the error is probably still pending and then reported later. I will try this this evening. Thank you for the hints. Peter On Wed, Feb 3, 2010 at 4:22 PM, Robert Kern wrote: On Wed, Feb 3, 2010 at 03:41, David Cournapeau wrote: > On Wed, Feb 3, 2010 at 5:38 PM, Peter Notebaert wrote: >> From an extension? How to import numpy from there and then test if that >> succeeded, and do that without any annoying message if possible... > > One obvious solution would be to simply call PyImport_Import, something like: > > #include <Python.h> > > PyMODINIT_FUNC initfoo(void) > { > PyObject *m, *mod; > > m = Py_InitModule("foo", NULL); > if (m == NULL) { > return; > } > > mod = PyImport_ImportModule("numpy"); > if (mod == NULL) { > return; > } > Py_DECREF(mod); Or rather, to recover from the failed import as the OP wants to do: mod = PyImport_ImportModule("numpy"); if (mod == NULL) { /* Clear the error state since we are handling the error. */ PyErr_Clear(); /* ... set up for the sans-numpy case. */ } else { Py_DECREF(mod); import_array(); /* ... set up for the with-numpy case. */ } -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
From laurent.feron at free.fr Wed Feb 3 17:31:45 2010 From: laurent.feron at free.fr (laurent.feron at free.fr) Date: Wed, 3 Feb 2010 23:31:45 +0100 (CET) Subject: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!! In-Reply-To: <4B6989D7.7070905@american.edu> Message-ID: <601987581.5753401265236305321.JavaMail.root@zimbra3-e1.priv.proxad.net> Thanks all for your replies... Yes, I am starting with numpy. I am translating a DTW algorithm for voice recognition from matlab to python. Rgds, Laurent ----- Original Mail ----- From: "Alan G Isaac" To: "Discussion of Numerical Python" Sent: Wednesday, 3 February 2010 15:36:07 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna Subject: Re: [Numpy-discussion] multiply a lign matrix with a column matrix should return a scalar( matlab yes, numpy no)!!! On 2/3/2010 3:08 AM, laurent.feron at free.fr wrote: > if i multiply two matrix, one with a unique line and the second one > with a unique column, i should have a scalar What definition of matrix multiplication is that??
If you really want a scalar product, ask for it:: >>> import numpy as np >>> m1 = np.mat('0 1 2') >>> m2 = m1.T >>> np.dot(m1.flat,m2.flat) 5 Alan Isaac _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion From cournape at gmail.com Wed Feb 3 17:45:29 2010 From: cournape at gmail.com (David Cournapeau) Date: Thu, 4 Feb 2010 07:45:29 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B69AF71.8080006@noaa.gov> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B69AF71.8080006@noaa.gov> Message-ID: <5b8d13221002031445t77b58c9ak33ae8f3818001875@mail.gmail.com> On Thu, Feb 4, 2010 at 2:16 AM, Christopher Barker wrote: > David Cournapeau wrote: >> Just so that there is no confusion: it is only about removing it for >> 1.4.x, not about removing datetime altogether. It seems that datetime in >> 1.4.x has few users, > > Of course it has few users -- it's brand new! Yes, but that's my point: removing it has low impact, whereas breaking ABI has a big impact. > >> whereas breaking ABI is a nuisance for many more >> people. In particular, people who update NumPy 1.4.0 cannot use scipy or >> matplotlib unless they build it by themselves as well - we are talking >> about thousand of people at least assuming sourceforge numbers are accurate. > > Is it out of the question to make new builds of those? But making new builds of those means that people will *have* to upgrade NumPy if they want to use those builds. Or we would have to keep different binaries for different versions of numpy. I think this is insane. Nobody in their mind would do this. > > Anyway, ABI breakage will happen once in a while > My point is that it should not happen once in a while, only very rarely, and after big consideration. 
It has tremendous cost for many people: look at how many messages related to this we had on the ML in the last few days. It is the atlas problem all over again, it makes us look bad on "user-friendly platforms". I don't know any good library which breaks ABI "once in a while" where once in a while means several times a year. > > For my part - I tried 1.4, found it broke a few things, so I downgraded. > Then a bit later, we decided we needed to build a few things anyway, so > have now gone to 1.4 and rebuilt scipy, and out own Cython extensions. Now think about users who cannot build their own extensions - I am ready to bet we lose users for good every time this happens. cheers, David From oliphant at enthought.com Thu Feb 4 01:46:17 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 4 Feb 2010 00:46:17 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B692901.8060709@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <4B692901.8060709@silveregg.co.jp> Message-ID: <09B517EB-C996-4092-8763-12C1ECE9349F@enthought.com> > > > > A more important scenario is as follows: let's assume we do allow > breaking the ABI every 1.X release, meaning that an ABI incompatible > change happens every ~ 6 months at the current pace (using the last > 2-3 > years as history). If the issue is having too many releases that are .X releases, then let's just slow that down. We are going to have to be able to break ABI compatibility at some point. I agree it should not be taken lightly. But, we have to allow it to happen. For example, there has been a change I've wanted to see in the NumPy data structure ever since 1.0 that I did not make precisely to avoid breaking ABI compatibility. 
The 'hasobject' field in the PyArray_Descr structure is too small and should be renamed. There is a comment in the code stating that this field needs to change as soon as we are willing to break ABI compatibility (and the field still hasn't changed). The comment is still there. Obviously I have been cautious about ABI compatibility. I just never had the opinion that we would *never* change the ABI. I don't think there is any disagreement in the general idea that the ABI should remain stable for a long time. I think the problem is that in this particular instance, we had different opinions about the importance of ABI compatibility for the 1.4 release. I did not think it was possible, and was surprised when it was attempted. I should have voiced those concerns more loudly. What about the idea of making a 1.3.1 release that maintains ABI compatibility with previous releases. This would basically allow for 1.X releases where .X is even to break ABI compatibility (not saying they always will, but might). The odd releases never do. I will help make the 1.3.1 release if this is an agreeable solution. This pattern would certainly help create stability while still allowing change to happen in a reasonable way. -Travis -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Thu Feb 4 01:48:41 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 4 Feb 2010 00:48:41 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B6910DC.7070809@silveregg.co.jp>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp>
Message-ID: <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com>

On Feb 2, 2010, at 11:59 PM, David Cournapeau wrote:

> Travis Oliphant wrote:
>>
>> On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote:
>>
>>> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau wrote:
>>>
>>>> Sorry, my question was badly worded: besides the metadata pointer, is
>>>> there any other change related to the metadata infrastructure which may
>>>> potentially change the publicly exported structures? I wonder
>>>> whether the metadata infrastructure can be kept in 1.4.x independently
>>>> of the datetime support without breaking the ABI.
>>>
>>> FWIW, keeping the metadata pointer, and only removing datetime-related
>>> things makes numpy 1.4.x backward compatible, at least as far as scipy
>>> is concerned. So it seems the PyArray_Funcs change is the only
>>> ABI-incompatible change.
>>
>> What do you mean by the "PyArray_Funcs change"?
>
> The change that broke the ABI is in the PyArray_Funcs structure
> (ndarrayobject.h):
>
>     struct {
>         PyArray_VectorUnaryFunc *cast[NPY_NTYPES];
>         ....
>
> Because NPY_NTYPES is bigger after the datetime change.
>
> If there is a way to have the datetime not expanding NPY_NTYPES, then I
> think we can keep the ABI.
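[Editorial note: the quoted struct is the whole story — enlarging the `cast` array shifts the offset of every member laid out after it, so an extension compiled against the old header reads the wrong slots. A minimal sketch with ctypes; the type counts and the `getitem` member below are illustrative stand-ins, not the real NumPy definitions.]

```python
import ctypes

# Hypothetical stand-ins for NPY_NTYPES before/after the datetime types.
OLD_NTYPES = 21
NEW_NTYPES = 23

class OldFuncs(ctypes.Structure):
    # cast table followed by some other function pointer, as in PyArray_Funcs
    _fields_ = [("cast", ctypes.c_void_p * OLD_NTYPES),
                ("getitem", ctypes.c_void_p)]

class NewFuncs(ctypes.Structure):
    _fields_ = [("cast", ctypes.c_void_p * NEW_NTYPES),
                ("getitem", ctypes.c_void_p)]

# Every member after `cast` moves; an old binary addressing `getitem` at
# the old offset now lands inside the enlarged cast table.
print(OldFuncs.getitem.offset, NewFuncs.getitem.offset)
```

An extension compiled against the old header bakes the old offsets into its machine code, so it keeps "working" while silently calling through the wrong function pointers — which is exactly why this counts as an ABI break even though the source API is unchanged.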
I tried something with datetimes considered > as user types, but did not go very far (most certainly because I have > never used this part of the code before). Thanks for reminding me what the ABI problem is. Yes, that will break it (I was very suspicious that we could change the number of basic types without ABI consequence but didn't have time to think about the real problem). My intention in adding the datetime data-type was not to try and preserve ABI in the process. -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Feb 4 01:50:06 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 3 Feb 2010 23:50:06 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <09B517EB-C996-4092-8763-12C1ECE9349F@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <4B692901.8060709@silveregg.co.jp> <09B517EB-C996-4092-8763-12C1ECE9349F@enthought.com> Message-ID: On Wed, Feb 3, 2010 at 11:46 PM, Travis Oliphant wrote: > > > > > A more important scenario is as follows: let's assume we do allow > breaking the ABI every 1.X release, meaning that an ABI incompatible > change happens every ~ 6 months at the current pace (using the last 2-3 > years as history). > > > If the issue is having too many releases that are .X releases, then let's > just slow that down. We are going to have to be able to break ABI > compatibility at some point. I agree it should not be taken lightly. > But, we have to allow it to happen. > > For example, there has been a change I've wanted to see in the NumPy data > structure ever since 1.0 that I did not make precisely to avoid breaking ABI > compatibility. The 'hasobject' field in the PyArray_Descr structure is > too small and should be renamed. 
There is a comment in the code stating > that this field needs to change as soon as we are willing to break ABI > compatibility (and the field still hasn't changed). The comment is still > there. Obviously I have been cautious about ABI compatibility. I just > never had the opinion that we would *never* change the ABI. > > I don't think there is any disagreement in the general idea that the ABI > should remain stable for a long time. I think the problem is that in this > particular instance, we had different opinions about the importance of ABI > compatibility for the 1.4 release. I did not think it was possible, and > was surprised when it was attempted. I should have voiced those concerns > more loudly. > > What about the idea of making a 1.3.1 release that maintains ABI > compatibility with previous releases. This would basically allow for 1.X > releases where .X is even to break ABI compatibility (not saying they always > will, but might). The odd releases never do. > > I will help make the 1.3.1 release if this is an agreeable solution. This > pattern would certainly help create stability while still allowing change to > happen in a reasonable way. > > 1.3.1, 1.4.1, what's the difference? 1.4 is already out and causing trouble. I don't see how another four months waiting for the datetime release is a killer and it is still in the trunk. Why does it have to be in 1.4? Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Feb 4 01:59:39 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 3 Feb 2010 23:59:39 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> Message-ID: On Wed, Feb 3, 2010 at 11:48 PM, Travis Oliphant wrote: > > On Feb 2, 2010, at 11:59 PM, David Cournapeau wrote: > > Travis Oliphant wrote: > > > On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: > > > On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau > > >> > wrote: > > > > Sorry, my question was badly worded: besides the metadata pointer, is > > there any other change related to the metadata infratructure which may > > potentially change changes the publicly exported structures ? I wonder > > whereas the metadata infrastructure can be kept in 1.4.x independently > > of the datetime support without breaking the ABI > > > FWIW, keeping the metadata pointer, and only removing datetime-related > > things makes numpy 1.4.x backward compatible, at least as far as scipy > > is concerned. So it seems the PyArray_Funcs change is the only > > ABI-incompatible change. > > > What do you mean by the "PyArray_Funcs change"? > > > The change that broke the ABI is in the PyArray_Funcs structure > (ndarrayobject.h): > > struct { > PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; > .... > > Because NPY_NTYPES is bigger after the datetime change. > > If there is a way to have the datetime not expanding NPY_NTYPES, then I > think we can keep the ABI. I tried something with datetimes considered > as user types, but did not go very far (most certainly because I have > never used this part of the code before). 
> > > Thanks for reminding me what the ABI problem is. Yes, that will break it > (I was very suspicious that we could change the number of basic types > without ABI consequence but didn't have time to think about the real > problem). > > My intention in adding the datetime data-type was not to try and preserve > ABI in the process. > > If so, then it would have been better to have been upfront about that when it went in. I know I pushed for inclusion, but I was told that the ABI could be preserved. We've all been surprised by unforeseen bugs, accidents happen. The question is what is the most graceful way out. I think we should follow David's lead here as he is the current release guy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Thu Feb 4 02:11:31 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 4 Feb 2010 01:11:31 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002030958q680c7fe2v9da4fcd29957c5ce@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <3d375d731002030958q680c7fe2v9da4fcd29957c5ce@mail.gmail.com> Message-ID: On Feb 3, 2010, at 11:58 AM, Robert Kern wrote: > On Tue, Feb 2, 2010 at 23:45, Travis Oliphant > wrote: > >> I consider ABI a very significant think. We should be very accurate >> about when a re-compile is required. I just don't believe that we >> should be promising ABI compatibility at .X releases. I never had >> that intention. I don't remember when it crept in to the ethos. > > Please refer to your(!) message "Report from SciPy" dated 2008-08-23: > > """ > Robert K, Chuck H, Stefan VdW, Jarrod M, David C, and I had a nice > discussion about the future directions of NumPy. 
We resolved some > things and would like community feedback on them if there are > opinions. > > * we will be moving to time-based releases (at least 2 times a year -- > November / May) with major changes not accepted about 4 weeks before > the > release. > * The releases will be numbered major.minor.bugfix > * There will be no ABI changes in minor releases > * There will be no API changes in bugfix releases > """ Ah, yes. Thanks. I forgot about that report. It sounds like we haven't broken the ABI since that time then, right? How often have we broken ABI? If we haven't broken it since that report, then we have had a 18 months of ABI stability. That's a bit different of a picture than David is painting. Here is the situation as I see it: * date-time support can't be added without breaking the ABI. * we have already released a version of NumPy that breaks the ABI (i.e. the 'cat is out of the bag') I think we are all in agreement that we should make a release that has ABI compatibility with previous releases but keeps all the other changes that it can. The only question left is what release number to give it: 1.4.1 or 1.3.1 or something else? There are down-sides to any choice we make, but I would argue that if we choose something like 1.3.1 (or maybe 1.3.9) we can promise no ABI breakage in odd releases and use this as an "experience" to re-enforce the memory of that commitment. I will remove the date-time changes for the ABI-compatible release. -Travis From charlesr.harris at gmail.com Thu Feb 4 02:21:21 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 4 Feb 2010 00:21:21 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <3d375d731002030958q680c7fe2v9da4fcd29957c5ce@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 12:11 AM, Travis Oliphant wrote: > > On Feb 3, 2010, at 11:58 AM, Robert Kern wrote: > > > On Tue, Feb 2, 2010 at 23:45, Travis Oliphant > > wrote: > > > >> I consider ABI a very significant think. We should be very accurate > >> about when a re-compile is required. I just don't believe that we > >> should be promising ABI compatibility at .X releases. I never had > >> that intention. I don't remember when it crept in to the ethos. > > > > Please refer to your(!) message "Report from SciPy" dated 2008-08-23: > > > > """ > > Robert K, Chuck H, Stefan VdW, Jarrod M, David C, and I had a nice > > discussion about the future directions of NumPy. We resolved some > > things and would like community feedback on them if there are > > opinions. > > > > * we will be moving to time-based releases (at least 2 times a year -- > > November / May) with major changes not accepted about 4 weeks before > > the > > release. > > * The releases will be numbered major.minor.bugfix > > * There will be no ABI changes in minor releases > > * There will be no API changes in bugfix releases > > """ > > Ah, yes. Thanks. I forgot about that report. It sounds like we > haven't broken the ABI since that time then, right? How often have > we broken ABI? If we haven't broken it since that report, then we > have had a 18 months of ABI stability. That's a bit different of a > picture than David is painting. > > Here is the situation as I see it: > * date-time support can't be added without breaking the ABI. > * we have already released a version of NumPy that breaks the ABI > (i.e. 
the 'cat is out of the bag') > > I think we are all in agreement that we should make a release that has > ABI compatibility with previous releases but keeps all the other > changes that it can. > > The only question left is what release number to give it: 1.4.1 or > 1.3.1 or something else? There are down-sides to any choice we make, > 1.3.1 would be a bugfix release, 1.4 has new features and 1.4.1 really *would* be a bug fix release. > but I would argue that if we choose something like 1.3.1 (or maybe > 1.3.9) we can promise no ABI breakage in odd releases and use this as > an "experience" to re-enforce the memory of that commitment. > > That's kludgy. As an example of the problems that come with frequent ABI changes, I present Python itself. It's a pain to keep up. Note that Guido has declared "No more", for Python 3.x. The message did reach the top of the mountain. I say we plan an ABI breaking release and maybe add the extra space for hasobject at the same time. It seems the time has come to bite that bullet, but let us do it on purpose. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Thu Feb 4 02:37:09 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 4 Feb 2010 01:37:09 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68DE1B.90601@wartburg.edu> <3d375d731002021831k1b0c1e31u184da411774c2c76@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> Message-ID: <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> On Feb 4, 2010, at 12:59 AM, Charles R Harris wrote: > > > On Wed, Feb 3, 2010 at 11:48 PM, Travis Oliphant > wrote: > > On Feb 2, 2010, at 11:59 PM, David Cournapeau wrote: > >> Travis Oliphant wrote: >>> >>> On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: >>> >>>> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau >>>> > wrote: >>>> >>>>> >>>>> Sorry, my question was badly worded: besides the metadata >>>>> pointer, is >>>>> there any other change related to the metadata infratructure >>>>> which may >>>>> potentially change changes the publicly exported structures ? I >>>>> wonder >>>>> whereas the metadata infrastructure can be kept in 1.4.x >>>>> independently >>>>> of the datetime support without breaking the ABI >>>> >>>> FWIW, keeping the metadata pointer, and only removing datetime- >>>> related >>>> things makes numpy 1.4.x backward compatible, at least as far as >>>> scipy >>>> is concerned. So it seems the PyArray_Funcs change is the only >>>> ABI-incompatible change. >>> >>> What do you mean by the "PyArray_Funcs change"? >> >> The change that broke the ABI is in the PyArray_Funcs structure >> (ndarrayobject.h): >> >> struct { >> PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; >> .... >> >> Because NPY_NTYPES is bigger after the datetime change. >> >> If there is a way to have the datetime not expanding NPY_NTYPES, >> then I >> think we can keep the ABI. 
I tried something with datetimes >> considered >> as user types, but did not go very far (most certainly because I have >> never used this part of the code before). > > Thanks for reminding me what the ABI problem is. Yes, that will > break it (I was very suspicious that we could change the number of > basic types without ABI consequence but didn't have time to think > about the real problem). > > My intention in adding the datetime data-type was not to try and > preserve ABI in the process. > > > If so, then it would have been better to have been upfront about > that when it went in. I know I pushed for inclusion, but I was told > that the ABI could be preserved. We've all been surprised by > unforeseen bugs, accidents happen. The question is what is the most > graceful way out. I think we should follow David's lead here as he > is the current release guy. Yes, it would have been better. But, I wasn't trying to hide anything. There were suggestions that the ABI could be preserved, and I didn't see the argument to resisting those claims very clearly, and so couldn't refute them quickly. Why the versioning matters is that we have a release with the needed ABI changes to support date-time. The date-time data-type is useful in its current state (it's not complete but what is there is useable for storing date-time information). I think giving it time for people to use it will help continue to improve what is there and encourage someone to finish the rest of the implementation (it's just not that much more work for someone with about 40-80 hours to spare). Perhaps one way to articulate my perspective is the following: There are currently 2 groups of NumPy users: 1) those who have re-compiled all of their code for 1.4.0 2) those who haven't Group 1) will have to re-compile again no matter what we do (because we are either going to have to bump the ABI number or back-pedal). Group 2) will not have to re-compile once the new release comes out. 
I don't want to make Group 1) have to re-compile yet a third time when date-time support finally comes out. If they have bitten the bullet now, they will be rewarded with a stable ABI (that will eventually have the benefit of better ufunc support for record arrays as well as the date-time features). -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Feb 4 02:46:01 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 4 Feb 2010 00:46:01 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> Message-ID: On Thu, Feb 4, 2010 at 12:37 AM, Travis Oliphant wrote: > > On Feb 4, 2010, at 12:59 AM, Charles R Harris wrote: > > > > On Wed, Feb 3, 2010 at 11:48 PM, Travis Oliphant wrote: > >> >> On Feb 2, 2010, at 11:59 PM, David Cournapeau wrote: >> >> Travis Oliphant wrote: >> >> >> On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: >> >> >> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau >> >> >> >> wrote: >> >> >> >> Sorry, my question was badly worded: besides the metadata pointer, is >> >> there any other change related to the metadata infratructure which may >> >> potentially change changes the publicly exported structures ? 
I wonder >> >> whereas the metadata infrastructure can be kept in 1.4.x independently >> >> of the datetime support without breaking the ABI >> >> >> FWIW, keeping the metadata pointer, and only removing datetime-related >> >> things makes numpy 1.4.x backward compatible, at least as far as scipy >> >> is concerned. So it seems the PyArray_Funcs change is the only >> >> ABI-incompatible change. >> >> >> What do you mean by the "PyArray_Funcs change"? >> >> >> The change that broke the ABI is in the PyArray_Funcs structure >> (ndarrayobject.h): >> >> struct { >> PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; >> .... >> >> Because NPY_NTYPES is bigger after the datetime change. >> >> If there is a way to have the datetime not expanding NPY_NTYPES, then I >> think we can keep the ABI. I tried something with datetimes considered >> as user types, but did not go very far (most certainly because I have >> never used this part of the code before). >> >> >> Thanks for reminding me what the ABI problem is. Yes, that will break it >> (I was very suspicious that we could change the number of basic types >> without ABI consequence but didn't have time to think about the real >> problem). >> >> My intention in adding the datetime data-type was not to try and preserve >> ABI in the process. >> >> > If so, then it would have been better to have been upfront about that when > it went in. I know I pushed for inclusion, but I was told that the ABI could > be preserved. We've all been surprised by unforeseen bugs, accidents happen. > The question is what is the most graceful way out. I think we should follow > David's lead here as he is the current release guy. > > > Yes, it would have been better. But, I wasn't trying to hide anything. > There were suggestions that the ABI could be preserved, and I didn't see > the argument to resisting those claims very clearly, and so couldn't refute > them quickly. 
> > Why the versioning matters is that we have a release with the needed ABI > changes to support date-time. The date-time data-type is useful in its > current state (it's not complete but what is there is useable for storing > date-time information). I think giving it time for people to use it will > help continue to improve what is there and encourage someone to finish the > rest of the implementation (it's just not that much more work for someone > with about 40-80 hours to spare). > > Perhaps one way to articulate my perspective is the following: > > There are currently 2 groups of NumPy users: > > 1) those who have re-compiled all of their code for 1.4.0 > 2) those who haven't > > I think David has a better grip on that. There really are a lot of people who depend on binaries, and those binaries in turn depend on numpy. I would even say those folks are a majority, they are those who download the Mac and Windows versions of numpy. > Group 1) will have to re-compile again no matter what we do (because we are > either going to have to bump the ABI number or back-pedal). > Group 2) will not have to re-compile once the new release comes out. > > I don't want to make Group 1) have to re-compile yet a third time when > date-time support finally comes out. If they have bitten the bullet now, > they will be rewarded with a stable ABI (that will eventually have the > benefit of better ufunc support for record arrays as well as the date-time > features). > > I feel that a latter release date for datetime would be a benefit to yourself also, as you would have the time to get the code into shape. As I recall you were even reluctant to commit it in the first place. What has changed? Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From faltet at pytables.org Thu Feb 4 03:21:01 2010 From: faltet at pytables.org (Francesc Alted) Date: Thu, 4 Feb 2010 09:21:01 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> Message-ID: <201002040921.01470.faltet@pytables.org> A Thursday 04 February 2010 08:46:01 Charles R Harris escrigu?: > > Perhaps one way to articulate my perspective is the following: > > > > There are currently 2 groups of NumPy users: > > > > 1) those who have re-compiled all of their code for 1.4.0 > > 2) those who haven't > > I think David has a better grip on that. There really are a lot of people > who depend on binaries, and those binaries in turn depend on numpy. I would > even say those folks are a majority, they are those who download the Mac > and Windows versions of numpy. Yes, I think this is precisely the problem: people that are used to fetch binaries and want to use new NumPy, will be forced to upgrade all the other binary packages that depends on it. And these binary packagers (including me) are being forced to regenerate their binaries as soon as possible if they don't want their users to despair. I'm not saying that regenerating binaries is not possible, but that would require a minimum of anticipation. I'd be more comfortable with ABI-breaking releases to be announced at least with 6 months of anticipation. Then, a user is not likely going to change its *already* working environment until all the binary packages he depends on (scipy, matplotlib, pytables, h5py, numexpr, sympy...) have been *all* updated for dealing with the new ABI numpy, and that could be really a long time. 
With this (and ironically), an attempt to quickly introduce a new feature (in this case datetime, but it could have been whatever) in a release for allowing wider testing and adoption will almost certainly result in a release that takes much longer to spread widely, and what is worst, generating a large frustration among users.

My 2 cts,

-- 
Francesc Alted

From friedrichromstedt at gmail.com  Thu Feb  4 05:34:31 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 4 Feb 2010 11:34:31 +0100
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
Message-ID: 

Hi,

I'm just coding a package for uncertain arrays which uses the accelerated numpy functionality intensively. I'm sorry, but I have to give some background information first.

The package provides a class upy.undarray, which holds the nominal value and the uncertainty information. It has methods __add__(other), __radd__(other), ..., __eq__(other), __ne__(other), which accept both upy.undarrays and all other values suitable for coercion, thus also native numpy.ndarrays.

But because numpy treats upyarray as a scalar in the statement:

    result = numpyarray * upyarray

(since it's not a numpy.ndarray), I have to overload the numpy arithmetic with my own objects by using numpy.set_numeric_ops(add = ..., ..., equal = equal, not_equal = not_equal). The arguments are defined by the module (it will be clarified below).

Because numpy.add etc. are ufuncs exhibiting attributes, I wrote a class to wrap them:

    class ufuncWrap:
        """Wraps numpy ufuncs.  Behaves like the original, with the
        exception that __call__() will be overloaded."""

        def __init__(self, ufunc, overload):
            """UFUNC is the ufunc to be wrapped.  OVERLOAD is the name
            (string) of the undarray method to be used in overloading
            __call__()."""
            self.ufunc = ufunc
            self.overload = overload

        def __call__(self, a, b, *args, **kwargs):
            """When B is an undarray, call B.overload(a), else .ufunc(a, b)."""
            if isinstance(b, undarray):
                return getattr(b, self.overload)(a)
            else:
                return self.ufunc(a, b, *args, **kwargs)

        def __getattr__(self, attr):
            """Return getattr(.ufunc, ATTR)."""
            return getattr(self.ufunc, attr)

I only have to wrap binary operators.  Then, e.g.:

    class Equal(ufuncWrap):
        def __init__(self):
            ufuncWrap.__init__(self, numpy.equal, '__eq__')

    equal = Equal()

This works as expected.  But this approach fails (in first iteration) for a similar class NotEqual.  I have let the module output the arguments passed to ufuncWrap.__call__(), and I found that the statement:

    result = (numpyarray != upyarray)

with:

    numpyarray = numpy.asarray([1.0])
    upyarray = upy.undarray([2.0], error = [0.1])

is passed on to NotEqual.__call__() as the arguments:

    a = a numpy array: array([1.0])
    b = a numpy array: array(shape = (), dtype = numpy.object)

i.e., b is a scalar array holding the upy.undarray instance passed to !=.

I can work around the exhibited behaviour by:

    class NotEqual(ufuncWrap):
        def __init__(self):
            ufuncWrap.__init__(self, numpy.not_equal, '__ne__')

        def __call__(self, a, b, *args, **kwargs):
            # numpy's calling mechanism for not_equal() seems to have a bug,
            # such that b is always a numpy.ndarray.  When b should be an
            # undarray, it is a numpy.ndarray(dtype = numpy.object,
            # shape = ()).  Make the call also compatible with future,
            # bug-fixed versions.
            if isinstance(b, numpy.ndarray):
                if b.ndim == 0:
                    # Convert the scalar array back to the stored object.
                    b = b.sum()
            return ufuncWrap.__call__(self, a, b, *args, **kwargs)

What is the reason for the behaviour observed?

I'm using numpy 1.4.0 with Python 2.5.
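[Editorial note: the behaviour described here — numpy wrapping an unrecognized right-hand operand in a 0-d object array before handing it to the comparison — can be reproduced without the upy package. A small sketch; the `Box` class is a made-up stand-in for upy.undarray, not the real package.]

```python
import numpy as np

class Box:
    """Hypothetical stand-in for upy.undarray."""
    def __init__(self, value):
        self.value = value

box = Box(2.0)

# numpy cannot interpret Box as an array, so np.array() wraps it in a
# 0-d object array -- the same kind of object NotEqual.__call__()
# receives as `b`.
b = np.array(box)
assert b.ndim == 0 and b.dtype == object

# .item() recovers the stored Python object directly.
unwrapped = b.item()
assert unwrapped is box
```

Using `b.item()` is the idiomatic way to unwrap such a scalar array; the `b.sum()` trick in the workaround above only works because reducing a one-element object array happens to return the stored element itself.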
Friedrich From dsdale24 at gmail.com Thu Feb 4 07:42:38 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Thu, 4 Feb 2010 07:42:38 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <201002040921.01470.faltet@pytables.org> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <201002040921.01470.faltet@pytables.org> Message-ID: On Thu, Feb 4, 2010 at 3:21 AM, Francesc Alted wrote: > A Thursday 04 February 2010 08:46:01 Charles R Harris escrigu?: >> > Perhaps one way to articulate my perspective is the following: >> > >> > There are currently 2 groups of NumPy users: >> > >> > ?1) ?those who have re-compiled all of their code for 1.4.0 >> > ?2) ?those who haven't >> >> I think David has a better grip on that. There really are a lot of people >> who depend on binaries, and those binaries in turn depend on numpy. I would >> even say those folks are a majority, they are those who download the Mac >> ?and Windows versions of numpy. > > Yes, I think this is precisely the problem: people that are used to fetch > binaries and want to use new NumPy, will be forced to upgrade all the other > binary packages that depends on it. ?And these binary packagers (including me) > are being forced to regenerate their binaries as soon as possible if they > don't want their users to despair. ?I'm not saying that regenerating binaries > is not possible, but that would require a minimum of anticipation. ?I'd be > more comfortable with ABI-breaking releases to be announced at least with 6 > months of anticipation. > > Then, a user is not likely going to change its *already* working environment > until all the binary packages he depends on (scipy, matplotlib, pytables, > h5py, numexpr, sympy...) have been *all* updated for dealing with the new ABI > numpy, and that could be really a long time. 
?With this (and ironically), an > attempt to quickly introduce a new feature (in this case datetime, but it > could have been whatever) in a release for allowing wider testing and > adoption, will almost certainly result in a release that takes much longer to > spread widely, and what is worst, generating a large frustration among users. Also, there was some discussion about wanting to make some other changes in numpy that would break ABI once, but allow new dtypes in the future without additional ABI breakage. Since ABI breakage is so disruptive, could we try to coordinate so a number of them can happen all at once, with plenty of warning to the community? Then this change, datetime, and hasobject can all be handled at the same time, and it could/should be released as numpy-2.0. Then when when numpy for py-3.0 is ready, which will presumably require ABI breakage, it could be called numpy-3.0. Darren From pav at iki.fi Thu Feb 4 08:03:01 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 04 Feb 2010 15:03:01 +0200 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <201002040921.01470.faltet@pytables.org> Message-ID: <1265288581.14989.1.camel@talisman> to, 2010-02-04 kello 07:42 -0500, Darren Dale kirjoitti: [clip] > and it could/should be released as numpy-2.0. Then when when numpy for > py-3.0 is ready, which will presumably require ABI breakage, it could > be called numpy-3.0. The Py3 transition will most likely be invisible to Py2 users, and I don't believe it will require ABI breakage. -- Pauli Virtanen From cournape at gmail.com Thu Feb 4 10:17:34 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 5 Feb 2010 00:17:34 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <09B517EB-C996-4092-8763-12C1ECE9349F@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <1AD7A6A9-B78C-423B-91A4-A44FABCA3FBC@enthought.com> <4B692901.8060709@silveregg.co.jp> <09B517EB-C996-4092-8763-12C1ECE9349F@enthought.com> Message-ID: <5b8d13221002040717i18b47a03g9d89605a8fa97e8@mail.gmail.com> On Thu, Feb 4, 2010 at 3:46 PM, Travis Oliphant wrote: > If the issue is having too many releases that are .X releases, then let's > just slow that down. The issue is not so much version numbering - I think keeping compatibility between .X releases is slightly better because that's the usually followed convention in open source. Since Python itself does not follow this convention, it does not matter much though. But we have to keep ABI compatibility for several years between releases, whatever versions numbering we want to use. I guess my main concern is that you seem to imply that breaking ABI is necessary to go forward, whereas I think every maturing library should simply *forbid* breaking ABI between major releases. As an example, gtk and QT have been able to keep ABI compatibility for almost a decade, and those are much more complicated than numpy will ever be. Python itself is going toward fixing the ABI for at least a subset of the API, so that's something that the python community will come to expect IMO. > ?I just > never had the opinion that we would *never* change the ABI. > I don't think there is any disagreement in the general idea that the ABI > should remain stable for a long time. ? ?I think the problem is that in this > particular instance, we had different opinions about the importance of ABI > compatibility for the 1.4 release. ? I did not think it was possible, and > was surprised when it was attempted. 
It is almost always possible to keep ABI compatibility - it is a tradeoff between maintainability, amount of time we are willing to put, etc... > I will help make the 1.3.1 release if this is an agreeable solution. ??This > pattern would certainly help create stability while still allowing change to > happen in a reasonable way. Maybe we should seriously think about working on a major overhaul of NumPy to allow changes while keeping ABI compatibility, then. But after finishing the transition to Py3k - maybe Pauli and Chuck would have a better idea on the exact path forward w.r.t Py3k transition timeline. cheers, David From cournape at gmail.com Thu Feb 4 10:40:29 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 5 Feb 2010 00:40:29 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> Message-ID: <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> On Thu, Feb 4, 2010 at 4:37 PM, Travis Oliphant wrote: > > On Feb 4, 2010, at 12:59 AM, Charles R Harris wrote: > > > On Wed, Feb 3, 2010 at 11:48 PM, Travis Oliphant > wrote: >> >> On Feb 2, 2010, at 11:59 PM, David Cournapeau wrote: >> >> Travis Oliphant wrote: >> >> On Feb 2, 2010, at 11:46 PM, David Cournapeau wrote: >> >> On Wed, Feb 3, 2010 at 12:23 PM, David Cournapeau >> >> > wrote: >> >> >> Sorry, my question was badly worded: besides the metadata pointer, is >> >> there any other change related to the metadata infratructure which may >> >> 
potentially change changes the publicly exported structures ? I wonder >> >> whereas the metadata infrastructure can be kept in 1.4.x independently >> >> of the datetime support without breaking the ABI >> >> FWIW, keeping the metadata pointer, and only removing datetime-related >> >> things makes numpy 1.4.x backward compatible, at least as far as scipy >> >> is concerned. So it seems the PyArray_Funcs change is the only >> >> ABI-incompatible change. >> >> What do you mean by the "PyArray_Funcs change"? >> >> The change that broke the ABI is in the PyArray_Funcs structure >> (ndarrayobject.h): >> >> struct { >> ????????PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; >> ????????.... >> >> Because NPY_NTYPES is bigger after the datetime change. >> >> If there is a way to have the datetime not expanding NPY_NTYPES, then I >> think we can keep the ABI. I tried something with datetimes considered >> as user types, but did not go very far (most certainly because I have >> never used this part of the code before). >> >> Thanks for reminding me what the ABI problem is. ?Yes, that will break it >> (I was very suspicious that we could change the number of basic types >> without ABI consequence but didn't have time to think about the real >> problem). >> My intention in adding the datetime data-type was not to try and preserve >> ABI in the process. > > If so, then it would have been better to have been upfront about that when > it went in. I know I pushed for inclusion, but I was told that the ABI could > be preserved. There may have been some miscommunication - in my mind, adding datetime support so late was only possible under the condition that it would not break the ABI. In particular, I have spent many hours refactoring C code between 1.3.0 and 1.4.0, and a lot of that time was kept to ensure I did not break the ABI, time which has been wasted. I would also note that I was reluctant to add datetime so late because I was precisely afraid something like that would happen. 
In the grand scheme of things, it does not matter so much, we all have different timelines and schedules, and I certainly don't think there was any malicious intent from anyone to break things :) But I would like to improve our development process so that we don't repeat the same mistakes. > Perhaps one way to articulate my perspective is the following: > There are currently 2 groups of NumPy users: > ?1) ?those who have re-compiled all of their code for 1.4.0 > ?2) ?those who haven't > Group 1) will have to re-compile again no matter what we do (because we are > either going to have to bump the ABI number or back-pedal). > Group 2) will not have to re-compile once the new release comes out. > I don't want to make Group 1) have to re-compile yet a third time when > date-time support finally comes out. ?If they have bitten the bullet now, > they will be rewarded with a stable ABI (that will eventually have the > benefit of better ufunc support for record arrays as well as the date-time > features). I think Group 1 is a negligible epsilon of Group 2, and moreover, Group 1 is the most likely to be able to deal with those issues. cheers, David From matthew.brett at gmail.com Thu Feb 4 12:38:10 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 4 Feb 2010 17:38:10 +0000 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> Message-ID: <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> Hi, > I think Group 1 is a negligible epsilon of Group 2, and moreover, > Group 1 is the most likely to be able to deal with those issues. It is time for an on list vote? I must say, although I know it is not straightforward, that I agree with David that, we should act in favor of our new and less experienced users, and defer the official ABI breakage until at least the next release. See y'all, Matthew From charlesr.harris at gmail.com Thu Feb 4 13:06:53 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 4 Feb 2010 11:06:53 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 10:38 AM, Matthew Brett wrote: > Hi, > > > I think Group 1 is a negligible epsilon of Group 2, and moreover, > > Group 1 is the most likely to be able to deal with those issues. > > It is time for an on list vote? > > I must say, although I know it is not straightforward, that I agree > with David that, we should act in favor of our new and less > experienced users, and defer the official ABI breakage until at least > the next release. > > Let me propose a schedule: 1.4.1 : Bug fix, no datetime, ~4-6wks from now. 2.0 : API break, datetime, hasobject changes, April - May timeframe 2.1 : Python 3K - Fall Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Feb 4 13:12:37 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 04 Feb 2010 10:12:37 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002031445t77b58c9ak33ae8f3818001875@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4795A3DB-A863-4CB8-A2D4-56C5F3D47CD6@enthought.com> <4B68D701.4030003@silveregg.co.jp> <4B69AF71.8080006@noaa.gov> <5b8d13221002031445t77b58c9ak33ae8f3818001875@mail.gmail.com> Message-ID: <4B6B0E15.4090304@noaa.gov> David Cournapeau wrote: >>> 1.4.x, not about removing datetime altogether. 
It seems that datetime in >>> 1.4.x has few users, >> Of course it has few users -- it's brand new! > > Yes, but that's my point: removing it has low impact, whereas breaking > ABI has a big impact. My point is that it will likely get more users in time, and much more so if it's part of a numpy release, rather than an experimental feature that you need to go to svn to and built to get. I"m planning on using it, but probably won't until it does make it's way into a release. > I don't know any good library which breaks ABI "once in a while" where > once in a while means several times a year. Let's be honest here: - yes, there have been numpy "minor version" updates a couple times a year. - yes, this is a case of changing the ABI on minor update. However, that does not mean that anyone is proposing breaking the ABI at every minor update! This is a discussion about this particular case -- that's it. It's unfortunate that it got this far without us all realizing what a big deal it was, and making a proper, informed decision before release. Lesson learned, I hope! Charles R Harris wrote: > Why does it have to be in 1.4? One reason: Because it already is. However, if we do have other ABI-changing ideas (as Travis indicated), it would be better to do them all at once! This does make me want (once again), some kind of package versioing system in python... Travis Oliphant wrote: > There are down-sides to any choice we make, > but I would argue that if we choose something like 1.3.1 (or maybe > 1.3.9) we can promise no ABI breakage in odd releases and use this as > an "experience" to re-enforce the memory of that commitment. This is asking for a formal system of "stable" and "unstable" releases -- so that both are out there. wxPython has done this a fair bit, for instance, though we did need to provide the wx.version version selection system to support it... 
Travis Oliphant wrote:
> There are currently 2 groups of NumPy users:
>
> 1) those who have re-compiled all of their code for 1.4.0

I'm one of those folks, but to be honest, I'm not sure there are that many of us -- there are an awful lot of MPL/scipy etc users that don't compile themselves...

> If they have bitten the bullet
> now, they will be rewarded with a stable ABI

unless there are other ABI changes we want to make fairly soon.

Matthew Brett wrote:
> It is time for an on list vote?

not much point -- Travis and I are the only ones supporting the ABI change now, and Travis is the only one that matters -- if we're going with majority vote, the answer is clear.

-Chris

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From millman at berkeley.edu Thu Feb 4 13:22:00 2010
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 4 Feb 2010 10:22:00 -0800
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com>
Message-ID: 

On Thu, Feb 4, 2010 at 10:06 AM, Charles R Harris wrote:
> Let me propose a schedule:
>
> 1.4.1 : Bug fix, no datetime, ~4-6wks from now.
> 2.0 : API break, datetime, hasobject changes, April - May timeframe
> 2.1 : Python 3K - Fall

I like your schedule in general. The only change I would suggest is releasing 1.4.1 ASAP with just datetime removed. We can always release a 1.4.2 with more bugfixes later.
I like getting a 2.0 out in April-May with API break, datetime, and hasobject changes. It gives us time to communicate with all the other packagers and doesn't prevent us from quickly getting datetime out. The only thing I would suggest is that we try to get at least experimental support for Py3k out with the 2.0 release in April-May (even in an unreleased branch). That way other projects (scipy, matplotlib, etc) could potentially work on Py3k support over the summer as well. -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From charlesr.harris at gmail.com Thu Feb 4 14:09:03 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 4 Feb 2010 12:09:03 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 11:22 AM, Jarrod Millman wrote: > On Thu, Feb 4, 2010 at 10:06 AM, Charles R Harris > wrote: > > Let me propose a schedule: > > > > 1.4.1 : Bug fix, no datetime, ~4-6wks from now. > > 2.0 : API break, datetime, hasobject changes, April - May timeframe > > 2.1 : Python 3K - Fall > > I like your schedule in general. The only change I would suggest is > releasing 1.4.1 ASAP with just datetime removed. We can always > release a 1.4.2 with more bugfixes later. I like getting a 2.0 out in > April-May with API break, datetime, and hasobject changes. It gives > us time to communicate with all the other packagers and doesn't > prevent us from quickly getting datetime out. 
The only thing I would > suggest is that we try to get at least experimental support for Py3k > out with the 2.0 release in April-May (even in an unreleased branch). > That way other projects (scipy, matplotlib, etc) could potentially > work on Py3k support over the summer as well. > > I put 1.4.1 4-6 wks out to give the apprentice release guys some time. Also, there are some small fixes that should go in, Travis' commits from this morning, for instance. Lets say a code freeze a week from monday, release as soon as possible after. Realistically, I don't think Py3k will be ready by April-May. Fall is probably doable and maybe there will be some things for a SOC person to work on this summer. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From muzgash at gmail.com Thu Feb 4 14:18:49 2010 From: muzgash at gmail.com (Gerardo Gutierrez) Date: Thu, 4 Feb 2010 14:18:49 -0500 Subject: [Numpy-discussion] Audio signal capture and processing Message-ID: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> Hello. I'm working with audio signals with wavelet analisys, and I want to know if someone has work with some audio capture (with the mic and through a file) library so that I can get the time-series... Also I need to play the transformed signal. Thanks. * *"Solo existen 10 tipos de personas en el mundo... las que saben binario y las que no" _-`````-, ,- '- . .' .- - | | - -. `. /.' / `. \ :/ : _... ..._ `` : :: : /._ .`:'_.._\. || : :: `._ ./ ,` : \ . _.'' . `:. / | -. \-. \\_ / \:._ _/ .' .@) \@) ` `\ ,.' _/,--' .- .\,-.`--`. ,'/'' (( \ ` ) /'/' \ `-' ( '/'' `._,-----' ''/' .,---' ''/' ;: ''/'' ''/ ''/''/'' '/'/' `; -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Feb 4 14:21:31 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 4 Feb 2010 14:21:31 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> Message-ID: On Thu, Feb 4, 2010 at 2:37 AM, Travis Oliphant wrote: > Perhaps one way to articulate my perspective is the following: > There are currently 2 groups of NumPy users: > ?1) ?those who have re-compiled all of their code for 1.4.0 > ?2) ?those who haven't It may be useful to keep in mind one important aspect of the cascading dependency effect we are dealing with here; I could recompile *my* codes easily for numpy from svn (I used to do it routinely). But with an ABI break, there are a *ton* of packages now on my system that would break if I put a new numpy in my python path. An easy way to see this is to see how many system packages I'd have to remove if I removed numpy: sudo apt-get remove python-numpy python-numpy-dbg python-numpy-doc [...] The following packages will be REMOVED: impressive keyjnote mayavi2 music-applet python-gnuplot python-matplotlib python-mdp python-mvpa python-mvpa-lib python-netcdf python-numpy python-numpy-dbg python-numpy-doc python-pyepl python-pygame python-pywt python-rpy python-scientific python-scientific-doc python-scipy python-sparse python-sparse-examples python-tables python-visual pyxplot sagemath 0 upgraded, 0 newly installed, 26 to remove and 0 not upgraded. After this operation, 341MB disk space will be freed. 
Basically this means that if I want to update numpy on my ubuntu 9.10 laptop, all of a sudden not only do I have to recompile things like my codes or scipy/matplotlib (which I'd expect), but I also have to rebuild 23 other system-installed packages which would probably otherwise be fine. For this reason, I've had to back off completely from using post-abi-break numpy, I simply can't afford the time to break and rebuild so much of my system. I know this is a messy and difficult situation, but I wanted to illustrate this aspect of the dependency problem because I haven't seen it mentioned so far in the discussion, and it's a fairly nasty one. Regards, f From dwf at cs.toronto.edu Thu Feb 4 14:34:27 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 4 Feb 2010 14:34:27 -0500 Subject: [Numpy-discussion] Audio signal capture and processing In-Reply-To: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> References: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> Message-ID: On 4-Feb-10, at 2:18 PM, Gerardo Gutierrez wrote: > I'm working with audio signals with wavelet analisys, and I want to > know if > someone has work with some audio capture (with the mic and through a > file) > library so that I can get the time-series... > Also I need to play the transformed signal. Peter Wang has an example using Chaco and ETS: https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py David Cournapeau has written a libsndfile wrapper: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/sphinx/index.html From gael.varoquaux at normalesup.org Thu Feb 4 14:51:37 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 4 Feb 2010 20:51:37 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: 
References: <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com>
Message-ID: <20100204195137.GA22445@phare.normalesup.org>

I'd like to say that I am +1 with everything that has been said against breakage. On top of that, I'd like to echo the conversation I had with Fabian Pedregosa, who works in my group full time on scikit-learn. Fabian was doing binaries. I asked him what numpy version he had used to build the binaries. He replied 1.4, so I told him that his package would not work in the lab, and he looked at me in disbelief.

Gaël

From pav at iki.fi Thu Feb 4 15:15:13 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 04 Feb 2010 22:15:13 +0200
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com>
Message-ID: <1265314513.9154.76.camel@idol>

to, 2010-02-04 kello 12:09 -0700, Charles R Harris kirjoitti:
> Realistically, I don't think Py3k will be ready by April-May. Fall is
> probably doable and maybe there will be some things for a SOC person
> to work on this summer.

Well, we have many components of Py3 support in place in the SVN trunk. 1470 of 1983 unit tests currently pass. (Except that some recent commit introduced some Py3-breaking Python code.)
Main things missing: - String arrays, and other worms out of the str/unicode/bytes can. I've audited many parts of the C code -- there are 66 remaining unchecked points where PyString -> PyBytes is assumed without much thinking (grep PyString). The Python side (= mainly the tests), however, needs more love. Several tests break at the moment since array(['a']).dtype == 'U' on Py3. - There's also the decision about whether 'S' == bytes or str. If the former, the tests need fixing, if the latter, the code needs fixing. - Consuming PEP 3118 buffer arrays so that shape and strides are correctly acquired. This could in the end allow ufuncs to operate on PEP 3118 arrays. The buffer provider support is already done, also for Py2.6. - Rewriting fromfile to use the low-level I/O. Py3 does not support getting FILE* pointers out, and the handle from fdopen is currently left dangling. That's certainly a manageable amount of work. For me, however, I believe real life(TM) will still intrude a bit on my working on Numpy before the summer, so I can't promise that by April-May all of the test suite passes on Py3. With some luck it might be possible, though, and certainly by fall. I hoped to have Py3 support the only bigger change in the next release, so that the number of moving pieces of code would be kept down. It does probably not require an ABI break. But if an ABI break is in the pipeline, it might make sense to put it out before "officially" supporting Py3. *** So how to proceed? I would prefer if new code introduced in the rewrite would compile and work correctly both on Py2 and Py3. I wouldn't expect this to be a high overhead, and it would remove the need for a separate Py3 branch. Most C code that works on Py2 works also on Py3. Py3 mainly means not using PyString, but choosing between Unicode + Bytes + UString (=Bytes for Py2 & Unicode for Py3). 
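[As a present-day illustration of the 'S'-versus-unicode decision described above: this snippet uses Python 3 and a modern NumPy rather than the 1.4-era tree under discussion, where the behavior was still being decided. Text literals come in as unicode ('U') arrays, while explicit bytes keep the fixed-width 'S' dtype.]

```python
import numpy as np

# On Python 3, str data maps to NumPy's unicode dtype (kind 'U')...
a = np.array(['a'])

# ...while explicit bytes keep the fixed-width byte-string dtype
# (kind 'S'); this is the split the 'S' == bytes-or-str decision
# has to resolve.
b = np.array([b'a'])

print(a.dtype)  # <U1
print(b.dtype)  # |S1
```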
Also, it may be necessary to avoid FILE* pointers in the API (on Py3 those are no longer as easily obtained), and be wary when working with buffers. I assume the rewrite will be worked on a separate SVN branch? Also, is there a plan yet on what needs changing to make Numpy's ABI more resistant? Cheers, Pauli From charlesr.harris at gmail.com Thu Feb 4 15:29:15 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 4 Feb 2010 13:29:15 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1265314513.9154.76.camel@idol> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> Message-ID: On Thu, Feb 4, 2010 at 1:15 PM, Pauli Virtanen wrote: > to, 2010-02-04 kello 12:09 -0700, Charles R Harris kirjoitti: > > Realistically, I don't think Py3k will be ready by April-May. Fall is > > probably doable and maybe there will be some things for a SOC person > > to work on this summer. > > Well, we have many components of Py3 support in place in the SVN > trunk. 1470 of 1983 unit tests currently pass. (Except that some recent > commit introduced some Py3-breaking Python code.) > > Main things missing: > > - String arrays, and other worms out of the str/unicode/bytes can. > I've audited many parts of the C code -- there are 66 remaining > unchecked points where PyString -> PyBytes is assumed without much > thinking (grep PyString). > > The Python side (= mainly the tests), however, needs more love. > Several tests break at the moment since array(['a']).dtype == 'U' > on Py3. > > - There's also the decision about whether 'S' == bytes or str. > If the former, the tests need fixing, if the latter, the code needs > fixing. 
> > - Consuming PEP 3118 buffer arrays so that shape and strides are > correctly acquired. This could in the end allow ufuncs to > operate on PEP 3118 arrays. > > The buffer provider support is already done, also for Py2.6. > > - Rewriting fromfile to use the low-level I/O. Py3 does not support > getting FILE* pointers out, and the handle from fdopen is currently > left dangling. > > That's certainly a manageable amount of work. For me, however, I believe > real life(TM) will still intrude a bit on my working on Numpy before the > summer, so I can't promise that by April-May all of the test suite > passes on Py3. With some luck it might be possible, though, and > certainly by fall. > > I hoped to have Py3 support the only bigger change in the next release, > so that the number of moving pieces of code would be kept down. It does > probably not require an ABI break. > > But if an ABI break is in the pipeline, it might make sense to put it > out before "officially" supporting Py3. > > *** > > So how to proceed? > > I would prefer if new code introduced in the rewrite would compile and > work correctly both on Py2 and Py3. I wouldn't expect this to be a high > overhead, and it would remove the need for a separate Py3 branch. > > Most C code that works on Py2 works also on Py3. Py3 mainly means not > using PyString, but choosing between Unicode + Bytes + UString (=Bytes > for Py2 & Unicode for Py3). Also, it may be necessary to avoid FILE* > pointers in the API (on Py3 those are no longer as easily obtained), and > be wary when working with buffers. > > > I assume the rewrite will be worked on a separate SVN branch? Also, is > there a plan yet on what needs changing to make Numpy's ABI more > resistant? > > I don't think we are talking of a rewrite at the moment, that is something that will require a lot of work and redesign. 
I see a much longer timeline for that, at least two years, not least because it isn't pressing as long as we don't add anything more that breaks the ABI in the near future. So attention all ABI breakers, it's now or never (in software years).

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeffery.kline at gmail.com Thu Feb 4 16:28:33 2010
From: jeffery.kline at gmail.com (Jeffery Kline)
Date: Thu, 4 Feb 2010 15:28:33 -0600
Subject: [Numpy-discussion] Unexpected tofile() behavior
Message-ID: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com>

I am experiencing unexpected behavior with the tofile() function. I try to write two files with distinct names, but only one of the files is written.

The following code illustrates. For me, it only writes the file 't.bin'.

from numpy import *
t=arange(0,1,0.1)
T=arange(0,1,0.1)
t.tofile('t.bin')
T.tofile('T.bin')

Meanwhile, the following code works as I expect by writing 't.bin' and 'S.bin':

from numpy import *
t=arange(0,1,0.1)
T=arange(0,1,0.1)
t.tofile('t.bin')
T.tofile('S.bin')

Am I doing something stupid or overlooking something obvious?

My system is Mac os x 10.6.2, running python 2.6.4.
Jeff

From kwgoodman at gmail.com Thu Feb 4 16:33:31 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Thu, 4 Feb 2010 13:33:31 -0800
Subject: [Numpy-discussion] Unexpected tofile() behavior
In-Reply-To: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com>
References: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com>
Message-ID: 

On Thu, Feb 4, 2010 at 1:28 PM, Jeffery Kline wrote:
> I am experiencing unexpected behavior with the tofile() function. I try to write two files with distinct names, but only one of the files is written.
> > from numpy import * > t=arange(0,1,0.1) > T=arange(0,1,0.1) > t.tofile('t.bin') > T.tofile('T.bin') > > Meanwhile, the following code works as I expect by writing 't.bin' and 'S.bin': > > from numpy import * > t=arange(0,1,0.1) > T=arange(0,1,0.1) > t.tofile('t.bin') > T.tofile('S.bin') > > Am I doing something stupid or overlooking something obvious? > > My system is Mac os x 10.6.2, running python 2.6.4. > Jeff Maybe your filesystem is not case sensitive? Mine is: >> x.tofile('t.bin') >> x.tofile('T.bin') >> !ls *.bin t.bin T.bin From pav at iki.fi Thu Feb 4 16:39:34 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 04 Feb 2010 23:39:34 +0200 Subject: [Numpy-discussion] Unexpected tofile() behavior In-Reply-To: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com> References: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com> Message-ID: <1265319574.19702.1.camel@idol> to, 2010-02-04 kello 15:28 -0600, Jeffery Kline kirjoitti: > I am experiencing unexpected behavior with the tofile() function. I > try to write two files with but distinct names, but only one of the > files is written. > > The following code illustrates. For me, it only writes the file 't.bin'. > > from numpy import * > t=arange(0,1,0.1) > T=arange(0,1,0.1) > t.tofile('t.bin') > T.tofile('T.bin') [clip] > My system is Mac os x 10.6.2, running python 2.6.4. http://davidwinter.me.uk/articles/2008/05/17/mac-os-xhfs-case-insensitive-why/ From Chris.Barker at noaa.gov Thu Feb 4 17:02:10 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 04 Feb 2010 14:02:10 -0800 Subject: [Numpy-discussion] Unexpected tofile() behavior In-Reply-To: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com> References: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com> Message-ID: <4B6B43E2.2030503@noaa.gov> Jeffery Kline wrote: > Am I doing something stupid or overlooking something obvious? > > My system is Mac os x 10.6.2, running python 2.6.4. 
Mac OS is case-preserving, but not case-sensitive, with file names; I think the OS is writing over your file. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ranavishal at gmail.com Thu Feb 4 17:26:35 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Thu, 4 Feb 2010 14:26:35 -0800 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary Message-ID: How do I convert the numpy record array below: recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, f4') to a list of dictionary like: [{'name': 'Bill', 'age': 31, 'weight': 260.0}, {'name': 'Fred', 'age': 15, 'weight': 145.0}] -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffery.kline at gmail.com Thu Feb 4 18:40:13 2010 From: jeffery.kline at gmail.com (Jeffery Kline) Date: Thu, 4 Feb 2010 17:40:13 -0600 Subject: [Numpy-discussion] Unexpected tofile() behavior In-Reply-To: <4B6B43E2.2030503@noaa.gov> References: <8B7E7217-A3FA-4FF5-8721-35DAD831C9F7@gmail.com> <4B6B43E2.2030503@noaa.gov> Message-ID: On Feb 4, 2010, at 4:02 PM, Christopher Barker wrote: > Jeffery Kline wrote: >> Am I doing something stupid or overlooking something obvious? >> >> My system is Mac os x 10.6.2, running python 2.6.4. > > Mac OS is case-preserving, but not case-sensitive, with file names; I > think the OS is writing over your file. > > -Chris question answered -- thanks for the responses. I hadn't noticed this behavior on OS X before now.
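The overwrite is easy to confirm from Python itself. Below is a sketch (file names are hypothetical; it assumes only the standard library plus numpy) that first probes whether the current filesystem folds case, then checks how many of the two tofile() targets actually survive:

```python
import os
import tempfile

import numpy as np

with tempfile.TemporaryDirectory() as d:
    # Probe: create a file, then look it up under a different case.
    probe = os.path.join(d, "CaseProbe.tmp")
    open(probe, "w").close()
    case_insensitive = os.path.exists(os.path.join(d, "caseprobe.tmp"))
    os.remove(probe)

    # Reproduce the report: write t.bin and T.bin side by side.
    np.arange(0, 1, 0.1).tofile(os.path.join(d, "t.bin"))
    np.arange(0, 1, 0.1).tofile(os.path.join(d, "T.bin"))
    survivors = [f for f in os.listdir(d) if f.lower() == "t.bin"]

    # Case-insensitive volume (the HFS+ default): one file remains.
    # Case-sensitive volume (ext3, UFS, ...): both remain.
    print(len(survivors) == (1 if case_insensitive else 2))
```

This prints True on either kind of volume; on a default OS X install the second tofile() silently replaces the first file's contents while the directory keeps the original name.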
Jeff From kwgoodman at gmail.com Thu Feb 4 18:46:44 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 4 Feb 2010 15:46:44 -0800 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 2:26 PM, Vishal Rana wrote: > How do I convert the numpy record array below: > recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] > r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, f4') > to a list of dictionary like: > [{'name': 'Bill', 'age': 31, 'weight': 260.0}, > 'name': 'Fred', 'age': 15, 'weight': 145.0}] It looks like a two-body problem, so it should be solvable. From kwgoodman at gmail.com Thu Feb 4 18:59:30 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 4 Feb 2010 15:59:30 -0800 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 3:46 PM, Keith Goodman wrote: > On Thu, Feb 4, 2010 at 2:26 PM, Vishal Rana wrote: >> How do I convert the numpy record array below: >> recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] >> r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, f4') >> to a list of dictionary like: >> [{'name': 'Bill', 'age': 31, 'weight': 260.0}, >> 'name': 'Fred', 'age': 15, 'weight': 145.0}] > > It looks like a two-body problem, so it should be solvable. Do you already have recs as a list? 
Then: >> recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] >> [{'name': rec[0], 'age': rec[1], 'weight': rec[2]} for rec in recs] [{'age': 31, 'name': 'Bill', 'weight': 260.0}, {'age': 15, 'name': 'Fred', 'weight': 145.0}] From robert.kern at gmail.com Thu Feb 4 19:04:42 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 4 Feb 2010 18:04:42 -0600 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: References: Message-ID: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> On Thu, Feb 4, 2010 at 16:26, Vishal Rana wrote: > How do I convert the numpy record array below: > recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] > r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, f4') > to a list of dictionary like: > [{'name': 'Bill', 'age': 31, 'weight': 260.0}, > 'name': 'Fred', 'age': 15, 'weight': 145.0}] Assuming that your record array is only 1D: In [6]: r.dtype.names Out[6]: ('name', 'age', 'weight') In [7]: names = r.dtype.names In [8]: [dict(zip(names, record)) for record in r] Out[8]: [{'age': 31, 'name': 'Bill', 'weight': 260.0}, {'age': 15, 'name': 'Fred', 'weight': 145.0}] -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From warren.weckesser at enthought.com Thu Feb 4 19:10:50 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 04 Feb 2010 18:10:50 -0600 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> References: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> Message-ID: <4B6B620A.2090503@enthought.com> Vishal, Robert's code does the trick, but--in case you are new to numpy record arrays-I thought I'd point out that the array itself already acts like a list of dictionaries: In [6]: import numpy as np In [7]: dt = np.dtype([('name', 'S30'),('age',int),('weight',float)]) In [8]: r = np.array([('Bill',31, 260.0), ('Fred', 15, 145.0)], dtype=dt) In [9]: r[0]['name'] Out[9]: 'Bill' In [10]: r[1]['age'] Out[10]: 15 Warren Robert Kern wrote: > On Thu, Feb 4, 2010 at 16:26, Vishal Rana wrote: > >> How do I convert the numpy record array below: >> recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] >> r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, f4') >> to a list of dictionary like: >> [{'name': 'Bill', 'age': 31, 'weight': 260.0}, >> 'name': 'Fred', 'age': 15, 'weight': 145.0}] >> > > Assuming that your record array is only 1D: > > In [6]: r.dtype.names > Out[6]: ('name', 'age', 'weight') > > In [7]: names = r.dtype.names > > In [8]: [dict(zip(names, record)) for record in r] > Out[8]: > [{'age': 31, 'name': 'Bill', 'weight': 260.0}, > {'age': 15, 'name': 'Fred', 'weight': 145.0}] > > From ranavishal at gmail.com Thu Feb 4 19:13:59 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Thu, 4 Feb 2010 16:13:59 -0800 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> References: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> Message-ID: Thanks Robert :) Vishal Rana 
Samuel Goldwyn - "I don't think anyone should write their autobiography until after they're dead." On Thu, Feb 4, 2010 at 4:04 PM, Robert Kern wrote: > On Thu, Feb 4, 2010 at 16:26, Vishal Rana wrote: > > How do I convert the numpy record array below: > > recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] > > r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, > f4') > > to a list of dictionary like: > > [{'name': 'Bill', 'age': 31, 'weight': 260.0}, > > 'name': 'Fred', 'age': 15, 'weight': 145.0}] > > Assuming that your record array is only 1D: > > In [6]: r.dtype.names > Out[6]: ('name', 'age', 'weight') > > In [7]: names = r.dtype.names > > In [8]: [dict(zip(names, record)) for record in r] > Out[8]: > [{'age': 31, 'name': 'Bill', 'weight': 260.0}, > {'age': 15, 'name': 'Fred', 'weight': 145.0}] > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ranavishal at gmail.com Thu Feb 4 19:17:09 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Thu, 4 Feb 2010 16:17:09 -0800 Subject: [Numpy-discussion] Efficiently converting numpy record array to a list of dictionary In-Reply-To: <4B6B620A.2090503@enthought.com> References: <3d375d731002041604t28251c05mbc9a559ff0406650@mail.gmail.com> <4B6B620A.2090503@enthought.com> Message-ID: Warren, thanks for the information. Vishal Charles de Gaulle - "The better I get to know men, the more I find myself loving dogs." 
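For reference, Robert's zip-based recipe runs end to end like this — a sketch only (note that on Python 3 an 'S30' field comes back as bytes, so 'Bill' reads as b'Bill'):

```python
import numpy as np

recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)]
r = np.rec.fromrecords(recs, names='name, age, weight',
                       formats='S30, i2, f4')

# Iterating the record array yields one record per row; zipping the
# field names against each record rebuilds the dictionaries.
names = r.dtype.names
dicts = [dict(zip(names, record)) for record in r]
print(dicts[1]['age'])
```

Warren's point holds as well: r[0]['name'] already gives per-field access without building any dictionaries, so the conversion only pays off if downstream code genuinely needs plain dicts.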
On Thu, Feb 4, 2010 at 4:10 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > Vishal, > > Robert's code does the trick, but--in case you are new to numpy record > arrays-I thought I'd point out that the array itself already acts like a > list of dictionaries: > > In [6]: import numpy as np > > In [7]: dt = np.dtype([('name', 'S30'),('age',int),('weight',float)]) > > In [8]: r = np.array([('Bill',31, 260.0), ('Fred', 15, 145.0)], dtype=dt) > > In [9]: r[0]['name'] > Out[9]: 'Bill' > > In [10]: r[1]['age'] > Out[10]: 15 > > > Warren > > > > > Robert Kern wrote: > > On Thu, Feb 4, 2010 at 16:26, Vishal Rana wrote: > > > >> How do I convert the numpy record array below: > >> recs = [('Bill', 31, 260.0), ('Fred', 15, 145.0)] > >> r = rec.fromrecords(recs, names='name, age, weight', formats='S30, i2, > f4') > >> to a list of dictionary like: > >> [{'name': 'Bill', 'age': 31, 'weight': 260.0}, > >> 'name': 'Fred', 'age': 15, 'weight': 145.0}] > >> > > > > Assuming that your record array is only 1D: > > > > In [6]: r.dtype.names > > Out[6]: ('name', 'age', 'weight') > > > > In [7]: names = r.dtype.names > > > > In [8]: [dict(zip(names, record)) for record in r] > > Out[8]: > > [{'age': 31, 'name': 'Bill', 'weight': 260.0}, > > {'age': 15, 'name': 'Fred', 'weight': 145.0}] > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Feb 4 21:09:09 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 5 Feb 2010 11:09:09 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1265314513.9154.76.camel@idol> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> Message-ID: <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> On Fri, Feb 5, 2010 at 5:15 AM, Pauli Virtanen wrote: > to, 2010-02-04 kello 12:09 -0700, Charles R Harris kirjoitti: >> Realistically, I don't think Py3k will be ready by April-May. Fall is >> probably doable and maybe there will be some things for a SOC person >> to work on this summer. > > Well, we have many components of Py3 support in place in the SVN > trunk. 1470 of 1983 unit tests currently pass. (Except that some recent > commit introduced some Py3-breaking Python code.) > > Main things missing: > > - String arrays, and other worms out of the str/unicode/bytes can. > ?I've audited many parts of the C code -- there are 66 remaining > ?unchecked points where PyString -> PyBytes is assumed without much > ?thinking (grep PyString). > > ?The Python side (= mainly the tests), however, needs more love. > ?Several tests break at the moment since array(['a']).dtype == 'U' > ?on Py3. > > - There's also the decision about whether 'S' == bytes or str. > ?If the former, the tests need fixing, if the latter, the code needs > ?fixing. > > - Consuming PEP 3118 buffer arrays so that shape and strides are > ?correctly acquired. This could in the end allow ufuncs to > ?operate on PEP 3118 arrays. > > ?The buffer provider support is already done, also for Py2.6. > > - Rewriting fromfile to use the low-level I/O. Py3 does not support > ?getting FILE* pointers out, and the handle from fdopen is currently > ?left dangling. > > That's certainly a manageable amount of work. 
For me, however, I believe > real life(TM) will still intrude a bit on my working on Numpy before the > summer, so I can't promise that by April-May all of the test suite > passes on Py3. With some luck it might be possible, though, and > certainly by fall. > > I hoped to have Py3 support the only bigger change in the next release, > so that the number of moving pieces of code would be kept down. It does > probably not require an ABI break. > > But if an ABI break is in the pipeline, it might make sense to put it > out before "officially" supporting Py3. Thanks for the thorough report, it gives me a better idea of what is left to be done. > > *** > > So how to proceed? > > I would prefer if new code introduced in the rewrite would compile and > work correctly both on Py2 and Py3. I wouldn't expect this to be a high > overhead, and it would remove the need for a separate Py3 branch. I think a py3k buildbot would help for this, right ? Another thing is that the py3k changes do not work at all with Visual Studio compilers, but that's mostly cosmetic things (like #warning not being supported and things like that). > Most C code that works on Py2 works also on Py3. Py3 mainly means not > using PyString, but choosing between Unicode + Bytes + UString (=Bytes > for Py2 & Unicode for Py3). Also, it may be necessary to avoid FILE* > pointers in the API (on Py3 those are no longer as easily obtained), and > be wary when working with buffers. So once the py3k support is in place, should we deprecate those functions so that people interested in porting to py3k can plan in advance ? Getting rid of FILE* pointers and file descriptors would also help quite a bit on windows. I know that at some point, there were some discussions to make the python C API safe to multiple C runtimes, but I cannot find any recent discussion on that fact. I should just ask on python-dev, I guess. This would be a great relief if we don't have to care about those issues anymore.
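The "no longer as easily obtained" part above can at least be sketched from the Python side: on Py3 a C extension has to dup() the descriptor, fdopen() it, and fclose() the result itself, and the same ownership discipline looks like this with os.dup/os.fdopen (an illustrative sketch, not NumPy code):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, 'wb') as f:
        f.write(b'\x00\x01\x02')

    with open(path, 'rb') as f:
        # Duplicate the descriptor: the consumer now owns a separate
        # handle and is responsible for closing it, mirroring the
        # dup()/fdopen()/fclose() dance at the C level.
        g = os.fdopen(os.dup(f.fileno()), 'rb')
        try:
            data = g.read()
        finally:
            g.close()
finally:
    os.remove(path)

print(data == b'\x00\x01\x02')
```

Forgetting the explicit close on the duplicated handle is exactly the kind of leak that makes FILE*-based APIs awkward to keep in the public interface.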
> I assume the rewrite will be worked on a separate SVN branch? Also, is > there a plan yet on what needs changing to make Numpy's ABI more > resistant? There are two issues: - What we currently mean by ABI, that is the ABI for a given python version. The main issue is the binary layout of the structures (I think the function ordering is pretty solid now, it is difficult to change it inadvertently). The only way to fix this is to hide the content of those structures, and define the structures in the C code instead (opaque pointer, also known as the pimpl idiom). This means a massive break of the C API, both internally and externally, but that's something that is really needed IMO. - Higher goal: ABI across python versions. This is motivated by PEP 384. It means avoiding calls to API which are not "safe". I have no idea whether it is possible, but that's something to keep in mind once we start a major overhaul. cheers, David From david at silveregg.co.jp Thu Feb 4 21:43:21 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 05 Feb 2010 11:43:21 +0900 Subject: [Numpy-discussion] Audio signal capture and processing In-Reply-To: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> References: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> Message-ID: <4B6B85C9.2010902@silveregg.co.jp> Gerardo Gutierrez wrote: > Hello. > > I'm working with audio signals with wavelet analysis, and I want to know > if someone has worked with some audio capture (with the mic and through a > file) library so that I can get the time-series... I think the easiest for now is to record things in a file, and read this file using a library. I believe audiolab is the most complete solution when working with NumPy: I implemented audiolab to replace the missing functions in matlab, and it supports a lot of file formats as well as reading huge audio files without loading everything in memory. Its main drawback is the dependency on libsndfile.
You can play numpy arrays with the play function (it uses ALSA) - having a record function would be good as well, but I never took the time to implement it (hint, hint :) ). cheers, David From ralf.gommers at googlemail.com Thu Feb 4 23:16:00 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 5 Feb 2010 12:16:00 +0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> Message-ID: On Fri, Feb 5, 2010 at 3:09 AM, Charles R Harris wrote: > > I put 1.4.1 4-6 wks out to give the apprentice release guys some time. > > Thanks, a few weeks would be useful. I have been able to build an OS X installer, and am now trying to get Wine into shape. Patrick told me he set up a build env on Windows and was starting to do the same on OS X. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Feb 5 02:22:57 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 5 Feb 2010 16:22:57 +0900 Subject: [Numpy-discussion] wired error message in scipy.sparse.eigen function: Segmentation fault In-Reply-To: <4B612A8D.3060401@silveregg.co.jp> References: <4B60EC22.5070001@gmail.com> <4B60FF24.7040504@silveregg.co.jp> <4B6102B6.400@gmail.com> <4B610625.4060303@silveregg.co.jp> <4B611EEE.8040500@gmail.com> <4B612A8D.3060401@silveregg.co.jp> Message-ID: <5b8d13221002042322s53da661bl8df8096c4a656d30@mail.gmail.com> On Thu, Jan 28, 2010 at 3:11 PM, David Cournapeau wrote: > Jankins wrote: >> Yes. I am using scipy.sparse.linalg.eigen.arpack. 
>> >> The exact output is: >> >> /usr/local/lib/python2.6/dist-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so > > I need the output of ldd on this file, actually, i.e the output of "ldd > /usr/local/lib/python2.6/dist-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so". > It should output the libraries actually loaded by the OS. > >> In fact, the matrix is from a directed graph with about 18,000 nodes and >> 41,000 edges. Actually, this matrix is the smallest one I used. > > Is it available somewhere ? 41000 edges should make the matrix very > sparse. I first thought that your problem may be some buggy ATLAS, but > the current arpack interface (the one used by sparse.linalg.eigen) is > also quite buggy in my experience, though I could not reproduce it. > Having a matrix which consistently reproduce the bug would be very useful. Ok, I took a look at it, and unfortunately, it is indeed most likely an ATLAS problem. I get crashes when scipy is linked against Atlas (v3.8.3), but if I link against plain BLAS/LAPACK, I don't get any crash anymore (and valgrind does not complain). I will try with a recent development from atlas, cheers, David From pav at iki.fi Fri Feb 5 05:00:44 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 05 Feb 2010 12:00:44 +0200 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> Message-ID: <1265364044.16269.54.camel@talisman> pe, 2010-02-05 kello 11:09 +0900, David Cournapeau kirjoitti: [clip] > I think a py3k buildbot would help for this, right ? 
Another thing is > that the py3k changes do not work at all with Visual Studio compilers, > but that's mostly cosmetic things (like #warning not being supported > and things like that). There's a Py3 buildbot at http://buildbot.scipy.org/builders/Linux_x86_Ubuntu/builds/319/steps/shell_1/logs/stdio It also runs 2.4, 2.5 and 2.6 -- the 3.1 results are at the end. > > Most C code that works on Py2 works also on Py3. Py3 mainly means not > > using PyString, but choosing between Unicode + Bytes + UString (=Bytes > > for Py2 & Unicode for Py3). Also, it may be necessary to avoid FILE* > > pointers in the API (on Py3 those are no longer as easily obtained), and > > be wary when working with buffers. > > So once the py3k support is in place, should we deprecate those > functions so that people interested in porting to py3k can plan in > advance? For Py3 users APIs with FILE* pointers are somewhat awkward since you need to dup and fdopen to get FILE* pointers, and remember to fclose the handles afterward. > Getting rid of FILE* pointers and file descriptor would also helps > quite a bit on windows. I know that at some point, there were some > discussions to make the python C API safe to multiple C runtimes, but > I cannot find any recent discussion on that fact. I should just ask on > python-dev, I guess. This would be a great relief if we don't have to > care about those issues anymore. Python 3 does have some functions for reading/writing data from PyFile objects directly, but these are fairly inadequate, http://docs.python.org/3.1/c-api/file.html so I guess we're stuck with the C runtime in any case. > > I assume the rewrite will be worked on a separate SVN branch? Also, is > > there a plan yet on what needs changing to make Numpy's ABI more > > resistant? > > There are two issues: > - What we currently means by ABI, that is the ABI for a given python > version. 
The main issue is the binary layout of the structures (I > think the function ordering is pretty solid now, it is difficult to > change it inadvertently). The only way to fix this is to hide the > content of those structures, and define the structures in the C code > instead (opaque pointer, also known as the pimpl idiom). This means a > massive break of the C API, both internally and externally, but that's > something that is really needed IMO. > - Higher goal: ABI across python versions. This is motivated by PEP > 384. It means avoiding calls to API which are not "safe". I have no > idea whether it is possible, but that's something to keep in mind once > we start a major overhaul. Making structures opaque is a bit worrying. As far as I understand, so far the API has been nearly compatible with Numeric. Making the structures opaque is going to break both our and many other people's code. This is a bit worrying... How about a less damaging route: add reserved space to critical points in the structs, and keep appending new members only at the end? The Cython issue will probably be mostly resolved by new Cython releases before the Numpy 2.0 would be out. -- Pauli Virtanen From david at silveregg.co.jp Fri Feb 5 05:17:31 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 05 Feb 2010 19:17:31 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1265364044.16269.54.camel@talisman> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> <1265364044.16269.54.camel@talisman> Message-ID: <4B6BF03B.6000706@silveregg.co.jp> Pauli Virtanen wrote: > > Making structures opaque is a bit worrying. As far as I understand, so > far the API has been nearly compatible with Numeric. I assumed that we would simply give up the Numeric compatibility - does it really matter for a NumPy which is at best out in 2011/2012 ? It is not like NumPy 1.X is going away soon in any case. OTOH, I don't used any code based numeric, so I understand it is easy to say for me :) Also, I would have hoped that some inconsistencies w.r.t. reference counting could be fixed. It is my understanding that those are mostly a consequence of how Numeric used to do things. > Making the > structures opaque is going to break both our and many other people's > code. This is a bit worrying... > > How about a less damaging route: add reserved space to critical points > in the structs, and keep appending new members only at the end? I don't think it would help much. It requires to know where changes are needed, and I don't think it is really possible. The goal would be to keep a compatible ABI throughout the whole 2.x series. Maybe it would be possible to develop some automatic conversion scripts ala 2to3, but for the C code, to make the transition. Anything related to changes from direct access to accessors should be fairly automatic. > The > Cython issue will probably be mostly resolved by new Cython releases > before the Numpy 2.0 would be out. 
It is already solved - I mentioned earlier that by removing datetime as a dtype (but keeping the metadata structure), and by regenerating the few cython files with Cython 0.12.1, the ABI is kept compatible (at least as far as scipy constitutes a reasonable test). cheers, David From matthew.brett at gmail.com Fri Feb 5 05:45:04 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 5 Feb 2010 10:45:04 +0000 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> Message-ID: <1e2af89e1002050245t1e3d25f3yca4ce87d8ea203b9@mail.gmail.com> Hi, > Getting rid of FILE* pointers and file descriptor would also helps > quite a bit on windows. I know that at some point, there were some > discussions to make the python C API safe to multiple C runtimes, but > I cannot find any recent discussion on that fact. I should just ask on > python-dev, I guess. This would be a great relief if we don't have to > care about those issues anymore. Just to say that when Guido visited Berkeley a while back he was encouraging us strongly to contact the python-dev list for any help we needed to port to Py3k - so I'd imagine you'd get a good reception... 
See you, Matthew From renesd at gmail.com Fri Feb 5 05:49:07 2010 From: renesd at gmail.com (=?ISO-8859-1?Q?Ren=E9_Dudfield?=) Date: Fri, 5 Feb 2010 10:49:07 +0000 Subject: [Numpy-discussion] Audio signal capture and processing In-Reply-To: <4B6B85C9.2010902@silveregg.co.jp> References: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> <4B6B85C9.2010902@silveregg.co.jp> Message-ID: <64ddb72c1002050249wacb735chfa7a783981c0e135@mail.gmail.com> hi, pyaudio is pretty good for recording audio. It is based on portaudio and has binaries available for win/mac - and is included in many linux distros too (so is pygame). You can load, and play audio with pygame. You can use the pygame.sndarray module for converting the pygame.Sound objects into numpy arrays. apt-get install python-pygame import pygame, pygame.sndarray, sys fname = sys.argv[1] pygame.init() sound = pygame.mixer.Sound(fname) an_array = pygame.sndarray.array(sound) Also see the sndarray demo: pygame.examples.sound_array_demos. `python -m pygame.examples.sound_array_demos` Other sndarray using examples can be found on pygame.org with the search function. Also audiolab uses bindings to libsndfile - so you can open a number of formats. However it is pretty new, so isn't packaged by distros(yet), and there are no mac binaries(yet). It's probably the best way to go if you can handle compiling it yourself and the dependency. cheers, -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Fri Feb 5 06:04:55 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 05 Feb 2010 20:04:55 +0900 Subject: [Numpy-discussion] Audio signal capture and processing In-Reply-To: <64ddb72c1002050249wacb735chfa7a783981c0e135@mail.gmail.com> References: <2d5cbc321002041118o5ca2d1afx7294cb547ea4f02f@mail.gmail.com> <4B6B85C9.2010902@silveregg.co.jp> <64ddb72c1002050249wacb735chfa7a783981c0e135@mail.gmail.com> Message-ID: <4B6BFB57.1080302@silveregg.co.jp> Ren? 
Dudfield wrote: > > Also audiolab uses bindings to libsndfile - so you can open a number of > formats. However it is pretty new, so isn't packaged by distros(yet), > and there are no mac binaries(yet). It's probably the best way to go if > you can handle compiling it yourself and the dependency. There are actually Mac binaries, just not for the last version: http://pypi.python.org/pypi/scikits.audiolab/0.10.0 David From ndbecker2 at gmail.com Fri Feb 5 06:47:06 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 05 Feb 2010 06:47:06 -0500 Subject: [Numpy-discussion] u in [u+1] Message-ID: I'm having some trouble here. I have a list of numpy arrays. I want to know if an array 'u' is in the list. As an example, u = np.arange(10) : u not in [u+1] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/nbecker/raysat/test/ in () ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() What would be the way to do this? From zachary.pincus at yale.edu Fri Feb 5 08:48:35 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 5 Feb 2010 08:48:35 -0500 Subject: [Numpy-discussion] u in [u+1] In-Reply-To: References: Message-ID: <253A1DDC-5D27-4A9D-A597-83E36CA0C537@yale.edu> > I'm having some trouble here. I have a list of numpy arrays. I > want to > know if an array 'u' is in the list. Try: any(numpy.all(u == l) for l in array_list) standard caveats about float comparisons apply; perhaps any(numpy.allclose(u, l) for l in array_list) is more appropriate in certain circumstances. Can of course replace the first 'any' with 'all' or 'sum' to get different kinds of information, but using 'any' is equivalent to the 'in' query that you wanted. Why the 'in' operator below fails is that behind the scenes, 'u not in [u+1]' causes Python to iterate through the list testing each element for equality with u. 
Except that as the error states, arrays don't support testing for equality because such tests are ambiguous. (cf. many threads about this.) Zach On Feb 5, 2010, at 6:47 AM, Neal Becker wrote: > I'm having some trouble here. I have a list of numpy arrays. I > want to > know if an array 'u' is in the list. > > As an example, > u = np.arange(10) > > : u not in [u+1] > --------------------------------------------------------------------------- > ValueError Traceback (most recent > call last) > > /home/nbecker/raysat/test/ in () > > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > What would be the way to do this? > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From josef.pktd at gmail.com Fri Feb 5 08:53:15 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 5 Feb 2010 08:53:15 -0500 Subject: [Numpy-discussion] u in [u+1] In-Reply-To: <253A1DDC-5D27-4A9D-A597-83E36CA0C537@yale.edu> References: <253A1DDC-5D27-4A9D-A597-83E36CA0C537@yale.edu> Message-ID: <1cd32cbb1002050553h1c1a74eds69f8ed15578cefa8@mail.gmail.com> On Fri, Feb 5, 2010 at 8:48 AM, Zachary Pincus wrote: >> I'm having some trouble here. ?I have a list of numpy arrays. ?I >> want to >> know if an array 'u' is in the list. > > Try: > > any(numpy.all(u == l) for l in array_list) > > standard caveats about float comparisons apply; perhaps > any(numpy.allclose(u, l) for l in array_list) > is more appropriate in certain circumstances. > > Can of course replace the first 'any' with 'all' or 'sum' to get > different kinds of information, but using 'any' is equivalent to the > 'in' query that you wanted. > > Why the 'in' operator below fails is that behind the scenes, 'u not in > [u+1]' causes Python to iterate through the list testing each element > for equality with u. 
Except that as the error states, arrays don't > support testing for equality because such tests are ambiguous. (cf. > many threads about this.) > > Zach > > > On Feb 5, 2010, at 6:47 AM, Neal Becker wrote: > >> I'm having some trouble here. I have a list of numpy arrays. I >> want to >> know if an array 'u' is in the list. >> >> As an example, >> u = np.arange(10) >> >> : u not in [u+1] >> --------------------------------------------------------------------------- >> ValueError                                Traceback (most recent >> call last) >> >> /home/nbecker/raysat/test/ in () >> >> ValueError: The truth value of an array with more than one element is >> ambiguous. Use a.any() or a.all() >> >> What would be the way to do this? >> maybe np.in1d(u, u+1) or np.in1d(u,u+1).all() is what you want >>> help(np.in1d) Help on function in1d in module numpy.lib.arraysetops: in1d(ar1, ar2, assume_unique=False) Test whether each element of a 1D array is also present in a second array. Returns a boolean array the same length as `ar1` that is True where an element of `ar1` is in `ar2` and False otherwise. Josef From kwgoodman at gmail.com Fri Feb 5 13:26:37 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 5 Feb 2010 10:26:37 -0800 Subject: [Numpy-discussion] How long does it take to create an array? Message-ID: Why is the second method of converting a list of tuples to an array so much faster? >> x = range(500) >> x = [(z,) for z in x] # <-- e.g. output of a sql database >> x[:5] [(0,), (1,), (2,), (3,), (4,)] >> >> timeit np.array(x).reshape(-1) # <-- slow 1000 loops, best of 3: 832 us per loop >> timeit np.array([z[0] for z in x]) 10000 loops, best of 3: 106 us per loop # <-- fast Is it a fixed overhead advantage?
Doesn't seem so: >> x = range(50000) >> x = [[z] for z in x] >> timeit np.array(x).reshape(-1) 10 loops, best of 3: 83 ms per loop >> timeit np.array([z[0] for z in x]) 100 loops, best of 3: 9.81 ms per loop So it is probably faster to make a 1d array and reshape it: >> timeit np.array([[1,2], [3,4], [5,6]]) 100000 loops, best of 3: 11.8 us per loop >> timeit np.array([1,2,3,4,5,6]).reshape(-1,2) 100000 loops, best of 3: 6.62 us per loop Yep. From amcmorl at gmail.com Fri Feb 5 15:12:49 2010 From: amcmorl at gmail.com (Angus McMorland) Date: Fri, 5 Feb 2010 15:12:49 -0500 Subject: [Numpy-discussion] Conversion of matlab import containing objects into 3d array Message-ID: Hi all, I'm trying to import data from a matlab file using scipy.io.loadmat. One of the variables in the file imports as an array of shape (51,) of dtype object, with each element being an array of shape (23,100) of dtype float. How do I convert this array into a single array of dtype float with shape (51,23,100)? objarr.astype(float), which I thought might work (from [1]), gives me the error "ValueError: setting an array element with a sequence.". [1] http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/2998408 Many thanks for any help, Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From Chris.Barker at noaa.gov Fri Feb 5 15:32:59 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 05 Feb 2010 12:32:59 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> Message-ID: <4B6C807B.8050808@noaa.gov> Hi folks, It sounds like a consensus has been reached to put out a 1.4.1 that is ABI compatible with 1.3.* If that's the case, and particularly if it's going to be a while before 1.4.1 is ready, I suggest that the 1.4.0 release be pulled from "current release" status on the download sites. We really don't need anyone else getting caught up in this. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Fri Feb 5 15:29:48 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 5 Feb 2010 14:29:48 -0600 Subject: [Numpy-discussion] How long does it take to create an array? In-Reply-To: References: Message-ID: <3d375d731002051229j2a5bc1e9qb1d1d347a96488a5@mail.gmail.com> On Fri, Feb 5, 2010 at 12:26, Keith Goodman wrote: > Why is the second method of converting a list of tuples to an array so > much faster? > >>> x = range(500) >>> x = [(z,) for z in x] # <-- e.g. output of a sql database >>> x[:5] >   [(0,), (1,), (2,), (3,), (4,)] >>> >>> timeit np.array(x).reshape(-1)  # <-- slow > 1000 loops, best of 3: 832 us per loop >>> timeit np.array([z[0] for z in x]) > 10000 loops, best of 3: 106 us per loop  # <-- fast When array() gets a sequence of sequences, it has to do more work to figure out the appropriate shape.
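For instance, a rough illustration of the difference (all names are standard numpy/stdlib; absolute timings are machine-dependent, so only the relative comparison matters):

```python
import numpy as np
from timeit import timeit

# Rows as 1-tuples, e.g. what a sql database cursor hands back.
x = [(z,) for z in range(500)]

a = np.array(x).reshape(-1)      # nested input: shape must be inferred per row
b = np.array([z[0] for z in x])  # flat input: no per-row inspection needed
print(np.array_equal(a, b))      # both routes give the same 1-d array

t_nested = timeit(lambda: np.array(x).reshape(-1), number=200)
t_flat = timeit(lambda: np.array([z[0] for z in x]), number=200)
print(t_nested > t_flat)         # the nested form is consistently slower
```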
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Fri Feb 5 15:59:20 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 5 Feb 2010 21:59:20 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B6C807B.8050808@noaa.gov> References: <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> Message-ID: <20100205205920.GB17355@phare.normalesup.org> On Fri, Feb 05, 2010 at 12:32:59PM -0800, Christopher Barker wrote: > Hi folks, > It sounds like a consensus has been reached to put out a 1.4.1 that is > ABI compatible with 1.3.* > If that's the case, and particularly if it's going to be a while before > 1.4.1 is ready, I suggest that the 1.4.0 release be pulled from "current > release" status on the download sites. +1. If the decision is as you say, I agree with you. Gaël From matthew.brett at gmail.com Fri Feb 5 16:08:38 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 5 Feb 2010 21:08:38 +0000 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <20100205205920.GB17355@phare.normalesup.org> References: <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> <20100205205920.GB17355@phare.normalesup.org> Message-ID: <1e2af89e1002051308uf113ec9j182d77672c9cfb81@mail.gmail.com> Hi, >> If that's the case, and particularly if it's going to be a while before >> 1.4.1 is ready, I suggest that the 1.4.0 release be pulled from "current >> release" status on the download sites. > > +1. If the decision is as you say, I agree with you. That seems reasonable to me too... Best, Matthew From matthew.brett at gmail.com Fri Feb 5 16:22:50 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 5 Feb 2010 21:22:50 +0000 Subject: [Numpy-discussion] Conversion of matlab import containing objects into 3d array In-Reply-To: References: Message-ID: <1e2af89e1002051322v44b8cea4kaa53810a21f3b371@mail.gmail.com> Hi, > I'm trying to import data from a matlab file using scipy.io.loadmat. > One of the variables in the file imports as an array of shape (51,) of > dtype object, with each element being an array of shape (23,100) of > dtype float. How do I convert this array into a single array of dtype > float with shape (51,23,100)? objarr.astype(float), which I thought > might work (from [1]), gives me the error "ValueError: setting an > array element with a sequence.". I guess that your array started life as a matlab cell array of shape (51,1). As far as I know you'd have to convert long-hand: np.concatenate(list(a), axis=0).reshape((51,23,100)) sort of thing...
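Untested against your actual .mat file, but here is a sketch with a mocked-up object array of the same layout (the filler data and the name `objarr` are just for illustration):

```python
import numpy as np

# Fake the loadmat result: a shape-(51,) object array whose elements
# are (23, 100) float arrays.
objarr = np.empty(51, dtype=object)
for i in range(51):
    objarr[i] = np.full((23, 100), float(i))

# Stack the 51 blocks along the first axis, then restore the 3-d shape.
out = np.concatenate(list(objarr), axis=0).reshape((51, 23, 100))
print(out.shape, out.dtype)  # (51, 23, 100) float64
```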
Best, Matthew From millman at berkeley.edu Fri Feb 5 18:24:57 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 5 Feb 2010 15:24:57 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B6C807B.8050808@noaa.gov> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> Message-ID: On Fri, Feb 5, 2010 at 12:32 PM, Christopher Barker wrote: > If that's the case, and particularly if it's going to be a while before > 1.4.1 is ready, I suggest that the 1.4.0 release be pulled from "current > release" status on the download sites. +1 From andyjian430074 at gmail.com Fri Feb 5 19:25:48 2010 From: andyjian430074 at gmail.com (Jankins) Date: Fri, 5 Feb 2010 18:25:48 -0600 Subject: [Numpy-discussion] wired error message in scipy.sparse.eigen function: Segmentation fault In-Reply-To: <5b8d13221002042322s53da661bl8df8096c4a656d30@mail.gmail.com> References: <4B60EC22.5070001@gmail.com> <4B60FF24.7040504@silveregg.co.jp> <4B6102B6.400@gmail.com> <4B610625.4060303@silveregg.co.jp> <4B611EEE.8040500@gmail.com> <4B612A8D.3060401@silveregg.co.jp> <5b8d13221002042322s53da661bl8df8096c4a656d30@mail.gmail.com> Message-ID: This problem has been bothering me for days. If you need more samples to test, I have one more. I tested it this morning, and the "segmentation fault" happened at a specific place. I guess, finally, I have to refer to the original eigenvalue algorithm or Matlab. Thanks. On Fri, Feb 5, 2010 at 1:22 AM, David Cournapeau wrote: > On Thu, Jan 28, 2010 at 3:11 PM, David Cournapeau > wrote: > > Jankins wrote: > >> Yes. I am using scipy.sparse.linalg.eigen.arpack.
> >> > >> The exact output is: > >> > >> > /usr/local/lib/python2.6/dist-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so > > > > I need the output of ldd on this file, actually, i.e. the output of "ldd > > /usr/local/lib/python2.6/dist-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so". > > It should output the libraries actually loaded by the OS. > > > >> In fact, the matrix is from a directed graph with about 18,000 nodes and > >> 41,000 edges. Actually, this matrix is the smallest one I used. > > > > Is it available somewhere ? 41000 edges should make the matrix very > > sparse. I first thought that your problem may be some buggy ATLAS, but > > the current arpack interface (the one used by sparse.linalg.eigen) is > > also quite buggy in my experience, though I could not reproduce it. > > Having a matrix which consistently reproduces the bug would be very > useful. > > Ok, I took a look at it, and unfortunately, it is indeed most likely > an ATLAS problem. I get crashes when scipy is linked against Atlas > (v3.8.3), but if I link against plain BLAS/LAPACK, I don't get any > crash anymore (and valgrind does not complain). > > I will try with a recent development from atlas, > > cheers, > > David From oliphant at enthought.com Fri Feb 5 22:16:00 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Fri, 5 Feb 2010 21:16:00 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <20100204195137.GA22445@phare.normalesup.org> References: <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <20100204195137.GA22445@phare.normalesup.org> Message-ID: <87A93D18-A627-4F2A-BCEC-21B2F95B04B3@enthought.com> On Feb 4, 2010, at 1:51 PM, Gael Varoquaux wrote: > I'd like to say that I am +1 with everything that has been said > against > breakage. This isn't the question at hand anymore. The only question at hand is what to label the non-ABI-breaking release. I actually feel pretty strongly that the version should be 1.3.9 and we should just admit that 1.4 series broke the ABI. We didn't mean for it to happen, but it did. Then, people can stay with 1.3.9 if they want and those that are comfortable with ABI breakage can use 1.4.1 and beyond. My timetable is: * We release 1.3.9 within days * We release 1.4.1 within a few weeks that keeps the datetime ABI change and adds additional pent-up ABI changes. Bringing in the Py3K transition discussion at this point is not necessary, but I think the 1.5 release in May provides improvements to the Py3K layer and sets up people who want to work on it over the summer. I have not heard any good arguments, yet, against calling the ABI-compatible release 1.3.9. And what Chris said is important to repeat: I have never supported nor endorsed breaking the ABI at every possible chance. In fact, my behavior has been the opposite. As far as I am aware (and I'm sure Robert will point out any hole in my awareness), the history of NumPy has been zero ABI breakage since 1.0 (that is over 3 years ago).
I do hear the majority saying "we need an ABI-compatible release" and I agree that it should happen ASAP. What to call it is less clear, so I want to be very clear that I feel pretty strongly that it should be called 1.3.9. -Travis From oliphant at enthought.com Fri Feb 5 22:25:34 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Fri, 5 Feb 2010 21:25:34 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B6C807B.8050808@noaa.gov> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B68E8B1.2090705@silveregg.co.jp> <3d375d731002021918k47ca3842n83ef5923f72fa070@mail.gmail.com> <4B68EC38.7050602@silveregg.co.jp> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> Message-ID: <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> On Feb 5, 2010, at 2:32 PM, Christopher Barker wrote: > Hi folks, > > It sounds like a consensus has been reached to put out a 1.4.1 that is > ABI compatible with 1.3.* This is not true. Consensus has not been reached. I think 1.3.9 should be released and 1.4.1 should be ABI incompatible. But, it is true, that we can and should pull the 1.4.0 release. -Travis From sierra_mtnview at sbcglobal.net Fri Feb 5 22:37:08 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Fri, 05 Feb 2010 19:37:08 -0800 Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL? Message-ID: <4B6CE3E4.2050201@sbcglobal.net> See Subject. I'm working in IDLE in Win7. It seems to me MPL gets stuck in site-packages under C:\Python25. Maybe this is as simple as deleting the entry? Well, yes there's a MPL folder under site-packages and an info MPL file of 540 bytes. There are also pylab.py, pyc, and py0 files under site.
What to do next? -- My life in two words. "Interrupted Projects." -- WTW (quote originator) From dsdale24 at gmail.com Fri Feb 5 22:59:55 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Fri, 5 Feb 2010 22:59:55 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> Message-ID: On Fri, Feb 5, 2010 at 10:25 PM, Travis Oliphant wrote: > > On Feb 5, 2010, at 2:32 PM, Christopher Barker wrote: > >> Hi folks, >> >> It sounds like a consensus has been reached to put out a 1.4.1 that is >> ABI compatible with 1.3.* > > This is not true. ? Consensus has not been reached. How many have registered opposition to the above proposal? > I think 1.3.9 should be released and 1.4.1 should be ABI incompatible. And then another planned break in numpy ABI compatibility in the foreseeable future, for the other items that have been discussed in this thread? I am still inclined to agree with David and Chuck in this instance. Regards, Darren From josef.pktd at gmail.com Sat Feb 6 00:01:24 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 6 Feb 2010 00:01:24 -0500 Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL? In-Reply-To: <4B6CE3E4.2050201@sbcglobal.net> References: <4B6CE3E4.2050201@sbcglobal.net> Message-ID: <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> On Fri, Feb 5, 2010 at 10:37 PM, Wayne Watson wrote: > See Subject. > > I'm working in IDLE in Win7. 
It seems to me MPL gets stuck in > site-packages under C:\Python25. Maybe this is as simple as deleting the > entry? What does it mean that MPL gets stuck? what kind of stuck? (My experience is only windowsXP not Win7) Often I just delete all directories and files for a package. However, if the package has been installed with an installer and not with easy_install or setup.py, there might be a removexxx, (removematplotlib) under/in the Python25 directory (I have Removematplotlib.exe for python24 but not for 25) and it might also be in the windows registry, try Add/Remove Programs or whatever the Win7 equivalent is. I just checked my Add/Remove Programs and I have several entries under python 2.5 that are orphans because I deleted the directories but didn't uninstall through an uninstaller, but again I see an entry for matplotlib only for python 2.4, so maybe matplotlib doesn't pollute the windows registry anymore. If you don't find any matplotlib uninstall (as in my case for Py2.5), you can just delete all files and directories in site-packages. Josef > > Well, yes there's a MPL folder under site-packages and an info MPL file > of 540 bytes. There ?are ?also pylab.py, pyc,and py0 files under site. > What to do next? > > > -- > My life in two words. "Interrupted Projects." -- WTW (quote originator) > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From oliphant at enthought.com Sat Feb 6 00:10:03 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Fri, 5 Feb 2010 23:10:03 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> Message-ID: On Feb 5, 2010, at 9:59 PM, Darren Dale wrote: > On Fri, Feb 5, 2010 at 10:25 PM, Travis Oliphant > wrote: >> >> On Feb 5, 2010, at 2:32 PM, Christopher Barker wrote: >> >>> Hi folks, >>> >>> It sounds like a consensus has been reached to put out a 1.4.1 >>> that is >>> ABI compatible with 1.3.* >> >> This is not true. Consensus has not been reached. > > How many have registered opposition to the above proposal? Even one opposing view is not a consensus. > >> I think 1.3.9 should be released and 1.4.1 should be ABI >> incompatible. > > And then another planned break in numpy ABI compatibility in the > foreseeable future, for the other items that have been discussed in > this thread? No, not at all. I don't see a need for any ABI incompatibility in the foreseeable future. Especially if we insert a few placeholders like Pauli proposed. I'm proposing to get the ABI breakage over and done with right now. This gives plenty of people time to adjust before the Py3K conversion. I don't see a reason to have another ABI discussion in May while we are also trying to have a Py3K discussion. -Travis From oliphant at enthought.com Sat Feb 6 01:53:28 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 6 Feb 2010 00:53:28 -0600 Subject: [Numpy-discussion] Conversion of matlab import containing objects into 3d array In-Reply-To: References: Message-ID: <533D99E7-275D-4C49-9229-9C4ADE208BAD@enthought.com> On Feb 5, 2010, at 2:12 PM, Angus McMorland wrote: > Hi all, > > I'm trying to import data from a matlab file using scipy.io.loadmat.
> One of the variables in the file imports as an array of shape (51,) of > dtype object, with each element being an array of shape (23,100) of > dtype float. How do I convert this array into a single array of dtype > float with shape (51,23,100)? objarr.astype(float), which I thought > might work (from [1]), gives me the error "ValueError: setting an > array element with a sequence.". > > [1] http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/2998408 > Something like this (assuming your array is named 'a'): np.array(list(a), dtype=float) -Travis From oliphant at enthought.com Sat Feb 6 02:07:33 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 6 Feb 2010 01:07:33 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002022046hc3089f7g80869e23d97ae6b5@mail.gmail.com> <4C2541A5-08CF-45C5-BAF1-161B8BC1273B@enthought.com> <4B6910DC.7070809@silveregg.co.jp> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> Message-ID: <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> Given all the discussions that have happened, I want to be clear about my proposal. It is: * 1.4.1 is an ABI break including datetime, hasobject, and a few placeholders in the structures * no future ABI breakages until after the Py3K transition (at least 18 months away) -- I don't foresee any future ABI changes at all, nor do I think we will need any ABI changes in the next 2 years. * 1.3.9 is a release with all the features of 1.4 except the ABI breaking date-time addition * 1.3.9 release occurs as soon as we can get it out (like next week --- I will commit Monday-Tuesday to do the date-time removal).
* 1.4.1 release occurs as soon as we can get it out with all the ABI changes we know about (which are already in 1.4.0 --- we just bump up the ABI version number). I would estimate a release by the end of February. I think this plan is the least disruptive and satisfies the concerns of all parties in the discussion. The other plans that have been proposed do not address my concerns of keeping the date-time changes and keeping the ABI disruption to an isolated point in the NumPy timeline. The other plans call for an additional ABI disruption in May and then perhaps more after Py3K. I don't see how this is better. -Travis From rpg.314 at gmail.com Sat Feb 6 02:08:44 2010 From: rpg.314 at gmail.com (Rohit Garg) Date: Sat, 6 Feb 2010 12:38:44 +0530 Subject: [Numpy-discussion] Porting numpy to python 3.x, status update Message-ID: <4d5dd8c21002052308s752c4a8byba6782afb444df1a@mail.gmail.com> Hi all, This says that planning for migration to python 3 has begun. http://blog.jarrodmillman.com/2009/11/numpy-14-coming-soon.html It has been a month since 1.4 was released. Is there a status page somewhere where one can check up on progress for the same? Is Python 3.x support planned for 1.5? This year, next year? Regards, -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics Indian Institute of Technology Bombay From oliphant at enthought.com Sat Feb 6 02:38:46 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 6 Feb 2010 01:38:46 -0600 Subject: [Numpy-discussion] Porting numpy to python 3.x, status update In-Reply-To: <4d5dd8c21002052308s752c4a8byba6782afb444df1a@mail.gmail.com> References: <4d5dd8c21002052308s752c4a8byba6782afb444df1a@mail.gmail.com> Message-ID: On Feb 6, 2010, at 1:08 AM, Rohit Garg wrote: > Hi all, > > This says that planning for migration to python 3 has begun. More than planning. Actually Pauli (and Chuck I believe) have made quite a bit of progress. Pauli just posted his roadmap of what needs to be finished.
It is difficult to predict when the work needed will be done. I think Pauli expects to have something close to finished by the end of the summer this year. Probably version 1.5 or 1.6 (depending on the outcome of some of the current discussion) sometime before the end of this year or early next is my guess for Python 3k support. -Travis From renesd at gmail.com Sat Feb 6 05:44:53 2010 From: renesd at gmail.com (=?ISO-8859-1?Q?Ren=E9_Dudfield?=) Date: Sat, 6 Feb 2010 10:44:53 +0000 Subject: [Numpy-discussion] numpy release process, adding a compatibility test step. Message-ID: <64ddb72c1002060244h748d29aanee32edb1b61ac56@mail.gmail.com> Hi, may I suggest an addition to the release process... 'Tests against popular libraries that rely on numpy at the RC stage. Test that at least these libraries pass their numpy-related tests: matplotlib, scipy, pygame, (insert others here?). The release manager should ask the mailing list for people to test the RC against these libraries to make sure they work ok.' - This will catch problems like the ABI one with the recent release. - It is an explicit request for testing, with concrete tasks that people can do. Rather than a more abstract request for testing. - not much extra work for the release manager. Testing work can be distributed to anyone who can try an app, or library with the binaries supplied in the RC stage of a release. - binary installer users can test as well, not just people who build from source. - tests libraries the numpy developers may not be using all the time themselves. On another project, adding this simple step helped prevent a number of bugs which affected other programs and libraries using the project. Before this was added we had a couple of releases which broke a few popular libraries/apps with very simple regressions or bugs.
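The import-level part of such a check could be scripted along these lines (the package list is purely illustrative; a real run would invoke each project's own test suite, not just a bare import):

```python
import importlib

# Downstream packages to smoke-test against a numpy release candidate.
candidates = ["scipy", "matplotlib", "pygame"]

results = {}
for name in candidates:
    try:
        importlib.import_module(name)
    except ImportError:
        results[name] = "not installed, skipped"
    except Exception as exc:  # e.g. an ABI mismatch surfacing at import time
        results[name] = "import failed: %s" % exc
    else:
        results[name] = "imports OK against this numpy"

for name in candidates:
    print("%s: %s" % (name, results[name]))
```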
cu, From sierra_mtnview at sbcglobal.net Sat Feb 6 06:43:17 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Sat, 06 Feb 2010 03:43:17 -0800 Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL? In-Reply-To: <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> References: <4B6CE3E4.2050201@sbcglobal.net> <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> Message-ID: <4B6D55D5.4070404@sbcglobal.net> stuck = placed. I'm pretty sure I used an msi file. I've brought this topic up in 3 forums where I would have thought people knew the answer. Yours is the first answer. I would have guessed that anyone dealing with libraries would have known the answer. Nor have I found anything in two Python books I've used. I think I'll look at them again. Google didn't even show anything. Thanks for the response. I'll try to clear manually the locations we've mentioned. On 2/5/2010 9:01 PM, josef.pktd at gmail.com wrote: > On Fri, Feb 5, 2010 at 10:37 PM, Wayne Watson > wrote: > >> See Subject. >> >> I'm working in IDLE in Win7. It seems to me MPL gets stuck in >> site-packages under C:\Python25. Maybe this is as simple as deleting the >> entry? >> > What does it mean that MPL gets stuck? what kind of stuck? > > (My experience is only windowsXP not Win7) > > Often I just delete all directories and files for a package. However, > if the package has been installed with an installer and not with > easy_install or setup.py, there might be a removexxx, > (removematplotlib) under/in the Python25 directory (I have > Removematplotlib.exe for python24 but not for 25) and it might also be > in the windows registry, try Add/Remove Programs or whatever the Win7 > equivalent is.
> > I just checked my Add/Remove Programs and I have several entries under > python 2.5 that are orphans because I deleted the directories but > didn't uninstall through an uninstaller, but again I see an entry for > matplotlib only for python 2.4, so maybe matplotlib doesn't pollute > the windows registry anymore. > > If you don't find any matplotlib uninstall (as in my case for Py2.5), > you can just delete all files and directories in site-packages. > > Josef > > > >> Well, yes there's a MPL folder under site-packages and an info MPL file >> of 540 bytes. There are also pylab.py, pyc,and py0 files under site. >> What to do next? >> >> >> -- >> My life in two words. "Interrupted Projects." -- WTW (quote originator) -- My life in two words. "Interrupted Projects." -- WTW (quote originator) From cournape at gmail.com Sat Feb 6 07:17:22 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 6 Feb 2010 21:17:22 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <4B6C807B.8050808@noaa.gov> <93B95E89-6071-4F98-9BD6-7A3AC3040AE2@enthought.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> Message-ID: <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant wrote: > Given all the discussions that have happened. I want to be clear > about my proposal.
It is: > > * 1.4.1 is an ABI break including datetime, hasobject, and a few > placeholders in the structures > * no future ABI breakages until after the Py3K transition (at least 18 > months away) -- I don't foresee any future ABI changes at all, nor do > I think we will need any ABI changes in the next 2 years. > * 1.3.9 is a release with all the features of 1.4 except the ABI > breaking date-time addition > * 1.3.9 release occurs as soon as we can get it out (like next week > --- I will commit Monday-Tuesday to do the date-time removal). > * 1.4.1 release occurs as soon as we can get it out with all the ABI > changes we know about (which are already in 1.4.0 --- we just bump up > the ABI version number). I would estimate a release by the end of > February. So it seems that there is an agreement on breaking the ABI only once overall. This is good. > I think this plan is the least disruptive and satisfies the concerns > of all parties in the discussion. The other plans that have been > proposed do not address my concerns of keeping the date-time changes In that regard, your proposal is very similar to what was suggested at the beginning - the difference is only whether breaking at 1.4.x or 1.5.x. I don't care that much about where (1.4.x vs 1.5.x) the datetime is pushed. But then the hasobject-related changes should be put altogether, to respect the goal of breaking the ABI only once. If you think it can be done for the end of February, then I don't see much point in releasing what you call 1.3.9, because I really don't want to have to put numpy-version specific scipy/matplotlib/whatever. The release with datetime changes will be the one to build scipy and matplotlib against (I will then focus on releasing scipy 0.8.0). 1.4.0 is then considered a broken release (I am removing the files from sourceforge).
cheers,

David

From faltet at pytables.org  Sat Feb  6 08:07:20 2010
From: faltet at pytables.org (Francesc Alted)
Date: Sat, 6 Feb 2010 14:07:20 +0100
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com>
	<5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com>
Message-ID: <201002061407.20123.faltet@pytables.org>

On Saturday 06 February 2010 13:17:22, David Cournapeau wrote:
> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant wrote:
> > I think this plan is the least disruptive and satisfies the concerns
> > of all parties in the discussion. The other plans that have been
> > proposed do not address my concerns of keeping the date-time changes
>
> In that regard, your proposal is very similar to what was suggested at
> the beginning - the difference is only whether breaking at 1.4.x or
> 1.5.x.

I'm wondering why we should be so conservative in raising version
numbers. Why not relabel 1.4.0 to 2.0 and mark 1.4.0 as a broken
release? Then, we can continue by putting everything except ABI
breaking features in 1.4.1. With this, NumPy 2.0 will remain available
for people wanting to be more on the bleeding edge. Something similar
to what has happened with Python 3.0, which has not prevented the 2.x
series from evolving.

How does this sound?

--
Francesc Alted

From aisaac at american.edu  Sat Feb  6 08:15:21 2010
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 06 Feb 2010 08:15:21 -0500
Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL?
In-Reply-To: <4B6D55D5.4070404@sbcglobal.net> References: <4B6CE3E4.2050201@sbcglobal.net> <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> <4B6D55D5.4070404@sbcglobal.net> Message-ID: <4B6D6B69.4020608@american.edu> You should be able to have Matplotlib in Python 2.5 and in Python 2.6, no problem. But you need to get the correct installer. There are separate installers for different Pythons. Alan Isaac From charlesr.harris at gmail.com Sat Feb 6 08:27:07 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 6 Feb 2010 06:27:07 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <201002061407.20123.faltet@pytables.org> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> Message-ID: On Sat, Feb 6, 2010 at 6:07 AM, Francesc Alted wrote: > A Saturday 06 February 2010 13:17:22 David Cournapeau escrigu?: > > On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant > wrote: > > > I think this plan is the least disruptive and satisfies the concerns > > > of all parties in the discussion. The other plans that have been > > > proposed do not address my concerns of keeping the date-time changes > > > > In that regard, your proposal is very similar to what was suggested at > > the beginning - the difference is only whether breaking at 1.4.x or > > 1.5.x. > > I'm thinking why should we so conservative in raising version numbers? Why > not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken release? Then, we > can > continue by putting everything except ABI breaking features in 1.4.1. With > this, NumPy 2.0 will remain available for people wanting to be more on-the- > bleeding-edge. Something similar to what has happened with Python 3.0, > which > has not prevented the 2.x series to evolve. > > How this sounds? 
> > I like the idea of pushing the version number of the ABI breaking release up to 2.0. We can't just relabel 1.4.0, though, because of the prospective hasobject addition. I think David is also concerned about having to support essentially two versions of Numpy, which would be a hassle. However, if Travis is willing to remove datetime from the current 1.4.0, then maybe that could be released with the understanding that the next release of Scipy will be built against the the ABI breaking version. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Feb 6 08:29:06 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 6 Feb 2010 08:29:06 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <201002061407.20123.faltet@pytables.org> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> Message-ID: <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> On Sat, Feb 6, 2010 at 8:07 AM, Francesc Alted wrote: > A Saturday 06 February 2010 13:17:22 David Cournapeau escrigu?: >> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant > wrote: >> > I think this plan is the least disruptive and satisfies the concerns >> > of all parties in the discussion. ?The other plans that have been >> > proposed do not address my concerns of keeping the date-time changes >> >> In that regard, your proposal is very similar to what was suggested at >> the beginning - the difference is only whether breaking at 1.4.x or >> 1.5.x. > > I'm thinking why should we so conservative in raising version numbers? ?Why > not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken release? ?Then, we can > continue by putting everything except ABI breaking features in 1.4.1. 
?With > this, NumPy 2.0 will remain available for people wanting to be more on-the- > bleeding-edge. ?Something similar to what has happened with Python 3.0, which > has not prevented the 2.x series to evolve. > > How this sounds? I think breaking with 1.5 sounds good because it starts the second part of the 1.x series. 2.0 could be for the big overhaul that David has in mind, unless it will not be necessary anymore Josef > -- > Francesc Alted > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Sat Feb 6 08:35:21 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 6 Feb 2010 06:35:21 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> Message-ID: On Sat, Feb 6, 2010 at 6:29 AM, wrote: > On Sat, Feb 6, 2010 at 8:07 AM, Francesc Alted > wrote: > > A Saturday 06 February 2010 13:17:22 David Cournapeau escrigu?: > >> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant > > > wrote: > >> > I think this plan is the least disruptive and satisfies the concerns > >> > of all parties in the discussion. The other plans that have been > >> > proposed do not address my concerns of keeping the date-time changes > >> > >> In that regard, your proposal is very similar to what was suggested at > >> the beginning - the difference is only whether breaking at 1.4.x or > >> 1.5.x. > > > > I'm thinking why should we so conservative in raising version numbers? > Why > > not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken release? 
Then, we > can > > continue by putting everything except ABI breaking features in 1.4.1. > With > > this, NumPy 2.0 will remain available for people wanting to be more > on-the- > > bleeding-edge. Something similar to what has happened with Python 3.0, > which > > has not prevented the 2.x series to evolve. > > > > How this sounds? > > I think breaking with 1.5 sounds good because it starts the second > part of the 1.x series. > 2.0 could be for the big overhaul that David has in mind, unless it > will not be necessary anymore > > Well, let's just go with David then. I think the important thing is to settle this and move on. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Sat Feb 6 08:36:24 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Sat, 6 Feb 2010 08:36:24 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> Message-ID: On Sat, Feb 6, 2010 at 8:29 AM, wrote: > On Sat, Feb 6, 2010 at 8:07 AM, Francesc Alted wrote: >> A Saturday 06 February 2010 13:17:22 David Cournapeau escrigu?: >>> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant >> wrote: >>> > I think this plan is the least disruptive and satisfies the concerns >>> > of all parties in the discussion. ?The other plans that have been >>> > proposed do not address my concerns of keeping the date-time changes >>> >>> In that regard, your proposal is very similar to what was suggested at >>> the beginning - the difference is only whether breaking at 1.4.x or >>> 1.5.x. >> >> I'm thinking why should we so conservative in raising version numbers? 
?Why >> not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken release? ?Then, we can >> continue by putting everything except ABI breaking features in 1.4.1. ?With >> this, NumPy 2.0 will remain available for people wanting to be more on-the- >> bleeding-edge. ?Something similar to what has happened with Python 3.0, which >> has not prevented the 2.x series to evolve. >> >> How this sounds? > > I think breaking with 1.5 sounds good because it starts the second > part of the 1.x series. > 2.0 could be for the big overhaul that David has in mind, unless it > will not be necessary anymore I don't understand why there is any debate about what to call a release that breaks ABI compatibility. Robert Kern already reminded the list of the "Report from SciPy" dated 2008-08-23: """ * The releases will be numbered major.minor.bugfix * There will be no ABI changes in minor releases * There will be no API changes in bugfix releases """ If numpy-2.0 suddenly shows up at sourceforge, people will either already be aware of the above convention, or if not they at least will be more likely to wonder what precipitated the jump and be more likely to read the release notes. Darren From cournape at gmail.com Sat Feb 6 08:38:15 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 6 Feb 2010 22:38:15 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> Message-ID: <5b8d13221002060538h38f32722u208d92fc50b8453b@mail.gmail.com> On Sat, Feb 6, 2010 at 10:29 PM, wrote: > On Sat, Feb 6, 2010 at 8:07 AM, Francesc Alted wrote: >> A Saturday 06 February 2010 13:17:22 David Cournapeau escrigu?: >>> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant >> wrote: >>> > I think this plan is the least disruptive and satisfies the concerns >>> > of all parties in the discussion. ?The other plans that have been >>> > proposed do not address my concerns of keeping the date-time changes >>> >>> In that regard, your proposal is very similar to what was suggested at >>> the beginning - the difference is only whether breaking at 1.4.x or >>> 1.5.x. >> >> I'm thinking why should we so conservative in raising version numbers? ?Why >> not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken release? ?Then, we can >> continue by putting everything except ABI breaking features in 1.4.1. ?With >> this, NumPy 2.0 will remain available for people wanting to be more on-the- >> bleeding-edge. ?Something similar to what has happened with Python 3.0, which >> has not prevented the 2.x series to evolve. >> >> How this sounds? > > I think breaking with 1.5 sounds good because it starts the second > part of the 1.x series. This is the original proposal, but one that not everybody agreed on it. I am just trying to find a middleground so that everybody is behind it. > 2.0 could be for the big overhaul that David has in mind, unless it > will not be necessary anymore It will still be necessary, but that's a more long term goal. 
I think it is relatively independent to this discussion since it is agreed that ABI will not be broken more than once. David From cournape at gmail.com Sat Feb 6 08:39:17 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 6 Feb 2010 22:39:17 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> Message-ID: <5b8d13221002060539y4feb0a43kfeff85d26c265c95@mail.gmail.com> On Sat, Feb 6, 2010 at 10:36 PM, Darren Dale wrote: > > I don't understand why there is any debate about what to call a > release that breaks ABI compatibility. Because it means datetime support will come late (in 2.0), and Travis wanted to get it early in. David From dsdale24 at gmail.com Sat Feb 6 08:44:58 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Sat, 6 Feb 2010 08:44:58 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002060539y4feb0a43kfeff85d26c265c95@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <1cd32cbb1002060529i76e0e41tb49353c838aa6bd4@mail.gmail.com> <5b8d13221002060539y4feb0a43kfeff85d26c265c95@mail.gmail.com> Message-ID: On Sat, Feb 6, 2010 at 8:39 AM, David Cournapeau wrote: > On Sat, Feb 6, 2010 at 10:36 PM, Darren Dale wrote: >> >> I don't understand why there is any debate about what to call a >> release that breaks ABI compatibility. > > Because it means datetime support will come late (in 2.0), and Travis > wanted to get it early in. Why does something called 2.0 have to come late? 
Why can't whatever near-term numpy release that breaks ABI
compatibility and includes datetime be called 2.0?

Darren

From vicentesolerfraile at hotmail.com  Sat Feb  6 08:54:14 2010
From: vicentesolerfraile at hotmail.com (Vicente Soler Fraile)
Date: Sat, 6 Feb 2010 14:54:14 +0100
Subject: [Numpy-discussion] Unable to install Numpy
Message-ID: 

Hello,

I try to install Numpy without success.

Windows 7
Python 2.6.4
Numpy            numpy-1.4.0-win32-superpack-python2.6.exe

The problem is that the installer does not find Python in the Windows
Registry. However, I've been using python for quite a long time now.

Since I really want to try Numpy, do you have any ideas as to what I
can do?

Thank you for your help
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Sat Feb  6 09:03:03 2010
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 6 Feb 2010 23:03:03 +0900
Subject: [Numpy-discussion] Unable to install Numpy
In-Reply-To: 
References: 
Message-ID: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com>

On Sat, Feb 6, 2010 at 10:54 PM, Vicente Soler Fraile wrote:
> Hello,
>
> I try to install Numpy without success.
>
> Windows 7
> Python 2.6.4
> Numpy            numpy-1.4.0-win32-superpack-python2.6.exe

Are you using a 32 bits python? We only provide 32 bits installers for
now on windows,

David

From josef.pktd at gmail.com  Sat Feb  6 09:29:16 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 6 Feb 2010 09:29:16 -0500
Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL?
In-Reply-To: 
References: <4B6CE3E4.2050201@sbcglobal.net>
	<1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com>
	<4B6D55D5.4070404@sbcglobal.net>
	<4B6D6B69.4020608@american.edu>
Message-ID: <1cd32cbb1002060629k44550281y9741b6edcd2ad5fb@mail.gmail.com>

On Sat, Feb 6, 2010 at 9:02 AM, Charles R Harris wrote:
>
> On Sat, Feb 6, 2010 at 6:15 AM, Alan G Isaac wrote:
>>
>> You should be able to have Matplotlib in Python 2.5
>> and in Python 2.6, no problem.
>>
>> But you need to get the correct installer.
>> There are separate installers for different Pythons.
>>
>
> I don't know if things have changed, but long ago when I was using windows
> more often I found it best to delete old installations of python when moving
> up to later versions.

I never had any problems on winXP with both python 2.4 and 2.5
installed. It's a bit of work to change all environment and path
settings to a new python version by hand. Someone wrote a python
script that updates all path and registry settings automatically. (I
don't remember on which blog I saw it.)
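For context on the registry lookups mentioned in this thread (the installer in Vicente's case, the path-fixing script in Josef's), Windows Python installs register themselves under a PythonCore "InstallPath" key. The exact hive and key path below are my assumption of where such a lookup would go, not taken from the superpack installer's source; the sketch is read-only and simply returns None off Windows or when no key is found (on Python 2.x the module was `_winreg` rather than `winreg`):

```python
import sys

# Assumed key path: a per-machine Python install writes it under HKLM,
# a per-user install under HKCU.
SUBKEY = r"Software\Python\PythonCore\2.6\InstallPath"

def find_python_installpath():
    if sys.platform != "win32":
        return None  # the registry only exists on Windows
    import winreg
    for hive in (winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER):
        try:
            # QueryValue with an empty value name reads the key's default value,
            # which for InstallPath is the install directory.
            return winreg.QueryValue(hive, SUBKEY)
        except OSError:
            pass  # key absent in this hive; try the next one
    return None

print(find_python_installpath())
```

If this returns None on a machine where Python is clearly installed, the install was registered under the other hive or not registered at all, which would explain an installer failing to locate it.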
Josef > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From aisaac at american.edu Sat Feb 6 10:19:03 2010 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 06 Feb 2010 10:19:03 -0500 Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL? In-Reply-To: References: <4B6CE3E4.2050201@sbcglobal.net> <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> <4B6D55D5.4070404@sbcglobal.net> <4B6D6B69.4020608@american.edu> Message-ID: <4B6D8867.8040403@american.edu> On 2/6/2010 9:02 AM, Charles R Harris wrote: > I don't know if things have changed, but long ago when I was using > windows more often I found it best to delete old installations of python > when moving up to later versions. I have had multiple versions running side by side for years, with never a problem. (I always use the official installers.) Well ok, one problem... By default the last Python you install becomes your system Python, so you need to pay attention to that. But you can always reset that. Do you recall what kinds of problems you ran into? fwiw, Alan Isaac From tevang3 at gmail.com Sat Feb 6 10:47:07 2010 From: tevang3 at gmail.com (Thomas Evangelidis) Date: Sat, 6 Feb 2010 15:47:07 +0000 Subject: [Numpy-discussion] error fromnumeric: 254 (repeat) return repeat(repeats, axis) In-Reply-To: <833e1faf1002060745u7837ec30we54d44cca03a0007@mail.gmail.com> References: <833e1faf1002060743r4cf86765m8b86acb041e5a59@mail.gmail.com> <833e1faf1002060745u7837ec30we54d44cca03a0007@mail.gmail.com> Message-ID: <833e1faf1002060747r7c6c9e39k6e2f8ee5c80e0936@mail.gmail.com> Dear programmers, I'm not familiar with numpy therefore I need a little help to debug code which was not written by me. 
The lines which generate the error are the following: index = N.concatenate( (index, [len_i]) ) > delta = index[1:] - index[:-1] > return N.repeat( mask, delta.astype( N.int32 ) ) > and this is the error message I get: PDBModel: 1619 (extendMask) return N.repeat( mask, delta.astype( N.int32 ) ) > functions: 19 (repeat) return np.repeat(a, repeats, axis) > fromnumeric: 254 (repeat) return repeat(repeats, axis) > below I provide you with the values of each variables in these 3 lines of code: index = [ 0 9 20 29 37 44 55 66 76 88 96 103 114 > 123 131 > 138 147 155 167 176 184 192 198 209 216 224 233 242 246 > 255 > 262 274 279 290 297 301 310 316 320 326 330 339 345 352 > 360 > 368 377 385 393 402 413 421 433 441 448 455 464 468 476 > 484 > 492 499 507 515 521 531 539 547 556 564 572 580 588 597 > 611 > 625 636 642 651 659 663 670 677 683 692 700 707 715 723 > 734 > 740 748 754 762 771 779 787 795 804 816 822 830 842 848 > 856 > 865 873 881 890 895 903 912 920 932 944 953 962 970 977 > 985 > 993 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 1099 > 1107 > 1114 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 1225 > 1233 > 1242 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 1340 > 1350 > 1358 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 1462 > 1466 > 1472 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 1568 > 1574 > 1582 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 1678 1686 > 1694 > 1703 1708 1715 1721 1727 1735 1743 1751 1760 1766 1775 1787 1795 1802 > 1811 > 1820 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 1923 > 1931 > 1939 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 2045 > 2053 > 2064 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 2170 > 2178 > 2186 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 > 2290] > > [len_i] = [2300] > > (index, [len_i]) = (array([ 0, 9, 20, 29, 37, 44, 55, > 66, 76, 88, 96, > 103, 114, 123, 131, 138, 147, 155, 
167, 176, 184, > 192, > 198, 209, 216, 224, 233, 242, 246, 255, 262, 274, > 279, > 290, 297, 301, 310, 316, 320, 326, 330, 339, 345, > 352, > 360, 368, 377, 385, 393, 402, 413, 421, 433, 441, > 448, > 455, 464, 468, 476, 484, 492, 499, 507, 515, 521, > 531, > 539, 547, 556, 564, 572, 580, 588, 597, 611, 625, > 636, > 642, 651, 659, 663, 670, 677, 683, 692, 700, 707, > 715, > 723, 734, 740, 748, 754, 762, 771, 779, 787, 795, > 804, > 816, 822, 830, 842, 848, 856, 865, 873, 881, 890, > 895, > 903, 912, 920, 932, 944, 953, 962, 970, 977, 985, > 993, > 1000, 1012, 1021, 1029, 1038, 1046, 1057, 1063, 1071, 1079, > 1085, > 1093, 1099, 1107, 1114, 1120, 1128, 1137, 1145, 1153, 1162, > 1170, > 1179, 1188, 1197, 1209, 1218, 1225, 1233, 1242, 1250, 1256, > 1264, > 1271, 1278, 1286, 1293, 1299, 1308, 1317, 1324, 1332, 1340, > 1350, > 1358, 1369, 1376, 1382, 1388, 1396, 1403, 1411, 1420, 1432, > 1440, > 1447, 1455, 1462, 1466, 1472, 1480, 1485, 1491, 1500, 1508, > 1514, > 1518, 1522, 1531, 1540, 1549, 1560, 1568, 1574, 1582, 1587, > 1598, > 1603, 1611, 1619, 1630, 1638, 1645, 1654, 1662, 1670, 1678, > 1686, > 1694, 1703, 1708, 1715, 1721, 1727, 1735, 1743, 1751, 1760, > 1766, > 1775, 1787, 1795, 1802, 1811, 1820, 1827, 1835, 1843, 1851, > 1859, > 1868, 1872, 1880, 1889, 1897, 1908, 1916, 1923, 1931, 1939, > 1947, > 1952, 1962, 1973, 1981, 1987, 1994, 2002, 2013, 2025, 2030, > 2038, > 2045, 2053, 2064, 2071, 2079, 2085, 2093, 2104, 2113, 2124, > 2130, > 2138, 2146, 2154, 2162, 2170, 2178, 2186, 2194, 2202, 2210, > 2218, > 2226, 2234, 2242, 2250, 2258, 2266, 2274, 2282, 2290]), [2300]) > > index[1:] = [ 9 20 29 37 44 55 66 76 88 96 103 114 > 123 131 138 > 147 155 167 176 184 192 198 209 216 224 233 242 246 255 > 262 > 274 279 290 297 301 310 316 320 326 330 339 345 352 360 > 368 > 377 385 393 402 413 421 433 441 448 455 464 468 476 484 > 492 > 499 507 515 521 531 539 547 556 564 572 580 588 597 611 > 625 > 636 642 651 659 663 670 677 683 692 700 707 715 723 734 > 740 > 748 754 
762 771 779 787 795 804 816 822 830 842 848 856 > 865 > 873 881 890 895 903 912 920 932 944 953 962 970 977 985 > 993 > 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 1099 1107 > 1114 > 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 1225 1233 > 1242 > 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 1340 1350 > 1358 > 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 1462 1466 > 1472 > 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 1568 1574 > 1582 > 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 1678 1686 1694 > 1703 > 1708 1715 1721 1727 1735 1743 1751 1760 1766 1775 1787 1795 1802 1811 > 1820 > 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 1923 1931 > 1939 > 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 2045 2053 > 2064 > 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 2170 2178 > 2186 > 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 2290 > 2300] > > index[:-1] = [ 0 9 20 29 37 44 55 66 76 88 96 103 > 114 123 131 > 138 147 155 167 176 184 192 198 209 216 224 233 242 246 > 255 > 262 274 279 290 297 301 310 316 320 326 330 339 345 352 > 360 > 368 377 385 393 402 413 421 433 441 448 455 464 468 476 > 484 > 492 499 507 515 521 531 539 547 556 564 572 580 588 597 > 611 > 625 636 642 651 659 663 670 677 683 692 700 707 715 723 > 734 > 740 748 754 762 771 779 787 795 804 816 822 830 842 848 > 856 > 865 873 881 890 895 903 912 920 932 944 953 962 970 977 > 985 > 993 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 1099 > 1107 > 1114 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 1225 > 1233 > 1242 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 1340 > 1350 > 1358 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 1462 > 1466 > 1472 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 1568 > 1574 > 1582 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 1678 1686 > 1694 > 1703 1708 1715 1721 1727 
1735 1743 1751 1760 1766 1775 1787 1795 1802 > 1811 > 1820 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 1923 > 1931 > 1939 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 2045 > 2053 > 2064 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 2170 > 2178 > 2186 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 > 2290] > mask = [ True True True True True True True True True True > True True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True False True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True > True > True True True True True True True True True True True True > True True True True True True True True True True True True > True True True True True True True True True True True True > True True True False False True True True True True True True > True True True True True True True True True True True True > True True True True True True True True True True True True > True True True True True True True True True True True True > True True True True True True True True True] > > N.int32 = > > delta.astype(N.int32) = [ 9 11 9 8 7 11 11 10 12 8 7 11 
9 8 7 9 > 8 12 9 8 8 6 11 7 8 > 9 9 4 9 7 12 5 11 7 4 9 6 4 6 4 9 6 7 8 8 9 8 8 9 11 > 8 12 8 7 7 9 4 8 8 8 7 8 8 6 10 8 8 9 8 8 8 8 9 14 14 > 11 6 9 8 4 7 7 6 9 8 7 8 8 11 6 8 6 8 9 8 8 8 9 12 6 > 8 12 6 8 9 8 8 9 5 8 9 8 12 12 9 9 8 7 8 8 7 12 9 8 9 > 8 11 6 8 8 6 8 6 8 7 6 8 9 8 8 9 8 9 9 9 12 9 7 8 9 > 8 6 8 7 7 8 7 6 9 9 7 8 8 10 8 11 7 6 6 8 7 8 9 12 8 > 7 8 7 4 6 8 5 6 9 8 6 4 4 9 9 9 11 8 6 8 5 11 5 8 8 > 11 8 7 9 8 8 8 8 8 9 5 7 6 6 8 8 8 9 6 9 12 8 7 9 9 > 7 8 8 8 8 9 4 8 9 8 11 8 7 8 8 8 5 10 11 8 6 7 8 11 12 > 5 8 7 8 11 7 8 6 8 11 9 11 6 8 8 8 8 8 8 8 8 8 8 8 8 > 8 8 8 8 8 8 8 8 10] > Do you have any idea what's wrong? Any advice will be greatly appreciated. Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sat Feb 6 11:04:16 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 06 Feb 2010 10:04:16 -0600 Subject: [Numpy-discussion] error fromnumeric: 254 (repeat) return repeat(repeats, axis) In-Reply-To: <833e1faf1002060747r7c6c9e39k6e2f8ee5c80e0936@mail.gmail.com> References: <833e1faf1002060743r4cf86765m8b86acb041e5a59@mail.gmail.com> <833e1faf1002060745u7837ec30we54d44cca03a0007@mail.gmail.com> <833e1faf1002060747r7c6c9e39k6e2f8ee5c80e0936@mail.gmail.com> Message-ID: <4B6D9300.8030001@enthought.com> Tom, 'mask' has 285 elements and 'delta' has 284 elements. If these are to be used as arguments of numpy.repeat(), they must be the same length. Warren Thomas Evangelidis wrote: > > Dear programmers, > > > I'm not familiar with numpy therefore I need a little help to debug > code which was not written by me. 
> > The lines which generate the error are the following: > > index = N.concatenate( (index, [len_i]) ) > delta = index[1:] - index[:-1] > return N.repeat( mask, delta.astype( N.int32 ) ) > > > and this is the error message I get: > > PDBModel: 1619 (extendMask) return N.repeat( mask, delta.astype( > N.int32 ) ) > functions: 19 (repeat) return np.repeat(a, repeats, axis) > fromnumeric: 254 (repeat) return repeat(repeats, axis) > > > > below I provide you with the values of each variables in these 3 lines > of code: > > > index = [ 0 9 20 29 37 44 55 66 76 88 96 > 103 114 123 131 > 138 147 155 167 176 184 192 198 209 216 224 233 242 > 246 255 > 262 274 279 290 297 301 310 316 320 326 330 339 345 > 352 360 > 368 377 385 393 402 413 421 433 441 448 455 464 468 > 476 484 > 492 499 507 515 521 531 539 547 556 564 572 580 588 > 597 611 > 625 636 642 651 659 663 670 677 683 692 700 707 715 > 723 734 > 740 748 754 762 771 779 787 795 804 816 822 830 842 > 848 856 > 865 873 881 890 895 903 912 920 932 944 953 962 970 > 977 985 > 993 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 > 1099 1107 > 1114 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 > 1225 1233 > 1242 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 > 1340 1350 > 1358 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 > 1462 1466 > 1472 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 > 1568 1574 > 1582 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 1678 > 1686 1694 > 1703 1708 1715 1721 1727 1735 1743 1751 1760 1766 1775 1787 1795 > 1802 1811 > 1820 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 > 1923 1931 > 1939 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 > 2045 2053 > 2064 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 > 2170 2178 > 2186 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 > 2290] > > [len_i] = [2300] > > (index, [len_i]) = (array([ 0, 9, 20, 29, 37, 44, > 55, 66, 76, 88, 96, > 
103, 114, 123, 131, 138, 147, 155, 167, 176, > 184, 192, > 198, 209, 216, 224, 233, 242, 246, 255, 262, > 274, 279, > 290, 297, 301, 310, 316, 320, 326, 330, 339, > 345, 352, > 360, 368, 377, 385, 393, 402, 413, 421, 433, > 441, 448, > 455, 464, 468, 476, 484, 492, 499, 507, 515, > 521, 531, > 539, 547, 556, 564, 572, 580, 588, 597, 611, > 625, 636, > 642, 651, 659, 663, 670, 677, 683, 692, 700, > 707, 715, > 723, 734, 740, 748, 754, 762, 771, 779, 787, > 795, 804, > 816, 822, 830, 842, 848, 856, 865, 873, 881, > 890, 895, > 903, 912, 920, 932, 944, 953, 962, 970, 977, > 985, 993, > 1000, 1012, 1021, 1029, 1038, 1046, 1057, 1063, 1071, 1079, > 1085, > 1093, 1099, 1107, 1114, 1120, 1128, 1137, 1145, 1153, 1162, > 1170, > 1179, 1188, 1197, 1209, 1218, 1225, 1233, 1242, 1250, 1256, > 1264, > 1271, 1278, 1286, 1293, 1299, 1308, 1317, 1324, 1332, 1340, > 1350, > 1358, 1369, 1376, 1382, 1388, 1396, 1403, 1411, 1420, 1432, > 1440, > 1447, 1455, 1462, 1466, 1472, 1480, 1485, 1491, 1500, 1508, > 1514, > 1518, 1522, 1531, 1540, 1549, 1560, 1568, 1574, 1582, 1587, > 1598, > 1603, 1611, 1619, 1630, 1638, 1645, 1654, 1662, 1670, 1678, > 1686, > 1694, 1703, 1708, 1715, 1721, 1727, 1735, 1743, 1751, 1760, > 1766, > 1775, 1787, 1795, 1802, 1811, 1820, 1827, 1835, 1843, 1851, > 1859, > 1868, 1872, 1880, 1889, 1897, 1908, 1916, 1923, 1931, 1939, > 1947, > 1952, 1962, 1973, 1981, 1987, 1994, 2002, 2013, 2025, 2030, > 2038, > 2045, 2053, 2064, 2071, 2079, 2085, 2093, 2104, 2113, 2124, > 2130, > 2138, 2146, 2154, 2162, 2170, 2178, 2186, 2194, 2202, 2210, > 2218, > 2226, 2234, 2242, 2250, 2258, 2266, 2274, 2282, 2290]), > [2300]) > > > > > index[1:] = [ 9 20 29 37 44 55 66 76 88 96 > 103 114 123 131 138 > 147 155 167 176 184 192 198 209 216 224 233 242 246 > 255 262 > 274 279 290 297 301 310 316 320 326 330 339 345 352 > 360 368 > 377 385 393 402 413 421 433 441 448 455 464 468 476 > 484 492 > 499 507 515 521 531 539 547 556 564 572 580 588 597 > 611 625 > 636 642 651 659 663 670 677 
683 692 700 707 715 723 > 734 740 > 748 754 762 771 779 787 795 804 816 822 830 842 848 > 856 865 > 873 881 890 895 903 912 920 932 944 953 962 970 977 > 985 993 > 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 1099 > 1107 1114 > 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 1225 > 1233 1242 > 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 1340 > 1350 1358 > 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 1462 > 1466 1472 > 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 1568 > 1574 1582 > 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 1678 1686 > 1694 1703 > 1708 1715 1721 1727 1735 1743 1751 1760 1766 1775 1787 1795 1802 > 1811 1820 > 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 1923 > 1931 1939 > 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 2045 > 2053 2064 > 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 2170 > 2178 2186 > 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 2290 > 2300] > > index[:-1] = [ 0 9 20 29 37 44 55 66 76 88 > 96 103 114 123 131 > 138 147 155 167 176 184 192 198 209 216 224 233 242 > 246 255 > 262 274 279 290 297 301 310 316 320 326 330 339 345 > 352 360 > 368 377 385 393 402 413 421 433 441 448 455 464 468 > 476 484 > 492 499 507 515 521 531 539 547 556 564 572 580 588 > 597 611 > 625 636 642 651 659 663 670 677 683 692 700 707 715 > 723 734 > 740 748 754 762 771 779 787 795 804 816 822 830 842 > 848 856 > 865 873 881 890 895 903 912 920 932 944 953 962 970 > 977 985 > 993 1000 1012 1021 1029 1038 1046 1057 1063 1071 1079 1085 1093 > 1099 1107 > 1114 1120 1128 1137 1145 1153 1162 1170 1179 1188 1197 1209 1218 > 1225 1233 > 1242 1250 1256 1264 1271 1278 1286 1293 1299 1308 1317 1324 1332 > 1340 1350 > 1358 1369 1376 1382 1388 1396 1403 1411 1420 1432 1440 1447 1455 > 1462 1466 > 1472 1480 1485 1491 1500 1508 1514 1518 1522 1531 1540 1549 1560 > 1568 1574 > 1582 1587 1598 1603 1611 1619 1630 1638 1645 1654 1662 1670 
1678 > 1686 1694 > 1703 1708 1715 1721 1727 1735 1743 1751 1760 1766 1775 1787 1795 > 1802 1811 > 1820 1827 1835 1843 1851 1859 1868 1872 1880 1889 1897 1908 1916 > 1923 1931 > 1939 1947 1952 1962 1973 1981 1987 1994 2002 2013 2025 2030 2038 > 2045 2053 > 2064 2071 2079 2085 2093 2104 2113 2124 2130 2138 2146 2154 2162 > 2170 2178 > 2186 2194 2202 2210 2218 2226 2234 2242 2250 2258 2266 2274 2282 > 2290] > mask = [ True True True True True True True True True > True True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True False True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True False False True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True True > True True > True True True True True True True True True] > > N.int32 = > 
> delta.astype(N.int32) = [ 9 11 9 8 7 11 11 10 12 8 7 11 9 > 8 7 9 8 12 9 8 8 6 11 7 8 > 9 9 4 9 7 12 5 11 7 4 9 6 4 6 4 9 6 7 8 8 9 > 8 8 9 11 > 8 12 8 7 7 9 4 8 8 8 7 8 8 6 10 8 8 9 8 8 8 > 8 9 14 14 > 11 6 9 8 4 7 7 6 9 8 7 8 8 11 6 8 6 8 9 8 8 > 8 9 12 6 > 8 12 6 8 9 8 8 9 5 8 9 8 12 12 9 9 8 7 8 8 7 > 12 9 8 9 > 8 11 6 8 8 6 8 6 8 7 6 8 9 8 8 9 8 9 9 9 12 > 9 7 8 9 > 8 6 8 7 7 8 7 6 9 9 7 8 8 10 8 11 7 6 6 8 7 > 8 9 12 8 > 7 8 7 4 6 8 5 6 9 8 6 4 4 9 9 9 11 8 6 8 5 > 11 5 8 8 > 11 8 7 9 8 8 8 8 8 9 5 7 6 6 8 8 8 9 6 9 12 > 8 7 9 9 > 7 8 8 8 8 9 4 8 9 8 11 8 7 8 8 8 5 10 11 8 6 > 7 8 11 12 > 5 8 7 8 11 7 8 6 8 11 9 11 6 8 8 8 8 8 8 8 8 > 8 8 8 8 > 8 8 8 8 8 8 8 8 10] > > > > Do you have any idea what's wrong? Any advice will be greatly appreciated. > > Tom > > > > ------------------------------------------------------------------------ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From vicentesolerfraile at hotmail.com Sat Feb 6 11:24:15 2010 From: vicentesolerfraile at hotmail.com (Vicente Soler Fraile) Date: Sat, 6 Feb 2010 17:24:15 +0100 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> References: , <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> Message-ID: I am using: Python 2.6.4 (r264:75708, Oct 26 2009, 07:36:50) [MSC v.1500 64 bit (AMD64)] on win32 which seems to be a 64 bits interpreter. What should I do if want to use Numpy. Can I somehow manually install Numpy? Or else should I remove Wincom and Python 64 bits and install instead a 32 bits interpreter? Any help is highly appreciated. 
Regards

> Date: Sat, 6 Feb 2010 23:03:03 +0900
> From: cournape at gmail.com
> To: numpy-discussion at scipy.org
> Subject: Re: [Numpy-discussion] Unable to install Numpy
>
> On Sat, Feb 6, 2010 at 10:54 PM, Vicente Soler Fraile
> wrote:
> > Hello,
> >
> > I try to install Numpy without success.
> >
> > Windows 7
> > Python 2.6.4
> > Numpy numpy-1.4.0-win32-superpack-
> > python2.6.exe
>
> Are you using a 32 bits python ? We only provide 32 bits installers
> for now on windows,
>
> David
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From cgohlke at uci.edu  Sat Feb  6 14:21:32 2010
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Sat, 06 Feb 2010 11:21:32 -0800
Subject: [Numpy-discussion] Installed NumPy and MatPlotLib in the Wrong Order. How uninstall MPL?
In-Reply-To: <4B6D55D5.4070404@sbcglobal.net>
References: <4B6CE3E4.2050201@sbcglobal.net> <1cd32cbb1002052101l5353657cgbb34d828991ee51a@mail.gmail.com> <4B6D55D5.4070404@sbcglobal.net>
Message-ID: <4B6DC13C.2070109@uci.edu>

All the official matplotlib installers on sourceforge are bdist_wininst
executables, not msi. The installer for Python 2.5 was built with Python
2.5 itself, which does not know about Windows user account control (UAC).
Unless you specifically run the installer as administrator, the uninstall
registry settings can not be created. I also replied to you on
matplotlib-users on how to manually remove matplotlib.

Christoph

On 2/6/2010 3:43 AM, Wayne Watson wrote:
> stuck = placed.
>
> I'm pretty sure I used an msi file. I've brought this topic up in 3
> forums where I would have thought people knew the answer.
Yours is the > first answer. I would have guessed that anyone dealing libraries would > have known the answer. Nor have I found anything in two Python books > I've used. I think I'll look at them again. Google didn't even show > anything. > > Thanks for the response. I'll try to clear manually the locations we've > mentioned. > > On 2/5/2010 9:01 PM, josef.pktd at gmail.com wrote: >> On Fri, Feb 5, 2010 at 10:37 PM, Wayne Watson >> wrote: >> >>> See Subject. >>> >>> I'm working in IDLE in Win7. It seems to me MPL gets stuck in >>> site-packages under C:\Python25. Maybe this is as simple as deleting the >>> entry? >>> >> What does it mean that MPL gets stuck? what kind of stuck? >> >> (My experience is only windowsXP not Win7) >> >> Often I just delete all directories and files for a package. However, >> if the package has been installed with an installer and not with >> easy_install or setup.py, there might be a removexxx, >> (removematplotlib) under/in the Python25 directory (I have >> Removematplotlib.exe for python24 but not for 25) and it might also be >> in the windows registry, try Add/Remove Programs or whatever the Win7 >> equivalent is. >> >> I just checked my Add/Remove Programs and I have several entries under >> python 2.5 that are orphans because I deleted the directories but >> didn't uninstall through an uninstaller, but again I see an entry for >> matplotlib only for python 2.4, so maybe matplotlib doesn't pollute >> the windows registry anymore. >> >> If you don't find any matplotlib uninstall (as in my case for Py2.5), >> you can just delete all files and directories in site-packages. >> >> Josef >> >> >> >>> Well, yes there's a MPL folder under site-packages and an info MPL file >>> of 540 bytes. There are also pylab.py, pyc,and py0 files under site. >>> What to do next? >>> >>> >>> -- >>> My life in two words. "Interrupted Projects." 
-- WTW (quote originator) >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > From turian at gmail.com Sat Feb 6 16:21:55 2010 From: turian at gmail.com (Joseph Turian) Date: Sat, 6 Feb 2010 16:21:55 -0500 Subject: [Numpy-discussion] numpy 10x slower than native Python arrays for simple operations? Message-ID: <4dacb2561002061321t7807d40p6ce99bf01eaa1781@mail.gmail.com> I have done some profiling, and the results are completely counterintuitive. For simple array access operations, numpy and array.array are 10x slower than native Python arrays. I am using numpy 1.3.0, the standard Ubuntu 9.03 package. Why am I getting such slow access speeds? Note that for "array access", I am doing operations of the form: a[i] += 1 Profiles: [0] * 20000000 Access: 2.3M / sec Initialization: 0.8s numpy.zeros(shape=(20000000,), dtype=numpy.int32) Access: 160K/sec Initialization: 0.2s array.array('L', [0] * 20000000) Access: 175K/sec Initialization: 2.0s array.array('L', (0 for i in range(20000000))) Access: 175K/sec, presumably, based upon the profile for the other array.array Initialization: 6.7s Any idea why my numpy array access is so slow? Thanks, Joseph From charlesr.harris at gmail.com Sat Feb 6 16:50:34 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 6 Feb 2010 14:50:34 -0700 Subject: [Numpy-discussion] numpy 10x slower than native Python arrays for simple operations? 
In-Reply-To: <4dacb2561002061321t7807d40p6ce99bf01eaa1781@mail.gmail.com> References: <4dacb2561002061321t7807d40p6ce99bf01eaa1781@mail.gmail.com> Message-ID: On Sat, Feb 6, 2010 at 2:21 PM, Joseph Turian wrote: > I have done some profiling, and the results are completely > counterintuitive. For simple array access operations, numpy and > array.array are 10x slower than native Python arrays. > > I am using numpy 1.3.0, the standard Ubuntu 9.03 package. > > Why am I getting such slow access speeds? > Note that for "array access", I am doing operations of the form: > a[i] += 1 > > Profiles: > > [0] * 20000000 > Access: 2.3M / sec > Initialization: 0.8s > > numpy.zeros(shape=(20000000,), dtype=numpy.int32) > Access: 160K/sec > Initialization: 0.2s > > array.array('L', [0] * 20000000) > Access: 175K/sec > Initialization: 2.0s > > array.array('L', (0 for i in range(20000000))) > Access: 175K/sec, presumably, based upon the profile for the other > array.array > Initialization: 6.7s > > Any idea why my numpy array access is so slow? > > Without seeing the whole script it is hard to tell. But numpy indexing is slow and should be avoided when possible. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Feb 6 19:07:24 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 07 Feb 2010 02:07:24 +0200 Subject: [Numpy-discussion] numpy 10x slower than native Python arrays for simple operations? In-Reply-To: <4dacb2561002061321t7807d40p6ce99bf01eaa1781@mail.gmail.com> References: <4dacb2561002061321t7807d40p6ce99bf01eaa1781@mail.gmail.com> Message-ID: <1265501244.4800.18.camel@idol> la, 2010-02-06 kello 16:21 -0500, Joseph Turian kirjoitti: > I have done some profiling, and the results are completely > counterintuitive. For simple array access operations, numpy and > array.array are 10x slower than native Python arrays. > > I am using numpy 1.3.0, the standard Ubuntu 9.03 package. 
> > Why am I getting such slow access speeds? > Note that for "array access", I am doing operations of the form: > a[i] += 1 > > Profiles: > > [0] * 20000000 > Access: 2.3M / sec > Initialization: 0.8s > > numpy.zeros(shape=(20000000,), dtype=numpy.int32) > Access: 160K/sec > Initialization: 0.2s The speed difference comes here from the fact that a[i] += 1 effectively calls numpy.core.umath.add(a[i], 1, a[i]). Since it is designed to handle operations on arrays, and at the moment there is no short-circuit for 1-d numbers, it has a fixed overhead that is larger than for Python's simple number+number addition. In vectorized operations the overhead does not matter, but changing a single element at a time makes it show. If `i` is an index vector, Numpy has faster per-element access times, In [1]: import numpy as np In [2]: a = np.zeros((2000000,), 'i4') In [3]: b = [0] * 2000000 In [5]: i = np.arange(0, 2000000, 5) In [8]: %timeit b[0] += 1 1000000 loops, best of 3: 260 ns per loop In [20]: %timeit a[i] += 1 10 loops, best of 3: 71.2 ms per loop In [25]: 71.2e-3/len(i) Out[25]: 1.7800000000000001e-07 ie., 178 ns per element -- Pauli Virtanen From oliphant at enthought.com Sat Feb 6 22:16:03 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 6 Feb 2010 21:16:03 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <201002061407.20123.faltet@pytables.org>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org>
Message-ID: <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com>

On Feb 6, 2010, at 7:07 AM, Francesc Alted wrote:

> A Saturday 06 February 2010 13:17:22 David Cournapeau escrigué:
>> On Sat, Feb 6, 2010 at 4:07 PM, Travis Oliphant
>> wrote:
>>> I think this plan is the least disruptive and satisfies the concerns
>>> of all parties in the discussion. The other plans that have been
>>> proposed do not address my concerns of keeping the date-time changes
>>
>> In that regard, your proposal is very similar to what was suggested
>> at the beginning - the difference is only whether breaking at 1.4.x
>> or 1.5.x.
>
> I'm thinking why should we be so conservative in raising version
> numbers? Why not relabeling 1.4.0 to 2.0 and mark 1.4.0 as a broken
> release? Then, we can continue by putting everything except ABI
> breaking features in 1.4.1. With this, NumPy 2.0 will remain available
> for people wanting to be more on-the-bleeding-edge. Something similar
> to what has happened with Python 3.0, which has not prevented the 2.x
> series to evolve.

This is not advisable in my mind because we don't have nearly enough
developer resources to start having 2 maintained branches of NumPy.
The expectation that bug-fixes and other improvements will be
happening on two different release tracks of NumPy is just not
realistic right now.

I was only mildly supportive of making another ABI compatible release.
The resistance to my approach has dampened my enthusiasm for doing the
work necessary to make it happen. That's quite O.K. though. If David
or somebody else wants to make a 1.4.1 release that is ABI compatible
with the 1.3 series then great.
I will just work on trunk and assume that the next release will be ABI
incompatible. At this point I would rather call the next version 1.5
than 2.0, though. When the date-time work is completed, then we could
release an ABI-compatible-with-1.5 version 2.0. My view of the
timeline for the 1.5 release is the end of February.

-Travis

From ralf.gommers at googlemail.com  Sat Feb  6 23:30:12 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 7 Feb 2010 12:30:12 +0800
Subject: [Numpy-discussion] numpy release process, adding a compatibility test step.
In-Reply-To: <64ddb72c1002060244h748d29aanee32edb1b61ac56@mail.gmail.com>
References: <64ddb72c1002060244h748d29aanee32edb1b61ac56@mail.gmail.com>
Message-ID:

On Sat, Feb 6, 2010 at 6:44 PM, René Dudfield wrote:

> Hi,
>
> may I suggest an addition to the release process...
>
> 'Tests against popular libraries that rely on numpy at the RC stage.
> Test at least these libraries pass their numpy related tests:
> matplotlib, scipy, pygame, (insert others here?). The release manager
> should ask the mailing list for people to test the RC against these
> libraries to make sure they work ok.'
>
> - This will catch problems like the ABI one with the recent release.
> - It is an explicit request for testing, with concrete tasks that
> people can do. Rather than a more abstract request for testing.
> - not much extra work for the release manager. testing work can be
> distributed to anyone who can try an app, or library with the binaries
> supplied in the RC stage of a release.
> - binary installer users can test as well, not just people who build
> from source.
> - tests libraries the numpy developers may not be using all the time
> themselves.
>
> On another project, adding this simple step helped prevent a number of
> bugs which affected other programs and libraries using the project.
> Before this was added we had a couple of releases which broke a few
> popular libraries/apps with very simple regressions or bugs.
>

Thanks for the suggestion, I agree this will be useful. And a lot easier
to do than automated testing against other libraries, which will
hopefully happen at some point in the future.

Cheers,
Ralf

From dsdale24 at gmail.com  Sun Feb 7 09:57:08 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 7 Feb 2010 09:57:08 -0500
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> Message-ID: On Sat, Feb 6, 2010 at 10:16 PM, Travis Oliphant wrote: > I will just work on trunk and assume that the next release will be ABI > incompatible. ? At this point I would rather call the next version 1.5 > than 2.0, though. ?When the date-time work is completed, then we could > release an ABI-compatible-with-1.5 ?version 2.0. There may be repercussions if numpy starts deviating from its own conventions for what versions may introduce ABI incompatibilities. I attended a workshop recently where a number of scientists approached me and expressed interest in switching from IDL to python. Two of these were senior scientists leading large research groups and collaborations, both of whom had looked at python several years ago and decided they did not like "the wild west nature" (direct quote) of the scientific python community. I assured them that both the projects and community were maturing. At the time, I did not have to explain the situation concerning numpy-1.4.0, which, if it causes problems when they try to set up an environment to assess python, could put them off python for another 3 years, maybe even for good. It would be a lot easier to justify the disruption if one could say "numpy-2.0 added support for some important features, so this disruption was unfortunate but necessary. Such disruptions are specified by major version changes, which as you can see are rare. In fact, there are no further major version changes envisioned at this time." That kind of statement might reassure a lot of people, including package maintainers etc. Regards, Darren P.S. I promise this will be my last post on the subject. 
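Darren's argument turns on the major.minor.bugfix convention: downstream
maintainers can only rely on "no ABI changes outside major releases" if the
version numbers actually follow it. A minimal sketch of the kind of guard a
binary package could build on that convention (the helper names here are
illustrative, not taken from any real project):

```python
# Hypothetical sketch: encoding the "ABI breaks only at major releases"
# convention discussed above. Helper names are illustrative only.

def parse_major_minor(version_string):
    """Return (major, minor) from a version like '1.4.0' or '1.4.0rc2'."""
    major, minor = version_string.split(".")[:2]
    return int(major), int(minor)


def abi_compatible(runtime_version, built_against_version):
    """True when a module built against `built_against_version` can run on
    `runtime_version`: same major number (no ABI break within a major
    series), and the runtime is at least as new (minor releases may add
    symbols but must not remove them)."""
    runtime = parse_major_minor(runtime_version)
    built = parse_major_minor(built_against_version)
    return runtime[0] == built[0] and runtime >= built


print(abi_compatible("1.4.0", "1.3.0"))  # True: a minor bump is additive
print(abi_compatible("2.0.0", "1.3.0"))  # False: a major bump signals a rebuild
```

An ABI break inside the 1.x series, as happened with 1.4.0, is exactly the
case such a check cannot anticipate, which is the point about keeping the
convention trustworthy.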
From aisaac at american.edu Sun Feb 7 10:11:35 2010 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 07 Feb 2010 10:11:35 -0500 Subject: [Numpy-discussion] swap elements in two arrays Message-ID: <4B6ED827.2000807@american.edu> I have two 1d arrays, say `a` and `b`. I need to swap elements if a 1d boolean criterion `to_swap` is met. Here's one way: a, b = np.choose([to_swap,np.logical_not(to_swap)], [a, b]) Here is a much faster way: a[to_swap], b[to_swap] = b[to_swap], a[to_swap] Other better ways? Thanks, Alan Isaac From pav at iki.fi Sun Feb 7 10:21:41 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 07 Feb 2010 17:21:41 +0200 Subject: [Numpy-discussion] swap elements in two arrays In-Reply-To: <4B6ED827.2000807@american.edu> References: <4B6ED827.2000807@american.edu> Message-ID: <1265556100.6170.2.camel@idol> su, 2010-02-07 kello 10:11 -0500, Alan G Isaac kirjoitti: > I have two 1d arrays, say `a` and `b`. > I need to swap elements if a 1d boolean criterion `to_swap` is met. [clip] > Here is a much faster way: > a[to_swap], b[to_swap] = b[to_swap], a[to_swap] That doesn't necessarily work -- the above code expands to tmp = a[to_swap] a[to_swap] = b[to_swap] b[to_swap] = tmp It'll work provided `to_swap` is such that `tmp` is not a view on `a`... -- Pauli Virtanen From charlesr.harris at gmail.com Sun Feb 7 10:35:02 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 7 Feb 2010 08:35:02 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> Message-ID: On Sun, Feb 7, 2010 at 7:57 AM, Darren Dale wrote: > On Sat, Feb 6, 2010 at 10:16 PM, Travis Oliphant > wrote: > > I will just work on trunk and assume that the next release will be ABI > > incompatible. At this point I would rather call the next version 1.5 > > than 2.0, though. When the date-time work is completed, then we could > > release an ABI-compatible-with-1.5 version 2.0. > > There may be repercussions if numpy starts deviating from its own > conventions for what versions may introduce ABI incompatibilities. > > I attended a workshop recently where a number of scientists approached > me and expressed interest in switching from IDL to python. Two of > these were senior scientists leading large research groups and > collaborations, both of whom had looked at python several years ago > and decided they did not like "the wild west nature" (direct quote) of > the scientific python community. I assured them that both the projects > and community were maturing. At the time, I did not have to explain > the situation concerning numpy-1.4.0, which, if it causes problems > when they try to set up an environment to assess python, could put > them off python for another 3 years, maybe even for good. It would be > a lot easier to justify the disruption if one could say "numpy-2.0 > added support for some important features, so this disruption was > unfortunate but necessary. Such disruptions are specified by major > version changes, which as you can see are rare. In fact, there are no > further major version changes envisioned at this time." That kind of > statement might reassure a lot of people, including package > maintainers etc. > > Regards, > Darren > > P.S. 
I promise this will be my last post on the subject. > Don't be shy ;) You make good points and I agree with them. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sun Feb 7 10:39:12 2010 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 07 Feb 2010 10:39:12 -0500 Subject: [Numpy-discussion] swap elements in two arrays In-Reply-To: <1265556100.6170.2.camel@idol> References: <4B6ED827.2000807@american.edu> <1265556100.6170.2.camel@idol> Message-ID: <4B6EDEA0.9060709@american.edu> On 2/7/2010 10:21 AM, Alan Isaac wrote: >> I have two 1d arrays, say `a` and `b`. >> I need to swap elements if a 1d boolean criterion `to_swap` is met. > [clip] >> Here is a much faster way: >> a[to_swap], b[to_swap] = b[to_swap], a[to_swap] On 2/7/2010 10:21 AM, Pauli Virtanen wrote: > That doesn't necessarily work -- the above code expands to > > tmp = a[to_swap] > a[to_swap] = b[to_swap] > b[to_swap] = tmp > > It'll work provided `to_swap` is such that `tmp` is not a view on `a`... I thought that if `to_swap` is a boolean array that `a[to_swap]` will always own its own data. Can that fail? Alan From pav at iki.fi Sun Feb 7 10:58:38 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 07 Feb 2010 17:58:38 +0200 Subject: [Numpy-discussion] swap elements in two arrays In-Reply-To: <4B6EDEA0.9060709@american.edu> References: <4B6ED827.2000807@american.edu> <1265556100.6170.2.camel@idol> <4B6EDEA0.9060709@american.edu> Message-ID: <1265558318.6170.4.camel@idol> su, 2010-02-07 kello 10:39 -0500, Alan G Isaac kirjoitti: [clip] > I thought that if `to_swap` is a boolean array that `a[to_swap]` > will always own its own data. Can that fail? Ok, I don't think it can fail, then. But it's a slightly dangerous idiom nevertheless... 
-- Pauli Virtanen From kwgoodman at gmail.com Sun Feb 7 11:16:23 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 7 Feb 2010 08:16:23 -0800 Subject: [Numpy-discussion] swap elements in two arrays In-Reply-To: <1265558318.6170.4.camel@idol> References: <4B6ED827.2000807@american.edu> <1265556100.6170.2.camel@idol> <4B6EDEA0.9060709@american.edu> <1265558318.6170.4.camel@idol> Message-ID: On Sun, Feb 7, 2010 at 7:58 AM, Pauli Virtanen wrote: > su, 2010-02-07 kello 10:39 -0500, Alan G Isaac kirjoitti: > [clip] >> I thought that if `to_swap` is a boolean array that `a[to_swap]` >> will always own its own data. ?Can that fail? > > Ok, I don't think it can fail, then. But it's a slightly dangerous idiom > nevertheless... I think it depends on how much you know about the inputs: >> to_swap = np.array([True, True]) Good: >> a = np.array([1, 2, 3]) >> b = a[1:].copy() >> >> a[to_swap], b[to_swap] = b[to_swap], a[to_swap] >> a array([2, 3, 3]) >> b array([1, 2]) Bad: >> a = np.array([1, 2, 3]) >> b = a[1:] >> >> a[to_swap], b[to_swap] = b[to_swap], a[to_swap] >> >> a array([2, 1, 2]) >> b array([1, 2]) From millman at berkeley.edu Sun Feb 7 12:19:48 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 7 Feb 2010 09:19:48 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> Message-ID: On Sat, Feb 6, 2010 at 7:16 PM, Travis Oliphant wrote: > I will just work on trunk and assume that the next release will be ABI > incompatible. ? At this point I would rather call the next version 1.5 > than 2.0, though. 
?When the date-time work is completed, then we could > release an ABI-compatible-with-1.5 ?version 2.0. ? ?My view of the > timeline for the 1.5 release is the end of February. I would prefer that we follow our previously discussed, agreed upon, and explicitly stated version numbering policy: * The releases will be numbered major.minor.bugfix * There will be no ABI changes in minor releases * There will be no API changes in bugfix releases In addition to it being our policy, it is also more closely aligned with my general expectations for any mature open source project. Just to be clear, I would prefer to see the ABI-breaking release be called 2.0. I don't see why we have to get the release out in three weeks, though. I think it would be better to use this opportunity to take some time to make sure we get it right. I am not suggesting that we delay for months. Instead, why don't we agree to consider ABI-breakage for to 2-3 weeks. Then close the discussion and try to get the 2.0 release out as quickly after that as possible. -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From aisaac at american.edu Sun Feb 7 13:42:20 2010 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 07 Feb 2010 13:42:20 -0500 Subject: [Numpy-discussion] swap elements in two arrays In-Reply-To: References: <4B6ED827.2000807@american.edu> <1265556100.6170.2.camel@idol> <4B6EDEA0.9060709@american.edu> <1265558318.6170.4.camel@idol> Message-ID: <4B6F098C.4050308@american.edu> On 2/7/2010 11:16 AM, Keith Goodman wrote: > Bad: > >>> a = np.array([1, 2, 3]) > >>> b = a[1:] > >>> > >>> a[to_swap], b[to_swap] = b[to_swap], a[to_swap] > >>> > >>> a > array([2, 1, 2]) > >>> b > array([1, 2]) So that is an important point: if `a` and `b` share data, the "swap" is not well defined. 
But that affects the alternative idiom as well: >>> to_swap array([ True, True], dtype=bool) >>> a = np.array([1, 2, 3]) >>> b = a[1:] >>> temp = a.copy() >>> a[to_swap] = b[to_swap] >>> b[to_swap] = temp >>> a, b (array([2, 1, 2]), array([1, 2])) Thanks, Alan From seb.haase at gmail.com Sun Feb 7 16:23:07 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Sun, 7 Feb 2010 22:23:07 +0100 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: References: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> Message-ID: On Sat, Feb 6, 2010 at 5:24 PM, Vicente Soler Fraile wrote: > I am using: > > ?? Python 2.6.4 (r264:75708, Oct 26 2009, 07:36:50) [MSC v.1500 64 bit > (AMD64)] on win32 > > which seems to be a 64 bits interpreter. > > What should I do if want to use Numpy. Can I somehow manually install Numpy? > Or else should I remove Wincom and Python 64 bits and install instead a 32 > bits interpreter? > > Any help is highly appreciated. > > Regards > > >> Date: Sat, 6 Feb 2010 23:03:03 +0900 >> From: cournape at gmail.com >> To: numpy-discussion at scipy.org >> Subject: Re: [Numpy-discussion] Unable to install Numpy >> >> On Sat, Feb 6, 2010 at 10:54 PM, Vicente Soler Fraile >> wrote: >> > Hello, >> > >> > I try to install Numpy without success. >> > >> > Windows 7 >> > Python 2.6.4 >> > Numpy???????????? numpy-1.4.0-win32-superpack- >> > python2.6.exe >> >> Are you using a 32 bits python ? We only provide 32 bits installers >> for now on windows2, >> Hello, Regarding this post - is there a "non official" numpy package for 64bit windows? And how about SciPy ? I guess it all related to problems coming from the 64bit support of cygwin (rather the lack thereof) - right ? 
Cheers, Sebastian Haase From jsseabold at gmail.com Sun Feb 7 16:34:01 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Sun, 7 Feb 2010 16:34:01 -0500 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: References: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> Message-ID: On Sun, Feb 7, 2010 at 4:23 PM, Sebastian Haase wrote: > > On Sat, Feb 6, 2010 at 5:24 PM, Vicente Soler Fraile > wrote: > > I am using: > > > > ?? Python 2.6.4 (r264:75708, Oct 26 2009, 07:36:50) [MSC v.1500 64 bit > > (AMD64)] on win32 > > > > which seems to be a 64 bits interpreter. > > > > What should I do if want to use Numpy. Can I somehow manually install Numpy? > > Or else should I remove Wincom and Python 64 bits and install instead a 32 > > bits interpreter? > > > > Any help is highly appreciated. > > > > Regards > > > > > >> Date: Sat, 6 Feb 2010 23:03:03 +0900 > >> From: cournape at gmail.com > >> To: numpy-discussion at scipy.org > >> Subject: Re: [Numpy-discussion] Unable to install Numpy > >> > >> On Sat, Feb 6, 2010 at 10:54 PM, Vicente Soler Fraile > >> wrote: > >> > Hello, > >> > > >> > I try to install Numpy without success. > >> > > >> > Windows 7 > >> > Python 2.6.4 > >> > Numpy???????????? numpy-1.4.0-win32-superpack- > >> > python2.6.exe > >> > >> Are you using a 32 bits python ? We only provide 32 bits installers > >> for now on windows2, > >> > Hello, > Regarding this post - is there a "non official" numpy package for > 64bit windows? And how about SciPy ? > I guess it all related to problems coming from ?the 64bit support of > cygwin ?(rather the lack thereof) - right ? 
> Un-official binaries that I've been using (rarely) on Windows 7 are here: http://www.scipy.org/Download#head-f64942d62faddeb27278a2c735e81ef2a7349db0 Skipper From david at silveregg.co.jp Sun Feb 7 20:19:33 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 08 Feb 2010 10:19:33 +0900 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: References: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> Message-ID: <4B6F66A5.1010608@silveregg.co.jp> Sebastian Haase wrote: > Hello, > Regarding this post - is there a "non official" numpy package for > 64bit windows? And how about SciPy ? There are both unofficial individual binaries and EPD-based numpy/scipy. One example of unofficial builds are there: http://www.lfd.uci.edu/~gohlke/pythonlibs/ > I guess it all related to problems coming from the 64bit support of > cygwin (rather the lack thereof) - right ? Not exactly, although it prevents from building Atlas for 64 bits. The main issue is gcc/VS interoperabilities, especially for gfortran. I have not taken the time to work in it the last few months. For various reasons, I don't think it makes sense for NumPy itself to provide binaries which do not use open source compilers. cheers, David From david at silveregg.co.jp Sun Feb 7 20:23:30 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 08 Feb 2010 10:23:30 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> Message-ID: <4B6F6792.1000103@silveregg.co.jp> Jarrod Millman wrote: > Just > to be clear, I would prefer to see the ABI-breaking release be called > 2.0. I don't see why we have to get the release out in three weeks, > though. 
I think it would be better to use this opportunity to take > some time to make sure we get it right. As a compromise, what about the following: - remove ABI-incompatible changes for 1.4.x - release a 1.5.0 marked as experimental, with everything that Travis wants to put in. It would be a preview for python 3k as well, so it conveys the idea that it is experimental pretty well. - the 1.6.x branch would be a polished 1.5.x. The advantages is that 1.5.0 can be push relatively early, but we would still keep 1.4.0 as the "stable" release, against which every other binary installer should be built (scipy, mpl). cheers, David From dsdale24 at gmail.com Sun Feb 7 20:42:34 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 7 Feb 2010 20:42:34 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B6F6792.1000103@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> Message-ID: I'm breaking my promise, after people wrote me offlist encouraging me to keep pushing my point of view. On Sun, Feb 7, 2010 at 8:23 PM, David Cournapeau wrote: > Jarrod Millman wrote: >> ?Just >> to be clear, I would prefer to see the ABI-breaking release be called >> 2.0. ?I don't see why we have to get the release out in three weeks, >> though. ?I think it would be better to use this opportunity to take >> some time to make sure we get it right. > > As a compromise, what about the following: > ? ? ? ?- remove ABI-incompatible changes for 1.4.x > ? ? ? ?- release a 1.5.0 marked as experimental, with everything that Travis > wants to put in. It would be a preview for python 3k as well, so it > conveys the idea that it is experimental pretty well. 
Why can't this be called 2.0beta, with a __version__ like 1.9.96? I don't understand the reluctance to follow numpy's own established conventions. > ? ? ? ?- the 1.6.x branch would be a polished 1.5.x. This could be called that 2.0.x instead of 1.6.x > The advantages is that 1.5.0 ... or 2.0beta ... > can be push relatively early, but we would > still keep 1.4.0 as the "stable" release, against which every other > binary installer should be built (scipy, mpl). Darren From david at silveregg.co.jp Sun Feb 7 20:53:26 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 08 Feb 2010 10:53:26 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> Message-ID: <4B6F6E96.6090509@silveregg.co.jp> Darren Dale wrote: > > Why can't this be called 2.0beta, with a __version__ like 1.9.96? I > don't understand the reluctance to follow numpy's own established > conventions. Mostly because 2.0 conveys the idea that there are significant new features, and because it would allow breaking the API as well. I would rather avoid missing this opportunity by making a 2.0 just to allow breaking the ABI without significantly reviewing our C API. cheers, David From charlesr.harris at gmail.com Sun Feb 7 20:54:37 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 7 Feb 2010 18:54:37 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B6F6792.1000103@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> Message-ID: On Sun, Feb 7, 2010 at 6:23 PM, David Cournapeau wrote: > Jarrod Millman wrote: > > Just > > to be clear, I would prefer to see the ABI-breaking release be called > > 2.0. I don't see why we have to get the release out in three weeks, > > though. I think it would be better to use this opportunity to take > > some time to make sure we get it right. > > As a compromise, what about the following: > - remove ABI-incompatible changes for 1.4.x > +1 > - release a 1.5.0 marked as experimental, with everything that > Travis > wants to put in. It would be a preview for python 3k as well, so it > conveys the idea that it is experimental pretty well. > I've got to agree with Darren here. 2.0 marks the API break, nothing more, nothing less. That's what the major number is for. > - the 1.6.x branch would be a polished 1.5.x. > > No one expects a x.0.0 release to be polished ;) > The advantages is that 1.5.0 can be push relatively early, but we would > still keep 1.4.0 as the "stable" release, against which every other > binary installer should be built (scipy, mpl). > > Just push out 2.0 early. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Feb 7 20:56:00 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 7 Feb 2010 18:56:00 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B6F6E96.6090509@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <4B6F6E96.6090509@silveregg.co.jp> Message-ID: On Sun, Feb 7, 2010 at 6:53 PM, David Cournapeau wrote: > Darren Dale wrote: > > > > Why can't this be called 2.0beta, with a __version__ like 1.9.96? I > > don't understand the reluctance to follow numpy's own established > > conventions. > > Mostly because 2.0 conveys the idea that there are significant new > features, and because it would allow breaking the API as well. I would > rather avoid missing this opportunity by making a 2.0 just to allow > breaking the ABI without significantly reviewing our C API. > > I think you attach to much importance to the major number. It simply marks an ABI change, no matter how minor. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Sun Feb 7 21:03:31 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 08 Feb 2010 11:03:31 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <4B6F6E96.6090509@silveregg.co.jp> Message-ID: <4B6F70F3.40707@silveregg.co.jp> Charles R Harris wrote: > > > On Sun, Feb 7, 2010 at 6:53 PM, David Cournapeau > wrote: > > Darren Dale wrote: > > > > Why can't this be called 2.0beta, with a __version__ like 1.9.96? 
I > > don't understand the reluctance to follow numpy's own established > > conventions. > > Mostly because 2.0 conveys the idea that there are significant new > features, and because it would allow breaking the API as well. I would > rather avoid missing this opportunity by making a 2.0 just to allow > breaking the ABI without significantly reviewing our C API. > > > I think you attach to much importance to the major number. It simply > marks an ABI change, no matter how minor. Yes, but that's highly unusual. The convention is to only break ABI when it is absolutely necessary, at which point they change the API as well. Now, I won't won't be against putting this as a 2.0 release if that's what people can agree on, cheers, David From charlesr.harris at gmail.com Sun Feb 7 21:10:11 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 7 Feb 2010 19:10:11 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B6F70F3.40707@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <4B6F6E96.6090509@silveregg.co.jp> <4B6F70F3.40707@silveregg.co.jp> Message-ID: On Sun, Feb 7, 2010 at 7:03 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Sun, Feb 7, 2010 at 6:53 PM, David Cournapeau > > wrote: > > > > Darren Dale wrote: > > > > > > Why can't this be called 2.0beta, with a __version__ like 1.9.96? > I > > > don't understand the reluctance to follow numpy's own established > > > conventions. > > > > Mostly because 2.0 conveys the idea that there are significant new > > features, and because it would allow breaking the API as well. I > would > > rather avoid missing this opportunity by making a 2.0 just to allow > > breaking the ABI without significantly reviewing our C API. 
> > > > > > I think you attach to much importance to the major number. It simply > > marks an ABI change, no matter how minor. > > Yes, but that's highly unusual. The convention is to only break ABI when > it is absolutely necessary, at which point they change the API as well. > > The brand new Numpy 2.0, featuring a shiny new ABI with the same sturdy API used and loved by millions. Hey, it's just advertizing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Mon Feb 8 00:12:17 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Sun, 07 Feb 2010 21:12:17 -0800 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: <4B6F66A5.1010608@silveregg.co.jp> References: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> <4B6F66A5.1010608@silveregg.co.jp> Message-ID: <4B6F9D31.2050200@noaa.gov> David Cournapeau wrote: > Not exactly, although it prevents from building Atlas for 64 bits. The > main issue is gcc/VS interoperabilities, especially for gfortran. I thought you didn't need fortran for numpy? > I don't think it makes sense for NumPy itself to provide > binaries which do not use open source compilers um, why not? python itself is built witht he MS compilers. I have not idea about 64 bit, but the free-of-charge MS compiler seems to build 32 bit python extensions for 2.6 OK, so you should be able to build numpy with it (without atlas, of course). -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david at silveregg.co.jp Mon Feb 8 00:21:36 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 08 Feb 2010 14:21:36 +0900 Subject: [Numpy-discussion] Unable to install Numpy In-Reply-To: <4B6F9D31.2050200@noaa.gov> References: <5b8d13221002060603o1d622795jb56de1d741e5309a@mail.gmail.com> <4B6F66A5.1010608@silveregg.co.jp> <4B6F9D31.2050200@noaa.gov> Message-ID: <4B6F9F60.7090206@silveregg.co.jp> Christopher Barker wrote: > David Cournapeau wrote: >> Not exactly, although it prevents from building Atlas for 64 bits. The >> main issue is gcc/VS interoperabilities, especially for gfortran. > > I thought you didn't need fortran for numpy? No, but you need it for Scipy. And we have always produced NumPy with Lapack support on windows. > >> I don't think it makes sense for NumPy itself to provide >> binaries which do not use open source compilers > > um, why not? python itself is built witht he MS compilers. Because there is no free fortran compiler on windows64, except for gfortran. Since I am not sure whether it will be possible to use gfortran/Visual Stsudio together to build NumPy/SciPy, I don't want to distribute binaries which will be incompatible with each other if we change from Visual Studio to gcc. People who really need NumPy on win64 can build it by themselves, or use any other mean (EPD, unofficial binaries, etc...). cheers, David From pav at iki.fi Fri Feb 5 05:00:44 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 05 Feb 2010 12:00:44 +0200 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <2544FE84-41C0-45BE-A3FC-0681169A92EF@enthought.com> <261E0669-EAC6-4132-90AC-C4A99B9907E4@enthought.com> <5b8d13221002040740g322ed490t6762178cb4ec0ac9@mail.gmail.com> <1e2af89e1002040938i17b46fc9n17ca7f9ff25ced2d@mail.gmail.com> <1265314513.9154.76.camel@idol> <5b8d13221002041809m68228f04i5e02ba97cfc25f1f@mail.gmail.com> Message-ID: <1265364044.16269.54.camel@talisman> pe, 2010-02-05 kello 11:09 +0900, David Cournapeau kirjoitti: [clip] > I think a py3k buildbot would help for this, right ? Another thing is > that the py3k changes do not work at all with Visual Studio compilers, > but that's mostly cosmetic things (like #warning not being supported > and things like that). There's a Py3 buildbot at http://buildbot.scipy.org/builders/Linux_x86_Ubuntu/builds/319/steps/shell_1/logs/stdio It also runs 2.4, 2.5 and 2.6 -- the 3.1 results are at the end. > > Most C code that works on Py2 works also on Py3. Py3 mainly means not > > using PyString, but choosing between Unicode + Bytes + UString (=Bytes > > for Py2 & Unicode for Py3). Also, it may be necessary to avoid FILE* > > pointers in the API (on Py3 those are no longer as easily obtained), and > > be wary when working with buffers. > > So once the py3k support is in place, should we deprecate those > functions so that people interested in porting to py3k can plan in > advance? For Py3 users APIs with FILE* pointers are somewhat awkward since you need to dup and fdopen to get FILE* pointers, and remember to fclose the handles afterward. > Getting rid of FILE* pointers and file descriptor would also helps > quite a bit on windows. I know that at some point, there were some > discussions to make the python C API safe to multiple C runtimes, but > I cannot find any recent discussion on that fact. I should just ask on > python-dev, I guess. 
This would be a great relief if we don't have to > care about those issues anymore. Python 3 does have some functions for reading/writing data from PyFile objects directly, but these are fairly inadequate, http://docs.python.org/3.1/c-api/file.html so I guess we're stuck with the C runtime in any case. > > I assume the rewrite will be worked on a separate SVN branch? Also, is > > there a plan yet on what needs changing to make Numpy's ABI more > > resistant? > > There are two issues: > - What we currently means by ABI, that is the ABI for a given python > version. The main issue is the binary layout of the structures (I > think the function ordering is pretty solid now, it is difficult to > change it inadvertently). The only way to fix this is to hide the > content of those structures, and define the structures in the C code > instead (opaque pointer, also known as the pimpl idiom). This means a > massive break of the C API, both internally and externally, but that's > something that is really needed IMO. > - Higher goal: ABI across python versions. This is motivated by PEP > 384. It means avoiding calls to API which are not "safe". I have no > idea whether it is possible, but that's something to keep in mind once > we start a major overhaul. Making structures opaque is a bit worrying. As far as I understand, so far the API has been nearly compatible with Numeric. Making the structures opaque is going to break both our and many other people's code. This is a bit worrying... How about a less damaging route: add reserved space to critical points in the structs, and keep appending new members only at the end? The Cython issue will probably be mostly resolved by new Cython releases before the Numpy 2.0 would be out. -- Pauli Virtanen From cournape at gmail.com Mon Feb 8 04:49:34 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 8 Feb 2010 18:49:34 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B6F6792.1000103@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> Message-ID: <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> On Mon, Feb 8, 2010 at 10:23 AM, David Cournapeau wrote: > Jarrod Millman wrote: >> ?Just >> to be clear, I would prefer to see the ABI-breaking release be called >> 2.0. ?I don't see why we have to get the release out in three weeks, >> though. ?I think it would be better to use this opportunity to take >> some time to make sure we get it right. > > As a compromise, what about the following: > ? ? ? ?- remove ABI-incompatible changes for 1.4.x This is done: http://github.com/cournape/numpy/tree/abi_fix This can be committed to svn in whatever branch we decide to put this in. I have also committed changes into scipy 0.7.x, so that if building scipy against numpy 1.3.0, and updating numpy from the above branch still gives a working scipy (modulo one test which fails when run against abi_fix numpy, but unlikely to be to ABI issues). cheers, David From ralf.gommers at googlemail.com Mon Feb 8 07:14:06 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 8 Feb 2010 20:14:06 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X Message-ID: Hi David and all, I have a few questions on setting up the build environment on OS X for Windows binaries. I have Wine installed with Python 2.5 and 2.6, MakeNsis and MinGW. The first question is what is meant in the Paver script by "cpuid plugin". Wine seems to know what to do with a cpuid instruction, but I can not find a plugin. Searching for "cpuid plugin" turns up nothing except the NumPy pavement.py file. What is this? Second question is about Fortran. 
It's needed for SciPy at least, so I may as well get it right now. MinGW only comes with g77, and this page: http://www.scipy.org/Installing_SciPy/Windows says that this is the default compiler. So Fortran 77 on Windows and Fortran 95 on OS X as defaults, is that right? No need for g95/gfortran at all? Final question is about Atlas and friends. Is 3.8.3 the best version to install? Does it compile out of the box under Wine? Is this page http://www.scipy.org/Installing_SciPy/Windows still up-to-date with regard to the Lapack/Atlas info and does it apply for Wine? And do I have to compile it three times, with the only difference the '-arch' flag set to "SSE2", "SSE3" and ""? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From seb.haase at gmail.com Mon Feb 8 08:47:56 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Mon, 8 Feb 2010 14:47:56 +0100 Subject: [Numpy-discussion] scipy-tickets restarted emailing on jan17 - how about numpy-tickets ? In-Reply-To: References: Message-ID: Hi, I solved the problem: GMail apparently filtered all numpy-ticket and numpy-svn mails into spam. In case someone benefits from thins info. -Sebastian On Mon, Jan 25, 2010 at 3:54 PM, Ryan May wrote: > On Mon, Jan 25, 2010 at 2:55 AM, Sebastian Haase wrote: >> Hi, >> long time ago I had subscript to get both scipy-tickets and >> numpy-tickets emailed. >> Now scipy-tickets apparently started emailing again on 17th of Januar. >> Will numpy-tickets also come back "by itself" - or should I resubscribe? > > I'm seeing traffic on numpy-tickets since about the time scipy-tickets > came back. I'd try resubscribing. 
> > Ryan > > -- > Ryan May > Graduate Research Assistant > School of Meteorology > University of Oklahoma > Sent from Norman, Oklahoma, United States > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Mon Feb 8 09:25:13 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 8 Feb 2010 09:25:13 -0500 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: Message-ID: <1cd32cbb1002080625t6563bcd9r81bef41d453e8ef3@mail.gmail.com> On Mon, Feb 8, 2010 at 7:14 AM, Ralf Gommers wrote: > Hi David and all, > > I have a few questions on setting up the build environment on OS X for > Windows binaries. I have Wine installed with Python 2.5 and 2.6, MakeNsis > and MinGW. The first question is what is meant in the Paver script by "cpuid > plugin". Wine seems to know what to do with a cpuid instruction, but I can > not find a plugin. Searching for "cpuid plugin" turns up nothing except the > NumPy pavement.py file. What is this? > > Second question is about Fortran. It's needed for SciPy at least, so I may > as well get it right now. MinGW only comes with g77, and this page: > http://www.scipy.org/Installing_SciPy/Windows says that this is the default > compiler. So Fortran 77 on Windows and Fortran 95 on OS X as defaults, is > that right? No need for g95/gfortran at all? > > Final question is about Atlas and friends. Is 3.8.3 the best version to > install? Does it compile out of the box under Wine? Is this page > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with regard > to the Lapack/Atlas info and does it apply for Wine?? And do I have to > compile it three times, with the only difference the '-arch' flag set to > "SSE2", "SSE3" and ""? Currently scipy binaries are build with MingW 3.4.5, as far as I know, which includes g77. 
The latest release of MingW uses gfortran, gcc 4.4.0 I think, that, eventually, scipy should switch to gfortran also on Windows. But it might need some compatibility testing. And it would be very useful if someone could provide the Lapack/Atlas binaries, similar to the ones that are on the scipy webpage for mingw 3.4.5. (I don't have a setup where I can build Atlas binaries). I haven't switched yet, but, given some comments on the mailinglists, it looks like several windows users are using gfortran without reported problems. Josef > > Thanks, > Ralf > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From seb.haase at gmail.com Mon Feb 8 09:29:37 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Mon, 8 Feb 2010 15:29:37 +0100 Subject: [Numpy-discussion] Compact way of performing array math with specified result type? In-Reply-To: <4633B6F1.8050904@ieee.org> References: <4633B6F1.8050904@ieee.org> Message-ID: On Sat, Apr 28, 2007 at 10:04 PM, Travis Oliphant wrote: > Russell E. Owen wrote: >> I often find myself doing simple math on sequences of numbers (which >> might or might not be numpy arrays) where I want the result (and thus >> the inputs) coerced to a particular data type. >> >> I'd like to be able to say: >> >> ? numpy.divide(seq1, seq2, dtype=float) >> >> but ufuncs don't allow on to specify a result type. So I do this instead: >> >> ? numpy.array(seq1, dtype=float) / numpy.array(seq2, dtype=float) >> >> Is there a more compact solution (without having to create the result >> array first and supply it as an argument)? >> > > Every ufunc has a little-documented keyword "sig" for (signature) which > allows you to specify the signature of the inner loop. > > Thus, > > numpy.divide(seq1, seq1, sig=('d',)*3) > > will do what you want. 
> > -Travis > Hi, going through my very old emails - I was wondering if this has gotten better documented by now !? (and where ?) -Sebastian Haase From ralf.gommers at googlemail.com Mon Feb 8 09:54:03 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 8 Feb 2010 22:54:03 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <1cd32cbb1002080625t6563bcd9r81bef41d453e8ef3@mail.gmail.com> References: <1cd32cbb1002080625t6563bcd9r81bef41d453e8ef3@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:25 PM, wrote: > On Mon, Feb 8, 2010 at 7:14 AM, Ralf Gommers > wrote: > > Hi David and all, > > > > I have a few questions on setting up the build environment on OS X for > > Windows binaries. I have Wine installed with Python 2.5 and 2.6, MakeNsis > > and MinGW. The first question is what is meant in the Paver script by > "cpuid > > plugin". Wine seems to know what to do with a cpuid instruction, but I > can > > not find a plugin. Searching for "cpuid plugin" turns up nothing except > the > > NumPy pavement.py file. What is this? > > > > Second question is about Fortran. It's needed for SciPy at least, so I > may > > as well get it right now. MinGW only comes with g77, and this page: > > http://www.scipy.org/Installing_SciPy/Windows says that this is the > default > > compiler. So Fortran 77 on Windows and Fortran 95 on OS X as defaults, is > > that right? No need for g95/gfortran at all? > > > > Final question is about Atlas and friends. Is 3.8.3 the best version to > > install? Does it compile out of the box under Wine? Is this page > > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with > regard > > to the Lapack/Atlas info and does it apply for Wine? And do I have to > > compile it three times, with the only difference the '-arch' flag set to > > "SSE2", "SSE3" and ""? > > Currently scipy binaries are build with MingW 3.4.5, as far as I know, > which includes g77. 
The latest release of MingW uses gfortran, gcc > 4.4.0 > You mean gcc 3.4.5, and yes that's what I've got. MinGW itself is at version 5.1.6 now, and still include gcc and g77 3.4.5. Not sure where you see gcc 4.4.0 but I can easily have missed it on what surely has to be the worst download page on SourceForge: http://sourceforge.net/projects/mingw/files/ > > I think, that, eventually, scipy should switch to gfortran also on > Windows. But it might need some compatibility testing. > And it would be very useful if someone could provide the Lapack/Atlas > binaries, similar to the ones that are on the scipy webpage for mingw > 3.4.5. (I don't have a setup where I can build Atlas binaries). > Where are these binaries hidden? All I can find is http://scipy.org/Cookbook/CompilingExtensionsOnWindowsWithMinGW > > I haven't switched yet, but, given some comments on the mailinglists, > it looks like several windows users are using gfortran without > reported problems. > > Makes sense to use the same Fortran compiler everywhere. gfortran works well for me on OS X. Thanks Josef. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Feb 8 10:06:44 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 8 Feb 2010 10:06:44 -0500 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: <1cd32cbb1002080625t6563bcd9r81bef41d453e8ef3@mail.gmail.com> Message-ID: <1cd32cbb1002080706n4fd0f26dhefca29faa4bb83f5@mail.gmail.com> On Mon, Feb 8, 2010 at 9:54 AM, Ralf Gommers wrote: > > > On Mon, Feb 8, 2010 at 10:25 PM, wrote: >> >> On Mon, Feb 8, 2010 at 7:14 AM, Ralf Gommers >> wrote: >> > Hi David and all, >> > >> > I have a few questions on setting up the build environment on OS X for >> > Windows binaries. I have Wine installed with Python 2.5 and 2.6, >> > MakeNsis >> > and MinGW. The first question is what is meant in the Paver script by >> > "cpuid >> > plugin". 
Wine seems to know what to do with a cpuid instruction, but I >> > can >> > not find a plugin. Searching for "cpuid plugin" turns up nothing except >> > the >> > NumPy pavement.py file. What is this? >> > >> > Second question is about Fortran. It's needed for SciPy at least, so I >> > may >> > as well get it right now. MinGW only comes with g77, and this page: >> > http://www.scipy.org/Installing_SciPy/Windows says that this is the >> > default >> > compiler. So Fortran 77 on Windows and Fortran 95 on OS X as defaults, >> > is >> > that right? No need for g95/gfortran at all? >> > >> > Final question is about Atlas and friends. Is 3.8.3 the best version to >> > install? Does it compile out of the box under Wine? Is this page >> > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with >> > regard >> > to the Lapack/Atlas info and does it apply for Wine?? And do I have to >> > compile it three times, with the only difference the '-arch' flag set to >> > "SSE2", "SSE3" and ""? >> >> Currently scipy binaries are build with MingW 3.4.5, as far as I know, >> which includes g77. The latest release of MingW uses gfortran, gcc >> 4.4.0 > > You mean gcc 3.4.5, and yes that's what I've got. MinGW itself is at version > 5.1.6 now, and still include gcc and g77 3.4.5. Not sure where you see gcc > 4.4.0 but I can easily have missed it on what surely has to be the worst > download page on SourceForge: http://sourceforge.net/projects/mingw/files/ (I don't think the mingw version is important, it's more important which gcc is bundled, so I'm sloppy.) http://sourceforge.net/projects/mingw/files/GCC%20Version%204/Current%20Release_%20gcc-4.4.0/ "view all files" and header "GCC Version 4" mingw hompage is a bit scarce on information on release version, at least I don't find it >> >> I think, that, eventually, scipy should switch to gfortran also on >> Windows. But it might need some compatibility testing. 
>> And it would be very useful if someone could provide the Lapack/Atlas >> binaries, similar to the ones that are on the scipy webpage for mingw >> 3.4.5. (I don't have a setup where I can build Atlas binaries). > > Where are these binaries hidden? All I can find is > http://scipy.org/Cookbook/CompilingExtensionsOnWindowsWithMinGW These are the Atlas binaries that I am using with MinGW gcc 3.4.5 http://scipy.org/Installing_SciPy/Windows#head-cd37d819e333227e327079e4c2a2298daf625624 > >> >> I haven't switched yet, but, given some comments on the mailinglists, >> it looks like several windows users are using gfortran without >> reported problems. >> > Makes sense to use the same Fortran compiler everywhere. gfortran works well > for me on OS X. Thanks Josef. > > Cheers, > Ralf > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From ralf.gommers at googlemail.com Mon Feb 8 10:17:39 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 8 Feb 2010 23:17:39 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <1cd32cbb1002080706n4fd0f26dhefca29faa4bb83f5@mail.gmail.com> References: <1cd32cbb1002080625t6563bcd9r81bef41d453e8ef3@mail.gmail.com> <1cd32cbb1002080706n4fd0f26dhefca29faa4bb83f5@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 11:06 PM, wrote: > >> Currently scipy binaries are build with MingW 3.4.5, as far as I know, > >> which includes g77. The latest release of MingW uses gfortran, gcc > >> 4.4.0 > > > > You mean gcc 3.4.5, and yes that's what I've got. MinGW itself is at > version > > 5.1.6 now, and still include gcc and g77 3.4.5. 
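Incidentally, the choice between g77 and gfortran doesn't have to come from whatever MinGW installs as its default: numpy.distutils accepts an explicit Fortran compiler through its config_fc command. A sketch of the relevant invocations (assuming you are running them from a scipy source tree, e.g. under Wine; "gnu" and "gnu95" are the standard numpy.distutils names for g77 and gfortran):

```shell
# List the Fortran compilers numpy.distutils can detect on this system
python setup.py config_fc --help-fcompiler

# Build with g77 (the "gnu" fcompiler, bundled with MinGW's gcc 3.4.5)
python setup.py config_fc --fcompiler=gnu build --compiler=mingw32

# Build with gfortran (the "gnu95" fcompiler, from the gcc 4.x packages)
python setup.py config_fc --fcompiler=gnu95 build --compiler=mingw32
```

Note that g77 and gfortran use incompatible runtime libraries (libg2c vs. libgfortran), so the Atlas/Lapack binaries being linked in have to come from the same compiler family — which is one reason switching compilers also means rebuilding Atlas.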
Not sure where you see > gcc > > 4.4.0 but I can easily have missed it on what surely has to be the worst > > download page on SourceForge: > http://sourceforge.net/projects/mingw/files/ > > > (I don't think the mingw version is important, it's more important > which gcc is bundled, so I'm sloppy.) > > > http://sourceforge.net/projects/mingw/files/GCC%20Version%204/Current%20Release_%20gcc-4.4.0/ > > "view all files" and header "GCC Version 4" > > This is not in the current default MinGW bundle, that still pulls in 3.4.5. But you could indeed install it manually. > > >> > >> I think, that, eventually, scipy should switch to gfortran also on > >> Windows. But it might need some compatibility testing. > >> And it would be very useful if someone could provide the Lapack/Atlas > >> binaries, similar to the ones that are on the scipy webpage for mingw > >> 3.4.5. (I don't have a setup where I can build Atlas binaries). > > > > Where are these binaries hidden? All I can find is > > http://scipy.org/Cookbook/CompilingExtensionsOnWindowsWithMinGW > > These are the Atlas binaries that I am using with MinGW gcc 3.4.5 > > > http://scipy.org/Installing_SciPy/Windows#head-cd37d819e333227e327079e4c2a2298daf625624 > > Ah yes, thanks. I read that page before, but the word 'Pentium' triggered a fast-forward. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From peno at telenet.be Mon Feb 8 12:19:18 2010 From: peno at telenet.be (Peter Notebaert) Date: Mon, 8 Feb 2010 18:19:18 +0100 Subject: [Numpy-discussion] Python 2.6 and numpy 1.3.0/1.4.0 from an extension Message-ID: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> I have made an extension that also uses numpy. I developed with Python 2.6 and numpy 1.4.0 This works all fine. The problem is that users that use this extension get crashes from the moment they use the extension, and this is because of numpy. It crashes when numpy is initialised.
This is because those users also have Python 2.6, but with numpy 1.3.0!!! This is because they installed both via the Pythonxy setup. How can this be handled when users have different versions of numpy for a given version of Python? Python has no problem using numpy 1.3.0 or 1.4.0, so how can I make this possible in my extension? Thanks for your input -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Feb 8 13:16:59 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 11:16:59 -0700 Subject: [Numpy-discussion] Python 2.6 and numpy 1.3.0/1.4.0 from an extension In-Reply-To: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> References: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:19 AM, Peter Notebaert wrote: > I have made an extension that also uses numpy. > I developed with Python 2.6 and numpy 1.4.0 > This works all fine. > > The problem is that users that use this extension get crashes from the > moment they use the extension, and this is because of numpy. It crashes when > numpy is initialised. > This is because those users also have Python 2.6, but with numpy 1.3.0!!! > This is because they installed both via the Pythonxy setup. > > How can this be handled when users have different versions of numpy for a > given version of Python? Python has no problem using numpy 1.3.0 or 1.4.0, > so how can I make this possible in my extension? > > There was a similar problem in 1.3 where it called a new function in the API which led to segfaults when 1.3 extensions were run on older versions of numpy. Ironically, this was fixed in 1.4 by adding another function to check at runtime, but this will cause segfaults on 1.3. Next cycle, there should be a warning issued, i.e., 1.5 running on 1.4 should raise an error instead of crashing.
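In essence, the runtime check described here compares a C-API version number frozen into the extension at compile time with the number the installed NumPy reports at import time, and refuses to load on a mismatch instead of segfaulting later. A simplified Python model of that logic — the constants and function name below are illustrative, not NumPy's actual NPY_FEATURE_VERSION machinery:

```python
# Illustrative model of an import-time C-API version guard.
COMPILE_TIME_API = 0x0104  # frozen into the extension when built (made-up value)

def check_api_version(runtime_api, compile_time_api=COMPILE_TIME_API):
    """Refuse to load an extension built against a newer C-API than the
    running numpy provides; merely warn the other way around, since
    extensions built against an older numpy normally keep working."""
    if compile_time_api > runtime_api:
        raise ImportError(
            "extension compiled against API version 0x%x, but the running "
            "numpy only provides 0x%x" % (compile_time_api, runtime_api))
    if compile_time_api < runtime_api:
        return "warning: older extension, newer numpy (usually fine)"
    return "ok"
```

With this guard in place, loading an extension built against a newer API raises a readable ImportError; on 1.3 and earlier the checking function itself does not exist, which is why the failure there is a segfault rather than an error.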
In any case, forward compatibility isn't guaranteed between minor version changes as the API can change. Extensions need to be developed, or at least compiled, against the earliest version of numpy with which they will be used. Extensions compiled against older versions of numpy should run on newer versions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Feb 8 13:20:10 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 11:20:10 -0700 Subject: [Numpy-discussion] Python 2.6 and numpy 1.3.0/1.4.0 from an extension In-Reply-To: References: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 11:16 AM, Charles R Harris wrote: > > > On Mon, Feb 8, 2010 at 10:19 AM, Peter Notebaert wrote: > >> I have made an extension that also uses numpy. >> I developed with Python 2.6 and numpy 1.4.0 >> This works all fine. >> >> The problem is that users that use this extension get crashes from the >> moment they use the extension, and this is because of numpy. It crashes when >> numpy is initialised. >> This is because those users also have Python 2.6, but with numpy 1.3.0!!! >> This is because they installed both via the Pythonxy setup. >> >> How can this be handled when users have different versions of numpy for a >> given version of Python? Python has no problem using numpy 1.3.0 or 1.4.0, >> so how can I make this possible in my extension? >> >> There was a similar problem in 1.3 where it called a new function in the > API which led to segfaults when 1.3 extensions were run on older versions of > numpy. Ironically, this was fixed in 1.4 by adding another function to check > at runtime, but this will cause segfaults on 1.3. Next cycle, there should > be a warning issued, i.e., 1.5 running on 1.4 should raise an error instead > of crashing. In any case, forward compatibility isn't guaranteed between > minor version changes as the API can change.
Extensions need to be > developed, or at least compiled, against the earliest version of numpy with > which they will be used. Extensions compiled against older versions of numpy > should run on newer versions. > > Let me add that 1.4 introduced an unintended ABI change that will cause problems with backward compatibility also. 1.4 is going to be removed as a broken release on that account and a 1.4.1 or 1.4.0.1 version released to fix that problem. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon Feb 8 14:11:34 2010 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 08 Feb 2010 14:11:34 -0500 Subject: [Numpy-discussion] Python 2.6 and numpy 1.3.0/1.4.0 from an extension In-Reply-To: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> References: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> Message-ID: <4B7061E6.7020100@american.edu> I see that NumPy 1.4.0 is still the download offered on SourceForge. Did I misunderstand that a decision had been made to withdraw it, at least until the ongoing discussion about ABI breakage is resolved? (Btw, as a user, I'm hoping Jarrod's sensible proposal prevails in that discussion. That proposal seems compatible with Travis O's original policy proposal governing version numbering and ABI breakage.) Alan Isaac From millman at berkeley.edu Mon Feb 8 14:38:50 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 8 Feb 2010 11:38:50 -0800 Subject: [Numpy-discussion] Python 2.6 and numpy 1.3.0/1.4.0 from an extension In-Reply-To: <4B7061E6.7020100@american.edu> References: <32a250a51002080919m2e6f9aady277670390abac466@mail.gmail.com> <4B7061E6.7020100@american.edu> Message-ID: On Mon, Feb 8, 2010 at 11:11 AM, Alan G Isaac wrote: > I see that NumPy 1.4.0 is still the download > offered on SourceForge.
?Did I misunderstand > that a decision had been made to withdraw it, > at least until the ongoing discussion about > ABI breakage is resolved? I went ahead and set the default download for NumPy back to the 1.3.0 release on sourceforge. I also added a news item stating that 1.4.0 has temporarily been pulled due to the unintended ABI break. -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From millman at berkeley.edu Mon Feb 8 14:52:49 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 8 Feb 2010 11:52:49 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> Message-ID: I went ahead and set the default download for NumPy back to the 1.3.0 release on sourceforge. I also added a news item stating that 1.4.0 has temporarily been pulled due to the unintended ABI break pending a decision by the developers. Currently, the 1.4.0 release can still be accessed if you go to the download manager for sourceforge. Jarrod From charlesr.harris at gmail.com Mon Feb 8 15:47:03 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 13:47:03 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 12:52 PM, Jarrod Millman wrote: > I went ahead and set the default download for NumPy back to the 1.3.0 > release on sourceforge. I also added a news item stating that 1.4.0 > has temporarily been pulled due to the unintended ABI break pending a > decision by the developers. Currently, the 1.4.0 release can still be > accessed if you go to the download manager for sourceforge. > > I think we need to make that decision now. It seems to have gotten hung up in conflicts that need to be resolved. How should we go about it? Does the numpy steering council (name?) have a role here. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Mon Feb 8 16:43:40 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Mon, 8 Feb 2010 15:43:40 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <04BC4321-C31C-413C-B7AC-71C0BDD63E16@enthought.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> Message-ID: <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> On Feb 8, 2010, at 2:47 PM, Charles R Harris wrote: > > > On Mon, Feb 8, 2010 at 12:52 PM, Jarrod Millman > wrote: > I went ahead and set the default download for NumPy back to the 1.3.0 > release on sourceforge. 
I also added a news item stating that 1.4.0 > has temporarily been pulled due to the unintended ABI break pending a > decision by the developers. Currently, the 1.4.0 release can still be > accessed if you go to the download manager for sourceforge. > > > I think we need to make that decision now. It seems to have gotten > hung up in conflicts that need to be resolved. How should we go > about it? Does the numpy steering council (name?) have a role here. It seems like consensus has been reached on making 1.4.1 an ABI compatible release. The remaining question is what to call the next release of NumPy 1.5 or 2.0. I would prefer to call it 1.5 because 2.0 "sounds" like it's significantly different from a use-level than 1.4, but it won't be. While it is a pain to update all your packages, we just make clear that with NumPy 1.5 you have to re-compile extensions built with it. Yes, that is a break with what we thought would be the pattern used at SciPy 2008, but it has been many years since an ABI break has occurred, and I wouldn't mind updating the pattern. I don't really like the idea of tying the version number to the ABI number anyway. This was one reason to put an actual ABI number in the source code to begin with (so that it could be queried independently of the version number). I do agree that the ABI should not change much. But, sometimes it is unavoidable. This rare occurrence should really be independent of the version number system which should be allowed to change independently based on the API alterations. I'm not really much in to "majority-wins" kinds of approaches (I much prefer consensus when it can be reached). But, in this case I think the majority of David, Pauli, Chuck, Robert, and I should decide the issue. -Travis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Mon Feb 8 16:57:48 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 14:57:48 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 2:43 PM, Travis Oliphant wrote: > > On Feb 8, 2010, at 2:47 PM, Charles R Harris wrote: > > > > On Mon, Feb 8, 2010 at 12:52 PM, Jarrod Millman wrote: > >> I went ahead and set the default download for NumPy back to the 1.3.0 >> release on sourceforge. I also added a news item stating that 1.4.0 >> has temporarily been pulled due to the unintended ABI break pending a >> decision by the developers. Currently, the 1.4.0 release can still be >> accessed if you go to the download manager for sourceforge. >> >> > I think we need to make that decision now. It seems to have gotten hung up > in conflicts that need to be resolved. How should we go about it? Does the > numpy steering council (name?) have a role here. > > > It seems like consensus has been reached on making 1.4.1 an ABI compatible > release. > > The remaining question is what to call the next release of NumPy 1.5 or > 2.0. > > I would prefer to call it 1.5 because 2.0 "sounds" like it's significantly > different from a use-level than 1.4, but it won't be. While it is a pain > to update all your packages, we just make clear that with NumPy 1.5 you have > to re-compile extensions built with it. 
Yes, that is a break with what we > thought would be the pattern used at SciPy 2008, but it has been many years > since an ABI break has occurred, and I wouldn't mind updating the pattern. > > > I don't really like the idea of tying the version number to the ABI number > anyway. This was one reason to put an actual ABI number in the source > code to begin with (so that it could be queried independently of the version > number). > > I do agree that the ABI should not change much. But, sometimes it is > unavoidable. This rare occurrence should really be independent of the > version number system which should be allowed to change independently based > on the API alterations. > > I'm not really much in to "majority-wins" kinds of approaches (I much > prefer consensus when it can be reached). But, in this case I think the > majority of David, Pauli, Chuck, Robert, and I should decide the issue. > > It sounds like the remaining issue is the number to give to the ABI breaking release. All releases should naturally be made as expeditiously as possible. So, here is the question before the house: Should the release containing the datetime/hasobject changes be called a) 1.5.0 b) 2.0.0 My vote goes to a). Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Feb 8 17:02:40 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 15:02:40 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 2:57 PM, Charles R Harris wrote: > > > On Mon, Feb 8, 2010 at 2:43 PM, Travis Oliphant wrote: > >> >> On Feb 8, 2010, at 2:47 PM, Charles R Harris wrote: >> >> >> >> On Mon, Feb 8, 2010 at 12:52 PM, Jarrod Millman wrote: >> >>> I went ahead and set the default download for NumPy back to the 1.3.0 >>> release on sourceforge. I also added a news item stating that 1.4.0 >>> has temporarily been pulled due to the unintended ABI break pending a >>> decision by the developers. Currently, the 1.4.0 release can still be >>> accessed if you go to the download manager for sourceforge. >>> >>> >> I think we need to make that decision now. It seems to have gotten hung up >> in conflicts that need to be resolved. How should we go about it? Does the >> numpy steering council (name?) have a role here. >> >> >> It seems like consensus has been reached on making 1.4.1 an ABI compatible >> release. >> >> The remaining question is what to call the next release of NumPy 1.5 or >> 2.0. >> >> I would prefer to call it 1.5 because 2.0 "sounds" like it's significantly >> different from a use-level than 1.4, but it won't be. While it is a pain >> to update all your packages, we just make clear that with NumPy 1.5 you have >> to re-compile extensions built with it. Yes, that is a break with what we >> thought would be the pattern used at SciPy 2008, but it has been many years >> since an ABI break has occurred, and I wouldn't mind updating the pattern. >> >> >> I don't really like the idea of tying the version number to the ABI number >> anyway. 
This was one reason to put an actual ABI number in the source >> code to begin with (so that it could be queried independently of the version >> number). >> >> I do agree that the ABI should not change much. But, sometimes it is >> unavoidable. This rare occurrence should really be independent of the >> version number system which should be allowed to change independently based >> on the API alterations. >> >> I'm not really much in to "majority-wins" kinds of approaches (I much >> prefer consensus when it can be reached). But, in this case I think the >> majority of David, Pauli, Chuck, Robert, and I should decide the issue. >> >> > It sounds like the remaining issue is the number to give to the ABI > breaking release. All releases should naturally be made as expeditiously as > possible. So, here is the question before the house: > > Should the release containing the datetime/hasobject changes be called > > a) 1.5.0 > b) 2.0.0 > > My vote goes to a). > > Oops, make that b). I want it to be called 2.0.0 Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Feb 8 17:05:07 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 8 Feb 2010 14:05:07 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris wrote: > Should the release containing the datetime/hasobject changes be called > > a) 1.5.0 > b) 2.0.0 My vote goes to b. 
Jarrod From dsdale24 at gmail.com Mon Feb 8 17:05:57 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 17:05:57 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: > On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris > wrote: >> Should the release containing the datetime/hasobject changes be called >> >> a) 1.5.0 >> b) 2.0.0 > > My vote goes to b. You don't matter. Nor do I. From robert.kern at gmail.com Mon Feb 8 17:07:29 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 16:07:29 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> On Mon, Feb 8, 2010 at 16:05, Darren Dale wrote: > On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: >> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >> wrote: >>> Should the release containing the datetime/hasobject changes be called >>> >>> a) 1.5.0 >>> b) 2.0.0 >> >> My vote goes to b. > > You don't matter. Nor do I. Jarrod is on the steering committee. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From dsdale24 at gmail.com Mon Feb 8 17:08:17 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 17:08:17 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 5:05 PM, Darren Dale wrote: > On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: >> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >> wrote: >>> Should the release containing the datetime/hasobject changes be called >>> >>> a) 1.5.0 >>> b) 2.0.0 >> >> My vote goes to b. > > You don't matter. Nor do I. I definitely should have counted to 100 before sending that. It wasn't helpful and I apologize. Darren From gael.varoquaux at normalesup.org Mon Feb 8 17:09:31 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 8 Feb 2010 23:09:31 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: <20100208220931.GA25478@phare.normalesup.org> On Mon, Feb 08, 2010 at 05:08:17PM -0500, Darren Dale wrote: > On Mon, Feb 8, 2010 at 5:05 PM, Darren Dale wrote: > > On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: > >> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris > >> wrote: > >>> Should the release containing the datetime/hasobject changes be called > >>> a) 1.5.0 > >>> b) 2.0.0 > >> My vote goes to b. > > You don't matter. Nor do I. > I definitely should have counted to 100 before sending that. It wasn't > helpful and I apologize. Actually, Darren, I found you fairly entertaining. 
;) Ga?l From matthew.brett at gmail.com Mon Feb 8 17:10:41 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 14:10:41 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> Message-ID: <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> On Mon, Feb 8, 2010 at 2:07 PM, Robert Kern wrote: > On Mon, Feb 8, 2010 at 16:05, Darren Dale wrote: >> On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: >>> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >>> wrote: >>>> Should the release containing the datetime/hasobject changes be called >>>> >>>> a) 1.5.0 >>>> b) 2.0.0 >>> >>> My vote goes to b. >> >> You don't matter. Nor do I. > > Jarrod is on the steering committee. You seem to be pointing out that Darren's vote doesn't count but Jarrod's does. Really, that's a view of the steering committee idea that seems to me a bit miserable. Matthew From matthew.brett at gmail.com Mon Feb 8 17:12:31 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 14:12:31 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: <1e2af89e1002081412v2c2cbcc3v1cbfb65e1f6607d2@mail.gmail.com> Hi, On Mon, Feb 8, 2010 at 2:05 PM, Jarrod Millman wrote: > On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris > wrote: >> Should the release containing the datetime/hasobject changes be called >> >> a) 1.5.0 >> b) 2.0.0 > > My vote goes to b. I guess Travis' point is that 2.0 implies rather large feature difference from - say 1.0.0 - and this isn't the case. On the other hand, I don't see what substantial difference that makes in the long run - we can always go to 3.0 for a big rewrite and I don't think we'll lose any users as a result. On the other hand we might lose users from an ABI change not easily predicted from the version numbering. I guess what I'm saying is we have lots of integers left, and they are cheap, and I'd also vote for using one up to get round this little hurdle. Best, Matthew From robert.kern at gmail.com Mon Feb 8 17:17:30 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 16:17:30 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> Message-ID: <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> On Mon, Feb 8, 2010 at 16:10, Matthew Brett wrote: > On Mon, Feb 8, 2010 at 2:07 PM, Robert Kern wrote: >> On Mon, Feb 8, 2010 at 16:05, Darren Dale wrote: >>> On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: >>>> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >>>> wrote: >>>>> Should the release containing the datetime/hasobject changes be called >>>>> >>>>> a) 1.5.0 >>>>> b) 2.0.0 >>>> >>>> My vote goes to b. >>> >>> You don't matter. Nor do I. >> >> Jarrod is on the steering committee. > > You seem to be pointing out that Darren's vote doesn't count but Jarrod's does. > > Really, that's a view of the steering committee idea that seems to me > a bit miserable. It's just the way that voting works. Voting cannot work without clear membership rules. That's why we try to avoid voting as much as possible. That's why the discussion has gone on so long. We want to hear everyone's input (especially Darren's) and try to work towards a consensus solution that everyone can live with. When that fails, and there is significant dissent over the major solutions at the end of the discussion, then you fall back to the much inferior voting mechanism. Making technical decisions by a vote is the worst possible outcome, but it's the last decision mechanism available short of a BDFL. Trust me, the steering committee would much prefer not to decide anything by any means.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Mon Feb 8 17:25:46 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 8 Feb 2010 14:25:46 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: On Mon, Feb 8, 2010 at 2:08 PM, Darren Dale wrote: >> You don't matter. Nor do I. > > I definitely should have counted to 100 before sending that. It wasn't > helpful and I apologize. No worries, your first email brought a smile to my face. From matthew.brett at gmail.com Mon Feb 8 17:27:10 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 14:27:10 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> Message-ID: <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> > Trust me, the steering committee would much prefer not to decide > anything by any means. I do trust you ;) Looking at the emails, it seems to me there's quite a strong consensus. You don't mean that the steering committee is needed when people on the steering committee don't agree with the consensus, I'm sure. 
See you, Matthew From robert.kern at gmail.com Mon Feb 8 17:30:37 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 16:30:37 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> Message-ID: <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> On Mon, Feb 8, 2010 at 16:27, Matthew Brett wrote: >> Trust me, the steering committee would much prefer not to decide >> anything by any means. > > I do trust you ;) > > Looking at the emails, it seems to me there's quite a strong consensus. No, there isn't. Consensus means everyone, not just a strong majority. http://producingoss.com/en/consensus-democracy.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthew.brett at gmail.com Mon Feb 8 17:32:40 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 14:32:40 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> Message-ID: <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> > No, there isn't. Consensus means everyone, not just a strong majority. > > http://producingoss.com/en/consensus-democracy.html I stand corrected. I meant then, that there's a strong majority agreement on what to do. See you, Matthew From cournape at gmail.com Mon Feb 8 17:38:28 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 9 Feb 2010 07:38:28 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: <5b8d13221002081438n4c40f59eqe170579de87d3512@mail.gmail.com> On Tue, Feb 9, 2010 at 6:43 AM, Travis Oliphant wrote: > > > I think we need to make that decision now. It seems to have gotten hung up > in conflicts that need to be resolved. How should we go about it? Does the > numpy steering council (name?) have a role here. > > It seems like consensus has been reached on making 1.4.1 an ABI compatible > release. > The remaining question is what to call the next release of NumPy 1.5 or 2.0. 
I am for 1.5 as well, as long as it is marked experimental (the installer's name would have an experimental tag or something). cheers, David From robert.kern at gmail.com Mon Feb 8 17:40:29 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 16:40:29 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> Message-ID: <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> On Mon, Feb 8, 2010 at 16:32, Matthew Brett wrote: >> No, there isn't. Consensus means everyone, not just a strong majority. >> >> http://producingoss.com/en/consensus-democracy.html > > I stand corrected. I meant then, that there's a strong majority > agreement on what to do. That is correct. And having failed to find a consensus solution and with several of the people doing the actual work disagreeing (which is neither you, nor I, nor Darren, nor most readers on this list who have weighed in on the discussion phase and may feel miffed about not getting a final vote), we move on to a vote from the steering committee to formalize that majority. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bsouthey at gmail.com Mon Feb 8 17:51:02 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 8 Feb 2010 16:51:02 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081412v2c2cbcc3v1cbfb65e1f6607d2@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <1e2af89e1002081412v2c2cbcc3v1cbfb65e1f6607d2@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 4:12 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 8, 2010 at 2:05 PM, Jarrod Millman wrote: >> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >> wrote: >>> Should the release containing the datetime/hasobject changes be called >>> >>> a) 1.5.0 >>> b) 2.0.0 >> >> My vote goes to b. > > I guess Travis' point is that 2.0 implies rather large feature > difference from - say 1.0.0 - and this isn't the case. Not that I actually know much about it, but I thought that datetime is a 'rather large feature' difference both in terms of functionality and code. Definitely it will allow a unified date/time usage across various scikits and other projects that have time functions. >On the other > hand, I don't see what substantial difference that makes in the long > run - we can always go to 3.0 for a big rewrite and I don't think > we'll lose any users as a result. On the other hand we might lose > users from an ABI change not easily predicted from the version > numbering. I guess what I'm saying is we have lots of integers left, > and they are cheap, and I'd also vote for using one up to get round > this little hurdle. > > Best, > > Matthew Numbers are just numbers, especially since Numeric got to version 24.2. But these numbers have to mean something as both Jarrod and Robert have indicated. 
My vote is for b, especially as it provides a nice number to indicate compatibility to other programs like Cython and potentially Python 3 support (or lack of it). Bruce From tgrav at mac.com Mon Feb 8 17:51:08 2010 From: tgrav at mac.com (Tommy Grav) Date: Mon, 08 Feb 2010 17:51:08 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002081438n4c40f59eqe170579de87d3512@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <5b8d13221002081438n4c40f59eqe170579de87d3512@mail.gmail.com> Message-ID: <2CEAC2B2-2C99-44AE-B28A-85318D879E19@mac.com> On Feb 8, 2010, at 5:38 PM, David Cournapeau wrote: > On Tue, Feb 9, 2010 at 6:43 AM, Travis Oliphant wrote: >> >> >> I think we need to make that decision now. It seems to have gotten hung up >> in conflicts that need to be resolved. How should we go about it? Does the >> numpy steering council (name?) have a role here. >> >> It seems like consensus has been reached on making 1.4.1 an ABI compatible >> release. >> The remaining question is what to call the next release of NumPy 1.5 or 2.0. > > I am for 1.5 as well, as long as it is marked experimental (the > installers name would have an experimental tag or something). Just wanted to chime in as a user of numpy: following this discussion, the care the developers are taking in deciding issues like this gives me strong confidence in the software being written. Above all, many thanks to everyone who has made numpy such an enormously useful tool in my scientific career! 
Cheers Tommy Grav From matthew.brett at gmail.com Mon Feb 8 18:03:25 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 15:03:25 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> Message-ID: <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> Hi, > That is correct. And having failed to find a consensus solution and > with several of the people doing the actual work disagreeing (which is > neither you, nor I, nor Darren, nor most readers on this list who have > weighed in on the discussion phase and may feel miffed about not > getting a final vote), we move on to a vote from the steering > committee to formalize that majority. I'm continuing only because the discussion has generated some heat, and I think part of that heat comes from the perception that the excellent community spirit of the project is somewhat undermined by the feeling that reasonable arguments are not being fully heard. More generally I completely agree that the decisions have to be made by the people doing the work, and that I'm not one of them. But the emphasis of the work on numpy has shifted from development to maintenance, and I'm still not sure that the discussion thus far has fully reflected that fact. 
I'm really not disagreeing with the decisions made (and if I did, you could rightly and politely ignore me), but I think the atmosphere of how the decisions are made is also important. See you, Matthew From robert.kern at gmail.com Mon Feb 8 18:18:50 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 17:18:50 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> Message-ID: <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> On Mon, Feb 8, 2010 at 17:03, Matthew Brett wrote: > Hi, > >> That is correct. And having failed to find a consensus solution and >> with several of the people doing the actual work disagreeing (which is >> neither you, nor I, nor Darren, nor most readers on this list who have >> weighed in on the discussion phase and may feel miffed about not >> getting a final vote), we move on to a vote from the steering >> committee to formalize that majority. > > I'm continuing only because, the discussion has generated some heat, > and I think part of that heat comes from the perception that the > excellent community spirit of the project is somewhat undermined by > the feeling that reasonable arguments are not being fully heard. How does one get that feeling? 
> More generally I completely agree that the decisions have to be made > by the people doing the work, and that I'm not one of them. But, the > emphasis of the work on numpy has shifted from development to > maintenance, and I'm still not sure that the discussion thus far has > fully reflected that fact. Unfortunately, it's getting too late to address deficiencies in the breadth and depth of the already-too-extensive discussion. You should have spoken up sooner. We need to make a decision now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthew.brett at gmail.com Mon Feb 8 18:43:41 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 15:43:41 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081407l38f03e68m768da8a4b2d06b1c@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> Message-ID: <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> Hi, >> I'm continuing only because, the discussion has generated some heat, >> and I think part of that heat comes from the perception that the >> excellent community spirit of the project is somewhat undermined by >> the feeling that reasonable arguments are not being fully heard. 
> How does one get that feeling? Is that a real question? >> More generally I completely agree that the decisions have to be made >> by the people doing the work, and that I'm not one of them. But, the >> emphasis of the work on numpy has shifted from development to >> maintenance, and I'm still not sure that the discussion thus far has >> fully reflected that fact. > > Unfortunately, it's getting too late to address deficiencies in the > breadth and depth of the already-too-extensive discussion. You should > have spoken up sooner. We need to make a decision now. I'm not asking for influence in the decision, nor am I trying to delay the decision. See you, Matthew From Chris.Barker at noaa.gov Mon Feb 8 18:57:03 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 08 Feb 2010 15:57:03 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002060417y4cfb8046yef7079b0603437a0@mail.gmail.com> <201002061407.20123.faltet@pytables.org> <3C1E8765-546F-40B4-A69D-D66CF44D6432@enthought.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> Message-ID: <4B70A4CF.60308@noaa.gov> Charles R Harris wrote: > Should the release containing the datetime/hasobject changes be called > > a) 1.5.0 > b) 2.0.0 Classic bicycle shed designing... but I like designing bicycle sheds, so I'll make this comment: 2.0 "appears" to the average user to be a big enough deal that they might expect that the 1.4 and 2.0 branches would both be maintained for a while. And maybe even expect that you could have both installed simultaneously. I don't think anyone is planning on supporting that, so I think 1.5 is better. Thanks to the folks doing the real work, here. -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Mon Feb 8 19:25:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 18:25:17 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081410o72573cfeia01041293b5811f0@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> Message-ID: <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> On Mon, Feb 8, 2010 at 17:43, Matthew Brett wrote: > Hi, > >>> I'm continuing only because, the discussion has generated some heat, >>> and I think part of that heat comes from the perception that the >>> excellent community spirit of the project is somewhat undermined by >>> the feeling that reasonable arguments are not being fully heard. >> >> How does one get that feeling? > > Is that a real question? Absolutely. What leads you to believe that the reasonable arguments aren't being heard? If one were to start a thread giving an idea and no one responds while vigorous discussion is happening in other threads, that would certainly be visible evidence of that idea not being fully heard. 
I'm somewhat at a loss to guess how you would ascertain from a thread that has now gone past a hundred messages (most of which favor the side I presume you think the unheard arguments are coming from) that some of the arguments are not being fully heard. These kinds of decisions entail a lot of judgement calls. How many people are affected by an ABI incompatibility? How capable are they of coping with it? How many will walk back to Matlab because of it? No one knows the answers to these questions. In the absence of actual data, we make guesses and assumptions based on gut feelings distilled from past, anecdotal experience and logical arguments. We can discuss the logical arguments all day long and possibly reach a consensus on which arguments have valid structure and which don't. Arguments are either logically sound, or they're not. We can't really argue those gut feelings into a consensus. They come from personal experience which is different for each individual. They are simply not subject to argument. Hearing your gut feeling does little to change mine, but mine not changing doesn't mean that I ignored you or that I have a closed mind to your point of view. It's really quite easy, in a busy thread such as this one, to fail to address every stated point in detail even though you have considered them and still haven't changed your mind. Here's the problem that I don't think many people appreciate: logical arguments suck just as much as personal experience in answering these questions. You can make perfectly structured arguments until you are blue in the face, but without real data to premise them on, they are no better than the gut feelings. They can often be significantly worse if the strength of the logic gets confused with the strength of the premise. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From dsdale24 at gmail.com Mon Feb 8 19:43:37 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 19:43:37 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: > Here's the problem that I don't think many people appreciate: logical > arguments suck just as much as personal experience in answering these > questions. You can make perfectly structured arguments until you are > blue in the face, but without real data to premise them on, they are > no better than the gut feelings. They can often be significantly worse > if the strength of the logic gets confused with the strength of the > premise. If I recall correctly, the convention of not breaking ABI compatibility in minor releases was established in response to the last ABI compatibility break. Am I wrong? Darren From robert.kern at gmail.com Mon Feb 8 19:52:24 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 18:52:24 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: > On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >> Here's the problem that I don't think many people appreciate: logical >> arguments suck just as much as personal experience in answering these >> questions. You can make perfectly structured arguments until you are >> blue in the face, but without real data to premise them on, they are >> no better than the gut feelings. They can often be significantly worse >> if the strength of the logic gets confused with the strength of the >> premise. > > If I recall correctly, the convention of not breaking ABI > compatibility in minor releases was established in response to the > last ABI compatibility break. Am I wrong? I'm not sure how this relates to the material quoted of me, but no, you're not wrong. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at silveregg.co.jp Mon Feb 8 19:52:59 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 09 Feb 2010 09:52:59 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B6F6792.1000103@silveregg.co.jp> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <1e2af89e1002081412v2c2cbcc3v1cbfb65e1f6607d2@mail.gmail.com> Message-ID: <4B70B1EB.3010501@silveregg.co.jp> Bruce Southey wrote: > > Not that I actually know much about it, but I thought that datetime is > a 'rather large feature' difference both in terms of functionality and > code. Definitely it will allow a unified date/time usage across > various scikits and other projects that have time functions. That's a minor feature in the sense that it does not affect everyone. (depending on whom you ask, I guess datetime is not bigger than generalized ufunc or python 2.6 support). The general way of dealing with versions in open source is that a major version change signifies a major API break and a major new/different feature set (the break usually being justified by the new feature set). Also, it should be noted that the ABI break that is now accepted and being worked upon is merely a developer convenience at the expense of our users. It is possible to make almost any change while still being ABI compatible in almost any library. For example, in the case of the datetime change, it could have been handled as a special case - this is ugly and inconvenient, but possible. That's why I am hoping that later on, we will be able to agree on making the necessary breaks to make it much more convenient for us to change things without breaking the ABI. cheers, David From david at silveregg.co.jp Mon Feb 8 20:01:35 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 09 Feb 2010 10:01:35 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: <4B70B3EF.7010105@silveregg.co.jp> Darren Dale wrote: > On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >> Here's the problem that I don't think many people appreciate: logical >> arguments suck just as much as personal experience in answering these >> questions. You can make perfectly structured arguments until you are >> blue in the face, but without real data to premise them on, they are >> no better than the gut feelings. They can often be significantly worse >> if the strength of the logic gets confused with the strength of the >> premise. > > If I recall correctly, the convention of not breaking ABI > compatibility in minor releases was established in response to the > last ABI compatibility break. Am I wrong? That's what I thought as well, but I checked this morning, and the actual number used for versioning has not changed since 1.0 (it is 0x01000009). One issue was that we did not have a way to distinguish API change from ABI changes until 1.2.0 IIRC, and that it was relatively easy to break the ABI without changing any structure because of the way the code generator was coded. IOW, I don't think that an unchanged number means that we have kept ABI compatibility. 
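For readers who want to check this themselves: the number in question can be read back at runtime through a private numpy helper. A minimal sketch follows; the helper's name is taken from numpy's own test suite, it is not public API and may move between releases.

```python
# _get_ndarray_c_version() returns the C-API/ABI number the installed
# numpy binary was built with -- the same value that import_array()
# compares against the NPY_VERSION constant baked into an extension
# module at compile time (0x01000009 throughout the 1.x series
# discussed here). Note: this is a private, undocumented helper.
from numpy.core.multiarray import _get_ndarray_c_version

abi_version = _get_ndarray_c_version()
print(hex(abi_version))
```

If the two numbers disagree, import_array() raises a RuntimeError at import time, which is why an unannounced ABI bump breaks every binary extension built against the older numpy.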
I would like to think that having more regular binary installers helped raise more concern about these issues, but that certainly falls into the gut's feeling department :) David From charlesr.harris at gmail.com Mon Feb 8 20:22:04 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 18:22:04 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B70B3EF.7010105@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <4B70B3EF.7010105@silveregg.co.jp> Message-ID: On Mon, Feb 8, 2010 at 6:01 PM, David Cournapeau wrote: > Darren Dale wrote: > > On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern > wrote: > >> Here's the problem that I don't think many people appreciate: logical > >> arguments suck just as much as personal experience in answering these > >> questions. You can make perfectly structured arguments until you are > >> blue in the face, but without real data to premise them on, they are > >> no better than the gut feelings. They can often be significantly worse > >> if the strength of the logic gets confused with the strength of the > >> premise. > > > > If I recall correctly, the convention of not breaking ABI > > compatibility in minor releases was established in response to the > > last ABI compatibility break. Am I wrong? > > That's what I thought as well, but I checked this morning, and the > actual number used for versioning has not changed since 1.0 (it is > 0x01000009). 
One issue was that we did not have a way to distinguish API > change from ABI changes until 1.2.0 IIRC, and that it was relatively > easy to break the ABI without changing any structure because of the way > the code generator was coded. > > IOW, I don't think that an unchanged number means that we have kept ABI > compatibility. I would like to think that having more regular binary > installers helped getting more concern about the issues, but that's > certainly falls into the gut's feeling department :) > > The policy was established after the last urge to change the ABI. What happened before that is ancient history, events that took place in a time of tribal migrations and upheaval. It was a time when programmers struggled hand to hand with vicious code and treated coding style with disdain. A heroic era. But we're more civilized now ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmay31 at gmail.com Mon Feb 8 20:39:55 2010 From: rmay31 at gmail.com (Ryan May) Date: Mon, 8 Feb 2010 19:39:55 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <20100208220931.GA25478@phare.normalesup.org> References: <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <20100208220931.GA25478@phare.normalesup.org> Message-ID: On Mon, Feb 8, 2010 at 4:09 PM, Gael Varoquaux wrote: > On Mon, Feb 08, 2010 at 05:08:17PM -0500, Darren Dale wrote: >> On Mon, Feb 8, 2010 at 5:05 PM, Darren Dale wrote: >> > On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote: >> >> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris >> >> wrote: >> >>> Should the release containing the datetime/hasobject changes be called > >> >>> a) 1.5.0 >> >>> b) 2.0.0 > >> >> My vote goes to b. > >> > You don't matter. Nor do I. > >> I definitely should have counted to 100 before sending that. It wasn't >> helpful and I apologize. 
> > Actually, Darren, I found you fairly entertaining. > > ;) Agreed. I found it actually helpful in hammering home something said by Travis that was somewhat ignored. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From cournape at gmail.com Mon Feb 8 20:54:15 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 9 Feb 2010 10:54:15 +0900 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: Message-ID: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> On Mon, Feb 8, 2010 at 9:14 PM, Ralf Gommers wrote: > Hi David and all, > > I have a few questions on setting up the build environment on OS X for > Windows binaries. I have Wine installed with Python 2.5 and 2.6, MakeNsis > and MinGW. The first question is what is meant in the Paver script by "cpuid > plugin". Wine seems to know what to do with a cpuid instruction, but I can > not find a plugin. Searching for "cpuid plugin" turns up nothing except the > NumPy pavement.py file. What is this? That's a small NSIS plugin to detect at install time the exact capabilities of the CPU (SSE2, SSE3, etc...). The sources are found in tools/win32build/cpucaps, and should be built with mingw (Visual Studio is not supported, it uses gcc-specific inline assembly). You then copy the dll into the plugin directory of nsis. > Second question is about Fortran. It's needed for SciPy at least, so I may > as well get it right now. MinGW only comes with g77, and this page: > http://www.scipy.org/Installing_SciPy/Windows says that this is the default > compiler. So Fortran 77 on Windows and Fortran 95 on OS X as defaults, is > that right? No need for g95/gfortran at all? gcc 4.x is still not officially supported by MinGW. Gfortran is incompatible with g77, so care should be taken if we change it for NumPy (every extension using f2py will likely be broken as a result on windows).
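For what it's worth, the compiler choice is spelled out explicitly with numpy.distutils, where "gnu" selects g77 and "gnu95" selects gfortran. A sketch, assuming a NumPy/SciPy source tree of that era -- verify the exact flags against your numpy.distutils version:

```shell
# Sketch, not an official recipe. First list the Fortran compilers
# numpy.distutils can detect on this machine:
python setup.py config_fc --help-fcompiler

# Then build explicitly against one of them
# ("gnu" selects g77, "gnu95" selects gfortran):
python setup.py config_fc --fcompiler=gnu95 build
```

Pinning the compiler this way avoids silently mixing g77- and gfortran-built objects, which is exactly the f2py breakage described above.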
Gfortran is the only option on win64, so I was thinking about making the transition to gfortran once we manage to build Numpy and Scipy with it on win64. > Final question is about Atlas and friends. Is 3.8.3 the best version to > install? Does it compile out of the box under Wine? Is this page > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with regard > to the Lapack/Atlas info and does it apply for Wine? Atlas 3.9.x should not be used, it is too unstable IMO (it is a dev version after all, and windows receives little testing compared to unix). I will put the Atlas binaries I am using somewhere - building Atlas is already painful, but building it with a limited architecture on windows takes it to a whole new level (it is not supported by atlas, you have to patch the build system by yourself). cheers, David From dsdale24 at gmail.com Mon Feb 8 21:50:20 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 21:50:20 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: > On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>> Here's the problem that I don't think many people appreciate: logical >>> arguments suck just as much as personal experience in answering these >>> questions. You can make perfectly structured arguments until you are >>> blue in the face, but without real data to premise them on, they are >>> no better than the gut feelings. They can often be significantly worse >>> if the strength of the logic gets confused with the strength of the >>> premise. >> >> If I recall correctly, the convention of not breaking ABI >> compatibility in minor releases was established in response to the >> last ABI compatibility break. Am I wrong? > > I'm not sure how this relates to the material quoted of me, but no, > you're not wrong. Just trying to provide historical context to support the strength of the premise. Darren From matthew.brett at gmail.com Mon Feb 8 22:05:44 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 19:05:44 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: <1e2af89e1002081905p6b1c3f78k653985c19a128668@mail.gmail.com> Hi, >> Is that a real question? > > Absolutely. What leads you to believe that the reasonable arguments > aren't being heard? If one were to start a thread giving an idea and > no one responds while vigorous discussion is happening in other > threads, that would certainly be visible evidence of that idea not > being fully heard. I'm something at a loss to guess how you would > ascertain from a thread that has now gone past a hundred messages > (most of which favor the side I presume you think the unheard > arguments are coming from) that some of the arguments are not being > fully heard. Of course we were always discussing judgement calls, and these are always going to be subjective, but I don't think that means that we can't hope to come to a reasoned agreement. I only wrote because I felt that we were beginning to drift towards a formal committee-style judgement in a situation where it has been pretty clear what the majority view was, and that we have to be careful about that, because it can reduce our feeling of shared ownership and responsibility - a feeling that numpy has been remarkably good at maintaining.
See you, Matthew From robert.kern at gmail.com Mon Feb 8 22:10:06 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 21:10:06 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> Message-ID: <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: > On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: >> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>>> Here's the problem that I don't think many people appreciate: logical >>>> arguments suck just as much as personal experience in answering these >>>> questions. You can make perfectly structured arguments until you are >>>> blue in the face, but without real data to premise them on, they are >>>> no better than the gut feelings. They can often be significantly worse >>>> if the strength of the logic gets confused with the strength of the >>>> premise. >>> >>> If I recall correctly, the convention of not breaking ABI >>> compatibility in minor releases was established in response to the >>> last ABI compatibility break. Am I wrong? >> >> I'm not sure how this relates to the material quoted of me, but no, >> you're not wrong. > > Just trying to provide historical context to support the strength of > the premise. The existence of the policy is not under question (anymore; I settled that with old email a while ago). 
The question is whether to change the policy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Feb 8 22:17:04 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 21:17:04 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002081905p6b1c3f78k653985c19a128668@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <1e2af89e1002081905p6b1c3f78k653985c19a128668@mail.gmail.com> Message-ID: <3d375d731002081917o6c316d72rdb7a0d4937648788@mail.gmail.com> On Mon, Feb 8, 2010 at 21:05, Matthew Brett wrote: > Hi, > >>> Is that a real question? >> >> Absolutely. What leads you to believe that the reasonable arguments >> aren't being heard? If one were to start a thread giving an idea and >> no one responds while vigorous discussion is happening in other >> threads, that would certainly be visible evidence of that idea not >> being fully heard. I'm something at a loss to guess how you would >> ascertain from a thread that has now gone past a hundred messages >> (most of which favor the side I presume you think the unheard >> arguments are coming from) that some of the arguments are not being >> fully heard. 
> > Of course we were always discussing judgement calls, and these are > always going to be subjective, but I don't think that means that we > can't hope to come to a reasoned agreement. I only wrote because I > felt that we were beginning to drift towards a formal committee-style > judgement in a situation where it has been pretty clear what the > majority view was, and that we have to be careful about that, because > it can reduce our feeling of shared ownership and responsibility - a > feeling that numpy has been remarkably good at maintaining. Majorities don't make numpy development decisions normally. Never have. Not of the mailing list membership nor of the steering committee. Implementors do. When implementors disagree strongly and do not reach a consensus, then we fall back to majorities. But as I said before, majority voting requires conscientious control over the voting membership or it isn't majority voting. The process that you identified as being remarkably good at maintaining shared ownership and responsibility isn't majority rule, but consensus among implementors. We just don't have that right now, but we need to get stuff done anyways. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dsdale24 at gmail.com Mon Feb 8 22:23:39 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 22:23:39 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern wrote: > On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: >> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: >>> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>>>> Here's the problem that I don't think many people appreciate: logical >>>>> arguments suck just as much as personal experience in answering these >>>>> questions. You can make perfectly structured arguments until you are >>>>> blue in the face, but without real data to premise them on, they are >>>>> no better than the gut feelings. They can often be significantly worse >>>>> if the strength of the logic gets confused with the strength of the >>>>> premise. >>>> >>>> If I recall correctly, the convention of not breaking ABI >>>> compatibility in minor releases was established in response to the >>>> last ABI compatibility break. Am I wrong? >>> >>> I'm not sure how this relates to the material quoted of me, but no, >>> you're not wrong. >> >> Just trying to provide historical context to support the strength of >> the premise. > > The existence of the policy is not under question (anymore; I settled > that with old email a while ago). The question is whether to change > the policy. So I have gathered. I question whether the concerns that lead to that decision in the first place are somehow less important now. 
Darren From robert.kern at gmail.com Mon Feb 8 22:24:59 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 21:24:59 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> Message-ID: <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: > On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern wrote: >> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: >>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: >>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>>>>> Here's the problem that I don't think many people appreciate: logical >>>>>> arguments suck just as much as personal experience in answering these >>>>>> questions. You can make perfectly structured arguments until you are >>>>>> blue in the face, but without real data to premise them on, they are >>>>>> no better than the gut feelings. They can often be significantly worse >>>>>> if the strength of the logic gets confused with the strength of the >>>>>> premise. >>>>> >>>>> If I recall correctly, the convention of not breaking ABI >>>>> compatibility in minor releases was established in response to the >>>>> last ABI compatibility break. Am I wrong? >>>> >>>> I'm not sure how this relates to the material quoted of me, but no, >>>> you're not wrong. >>> >>> Just trying to provide historical context to support the strength of >>> the premise. 
>> >> The existence of the policy is not under question (anymore; I settled >> that with old email a while ago). The question is whether to change >> the policy. > > So I have gathered. I question whether the concerns that lead to that > decision in the first place are somehow less important now. And we're back to gut feeling territory again. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dsdale24 at gmail.com Mon Feb 8 22:27:39 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 22:27:39 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern wrote: > On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: >> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern wrote: >>> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: >>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: >>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>>>>>> Here's the problem that I don't think many people appreciate: logical >>>>>>> arguments suck just as much as personal experience in answering these >>>>>>> questions. 
You can make perfectly structured arguments until you are >>>>>>> blue in the face, but without real data to premise them on, they are >>>>>>> no better than the gut feelings. They can often be significantly worse >>>>>>> if the strength of the logic gets confused with the strength of the >>>>>>> premise. >>>>>> >>>>>> If I recall correctly, the convention of not breaking ABI >>>>>> compatibility in minor releases was established in response to the >>>>>> last ABI compatibility break. Am I wrong? >>>>> >>>>> I'm not sure how this relates to the material quoted of me, but no, >>>>> you're not wrong. >>>> >>>> Just trying to provide historical context to support the strength of >>>> the premise. >>> >>> The existence of the policy is not under question (anymore; I settled >>> that with old email a while ago). The question is whether to change >>> the policy. >> >> So I have gathered. I question whether the concerns that lead to that >> decision in the first place are somehow less important now. > > And we're back to gut feeling territory again. That's unfair. I can't win based on gut, you know how skinny I am. From robert.kern at gmail.com Mon Feb 8 22:28:35 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Feb 2010 21:28:35 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: <3d375d731002081928w2a53ad33qc07a3d76055fe727@mail.gmail.com> On Mon, Feb 8, 2010 at 21:27, Darren Dale wrote: > On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern wrote: >> On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: >>> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern wrote: >>>> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: >>>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern wrote: >>>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale wrote: >>>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: >>>>>>>> Here's the problem that I don't think many people appreciate: logical >>>>>>>> arguments suck just as much as personal experience in answering these >>>>>>>> questions. You can make perfectly structured arguments until you are >>>>>>>> blue in the face, but without real data to premise them on, they are >>>>>>>> no better than the gut feelings. They can often be significantly worse >>>>>>>> if the strength of the logic gets confused with the strength of the >>>>>>>> premise. >>>>>>> >>>>>>> If I recall correctly, the convention of not breaking ABI >>>>>>> compatibility in minor releases was established in response to the >>>>>>> last ABI compatibility break. Am I wrong? >>>>>> >>>>>> I'm not sure how this relates to the material quoted of me, but no, >>>>>> you're not wrong. >>>>> >>>>> Just trying to provide historical context to support the strength of >>>>> the premise. >>>> >>>> The existence of the policy is not under question (anymore; I settled >>>> that with old email a while ago). The question is whether to change >>>> the policy. >>> >>> So I have gathered. 
I question whether the concerns that lead to that >>> decision in the first place are somehow less important now. >> >> And we're back to gut feeling territory again. > > That's unfair. I can't win based on gut, you know how skinny I am. Heh. Well-played. :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Mon Feb 8 22:35:10 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 20:35:10 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 8:27 PM, Darren Dale wrote: > On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern > wrote: > > On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: > >> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern > wrote: > >>> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: > >>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern > wrote: > >>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale > wrote: > >>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern > wrote: > >>>>>>> Here's the problem that I don't think many people appreciate: > logical > >>>>>>> arguments suck just as much as personal experience in answering > these > >>>>>>> questions. You can make perfectly structured arguments until you > are > >>>>>>> blue in the face, but without real data to premise them on, they > are > >>>>>>> no better than the gut feelings. 
They can often be significantly > worse > >>>>>>> if the strength of the logic gets confused with the strength of the > >>>>>>> premise. > >>>>>> > >>>>>> If I recall correctly, the convention of not breaking ABI > >>>>>> compatibility in minor releases was established in response to the > >>>>>> last ABI compatibility break. Am I wrong? > >>>>> > >>>>> I'm not sure how this relates to the material quoted of me, but no, > >>>>> you're not wrong. > >>>> > >>>> Just trying to provide historical context to support the strength of > >>>> the premise. > >>> > >>> The existence of the policy is not under question (anymore; I settled > >>> that with old email a while ago). The question is whether to change > >>> the policy. > >> > >> So I have gathered. I question whether the concerns that lead to that > >> decision in the first place are somehow less important now. > > > > And we're back to gut feeling territory again. > > That's unfair. I can't win based on gut, you know how skinny I am. > __ > We haven't reached the extreme of the two physicists at SLAC who stepped outside to settle a point with fisticuffs. But with any luck we will get there ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Mon Feb 8 22:40:12 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 22:40:12 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:35 PM, Charles R Harris wrote: > > > On Mon, Feb 8, 2010 at 8:27 PM, Darren Dale wrote: >> >> On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern >> wrote: >> > On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: >> >> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern >> >> wrote: >> >>> On Mon, Feb 8, 2010 at 20:50, Darren Dale wrote: >> >>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern >> >>>> wrote: >> >>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale >> >>>>> wrote: >> >>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern >> >>>>>> wrote: >> >>>>>>> Here's the problem that I don't think many people appreciate: >> >>>>>>> logical >> >>>>>>> arguments suck just as much as personal experience in answering >> >>>>>>> these >> >>>>>>> questions. You can make perfectly structured arguments until you >> >>>>>>> are >> >>>>>>> blue in the face, but without real data to premise them on, they >> >>>>>>> are >> >>>>>>> no better than the gut feelings. They can often be significantly >> >>>>>>> worse >> >>>>>>> if the strength of the logic gets confused with the strength of >> >>>>>>> the >> >>>>>>> premise. >> >>>>>> >> >>>>>> If I recall correctly, the convention of not breaking ABI >> >>>>>> compatibility in minor releases was established in response to the >> >>>>>> last ABI compatibility break. Am I wrong? >> >>>>> >> >>>>> I'm not sure how this relates to the material quoted of me, but no, >> >>>>> you're not wrong. >> >>>> >> >>>> Just trying to provide historical context to support the strength of >> >>>> the premise. 
>> >>> >> >>> The existence of the policy is not under question (anymore; I settled >> >>> that with old email a while ago). The question is whether to change >> >>> the policy. >> >> >> >> So I have gathered. I question whether the concerns that lead to that >> >> decision in the first place are somehow less important now. >> > >> > And we're back to gut feeling territory again. >> >> That's unfair. I can't win based on gut, you know how skinny I am. >> __ > > We haven't reached the extreme of the two physicists at SLAC who stepped > outside to settle a point with fisticuffs. But with any luck we will get > there ;) Really? That also happened here at CHESS a long time ago, only they didn't go outside to fight over who got to use the conference room. From josef.pktd at gmail.com Mon Feb 8 22:42:32 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 8 Feb 2010 22:42:32 -0500 Subject: [Numpy-discussion] np.expand_dims is addaxis Message-ID: <1cd32cbb1002081942x62f987c7s5bf5fdeb11d3dcd8@mail.gmail.com> np.expand_dims has a name that I never remember and it's difficult to search for in the help. usage: it adds an axis e.g. after a reduce operation Please ignore, this is a message for Mr. Google Josef From charlesr.harris at gmail.com Mon Feb 8 22:53:07 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 8 Feb 2010 20:53:07 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 8:40 PM, Darren Dale wrote: > On Mon, Feb 8, 2010 at 10:35 PM, Charles R Harris > wrote: > > > > > > On Mon, Feb 8, 2010 at 8:27 PM, Darren Dale wrote: > >> > >> On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern > >> wrote: > >> > On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: > >> >> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern > >> >> wrote: > >> >>> On Mon, Feb 8, 2010 at 20:50, Darren Dale > wrote: > >> >>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern > > >> >>>> wrote: > >> >>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale > >> >>>>> wrote: > >> >>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern < > robert.kern at gmail.com> > >> >>>>>> wrote: > >> >>>>>>> Here's the problem that I don't think many people appreciate: > >> >>>>>>> logical > >> >>>>>>> arguments suck just as much as personal experience in answering > >> >>>>>>> these > >> >>>>>>> questions. You can make perfectly structured arguments until you > >> >>>>>>> are > >> >>>>>>> blue in the face, but without real data to premise them on, they > >> >>>>>>> are > >> >>>>>>> no better than the gut feelings. They can often be significantly > >> >>>>>>> worse > >> >>>>>>> if the strength of the logic gets confused with the strength of > >> >>>>>>> the > >> >>>>>>> premise. > >> >>>>>> > >> >>>>>> If I recall correctly, the convention of not breaking ABI > >> >>>>>> compatibility in minor releases was established in response to > the > >> >>>>>> last ABI compatibility break. Am I wrong? > >> >>>>> > >> >>>>> I'm not sure how this relates to the material quoted of me, but > no, > >> >>>>> you're not wrong. 
> >> >>>> > >> >>>> Just trying to provide historical context to support the strength > of > >> >>>> the premise. > >> >>> > >> >>> The existence of the policy is not under question (anymore; I > settled > >> >>> that with old email a while ago). The question is whether to change > >> >>> the policy. > >> >> > >> >> So I have gathered. I question whether the concerns that lead to that > >> >> decision in the first place are somehow less important now. > >> > > >> > And we're back to gut feeling territory again. > >> > >> That's unfair. I can't win based on gut, you know how skinny I am. > >> __ > > > > We haven't reached the extreme of the two physicists at SLAC who stepped > > outside to settle a point with fisticuffs. But with any luck we will get > > there ;) > > Really? That also happened here at CHESS a long time ago, only they > didn't go outside to fight over who got to use the conference room. > ______ > Heh. I can't vouch for the story personally, I got it from a guy who was a grad student back in the day working on a detector at Fermilab along with a cast of hundreds. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Mon Feb 8 23:02:21 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 8 Feb 2010 23:02:21 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081652yc0104afv8d0f29066d601b80@mail.gmail.com> <3d375d731002081910g65e759ebh3fa83638e2dd55e3@mail.gmail.com> <3d375d731002081924j53922762gf1eee026114990e7@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 10:53 PM, Charles R Harris wrote: > > > On Mon, Feb 8, 2010 at 8:40 PM, Darren Dale wrote: >> >> On Mon, Feb 8, 2010 at 10:35 PM, Charles R Harris >> wrote: >> > >> > >> > On Mon, Feb 8, 2010 at 8:27 PM, Darren Dale wrote: >> >> >> >> On Mon, Feb 8, 2010 at 10:24 PM, Robert Kern >> >> wrote: >> >> > On Mon, Feb 8, 2010 at 21:23, Darren Dale wrote: >> >> >> On Mon, Feb 8, 2010 at 10:10 PM, Robert Kern >> >> >> wrote: >> >> >>> On Mon, Feb 8, 2010 at 20:50, Darren Dale >> >> >>> wrote: >> >> >>>> On Mon, Feb 8, 2010 at 7:52 PM, Robert Kern >> >> >>>> >> >> >>>> wrote: >> >> >>>>> On Mon, Feb 8, 2010 at 18:43, Darren Dale >> >> >>>>> wrote: >> >> >>>>>> On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern >> >> >>>>>> >> >> >>>>>> wrote: >> >> >>>>>>> Here's the problem that I don't think many people appreciate: >> >> >>>>>>> logical >> >> >>>>>>> arguments suck just as much as personal experience in answering >> >> >>>>>>> these >> >> >>>>>>> questions. You can make perfectly structured arguments until >> >> >>>>>>> you >> >> >>>>>>> are >> >> >>>>>>> blue in the face, but without real data to premise them on, >> >> >>>>>>> they >> >> >>>>>>> are >> >> >>>>>>> no better than the gut feelings. They can often be >> >> >>>>>>> significantly >> >> >>>>>>> worse >> >> >>>>>>> if the strength of the logic gets confused with the strength of >> >> >>>>>>> the >> >> >>>>>>> premise. >> >> >>>>>> >> >> >>>>>> If I recall correctly, the convention of not breaking ABI >> >> >>>>>> compatibility in minor releases was established in response to >> >> >>>>>> the >> >> >>>>>> last ABI compatibility break. Am I wrong? 
>> >> >>>>> >> >> >>>>> I'm not sure how this relates to the material quoted of me, but >> >> >>>>> no, >> >> >>>>> you're not wrong. >> >> >>>> >> >> >>>> Just trying to provide historical context to support the strength >> >> >>>> of >> >> >>>> the premise. >> >> >>> >> >> >>> The existence of the policy is not under question (anymore; I >> >> >>> settled >> >> >>> that with old email a while ago). The question is whether to change >> >> >>> the policy. >> >> >> >> >> >> So I have gathered. I question whether the concerns that lead to >> >> >> that >> >> >> decision in the first place are somehow less important now. >> >> > >> >> > And we're back to gut feeling territory again. >> >> >> >> That's unfair. I can't win based on gut, you know how skinny I am. >> >> __ >> > >> > We haven't reached the extreme of the two physicists at SLAC who stepped >> > outside to settle a point with fisticuffs. But with any luck we will get >> > there ;) >> >> Really? That also happened here at CHESS a long time ago, only they >> didn't go outside to fight over who got to use the conference room. >> ______ > > Heh. I can't vouch for the story personally, I got it from a guy who was a > grad student back in the day working on a detector at Fermilab along with a > cast of hundreds. Yeah, same here. Although, one of the combatants at CHESS, after he retired, beat an intruder into submission with a fireplace poker. That story made the local papers. Darren From matthew.brett at gmail.com Mon Feb 8 23:10:41 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 8 Feb 2010 20:10:41 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002081917o6c316d72rdb7a0d4937648788@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> <1e2af89e1002081905p6b1c3f78k653985c19a128668@mail.gmail.com> <3d375d731002081917o6c316d72rdb7a0d4937648788@mail.gmail.com> Message-ID: <1e2af89e1002082010q370726ao93ae41c3de74840b@mail.gmail.com> Hi, > Majorities don't make numpy development decisions normally. Never > have. Not of the mailing list membership nor of the steering > committee. Implementors do. When implementors disagree strongly and do > not reach a consensus, then we fall back to majorities. But as I said > before, majority voting requires conscientious control over the voting > membership or it isn't majority voting. The process that you > identified as being remarkably good at maintaining shared ownership > and responsibility isn't majority rule, but consensus among > implementors. We just don't have that right now, but we need to get > stuff done anyways. I think that's right, in general, but in this case, the primary disagreement was between David C+Chuck, and Travis, and there has been a large weight of the contributions to the list in favor of David's view. Now, you might say, I don't care about the weight of contributions because the people mailing don't implement, but that obviously has a social cost. 
All important arguments are resolved now, we've withdrawn the binary, agreed to a next ABI breaking release, and David's happy with 1.5 as a number, so I don't think we have to worry that discussion will delay getting stuff done at this point, See you, Matthew From ralf.gommers at googlemail.com Tue Feb 9 06:05:25 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 9 Feb 2010 19:05:25 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 9:54 AM, David Cournapeau wrote: > On Mon, Feb 8, 2010 at 9:14 PM, Ralf Gommers > wrote: > > Hi David and all, > > > > I have a few questions on setting up the build environment on OS X for > > Windows binaries. I have Wine installed with Python 2.5 and 2.6, MakeNsis > > and MinGW. The first question is what is meant in the Paver script by > "cpuid > > plugin". Wine seems to know what to do with a cpuid instruction, but I > can > > not find a plugin. Searching for "cpuid plugin" turns up nothing except > the > > NumPy pavement.py file. What is this? > > That's a small NSIS plugin to detect at install time the exact > capabilities of the CPU (SSE2, SSE3, etc...). The sources are found in > tools/win32build/cpucaps, and should be built with mingw (Visual > Studio is not supported, it uses gcc-specific inline assembly). You > then copy the dll into the plugin directory of nsis. > Yep got it. There's quite some stuff hidden in tools/ and vendor/ that I never noticed before. > > > > Final question is about Atlas and friends. Is 3.8.3 the best version to > > install? Does it compile out of the box under Wine? Is this page > > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with > regard > > to the Lapack/Atlas info and does it apply for Wine? 
> > Atlas 3.9.x should not be used, it is too unstable IMO (it is a dev > version after all, and windows receives little testing compared to > unix). I will put the Atlas binaries I am using somewhere > That would be *great*. Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ranavishal at gmail.com Tue Feb 9 10:42:29 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Tue, 9 Feb 2010 07:42:29 -0800 Subject: [Numpy-discussion] Utility function to find array items are in ascending order Message-ID: Hi, Is there any utility function to find if values in the array are in ascending or descending order. Example: arr = [1, 2, 4, 6] should return true arr2 = [1, 0, 2, -2] should return false Thanks Vishal -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Tue Feb 9 10:50:58 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 9 Feb 2010 07:50:58 -0800 Subject: [Numpy-discussion] Utility function to find array items are in ascending order In-Reply-To: References: Message-ID: On Tue, Feb 9, 2010 at 7:42 AM, Vishal Rana wrote: > Hi, > Is there any utility function to find if values in the array are in > ascending or descending order. > Example: > arr = [1, 2, 4, 6] should return true > arr2 = [1, 0, 2, -2] should return false > Thanks > Vishal I don't know if it is fast but np.diff should do the trick. You can check if all values are less than or equal to zero. Or if all are greater. From bpederse at gmail.com Tue Feb 9 10:51:57 2010 From: bpederse at gmail.com (Brent Pedersen) Date: Tue, 9 Feb 2010 07:51:57 -0800 Subject: [Numpy-discussion] Utility function to find array items are in ascending order In-Reply-To: References: Message-ID: On Tue, Feb 9, 2010 at 7:42 AM, Vishal Rana wrote: > Hi, > Is there any utility function to find if values in the array are in > ascending or descending order. 
> Example: > arr = [1, 2, 4, 6] should return true > arr2 = [1, 0, 2, -2] should return false > Thanks > Vishal > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > i dont know if there's a utility function, but i'd use: >>> np.all(a[1:] >= a[:-1]) From kwgoodman at gmail.com Tue Feb 9 10:53:10 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 9 Feb 2010 07:53:10 -0800 Subject: [Numpy-discussion] Utility function to find array items are in ascending order In-Reply-To: References: Message-ID: On Tue, Feb 9, 2010 at 7:51 AM, Brent Pedersen wrote: > On Tue, Feb 9, 2010 at 7:42 AM, Vishal Rana wrote: >> Hi, >> Is there any utility function to find if values in the array are in >> ascending or descending order. >> Example: >> arr = [1, 2, 4, 6] should return true >> arr2 = [1, 0, 2, -2] should return false >> Thanks >> Vishal >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > i dont know if there's a utility function, but i'd use: > > ?>>> np.all(a[1:] >= a[:-1]) Yes, that's much better than np.diff. From ranavishal at gmail.com Tue Feb 9 11:53:16 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Tue, 9 Feb 2010 08:53:16 -0800 Subject: [Numpy-discussion] Utility function to find array items are in ascending order In-Reply-To: References: Message-ID: Thanks On Tue, Feb 9, 2010 at 7:51 AM, Brent Pedersen wrote: > On Tue, Feb 9, 2010 at 7:42 AM, Vishal Rana wrote: > > Hi, > > Is there any utility function to find if values in the array are in > > ascending or descending order. 
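The two suggestions in this thread can be wrapped up in a small self-contained sketch; `is_ascending` and `is_descending` are made-up helper names for illustration, not existing NumPy functions:

```python
import numpy as np

def is_ascending(a):
    """True if the 1-D sequence is in non-decreasing order."""
    a = np.asarray(a)
    # Brent's slicing trick: each element at least as large as its predecessor.
    return bool(np.all(a[1:] >= a[:-1]))

def is_descending(a):
    """True if the 1-D sequence is in non-increasing order."""
    a = np.asarray(a)
    # Keith's np.diff variant: successive differences are all non-positive.
    return bool(np.all(np.diff(a) <= 0))

print(is_ascending([1, 2, 4, 6]))   # True
print(is_ascending([1, 0, 2, -2]))  # False
print(is_descending([6, 4, 2, 1]))  # True
```

Both variants treat repeated values as still "ascending" (non-decreasing); use strict comparisons (`>` or `np.diff(a) > 0`) if ties should fail the test.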
> > Example:
> > arr = [1, 2, 4, 6] should return true
> > arr2 = [1, 0, 2, -2] should return false
> > Thanks
> > Vishal
> >
>
> i dont know if there's a utility function, but i'd use:
>
>  >>> np.all(a[1:] >= a[:-1])
>

From dlc at halibut.com  Tue Feb 9 16:52:07 2010
From: dlc at halibut.com (David Carmean)
Date: Tue, 9 Feb 2010 13:52:07 -0800
Subject: [Numpy-discussion] Emulate left outer join?
Message-ID: <20100209135207.D12825@halibut.com>

Hi,

I've been working with numpy for less than a month, having learned about
it after finding matplotlib. My foundation in things like set theory is...
weak to nonexistent, so I need a little help mapping sql-like thoughts into
set-theory thinking :)

Some context to help me explain: I'm trying to store, chart, and analyze
unix system performance data (sar/sadf output). On a typical system I have
about 75 fields/variables, all floats, with identical timestamps... or so
we hope. What I want to do in order to save memory/disk space is to stack
the timeseries data all into three or four different arrays, and use a single
timestamp field for each set.

My problem is: I don't know that I can guarantee that the shape of all the
individual arrays will be identical along the time axis. I may receive
truncated textfiles to parse, or new variables may appear and disappear from
the set being reported/recorded.

If these were in flat files or database tables, I'd do a left outer join between
a master timestamp table and each individual variable's table. But...
I don't know the keywords to search for in the numpy docs/web chatter. A thread from
just about one year ago left the question hanging:

    http://article.gmane.org/gmane.comp.python.numeric.general/27942

Examples? Pointers? Shoves toward the correct sections of the docs?

Thanks.

From robert.kern at gmail.com  Tue Feb 9 17:02:48 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 9 Feb 2010 16:02:48 -0600
Subject: [Numpy-discussion] Emulate left outer join?
In-Reply-To: <20100209135207.D12825@halibut.com>
References: <20100209135207.D12825@halibut.com>
Message-ID: <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com>

On Tue, Feb 9, 2010 at 15:52, David Carmean wrote:
>
> Hi,
>
> I've been working with numpy for less than a month, having learned about
> it after finding matplotlib. My foundation in things like set theory is...
> weak to nonexistent, so I need a little help mapping sql-like thoughts into
> set-theory thinking :)
>
>
> Some context to help me explain: I'm trying to store, chart, and analyze
> unix system performance data (sar/sadf output). On a typical system I have
> about 75 fields/variables, all floats, with identical timestamps... or so
> we hope. What I want to do in order to save memory/disk space is to stack
> the timeseries data all into three or four different arrays, and use a single
> timestamp field for each set.
>
> My problem is: I don't know that I can guarantee that the shape of all the
> individual arrays will be identical along the time axis. I may receive
> truncated textfiles to parse, or new variables may appear and disappear from
> the set being reported/recorded.
>
> If these were in flat files or database tables, I'd do a left outer join between
> a master timestamp table and each individual variable's table. But... I don't
> know the keywords to search for in the numpy docs/web chatter. A thread from
> just about one year ago left the question hanging:
>
> http://article.gmane.org/gmane.comp.python.numeric.general/27942
>
> Examples? Pointers? Shoves toward the correct sections of the docs?

numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter')

In [23]: numpy.lib.recfunctions.join_by?
Type: function
Base Class:
Namespace: Interactive
File: /Users/rkern/svn/numpy/numpy/lib/recfunctions.py
Definition: numpy.lib.recfunctions.join_by(key, r1, r2, jointype='inner',
r1postfix='1', r2postfix='2', defaults=None, usemask=True, asrecarray=False)
Docstring:
    Join arrays `r1` and `r2` on key `key`.

    The key should be either a string or a sequence of strings corresponding
    to the fields used to join the array. An exception is raised if the
    `key` field cannot be found in the two input arrays. Neither `r1` nor
    `r2` should have any duplicates along `key`: the presence of duplicates
    will make the output quite unreliable. Note that duplicates are not
    looked for by the algorithm.

    Parameters
    ----------
    key : {string, sequence}
        A string or a sequence of strings corresponding to the fields used
        for comparison.
    r1, r2 : arrays
        Structured arrays.
    jointype : {'inner', 'outer', 'leftouter'}, optional
        If 'inner', returns the elements common to both r1 and r2.
        If 'outer', returns the common elements as well as the elements of
        r1 not in r2 and the elements of r2 not in r1.
        If 'leftouter', returns the common elements and the elements of r1
        not in r2.
    r1postfix : string, optional
        String appended to the names of the fields of r1 that are present
        in r2 but absent of the key.
    r2postfix : string, optional
        String appended to the names of the fields of r2 that are present
        in r1 but absent of the key.
    defaults : {dictionary}, optional
        Dictionary mapping field names to the corresponding default values.
    usemask : {True, False}, optional
        Whether to return a MaskedArray (or MaskedRecords if
        `asrecarray==True`) or a ndarray.
asrecarray : {False, True}, optional Whether to return a recarray (or MaskedRecords if `usemask==True`) or just a flexible-type ndarray. Notes ----- * The output is sorted along the key. * A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are some duplicates... For some reason, numpy.lib.recfunctions isn't in the documentation editor. I'm not sure why. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Tue Feb 9 17:43:16 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 9 Feb 2010 17:43:16 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern wrote: > > numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') > And if that isn't sufficient, John has in matplotlib.mlab a few other similar utilities that allow for more complex cases: In [2]: mlab.rec_ mlab.rec_append_fields mlab.rec_groupby mlab.rec_keep_fields mlab.rec_drop_fields mlab.rec_join mlab.rec_summarize Cheers, f From jdh2358 at gmail.com Tue Feb 9 17:49:30 2010 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 9 Feb 2010 16:49:30 -0600 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> Message-ID: <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> On Tue, Feb 9, 2010 at 4:43 PM, Fernando Perez wrote: > On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern wrote: >> >> numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') >> > > And if that isn't sufficient, John has in matplotlib.mlab a few other > similar utilities that allow for more complex cases: The numpy.lib.recfunctions were ported from matplotlib.mlab so most of the functionality is overlapping, but we have added some stuff since the port, eg matplotlib.mlab.recs_join for a multiway join, and some stuff was never ported (rec_summarize, rec_groupby) so it may be worth looking in mlab too. Some of the stuff for mpl is only in svn but most of it is released. Examples are at http://matplotlib.sourceforge.net/examples/misc/rec_join_demo.html http://matplotlib.sourceforge.net/examples/misc/rec_groupby_demo.html JDH From dwf at cs.toronto.edu Tue Feb 9 18:15:27 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 9 Feb 2010 18:15:27 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> Message-ID: <301AA95B-06BC-4C37-BE35-7477814FBB61@cs.toronto.edu> On 9-Feb-10, at 5:02 PM, Robert Kern wrote: >> Examples? Pointers? Shoves toward the correct sections of the docs? > > numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') Huh. All these years, how have I missed this? Yet another demonstration of why my "never skip over a Kern posting" policy exists. David From ralf.gommers at googlemail.com Tue Feb 9 18:47:43 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 10 Feb 2010 07:47:43 +0800 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> Message-ID: On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: > > > For some reason, numpy.lib.recfunctions isn't in the documentation > editor. I'm not sure why. > > Because it's not in np.lib.__all__ . Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 9 18:52:43 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 9 Feb 2010 17:52:43 -0600 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> Message-ID: <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: > > > On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >> >> >> For some reason, numpy.lib.recfunctions isn't in the documentation >> editor. I'm not sure why. >> > Because it's not in np.lib.__all__ . Then there needs to be a secondary way to add such modules. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Tue Feb 9 19:02:17 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Feb 2010 19:02:17 -0500 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> Message-ID: <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> On Tue, Feb 9, 2010 at 6:52 PM, Robert Kern wrote: > On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: >> >> >> On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >>> >>> >>> For some reason, numpy.lib.recfunctions isn't in the documentation >>> editor. I'm not sure why. >>> >> Because it's not in np.lib.__all__ . > > Then there needs to be a secondary way to add such modules. Under which namespace should the recfunctions be accessed. I think, it's possible to directly import/reference them in the docs without adding them to lib.__all__ Josef > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Tue Feb 9 19:04:43 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 9 Feb 2010 18:04:43 -0600 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> Message-ID: <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> On Tue, Feb 9, 2010 at 18:02, wrote: > On Tue, Feb 9, 2010 at 6:52 PM, Robert Kern wrote: >> On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: >>> >>> >>> On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >>>> >>>> >>>> For some reason, numpy.lib.recfunctions isn't in the documentation >>>> editor. I'm not sure why. >>>> >>> Because it's not in np.lib.__all__ . >> >> Then there needs to be a secondary way to add such modules. > > Under which namespace should the recfunctions be accessed. numpy.lib.recfunctions > I think, it's possible to directly import/reference them in the docs > without adding them to lib.__all__ Okay. What is that way? What do we need to do to make that happen? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Tue Feb 9 19:06:38 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 9 Feb 2010 19:06:38 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> Message-ID: On Feb 9, 2010, at 6:52 PM, Robert Kern wrote: > On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: >> >> >> On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >>> >>> >>> For some reason, numpy.lib.recfunctions isn't in the documentation >>> editor. I'm not sure why. 
>>> >> Because it's not in np.lib.__all__ . > > Then there needs to be a secondary way to add such modules. All, I started porting JDH's functions from mlab to numpy.lib because I thought it'd be nice to have them directly in the core of numpy, instead of spread out in another package. However, I wanted to get a lot of feedback before advertising them: * Should we put matplotlib.mlab functions directly into numpy ? I do think so, even if I think we should make them a tad more generic and not tie them to recarrays (you can do the same thing with structured arrays without the overhead, albeit without the convenience of access-as-attributes). * If yes to the question above, how should we proceed ? John, you mind committing these functions to numpy.lib.rec_functions yourself ? If you can't, any volunteer (I can do it but it would fall low on my priority list). Once this is settle, then we could think about a way to present them in the reference and/or user manual (like I did for genfromtxt). Let me know what y'all think. P. From josef.pktd at gmail.com Tue Feb 9 19:22:59 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Feb 2010 19:22:59 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> Message-ID: <1cd32cbb1002091622y62172d52m878831e58f7e16bc@mail.gmail.com> On Tue, Feb 9, 2010 at 7:04 PM, Robert Kern wrote: > On Tue, Feb 9, 2010 at 18:02, ? 
wrote: >> On Tue, Feb 9, 2010 at 6:52 PM, Robert Kern wrote: >>> On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: >>>> >>>> >>>> On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >>>>> >>>>> >>>>> For some reason, numpy.lib.recfunctions isn't in the documentation >>>>> editor. I'm not sure why. >>>>> >>>> Because it's not in np.lib.__all__ . >>> >>> Then there needs to be a secondary way to add such modules. >> >> Under which namespace should the recfunctions be accessed. > > numpy.lib.recfunctions > >> I think, it's possible to directly import/reference them in the docs >> without adding them to lib.__all__ > > Okay. What is that way? What do we need to do to make that happen? add a new rst file, as for example http://docs.scipy.org/numpy/source/numpy/doc/source/reference/routines.linalg.rst#1 or any of the other modules that don't reside in the numpy.* namespace, linalg, random, fft, matlib, .... modules in brackets in http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.rst/ It will show up as a section in routines. Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Tue Feb 9 19:40:32 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Feb 2010 19:40:32 -0500 Subject: [Numpy-discussion] numpy.polynomial.chebyshev (not) in the docs Message-ID: <1cd32cbb1002091640qe8add56i10ffaf4110b9e37d@mail.gmail.com> Similar to the recfunctions, I also don't find the new chebychev polynomials in the docs. Are they linked from any rst file? 
A search in the online sphinx html docs comes up empty, and http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.poly.rst/#routines-poly doesn't link to the new functions. The docstrings look nice but maybe nobody sees them. Josef From pav at iki.fi Tue Feb 9 19:54:35 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 10 Feb 2010 02:54:35 +0200 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> Message-ID: <1265763275.7966.5.camel@idol> ti, 2010-02-09 kello 18:04 -0600, Robert Kern kirjoitti: > On Tue, Feb 9, 2010 at 18:02, wrote: [clip] > numpy.lib.recfunctions > > > I think, it's possible to directly import/reference them in the docs > > without adding them to lib.__all__ > > Okay. What is that way? What do we need to do to make that happen? To get them in the web app, I need to adjust the web app configuration on new.scipy.org. I didn't know about that those functions, so I missed them earlier. Getting them to the docs goes as Josef explained, just add a rst file and refer to it in the others. *** But, should we make these functions available under some less internal-ish namespace? There's numpy.rec at the least -- it could be made a real module to pull in things from core and lib. -- Pauli Virtanen From pgmdevlist at gmail.com Tue Feb 9 20:02:46 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 9 Feb 2010 20:02:46 -0500 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <1265763275.7966.5.camel@idol> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> Message-ID: <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> On Feb 9, 2010, at 7:54 PM, Pauli Virtanen wrote: > > But, should we make these functions available under some less > internal-ish namespace? There's numpy.rec at the least -- it could be > made a real module to pull in things from core and lib. I still think these functions are more generic than the rec_ prefix let think, and I'd still prefer a decision being made about what should go in the module before thinking too hard about how to advertise it. From josef.pktd at gmail.com Tue Feb 9 20:14:23 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Feb 2010 20:14:23 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> Message-ID: <1cd32cbb1002091714s4505ae10uedd83f3eb0b2f582@mail.gmail.com> On Tue, Feb 9, 2010 at 7:06 PM, Pierre GM wrote: > On Feb 9, 2010, at 6:52 PM, Robert Kern wrote: >> On Tue, Feb 9, 2010 at 17:47, Ralf Gommers wrote: >>> >>> >>> On Wed, Feb 10, 2010 at 6:02 AM, Robert Kern wrote: >>>> >>>> >>>> For some reason, numpy.lib.recfunctions isn't in the documentation >>>> editor. I'm not sure why. >>>> >>> Because it's not in np.lib.__all__ . >> >> Then there needs to be a secondary way to add such modules. > > All, > I started porting JDH's functions from mlab to numpy.lib because I thought it'd be nice to have them directly in the core of numpy, instead of spread out in another package. 
However, I wanted to get a lot of feedback before advertising them: chicken and egg problem, without advertising very few users know they exist > * Should we put matplotlib.mlab functions directly into numpy ? I do think so, even if I think we should make them a tad more generic and not tie them to recarrays (you can do the same thing with structured arrays without the overhead, albeit without the convenience of access-as-attributes). > * If yes to the question above, how should we proceed ? John, you mind committing these functions to numpy.lib.rec_functions yourself ? If you can't, any volunteer (I can do it but it would fall low on my priority list). > Once this is settle, then we could think about a way to present them in the reference and/or user manual (like I did for genfromtxt). > Let me know what y'all think. > P. I think it's very helpful to have more helper functions and documentation to work with structured arrays. I also think that for newcomers the distinction in the documentation between recarrays and arrays with structured dtypes is not very clear, and how to work with structured arrays is not sufficiently documented. Essentially I only learned about them because of an answer Pierre gave once to me on the mailing list and I started to read the matplotlib and numpy source to see how to work with them. It also seems that structured arrays become the more recommended approach than recarrays (e.g. discussion by tabular developers on the mailing list and their switch to structured arrays). So, I'm in favor of advertising them, and advertising them for structured arrays and only secondary for recarrays. I have no idea about a good name that would suggest structured instead of rec. 
Josef > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From jdh2358 at gmail.com Tue Feb 9 20:16:06 2010 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 9 Feb 2010 19:16:06 -0600 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> Message-ID: <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> On Tue, Feb 9, 2010 at 7:02 PM, Pierre GM wrote: > On Feb 9, 2010, at 7:54 PM, Pauli Virtanen wrote: >> >> But, should we make these functions available under some less >> internal-ish namespace? There's numpy.rec at the least -- it could be >> made a real module to pull in things from core and lib. > > I still think these functions are more generic than the rec_ prefix let think, and I'd still prefer a decision being made about what should go in the module before thinking too hard about how to advertise it. I would love to see many of these as methods of record/structured arrays, so we could say r = r1.join('date', r2) or rs = r.groupby( ('year', 'month'), stats) and have "totxt", "tocsv". etc... from rec2txt, rec2csv, etc... I think the functionality of mlab.rec_summarize and rec_groupby is very useful, but the interface is a bit clunky and could be made easier for the common use cases. These methods could call the proper functions from np.lib.recfunctions or wherever, and they would get a lot more visibility to people using introspection. 
JDH From pgmdevlist at gmail.com Tue Feb 9 20:53:37 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 9 Feb 2010 20:53:37 -0500 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> Message-ID: On Feb 9, 2010, at 8:16 PM, John Hunter wrote: >> I still think these functions are more generic than the rec_ prefix let think, and I'd still prefer a decision being made about what should go in the module before thinking too hard about how to advertise it. > > I would love to see many of these as methods of record/structured > arrays, so we could say Won't work w/ structured arrays, but completely doable for recarrays. Let's define the functions so that they take a structured array as first argument when possible, and add the functions as a methods to np.recarray. That should be fairly transparent, provided we stick to access-as-key instead of access-as-attribute > and have "totxt", "tocsv". etc... from rec2txt, rec2csv, etc... I > think the functionality of mlab.rec_summarize and rec_groupby is very > useful, but the interface is a bit clunky and could be made easier for > the common use cases. Are you going to work on it or should I step in (in a few weeks...). 
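[Archive note: the left-outer-join functionality discussed in this thread can be exercised directly via numpy.lib.recfunctions.join_by, mentioned by Robert above. A minimal sketch with made-up records; the field names and values are illustrative only:]

```python
import numpy as np
from numpy.lib.recfunctions import join_by

# Two structured arrays sharing a 'key' field; r2 has no row for key=2.
r1 = np.array([(1, 10.0), (2, 20.0), (3, 30.0)],
              dtype=[('key', int), ('a', float)])
r2 = np.array([(1, 100.0), (3, 300.0)],
              dtype=[('key', int), ('b', float)])

# Left outer join on 'key': rows of r1 without a match in r2 get a
# masked value in the 'b' column (usemask=True is the default).
joined = join_by('key', r1, r2, jointype='leftouter')
print(joined)
```

By default the result is a masked array; pass asrecarray=True to get a recarray instead, at the cost of losing the mask.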
From d.l.goldsmith at gmail.com Tue Feb 9 21:30:12 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 9 Feb 2010 18:30:12 -0800 Subject: [Numpy-discussion] numpy.polynomial.chebyshev (not) in the docs In-Reply-To: <1cd32cbb1002091640qe8add56i10ffaf4110b9e37d@mail.gmail.com> References: <1cd32cbb1002091640qe8add56i10ffaf4110b9e37d@mail.gmail.com> Message-ID: <45d1ab481002091830n43cc66afjc20ae0b7b0c05a57@mail.gmail.com> Are you talking about absence in the Wiki or absence in a NumPy executable. They're in the former (I've been editing them), and they're in 1.4.0 of the latter: >>> import numpy as N >>> N.version.version '1.4.0' >>> from numpy.polynomial import chebyshev as C >>> help(C.chebfit) Help on function chebfit in module numpy.polynomial.chebyshev: chebfit(x, y, deg, rcond=None, full=False) Least squares fit of Chebyshev series to data. Fit a Chebyshev series ``p(x) = p[0] * T_{deq}(x) + ... + p[deg] * T_{0}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error. Parameters ---------- x : array_like, shape (M,) x-coordinates of the M sample points ``(x[i], y[i])``. y : array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. Etc. What version of NumPy are you running? DG On Tue, Feb 9, 2010 at 4:40 PM, wrote: > Similar to the recfunctions, I also don't find the new chebychev > polynomials in the docs. > > Are they linked from any rst file? > > A search in the online sphinx html docs comes up empty, and > > http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.poly.rst/#routines-poly > doesn't link to the new functions. > > The docstrings look nice but maybe nobody sees them. 
> > Josef > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Feb 9 21:52:35 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 9 Feb 2010 21:52:35 -0500 Subject: [Numpy-discussion] numpy.polynomial.chebyshev (not) in the docs In-Reply-To: <45d1ab481002091830n43cc66afjc20ae0b7b0c05a57@mail.gmail.com> References: <1cd32cbb1002091640qe8add56i10ffaf4110b9e37d@mail.gmail.com> <45d1ab481002091830n43cc66afjc20ae0b7b0c05a57@mail.gmail.com> Message-ID: <1cd32cbb1002091852o75057b8t50b05428de13c6db@mail.gmail.com> On Tue, Feb 9, 2010 at 9:30 PM, David Goldsmith wrote: > Are you talking about absence in the Wiki or absence in a NumPy executable. > They're in the former (I've been editing them), and they're in 1.4.0 of the > latter: I have them in numpy 1.4, I see them in the doceditor, but not in http://docs.scipy.org/doc/numpy/search.html?q=chebychev&check_keywords=yes&area=default or search for chebfit I think they are not added to the html docs because they are not referenced in any rst file. That's a different issue from having them in the source and the doceditor application. Josef > >>>> import numpy as N >>>> N.version.version > '1.4.0' >>>> from numpy.polynomial import chebyshev as C >>>> help(C.chebfit) > Help on function chebfit in module numpy.polynomial.chebyshev: > > chebfit(x, y, deg, rcond=None, full=False) > ??? Least squares fit of Chebyshev series to data. > > ??? Fit a Chebyshev series ``p(x) = p[0] * T_{deq}(x) + ... + p[deg] * > ??? T_{0}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of > ??? coefficients `p` that minimises the squared error. > > ??? Parameters > ??? ---------- > ??? x : array_like, shape (M,) > ??????? x-coordinates of the M sample points ``(x[i], y[i])``. > ??? 
y : array_like, shape (M,) or (M, K) > ??????? y-coordinates of the sample points. Several data sets of sample > ??????? points sharing the same x-coordinates can be fitted at once by > ??????? passing in a 2D-array that contains one dataset per column. > Etc. > > ?What version of NumPy are you running? > > DG > > On Tue, Feb 9, 2010 at 4:40 PM, wrote: >> >> Similar to the recfunctions, I also don't find the new chebychev >> polynomials in the docs. >> >> Are they linked from any rst file? >> >> A search in the online sphinx html docs comes up empty, and >> >> http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.poly.rst/#routines-poly >> doesn't link to the new functions. >> >> The docstrings look nice but maybe nobody sees them. >> >> Josef >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From charlesr.harris at gmail.com Tue Feb 9 22:23:11 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 9 Feb 2010 20:23:11 -0700 Subject: [Numpy-discussion] numpy.polynomial.chebyshev (not) in the docs In-Reply-To: <1cd32cbb1002091852o75057b8t50b05428de13c6db@mail.gmail.com> References: <1cd32cbb1002091640qe8add56i10ffaf4110b9e37d@mail.gmail.com> <45d1ab481002091830n43cc66afjc20ae0b7b0c05a57@mail.gmail.com> <1cd32cbb1002091852o75057b8t50b05428de13c6db@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 7:52 PM, wrote: > On Tue, Feb 9, 2010 at 9:30 PM, David Goldsmith > wrote: > > Are you talking about absence in the Wiki or absence in a NumPy > executable. 
> > They're in the former (I've been editing them), and they're in 1.4.0 of > the > > latter: > > I have them in numpy 1.4, I see them in the doceditor, but not in > > http://docs.scipy.org/doc/numpy/search.html?q=chebychev&check_keywords=yes&area=default > > or search for chebfit > > I think they are not added to the html docs because they are not > referenced in any rst file. > That's a different issue from having them in the source and the > doceditor application. > > > Josef > > > > >>>> import numpy as N > >>>> N.version.version > > '1.4.0' > >>>> from numpy.polynomial import chebyshev as C > >>>> help(C.chebfit) > > Help on function chebfit in module numpy.polynomial.chebyshev: > > > > chebfit(x, y, deg, rcond=None, full=False) > > Least squares fit of Chebyshev series to data. > > > > Fit a Chebyshev series ``p(x) = p[0] * T_{deq}(x) + ... + p[deg] * > > T_{0}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of > > coefficients `p` that minimises the squared error. > > > > Parameters > > ---------- > > x : array_like, shape (M,) > > x-coordinates of the M sample points ``(x[i], y[i])``. > > y : array_like, shape (M,) or (M, K) > > y-coordinates of the sample points. Several data sets of sample > > points sharing the same x-coordinates can be fitted at once by > > passing in a 2D-array that contains one dataset per column. > > Etc. > > > > What version of NumPy are you running? > > > Hey, the error in the docstring prompted me to make another attempt to guess my editing password. Success! Thanks. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Feb 10 01:12:47 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 10 Feb 2010 15:12:47 +0900 Subject: [Numpy-discussion] long(a) vs a.__long__() for scalar arrays Message-ID: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> Hi, I am a bit puzzled by the protocol for long(a) where a is a scalar array. 
For example, for a = np.float128(1), I was expecting long(a) to call a.__long__, but it does not look like it is the case. int(a) does not call a.__int__ either. Where does the long conversion happen in numpy for scalar arrays ? cheers, David From charlesr.harris at gmail.com Wed Feb 10 01:28:39 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 9 Feb 2010 23:28:39 -0700 Subject: [Numpy-discussion] long(a) vs a.__long__() for scalar arrays In-Reply-To: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> References: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 11:12 PM, David Cournapeau wrote: > Hi, > > I am a bit puzzled by the protocol for long(a) where a is a scalar > array. For example, for a = np.float128(1), I was expecting long(a) to > call a.__long__, but it does not look like it is the case. int(a) does > not call a.__int__ either. Where does the long conversion happen in > numpy for scalar arrays ? > > How did you tell, did you have print statements in the call? I'm curious if np.long the same as long? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Wed Feb 10 01:30:52 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 10 Feb 2010 15:30:52 +0900 Subject: [Numpy-discussion] long(a) vs a.__long__() for scalar arrays In-Reply-To: References: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> Message-ID: <4B72529C.40802@silveregg.co.jp> Charles R Harris wrote: > > > On Tue, Feb 9, 2010 at 11:12 PM, David Cournapeau > wrote: > > Hi, > > I am a bit puzzled by the protocol for long(a) where a is a scalar > array. For example, for a = np.float128(1), I was expecting long(a) to > call a.__long__, but it does not look like it is the case. int(a) does > not call a.__int__ either. Where does the long conversion happen in > numpy for scalar arrays ? 
> > > How did you tell, did you have print statements in the call? Indirectly, yes (I was looking into #1395). > I'm curious > if np.long the same as long? At least, np.long(a) does not call a.__long__ either. I am on a 64 bits machine, BTW, but I don't think it matters since I have the same problem with int. cheers, David From pav at iki.fi Wed Feb 10 04:46:55 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 10 Feb 2010 11:46:55 +0200 Subject: [Numpy-discussion] long(a) vs a.__long__() for scalar arrays In-Reply-To: References: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> Message-ID: <1265795215.2662.3.camel@talisman> ti, 2010-02-09 kello 23:28 -0700, Charles R Harris kirjoitti: [clip] > I'm curious if np.long the same as long? np.long is long I'm not sure if this was always so, since ticket #99's test cases try to check that np.long works properly. Pauli From dlc at halibut.com Wed Feb 10 05:08:54 2010 From: dlc at halibut.com (David Carmean) Date: Wed, 10 Feb 2010 02:08:54 -0800 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com>; from jdh2358@gmail.com on Tue, Feb 09, 2010 at 04:49:30PM -0600 References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> Message-ID: <20100210020854.A17330@halibut.com> On Tue, Feb 09, 2010 at 04:49:30PM -0600, John Hunter wrote: > On Tue, Feb 9, 2010 at 4:43 PM, Fernando Perez wrote: > > On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern wrote: > >> > >> numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') > >> > > > > And if that isn't sufficient, John has in matplotlib.mlab a few other > > similar utilities that allow for more complex cases: > > The numpy.lib.recfunctions were ported from matplotlib.mlab so most of > the functionality is overlapping, but we have added some stuff since > the port, eg matplotlib.mlab.recs_join for a multiway join, and some > stuff was never ported (rec_summarize, rec_groupby) so it may be worth > looking in mlab too. Some of the stuff for mpl is only in svn but > most of it is released. > > Examples are at > > http://matplotlib.sourceforge.net/examples/misc/rec_join_demo.html > http://matplotlib.sourceforge.net/examples/misc/rec_groupby_demo.html Thank you; this appears to be one of those packages where a good bit of the documentation is in the mailing list :) Or waiting in the typing fingers of the experts for somebody like me to ask at the right time :) Someday perhaps I'll find the time to go through the code and learn what else is there that's not in the pdf book. 
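[Archive note: rec_groupby itself lives in matplotlib.mlab at the time of this thread. The core idea being discussed, grouping rows of a structured array by a key and aggregating another field, can be emulated with plain numpy. A minimal sketch over made-up data, not mlab's actual implementation:]

```python
import numpy as np

# Hypothetical records: a 'year' key and a 'val' payload.
r = np.array([(2009, 1.0), (2009, 3.0), (2010, 5.0)],
             dtype=[('year', int), ('val', float)])

# Group rows by 'year' and average 'val' within each group.
means = [(int(k), float(r['val'][r['year'] == k].mean()))
         for k in np.unique(r['year'])]
print(means)  # [(2009, 2.0), (2010, 5.0)]
```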
From cournape at gmail.com Wed Feb 10 05:46:37 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 10 Feb 2010 19:46:37 +0900 Subject: [Numpy-discussion] wired error message in scipy.sparse.eigen function: Segmentation fault In-Reply-To: References: <4B60EC22.5070001@gmail.com> <4B60FF24.7040504@silveregg.co.jp> <4B6102B6.400@gmail.com> <4B610625.4060303@silveregg.co.jp> <4B611EEE.8040500@gmail.com> <4B612A8D.3060401@silveregg.co.jp> <5b8d13221002042322s53da661bl8df8096c4a656d30@mail.gmail.com> Message-ID: <5b8d13221002100246h14ae3b05o4511879ed25d19e9@mail.gmail.com> On Sat, Feb 6, 2010 at 9:25 AM, Jankins wrote: > This problem keeps bothering me for days. > If you need more sample to test it, I got one more. I tested it this > morning. And the "segmentation ?fault" happened at a specific place. > I guess, finally, I have to refer to the original eigenvalue algorithm or > Matlab. Hm, I found something which makes valgrind happy, but I am not sure whether the fix is right. There is definitely something wrong in vector sizes (the crash happens within dnaitr, and some arrays are accessed outside bounds), but the input arguments constraints are all valid if I read the ARPACK sources correctly. It just seems that your data uncover a corner case not well handled by ARPACK. Making the buffers big enough seems to cause the algorithm not to converge for your data, though (to see by yourself, try making ncv argument to eigen biffer than its default 2k+1 value). Since Matlab also uses ARPACK, I would be interested in knowing how matlab behaves for your data, cheers, David From jdh2358 at gmail.com Wed Feb 10 09:54:49 2010 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 10 Feb 2010 08:54:49 -0600 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> Message-ID: <88e473831002100654qe53230bn81805f656390a9a7@mail.gmail.com> On Tue, Feb 9, 2010 at 7:53 PM, Pierre GM wrote: > On Feb 9, 2010, at 8:16 PM, John Hunter wrote: >> and have "totxt", "tocsv". etc... from rec2txt, rec2csv, etc... ? I >> think the functionality of mlab.rec_summarize and rec_groupby is very >> useful, but the interface is a bit clunky and could be made easier for >> the common use cases. > > Are you going to work on it or should I step in (in a few weeks...). I don't think I'll have time to do it -- I'm already behind on an mpl release -- but I'll propose it to Sameer who has done a lot of work on the matplotlib.mlab.rec_* methods and see if he has some time for it. Thanks, JDH From gokhansever at gmail.com Wed Feb 10 11:02:22 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 10 Feb 2010 10:02:22 -0600 Subject: [Numpy-discussion] Syntax equivalent for np.array() Message-ID: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> Hi, Simple question: I[4]: a = np.arange(10) I[5]: b = np.array(5) I[8]: a*b.cumsum() O[8]: array([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45]) I[9]: np.array(a*b).cumsum() O[9]: array([ 0, 5, 15, 30, 50, 75, 105, 140, 180, 225]) Is there a syntactic equivalent for the I[9] --for instance instead of using "list" keyword I use [ ] while creating a list. Is there a shortcut for np.array instead of writing np.array(a*b) explicitly? Thanks. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amcmorl at gmail.com Wed Feb 10 11:06:24 2010 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 10 Feb 2010 11:06:24 -0500 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> Message-ID: On 10 February 2010 11:02, G?khan Sever wrote: > Hi, > > Simple question: > > I[4]: a = np.arange(10) > > I[5]: b = np.array(5) > > I[8]: a*b.cumsum() > O[8]: array([ 0,? 5, 10, 15, 20, 25, 30, 35, 40, 45]) > > I[9]: np.array(a*b).cumsum() > O[9]: array([? 0,?? 5,? 15,? 30,? 50,? 75, 105, 140, 180, 225]) > > Is there a syntactic equivalent for the I[9] --for instance instead of using > "list" keyword I use [ ] while creating a list. Is there a shortcut for > np.array instead of writing np.array(a*b) explicitly? How about just (a*b).cumsum() ? Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From jdh2358 at gmail.com Wed Feb 10 11:10:14 2010 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 10 Feb 2010 10:10:14 -0600 Subject: [Numpy-discussion] Emulate left outer join? In-Reply-To: <88e473831002100654qe53230bn81805f656390a9a7@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> <88e473831002100654qe53230bn81805f656390a9a7@mail.gmail.com> Message-ID: <88e473831002100810p575cc71dte5add43ebacbf1f@mail.gmail.com> On Wed, Feb 10, 2010 at 8:54 AM, John Hunter wrote: > On Tue, Feb 9, 2010 at 7:53 PM, Pierre GM wrote: >> On Feb 9, 2010, at 8:16 PM, John Hunter wrote: > >>> and have "totxt", "tocsv". etc... from rec2txt, rec2csv, etc... ? 
I >>> think the functionality of mlab.rec_summarize and rec_groupby is very >>> useful, but the interface is a bit clunky and could be made easier for >>> the common use cases. >> >> Are you going to work on it or should I step in (in a few weeks...). > > I don't think I'll have time to do it -- I'm already behind on an mpl > release -- ?but I'll propose it to Sameer who has done a lot of work > on the ?matplotlib.mlab.rec_* methods and see if he has some time for > it. Sameer is interested in helping with this, but will also not be able to get to it for a couple of weeks. JDH From gokhansever at gmail.com Wed Feb 10 11:12:49 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 10 Feb 2010 10:12:49 -0600 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> Message-ID: <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> On Wed, Feb 10, 2010 at 10:06 AM, Angus McMorland wrote: > On 10 February 2010 11:02, G?khan Sever wrote: > > Hi, > > > > Simple question: > > > > I[4]: a = np.arange(10) > > > > I[5]: b = np.array(5) > > > > I[8]: a*b.cumsum() > > O[8]: array([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45]) > > > > I[9]: np.array(a*b).cumsum() > > O[9]: array([ 0, 5, 15, 30, 50, 75, 105, 140, 180, 225]) > > > > Is there a syntactic equivalent for the I[9] --for instance instead of > using > > "list" keyword I use [ ] while creating a list. Is there a shortcut for > > np.array instead of writing np.array(a*b) explicitly? > > How about just (a*b).cumsum() ? > > Angus. > -- > AJC McMorland > Post-doctoral research fellow > Neurobiology, University of Pittsburgh > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Yep that's it :) I knew that it was a very simple question. 
What confused me is I remember somewhere not sure maybe in IPython dev I have gotten when I do: (a*b).cumsum() AttributeError: 'tuple' object has no attribute 'cumsum' error. So I was thinking ( ) is a ssugar for tuple and np.array might have something special than these. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Wed Feb 10 11:24:34 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 10 Feb 2010 10:24:34 -0600 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> Message-ID: <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> On Wed, Feb 10, 2010 at 10:12 AM, G?khan Sever wrote: > > > On Wed, Feb 10, 2010 at 10:06 AM, Angus McMorland wrote: > >> On 10 February 2010 11:02, G?khan Sever wrote: >> > Hi, >> > >> > Simple question: >> > >> > I[4]: a = np.arange(10) >> > >> > I[5]: b = np.array(5) >> > >> > I[8]: a*b.cumsum() >> > O[8]: array([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45]) >> > >> > I[9]: np.array(a*b).cumsum() >> > O[9]: array([ 0, 5, 15, 30, 50, 75, 105, 140, 180, 225]) >> > >> > Is there a syntactic equivalent for the I[9] --for instance instead of >> using >> > "list" keyword I use [ ] while creating a list. Is there a shortcut for >> > np.array instead of writing np.array(a*b) explicitly? >> >> How about just (a*b).cumsum() ? >> >> Angus. >> -- >> AJC McMorland >> Post-doctoral research fellow >> Neurobiology, University of Pittsburgh >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > Yep that's it :) I knew that it was a very simple question. 
> > What confused me is I remember somewhere not sure maybe in IPython dev I > have gotten when I do: > > (a*b).cumsum() > > AttributeError: 'tuple' object has no attribute 'cumsum' error. > > So I was thinking ( ) is a sugar for tuple and np.array might have > something more special than these. > > -- > Gökhan > Self-correction: It works correctly in IPython-dev as well. And further in Python 2.6.2: >>> p = () >>> p () >>> type(p) <type 'tuple'> >>> type((a*b)) <type 'numpy.ndarray'> ( ) doesn't only work as a tuple operator. It also has its original parenthesis functionality :) -- Gökhan From kwgoodman at gmail.com Wed Feb 10 11:33:11 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 10 Feb 2010 08:33:11 -0800 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> Message-ID: On Wed, Feb 10, 2010 at 8:24 AM, Gökhan Sever wrote: > Self-correction: > > It works correctly in IPython-dev as well. > > And further in Python 2.6.2: > >>>> p = () > >>>> p > () > >>>> type(p) > <type 'tuple'> > >>>> type((a*b)) > <type 'numpy.ndarray'> > > ( ) doesn't only work as a tuple operator. It also has its original > parenthesis functionality :) I think this is the rule: When empty it is a tuple; when containing one item it is parentheses unless there is a comma.
>> p = (9) >> type(p) <type 'int'> >> p = (9,) >> type(p) <type 'tuple'> From gael.varoquaux at normalesup.org Wed Feb 10 11:35:32 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 10 Feb 2010 17:35:32 +0100 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> Message-ID: <20100210163532.GC495@phare.normalesup.org> On Wed, Feb 10, 2010 at 08:33:11AM -0800, Keith Goodman wrote: > I think this is the rule: When empty it is a tuple; when containing > one item it is parentheses unless there is a comma. > >> p = (9) > >> type(p) > <type 'int'> > > >> p = (9,) > >> type(p) > <type 'tuple'> > The comma is the tuple operator. For instance p = 9, is a tuple. Gaël From josef.pktd at gmail.com Wed Feb 10 11:40:10 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 10 Feb 2010 11:40:10 -0500 Subject: [Numpy-discussion] Syntax equivalent for np.array() In-Reply-To: <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> References: <49d6b3501002100802o244a544q804bdaee26d01c4@mail.gmail.com> <49d6b3501002100812q452f3866r8a7ac03103bdd433@mail.gmail.com> <49d6b3501002100824t684ba34bm946cf23a5d931a36@mail.gmail.com> Message-ID: <1cd32cbb1002100840i3c8dfa7bod553d8d1366edfa4@mail.gmail.com> On Wed, Feb 10, 2010 at 11:24 AM, Gökhan Sever wrote: > > > On Wed, Feb 10, 2010 at 10:12 AM, Gökhan Sever > wrote: >> >> >> On Wed, Feb 10, 2010 at 10:06 AM, Angus McMorland >> wrote: >>> >>> On 10 February 2010 11:02, Gökhan Sever wrote: >>> > Hi, >>> > >>> > Simple question: >>> > >>> > I[4]: a = np.arange(10) >>> > >>> > I[5]: b = np.array(5) >>> > >>> > I[8]: a*b.cumsum() >>> > O[8]: array([ 0,  5, 10, 15, 20, 25, 30, 35, 40, 45]) >>> > >>> > I[9]: np.array(a*b).cumsum() >>> > O[9]: array([  0,   5,  15,  30,  50,
75, 105, 140, 180, 225]) >>> > >>> > Is there a syntactic equivalent for the I[9] --for instance instead of >>> > using >>> > "list" keyword I use [ ] while creating a list. Is there a shortcut for >>> > np.array instead of writing np.array(a*b) explicitly? >>> >>> How about just (a*b).cumsum() ? >>> >>> Angus. >>> -- >>> AJC McMorland >>> Post-doctoral research fellow >>> Neurobiology, University of Pittsburgh >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> Yep that's it :) I knew that it was a very simple question. >> >> What confused me is I remember somewhere not sure maybe in IPython dev I >> have gotten when I do: >> >> (a*b).cumsum() >> >> AttributeError: 'tuple' object has no attribute 'cumsum' error. >> >> So I was thinking ( ) is a ssugar for tuple and np.array might have >> something special than these. >> >> -- >> G?khan > > Self-correction: > > It works correctly in IPython-dev as well. > > And further in Python 2.6.2: > >>>> p = () >>>> p > () >>>> type(p) > >>>> type((a*b)) > > > ( ) doesn't only works as a tuple operator. It also has its original > parenthesis functionality :) except for empty tuple constructor, a comma defines a tuple and parenthesis are just parenthesis >>> type((a*b)) >>> type((a*b,)) >>> a*b, (array([0, 1]),) >>> type(_) Josef > > -- > G?khan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From pgmdevlist at gmail.com Wed Feb 10 12:01:53 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 10 Feb 2010 12:01:53 -0500 Subject: [Numpy-discussion] Emulate left outer join? 
In-Reply-To: <88e473831002100810p575cc71dte5add43ebacbf1f@mail.gmail.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091552w4146b0adl4deabfecba482f6c@mail.gmail.com> <1cd32cbb1002091602w455bead5h778fd46b046298ae@mail.gmail.com> <3d375d731002091604n3a8e0754nb7635761c3644c35@mail.gmail.com> <1265763275.7966.5.camel@idol> <3E12B7C1-26A9-4001-A874-D13F38A97F7C@gmail.com> <88e473831002091716w13048c1cu75a8999826e3c33a@mail.gmail.com> <88e473831002100654qe53230bn81805f656390a9a7@mail.gmail.com> <88e473831002100810p575cc71dte5add43ebacbf1f@mail.gmail.com> Message-ID: <503BB977-402A-4034-9997-4BF24C7F5D18@gmail.com> On Feb 10, 2010, at 11:10 AM, John Hunter wrote: > On Wed, Feb 10, 2010 at 8:54 AM, John Hunter wrote: >> On Tue, Feb 9, 2010 at 7:53 PM, Pierre GM wrote: >>> On Feb 9, 2010, at 8:16 PM, John Hunter wrote: >> >>>> and have "totxt", "tocsv". etc... from rec2txt, rec2csv, etc... I >>>> think the functionality of mlab.rec_summarize and rec_groupby is very >>>> useful, but the interface is a bit clunky and could be made easier for >>>> the common use cases. >>> >>> Are you going to work on it or should I step in (in a few weeks...). >> >> I don't think I'll have time to do it -- I'm already behind on an mpl >> release -- but I'll propose it to Sameer who has done a lot of work >> on the matplotlib.mlab.rec_* methods and see if he has some time for >> it. > > Sameer is interested in helping with this, but will also not be able > to get to it for a couple of weeks. Anyway, that would go in the 1.5 release, right ? I mean, the one with datetime in it, whatever release number it is ? So that should by us a bit of time. From jjstickel at vcn.com Wed Feb 10 12:36:52 2010 From: jjstickel at vcn.com (Jonathan Stickel) Date: Wed, 10 Feb 2010 10:36:52 -0700 Subject: [Numpy-discussion] loadtxt and genfromtxt Message-ID: <4B72EEB4.3070703@vcn.com> I am new to python/numpy/scipy and new to this list. 
I recently migrated over from using Octave and am very impressed so far!

Recently I needed to load data from a text file and quickly found
numpy's "loadtxt" function.  However, there were missing data values,
which loadtxt does not handle.  After some amount of googling, I did
find "genfromtxt" which does exactly what I need.  It would have been
helpful if genfromtxt was included in the "See Also" portion of the
docstring for loadtxt.  Perhaps this is a simple oversight?  I see that
genfromtxt does mention loadtxt in its docstring.

Let me know if I should submit a bug somewhere, or if it is sufficient
to mention this small item on the list.

Thanks,
Jonathan

P.S.  My first send did not seem to go through.  Trying again; sorry if
this is posted twice...

From friedrichromstedt at gmail.com  Wed Feb 10 13:57:11 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Wed, 10 Feb 2010 19:57:11 +0100
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: 

I wonder why there is no response on my e-mail dating back to Feb 4.
Is there nobody interested in it, is somebody working on it, or did it
simply did not come through?  I changed the recipient now to
"Discussion of Numerical Python", hth ...

Sorry when there is double posting now, it's not intended if so.

2010/2/4 Friedrich Romstedt :
> Hi,
>
> I'm just coding a package for uncertain arrays using the accelerated
> numpy functionality intensively.  I'm sorry, but I have to give some
> background information first.  The package provides a class
> upy.undarray, which holds the nominal value and the uncertainty
> information.  It has methods __add__(other), __radd__(other), ...,
> __eq__(other), __ne__(other), which accept both upy.undarrays and all
> other values suitable for coercion, thus also native numpy.ndarrays.
> But because numpy treats, in the statement:
>
> result = numpyarray * upyarray
>
> upyarray as a scalar, because it's not a numpy.ndarray, I have to
> overload the numpy arithmetics by my own objects by using
> numpy.set_numeric_ops(add = ..., ..., equal = equal, not_equal =
> not_equal).  The arguments are defined by the module (it will be
> clarified below).
>
> Because numpy.add etc. are ufuncs exhibiting attributes, I wrote a
> class to wrap them:
>
> class ufuncWrap:
>         """Wraps numpy ufuncs.  Behaves like the original, with the exception
>         that __call__() will be overloaded."""
>
>         def __init__(self, ufunc, overload):
>                 """UFUNC is the ufunc to be wrapped.  OVERLOAD is the name (string)
>                 of the undarray method to be used in overloading __call__()."""
>
>                 self.ufunc = ufunc
>                 self.overload = overload
>
>         def __call__(self, a, b, *args, **kwargs):
>                 """When B is an undarray, call B.overload(a), else .ufunc(a, b)."""
>
>                 if isinstance(b, undarray):
>                         return getattr(b, self.overload)(a)
>                 else:
>                         return self.ufunc(a, b, *args, **kwargs)
>
>         def __getattr__(self, attr):
>                 """Return getattr(.ufunc, ATTR)."""
>
>                 return getattr(self.ufunc, attr)
>
> I only have to wrap binary operators.
>
> Then, e.g.:
>
> class Equal(ufuncWrap):
>         def __init__(self):
>                 ufuncWrap.__init__(self, numpy.equal, '__eq__')
>
> equal = Equal()
>
> This works as expected.
>
> But this approach fails (in first iteration) for a similar class
> NotEqual.  I have let the module output the arguments passed to
> ufuncWrap.__call__(), and I found that the statement:
>
> result = (numpyarray != upyarray)
>
> with:
>
> numpyarray = numpy.asarray([1.0])
> upyarray = upy.ndarray([2.0], error = [0.1])
>
> is passed on to NotEqual.__call__() as the arguments:
>
> a = a numpy-array array([1.0])
> b = a numpy-array array(shape = (), dtype = numpy.object), which is a
> scalar array holding the upy.ndarray instance passed to !=.
>
> I can work around the exhibited behaviour by:
>
> class NotEqual(ufuncWrap):
>         def __init__(self):
>                 ufuncWrap.__init__(self, numpy.not_equal, '__ne__')
>
>         def __call__(self, a, b, *args, **kwargs):
>                 # numpy's calling mechanism of not_equal() seems to have a bug,
>                 # such that b is always a numpy.ndarray.  When b should be an undarray,
>                 # it is a numpy.ndarray(dtype = numpy.object, shape = ()) ...
>
>                 # Make the call also compatible with future, bug-fixed versions.
>                 if isinstance(b, numpy.ndarray):
>                         if b.ndim == 0:
>                                 # Implement some conversion from scalar array to stored object.
>                                 b = b.sum()
>
>                 return ufuncWrap.__call__(self, a, b, *args, **kwargs)
>
> What is the reason for the behaviour observed?
>
> I'm using numpy 1.4.0 with Python 2.5.
>
> Friedrich

From kwgoodman at gmail.com  Wed Feb 10 14:04:32 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Wed, 10 Feb 2010 11:04:32 -0800
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: 

On Wed, Feb 10, 2010 at 10:57 AM, Friedrich Romstedt wrote:
> I wonder why there is no response on my e-mail dating back to Feb 4.
> Is there nobody interested in it, is somebody working on it, or did it
> simply did not come through?
> I changed the recipient now to
> "Discussion of Numerical Python", hth ...
>
> Sorry when there is double posting now, it's not intended if so.
>
> 2010/2/4 Friedrich Romstedt :
>> Hi,
>> [...]

No one answered my post either :(

http://old.nabble.com/arrays-and-__eq__-td26987903.html#a26987903

Is it the same issue?

From aisaac at american.edu  Wed Feb 10 14:41:18 2010
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 10 Feb 2010 14:41:18 -0500
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: <4B730BDE.7000803@american.edu>

On 2/10/2010 1:57 PM, Friedrich Romstedt wrote:
> I wonder why there is no response on my e-mail dating back to Feb 4.
> Is there nobody interested in it, is somebody working on it, or did it
> simply did not come through?

I'm going to guess it is because your actual question is at
the very end of a long post ...

fwiw,
Alan Isaac

From josef.pktd at gmail.com  Wed Feb 10 14:54:09 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 10 Feb 2010 14:54:09 -0500
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: <4B730BDE.7000803@american.edu>
References: <4B730BDE.7000803@american.edu>
Message-ID: <1cd32cbb1002101154w5a114a4bm1dded7a951da775e@mail.gmail.com>

On Wed, Feb 10, 2010 at 2:41 PM, Alan G Isaac wrote:
> On 2/10/2010 1:57 PM, Friedrich Romstedt wrote:
>> I wonder why there is no response on my e-mail dating back to Feb 4.
>> Is there nobody interested in it, is somebody working on it, or did it
>> simply did not come through?
>
> I'm going to guess it is because your actual question is at
> the very end of a long post ...

Or maybe it's because of a bug (or feature) in numpy that only 3 to 6
developers understand, and they are too busy to go digging.
Josef

>
> fwiw,
> Alan Isaac
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

From oliphant at enthought.com  Wed Feb 10 15:31:35 2010
From: oliphant at enthought.com (Travis Oliphant)
Date: Wed, 10 Feb 2010 14:31:35 -0600
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4B6F6792.1000103@silveregg.co.jp>
	<5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
Message-ID: <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>

On Feb 8, 2010, at 4:08 PM, Darren Dale wrote:

> On Mon, Feb 8, 2010 at 5:05 PM, Darren Dale wrote:
>> On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman wrote:
>>> On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris wrote:
>>>> Should the release containing the datetime/hasobject changes be
>>>> called
>>>>
>>>> a) 1.5.0
>>>> b) 2.0.0
>>>
>>> My vote goes to b.
>>
>> You don't matter. Nor do I.
>
> I definitely should have counted to 100 before sending that. It wasn't
> helpful and I apologize.

I actually found this quite funny.  I need to apologize if my previous
email sounded like I was trying to silence other opinions, somehow.  As
Robert alluded to in a rather well-written email that touched on
resolving disagreements, it can be hard to communicate that you are
listening to opposing views despite the fact that your opinion has not
changed.

We have a SciPy steering committee that should be reviewed again this
year at the SciPy conference.  As Robert said, we prefer not to have to
use it to decide questions.  I think it has been trotted out as a place
holder for a NumPy steering committee which has never really existed as
far as I know.  NumPy decisions in the past have been made by me and
other people who are writing the code.  I think we have tried pretty
hard to listen to all points of view before doing anything.  I think
there are many examples of this.  I hope this previous history
alleviates some concern that something else is going to be done here.
Exhibit A is again my comment that we should change one of the members
of an internal data structure ('hasobject') which I thought we would
change at 1.1, but the demand for ABI stability has left it unchanged to
this day.

The list I proposed for deciding the issue was the group I am aware of
having written significant code for NumPy.  I suppose I unintentionally
left off Pierre GM who contributed masked array support.  We need some
way of making a decision, and actually painting this bike shed.

Christopher's argument that having a NumPy 2.0 sets expectations for
keeping 1.4 and 2.0 is a strong one in my mind.  The policy of coupling
ABI and version numbers makes less and less sense to me as I hear the
concerns of keeping the ABI consistent.  We should be free to change the
version numbers without implying an ABI break.  I can only envision
right now perhaps one more ABI break (the one David has talked about to
make pimpl interfaces).

If anyone else feels like their point of view has not been expressed,
then please speak up now.

Best regards,

-Travis

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgmdevlist at gmail.com  Wed Feb 10 15:56:13 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 10 Feb 2010 15:56:13 -0500
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4B6F6792.1000103@silveregg.co.jp>
	<5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
Message-ID: <01EACB54-78B6-4B1F-BFC6-FC921A20FDEB@gmail.com>

On Feb 10, 2010, at 3:31 PM, Travis Oliphant wrote:
>
> The list I proposed for deciding the issue was the group I am aware of
> having written significant code for NumPy. I suppose I unintentionally
> left off Pierre GM who contributed masked array support. We need some
> way of making a decision, and actually painting this bike shed.

Oh, don't mind me. As I only contribute on the Python side, I don't feel
qualified to voice any opinion about APIs/ABIs. If hard-pressed, I would
have leaned on Travis's side. Anyway, you would have heard me if I
needed to.
Just let me know what I need to backport for which version, I'll bring
my brushes.

From dlc at halibut.com  Wed Feb 10 16:57:10 2010
From: dlc at halibut.com (David Carmean)
Date: Wed, 10 Feb 2010 13:57:10 -0800
Subject: [Numpy-discussion] lib.recfunctions: which version? (was Re: Emulate left outer join?)
In-Reply-To: <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com>; from jdh2358@gmail.com on Tue, Feb 09, 2010 at 04:49:30PM -0600 References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> Message-ID: <20100210135710.C17330@halibut.com> On Tue, Feb 09, 2010 at 04:49:30PM -0600, John Hunter wrote: > On Tue, Feb 9, 2010 at 4:43 PM, Fernando Perez wrote: > > On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern wrote: > >> > >> numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') > >> Sorry, guys, maybe this is my python-newbness showing, but after installing numpy 1.4.0 (Windows 7 x64, Python(x,y) distro) and getting >>> np.__version__ '1.4.0' I still can't figure out what to import/how to get to numpy.lib.recfunctions. Maybe I don't yet understand the scipy/numpy/matplotlib package structure? From robert.kern at gmail.com Wed Feb 10 17:12:27 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Feb 2010 16:12:27 -0600 Subject: [Numpy-discussion] lib.recfunctions: which version? (was Re: Emulate left outer join?) In-Reply-To: <20100210135710.C17330@halibut.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> <20100210135710.C17330@halibut.com> Message-ID: <3d375d731002101412t536d935ev67c5628032675662@mail.gmail.com> On Wed, Feb 10, 2010 at 15:57, David Carmean wrote: > On Tue, Feb 09, 2010 at 04:49:30PM -0600, John Hunter wrote: >> On Tue, Feb 9, 2010 at 4:43 PM, Fernando Perez wrote: >> > On Tue, Feb 9, 2010 at 5:02 PM, Robert Kern wrote: >> >> >> >> numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter') >> >> > > Sorry, guys, maybe this is my python-newbness showing, but after installing > numpy 1.4.0 (Windows 7 x64, Python(x,y) distro) and getting > > ? ?>>> np.__version__ > ? 
?'1.4.0' > > I still can't figure out what to import/how to get to numpy.lib.recfunctions. > Maybe I don't yet understand the scipy/numpy/matplotlib package structure? from numpy.lib import recfunctions recfunctions.join_by(...) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Wed Feb 10 17:14:04 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 10 Feb 2010 17:14:04 -0500 Subject: [Numpy-discussion] lib.recfunctions: which version? (was Re: Emulate left outer join?) In-Reply-To: <20100210135710.C17330@halibut.com> References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> <20100210135710.C17330@halibut.com> Message-ID: <398F5FF0-815A-414F-A224-4BE9C100BE5D@gmail.com> On Feb 10, 2010, at 4:57 PM, David Carmean wrote: > > I still can't figure out what to import/how to get to numpy.lib.recfunctions. > Maybe I don't yet understand the scipy/numpy/matplotlib package structure? Nope, that's not a standard one: >>> import numpy.lib.recfunctions as recf From dlc at halibut.com Wed Feb 10 17:16:33 2010 From: dlc at halibut.com (David Carmean) Date: Wed, 10 Feb 2010 14:16:33 -0800 Subject: [Numpy-discussion] lib.recfunctions: which version? (was Re: Emulate left outer join?) 
In-Reply-To: <3d375d731002101412t536d935ev67c5628032675662@mail.gmail.com>; from robert.kern@gmail.com on Wed, Feb 10, 2010 at 04:12:27PM -0600 References: <20100209135207.D12825@halibut.com> <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com> <88e473831002091449g6e17ca1em12f7c36ae86106fd@mail.gmail.com> <20100210135710.C17330@halibut.com> <3d375d731002101412t536d935ev67c5628032675662@mail.gmail.com> Message-ID: <20100210141633.D17330@halibut.com> On Wed, Feb 10, 2010 at 04:12:27PM -0600, Robert Kern wrote: > > I still can't figure out what to import/how to get to numpy.lib.recfunctions. > > Maybe I don't yet understand the scipy/numpy/matplotlib package structure? > > from numpy.lib import recfunctions > > recfunctions.join_by(...) Thank you! worked. (Now trying to build all this on my FreeBSD 6.0 system :) From dsdale24 at gmail.com Wed Feb 10 17:20:29 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Wed, 10 Feb 2010 17:20:29 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> Message-ID: On Wed, Feb 10, 2010 at 3:31 PM, Travis Oliphant wrote: > On Feb 8, 2010, at 4:08 PM, Darren Dale wrote: >> I definitely should have counted to 100 before sending that. It wasn't >> helpful and I apologize. > > I actually found this quite funny. ? ?I need to apologize if my previous > email sounded like I was trying to silence other opinions, somehow. ? As > Robert alluded to in a rather well-written email that touched on resolving > disagreements, it can be hard to communicate that you are listening to > opposing views despite the fact that your opinion has not changed. 
For what its worth, I feel I have had ample opportunity to make my concerns known, and at this point will leave it to others to do right by the numpy user community. > We have a SciPy steering committee that should be reviewed again this year > at the SciPy conference. ? As Robert said, we prefer not to have to use it > to decide questions. ? I think it has been trotted out as a place holder for > a NumPy steering committee which has never really existed as far as I know. > ? NumPy decisions in the past have been made by me and other people who are > writing the code. ? I think we have tried pretty hard to listen to all > points of view before doing anything. Just a comment: I would like to point out that there is (necessarily) some arbitrary threshold to who is being recognized as "people who are actively writing the code". Over the last year, I have posted fixes for multiple bugs and extended the ufunc wrapping mechanisms (__array_prepare__) which were included in numpy-1.4.0, and have also been developing the quantities package, which is intimately tied up with numpy's development. I don't think that makes me a major contributor like you or Chuck etc., but I am heavily invested in numpy's development and an active contributor. Maybe it would be worth considering an approach where the numpy user community occasionally nominates a few people to serve on some kind of steering committee along with the developers. Although if there is interest in or criticism of this idea, I don't think this is the right thread to discuss it. Darren From matthew.brett at gmail.com Wed Feb 10 17:28:13 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 10 Feb 2010 14:28:13 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> Message-ID: <1e2af89e1002101428j283ca726m71e219aed6c6c508@mail.gmail.com> Hi, > ? NumPy decisions in the past have been made by me and other people who are > writing the code. ? I think we have tried pretty hard to listen to all > points of view before doing anything. ? ?I think there are many examples of > this. ? I hope this previous history alleviates some concern that something > else is going to be done here. I think it's notable in general how collegial numpy discussions have been, and for that, thank you to you in particular. I was going to say earlier, but didn't, that your list of steerers seemed very sensible. Only a small point, but, while I completely agree that the version number is a bike-shed, I don't think that's true of the ABI breakage, but I'm sure that's not what you meant. See you, Matthew From Chris.Barker at noaa.gov Wed Feb 10 18:34:51 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 10 Feb 2010 15:34:51 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1e2af89e1002101428j283ca726m71e219aed6c6c508@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <1e2af89e1002101428j283ca726m71e219aed6c6c508@mail.gmail.com> Message-ID: <4B73429B.2000402@noaa.gov> Matthew Brett wrote: > Only a small point, but, while I completely agree that the version > number is a bike-shed, that's what I meant when I said it... > I don't think that's true of the ABI breakage, well, yes and no. 
On the one hand, it's very big deal -- not the color of the shed. On the other hand, it's a simple enough concept that almost anyone can have an opinion on it (even me). But getting those opinions out there is important, and that was done. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ralf.gommers at googlemail.com Wed Feb 10 19:03:25 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 11 Feb 2010 08:03:25 +0800 Subject: [Numpy-discussion] loadtxt and genfromtxt In-Reply-To: <4B72EEB4.3070703@vcn.com> References: <4B72EEB4.3070703@vcn.com> Message-ID: On Thu, Feb 11, 2010 at 1:36 AM, Jonathan Stickel wrote: > I am new to python/numpy/scipy and new to this list. I recently > migrated over from using Octave and am very impressed so far! > > Recently I needed to load data from a text file and quickly found > numpy's "loadtxt" function. However, there were missing data values, > which loadtxt does not handle. After some amount of googling, I did > find "genfromtxt" which does exactly what I need. It would have been > helpful if genfromtxt was included in the "See Also" portion of the > docstring for loadtxt. Perhaps this is a simple oversight? I see that > genfromtxt does mention loadtxt in its docstring. > Thanks, fixed: http://docs.scipy.org/numpy/docs/numpy.lib.io.loadtxt/ > > Let me know if I should submit a bug somewhere, or if it is sufficient > to mention this small item on the list. > If you find more such things, please consider creating an account in the doc wiki I linked above and contributing directly. After account creation you'd need to ask for edit rights on this list. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From dlc at halibut.com  Wed Feb 10 19:26:04 2010
From: dlc at halibut.com (David Carmean)
Date: Wed, 10 Feb 2010 16:26:04 -0800
Subject: [Numpy-discussion] Shape of join_by result is not what I expected
In-Reply-To: <3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com>; from robert.kern@gmail.com on Tue, Feb 09, 2010 at 04:02:48PM -0600
References: <20100209135207.D12825@halibut.com>
	<3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com>
Message-ID: <20100210162604.E17330@halibut.com>

On Tue, Feb 09, 2010 at 04:02:48PM -0600, Robert Kern wrote:
> numpy.lib.recfunctions.join_by(key, r1, r2, jointype='leftouter')
> * The output is sorted along the key.
> * A temporary array is formed by dropping the fields not in the key for
>   the two arrays and concatenating the result. This array is then
>   sorted, and the common entries selected. The output is constructed by
>   filling the fields with the selected entries. Matching is not
>   preserved if there are some duplicates...

Got this to "work", but now it's revealed my lack of understanding of the
shape of arrays; I'd hoped that the results would look like (be the same
shape as?) the column_stack results.  I wanted to be able to take slices
of the results.

I created the original arrays from a list of tuples of the form

[(1265184061, 0.02), (1265184121, 0.0), (1265184181, 0.31), ]

so the resulting arrays had the shape (n,2); these seemed easy to
manipulate by slicing, and my recollection was that this was a useful
format to feed matplotlib.plot.
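For reference, the plain-array construction he describes can be sketched like this (a minimal sketch; the sample values come from the tuples quoted above, while the variable names are made up):

```python
import numpy as np

# Hypothetical sample of the (timestamp, value) pairs described above.
pairs = [(1265184061, 0.02), (1265184121, 0.0), (1265184181, 0.31)]

# np.array on a list of uniform numeric tuples gives a plain 2-D float
# array, not a structured array, so it has the (n, 2) shape that slices
# cleanly by column.
arr = np.array(pairs)
print(arr.shape)          # (3, 2)

times = arr[:, 0]         # first column: timestamps
values = arr[:, 1]        # second column: the sampled values
print(values.tolist())    # [0.02, 0.0, 0.31]
```

(The join_by output he goes on to show is, by contrast, a 1-D structured array, which is why this kind of column slicing no longer applies to it.)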
The result looks like: array([ (1265184061.0, 0.0, 0.029999999999999999, 152.0, 1.5600000000000001, \ 99.879999999999995, 0.02, 3.0, 0.0, 0.040000000000000001, 0.070000000000000007, \ 0.68999999999999995),\ (1265184121.0, 0.0, 0.01, 148.0, 1.46, 99.950000000000003, 0.0, 0.0, 0.0, 0.01, \ 0.040000000000000001, 0.56000000000000005), ] ) with shape (n,) These 1-dimensional results give me nice text output, I can't/don't know how to slice them; this form may work for one of my use cases, but my main use case is to reprocess this data--which is for one server--by taking one field from about 60 servers worth of this data (saved to disk as binary pickles) and plot them all to a single canvas. In other words, from sixty sets of this: tposix ldavg-15 ldavg-1 ldavg-5 1265184061.00 0.00 0.03 1.56 1265184121.00 0.00 0.01 1.46 1265184181.00 0.00 0.65 1.37 I need to collect and plot ldavg-1 as separate time-series plots. ( perhaps I'm trying to use this stuff for a real project too early on the learning curve? :) Thanks for the great help so far. From cournape at gmail.com Wed Feb 10 20:22:44 2010 From: cournape at gmail.com (David Cournapeau) Date: Thu, 11 Feb 2010 10:22:44 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002080149u101b537q5747277088204da0@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> Message-ID: <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> On Thu, Feb 11, 2010 at 5:31 AM, Travis Oliphant wrote: > Christopher's argument that having a NumPy 2.0 sets expectations for keeping > 1.4 and 2.0 is a strong one in my mind. ? The policy of coupling ABI and > version numbers makes less and less sense to me as I hear the concerns of > keeping the ABI consistent. ? 
We should be free to change the version
> numbers without implying an ABI break.  I can only envision right now
> perhaps one more ABI break (the one David has talked about to make pimpl
> interfaces).

I think one issue with versions is that they convey multiple things at
the same time. The number itself conveys an idea of "progress" and
"features" - the bigger the change in the number, the bigger the changes
expected by users. This is the part where everyone has an opinion. Then,
there is also the idea that for a library, versions convey ABI and API
compatibility, and this should be purely technical IMO. There are well
established rules here:

http://www.linuxshowcase.org/2000/2000papers/papers/browndavid/browndavid_html/

"""
A major release is an incompatible change to the system software, and
implies that [some] applications dependent on the earlier major release
(specifically those that relied upon the specific features that have
changed incompatibly) will need to be changed in order to work on the
new major release.

A minor release of the system software is an upward-compatible change --
one which adds some new interfaces, but maintains compatibility for all
existing interfaces. Applications (or other software products) dependent
on an earlier minor release will not need to be changed in order to work
on the new minor release: Since the later release contains all the
earlier interfaces, the change(s) imparted to the system does not affect
those applications.

A micro release is a compatible change which does not add any new
interfaces: A change is made to the implementation (such as to improve
performance, scalability or some other qualitative property) but
provides an interface equivalent to all other micro revisions at the
same minor level. Again, dependent applications (or other software
products) will not need to be changed in order to work on that release,
as the change imparted to the system (or library) does not undermine
their dependencies.
"""

This idea is ingrained in the tools (the loader uses those rules to
decide which shared library to load for a given binary with its library
dependencies). Now, Python itself does not follow this rule: ABI and API
breaks arrive together (every minor release), but it is my impression
that they intend to be stricter for the 3.x series.

I have dived into gtk development quite a bit to look at existing
processes: Gtk has a good history in that aspect, and is used by a lot
of ISVs outside open source (vmware, adobe, etc...). They have an
experience we don't have. Coincidentally, they are discussing for gtk
3.0 the best way to go forward, and they have the exact same issue about
the lack of implementation hiding for structures. For example, in
http://micke.hallendal.net/blog/gtk-30-enabling-incrementalism/, Havoc
Pennington (one of the main gtk developers) makes the argument for 3.0
breaking the ABI only, without any new features, serving as a basis for
new features afterwards, to avoid having a version in preparation taking
too long. Maybe that's an idea to follow.

Concerning the fear raised by Pauli and others about the massive
breakage, I am also looking at existing refactoring tools in C to make
this almost automatic (mozilla has developed an impressive set of tools
in that area, for example pork:
https://developer.mozilla.org/En/Pork_Tools).

cheers,

David

From pgmdevlist at gmail.com  Wed Feb 10 21:16:49 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 10 Feb 2010 21:16:49 -0500
Subject: [Numpy-discussion] Shape of join_by result is not what I expected
In-Reply-To: <20100210162604.E17330@halibut.com>
References: <20100209135207.D12825@halibut.com>
	<3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com>
	<20100210162604.E17330@halibut.com>
Message-ID: 

On Feb 10, 2010, at 7:26 PM, David Carmean wrote:
>
> Got this to "work", but now it's revealed my lack of understanding of the shape
> of arrays; I'd hoped that the results would look like (be the same shape as?)
> the column_stack results.

You're misunderstanding what structured arrays / recarrays are. Imagine
a structured array of N records with 3 fields A, B and C. The shape of
this array is (N,), but each element of the array is a special numpy
object (numpy.void) with three elements (named A, B and C). The basic
differences with an array of shape (N,3), where you expect the three
columns to correspond to the fields A, B, C, are that:
(1): the types of the fields of a structured array are not necessarily
homogeneous (you can have A int, B float and C string, e.g.) whereas for
a standard array each column has the same type.
(2): the organization in memory is slightly different.
Anyway, you're working with functions that return structured arrays, not
standard arrays, so you end up with a 1D structured array.

> I wanted to be able to take slices of the
> results.

Quite doable, depending on how you wanna slice: if you wanna take, say,
the 2nd to 5th entries, just use [1:5]: the result will be a structured
array with the same fields as the original.

> I created the original arrays from a list of tuples of the form
>
> [(1265184061, 0.02), (1265184121, 0.0), (1265184181, 0.31), ]
>
> so the resulting arrays had the shape (n,2);

What function did you use to create this array ?
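Pierre's distinction can be sketched concretely as follows (a minimal illustration; the field names A, B, C and all values are made up, matching the hypothetical record layout he describes):

```python
import numpy as np

# A structured array: N records, three named fields of mixed types
# (field names A, B, C are hypothetical, echoing the explanation above).
rec = np.array([(1, 1.5, 'x'), (2, 2.5, 'y'), (3, 3.5, 'z'), (4, 4.5, 'w')],
               dtype=[('A', int), ('B', float), ('C', 'S1')])
print(rec.shape)          # (4,) -- 1-D: one numpy.void record per element
print(rec['B'])           # columns are reached by field name, not [:, 1]

# Slicing records works as described: entries 2 to 3 keep the same fields.
sub = rec[1:3]
print(sub['A'])           # [2 3]

# A plain homogeneous array, by contrast, is 2-D with one dtype for
# every column.
std = np.array([[1, 1.5, 0], [2, 2.5, 0]])
print(std.shape)          # (2, 3)
```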
""" This idea is ingrained in the tool (the loader use those rules to decide which shared library to load for a given binary with its libraries dependencies). Now, python itself does not follow this rule: ABI and API breaks arrive together (every minor release), but it is my impression that they intend to be stricter for the 3.x series. I have dived into gtk development quite a bit to look at existing processes: Gtk has a good history in that aspect, and is used by a lot of ISV outside open source (vmware, adobe, etc...). They have an experience we don't have. Coincidentally, they are discussing for gtk 3.0 about the best way to go forward, and they have the exact same issue about lack of implementation hiding for structures. For example in there: http://micke.hallendal.net/blog/gtk-30-enabling-incrementalism/, Havoc Pennington (one of the main gtk developer) makes the argument about 3.0 breaking ABI only without any new feature, serving as a basis for new features afterwise, to avoid having a version in preparation taking too long. Maybe that's an idea to follow. Concerning the fear raised by Pauli and others about the massive breakage, I am also looking at existing refactoring tools in C to make this almost automatic (mozilla has developed an impressive set of tools in that area, for example pork: https://developer.mozilla.org/En/Pork_Tools). 
cheers,

David

From pgmdevlist at gmail.com Wed Feb 10 21:16:49 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 10 Feb 2010 21:16:49 -0500
Subject: [Numpy-discussion] Shape of join_by result is not what I expected
In-Reply-To: <20100210162604.E17330@halibut.com>
References: <20100209135207.D12825@halibut.com>
	<3d375d731002091402p44a0da34i7581289131988e6@mail.gmail.com>
	<20100210162604.E17330@halibut.com>
Message-ID: 

On Feb 10, 2010, at 7:26 PM, David Carmean wrote:
>>
>
> Got this to "work", but now it's revealed my lack of understanding of the shape
> of arrays; I'd hoped that the results would look like (be the same shape as?)
> the column_stack results.

You're misunderstanding what structured arrays / recarrays are. Imagine a
structured array of N records with 3 fields A, B and C. The shape of this
array is (N,), but each element of the array is a special numpy object
(numpy.void) with three elements (named A, B and C). The basic differences
with an array of shape (N,3), where you expect the three columns to
correspond to the fields A, B, C, are that:
(1): the types of the fields of a structured array are not necessarily
homogeneous (you can have A int, B float and C string, e.g.) whereas for a
standard array each column has the same type.
(2): the organization in memory is slightly different.
Anyway, you're working with functions that return structured arrays, not
standard arrays, so you end up with a 1D structured array.

> I wanted to be able to take slices of the
> results.

Quite doable, depending on how you wanna slice: if you wanna take, say, the
2nd to 5th entries, just use [1:5]: the result will be a structured array
with the same fields as the original.

> I created the original arrays from a list of tuples of the form
>
> [(1265184061, 0.02), (1265184121, 0.0), (1265184181, 0.31), ]
>
> so the resulting arrays had the shape (n,2);

What function did you use to create this array?
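[Editorial aside: the structured-array behaviour described above can be
sketched in a few lines. Field names come from the load-average table in
this thread; the values are made up for illustration.]

```python
import numpy as np

# One record per sample: a timestamp field plus three load-average fields.
dt = np.dtype([("tposix", np.float64), ("ldavg-15", np.float64),
               ("ldavg-1", np.float64), ("ldavg-5", np.float64)])
arr = np.array([(1265184061.0, 0.00, 0.03, 1.56),
                (1265184121.0, 0.00, 0.01, 1.46),
                (1265184181.0, 0.00, 0.65, 1.37)], dtype=dt)

print(arr.shape)        # (3,): one element per record, not (3, 4)
print(arr["ldavg-1"])   # one field as a plain 1-D float array, ready to plot
print(arr[1:3])         # slicing records keeps all the fields
```

So for the sixty-servers use case, `arr["ldavg-1"]` on each pickle gives
exactly the 1-D series to hand to the plotting call.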
If you end up with an (n,2), something probably went wrong and you're
dealing with a standard array.

> these seemed easy to
> manipulate by slicing, and my recollection was that this was a
> useful format to feed matplotlib.plot.
>
> The result looks like:
>
> array([ (1265184061.0, 0.0, 0.029999999999999999, 152.0, 1.5600000000000001, \
> 99.879999999999995, 0.02, 3.0, 0.0, 0.040000000000000001, 0.070000000000000007, \
> 0.68999999999999995),\
> (1265184121.0, 0.0, 0.01, 148.0, 1.46, 99.950000000000003, 0.0, 0.0, 0.0, 0.01, \
> 0.040000000000000001, 0.56000000000000005), ] )
>
> with shape (n,)

Be more specific: dtype?

> These 1-dimensional results give me nice text output, I can't/don't know
> how to slice them;

Well, once again, that depends what you wanna do. Please be more specific.

> this form may work for one of my use cases, but my
> main use case is to reprocess this data--which is for one server--by
> taking one field from about 60 servers worth of this data (saved to disk
> as binary pickles) and plot them all to a single canvas.
>
> In other words, from sixty sets of this:
>
> tposix ldavg-15 ldavg-1 ldavg-5
> 1265184061.00 0.00 0.03 1.56
> 1265184121.00 0.00 0.01 1.46
> 1265184181.00 0.00 0.65 1.37
>
> I need to collect and plot ldavg-1 as separate time-series plots.

You can access each field independently as yourarray["tposix"],
yourarray["ldavg-15"], ....

From charlesr.harris at gmail.com Wed Feb 10 21:35:40 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 10 Feb 2010 19:35:40 -0700
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> Message-ID: On Wed, Feb 10, 2010 at 6:22 PM, David Cournapeau wrote: > On Thu, Feb 11, 2010 at 5:31 AM, Travis Oliphant > wrote: > > > Christopher's argument that having a NumPy 2.0 sets expectations for > keeping > > 1.4 and 2.0 is a strong one in my mind. The policy of coupling ABI and > > version numbers makes less and less sense to me as I hear the concerns of > > keeping the ABI consistent. We should be free to change the version > > numbers without implying an ABI break. I can only envision right now > > perhaps one more ABI break (the one David has talked about to make pimpl > > interfaces). > > I think one issue with versions is that they convey multiple things at > the same time. The number itself conveys an idea of "progress" and > "features" - the bigger the change in the number, the bigger changes > are expected by users. This is the part where everyone has an opinion. > > Then, there is also the idea that for a library, versions conveys ABI > and API compatibility, and this should be purely technical IMO. There > are well established rules here: > > > http://www.linuxshowcase.org/2000/2000papers/papers/browndavid/browndavid_html/ > > """ > A major release is an incompatible change to the system software, and > implies that [some] applications dependent on the earlier major > release (specifically those that relied upon the specific features > that have changed incompatibly) will need to be changed in order to > work on the new major release. > > A minor release of the system software is an upward-compatible > change--one which adds some new interfaces, but maintains > compatibility for all existing interfaces. 
Applications (or other > software products) dependent on an earlier minor release will not need > to be changed in order to work on the new minor release: Since the > later release contains all the earlier interfaces, the change(s) > imparted to the system does not affect those applications. > > A micro release is a compatible change which does not add any new > interfaces: A change is made to the implementation (such as to improve > performance, scalability or some other qualitative property) but > provides an interface equivalent to all other micro revisions at the > same minor level. Again, dependent applications (or other software > products) will not need to be changed in order to work on that release > as the change imparted to the system (or library) does not undermine > their dependencies. > """ > > This idea is ingrained in the tool (the loader use those rules to > decide which shared library to load for a given binary with its > libraries dependencies). Now, python itself does not follow this rule: > ABI and API breaks arrive together (every minor release), but it is my > impression that they intend to be stricter for the 3.x series. > > I have dived into gtk development quite a bit to look at existing > processes: Gtk has a good history in that aspect, and is used by a lot > of ISV outside open source (vmware, adobe, etc...). They have an > experience we don't have. > > Coincidentally, they are discussing for gtk 3.0 about the best way to > go forward, and they have the exact same issue about lack of > implementation hiding for structures. For example in there: > http://micke.hallendal.net/blog/gtk-30-enabling-incrementalism/, Havoc > Pennington (one of the main gtk developer) makes the argument about > 3.0 breaking ABI only without any new feature, serving as a basis for > new features afterwise, to avoid having a version in preparation > taking too long. Maybe that's an idea to follow. 
> > Concerning the fear raised by Pauli and others about the massive
> > breakage, I am also looking at existing refactoring tools in C to make
> > this almost automatic (mozilla has developed an impressive set of
> > tools in that area, for example pork:
> > https://developer.mozilla.org/En/Pork_Tools).
>
Nice summary. Here's a link I posted in 2008 with some observations on ABI
and API.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kwgoodman at gmail.com Wed Feb 10 22:41:50 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Wed, 10 Feb 2010 19:41:50 -0800
Subject: [Numpy-discussion] Determine if two arrays share references
Message-ID: 

Here are two arrays that share references:

>> x = np.array([1,2,3])
>> y = x[1:]

and here are two that don't:

>> x = np.array([1,2,3])
>> y = x[1:].copy()

If I didn't know how the arrays were constructed, how would I
determine if any elements in the two arrays share reference? (I'm not
sure of my terminology. Please correct me if I'm wrong.)

From robert.kern at gmail.com Wed Feb 10 23:01:57 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 10 Feb 2010 22:01:57 -0600
Subject: [Numpy-discussion] Determine if two arrays share references
In-Reply-To: 
References: 
Message-ID: <3d375d731002102001u48b2fc36l2a6294cc70e3ebf@mail.gmail.com>

On Wed, Feb 10, 2010 at 21:41, Keith Goodman wrote:
> Here are two arrays that share references:
>
>>> x = np.array([1,2,3])
>>> y = x[1:]
>
> and here are two that don't:
>
>>> x = np.array([1,2,3])
>>> y = x[1:].copy()
>
> If I didn't know how the arrays were constructed, how would I
> determine if any elements in the two arrays share reference?

It is hard to do this 100% accurately given the full variety of
strided memory, but:

In [2]: np.may_share_memory?
Type:           function
Base Class:     
String Form:    
Namespace:      Interactive
File:           /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.3.0-py2.5-macosx-10.3-i386.egg/numpy/lib/utils.py
Definition:     np.may_share_memory(a, b)
Docstring:
    Determine if two arrays can share memory

    The memory-bounds of a and b are computed.  If they overlap then
    this function returns True.  Otherwise, it returns False.

    A return of True does not necessarily mean that the two arrays
    share any element.  It just means that they *might*.


One example where this function returns True when a 100% accurate
function would return False is x[::2] and x[1::2].

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From kwgoodman at gmail.com Wed Feb 10 23:16:17 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Wed, 10 Feb 2010 20:16:17 -0800
Subject: [Numpy-discussion] Determine if two arrays share references
In-Reply-To: <3d375d731002102001u48b2fc36l2a6294cc70e3ebf@mail.gmail.com>
References: 
	<3d375d731002102001u48b2fc36l2a6294cc70e3ebf@mail.gmail.com>
Message-ID: 

On Wed, Feb 10, 2010 at 8:01 PM, Robert Kern wrote:
> On Wed, Feb 10, 2010 at 21:41, Keith Goodman wrote:
>> Here are two arrays that share references:
>>
>>>> x = np.array([1,2,3])
>>>> y = x[1:]
>>
>> and here are two that don't:
>>
>>>> x = np.array([1,2,3])
>>>> y = x[1:].copy()
>>
>> If I didn't know how the arrays were constructed, how would I
>> determine if any elements in the two arrays share reference?
>
> It is hard to do this 100% accurately given the full variety of
> strided memory, but:
>
> In [2]: np.may_share_memory?
> Type:           function
> Base Class:     
> String Form:    
> Namespace:      Interactive
> File:
> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.3.0-py2.5-macosx-10.3-i386.egg/numpy/lib/utils.py
> Definition:     np.may_share_memory(a, b)
> Docstring:
>     Determine if two arrays can share memory
>
>     The memory-bounds of a and b are computed.  If they overlap then
>     this function returns True.  Otherwise, it returns False.
>
>     A return of True does not necessarily mean that the two arrays
>     share any element.  It just means that they *might*.
>
>
> One example where this function returns True when a 100% accurate
> function would return False is x[::2] and x[1::2].

No looping or anything. That is great. Thank you.

>> x = np.array([1,2,3])
>> y = x[1:]
>> np.may_share_memory(x, y)
True
>> np.may_share_memory(x, y.copy())
False

From stefan at sun.ac.za Thu Feb 11 01:29:01 2010
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 11 Feb 2010 08:29:01 +0200
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: <9457e7c81002102229l1ac7ca70o570ac81da4e78400@mail.gmail.com>

Hi Friedrich

On 10 February 2010 20:57, Friedrich Romstedt wrote:
> I wonder why there is no response on my e-mail dating back to Feb 4.
> Is there nobody interested in it, is somebody working on it, or did it
> simply did not come through?  I changed the recipient now to
> "Discussion of Numerical Python", hth ...

Could you please put your "undarray" as well as the ufunc-wrapper
online (preferably in a repository) so that we can have a look?  I
imagine there is something convoluted going on in the broadcasting
machinery, but it's hard to say without having a working script.

Regards
Stéfan

From stefan at sun.ac.za Thu Feb 11 02:03:07 2010
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 11 Feb 2010 09:03:07 +0200
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
Message-ID: <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>

On 11 February 2010 03:22, David Cournapeau wrote:
> I think one issue with versions is that they convey multiple things at
> the same time. The number itself conveys an idea of "progress" and
> "features" - the bigger the change in the number, the bigger changes
> are expected by users. This is the part where everyone has an opinion.
>
> Then, there is also the idea that for a library, versions conveys ABI
> and API compatibility, and this should be purely technical IMO. There
> are well established rules here:

You hit the nail on the head; this conflict arose because we did not
have a version policy in place earlier.  An expectation was generated
that NumPy 2.0 would coincide with a thorough review of the API (an
exercise I hope we complete in the next year or two).

We have a simple decision to make: do we renege on the promise of an
API review for 2.0, or do we neglect the new versioning system?

Since the new versioning system has not been in use all that long (the
reason for this minor upset), we don't stand much to lose by naming
this next ABI-breaking release 1.5.

Regards
Stéfan

From charlesr.harris at gmail.com Thu Feb 11 02:52:34 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 11 Feb 2010 00:52:34 -0700
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
Message-ID: 

2010/2/11 Stéfan van der Walt 

> On 11 February 2010 03:22, David Cournapeau wrote:
> > I think one issue with versions is that they convey multiple things at
> > the same time. The number itself conveys an idea of "progress" and
> > "features" - the bigger the change in the number, the bigger changes
> > are expected by users. This is the part where everyone has an opinion.
> >
> > Then, there is also the idea that for a library, versions conveys ABI
> > and API compatibility, and this should be purely technical IMO. There
> > are well established rules here:
>
> You hit the nail on the head; this conflict arose because we did not
> have a version policy in place earlier.  An expectation was generated
> that NumPy 2.0 would coincide with a thorough review of the API (an
> exercise I hope we complete in the next year or two).
>
> We have a simple decision to make: do we renege on the promise of an
> API review for 2.0, or do we neglect the new versioning system?
>
> Since the new versioning system has not been in use all that long (the
> reason for this minor upset),

A policy has effect from the time of promulgation. It's not like you have
to wait a couple of years while it seasons.

> we don't stand much to lose by naming
> this next ABI-breaking release 1.5.
>
Except the accepted policy will be discarded and we will have to start all
over again. We can't change policy on a whim and still maintain that we
*have* a policy. We won't have one. But we can have long discussions...

"...this should be purely technical IMO. There are well established rules
here:"

Simple, eh.
The version should be 2.0.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Thu Feb 11 03:00:42 2010
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 11 Feb 2010 10:00:42 +0200
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
Message-ID: <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com>

On 11 February 2010 09:52, Charles R Harris wrote:
>> we don't stand much to lose by naming
>> this next ABI-breaking release 1.5.
>
> Except the accepted policy will be discarded and we will have to start all
> over again. We can't change policy on a whim and still maintain that we
> *have* a policy. We won't have one. But we can have long discussions...

Although one has been proposed, it has not been strictly implemented;
otherwise this problem wouldn't exist.

> "...this should be purely technical IMO. There are well established rules
> here:"
>
> Simple, eh.  The version should be 2.0.

I'm going with the element of least surprise: no one will be surprised
when 1.5 is released with ABI changes (it's been done that way for a
long time), but they will be surprised to see that 2.0 is not the Big
Thing it was promised to be.

Nothing we can't fix with some good PR, though :)

Cheers
Stéfan

From cournape at gmail.com Thu Feb 11 03:05:52 2010
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 11 Feb 2010 17:05:52 +0900
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
Message-ID: <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com>

On Thu, Feb 11, 2010 at 4:52 PM, Charles R Harris wrote:
>
> "...this should be purely technical IMO. There are well established rules
> here:"
>
> Simple, eh. The version should be 2.0.

It would be simple if it were not for the obligation of getting it
soon, in a matter of weeks. This means fixing any fundamental issue
(e.g. to get a more maintainable ABI) is totally out of reach, and
that we will have to maintain several branches at the same time, which
I think everybody agrees we lack the manpower for.

David

From matthew.brett at gmail.com Thu Feb 11 03:23:27 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 11 Feb 2010 00:23:27 -0800
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
Message-ID: <1e2af89e1002110023y5b81ddf7m3e0b8c334e1c495c@mail.gmail.com>

Hi,

> Just a comment: I would like to point out that there is (necessarily)
> some arbitrary threshold to who is being recognized as "people who are
> actively writing the code".  Over the last year, I have posted fixes
> for multiple bugs and extended the ufunc wrapping mechanisms
> (__array_prepare__) which were included in numpy-1.4.0, and have also
> been developing the quantities package, which is intimately tied up
> with numpy's development.  I don't think that makes me a major
> contributor like you or Chuck etc., but I am heavily invested in
> numpy's development and an active contributor.
Yes - I think that's a valid point - that there is a spectrum in our
contributions to numpy, and it is not possible to divide us very
clearly into those whose opinions count and those whose don't.

There's code contribution, but there is also commitment and
investment.  These should also have their weight, in a healthy
community.

Best,

Matthew

From dsdale24 at gmail.com Thu Feb 11 08:38:19 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Thu, 11 Feb 2010 08:38:19 -0500
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
	<9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com>
Message-ID: 

2010/2/11 Stéfan van der Walt :
> On 11 February 2010 09:52, Charles R Harris wrote:
>> Simple, eh. The version should be 2.0.
>
> I'm going with the element of least surprise: no one will be surprised
> when 1.5 is released with ABI changes

I'll buy you a doughnut if that turns out to be correct.

Darren

From friedrichromstedt at gmail.com Thu Feb 11 10:27:47 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 11 Feb 2010 16:27:47 +0100
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: 

Keith Goodman:
> No one answered my post either :(
>
> http://old.nabble.com/arrays-and-__eq__-td26987903.html#a26987903
>
> Is it the same issue?

First, before I post the package on github, I dived into Keith's
problem, and here comes the explanation for the weird behaviour:

I used the code:

class myclass(object):
        def __init__(self, arr):
                self.arr = arr
        def __eq__(self, other):
                print "Instance", id(self), "testing with", other, "..."
print "type =", type(other) if numpy.isscalar(other) or isinstance(other, numpy.ndarray): x = myclass(self.arr.copy()) x.arr = x.arr == other else: raise TypeError() return x Then, the session is: >>> m = myclass([1, 2, 3]) >>> a = numpy.asarray([9, 2, 3]) >>> (m == a).arr Instance 12345 testing with [9, 2, 3] ... type = array([False, True, True], dtype = bool) So far, everything ok. And now, to something completely different ... >>> a == m Instance 12345 testing with 9 ... type = Instance 12345 testing with 2 ... type = Instance 12345 testing with 3 ... type = So, the "a" applies the "m" to each of its elements. This is the Numpy behaviour, because "m" seems scalar to Numpy. The result is a myclass instance for each element. But it doesn't matter what this myclass result instance holds, because Numpy uses Python's truth interpretation to yield the elements of the call to (a == m). This yields True for each element, because only False, None, 0, and 0.0 are False in Python. As there is no truth testing operator in "class myclass", this is the only option for Numpy. So no bug, only features ... And it is not the same as my problem. In my case, one operand is changed before being handed over to my replacement for numpy.not_equal, set by numpy.set_numerical_ops(). The problems seem to be distinct. Remark: Note that the second Python answer on http://old.nabble.com/arrays-and-__eq__-td26987903.html#a26987903 isn't a myclass instance, but a plain array. I noticed this in the very end ... 
Friedrich

From friedrichromstedt at gmail.com Thu Feb 11 11:35:27 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 11 Feb 2010 17:35:27 +0100
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: <9457e7c81002102229l1ac7ca70o570ac81da4e78400@mail.gmail.com>
References: 
	<9457e7c81002102229l1ac7ca70o570ac81da4e78400@mail.gmail.com>
Message-ID: 

2010/2/11 Stéfan van der Walt :
> Could you please put your "undarray" as well as the ufunc-wrapper
> online (preferably in a repository) so that we can have a look?

Done, github.com/friedrichromsted/upy .  Have fun with it :-) !  And
thanks a lot in advance for your help.

You will easily locate the code phrases I posted in core.py.

Friedrich

From stefan at sun.ac.za Thu Feb 11 11:43:18 2010
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 11 Feb 2010 18:43:18 +0200
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
	<9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com>
Message-ID: <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com>

On 11 February 2010 15:38, Darren Dale wrote:
> 2010/2/11 Stéfan van der Walt :
>> On 11 February 2010 09:52, Charles R Harris wrote:
>>> Simple, eh.  The version should be 2.0.
>>
>> I'm going with the element of least surprise: no one will be surprised
>> when 1.5 is released with ABI changes
>
> I'll buy you a doughnut if that turns out to be correct.

Now I wish I said "few people" instead :)

As I read the discussion, I realised that not many people (including
developers) were aware of the versioning policy.  Since we did not
follow the policy in the past, there is no precedent (hence, little
surprise).
If we make enough noise (release notes, notification on sourceforge,
post on list, message in installer, etc.) upon releasing "1.5", that
should be ample warning, and it may also be a good trial run for numpy
2.0.

Another suggestion could be to go the Mayavi2 route, with numpy2 being
the completely redesigned library.  Whether that is sane, I don't
know.

Either way, I am quite happy to follow the lead of the release manager
for 1.3.9/1.4.1/2.0.

Regards
Stéfan

From kwgoodman at gmail.com Thu Feb 11 11:56:48 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Thu, 11 Feb 2010 08:56:48 -0800
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: 

On Thu, Feb 11, 2010 at 7:27 AM, Friedrich Romstedt wrote:
> Keith Goodman:
>> No one answered my post either :(
>>
>> http://old.nabble.com/arrays-and-__eq__-td26987903.html#a26987903
>>
>> Is it the same issue?
>
> First, before I post the package on github, I dived into Keith's
> problem, and here comes the explanation for the weird behaviour:
>
> I used the code:
>
> class myclass(object):
>        def __init__(self, arr):
>                self.arr = arr
>        def __eq__(self, other):
>                print "Instance", id(self), "testing with", other, "..."
>                print "type =", type(other)
>                if numpy.isscalar(other) or isinstance(other, numpy.ndarray):
>                        x = myclass(self.arr.copy())
>                        x.arr = x.arr == other
>                else:
>                        raise TypeError()
>                return x
>
> Then, the session is:
>
>>>> m = myclass([1, 2, 3])
>>>> a = numpy.asarray([9, 2, 3])
>>>> (m == a).arr
> Instance 12345 testing with [9, 2, 3] ...
> type = 
> array([False, True, True], dtype = bool)
>
> So far, everything ok. And now, to something completely different ...
>
>>>> a == m
> Instance 12345 testing with 9 ...
> type = 
> Instance 12345 testing with 2 ...
> type = 
> Instance 12345 testing with 3 ...
> type = 
>
> So, the "a" applies the "m" to each of its elements.  This is the
> Numpy behaviour, because "m" seems scalar to Numpy.  The result is a
> myclass instance for each element.  But it doesn't matter what this
> myclass result instance holds, because Numpy uses Python's truth
> interpretation to yield the elements of the call to (a == m).  This
> yields True for each element, because only False, None, 0, and 0.0 are
> False in Python.  As there is no truth testing operator in "class
> myclass", this is the only option for Numpy.  So no bug, only features
> ...

I tried adding truth testing but they are never called:

    def truth(self, other):
        print 'TRUTH'
    def is_(self, other):
        print 'IS_'

Is there some way to tell numpy to use my __eq__ instead of its own?
That would solve my problem. I had a similar problem with __radd__
which was solved by setting __array_priority__ = 10. But that doesn't
work in this case.

I wish I knew enough to reply to your post. Then I could return the
favor. You'll have to settle for a thank you. Thank you.

From charlesr.harris at gmail.com Thu Feb 11 12:04:19 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 11 Feb 2010 10:04:19 -0700
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com>
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com>
	<5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com>
	<9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com>
	<9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com>
	<9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com>
Message-ID: 

2010/2/11 Stéfan van der Walt 

> On 11 February 2010 15:38, Darren Dale wrote:
> > 2010/2/11 Stéfan van der Walt :
> >> On 11 February 2010 09:52, Charles R Harris wrote:
> >>> Simple, eh.
The version should be 2.0.
> >>
> >> I'm going with the element of least surprise: no one will be surprised
> >> when 1.5 is released with ABI changes
> >
> > I'll buy you a doughnut if that turns out to be correct.
>
> Now I wish I said "few people" instead :)
>
> As I read the discussion, I realised that not many people (including
> developers) were aware of the versioning policy.  Since we did not
> follow the policy in the past, there is no precedent (hence, little
> surprise).
>
How do precedents get established?

> If we make enough noise (release notes, notification on sourceforge,
> post on list, message in installer, etc.) upon releasing "1.5", that
> should be ample warning, and it may also be a good trial run for numpy
> 2.0.
>
The major version number is unrelated to features, it is an ABI marker, not
a feature marker. If one so much as breathes on the ABI, the major version
number needs to change.

> Another suggestion could be to go the Mayavi2 route, with numpy2 being
> the completely redesigned library.  Whether that is sane, I don't
> know.
>
> Either way, I am quite happy to follow the lead of the release manager
> for 1.3.9/1.4.1/2.0.
>
Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From friedrichromstedt at gmail.com Thu Feb 11 15:21:23 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Thu, 11 Feb 2010 21:21:23 +0100
Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal
In-Reply-To: 
References: 
Message-ID: 

Hi Keith,

2010/2/11 Keith Goodman :
> Is there some way to tell numpy to use my __eq__ instead of its own?
> That would solve my problem. I had a similar problem with __radd__
> which was solved by setting __array_priority__ = 10. But that doesn't
> work in this case.

It's quite simple, but hidden in the forest of documentation (though
it mentions it, and quite in detail).
Use: numpy.set_numeric_ops(equal = my_equal_callable_object) Note that you should _not_ simply use a function: def equal(a, b): 2010/2/11 Keith Goodman : > I wish I knew enough to reply to your post. Then I could return the > favor. You'll have to settle for a thank you. Thank you. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From friedrichromstedt at gmail.com Thu Feb 11 15:29:49 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 21:29:49 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: Oh Sorry, I typed some keys, don't know what I did precisely, but the message was sent prematurely. Now I repeat: Hi Keith, 2010/2/11 Keith Goodman : > Is there some way to tell numpy to use my __eq__ instead of its own? > That would solve my problem. I had a similar problem with __radd__ > which was solved by setting __array_priority__ = 10. But that doesn't > work in this case. It's quite simple, but hidden in the forest of documentation (though it mentions it, and quite in detail). Use: numpy.set_numeric_ops(equal = my_equal_callable_object) Note that you should _not_ simply use a function: def equal(a, b): if isinstance(b, myclass): return a == b.arr return numpy.equal(a, b) This will, in many cases, work, though, but not in _all_. This is because Numpy sometimes calls some .reduce attribute and such things, I'm also not completely informed what they do, but it seems to be safe to simply hand them over to numpy's original functions (i.e., callable objects). I coded a callable object (an instance of a class with __call__()), and simply download upy from github.com/friedrichromstedt/upy, and please find the overload at the very end of core.py. hth. >> I wish I knew enough to reply to your post. Then I could return the >> favor. 
You'll have to settle for a thank you. Thank you. Ah, that's great, thank you very much. I love to help. Friedrich P.S.: I had the same problem as you, just some days ago, don't know precisely how I got alert of set_numeric_ops, but it's very powerful. Use it with care! You can crash your whole numpy with it. From kwgoodman at gmail.com Thu Feb 11 15:31:08 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 11 Feb 2010 12:31:08 -0800 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: On Thu, Feb 11, 2010 at 12:21 PM, Friedrich Romstedt wrote: > Hi Keith, > > 2010/2/11 Keith Goodman : >> Is there some way to tell numpy to use my __eq__ instead of its own? >> That would solve my problem. I had a similar problem with __radd__ >> which was solved by setting __array_priority__ = 10. But that doesn't >> work in this case. > > It's quite simple, but hidden in the forest of documentation (though > it mentiones it, and quite in detail). > > Use: > > numpy.set_numeric_ops(equal = my_equal_callable_object) > > Note that you should _not_ simply use a function: > > def equal(a, b): Hey! You broke my numpy :) >> def addbug(x, y): ...: return x - y ...: >> old_funcs = np.set_numeric_ops(add=addbug) >> np.array([1]) + np.array([1]) array([0]) That is brilliant! Thanks again. From friedrichromstedt at gmail.com Thu Feb 11 15:32:53 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 21:32:53 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: > Hey! You broke my numpy ?:) > >>> def addbug(x, y): > ? ...: ? ? return x - y > ? ...: >>> old_funcs = np.set_numeric_ops(add=addbug) >>> np.array([1]) + np.array([1]) > ? array([0]) Yea, that's what I meant. Great. 
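The equal-override pattern Friedrich describes can be sketched as a small runnable example. The names `Wrapped` and `MyEqual` are invented here for illustration, and since `set_numeric_ops` was deprecated and eventually removed from later NumPy releases, the sketch calls the wrapper directly and only installs it globally when the 1.x-era hook is still available:

```python
import numpy as np

class Wrapped:
    """Hypothetical stand-in for the 'myclass' of the thread."""
    def __init__(self, arr):
        self.arr = np.asarray(arr)

class MyEqual:
    """Callable object (not a bare function): special-case Wrapped,
    defer everything else to numpy's original equal."""
    def __init__(self, original):
        self.original = original
    def __call__(self, a, b, *args, **kwargs):
        if isinstance(b, Wrapped):
            return self.original(a, b.arr, *args, **kwargs)
        return self.original(a, b, *args, **kwargs)

my_equal = MyEqual(np.equal)
print(my_equal(np.array([1, 2, 3]), Wrapped([1, 0, 3])))  # [ True False  True]

# Install globally only where the old hook still exists,
# and always restore the saved ops afterwards.
set_ops = getattr(np, "set_numeric_ops", None)
if set_ops is not None:
    old_ops = set_ops(equal=my_equal)
    try:
        print(np.array([1, 2, 3]) == Wrapped([1, 0, 3]))
    finally:
        set_ops(**old_ops)
```

Restoring from the dict returned by `set_numeric_ops` is what keeps the override from leaking into unrelated code.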
:-) :-) Friedrich From kwgoodman at gmail.com Thu Feb 11 15:40:58 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 11 Feb 2010 12:40:58 -0800 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: On Thu, Feb 11, 2010 at 12:32 PM, Friedrich Romstedt wrote: >> Hey! You broke my numpy ?:) >> >>>> def addbug(x, y): >> ? ...: ? ? return x - y >> ? ...: >>>> old_funcs = np.set_numeric_ops(add=addbug) >>>> np.array([1]) + np.array([1]) >> ? array([0]) > Yea, that's what I meant. ?Great. > > :-) :-) Who needs to type np.dot when you can do: >> def dotmult(x, y): ....: return np.dot(x, y) ....: >> old_funcs = np.set_numeric_ops(multiply=dotmult) >> >> np.array([1, 2, 3]) * np.array([1, 2, 3]) 14 I can see many bugs coming my way... From josef.pktd at gmail.com Thu Feb 11 15:43:35 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 11 Feb 2010 15:43:35 -0500 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: <1cd32cbb1002111243h1464907avd6e7cb728c562fa6@mail.gmail.com> On Thu, Feb 11, 2010 at 3:40 PM, Keith Goodman wrote: > On Thu, Feb 11, 2010 at 12:32 PM, Friedrich Romstedt > wrote: >>> Hey! You broke my numpy ?:) >>> >>>>> def addbug(x, y): >>> ? ...: ? ? return x - y >>> ? ...: >>>>> old_funcs = np.set_numeric_ops(add=addbug) >>>>> np.array([1]) + np.array([1]) >>> ? array([0]) >> Yea, that's what I meant. ?Great. >> >> :-) :-) > > Who needs to type np.dot when you can do: > >>> def dotmult(x, y): > ? ....: ? ? return np.dot(x, y) > ? ....: >>> old_funcs = np.set_numeric_ops(multiply=dotmult) >>> >>> np.array([1, 2, 3]) * np.array([1, 2, 3]) > ? 14 > > I can see many bugs coming my way... 
especially if this is global monkey patching, there might be some surprised users Josef (Ruby, here we come) > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Thu Feb 11 15:47:33 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 11 Feb 2010 15:47:33 -0500 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <1cd32cbb1002111243h1464907avd6e7cb728c562fa6@mail.gmail.com> References: <1cd32cbb1002111243h1464907avd6e7cb728c562fa6@mail.gmail.com> Message-ID: <1cd32cbb1002111247o2e52db5dqb83aa069149b4197@mail.gmail.com> On Thu, Feb 11, 2010 at 3:43 PM, wrote: > On Thu, Feb 11, 2010 at 3:40 PM, Keith Goodman wrote: >> On Thu, Feb 11, 2010 at 12:32 PM, Friedrich Romstedt >> wrote: >>>> Hey! You broke my numpy ?:) >>>> >>>>>> def addbug(x, y): >>>> ? ...: ? ? return x - y >>>> ? ...: >>>>>> old_funcs = np.set_numeric_ops(add=addbug) >>>>>> np.array([1]) + np.array([1]) >>>> ? array([0]) >>> Yea, that's what I meant. ?Great. >>> >>> :-) :-) >> >> Who needs to type np.dot when you can do: >> >>>> def dotmult(x, y): >> ? ....: ? ? return np.dot(x, y) >> ? ....: >>>> old_funcs = np.set_numeric_ops(multiply=dotmult) >>>> >>>> np.array([1, 2, 3]) * np.array([1, 2, 3]) >> ? 14 >> >> I can see many bugs coming my way... > > especially if this is global monkey patching, there might be some > surprised users If this is global it won't work, because only the last package that changes it wins. ?? 
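Josef's "last one wins" worry can be seen with a toy stand-in for the single process-wide ops table (all names here are invented for illustration): a package that installs its hook blindly silently discards whatever was there before, while a package that wraps the previous entry keeps the chain intact.

```python
# Toy model of a single global ops table like numpy's.
OPS = {"add": lambda a, b: a + b}

def install_blind(name, func):
    """What a careless package does: overwrite the global slot."""
    OPS[name] = func

def install_chained(name, wrap):
    """Cooperative install: wrap whatever is currently installed."""
    OPS[name] = wrap(OPS[name])

# Two packages install blindly: only the last one survives.
install_blind("add", lambda a, b: ("pkg_a", a + b))
install_blind("add", lambda a, b: ("pkg_b", a + b))
print(OPS["add"](1, 2))  # ('pkg_b', 3)

# Reset, then install cooperatively: both layers stay in effect.
OPS["add"] = lambda a, b: a + b
install_chained("add", lambda prev: (lambda a, b: ("pkg_a", prev(a, b))))
install_chained("add", lambda prev: (lambda a, b: ("pkg_b", prev(a, b))))
print(OPS["add"](1, 2))  # ('pkg_b', ('pkg_a', 3))
```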
Josef > > Josef > (Ruby, here we come) > > >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > From robert.kern at gmail.com Thu Feb 11 15:47:31 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 14:47:31 -0600 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> On Thu, Feb 11, 2010 at 14:40, Keith Goodman wrote: > On Thu, Feb 11, 2010 at 12:32 PM, Friedrich Romstedt > wrote: >>> Hey! You broke my numpy ?:) >>> >>>>> def addbug(x, y): >>> ? ...: ? ? return x - y >>> ? ...: >>>>> old_funcs = np.set_numeric_ops(add=addbug) >>>>> np.array([1]) + np.array([1]) >>> ? array([0]) >> Yea, that's what I meant. ?Great. >> >> :-) :-) > > Who needs to type np.dot when you can do: > >>> def dotmult(x, y): > ? ....: ? ? return np.dot(x, y) > ? ....: >>> old_funcs = np.set_numeric_ops(multiply=dotmult) >>> >>> np.array([1, 2, 3]) * np.array([1, 2, 3]) > ? 14 > > I can see many bugs coming my way... Context managers can help: from __future__ import with_statement from contextlib import contextmanager import numpy as np @contextmanager def numpy_ops(**ops): old_ops = np.set_numeric_ops(**ops) try: yield finally: np.set_numeric_ops(**old_ops) with numpy_ops(multiply=...): print np.array([1, 2, 3]) * np.array([1, 2, 3]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From friedrichromstedt at gmail.com Thu Feb 11 16:03:37 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 22:03:37 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> Message-ID: Robert Kern: > def numpy_ops(**ops): > ? ?old_ops = np.set_numeric_ops(**ops) > ? ?try: > ? ? ? ?yield > ? ?finally: > ? ? ? ?np.set_numeric_ops(**old_ops) > > > with numpy_ops(multiply=...): > ? ?print np.array([1, 2, 3]) * np.array([1, 2, 3]) Well, at least for me in Py 2.5 this fails with: AttributeError: 'generator' object has no attribute '__exit__' Nevertheless it's a nice idea. But you should definitely make numpy_ops a proper class instance for the "with" statement, with __enter__() and __exit__(), am I wrong? I prefer to code my overload in such a way that it does not affect other people's arithemetics ... But coercion is a difficult thing. Friedrich From robert.kern at gmail.com Thu Feb 11 16:11:20 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 15:11:20 -0600 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> Message-ID: <3d375d731002111311y31177cefo88c01cac7f10e90c@mail.gmail.com> On Thu, Feb 11, 2010 at 15:03, Friedrich Romstedt wrote: > Robert Kern: >> def numpy_ops(**ops): >> ? ?old_ops = np.set_numeric_ops(**ops) >> ? ?try: >> ? ? ? ?yield >> ? ?finally: >> ? ? ? ?np.set_numeric_ops(**old_ops) >> >> >> with numpy_ops(multiply=...): >> ? ?print np.array([1, 2, 3]) * np.array([1, 2, 3]) > > Well, at least for me in Py 2.5 this fails with: > > AttributeError: 'generator' object has no attribute '__exit__' Did you omit the @contextmanager decorator? 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From friedrichromstedt at gmail.com Thu Feb 11 16:12:18 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 22:12:18 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <1cd32cbb1002111247o2e52db5dqb83aa069149b4197@mail.gmail.com> References: <1cd32cbb1002111243h1464907avd6e7cb728c562fa6@mail.gmail.com> <1cd32cbb1002111247o2e52db5dqb83aa069149b4197@mail.gmail.com> Message-ID: 2010/2/11 : > If this is global it won't work, because only the last package that > changes it wins. ?? Hm, at the current implementation of upy you're right, but I think you can do in the resp. module like: original_numpy_ops = numpy.set_numeric_ops() [ ... implementation of my_add_object, calling original_numpy_ops['add'] when it doesn't know what to do else ... ] new_numpy_ops = copy.copy(original_numpy_ops) new_numpy_ops['add'] = my_add_object numpy.set_numeric_ops(new_numpy_ops) When you code your my_add_object in such a way that it only act on the special case you want to hande, e.g.: class MyAddClass: __call__(self, a, b, *args, **kwargs): if isinstance(b, MyClass): [ ... do something special, preferring MyClass.__radd__ for instance ... ] else: return original_numpy_ops['add'](a, b, *args, **kwargs) [ ... some speciality left out ... ] my_add_object = MyAddClass() then even many packages may not interfere, given that they are in principle compatible. But as mentioned before, coercion will fail if the functionality isn't implemented which would coerce the objects ... 
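Friedrich's MyAddClass outline above can be filled in as one guess at a runnable version. The `def` keyword missing from his sketch is restored, `MyClass` is an invented example operand, and the global `set_numeric_ops` installation step is left out, since that call was deprecated and eventually removed from NumPy; the class just shows the dispatch-and-delegate pattern itself:

```python
import numpy as np

class MyClass:
    """Invented example operand with its own addition rule."""
    def __init__(self, arr):
        self.arr = np.asarray(arr)
    def __radd__(self, other):
        return MyClass(other + self.arr)

original_add = np.add  # stands in for original_numpy_ops['add']

class MyAddClass:
    def __call__(self, a, b, *args, **kwargs):
        if isinstance(b, MyClass):
            # prefer MyClass.__radd__, as in the sketch
            return b.__radd__(a)
        return original_add(a, b, *args, **kwargs)
    def __getattr__(self, name):
        # Numpy also pokes at attributes like .reduce on the op;
        # hand those straight through to the original ufunc.
        return getattr(original_add, name)

my_add_object = MyAddClass()
out = my_add_object(np.array([1, 2, 3]), MyClass([10, 20, 30]))
print(out.arr)                          # [11 22 33]
print(my_add_object.reduce([1, 2, 3]))  # delegated to np.add.reduce
```

The `__getattr__` forwarding is what covers the ".reduce attribute and such things" Friedrich mentions: anything the wrapper does not handle itself falls back to the original ufunc.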
Friedrich From friedrichromstedt at gmail.com Thu Feb 11 16:14:09 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 22:14:09 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <3d375d731002111311y31177cefo88c01cac7f10e90c@mail.gmail.com> References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> <3d375d731002111311y31177cefo88c01cac7f10e90c@mail.gmail.com> Message-ID: > Did you omit the @contextmanager decorator? Oh, yes! I guessed it would mean: In module contextmanager you write what follows after the colon? What does this decoration do? Friedrich From kwgoodman at gmail.com Thu Feb 11 16:16:39 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 11 Feb 2010 13:16:39 -0800 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> Message-ID: On Thu, Feb 11, 2010 at 12:47 PM, Robert Kern wrote: > On Thu, Feb 11, 2010 at 14:40, Keith Goodman wrote: >> On Thu, Feb 11, 2010 at 12:32 PM, Friedrich Romstedt >> wrote: >>>> Hey! You broke my numpy ?:) >>>> >>>>>> def addbug(x, y): >>>> ? ...: ? ? return x - y >>>> ? ...: >>>>>> old_funcs = np.set_numeric_ops(add=addbug) >>>>>> np.array([1]) + np.array([1]) >>>> ? array([0]) >>> Yea, that's what I meant. ?Great. >>> >>> :-) :-) >> >> Who needs to type np.dot when you can do: >> >>>> def dotmult(x, y): >> ? ....: ? ? return np.dot(x, y) >> ? ....: >>>> old_funcs = np.set_numeric_ops(multiply=dotmult) >>>> >>>> np.array([1, 2, 3]) * np.array([1, 2, 3]) >> ? 14 >> >> I can see many bugs coming my way... > > Context managers can help: > > > from __future__ import with_statement > > from contextlib import contextmanager > > import numpy as np > > > @contextmanager > def numpy_ops(**ops): > ? ?old_ops = np.set_numeric_ops(**ops) > ? ?try: > ? ? ? ?yield > ? 
?finally: > ? ? ? ?np.set_numeric_ops(**old_ops) > > > with numpy_ops(multiply=...): > ? ?print np.array([1, 2, 3]) * np.array([1, 2, 3]) Yes, much cleaner. The problem I have now is that I don't know where to place the line of code that changes the meaning of numpy's equal. I don't know when someone will do numpy array == myarray The execution of that line goes to Numpy, not my class. And I don't want to overload Numpy's comparison operators in my entire module. From robert.kern at gmail.com Thu Feb 11 16:17:23 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 15:17:23 -0600 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> <3d375d731002111311y31177cefo88c01cac7f10e90c@mail.gmail.com> Message-ID: <3d375d731002111317l133f51dfh61de06a1e8c5a063@mail.gmail.com> On Thu, Feb 11, 2010 at 15:14, Friedrich Romstedt wrote: >> Did you omit the @contextmanager decorator? > > Oh, yes! I guessed it would mean: In module contextmanager you write > what follows after the colon? What does this decoration do? It turns certain specifically-written generators into full context managers. http://docs.python.org/library/contextlib#contextlib.contextmanager -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From friedrichromstedt at gmail.com Thu Feb 11 16:25:51 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 22:25:51 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: <3d375d731002111317l133f51dfh61de06a1e8c5a063@mail.gmail.com> References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> <3d375d731002111311y31177cefo88c01cac7f10e90c@mail.gmail.com> <3d375d731002111317l133f51dfh61de06a1e8c5a063@mail.gmail.com> Message-ID: 2010/2/11 Robert Kern : > It turns certain specifically-written generators into full context managers. > > http://docs.python.org/library/contextlib#contextlib.contextmanager Ok, thanks! I didn't know about it before. (To the anonymous reader seeking information like me: http://www.python.org/dev/peps/pep-0318/) Friedrich From friedrichromstedt at gmail.com Thu Feb 11 16:32:55 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 11 Feb 2010 22:32:55 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> Message-ID: 2010/2/11 Keith Goodman : > The problem I have now is that I don't know where to place the line of > code that changes the meaning of numpy's equal. I don't know when > someone will do Well, I think a solution is as written before to put a test whether the other operand is in fact a myclass instance. If not, proceed with numpy's original ops. Then it should be safe to place the overload in the module's code directly, although it makes numpy a bit slower overall. Or you offer Robert's solution for the with statement, but I think if someone imports your module he should accept that numpy's ops get overloaded permanently.
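To see concretely what the @contextmanager decoration does, here is a self-contained sketch with an invented settings dict in place of numpy's ops table: the generator code before `yield` runs on entry to the `with` block, and the `finally` clause runs on exit, even if the block raises, which is what makes Robert's save-and-restore pattern safe.

```python
from contextlib import contextmanager

SETTINGS = {"multiply": "elementwise"}  # invented stand-in for the ops table

@contextmanager
def swapped(**overrides):
    old = {k: SETTINGS[k] for k in overrides}  # runs on entering `with`
    SETTINGS.update(overrides)
    try:
        yield                                  # body of the `with` runs here
    finally:
        SETTINGS.update(old)                   # runs on exit, even on errors

with swapped(multiply="dot"):
    print(SETTINGS["multiply"])  # dot
print(SETTINGS["multiply"])      # elementwise, restored

try:
    with swapped(multiply="dot"):
        raise RuntimeError("boom")
except RuntimeError:
    pass
print(SETTINGS["multiply"])      # elementwise, restored despite the error
```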
Friedrich From dagss at student.matnat.uio.no Thu Feb 11 17:03:08 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 11 Feb 2010 23:03:08 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: <3d375d731002111247w7c562753yf7ebb74b15d2476c@mail.gmail.com> Message-ID: <4B747E9C.4070306@student.matnat.uio.no> Friedrich Romstedt wrote: > 2010/2/11 Keith Goodman : >> The problem I have now is that I don't know where to place the line of >> code that changes the meaning of numpy's equal. I don't know when >> someone will do > > Well, I think a solution is as written before to put a test whether > the other operand is in fact a myclass instance. If not, proceed with > numpy's original ops. Then it should be safe to place the overload in > the module's code directly, altough it makes numpy a bit slower at > all. > > Or you offer Robert's solution for the with statement, but I think if > someone imports your module he should accept that numpy's ops get > overloaded permanently. One danger I think wasn't mentioned with the with statement (?) is: with changed_behaviour: ... f(arr1, arr2) ... Now, f is quite likely to not know about the changed behaviour. So the with statement is only useful in a few restricted situations -- the kind of situations where a special function or method for the purpose might be just as easy to type. -- Dag Sverre From amenity at enthought.com Thu Feb 11 17:35:31 2010 From: amenity at enthought.com (Amenity Applewhite) Date: Thu, 11 Feb 2010 16:35:31 -0600 Subject: [Numpy-discussion] Save the date: SciPy 2010 June 28 - July 3 Message-ID: The annual US Scientific Computing with Python Conference, SciPy, has been held at Caltech since it began in 2001. While we always love an excuse to go to California, it?s also important to make sure that we allow everyone an opportunity to attend the conference. 
So, as Jarrod Millman announced last fall, we'll begin rotating the conference location and hold the 2010 conference in Austin, Texas. As you may know, Enthought is headquartered in Austin. So in addition to our standard SciPy sponsorship, this year we'll also be undertaking a great deal of the planning and organization. To begin with, we're thrilled to announce that we've secured several corporate sponsorships that will allow us to host the conference at the brand new AT&T Executive Education and Conference Center on campus at the University of Texas. It's a wonderful facility in Central Austin and provides easy access to an array of great restaurants, parks, and music venues. We will also be able to provide stipends for our Tutorial presenters for the first time. These community members provide us with an invaluable learning experience every year, and we're very excited to be able to compensate them for their efforts. And of course, we'll also continue to offer student sponsorships to support active academic contributors who want to attend the conference. So mark your calendars, folks! June 28 - July 3. Early registration open now. Thanks, The Enthought Team -- Amenity Applewhite Enthought, Inc. Scientific Computing Solutions www.enthought.com From oliphant at enthought.com Thu Feb 11 17:38:31 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 11 Feb 2010 16:38:31 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com> Message-ID: <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> On Feb 11, 2010, at 2:05 AM, David Cournapeau wrote: > On Thu, Feb 11, 2010 at 4:52 PM, Charles R Harris > wrote: > >> >> "...this should be purely technical IMO. There are well established >> rules >> here:" >> >> Simple, eh. The version should be 2.0. > > It would be simple if it were not for the obligation of getting it > soon, in a matter of weeks. This means fixing any fundamental issue > (e.g. to get a more maintainable ABI) is totally out of reach, and > that we will have to maintain several branches at the same time, which > I think everybody agree we lack the manpower for. Whatever we do, I don't see how we are going to realistically maintain two separate branches. I'm nervous about the implication of going to NumPy 2.0, but as Stephan mentions, it is just a matter of P.R. If we put out appropriate notices and follow up with a 2.1 release near SciPy, then NumPy 3.0 can happen when we get the energy to fix the ABI questions and we don't imply that there will be a continuation of the 1.X series (i.e. the 2.0 is to indicate the ABI breakage requiring re-compilation). The information I gathered (on this list and in private mails) indicates to me that it is still pretty split as to whether to number 1.5 or 2.0. I don't think the 1.5 side has been discussed much on this list except by me, and Stephan and David. I'm typically concerned about "majority rules" system where it's the "vocal majority" that rules the day and not the "silent majority." 
I don't want to go the route of marking things "experimental" which David's pro-1.5 vote seemed to advocate. From what I gathered, Pauli, David, and I were 1.5 with various degrees of opinion and Charles, and Robert are 2.0. Others that I know about: Stephan is 1.5, Jarrod is 2.0, Matthew and Darren seem to be for 2.0. Pauli, David, and Stephan, how opposed are you to numbering the next release as NumPy 2.0 with no experimental tag or the like. If you three could also agree. I could see my way through to supporting a NumPy 2.0 release. I would ask for the following: 1) I would like the release to come out in about 3-4 weeks 2) I would like the release to contain all the ABI changes we think we will need until NumPy 3.0 when something like David's ideas are implemented which would need to be no sooner than 1 year from now. 3) The following changes to the ABI (no promise that I might not ask for more before the release date): * change the ABI indicator * put the DATETIME dtypes back in their original place in the list * move the *cast functions to the end of the ArrFuncs structure * place 2-3 place-holders in that ArrFuncs structure * fix the hasobject data-type Any other simple ABI changes that should be made? Thanks, -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Feb 11 17:50:38 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 12 Feb 2010 07:50:38 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com> <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> Message-ID: <5b8d13221002111450n32a43f58iea6ba900d31d6224@mail.gmail.com> On Fri, Feb 12, 2010 at 7:38 AM, Travis Oliphant wrote: > ?I don't want to go the route of marking things "experimental" which David's > pro-1.5 vote seemed to advocate. In that case, I prefer the new release to be marked as 2.0. There will then be no new numpy 1.4.x, and scipy will be built against Numpy 2.0 (to avoid having multiple scipy binaries hanging around for different versions of NumPy). > 1) I would like the release to come out in about 3-4 weeks > 2) I would like the release to contain all the ABI changes we think we will > need until NumPy 3.0 when something like David's ideas are implemented which > would need to be no sooner than 1 year from now. > 3) The following changes to the ABI (no promise that I might not ask for > more before the release date): I don't think changing the ABI before a release causes any issue, so you can put whatever change you want to put there. cheers, David From cournape at gmail.com Thu Feb 11 17:57:46 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 12 Feb 2010 07:57:46 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> Message-ID: <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> On Fri, Feb 12, 2010 at 2:04 AM, Charles R Harris wrote: > > > 2010/2/11 St?fan van der Walt >> >> On 11 February 2010 15:38, Darren Dale wrote: >> > 2010/2/11 St?fan van der Walt : >> >> On 11 February 2010 09:52, Charles R Harris >> >> wrote: >> >>> Simple, eh. The version should be 2.0. >> >> >> >> I'm going with the element of least surprise: no one will be surprised >> >> when 1.5 is released with ABI changes >> > >> > I'll buy you a doughnut if that turns out to be correct. >> >> Now I wish I said "few people" instead :) >> >> As I read the discussion, I realised that not many people (including >> developers) were aware of the versioning policy. Since we did not >> follow the policy in the past, there is no precedent (hence, little >> surprise). >> > > How do precedents get established? > >> >> If we make enough noise (release notes, notification on sourceforge, >> post on list, message in installer, etc.) upon releasing "1.5", that >> should be ample warning, and it may also be a good trial run for numpy >> 2.0. >> > > The major version number is unrelated to features, it is an ABI marker, not > a feature marker. If one so much as breathes on the ABI, the major version > number needs to change. Actually, it is. The whole issue is caused by willing to change ABI without changing major feature, which is seldom done. ABI is generally only changed because you have no choice, not because it is more convenient. 
cheers, David From pgmdevlist at gmail.com Thu Feb 11 18:41:00 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 11 Feb 2010 18:41:00 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> Message-ID: <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> On Feb 11, 2010, at 5:57 PM, David Cournapeau wrote: > On Fri, Feb 12, 2010 at 2:04 AM, Charles R Harris > wrote: >> >> >> 2010/2/11 St?fan van der Walt >>> >>> On 11 February 2010 15:38, Darren Dale wrote: >>>> 2010/2/11 St?fan van der Walt : >>>>> On 11 February 2010 09:52, Charles R Harris >>>>> wrote: >>>>>> Simple, eh. The version should be 2.0. >>>>> >>>>> I'm going with the element of least surprise: no one will be surprised >>>>> when 1.5 is released with ABI changes >>>> >>>> I'll buy you a doughnut if that turns out to be correct. >>> >>> Now I wish I said "few people" instead :) >>> >>> As I read the discussion, I realised that not many people (including >>> developers) were aware of the versioning policy. Since we did not >>> follow the policy in the past, there is no precedent (hence, little >>> surprise). >>> >> >> How do precedents get established? >> >>> >>> If we make enough noise (release notes, notification on sourceforge, >>> post on list, message in installer, etc.) upon releasing "1.5", that >>> should be ample warning, and it may also be a good trial run for numpy >>> 2.0. 
>>> >> >> The major version number is unrelated to features, it is an ABI marker, not >> a feature marker. If one so much as breathes on the ABI, the major version >> number needs to change. > > Actually, it is. The whole issue is caused by willing to change ABI > without changing major feature, which is seldom done. ABI is generally > only changed because you have no choice, not because it is more > convenient. Jus to make sure I understand: * 2.0 will be w/ datetime support and corresponds to the current trunk * 1.5 will be w/o datetime support ? A few weeks back, I committed some changes to the trunk (some numpy.ma stuffs) that I haven't backported to what was 1.4. What should I do with them ? From matthew.brett at gmail.com Thu Feb 11 18:47:19 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 11 Feb 2010 15:47:19 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com> <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> Message-ID: <1e2af89e1002111547n459fe3fckbae2a2cf991363a0@mail.gmail.com> Hi, > ?I don't want to go the route of marking things "experimental" which David's > pro-1.5 vote seemed to advocate. ? From what I gathered, Pauli, David, and I > were 1.5 with various degrees of opinion and Charles, and Robert are 2.0. > ?Others that I know about: ?Stephan is 1.5, Jarrod is 2.0, Matthew and > Darren seem to be for 2.0. Yes - I'm still rather strongly for 2.0, on the basis that the downside (not as many new features as people might expect, a feeling that we might support a 1.x series) are considerably less damaging than unexpected ABI breakage. 
> I could see my way through to supporting a NumPy 2.0 release. I > would ask for the following: > 1) I would like the release to come out in about 3-4 weeks > 2) I would like the release to contain all the ABI changes we think we will > need until NumPy 3.0 when something like David's ideas are implemented which > would need to be no sooner than 1 year from now. > 3) The following changes to the ABI (no promise that I might not ask for > more before the release date): > * change the ABI indicator > * put the DATETIME dtypes back in their original place in the list > * move the *cast functions to the end of the ArrFuncs structure > * place 2-3 place-holders in that ArrFuncs structure > * fix the hasobject data-type > Any other simple ABI changes that should be made? That all seems good to me. See you, Matthew From pav at iki.fi Thu Feb 11 19:10:29 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Feb 2010 02:10:29 +0200 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <8741FA8F-1BB8-4301-B056-67EAA0F003FE@enthought.com> <4D618BCC-B63E-4C96-8277-74A3969A82C2@enthought.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <5b8d13221002110005lc66ae2cq47fa2ebc6681ef4a@mail.gmail.com> <258D54B3-EEA3-427E-8A07-C4F91BE2E316@enthought.com> Message-ID: <1265933429.7080.5.camel@idol> Thu, 2010-02-11 at 16:38 -0600, Travis Oliphant wrote: [clip] > Pauli, David, and Stephan, how opposed are you to numbering the next > release as NumPy 2.0 with no experimental tag or the like. If you > three could also agree. I could see my way through to supporting a > NumPy 2.0 release. I would ask for the following: Not very opposed -- if 2.0 seems better, then let's just pick it.
It's a nice color, too ;) > 1) I would like the release to come out in about 3-4 weeks > 2) I would like the release to contain all the ABI changes we think we > will need until NumPy 3.0 when something like David's ideas are > implemented which would need to be no sooner than 1 year from now. > 3) The following changes to the ABI (no promise that I might not ask > for more before the release date): > > * change the ABI indicator > * put the DATETIME dtypes back in their original place in the list > * move the *cast functions to the end of the ArrFuncs structure > * place 2-3 place-holders in that ArrFuncs structure > * fix the hasobject data-type Sounds like a good plan. > Any other simple ABI changes that should be made? None that I can think of. I'll pull out soon the extra fields added to the end of descr/ndarray structs for PEP 3118 currently in SVN, since they are actually not really needed for the implementation. Best, Pauli From millman at berkeley.edu Thu Feb 11 19:17:17 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 11 Feb 2010 16:17:17 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> Message-ID: On Thu, Feb 11, 2010 at 3:41 PM, Pierre GM wrote: > Just to make sure I understand: > * 2.0 will be w/ datetime support and corresponds to the current trunk > * 1.5 will be w/o datetime support? I may have misunderstood, but my understanding is that there will be no 1.5 release under the current proposal.
The next release will be 2.0 and will come out in 3-4 weeks' time. 2.0 will basically be 1.4.0 with at least the ABI changes Travis outlined. If 2.0 is coming out in 3-4 weeks' time we will need to be careful about how aggressive we are in terms of doing any more than 1.4 + ABI changes. Once the general plan is agreed upon, which seems to be the direction that things are headed, then we will need to decide whether we should just work on the trunk or use the 1.4 branch with possibly a few things backported from the branch. I am happy to simply back whatever strategy David Cournapeau thinks is best. Personally, I would love to see Pauli's work toward supporting Py3k make it into the NumPy 2.0 release and I believe that Pauli thinks that is reasonable to do in a 3-4 week timeframe. I don't think we should even try to provide binaries for Py3k during this release, though. I would also like to mark the numarray and numeric support as deprecated and planned for removal in NumPy 3.0. Just marking it deprecated shouldn't cause any problems and should give anyone left using the old interfaces plenty of time to migrate prior to a future 3.0 release. -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From charlesr.harris at gmail.com Thu Feb 11 19:23:26 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 17:23:26 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> Message-ID: On Thu, Feb 11, 2010 at 5:17 PM, Jarrod Millman wrote: > On Thu, Feb 11, 2010 at 3:41 PM, Pierre GM wrote: > > Jus to make sure I understand: > > * 2.0 will be w/ datetime support and corresponds to the current trunk > > * 1.5 will be w/o datetime support ? > > I may have misunderstood, but my understanding is that there will be > no 1.5 release under the current proposal. The next release will be > 2.0 and will come out in 3-4 weeks time. 2.0 will basically be 1.4.0 > with at least the ABI changes Travis outlined. If 2.0 is coming out > in 3-4 weeks time we will need to be careful about how aggressive we > are in terms of doing any more than 1.4 + ABI changes. > > Once the general plan is agreed upon, which seems to be the direction > that things are headed, then we will need to decide whether we should > just work on the trunk or use the 1.4 branch with possibly a few > things backported from the branch. I am happy to simply back whatever > strategy David Cournapeau thinks is best. > > I do think a 1.4.1 should be released without the datetime changes just so there would be an updated version out there for slow adopters. We wouldn't maintain it, though, it would be the end of the 1.x line. > Personally, I would love to see Pauli's work toward supporting Py3k > make it in to the NumPy 2.0 release and I believe that Pauli thinks > that is reasonable to do in a 3-4 week timeframe. I don't think we > should even try to provide binaries for Py3k during this release, > though. 
I would also like to mark the numarray and numeric support as > deprecated and planned for removal in NumPy 3.0. Just marking it > deprecated shouldn't cause any problems and should give anyone left > using the old interfaces plenty of time to migrate prior to a future > 3.0 release. > What about python version? Do we want to bump that up from 2.4? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Feb 11 19:25:58 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 18:25:58 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> Message-ID: <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> On Thu, Feb 11, 2010 at 18:23, Charles R Harris wrote: > What about python version? Do we want to bump that up from 2.4? Only if it were *really* necessary for the Python 3 port. Otherwise, I would resist the urge. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at enthought.com Thu Feb 11 19:29:31 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 11 Feb 2010 18:29:31 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> Message-ID: <56B87E6A-A174-450A-8509-F4FABFFD10D6@enthought.com> On Feb 11, 2010, at 6:25 PM, Robert Kern wrote: > On Thu, Feb 11, 2010 at 18:23, Charles R Harris > wrote: > >> What about python version? Do we want to bump that up from 2.4? > > Only if it were *really* necessary for the Python 3 port. Otherwise, I > would resist the urge. My understanding is NumPy 2.0 is on the trunk. If a 1.4.1 is released without the date-time changes, I will not argue or complain. On the trunk, I'm about to commit a change that updates the version number to 2.0 and changes the dtype pickle code so that it only updates the version number and enlarges the state tuple if there is actually metadata that needs to be included in the pickle. This will allow most pickles to be loaded on old NumPy releases. -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Feb 11 19:46:23 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 11 Feb 2010 16:46:23 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002101722u20086eb0jcd5b91c78fbfe006@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> Message-ID: <4B74A4DF.7000505@noaa.gov> One question: Does anyone think it's a good idea to provide any support for numpy version selection, similar to wxPython's wxversion? What it does is allow an installation to have a default version that gets imported with "import wx". Optionally, other versions can be installed, and selected by calling: import wxversion wxversion.select(version_number) before the first "import wx". This was added to wxPython when there was a lot of API breakage going on (I think during the 2.4 - 2.6 transition). It's nice, because you can have a set of installed utilities that rely on a given version, and then develop your new stuff with a newer version without breaking anything. In numpy's case, it might be messier, as there are a lot more packages that depend on numpy, but it could still be helpful, and in fact, maybe more necessary. Older versions of the MPL wx back-end were compiled against specific versions of wx, and wxversion was helpful for that. Anyway, it seems the big issue is that when an ABI-incompatible version of numpy gets released, you can't even install it until you re-compile all the packages you may have that are built against numpy. With version selection, you could install and mess with it without breaking any running code. It may be that virtualenv (and friends) is the "right" way to handle this now, however -- it wasn't around when wxversion was developed, and it may be a better way to keep a whole stack of packages compatible. Any point in thinking more about this?
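For concreteness, a wxversion-style selector for numpy could be sketched as below. Everything here is hypothetical: there is no "numpyversion" package, and the registry of installed builds and their paths is purely illustrative. The idea is simply to put the requested build's directory first on sys.path before numpy is first imported.

```python
import sys

# Hypothetical registry mapping numpy versions to where each build lives.
# These paths are illustrative only; a real shim would discover them.
_INSTALLED = {
    "1.4.1": "/opt/numpy/1.4.1/site-packages",
    "2.0.0": "/opt/numpy/2.0.0/site-packages",
}

def select(version):
    """Put the requested numpy build first on sys.path.

    Must be called before the first ``import numpy``, just as
    ``wxversion.select()`` must precede ``import wx``.
    """
    if "numpy" in sys.modules:
        raise RuntimeError("numpy is already imported; call select() first")
    try:
        path = _INSTALLED[version]
    except KeyError:
        raise ValueError("numpy %s is not installed" % version)
    sys.path.insert(0, path)
    return path
```

Usage would mirror wxversion: ``select("2.0.0")`` and only then ``import numpy``. The hard part, as noted above, is not this shim but keeping every extension module built against the selected ABI.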
-Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david at silveregg.co.jp Thu Feb 11 19:43:37 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 09:43:37 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> Message-ID: <4B74A439.60402@silveregg.co.jp> Robert Kern wrote: > On Thu, Feb 11, 2010 at 18:23, Charles R Harris > wrote: > >> What about python version? Do we want to bump that up from 2.4? > > Only if it were *really* necessary for the Python 3 port. Otherwise, I > would resist the urge. Me too, on the basis that 2.4 is the default version supported by "enterprise-grade" linux distributions (RHEL, CENTOS, etc...). cheers, David From robert.kern at gmail.com Thu Feb 11 19:46:52 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 18:46:52 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74A4DF.7000505@noaa.gov> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A4DF.7000505@noaa.gov> Message-ID: <3d375d731002111646i3f06d236jde9ba65a2e2e3e38@mail.gmail.com> On Thu, Feb 11, 2010 at 18:46, Christopher Barker wrote: > One question: > > Does anyone think it's a good idea to provide any support for numpy > version selection, similar to wxPython's wxversion? -1. It complicates packaging and distribution substantially. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at silveregg.co.jp Thu Feb 11 20:03:00 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 10:03:00 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> Message-ID: <4B74A8C4.70101@silveregg.co.jp> Charles R Harris wrote: > > I do think a 1.4.1 should be released without the datetime changes just > so there would be an updated version out there for slow adopters. We > wouldn't maintain it, though, it would be the end of the 1.x line. We could make a source release - we could do it from the current 1.4.x branch as it is, as the datetime has already been removed. 
As I want to get a binary installer ready for scipy as soon as 2.0 get released, I don't think it makes sense to waste time on getting a scipy binary built against 1.4.x now. cheers, David From oliphant at enthought.com Thu Feb 11 20:10:58 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 11 Feb 2010 19:10:58 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B74A8C4.70101@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> Message-ID: <44348799-BE44-454F-A26D-6D794C627427@enthought.com> On Feb 11, 2010, at 7:03 PM, David Cournapeau wrote: > Charles R Harris wrote: >> >> I do think a 1.4.1 should be released without the datetime changes >> just >> so there would be an updated version out there for slow adopters. We >> wouldn't maintain it, though, it would be the end of the 1.x line. > > We could make a source release - we could do it from the current 1.4.x > branch as it is, as the datetime has already been removed. > > As I want to get a binary installer ready for scipy as soon as 2.0 get > released, I don't think it makes sense to waste time on getting a > scipy > binary built against 1.4.x now. This is true, but you could make a NumPy 1.4.x binary and the old SciPy binary would still presumably work. -Travis -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com From charlesr.harris at gmail.com Thu Feb 11 20:11:47 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 18:11:47 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74A8C4.70101@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 6:03 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > I do think a 1.4.1 should be released without the datetime changes just > > so there would be an updated version out there for slow adopters. We > > wouldn't maintain it, though, it would be the end of the 1.x line. > > We could make a source release - we could do it from the current 1.4.x > branch as it is, as the datetime has already been removed. > > As I want to get a binary installer ready for scipy as soon as 2.0 get > released, I don't think it makes sense to waste time on getting a scipy > binary built against 1.4.x now. > > I think a 1.4.x release without a corresponding scipy release would be fine. Folks who want to upgrade scipy could then compile it themselves but they wouldn't have to recompile all the other binaries that depended on numpy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Thu Feb 11 20:23:02 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 10:23:02 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <44348799-BE44-454F-A26D-6D794C627427@enthought.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002102303u78e314aaud9b569c5de741785@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> Message-ID: <4B74AD76.9010901@silveregg.co.jp> Travis Oliphant wrote: > > This is true, but you could make a NumPy 1.4.x binary and the old > SciPy binary would still presumably work. There is still the cython issue, although it concerns only some packages (stats and spatial IIRC), and there is an error message at least. I regenerated the cython files in the 0.7.x branch, so people could at least prepare compatible binaries themselves (and as in numpy, releasing a purely source tarball for scipy is much less of an issue - I can do it right now since almost nothing went in the 0.7.x branch since the 0.7.1 release). cheers, David From josef.pktd at gmail.com Thu Feb 11 20:31:03 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 11 Feb 2010 20:31:03 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74AD76.9010901@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> Message-ID: <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> On Thu, Feb 11, 2010 at 8:23 PM, David Cournapeau wrote: > Travis Oliphant wrote: > >> >> This is true, but you could make a NumPy 1.4.x binary and the old >> SciPy binary would still presumably work. > > There is still the cython issue, although it concerns only some packages > (stats and spatial IIRC), and there is an error message at least. I > regenerated the cython files in the 0.7.x branch, so people could at > least prepare compatible binaries themselves (and as in numpy, releasing > a purely source tarball for scipy is much less of an issue - I can do it > right now since almost nothing went in the 0.7.x branch since the 0.7.1 > release). So 1.4.1 wouldn't resolve the cython issue, packages that use cython still would need to be refreshed and recompiled, but non-cython packages should run without recompiling? Josef > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From david at silveregg.co.jp Thu Feb 11 20:36:45 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 10:36:45 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> Message-ID: <4B74B0AD.9000704@silveregg.co.jp> josef.pktd at gmail.com wrote: > So 1.4.1 wouldn't resolve the cython issue, packages that use cython > still would need to be refreshed and recompiled, but non-cython > packages should run without recompiling? It is impossible to solve the cython issue in numpy. The only solution is to regenerate the cython files with Cython 0.12.1 (which is what I have done in scipy 0.7.x branch). Hopefully, the issue will never happen again in scipy, as long as we are careful to use always Cython 0.12.1 or above, cheers, David From josef.pktd at gmail.com Thu Feb 11 20:56:33 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 11 Feb 2010 20:56:33 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74B0AD.9000704@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> Message-ID: <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> On Thu, Feb 11, 2010 at 8:36 PM, David Cournapeau wrote: > josef.pktd at gmail.com wrote: > >> So 1.4.1 ?wouldn't resolve the cython issue, packages that use cython >> still would need to be refreshed and recompiled, but non-cython >> packages should run without recompiling? > > It is impossible to solve the cython issue in numpy. The only solution > is to regenerate the cython files with Cython 0.12.1 (which is what I > have done in scipy 0.7.x branch). > > Hopefully, the issue will never happen again in scipy, as long as we are > careful to use always Cython 0.12.1 or above, scipy is relatively easy to compile, I was thinking also of h5py, pytables and pymc (b/c of pytables), none of them are importing with numpy 1.4.0 because of the cython issue. Thanks, Josef > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From david at silveregg.co.jp Thu Feb 11 21:00:58 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 11:00:58 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> Message-ID: <4B74B65A.1090908@silveregg.co.jp> josef.pktd at gmail.com wrote: > scipy is relatively easy to compile, I was thinking also of h5py, > pytables and pymc (b/c of pytables), none of them are importing with > numpy 1.4.0 because of the cython issue. As I said, all of them will have to be regenerated with cython 0.12.1. There is no other solution, cheers, David From charlesr.harris at gmail.com Thu Feb 11 21:44:39 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 19:44:39 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B74B65A.1090908@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau wrote: > josef.pktd at gmail.com wrote: > > > scipy is relatively easy to compile, I was thinking also of h5py, > > pytables and pymc (b/c of pytables), none of them are importing with > > numpy 1.4.0 because of the cython issue. > > As I said, all of them will have to be regenerated with cython 0.12.1. 
> There is no other solution, > > Wait, won't the structures be the same size? If they are then the cython check won't fail. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Thu Feb 11 22:12:41 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 12:12:41 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> Message-ID: <4B74C729.5090206@silveregg.co.jp> Charles R Harris wrote: > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > wrote: > > josef.pktd at gmail.com wrote: > > > scipy is relatively easy to compile, I was thinking also of h5py, > > pytables and pymc (b/c of pytables), none of them are importing with > > numpy 1.4.0 because of the cython issue. > > As I said, all of them will have to be regenerated with cython 0.12.1. > There is no other solution, > > > Wait, won't the structures be the same size? If they are then the cython > check won't fail. Yes, but the structures are bigger (even after removing the datetime stuff, I had the cython warning when I did some tests). cheers, David From charlesr.harris at gmail.com Thu Feb 11 23:22:13 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 21:22:13 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74C729.5090206@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > wrote: > > > > josef.pktd at gmail.com wrote: > > > > > scipy is relatively easy to compile, I was thinking also of h5py, > > > pytables and pymc (b/c of pytables), none of them are importing > with > > > numpy 1.4.0 because of the cython issue. > > > > As I said, all of them will have to be regenerated with cython > 0.12.1. > > There is no other solution, > > > > > > Wait, won't the structures be the same size? If they are then the cython > > check won't fail. > > Yes, but the structures are bigger (even after removing the datetime > stuff, I had the cython warning when I did some tests). > > That's curious. It sounds like it isn't ABI compatible yet. Any idea of what was added? It would be helpful if the cython message gave a bit more information... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Thu Feb 11 23:39:26 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Thu, 11 Feb 2010 23:39:26 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 11:22 PM, Charles R Harris wrote: > > > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > wrote: >> >> Charles R Harris wrote: >> > >> > >> > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > > wrote: >> > >> > ? ? josef.pktd at gmail.com wrote: >> > >> > ? ? ?> scipy is relatively easy to compile, I was thinking also of h5py, >> > ? ? ?> pytables and pymc (b/c of pytables), none of them are importing >> > with >> > ? ? ?> numpy 1.4.0 because of the cython issue. >> > >> > ? ? As I said, all of them will have to be regenerated with cython >> > 0.12.1. >> > ? ? There is no other solution, >> > >> > >> > Wait, won't the structures be the same size? If they are then the cython >> > check won't fail. >> >> Yes, but the structures are bigger (even after removing the datetime >> stuff, I had the cython warning when I did some tests). >> > > That's curious. It sounds like it isn't ABI compatible yet. Any idea of what > was added? It would be helpful if the cython message gave a bit more > information... Could it be related to __array_prepare__? From oliphant at enthought.com Thu Feb 11 23:42:17 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Thu, 11 Feb 2010 22:42:17 -0600 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74C729.5090206@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: Is it just the metadata element in the dtype structure or were other objects affected. -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 11, 2010, at 9:12 PM, David Cournapeau wrote: > Charles R Harris wrote: >> >> >> On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > wrote: >> >> josef.pktd at gmail.com wrote: >> >>> scipy is relatively easy to compile, I was thinking also of h5py, >>> pytables and pymc (b/c of pytables), none of them are importing with >>> numpy 1.4.0 because of the cython issue. >> >> As I said, all of them will have to be regenerated with cython >> 0.12.1. >> There is no other solution, >> >> >> Wait, won't the structures be the same size? If they are then the >> cython >> check won't fail. > > Yes, but the structures are bigger (even after removing the datetime > stuff, I had the cython warning when I did some tests). > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From dwf at cs.toronto.edu Thu Feb 11 23:47:18 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 11 Feb 2010 23:47:18 -0500 Subject: [Numpy-discussion] Cholesky update/downdate? Message-ID: <20100212044718.GA12339@rodimus> Hi everyone, Does anyone know if there is an implementation of rank 1 updates (and "downdates") to a Cholesky factorization in NumPy or SciPy? 
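For reference, the classical rank-1 update itself is short enough to sketch
in pure NumPy -- illustrative only: `cholupdate` is a made-up name, and this
is an O(n^2) Python loop rather than the tuned LINPACK code:

```python
import numpy as np

def cholupdate(L, x):
    """Given lower-triangular L with A = dot(L, L.T), return the factor
    of A + outer(x, x), using one Givens rotation per column."""
    L = L.copy()
    x = x.astype(float)
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])        # rotation that annihilates x[k]
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

# sanity check against refactorizing from scratch
rng = np.random.RandomState(0)
B = rng.randn(4, 4)
A = np.dot(B, B.T) + 4 * np.eye(4)         # symmetric positive definite
L = np.linalg.cholesky(A)
x = rng.randn(4)
L1 = cholupdate(L, x)
assert np.allclose(np.dot(L1, L1.T), A + np.outer(x, x))
```

The "downdate" (subtracting outer(x, x)) follows the same pattern with
hyperbolic rotations, but is only well-posed when the result stays positive
definite.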
It looks like there are a bunch of routines for it in LINPACK, but not LAPACK.

Thanks,

David

From charlesr.harris at gmail.com  Thu Feb 11 23:57:11 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 11 Feb 2010 21:57:11 -0700
Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ?
In-Reply-To: 
References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com>
	<4B74AD76.9010901@silveregg.co.jp>
	<1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com>
	<4B74B0AD.9000704@silveregg.co.jp>
	<1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com>
	<4B74B65A.1090908@silveregg.co.jp>
	<4B74C729.5090206@silveregg.co.jp>
Message-ID: 

On Thu, Feb 11, 2010 at 9:39 PM, Darren Dale wrote:

> On Thu, Feb 11, 2010 at 11:22 PM, Charles R Harris
> wrote:
> >
> >
> > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau
> >
> > wrote:
> >>
> >> Charles R Harris wrote:
> >> >
> >> >
> >> > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau <
> david at silveregg.co.jp
> >> > > wrote:
> >> >
> >> >     josef.pktd at gmail.com wrote:
> >> >
> >> >      > scipy is relatively easy to compile, I was thinking also of
> h5py,
> >> >      > pytables and pymc (b/c of pytables), none of them are importing
> >> > with
> >> >      > numpy 1.4.0 because of the cython issue.
> >> >
> >> >     As I said, all of them will have to be regenerated with cython
> >> > 0.12.1.
> >> >     There is no other solution,
> >> >
> >> >
> >> > Wait, won't the structures be the same size? If they are then the
> cython
> >> > check won't fail.
> >>
> >> Yes, but the structures are bigger (even after removing the datetime
> >> stuff, I had the cython warning when I did some tests).
> >>
> >
> > That's curious. It sounds like it isn't ABI compatible yet. Any idea of
> what
> > was added? It would be helpful if the cython message gave a bit more
> > information...

> Could it be related to __array_prepare__?

Didn't __array_prepare__ go into 1.3? Did you add anything to a structure?
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Fri Feb 12 00:03:38 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 14:03:38 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74A8C4.70101@silveregg.co.jp> <44348799-BE44-454F-A26D-6D794C627427@enthought.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: <4B74E12A.5010707@silveregg.co.jp> Charles R Harris wrote: > > > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > wrote: > > Charles R Harris wrote: > > > > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > > >> wrote: > > > > josef.pktd at gmail.com > > wrote: > > > > > scipy is relatively easy to compile, I was thinking also > of h5py, > > > pytables and pymc (b/c of pytables), none of them are > importing with > > > numpy 1.4.0 because of the cython issue. > > > > As I said, all of them will have to be regenerated with > cython 0.12.1. > > There is no other solution, > > > > > > Wait, won't the structures be the same size? If they are then the > cython > > check won't fail. > > Yes, but the structures are bigger (even after removing the datetime > stuff, I had the cython warning when I did some tests). > > > That's curious. It sounds like it isn't ABI compatible yet. The Cython problem was that before 0.12.1, it failed importing whenever the struct size changed. 
You can change struct size and keep ABI compatibility (as long as nobody includes the struct in their own code), cheers, David From dsdale24 at gmail.com Fri Feb 12 00:12:29 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Fri, 12 Feb 2010 00:12:29 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 11:57 PM, Charles R Harris wrote: > > > On Thu, Feb 11, 2010 at 9:39 PM, Darren Dale wrote: >> >> On Thu, Feb 11, 2010 at 11:22 PM, Charles R Harris >> wrote: >> > >> > >> > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau >> > >> > wrote: >> >> >> >> Charles R Harris wrote: >> >> > >> >> > >> >> > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau >> >> > > >> > > wrote: >> >> > >> >> > ? ? josef.pktd at gmail.com wrote: >> >> > >> >> > ? ? ?> scipy is relatively easy to compile, I was thinking also of >> >> > h5py, >> >> > ? ? ?> pytables and pymc (b/c of pytables), none of them are >> >> > importing >> >> > with >> >> > ? ? ?> numpy 1.4.0 because of the cython issue. >> >> > >> >> > ? ? As I said, all of them will have to be regenerated with cython >> >> > 0.12.1. >> >> > ? ? There is no other solution, >> >> > >> >> > >> >> > Wait, won't the structures be the same size? If they are then the >> >> > cython >> >> > check won't fail. >> >> >> >> Yes, but the structures are bigger (even after removing the datetime >> >> stuff, I had the cython warning when I did some tests). >> >> >> > >> > That's curious. It sounds like it isn't ABI compatible yet. Any idea of >> > what >> > was added? It would be helpful if the cython message gave a bit more >> > information... 
>> >> Could it be related to __array_prepare__? > > Didn't __array_prepare__? go into 1.3? Did you add anything to a structure? No, it was included in 1.4: http://svn.scipy.org/svn/numpy/trunk/doc/release/1.4.0-notes.rst No, I don't think so. I added __array_prepare__ to array_methods[] in this file: http://svn.scipy.org/svn/numpy/trunk/numpy/core/src/multiarray/methods.c Darren From charlesr.harris at gmail.com Fri Feb 12 00:14:42 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 22:14:42 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 10:12 PM, Darren Dale wrote: > On Thu, Feb 11, 2010 at 11:57 PM, Charles R Harris > wrote: > > > > > > On Thu, Feb 11, 2010 at 9:39 PM, Darren Dale wrote: > >> > >> On Thu, Feb 11, 2010 at 11:22 PM, Charles R Harris > >> wrote: > >> > > >> > > >> > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > >> > > >> > wrote: > >> >> > >> >> Charles R Harris wrote: > >> >> > > >> >> > > >> >> > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > >> >> > >> >> > > wrote: > >> >> > > >> >> > josef.pktd at gmail.com wrote: > >> >> > > >> >> > > scipy is relatively easy to compile, I was thinking also of > >> >> > h5py, > >> >> > > pytables and pymc (b/c of pytables), none of them are > >> >> > importing > >> >> > with > >> >> > > numpy 1.4.0 because of the cython issue. > >> >> > > >> >> > As I said, all of them will have to be regenerated with cython > >> >> > 0.12.1. > >> >> > There is no other solution, > >> >> > > >> >> > > >> >> > Wait, won't the structures be the same size? If they are then the > >> >> > cython > >> >> > check won't fail. 
> >> >> > >> >> Yes, but the structures are bigger (even after removing the datetime > >> >> stuff, I had the cython warning when I did some tests). > >> >> > >> > > >> > That's curious. It sounds like it isn't ABI compatible yet. Any idea > of > >> > what > >> > was added? It would be helpful if the cython message gave a bit more > >> > information... > >> > >> Could it be related to __array_prepare__? > > > > Didn't __array_prepare__ go into 1.3? Did you add anything to a > structure? > > No, it was included in 1.4: > http://svn.scipy.org/svn/numpy/trunk/doc/release/1.4.0-notes.rst > > No, I don't think so. I added __array_prepare__ to array_methods[] in this > file: > http://svn.scipy.org/svn/numpy/trunk/numpy/core/src/multiarray/methods.c > > I don't see any struct definitions there, it looks clean. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Fri Feb 12 00:16:07 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 14:16:07 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> Message-ID: <4B74E417.1020602@silveregg.co.jp> Charles R Harris wrote: > > > I don't see any struct definitions there, it looks clean. Any struct defined outside numpy/core/include is fine to change at will as far as ABI is concerned anyway, so no need to check anything :) David From charlesr.harris at gmail.com Fri Feb 12 00:17:23 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 22:17:23 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <4B74E12A.5010707@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E12A.5010707@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 10:03 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > > wrote: > > > > Charles R Harris wrote: > > > > > > > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > > > > >> > wrote: > > > > > > josef.pktd at gmail.com > > > wrote: > > > > > > > scipy is relatively easy to compile, I was thinking also > > of h5py, > > > > pytables and pymc (b/c of pytables), none of them are > > importing with > > > > numpy 1.4.0 because of the cython issue. > > > > > > As I said, all of them will have to be regenerated with > > cython 0.12.1. > > > There is no other solution, > > > > > > > > > Wait, won't the structures be the same size? If they are then the > > cython > > > check won't fail. > > > > Yes, but the structures are bigger (even after removing the datetime > > stuff, I had the cython warning when I did some tests). > > > > > > That's curious. It sounds like it isn't ABI compatible yet. > > The Cython problem was that before 0.12.1, it failed importing whenever > the struct size changed. You can change struct size and keep ABI > compatibility (as long as nobody includes the struct in their own code), > > Sure, but I don't recall any additions to structures apart from the datetime stuff and the metadata element. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Feb 12 00:28:26 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 22:28:26 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B74E417.1020602@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E417.1020602@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 10:16 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > > I don't see any struct definitions there, it looks clean. > > Any struct defined outside numpy/core/include is fine to change at will > as far as ABI is concerned anyway, so no need to check anything :) > > :o Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Fri Feb 12 00:28:54 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 14:28:54 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74AD76.9010901@silveregg.co.jp> <1cd32cbb1002111731y38d8d541gd8f952cb925a91b3@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E12A.5010707@silveregg.co.jp> Message-ID: <4B74E716.4010705@silveregg.co.jp> Charles R Harris wrote: > > > On Thu, Feb 11, 2010 at 10:03 PM, David Cournapeau > > wrote: > > Charles R Harris wrote: > > > > > > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > > > >> wrote: > > > > Charles R Harris wrote: > > > > > > > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > > > > > > >>> wrote: > > > > > > josef.pktd at gmail.com > > > > > >> wrote: > > > > > > > scipy is relatively easy to compile, I was thinking > also > > of h5py, > > > > pytables and pymc (b/c of pytables), none of them are > > importing with > > > > numpy 1.4.0 because of the cython issue. > > > > > > As I said, all of them will have to be regenerated with > > cython 0.12.1. > > > There is no other solution, > > > > > > > > > Wait, won't the structures be the same size? If they are > then the > > cython > > > check won't fail. > > > > Yes, but the structures are bigger (even after removing the > datetime > > stuff, I had the cython warning when I did some tests). > > > > > > That's curious. It sounds like it isn't ABI compatible yet. > > The Cython problem was that before 0.12.1, it failed importing whenever > the struct size changed. You can change struct size and keep ABI > compatibility (as long as nobody includes the struct in their own code), > > > Sure, but I don't recall any additions to structures apart from the > datetime stuff and the metadata element. At least iterator (I needed to add some members to support the neighborhood iterator). 
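The ABI point at stake here -- appending members grows sizeof() but leaves
the offsets of the existing members alone -- can be illustrated with a toy
ctypes sketch (made-up structs, not NumPy's real iterator):

```python
import ctypes

class IterV1(ctypes.Structure):               # old layout
    _fields_ = [("index", ctypes.c_int),
                ("size", ctypes.c_int)]

class IterV2(ctypes.Structure):               # new layout, members appended
    _fields_ = [("index", ctypes.c_int),
                ("size", ctypes.c_int),
                ("bounds", ctypes.c_int * 2)]

# A consumer that only ever receives a pointer from the library and reads
# the pre-existing members keeps working: their offsets are unchanged.
new = IterV2(index=3, size=10)
old_view = ctypes.cast(ctypes.pointer(new), ctypes.POINTER(IterV1))
assert (old_view.contents.index, old_view.contents.size) == (3, 10)

# The total size did change, though, which is what Cython's check sees.
# That is fatal only to consumers that embedded the struct by value or
# allocated it with their own sizeof().
assert ctypes.sizeof(IterV2) > ctypes.sizeof(IterV1)
```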
There may be more changes I am not aware of, but a quick look at git di svn/tags/1.3.0..svn/1.4.x numpy/core/include suggests no other big changes, David From charlesr.harris at gmail.com Fri Feb 12 00:44:12 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 11 Feb 2010 22:44:12 -0700 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B74E716.4010705@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E12A.5010707@silveregg.co.jp> <4B74E716.4010705@silveregg.co.jp> Message-ID: On Thu, Feb 11, 2010 at 10:28 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Thu, Feb 11, 2010 at 10:03 PM, David Cournapeau > > > wrote: > > > > Charles R Harris wrote: > > > > > > > > > On Thu, Feb 11, 2010 at 8:12 PM, David Cournapeau > > > > > >> > wrote: > > > > > > Charles R Harris wrote: > > > > > > > > > > > > On Thu, Feb 11, 2010 at 7:00 PM, David Cournapeau > > > > > > > > > > > > >>> wrote: > > > > > > > > josef.pktd at gmail.com > > > > > > > > >> wrote: > > > > > > > > > scipy is relatively easy to compile, I was thinking > > also > > > of h5py, > > > > > pytables and pymc (b/c of pytables), none of them > are > > > importing with > > > > > numpy 1.4.0 because of the cython issue. > > > > > > > > As I said, all of them will have to be regenerated with > > > cython 0.12.1. > > > > There is no other solution, > > > > > > > > > > > > Wait, won't the structures be the same size? If they are > > then the > > > cython > > > > check won't fail. > > > > > > Yes, but the structures are bigger (even after removing the > > datetime > > > stuff, I had the cython warning when I did some tests). > > > > > > > > > That's curious. It sounds like it isn't ABI compatible yet. 
> > > > The Cython problem was that before 0.12.1, it failed importing > whenever > > the struct size changed. You can change struct size and keep ABI > > compatibility (as long as nobody includes the struct in their own > code), > > > > > > Sure, but I don't recall any additions to structures apart from the > > datetime stuff and the metadata element. > > At least iterator (I needed to add some members to support the > neighborhood iterator). There may be more changes I am not aware of, but > a quick look at git di svn/tags/1.3.0..svn/1.4.x numpy/core/include > suggests no other big changes, > > Well, so it goes. I don't see any reasonable way to fix that. I wonder how recent the cython size check is? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Fri Feb 12 01:00:38 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 12 Feb 2010 15:00:38 +0900 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B0AD.9000704@silveregg.co.jp> <1cd32cbb1002111756v367064dbwfbff9577e504a9a1@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E12A.5010707@silveregg.co.jp> <4B74E716.4010705@silveregg.co.jp> Message-ID: <4B74EE86.3090506@silveregg.co.jp> Charles R Harris wrote: > > > Well, so it goes. I don't see any reasonable way to fix that. I wonder > how recent the cython size check is? See related discussion on Cython ML - the problem is known for some time. That's when cython fixed the error into a warning that I started looking into the ABI issue which started the whole drama :) David From pav at iki.fi Fri Feb 12 03:01:42 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Feb 2010 10:01:42 +0200 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110000l7138c97ch4d36a271ca576e27@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> Message-ID: <1265961702.2045.5.camel@Nokia-N900-42-11> > On Thu, Feb 11, 2010 at 18:23, Charles R Harris > wrote: > > > What about python version? Do we want to bump that up from 2.4? > > Only if it were *really* necessary for the Python 3 port. Otherwise, I > would resist the urge. I don't think it's necessary for that. -- Pauli Virtanen From seb.haase at gmail.com Fri Feb 12 03:54:08 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Fri, 12 Feb 2010 09:54:08 +0100 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <1265961702.2045.5.camel@Nokia-N900-42-11> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <9457e7c81002110843id42b64fje2d73755b006f6cd@mail.gmail.com> <5b8d13221002111457t46d02f2bw89b81d6493015fc0@mail.gmail.com> <5F4E3EE1-45E9-444D-8EAE-C547C947AF67@gmail.com> <3d375d731002111625l565b4b35ha91ad2eda3ae5b4b@mail.gmail.com> <1265961702.2045.5.camel@Nokia-N900-42-11> Message-ID: On Fri, Feb 12, 2010 at 9:01 AM, Pauli Virtanen wrote: >> On Thu, Feb 11, 2010 at 18:23, Charles R Harris >> wrote: >> >> > What about python version? Do we want to bump that up from 2.4? >> >> Only if it were *really* necessary for the Python 3 port. Otherwise, I >> would resist the urge. > > I don't think it's necessary for that. > > -- I'm trying to follow this discussion as good as I can. Please tell me, is the planned ABI change including the "Addition of a dict object to all NumPy objects" I was asking about recently. 
(I'm mostly referring to an old thread of Aug 2008: http://www.mail-archive.com/numpy-discussion at scipy.org/msg11898.html ) Oh, and is there a proposed name for that attribute (on the Python side) ? Regards, Sebastian Haase From meine at informatik.uni-hamburg.de Fri Feb 12 07:43:56 2010 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Fri, 12 Feb 2010 13:43:56 +0100 Subject: [Numpy-discussion] docstring suggestions Message-ID: <201002121344.01439.meine@informatik.uni-hamburg.de> Hi, I was just looking for numpy.ma.compressed, but forgot its name. I suggest to add a pointer/"see also" to numpy.ma.filled at least: http://docs.scipy.org/numpy/docs/numpy.ma.core.filled/ Unfortunately, I forgot the PW of my account (hans_meine), otherwise I'd have given it a shot. Ciao, Hans From meine at informatik.uni-hamburg.de Fri Feb 12 08:14:38 2010 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Fri, 12 Feb 2010 14:14:38 +0100 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <201002121344.01439.meine@informatik.uni-hamburg.de> References: <201002121344.01439.meine@informatik.uni-hamburg.de> Message-ID: <201002121414.38938.meine@informatik.uni-hamburg.de> On Friday 12 February 2010 13:43:56 Hans Meine wrote: > I was just looking for numpy.ma.compressed, but forgot its name. Another strange thing is the docstring of numpy.ma.compress, which appears in ipython like this: Type: instance Base Class: numpy.ma.core._frommethod [...] Docstring: compress(self, condition, axis=None, out=None) Return `a` where condition is ``True``. [...] Parameters ---------- condition : var [...] axis : {None, int}, optional [...] out : {None, ndarray}, optional [...] Call def: numpy.ma.compress(self, a, *args, **params) Note the `self` vs. `a` problem, as well as the "call def" which has both, but no condition anymore. And `a`/self does not appear under parameters. 
All these problems are probably related to numpy.ma.core._frommethod, but anyhow this looks wrong from a user's POV. HTH, Hans From dsdale24 at gmail.com Fri Feb 12 08:48:37 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Fri, 12 Feb 2010 08:48:37 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: <4B74E417.1020602@silveregg.co.jp> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <4B74B65A.1090908@silveregg.co.jp> <4B74C729.5090206@silveregg.co.jp> <4B74E417.1020602@silveregg.co.jp> Message-ID: On Fri, Feb 12, 2010 at 12:16 AM, David Cournapeau wrote: > Charles R Harris wrote: > >> >> >> I don't see any struct definitions there, it looks clean. > > Any struct defined outside numpy/core/include is fine to change at will > as far as ABI is concerned anyway, so no need to check anything :) Thanks for the clarification. I just double checked the svn diff (r7308), and I did not touch anything in numpy/core/include. Darren From fperez.net at gmail.com Fri Feb 12 10:02:03 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 12 Feb 2010 10:02:03 -0500 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? 
In-Reply-To: <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <3d375d731002081417y24f2fe8ekd83ab179209031e8@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: > Here's the problem that I don't think many people appreciate: logical > arguments suck just as much as personal experience in answering these > questions. You can make perfectly structured arguments until you are > blue in the face, but without real data to premise them on, they are > no better than the gut feelings. They can often be significantly worse > if the strength of the logic gets confused with the strength of the > premise. I need to frame this (or make a sig to put it in, the internet equivalent of a wooden frame :). Thank you, Robert. Cheers, f From ricitron at mac.com Fri Feb 12 10:46:54 2010 From: ricitron at mac.com (Robert C.) Date: Fri, 12 Feb 2010 07:46:54 -0800 (PST) Subject: [Numpy-discussion] Re ading scientific notation using D instead of E Message-ID: <27565041.post@talk.nabble.com> I am trying to read a large amount of data that is output in scientific notation using D instead of E. 
After searching around I found a thread that implied numpy already has the capability to do this: http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e However, this does not work for me. I get: >>> numpy.float('1.23D+04') Traceback (most recent call last): File "", line 1, in ValueError: invalid literal for float(): 1.23D+04 Was this capability lost in more recent versions of numpy? I would rather not have to do a search and replace every time I read in data. Thanks. -- View this message in context: http://old.nabble.com/Reading-scientific-notation-using-D-instead-of-E-tp27565041p27565041.html Sent from the Numpy-discussion mailing list archive at Nabble.com. From josef.pktd at gmail.com Fri Feb 12 11:03:02 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Feb 2010 11:03:02 -0500 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <27565041.post@talk.nabble.com> References: <27565041.post@talk.nabble.com> Message-ID: <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> On Fri, Feb 12, 2010 at 10:46 AM, Robert C. wrote: > > I am trying to read a large amount of data that is output in scientific > notation using D instead of E. After searching around I found a thread that > implied numpy already has the capability to do this: > http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e > http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e > > However, this does not work for me. I get: > >>>> numpy.float('1.23D+04') > Traceback (most recent call last): > ?File "", line 1, in > ValueError: invalid literal for float(): 1.23D+04 > > Was this capability lost in more recent versions of numpy? > > I would rather not have to do a search and replace every time I read in > data. 
>>> np.float('1.5698D+03') 1569.8 >>> np.float('1.23D+04') 12300.0 it's still working with numpy 1.4.0 Josef > > Thanks. > -- > View this message in context: http://old.nabble.com/Reading-scientific-notation-using-D-instead-of-E-tp27565041p27565041.html > Sent from the Numpy-discussion mailing list archive at Nabble.com. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Fri Feb 12 11:05:44 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Feb 2010 11:05:44 -0500 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> Message-ID: <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> On Fri, Feb 12, 2010 at 11:03 AM, wrote: > On Fri, Feb 12, 2010 at 10:46 AM, Robert C. wrote: >> >> I am trying to read a large amount of data that is output in scientific >> notation using D instead of E. After searching around I found a thread that >> implied numpy already has the capability to do this: >> http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e >> http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e >> >> However, this does not work for me. I get: >> >>>>> numpy.float('1.23D+04') >> Traceback (most recent call last): >> ?File "", line 1, in >> ValueError: invalid literal for float(): 1.23D+04 >> >> Was this capability lost in more recent versions of numpy? >> >> I would rather not have to do a search and replace every time I read in >> data. 
> >>>> np.float('1.5698D+03') > > 1569.8 >>>> np.float('1.23D+04') > 12300.0 > > it's still working with numpy 1.4.0 maybe this is a python feature with python builtin float: >>> float('1.5698D+03') 1569.8 >>> float('123D+04') 1230000.0 with python 2.5 Josef > > Josef > >> >> Thanks. >> -- >> View this message in context: http://old.nabble.com/Reading-scientific-notation-using-D-instead-of-E-tp27565041p27565041.html >> Sent from the Numpy-discussion mailing list archive at Nabble.com. >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > From dagss at student.matnat.uio.no Fri Feb 12 11:08:28 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 12 Feb 2010 17:08:28 +0100 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> Message-ID: <4B757CFC.7090600@student.matnat.uio.no> josef.pktd at gmail.com wrote: > On Fri, Feb 12, 2010 at 11:03 AM, wrote: > >> On Fri, Feb 12, 2010 at 10:46 AM, Robert C. wrote: >> >>> I am trying to read a large amount of data that is output in scientific >>> notation using D instead of E. After searching around I found a thread that >>> implied numpy already has the capability to do this: >>> http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e >>> http://stackoverflow.com/questions/1959210/python-scientific-notation-using-d-instead-of-e >>> >>> However, this does not work for me. 
I get: >>> >>> >>>>>> numpy.float('1.23D+04') >>>>>> >>> Traceback (most recent call last): >>> File "", line 1, in >>> ValueError: invalid literal for float(): 1.23D+04 >>> >>> Was this capability lost in more recent versions of numpy? >>> >>> I would rather not have to do a search and replace every time I read in >>> data. >>> >>>>> np.float('1.5698D+03') >>>>> >> 1569.8 >> >>>>> np.float('1.23D+04') >>>>> >> 12300.0 >> >> it's still working with numpy 1.4.0 >> > > maybe this is a python feature with python builtin float: > > >>>> float('1.5698D+03') >>>> > 1569.8 > >>>> float('123D+04') >>>> > 1230000.0 > > with python 2.5 > With Python 2.6, I get "Invalid literal for float". Dag Sverre From pav at iki.fi Fri Feb 12 11:11:18 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Feb 2010 18:11:18 +0200 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> Message-ID: <1265991078.2662.13.camel@talisman> pe, 2010-02-12 kello 11:03 -0500, josef.pktd at gmail.com kirjoitti: [clip] > >>> np.float('1.5698D+03') > > 1569.8 > >>> np.float('1.23D+04') > 12300.0 > > it's still working with numpy 1.4.0 >>> np.float is float True Accepting the D+ notation is 1) Python feature, not one from Numpy 2) Only available on Windows, AFAIK -- Pauli Virtanen From robert.kern at gmail.com Fri Feb 12 11:12:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 10:12:17 -0600 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> Message-ID: <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> 
On Fri, Feb 12, 2010 at 10:05, wrote: > On Fri, Feb 12, 2010 at 11:03 AM, ? wrote: >>>>> np.float('1.5698D+03') >> >> 1569.8 >>>>> np.float('1.23D+04') >> 12300.0 >> >> it's still working with numpy 1.4.0 > > maybe this is a python feature with python builtin float: > >>>> float('1.5698D+03') > 1569.8 >>>> float('123D+04') > 1230000.0 > > with python 2.5 numpy.float is indeed Python's builtin float type (for obscure historical reasons that I won't go into). However, in Python 2.5, at least, the parsing of the string is offloaded to the standard C function strtod(). On your platform, strtod() will parse the D correctly. On OS X, for example, it doesn't. It has nothing to do with Python versions but rather the platform that you are on. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ricitron at mac.com Fri Feb 12 12:06:02 2010 From: ricitron at mac.com (Robert C.) Date: Fri, 12 Feb 2010 09:06:02 -0800 (PST) Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> Message-ID: <27566555.post@talk.nabble.com> Thank you for the replies. It must be because I am using python on OSX. Is there no work around for it then? Robert Kern-2 wrote: > > On Fri, Feb 12, 2010 at 10:05, wrote: >> On Fri, Feb 12, 2010 at 11:03 AM, ? 
wrote: > >>>>>> np.float('1.5698D+03') >>> >>> 1569.8 >>>>>> np.float('1.23D+04') >>> 12300.0 >>> >>> it's still working with numpy 1.4.0 >> >> maybe this is a python feature with python builtin float: >> >>>>> float('1.5698D+03') >> 1569.8 >>>>> float('123D+04') >> 1230000.0 >> >> with python 2.5 > > numpy.float is indeed Python's builtin float type (for obscure > historical reasons that I won't go into). However, in Python 2.5, at > least, the parsing of the string is offloaded to the standard C > function strtod(). On your platform, strtod() will parse the D > correctly. On OS X, for example, it doesn't. It has nothing to do with > Python versions but rather the platform that you are on. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- View this message in context: http://old.nabble.com/Reading-scientific-notation-using-D-instead-of-E-tp27565041p27566555.html Sent from the Numpy-discussion mailing list archive at Nabble.com. From Chris.Barker at noaa.gov Fri Feb 12 12:27:31 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 12 Feb 2010 09:27:31 -0800 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> Message-ID: <4B758F83.5090300@noaa.gov> Robert Kern wrote: > numpy.float is indeed Python's builtin float type (for obscure > historical reasons that I won't go into). 
However, in Python 2.5, at > least, the parsing of the string is offloaded to the standard C > function strtod(). well, sort of -- it's pre-processed first, to add some numpy features, including parsing of NaN's. So it wouldn't be all that hard to add this too (well not that hard in the context of messing around in ugly C code, anyway). See my messages about fromfile() a few weeks ago for details. By the way, I got as far as identifying bugs/issues in that code, but not fixing them -- I'll try to do that before the upcoming 1.5/2.0/whatever release. If anyone wants to work in this issue, it might make sense to collaborate. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Fri Feb 12 12:41:29 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 11:41:29 -0600 Subject: [Numpy-discussion] Re ading scientific notation using D instead of E In-Reply-To: <4B758F83.5090300@noaa.gov> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> <4B758F83.5090300@noaa.gov> Message-ID: <3d375d731002120941h7fda68d9uce617e3a459470f@mail.gmail.com> On Fri, Feb 12, 2010 at 11:27, Christopher Barker wrote: > Robert Kern wrote: >> numpy.float is indeed Python's builtin float type (for obscure >> historical reasons that I won't go into). However, in Python 2.5, at >> least, the parsing of the string is offloaded to the standard C >> function strtod(). > > well, sort of -- it's pre-processed first, to add some numpy features, > including parsing of NaN's. So it wouldn't be all that hard to add this > too (well not that hard in the context of messing around in ugly C code, > anyway). Eh, what? 
numpy.float is Python's float. No numpy features at all. There is some preprocessing, specifically to handle edge conditions like .1, and Python 2.6 handles 'nan', but the important point is that we don't control that code. You will have to submit a bug report to Python in order to change that behavior. That said, numpy.float64() is under our control, and you may submit a patch to convert [dD] to [eE]. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From d.l.goldsmith at gmail.com Fri Feb 12 13:19:41 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 12 Feb 2010 10:19:41 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <201002121414.38938.meine@informatik.uni-hamburg.de> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> Message-ID: <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> 2010/2/12 Hans Meine > On Friday 12 February 2010 13:43:56 Hans Meine wrote: > > I was just looking for numpy.ma.compressed, but forgot its name. > Fixed this one in the Wiki. > Another strange thing is the docstring of numpy.ma.compress, which appears > in > ipython like this: > > Type: instance > Base Class: numpy.ma.core._frommethod > [...] > Docstring: > compress(self, condition, axis=None, out=None) > > Return `a` where condition is ``True``. > [...] > Parameters > ---------- > condition : var > [...] > axis : {None, int}, optional > [...] > out : {None, ndarray}, optional > [...] > Call def: numpy.ma.compress(self, a, *args, **params) > > Note the `self` vs. `a` problem, as well as the "call def" which has both, > but > no condition anymore. And `a`/self does not appear under parameters. > Uncertain how to fix this one - is it a "bug" in how the docstring is interpreted somewhere along the line? 
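If the problem is just that the method docstring gets copied verbatim onto the module-level wrapper, one possible direction for a fix — purely a sketch, the helper below is hypothetical and not part of numpy — is to rewrite the signature line so `self` becomes the array argument the wrapper actually takes:

```python
import re

def method_doc_to_function_doc(doc, array_arg='a'):
    # Replace 'self' in the first (signature) line of a method docstring
    # with the array argument name used by the module-level function,
    # e.g. 'compress(self, condition, ...)' -> 'compress(a, condition, ...)'.
    lines = doc.splitlines()
    lines[0] = re.sub(r'\bself\b', array_arg, lines[0], count=1)
    return '\n'.join(lines)

doc = ("compress(self, condition, axis=None, out=None)\n"
       "\n"
       "Return `a` where condition is ``True``.")
print(method_doc_to_function_doc(doc).splitlines()[0])
# compress(a, condition, axis=None, out=None)
```

That would at least make the displayed first line agree with how the module-level numpy.ma.compress(a, ...) is actually called.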
DG PS: please, if you don't mind, in the future post docstring "complaints" at scipy-dev (numpy-discussion has many more subscribers, many of whom probably don't immediately care about any particular docstring problem, whereas anyone who is working on the docstrings is - hopefully - subscribed to scipy-dev); thanks. > All these problems are probably related to numpy.ma.core._frommethod, but > anyhow this looks wrong from a user's POV. > > HTH, > Hans > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Feb 12 13:57:41 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 12:57:41 -0600 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> Message-ID: <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> On Fri, Feb 12, 2010 at 12:19, David Goldsmith wrote: > PS: please, if you don't mind, in the future post docstring "complaints" at > scipy-dev (numpy-discussion has many more subscribers, many of whom probably > don't immediately care about any particular docstring problem, whereas > anyone who is working on the docstrings is - hopefully - subscribed to > scipy-dev); thanks. numpy docstrings get discussed on numpy-discussion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From matthew.brett at gmail.com Fri Feb 12 14:28:19 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 12 Feb 2010 11:28:19 -0800 Subject: [Numpy-discussion] Fwd: [atlas-devel] ATLAS support letters In-Reply-To: <20100212171502.7AAEF3489A@main205.cs.utsa.edu> References: <20100212171502.7AAEF3489A@main205.cs.utsa.edu> Message-ID: <1e2af89e1002121128o2e96f1bbva598b7ba58f57f17@mail.gmail.com> Hi, I don't know if y'all are subscribed to the ATLAS mailing list, but, it would be good if we could find a way of supporting Clint as strongly as we can. Best, Matthew ---------- Forwarded message ---------- From: Clint Whaley Date: Fri, Feb 12, 2010 at 9:15 AM Subject: [atlas-devel] ATLAS support letters To: math-atlas-devel at lists.sourceforge.net Guys, I go up for tenure this year. ?The tenure committee has asked me to get letters of support from ATLAS users so that they can assess the service impact of my support of ATLAS (I *tell* them it is widely used, but can I show it other than downloads?). ?The letter would discuss a little of what you do, and how you use ATLAS, and the importance of having ATLAS in furthering your project goals. ?So, if you are part of an organization/business/open source project/research project that uses ATLAS, please contact me if you or a colleague is willing to help with such a letter. If you know someone at such a place that uses ATLAS, forward this on. I will be contacting some groups that I know use ATLAS, but I don't know about the majority of people/groups who do, and I often don't have records and so forget even the ones I knew used it . . . With Goto taking a position at MS, it is all the more important that I can show my colleagues that ATLAS support and development is a service to the community, and having it at UTSA helps the university and department . . . Thanks, Clint ************************************************************************** ** R. 
Clint Whaley, PhD ** Assist Prof, UTSA ** www.cs.utsa.edu/~whaley ** ************************************************************************** ------------------------------------------------------------------------------ SOLARIS 10 is the OS for Data Centers - provides features such as DTrace, Predictive Self Healing and Award Winning ZFS. Get Solaris 10 NOW http://p.sf.net/sfu/solaris-dev2dev _______________________________________________ Math-atlas-devel mailing list Math-atlas-devel at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/math-atlas-devel From matthew.brett at gmail.com Fri Feb 12 14:40:36 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 12 Feb 2010 11:40:36 -0800 Subject: [Numpy-discussion] Removing datetime support for 1.4.x series ? In-Reply-To: References: <5b8d13221002020011i565b936by928ada3fc51777e5@mail.gmail.com> <1e2af89e1002081427n59a5840dte6954710c9d10f3f@mail.gmail.com> <3d375d731002081430x2e82b1dat80429281979b910f@mail.gmail.com> <1e2af89e1002081432r1801539cn29a2ac2c07850de7@mail.gmail.com> <3d375d731002081440v512df9c8j83844a41e26a1f76@mail.gmail.com> <1e2af89e1002081503g3e38ce3fi53bdb6f9e5d1abe2@mail.gmail.com> <3d375d731002081518uf7841abp8f908e613da2c6ad@mail.gmail.com> <1e2af89e1002081543w2c9a4126i8929d2db45f3c1d5@mail.gmail.com> <3d375d731002081625m3b03c86fk6b4d1ed9c152d56@mail.gmail.com> Message-ID: <1e2af89e1002121140u4a0a491nf0b65d82d964016a@mail.gmail.com> Hi, On Fri, Feb 12, 2010 at 7:02 AM, Fernando Perez wrote: > On Mon, Feb 8, 2010 at 7:25 PM, Robert Kern wrote: > >> Here's the problem that I don't think many people appreciate: logical >> arguments suck just as much as personal experience in answering these >> questions. You can make perfectly structured arguments until you are >> blue in the face, but without real data to premise them on, they are >> no better than the gut feelings. 
They can often be significantly worse >> if the strength of the logic gets confused with the strength of the >> premise. > > I need to frame this (or make a sig to put it in, the internet > equivalent of a wooden frame :). ?Thank you, Robert. Yes, except that, at its most extreme, it renders reasonable argument pointless, and leads to resolving disputes by authority rather than discussion. Of course we don't work or think in realm that can be cleared of bias or error, but it would be difficult be a scientist and fail to notice that - agreeing - things that really should be true, aren't true and - disagreeing - despite all the threatening brackets, reasoned argument, and careful return to data, do work in increasing our understanding. See you, Matthew From d.l.goldsmith at gmail.com Fri Feb 12 15:26:13 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 12 Feb 2010 12:26:13 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> Message-ID: <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: > On Fri, Feb 12, 2010 at 12:19, David Goldsmith > wrote: > > > PS: please, if you don't mind, in the future post docstring "complaints" > at > > scipy-dev (numpy-discussion has many more subscribers, many of whom > probably > > don't immediately care about any particular docstring problem, whereas > > anyone who is working on the docstrings is - hopefully - subscribed to > > scipy-dev); thanks. > > numpy docstrings get discussed on numpy-discussion. > Then is it just the Wiki and related issues that we ask people to discuss @ scipy-dev? 
DG > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Feb 12 15:29:31 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 14:29:31 -0600 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> Message-ID: <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> On Fri, Feb 12, 2010 at 14:26, David Goldsmith wrote: > On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: >> >> On Fri, Feb 12, 2010 at 12:19, David Goldsmith >> wrote: >> >> > PS: please, if you don't mind, in the future post docstring "complaints" >> > at >> > scipy-dev (numpy-discussion has many more subscribers, many of whom >> > probably >> > don't immediately care about any particular docstring problem, whereas >> > anyone who is working on the docstrings is - hopefully - subscribed to >> > scipy-dev); thanks. >> >> numpy docstrings get discussed on numpy-discussion. > > Then is it just the Wiki and related issues that we ask people to discuss @ > scipy-dev? I don't see a reason to do that, either. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Fri Feb 12 15:42:02 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Feb 2010 15:42:02 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> Message-ID: <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> On Fri, Feb 12, 2010 at 3:29 PM, Robert Kern wrote: > On Fri, Feb 12, 2010 at 14:26, David Goldsmith wrote: >> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: >>> >>> On Fri, Feb 12, 2010 at 12:19, David Goldsmith >>> wrote: >>> >>> > PS: please, if you don't mind, in the future post docstring "complaints" >>> > at >>> > scipy-dev (numpy-discussion has many more subscribers, many of whom >>> > probably >>> > don't immediately care about any particular docstring problem, whereas >>> > anyone who is working on the docstrings is - hopefully - subscribed to >>> > scipy-dev); thanks. >>> >>> numpy docstrings get discussed on numpy-discussion. >> >> Then is it just the Wiki and related issues that we ask people to discuss @ >> scipy-dev? > > I don't see a reason to do that, either. doceditor not moin Wiki, that was the policy that Ralf and David followed since last summer to have all docediting questions in one place. 
Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Fri Feb 12 15:47:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 14:47:17 -0600 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> Message-ID: <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> On Fri, Feb 12, 2010 at 14:42, wrote: > On Fri, Feb 12, 2010 at 3:29 PM, Robert Kern wrote: >> On Fri, Feb 12, 2010 at 14:26, David Goldsmith wrote: >>> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: >>>> >>>> On Fri, Feb 12, 2010 at 12:19, David Goldsmith >>>> wrote: >>>> >>>> > PS: please, if you don't mind, in the future post docstring "complaints" >>>> > at >>>> > scipy-dev (numpy-discussion has many more subscribers, many of whom >>>> > probably >>>> > don't immediately care about any particular docstring problem, whereas >>>> > anyone who is working on the docstrings is - hopefully - subscribed to >>>> > scipy-dev); thanks. >>>> >>>> numpy docstrings get discussed on numpy-discussion. >>> >>> Then is it just the Wiki and related issues that we ask people to discuss @ >>> scipy-dev? 
>> >> I don't see a reason to do that, either. > > doceditor not moin Wiki, that was the policy that Ralf and David > followed since last summer to have all docediting questions in one > place. Is the volume of questions really so large to justify the inconvenience to the questioners? It's one thing to direct someone to, say, the matplotlib list when asking matplotlib questions, but no one is going to guess that they need to go to scipy-dev to ask a question about the doceditor when they run into a problem editing a numpy docstring. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Fri Feb 12 15:58:46 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Feb 2010 15:58:46 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> Message-ID: <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> On Fri, Feb 12, 2010 at 3:47 PM, Robert Kern wrote: > On Fri, Feb 12, 2010 at 14:42, ? 
wrote: >> On Fri, Feb 12, 2010 at 3:29 PM, Robert Kern wrote: >>> On Fri, Feb 12, 2010 at 14:26, David Goldsmith wrote: >>>> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: >>>>> >>>>> On Fri, Feb 12, 2010 at 12:19, David Goldsmith >>>>> wrote: >>>>> >>>>> > PS: please, if you don't mind, in the future post docstring "complaints" >>>>> > at >>>>> > scipy-dev (numpy-discussion has many more subscribers, many of whom >>>>> > probably >>>>> > don't immediately care about any particular docstring problem, whereas >>>>> > anyone who is working on the docstrings is - hopefully - subscribed to >>>>> > scipy-dev); thanks. >>>>> >>>>> numpy docstrings get discussed on numpy-discussion. >>>> >>>> Then is it just the Wiki and related issues that we ask people to discuss @ >>>> scipy-dev? >>> >>> I don't see a reason to do that, either. >> >> doceditor not moin Wiki, that was the policy that Ralf and David >> followed since last summer to have all docediting questions in one >> place. > > Is the volume of questions really so large to justify the > inconvenience to the questioners? It's one thing to direct someone to, > say, the matplotlib list when asking matplotlib questions, but no one > is going to guess that they need to go to scipy-dev to ask a question > about the doceditor when they run into a problem editing a numpy > docstring. No, I agree with you, short questions can be answered wherever they happen, especially if they are on topic. But, if it turns into a discussion about the internal structure of how doc strings are generated, then maybe David can redirect the traffic. Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> ?-- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Fri Feb 12 16:05:06 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Feb 2010 15:05:06 -0600 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> Message-ID: <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> On Fri, Feb 12, 2010 at 14:58, wrote: > On Fri, Feb 12, 2010 at 3:47 PM, Robert Kern wrote: >> On Fri, Feb 12, 2010 at 14:42, ? wrote: >>> On Fri, Feb 12, 2010 at 3:29 PM, Robert Kern wrote: >>>> On Fri, Feb 12, 2010 at 14:26, David Goldsmith wrote: >>>>> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern wrote: >>>>>> >>>>>> On Fri, Feb 12, 2010 at 12:19, David Goldsmith >>>>>> wrote: >>>>>> >>>>>> > PS: please, if you don't mind, in the future post docstring "complaints" >>>>>> > at >>>>>> > scipy-dev (numpy-discussion has many more subscribers, many of whom >>>>>> > probably >>>>>> > don't immediately care about any particular docstring problem, whereas >>>>>> > anyone who is working on the docstrings is - hopefully - subscribed to >>>>>> > scipy-dev); thanks. >>>>>> >>>>>> numpy docstrings get discussed on numpy-discussion. 
>>>>> >>>>> Then is it just the Wiki and related issues that we ask people to discuss @ >>>>> scipy-dev? >>>> >>>> I don't see a reason to do that, either. >>> >>> doceditor not moin Wiki, that was the policy that Ralf and David >>> followed since last summer to have all docediting questions in one >>> place. >> >> Is the volume of questions really so large to justify the >> inconvenience to the questioners? It's one thing to direct someone to, >> say, the matplotlib list when asking matplotlib questions, but no one >> is going to guess that they need to go to scipy-dev to ask a question >> about the doceditor when they run into a problem editing a numpy >> docstring. > > No, I agree with you, short questions can be answered wherever they > happen, especially if they are on topic. > > But, if it turns into a discussion about the internal structure of how > doc strings are generated, then maybe David can redirect the traffic. I just don't see the reason for all that hassle, and it is a substantial hassle. You redirect people in order to get their question in front of the audience that can help them best or for truly off-topic discussions. As far as I'm concerned, questions about the doceditor, which drives the documentation for both numpy and scipy, are on-topic for any of either of the projects' lists. You don't redirect people just to keep things tidy. Mailing lists are messy things no matter what you do. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From d.l.goldsmith at gmail.com Fri Feb 12 16:24:02 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 12 Feb 2010 13:24:02 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> Message-ID: <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> OK, OK, Ok, it's not worth getting into a flame war over. We ask people who are going to be working on the docstrings to subscribe to scipy-dev; this is not the same thing as being an "innocent bystander" asking a question or making a comment - I retract the request. Now, does anyone have anything useful to say about OP's original second problem? 
DG On Fri, Feb 12, 2010 at 1:05 PM, Robert Kern wrote: > On Fri, Feb 12, 2010 at 14:58, wrote: > > On Fri, Feb 12, 2010 at 3:47 PM, Robert Kern > wrote: > >> On Fri, Feb 12, 2010 at 14:42, wrote: > >>> On Fri, Feb 12, 2010 at 3:29 PM, Robert Kern > wrote: > >>>> On Fri, Feb 12, 2010 at 14:26, David Goldsmith < > d.l.goldsmith at gmail.com> wrote: > >>>>> On Fri, Feb 12, 2010 at 10:57 AM, Robert Kern > wrote: > >>>>>> > >>>>>> On Fri, Feb 12, 2010 at 12:19, David Goldsmith < > d.l.goldsmith at gmail.com> > >>>>>> wrote: > >>>>>> > >>>>>> > PS: please, if you don't mind, in the future post docstring > "complaints" > >>>>>> > at > >>>>>> > scipy-dev (numpy-discussion has many more subscribers, many of > whom > >>>>>> > probably > >>>>>> > don't immediately care about any particular docstring problem, > whereas > >>>>>> > anyone who is working on the docstrings is - hopefully - > subscribed to > >>>>>> > scipy-dev); thanks. > >>>>>> > >>>>>> numpy docstrings get discussed on numpy-discussion. > >>>>> > >>>>> Then is it just the Wiki and related issues that we ask people to > discuss @ > >>>>> scipy-dev? > >>>> > >>>> I don't see a reason to do that, either. > >>> > >>> doceditor not moin Wiki, that was the policy that Ralf and David > >>> followed since last summer to have all docediting questions in one > >>> place. > >> > >> Is the volume of questions really so large to justify the > >> inconvenience to the questioners? It's one thing to direct someone to, > >> say, the matplotlib list when asking matplotlib questions, but no one > >> is going to guess that they need to go to scipy-dev to ask a question > >> about the doceditor when they run into a problem editing a numpy > >> docstring. > > > > No, I agree with you, short questions can be answered wherever they > > happen, especially if they are on topic. > > > > But, if it turns into a discussion about the internal structure of how > > doc strings are generated, then maybe David can redirect the traffic. 
> > I just don't see the reason for all that hassle, and it is a > substantial hassle. You redirect people in order to get their question > in front of the audience that can help them best or for truly > off-topic discussions. As far as I'm concerned, questions about the > doceditor, which drives the documentation for both numpy and scipy, > are on-topic for either of the projects' lists. You don't > redirect people just to keep things tidy. Mailing lists are messy > things no matter what you do. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Fri Feb 12 16:51:57 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 12 Feb 2010 13:51:57 -0800 Subject: [Numpy-discussion] Reading scientific notation using D instead of E In-Reply-To: <3d375d731002120941h7fda68d9uce617e3a459470f@mail.gmail.com> References: <27565041.post@talk.nabble.com> <1cd32cbb1002120803t70d9209dm349579fab9ea3d72@mail.gmail.com> <1cd32cbb1002120805r301c05b1m35654725173dac49@mail.gmail.com> <3d375d731002120812x2ceb1e8cu436db835eea3af86@mail.gmail.com> <4B758F83.5090300@noaa.gov> <3d375d731002120941h7fda68d9uce617e3a459470f@mail.gmail.com> Message-ID: <4B75CD7D.6030600@noaa.gov> Robert Kern wrote: > Eh, what? numpy.float is Python's float. No numpy features at all. my mistake -- I guess I assumed that numpy.float was an alias for numpy.float64. anyway, (all?) the numpy dtypes have their own implementation of conversion from strings (which are a bit buggy, unfortunately).
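A hedged sketch of the usual caller-side workaround for the thread subject's "D instead of E" exponent problem: normalize the exponent letter before conversion. The helper name here is illustrative, not an existing numpy API, and the same converter can be applied per column via `np.loadtxt`'s `converters` argument.

```python
import numpy as np

def fortran_float(s):
    # Fortran writes double-precision exponents with 'D' (e.g. '1.23D+04');
    # Python's float() only understands 'E'/'e', so normalize first.
    return float(s.replace('D', 'E').replace('d', 'e'))

tokens = '1.23D+04 -4.56d-02'.split()
arr = np.array([fortran_float(t) for t in tokens])
```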
They don't seem to be accessible in the same way, though: In [44]: np.float64('1.23F+04') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /Users/cbarker/HAZMAT/SmallTools/WxTool/trunk/tests/ in () ValueError: setting an array element with a sequence. is the only way to do this: In [49]: np.fromstring('1.23', dtype=np.float64, sep=' ') Out[49]: array([ 1.23]) which is, indeed, buggy (I wasn't aware of this bug yet): In [51]: np.fromstring('1.23F+04', dtype=np.float64, sep=',') Out[51]: array([ 1.23]) This makes me think that the string conversion code is only being used by fromstring/fromfile, and that it isn't used much there! Which makes me wonder -- should we fix it or deprecate it? If fix_it: I wonder about the choice of strtod() and friends for the string conversion -- it seems that fscanf would be easier and more robust (or easier to make robust, anyway) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From pgmdevlist at gmail.com Fri Feb 12 17:30:07 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 12 Feb 2010 17:30:07 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <201002121414.38938.meine@informatik.uni-hamburg.de> <45d1ab481002121019l7fb319f7v24d8a88b1e15889f@mail.gmail.com> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com>
<3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> Message-ID: <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> On Feb 12, 2010, at 4:24 PM, David Goldsmith wrote: > > OK, OK, Ok, it's not worth getting into a flame war over. We ask people who are going to be working on the docstrings to subscribe to scipy-dev; this is not the same thing as being an "innocent bystander" asking a question or making a comment - I retract the request. > > Now, does anyone have anything useful to say about OP's original second problem? Yes: write a proper docstring, or find me a better way to automatically create the docstring of a function from the docstring of the corresponding method (or vice-versa) than we have now for numpy.ma. I agree that the current method is not ideal, but at least you get some kind of info. From d.l.goldsmith at gmail.com Fri Feb 12 20:14:00 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 12 Feb 2010 17:14:00 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> Message-ID: <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> On Fri, Feb 12, 2010 at 2:30 PM, Pierre GM wrote: > On Feb 12, 2010, at 4:24 PM, David Goldsmith wrote: > > > > OK, OK, Ok, it's not worth getting into a flame war over. 
We ask people > who are going to be working on the docstrings to subscribe to scipy-dev; > this is not the same thing as being an "innocent bystander" asking a > question or making a comment - I retract the request. > > > > Now, does anyone have anything useful to say about OP's original second > problem? > > Yes: write a proper docstring, or find me a better way to automatically > create the docstring of a function from the docstring of the corresponding > method (or vice-versa) than we have now for numpy.ma. I agree that the > current method is not ideal, but at least you get some kind of info. > Ah, now I understand. We've been here before: http://docs.scipy.org/numpy/Questions+Answers/#documenting-equivalent-functions-and-methods No "canonical answer" has been recorded, but Scott Sinclair commented: "In the masked array module we should doc the methods. The functions automatically have the same docstring." Is the present issue an instance where Scott's second statement is invalid, an instance where its validity is resulting in a poor docstring for the function, or an instance in which Scott's "recommendation" was not followed? In any event, Ralf Gommers agreed w/ Scott's first statement, I'm neutral, and no one else appears to have "voted"... DG -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pgmdevlist at gmail.com Fri Feb 12 22:09:28 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 12 Feb 2010 22:09:28 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121057n5f136008l55273e232f98ef29@mail.gmail.com> <45d1ab481002121226p2932d802w90f8de4bc0b34387@mail.gmail.com> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> Message-ID: On Feb 12, 2010, at 8:14 PM, David Goldsmith wrote > Is the present issue an instance where Scott's second statement is invalid, an instance where its validity is resulting in a poor docstring for the function, or an instance in which Scott's "recommendation" was not followed? The methods' docstring are fine, but we could improve the way the corresponding function docstrings are created. 
From d.l.goldsmith at gmail.com Fri Feb 12 23:01:52 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 12 Feb 2010 20:01:52 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> Message-ID: <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> On Fri, Feb 12, 2010 at 7:09 PM, Pierre GM wrote: > On Feb 12, 2010, at 8:14 PM, David Goldsmith wrote > > > Is the present issue an instance where Scott's second statement is > invalid, an instance where its validity is resulting in a poor docstring for > the function, or an instance in which Scott's "recommendation" was not > followed? > > The methods' docstring are fine, but we could improve the way the > corresponding function docstrings are created. > Does anyone have an idea of how universal of a problem this is (i.e., is it just confined to ma)? Scott's statement appears to imply that he thought there was no problem at all. DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Feb 13 03:11:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Feb 2010 03:11:20 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: Mmh, today I got bitten by this again. 
It took me a while to figure out what was going on while trying to construct a pedagogical example manipulating numpy poly1d objects, and after searching for 'poly1d multiplication float' in my gmail inbox, the *only* post I found was this old one of mine, so I guess I'll just resuscitate it: On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez wrote: > Hi all, > > consider this little script: > > from numpy import poly1d, float, float32 > p=poly1d([1.,2.]) > three=float(3) > three32=float32(3) > > print 'three*p:',three*p > print 'three32*p:',three32*p > print 'p*three32:',p*three32 > > > which produces when run: > > In [3]: run pol1d.py > three*p: > 3 x + 6 > three32*p: [ 3. ?6.] > p*three32: > 3 x + 6 > > > The fact that multiplication between poly1d objects and numbers is: > > - non-commutative when the numbers are numpy scalars > - different for the same number if it is a python float vs a numpy scalar > > is rather unpleasant, and I can see this causing hard to find bugs, > depending on whether your code gets a parameter that came as a python > float or a numpy one. > > This was found today by a colleague on numpy 1.0.4.dev3937. ? It feels > like a bug to me, do others agree? Or is it consistent with a part of > the zen of numpy I've missed thus far? Tim H. mentioned how it might be tricky to fix. I'm wondering if there are any new ideas since on this front, because it's really awkward to explain to new students that poly1d objects have this kind of odd behavior regarding operations with scalars. The same underlying problem happens for addition, but in this case the answer (depending on the order of operations) changes even more: In [560]: p Out[560]: poly1d([ 1., 2.]) In [561]: print(p) 1 x + 2 In [562]: p+3 Out[562]: poly1d([ 1., 5.]) In [563]: p+three32 Out[563]: poly1d([ 1., 5.]) In [564]: three32+p Out[564]: array([ 4., 5.]) # !!! 
I'm ok with teaching students that in floating point, basic algebraic operations may not be exactly associative and that ignoring this fact can lead to nasty surprises. But explaining that a+b and b+a give completely different *types* of answer is kind of defeating my 'python is the simple language you want to learn' :) Is this really unfixable, or does one of our resident gurus have some ideas on how to approach the problem? Thanks! f From josef.pktd at gmail.com Sat Feb 13 03:41:09 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 13 Feb 2010 03:41:09 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: <1cd32cbb1002130041j45f5200g6c7fda3da7e27627@mail.gmail.com> On Sat, Feb 13, 2010 at 3:11 AM, Fernando Perez wrote: > Mmh, today I got bitten by this again. ?It took me a while to figure > out what was going on while trying to construct a pedagogical example > manipulating numpy poly1d objects, and after searching for 'poly1d > multiplication float' in my gmail inbox, the *only* post I found was > this old one of mine, so I guess I'll just resuscitate it: > > On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez wrote: >> Hi all, >> >> consider this little script: >> >> from numpy import poly1d, float, float32 >> p=poly1d([1.,2.]) >> three=float(3) >> three32=float32(3) >> >> print 'three*p:',three*p >> print 'three32*p:',three32*p >> print 'p*three32:',p*three32 >> >> >> which produces when run: >> >> In [3]: run pol1d.py >> three*p: >> 3 x + 6 >> three32*p: [ 3. ?6.] 
>> p*three32: >> 3 x + 6 >> >> >> The fact that multiplication between poly1d objects and numbers is: >> >> - non-commutative when the numbers are numpy scalars >> - different for the same number if it is a python float vs a numpy scalar >> >> is rather unpleasant, and I can see this causing hard to find bugs, >> depending on whether your code gets a parameter that came as a python >> float or a numpy one. >> >> This was found today by a colleague on numpy 1.0.4.dev3937. ? It feels >> like a bug to me, do others agree? Or is it consistent with a part of >> the zen of numpy I've missed thus far? > > Tim H. mentioned how it might be tricky to fix. I'm wondering if there > are any new ideas since on this front, because it's really awkward to > explain to new students that poly1d objects have this kind of odd > behavior regarding operations with scalars. > > The same underlying problem happens for addition, but in this case the > answer (depending on the order of operations) changes even more: > > In [560]: p > Out[560]: poly1d([ 1., ?2.]) > > In [561]: print(p) > > 1 x + 2 > > In [562]: p+3 > Out[562]: poly1d([ 1., ?5.]) > > In [563]: p+three32 > Out[563]: poly1d([ 1., ?5.]) > > In [564]: three32+p > Out[564]: array([ 4., ?5.]) ?# !!! > > I'm ok with teaching students that in floating point, basic algebraic > operations may not be exactly associative and that ignoring this fact > can lead to nasty surprises. ?But explaining that a+b and b+a give > completely different *types* of answer is kind of defeating my 'python > is the simple language you want to learn' :) > > Is this really unfixable, or does one of our resident gurus have some > ideas on how to approach the problem? >From several recent discussion about selecting which method is called, it looks like multiplication and addition could easily be fixed by adding a higher __array_priority__ to poly1d. 
I didn't see any __array_priority__ specified in class poly1d(object) For the discussion about fixing equal, notequal or whichever other methods cannot be changed by __array_priority__ , I haven't seen any solution. (but maybe I'm wrong) Josef > Thanks! > > f > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From wkerzendorf at googlemail.com Sat Feb 13 04:20:53 2010 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 13 Feb 2010 20:20:53 +1100 Subject: [Numpy-discussion] Multithreading support Message-ID: Dear all, I don't know much about parallel programming so I don't know how easy it is to do that: When doing simple arrray operations like adding two arrays or adding a number to the array, is numpy able to put this on multiple cores? I have tried it but it doesnt seem to do that. Is there a special multithread implementation of numpy. IDL has this feature where it checks how many cores available and uses them. This feature in numpy would make an already amazing package even better. Is this feature coming in numpy? Is there some sort of ETA on that? Thanks in advance Wolfgang From cournape at gmail.com Sat Feb 13 07:20:28 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 13 Feb 2010 21:20:28 +0900 Subject: [Numpy-discussion] Multithreading support In-Reply-To: References: Message-ID: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> On Sat, Feb 13, 2010 at 6:20 PM, Wolfgang Kerzendorf wrote: > Dear all, > > I don't know much about parallel programming so I don't know how easy it is to do that: When doing simple arrray operations like adding two arrays or adding a number to the array, is numpy able to put this on multiple cores? I have tried it but it doesnt seem to do that. Is there a special multithread implementation of numpy. 
Depending on your definition of simple operations, Numpy supports multithreaded execution or not. For ufuncs (which is used for things like adding two arrays together, etc...), there is no multithread support. > > IDL has this feature where it checks how many cores available and uses them. This feature in numpy would make an already amazing package even better. AFAIK, using multi-thread at the core level of NumPy has been tried only once a few years ago, without much success (no significant performance improvement). Maybe the approach was flawed in some ways. Some people have suggested using OpenMP, but nobody has every produced something significant AFAIK: http://mail.scipy.org/pipermail/numpy-discussion/2008-March/031897.html Note that Linear algebra operations can run in // depending on your libraries. In particular, the dot function runs in // if your blas/lapack does. cheers, David From renesd at gmail.com Sat Feb 13 07:25:56 2010 From: renesd at gmail.com (=?ISO-8859-1?Q?Ren=E9_Dudfield?=) Date: Sat, 13 Feb 2010 14:25:56 +0200 Subject: [Numpy-discussion] Multithreading support In-Reply-To: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> References: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> Message-ID: <64ddb72c1002130425p24c62a8end72ecd7a87f1003b@mail.gmail.com> hi, see: http://numcorepy.blogspot.com/ They see a benefit when working with large arrays. Otherwise you are limited by memory - and the extra cores don't help with memory bandwidth. cheers, On Sat, Feb 13, 2010 at 2:20 PM, David Cournapeau wrote: > On Sat, Feb 13, 2010 at 6:20 PM, Wolfgang Kerzendorf > wrote: > > Dear all, > > > > I don't know much about parallel programming so I don't know how easy it > is to do that: When doing simple arrray operations like adding two arrays or > adding a number to the array, is numpy able to put this on multiple cores? I > have tried it but it doesnt seem to do that. Is there a special multithread > implementation of numpy. 
Depending on your definition of simple operations, Numpy supports multithreaded execution or not. For ufuncs (which is used for things like adding two arrays together, etc...), there is no multithread support. > > IDL has this feature where it checks how many cores are available and uses them. This feature in numpy would make an already amazing package even better. AFAIK, using multi-thread at the core level of NumPy has been tried only once a few years ago, without much success (no significant performance improvement). Maybe the approach was flawed in some ways. Some people have suggested using OpenMP, but nobody has ever produced something significant AFAIK: http://mail.scipy.org/pipermail/numpy-discussion/2008-March/031897.html Note that Linear algebra operations can run in // depending on your libraries. In particular, the dot function runs in // if your blas/lapack does. cheers, David From renesd at gmail.com Sat Feb 13 07:25:56 2010 From: renesd at gmail.com (René Dudfield) Date: Sat, 13 Feb 2010 14:25:56 +0200 Subject: [Numpy-discussion] Multithreading support In-Reply-To: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> References: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> Message-ID: <64ddb72c1002130425p24c62a8end72ecd7a87f1003b@mail.gmail.com> hi, see: http://numcorepy.blogspot.com/ They see a benefit when working with large arrays. Otherwise you are limited by memory - and the extra cores don't help with memory bandwidth. cheers, On Sat, Feb 13, 2010 at 2:20 PM, David Cournapeau wrote: > On Sat, Feb 13, 2010 at 6:20 PM, Wolfgang Kerzendorf > wrote: > > Dear all, > > > > I don't know much about parallel programming so I don't know how easy it > is to do that: When doing simple array operations like adding two arrays or > adding a number to the array, is numpy able to put this on multiple cores? I > have tried it but it doesn't seem to do that. Is there a special multithread > implementation of numpy.
From charlesr.harris at gmail.com Sat Feb 13 10:15:54 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 08:15:54 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: <1cd32cbb1002130041j45f5200g6c7fda3da7e27627@mail.gmail.com> References: <1cd32cbb1002130041j45f5200g6c7fda3da7e27627@mail.gmail.com> Message-ID: On Sat, Feb 13, 2010 at 1:41 AM, wrote: > On Sat, Feb 13, 2010 at 3:11 AM, Fernando Perez > wrote: > > Mmh, today I got bitten by this again. It took me a while to figure > > out what was going on while trying to construct a pedagogical example > > manipulating numpy poly1d objects, and after searching for 'poly1d > > multiplication float' in my gmail inbox, the *only* post I found was > > this old one of mine, so I guess I'll just resuscitate it: > > > > On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez > wrote: > >> Hi all, > >> > >> consider this little script: > >> > >> from numpy import poly1d, float, float32 > >> p=poly1d([1.,2.]) > >> three=float(3) > >> three32=float32(3) > >> > >> print 'three*p:',three*p > >> print 'three32*p:',three32*p > >> print 'p*three32:',p*three32 > >> > >> > >> which produces when run: > >> > >> In [3]: run pol1d.py > >> three*p: > >> 3 x + 6 > >> three32*p: [ 3. 6.] > >> p*three32: > >> 3 x + 6 > >> > >> > >> The fact that multiplication between poly1d objects and numbers is: > >> > >> - non-commutative when the numbers are numpy scalars > >> - different for the same number if it is a python float vs a numpy > scalar > >> > >> is rather unpleasant, and I can see this causing hard to find bugs, > >> depending on whether your code gets a parameter that came as a python > >> float or a numpy one. > >> > >> This was found today by a colleague on numpy 1.0.4.dev3937. It feels > >> like a bug to me, do others agree? Or is it consistent with a part of > >> the zen of numpy I've missed thus far? > > > > Tim H. 
mentioned how it might be tricky to fix. I'm wondering if there > > are any new ideas since on this front, because it's really awkward to > > explain to new students that poly1d objects have this kind of odd > > behavior regarding operations with scalars. > > > > The same underlying problem happens for addition, but in this case the > > answer (depending on the order of operations) changes even more: > > > > In [560]: p > > Out[560]: poly1d([ 1., 2.]) > > > > In [561]: print(p) > > > > 1 x + 2 > > > > In [562]: p+3 > > Out[562]: poly1d([ 1., 5.]) > > > > In [563]: p+three32 > > Out[563]: poly1d([ 1., 5.]) > > > > In [564]: three32+p > > Out[564]: array([ 4., 5.]) # !!! > > > > I'm ok with teaching students that in floating point, basic algebraic > > operations may not be exactly associative and that ignoring this fact > > can lead to nasty surprises. But explaining that a+b and b+a give > > completely different *types* of answer is kind of defeating my 'python > > is the simple language you want to learn' :) > > > > Is this really unfixable, or does one of our resident gurus have some > > ideas on how to approach the problem? > > >From several recent discussion about selecting which method is called, > it looks like multiplication and addition could easily be fixed by > adding a higher __array_priority__ to poly1d. I didn't see any > __array_priority__ specified in class poly1d(object) > > > For the discussion about fixing equal, notequal or whichever other > methods cannot be changed by __array_priority__ , I haven't seen any > solution. > > (but maybe I'm wrong) > > Josef > > > > > Thanks! 
> > > > f > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Feb 13 10:34:44 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 08:34:44 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 1:11 AM, Fernando Perez wrote: > Mmh, today I got bitten by this again. It took me a while to figure > out what was going on while trying to construct a pedagogical example > manipulating numpy poly1d objects, and after searching for 'poly1d > multiplication float' in my gmail inbox, the *only* post I found was > this old one of mine, so I guess I'll just resuscitate it: > > On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez > wrote: > > Hi all, > > > > consider this little script: > > > > from numpy import poly1d, float, float32 > > p=poly1d([1.,2.]) > > three=float(3) > > three32=float32(3) > > > > print 'three*p:',three*p > > print 'three32*p:',three32*p > > print 'p*three32:',p*three32 > > > > > > which produces when run: > > > > In [3]: run pol1d.py > > three*p: > > 3 x + 6 > > three32*p: [ 3. 6.] > > p*three32: > > 3 x + 6 > > > > > > The fact that multiplication between poly1d objects and numbers is: > > > > - non-commutative when the numbers are numpy scalars > > - different for the same number if it is a python float vs a numpy scalar > > > > is rather unpleasant, and I can see this causing hard to find bugs, > > depending on whether your code gets a parameter that came as a python > > float or a numpy one. 
> > > > This was found today by a colleague on numpy 1.0.4.dev3937. It feels > > like a bug to me, do others agree? Or is it consistent with a part of > > the zen of numpy I've missed thus far? > > Tim H. mentioned how it might be tricky to fix. I'm wondering if there > are any new ideas since on this front, because it's really awkward to > explain to new students that poly1d objects have this kind of odd > behavior regarding operations with scalars. > > The same underlying problem happens for addition, but in this case the > answer (depending on the order of operations) changes even more: > > In [560]: p > Out[560]: poly1d([ 1., 2.]) > > In [561]: print(p) > > 1 x + 2 > > In [562]: p+3 > Out[562]: poly1d([ 1., 5.]) > > In [563]: p+three32 > Out[563]: poly1d([ 1., 5.]) > > In [564]: three32+p > Out[564]: array([ 4., 5.]) # !!! > > I'm ok with teaching students that in floating point, basic algebraic > operations may not be exactly associative and that ignoring this fact > can lead to nasty surprises. But explaining that a+b and b+a give > completely different *types* of answer is kind of defeating my 'python > is the simple language you want to learn' :) > > Is this really unfixable, or does one of our resident gurus have some > ideas on how to approach the problem? > > The new polynomials don't have that problem. In [1]: from numpy.polynomial import Polynomial as Poly In [2]: p = Poly([1,2]) In [3]: 3*p Out[3]: Polynomial([ 3., 6.], [-1., 1.]) In [4]: p*3 Out[4]: Polynomial([ 3., 6.], [-1., 1.]) In [5]: float32(3)*p Out[5]: Polynomial([ 3., 6.], [-1., 1.]) In [6]: p*float32(3) Out[6]: Polynomial([ 3., 6.], [-1., 1.]) In [7]: 3.*p Out[7]: Polynomial([ 3., 6.], [-1., 1.]) In [8]: p*3. Out[8]: Polynomial([ 3., 6.], [-1., 1.]) In [9]: p + float32(3) Out[9]: Polynomial([ 4., 2.], [-1., 1.]) In [10]: float32(3) + p Out[10]: Polynomial([ 4., 2.], [-1., 1.]) They are only in the removed 1.4 release, unfortunately. 
You could just pull that folder and run them as a separate module. They do have a problem with ndarrays behaving differently on the left and right, but __array_priority__ can be used to fix that. I haven't made that last fix because I'm not quite sure how I want them to behave. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Sat Feb 13 10:57:18 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 13 Feb 2010 10:57:18 -0500 Subject: [Numpy-discussion] It'd be nice if rint Message-ID: It'd be nice if rint could directly return a dtype of my choice (an int-type, such as np.int32). From charlesr.harris at gmail.com Sat Feb 13 11:24:35 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 09:24:35 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? Message-ID: Hi All, Since there has been talk of deprecating the numarray and numeric compatibility parts of numpy for the upcoming 2.0 release I thought maybe we could consider a few other changes. First, numpy imports a ton of stuff by default and this is maintained for backward compatibility. Would this be a reasonable time to change that and require explicit imports for things like fft? Second, Poly1D has problems that aren't likely to get fixed, I would like to both deprecate the old polynomial support and make it not be imported by default. Thoughts? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Feb 13 12:04:04 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Feb 2010 12:04:04 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 10:34 AM, Charles R Harris wrote: > The new polynomials don't have that problem. > > In [1]: from numpy.polynomial import Polynomial as Poly > > In [2]: p = Poly([1,2]) Aha, great!
Many thanks, I can tell my students this, and just show them the caveat of calling float(x) on any scalar they want to use with the 'old' ones for now. I remember being excited about your work on the new Polys, but since I'm teaching with stock 1.3, I hadn't found them recently and just forgot about them. Excellent. One minor suggestion: I think it would be useful to have the new polys have some form of pretty-printing like the old ones. It is actually useful when working, to verify what one has at hand, to see an expanded printout like the old ones do: In [26]: p_old = numpy.poly1d([3, 2, 1]) In [27]: p_old Out[27]: poly1d([3, 2, 1]) In [28]: print(p_old) 2 3 x + 2 x + 1 Just yesterday I was validating some code against a symbolic construction with sympy, and it was handy to pretty-print them; I also think it makes them much easier to grasp for students new to the tools. In any case, thanks both for the tip and especially the code contribution! Cheers, f From charlesr.harris at gmail.com Sat Feb 13 12:24:40 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 10:24:40 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 10:04 AM, Fernando Perez wrote: > On Sat, Feb 13, 2010 at 10:34 AM, Charles R Harris > wrote: > > The new polynomials don't have that problem. > > > > In [1]: from numpy.polynomial import Polynomial as Poly > > > > In [2]: p = Poly([1,2]) > > Aha, great! Many thanks, I can tell my students this, and just show > them the caveat of calling float(x) on any scalar they want to use > with the 'old' ones for now. > > I remember being excited about your work on the new Polys, but since > I'm teaching with stock 1.3, I hadn't found them recently and just > forgot about them. Excellent. > > One minor suggestion: I think it would be useful to have the new > polys have some form of pretty-printing like the old ones. 
It is > actually useful when working, to verify what one has at hand, to see > an expanded printout like the old ones do: > > I thought about that, but decided it was best left to a derived class, say PrettyPoly ;) Overriding __repr__ and __str__ is an example where inheritance makes sense. > In [26]: p_old = numpy.poly1d([3, 2, 1]) > > In [27]: p_old > Out[27]: poly1d([3, 2, 1]) > > In [28]: print(p_old) > 2 > 3 x + 2 x + 1 > > Just yesterday I was validating some code against a symbolic > construction with sympy, and it was handy to pretty-print them; I also > think it makes them much easier to grasp for students new to the > tools. > > In any case, thanks both for the tip and especially the code contribution! > > Cheers, > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Feb 13 13:00:51 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 11:00:51 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 10:24 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sat, Feb 13, 2010 at 10:04 AM, Fernando Perez wrote: > >> On Sat, Feb 13, 2010 at 10:34 AM, Charles R Harris >> wrote: >> > The new polynomials don't have that problem. >> > >> > In [1]: from numpy.polynomial import Polynomial as Poly >> > >> > In [2]: p = Poly([1,2]) >> >> Aha, great! Many thanks, I can tell my students this, and just show >> them the caveat of calling float(x) on any scalar they want to use >> with the 'old' ones for now. >> >> I remember being excited about your work on the new Polys, but since >> I'm teaching with stock 1.3, I hadn't found them recently and just >> forgot about them. Excellent. >> >> One minor suggestion: I think it would be useful to have the new >> polys have some form of pretty-printing like the old ones. 
It is >> actually useful when working, to verify what one has at hand, to see >> an expanded printout like the old ones do: >> >> > I thought about that, but decided it was best left to a derived class, say > PrettyPoly ;) Overriding __repr__ and __str__ is an example where > inheritance makes sense. > > Hmm, and on testing it looks like maybe "isinstance" should be replaced with "type(s) is x" to avoid the left-right confusion when mixing derived classes with the base class. Binary operators play havoc with inheritance. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Sat Feb 13 13:17:18 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Sat, 13 Feb 2010 10:17:18 -0800 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <4B76ECAE.40809@noaa.gov> Charles R Harris wrote: > numpy imports a ton > of stuff by default and this is maintained for backward compatibility. > Would this be a reasonable time to change that and require explicit > imports for things like fft? absolutely! I'd love far more minimalist imports. This is particularly an issue when you are bundling things up with py2exe and the like. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From jh at physics.ucf.edu Sat Feb 13 13:23:04 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Sat, 13 Feb 2010 13:23:04 -0500 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: (numpy-discussion-request@scipy.org) References: Message-ID: Chuck Harris writes (on numpy-discussion): > Since there has been talk of deprecating the numarray and numeric > compatibility parts of numpy for the upcoming 2.0 release I thought maybe we > could consider a few other changes. 
First, numpy imports a ton of stuff by > default and this is maintained for backward compatibility. Would this be a > reasonable time to change that and require explicit imports for things like > fft? Second, Poly1D has problems that aren't likely to get fixed, I would > like to both deprecate the old polynomial support and make it not be > imported by default. > > Thoughts? I'd like to suggest that 2.0 include a fully-reviewed set of docstrings (except for the "unimportant" ones). Really, 1.0 should not have been released without documentation, but it was released prematurely anyway, and we've spent much of the 1.x release series fixing inconsistencies and other problems, as well as writing the draft docs now included in the releases. I look at 2.0 as our "real" 1.0, as do many others. I am posting a call for a (possibly paid) Django programmer who can add a second review capability to the doc wiki. That call is on scipy-dev, where discussion of the wiki and general documentation topics takes place. If you are interested, please respond there, not here. Discussion of whether to include reviewed docs in numpy 2.0 belongs here on numpy-discussion, of course. I think the main issue with regard to docs will be time frame. What is the time frame for a 2.0 release? Aside from docs and the things Chuck mentioned, I think a general design review would be a good idea, to root out things like any more lurking inconsistencies or disorganizations, such as the "median" problem. I guess that's what Chuck started, but should we formalize it by parceling out chunks of the package to 2-3 reviewers each for comment? The idea would be to root out problems, incompleteness, and disorganization, *not* to engage in a big rewrite that would massively break the API for everyone. Ideally, after 2.0 the changes would be improvements rather than API-breaking fixes. 
Thanks, --jh-- From charlesr.harris at gmail.com Sat Feb 13 13:31:46 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 11:31:46 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 11:23 AM, Joe Harrington wrote: > Chuck Harris writes (on numpy-discussion): > > > Since there has been talk of deprecating the numarray and numeric > > compatibility parts of numpy for the upcoming 2.0 release I thought maybe > we > > could consider a few other changes. First, numpy imports a ton of stuff > by > > default and this is maintained for backward compatibility. Would this be > a > > reasonable time to change that and require explicit imports for things > like > > fft? Second, Poly1D has problems that aren't likely to get fixed, I would > > like to both deprecate the old polynomial support and make it not be > > imported by default. > > > > Thoughts? > > I'd like to suggest that 2.0 include a fully-reviewed set of > docstrings (except for the "unimportant" ones). > > Really, 1.0 should not have been released without documentation, but > it was released prematurely anyway, and we've spent much of the 1.x > release series fixing inconsistencies and other problems, as well as > writing the draft docs now included in the releases. I look at 2.0 > as our "real" 1.0, as do many others. > > I am posting a call for a (possibly paid) Django programmer who can > add a second review capability to the doc wiki. That call is on > scipy-dev, where discussion of the wiki and general documentation > topics takes place. If you are interested, please respond there, not > here. Discussion of whether to include reviewed docs in numpy 2.0 > belongs here on numpy-discussion, of course. > > I think the main issue with regard to docs will be time frame. What > is the time frame for a 2.0 release? > > 2-3 weeks from now. 
> Aside from docs and the things Chuck mentioned, I think a general > design review would be a good idea, to root out things like any more > lurking inconsistencies or disorganizations, such as the "median" > problem. I guess that's what Chuck started, but should we formalize > it by parceling out chunks of the package to 2-3 reviewers each for > comment? The idea would be to root out problems, incompleteness, and > disorganization, *not* to engage in a big rewrite that would massively > break the API for everyone. > > Ideally, after 2.0 the changes would be improvements rather than > API-breaking fixes. > > We aren't going to have time to review and redesign numpy for 2.0. That's what 3.0 is for and that is probably a couple of years in the future. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Sat Feb 13 14:44:51 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 13 Feb 2010 13:44:51 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <094B5327-8D52-4CE9-BD4E-B19C1A23E62B@enthought.com> -1 -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 13, 2010, at 10:24 AM, Charles R Harris wrote: > Hi All, > > Since there has been talk of deprecating the numarray and numeric > compatibility parts of numpy for the upcoming 2.0 release I thought > maybe we could consider a few other changes. First, numpy imports a > ton of stuff by default and this is maintained for backward > compatibility. Would this be a reasonable time to change that and > require explicit imports for things like fft? Second, Poly1D has > problems that aren't likely to get fixed, I would like to both > deprecate the old polynomial support and make it not be imported by > default. > > Thoughts? 
> > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From oliphant at enthought.com Sat Feb 13 14:49:29 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 13 Feb 2010 13:49:29 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: This is exactly what I was worried about with calling the next release 2.0. This is not the time to change all the things we wish were done differently. The release is scheduled for 3 weeks. Travis -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 13, 2010, at 12:23 PM, Joe Harrington wrote: > Chuck Harris writes (on numpy-discussion): > >> Since there has been talk of deprecating the numarray and numeric >> compatibility parts of numpy for the upcoming 2.0 release I thought >> maybe we >> could consider a few other changes. First, numpy imports a ton of >> stuff by >> default and this is maintained for backward compatibility. Would >> this be a >> reasonable time to change that and require explicit imports for >> things like >> fft? Second, Poly1D has problems that aren't likely to get fixed, I >> would >> like to both deprecate the old polynomial support and make it not be >> imported by default. >> >> Thoughts? > > I'd like to suggest that 2.0 include a fully-reviewed set of > docstrings (except for the "unimportant" ones). > > Really, 1.0 should not have been released without documentation, but > it was released prematurely anyway, and we've spent much of the 1.x > release series fixing inconsistencies and other problems, as well as > writing the draft docs now included in the releases. I look at 2.0 > as our "real" 1.0, as do many others. > > I am posting a call for a (possibly paid) Django programmer who can > add a second review capability to the doc wiki. 
That call is on > scipy-dev, where discussion of the wiki and general documentation > topics takes place. If you are interested, please respond there, not > here. Discussion of whether to include reviewed docs in numpy 2.0 > belongs here on numpy-discussion, of course. > > I think the main issue with regard to docs will be time frame. What > is the time frame for a 2.0 release? > > Aside from docs and the things Chuck mentioned, I think a general > design review would be a good idea, to root out things like any more > lurking inconsistencies or disorganizations, such as the "median" > problem. I guess that's what Chuck started, but should we formalize > it by parceling out chunks of the package to 2-3 reviewers each for > comment? The idea would be to root out problems, incompleteness, and > disorganization, *not* to engage in a big rewrite that would massively > break the API for everyone. > > Ideally, after 2.0 the changes would be improvements rather than > API-breaking fixes. > > Thanks, > > --jh-- > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From xavier.gnata at gmail.com Sat Feb 13 14:50:45 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 13 Feb 2010 20:50:45 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <4B770295.8070408@gmail.com> On 02/13/2010 07:31 PM, Charles R Harris wrote: > > > On Sat, Feb 13, 2010 at 11:23 AM, Joe Harrington > wrote: > > Chuck Harris writes (on numpy-discussion): > > > Since there has been talk of deprecating the numarray and numeric > > compatibility parts of numpy for the upcoming 2.0 release I > thought maybe we > > could consider a few other changes. First, numpy imports a ton > of stuff by > > default and this is maintained for backward compatibility. 
Would > this be a > > reasonable time to change that and require explicit imports for > things like > > fft? Second, Poly1D has problems that aren't likely to get > fixed, I would > > like to both deprecate the old polynomial support and make it not be > > imported by default. > > > > Thoughts? > > I'd like to suggest that 2.0 include a fully-reviewed set of > docstrings (except for the "unimportant" ones). > > Really, 1.0 should not have been released without documentation, but > it was released prematurely anyway, and we've spent much of the 1.x > release series fixing inconsistencies and other problems, as well as > writing the draft docs now included in the releases. I look at 2.0 > as our "real" 1.0, as do many others. > > I am posting a call for a (possibly paid) Django programmer who can > add a second review capability to the doc wiki. That call is on > scipy-dev, where discussion of the wiki and general documentation > topics takes place. If you are interested, please respond there, not > here. Discussion of whether to include reviewed docs in numpy 2.0 > belongs here on numpy-discussion, of course. > > I think the main issue with regard to docs will be time frame. What > is the time frame for a 2.0 release? > > > 2-3 weeks from now. > > > Aside from docs and the things Chuck mentioned, I think a general > design review would be a good idea, to root out things like any more > lurking inconsistencies or disorganizations, such as the "median" > problem. I guess that's what Chuck started, but should we formalize > it by parceling out chunks of the package to 2-3 reviewers each for > comment? The idea would be to root out problems, incompleteness, and > disorganization, *not* to engage in a big rewrite that would massively > break the API for everyone. > > Ideally, after 2.0 the changes would be improvements rather than > API-breaking fixes. > > > We aren't going to have time to review and redesign numpy for 2.0. 
> That's what 3.0 is for and that is probably a couple of years in the > future. > When do you plan to fully support python3? In version 2.x ? 3.x (that would be sad). Xavier -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sat Feb 13 14:53:46 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 13 Feb 2010 20:53:46 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <4B77034A.7060705@gmail.com> IMHO 2.0 should support python3. That would be a major step and a good reason to call it 2.0. Xavier > This is exactly what I was worried about with calling the next release > 2.0. > > This is not the time to change all the things we wish were done > differently. > > The release is scheduled for 3 weeks. > > Travis > > > -- > (mobile phone of) > Travis Oliphant > Enthought, Inc. > 1-512-536-1057 > http://www.enthought.com > > On Feb 13, 2010, at 12:23 PM, Joe Harrington wrote: > > >> Chuck Harris writes (on numpy-discussion): >> >> >>> Since there has been talk of deprecating the numarray and numeric >>> compatibility parts of numpy for the upcoming 2.0 release I thought >>> maybe we >>> could consider a few other changes. First, numpy imports a ton of >>> stuff by >>> default and this is maintained for backward compatibility. Would >>> this be a >>> reasonable time to change that and require explicit imports for >>> things like >>> fft? Second, Poly1D has problems that aren't likely to get fixed, I >>> would >>> like to both deprecate the old polynomial support and make it not be >>> imported by default. >>> >>> Thoughts? >>> >> I'd like to suggest that 2.0 include a fully-reviewed set of >> docstrings (except for the "unimportant" ones). 
>> >> Really, 1.0 should not have been released without documentation, but >> it was released prematurely anyway, and we've spent much of the 1.x >> release series fixing inconsistencies and other problems, as well as >> writing the draft docs now included in the releases. I look at 2.0 >> as our "real" 1.0, as do many others. >> >> I am posting a call for a (possibly paid) Django programmer who can >> add a second review capability to the doc wiki. That call is on >> scipy-dev, where discussion of the wiki and general documentation >> topics takes place. If you are interested, please respond there, not >> here. Discussion of whether to include reviewed docs in numpy 2.0 >> belongs here on numpy-discussion, of course. >> >> I think the main issue with regard to docs will be time frame. What >> is the time frame for a 2.0 release? >> >> Aside from docs and the things Chuck mentioned, I think a general >> design review would be a good idea, to root out things like any more >> lurking inconsistencies or disorganizations, such as the "median" >> problem. I guess that's what Chuck started, but should we formalize >> it by parceling out chunks of the package to 2-3 reviewers each for >> comment? The idea would be to root out problems, incompleteness, and >> disorganization, *not* to engage in a big rewrite that would massively >> break the API for everyone. >> >> Ideally, after 2.0 the changes would be improvements rather than >> API-breaking fixes. 
>> >> Thanks, >> >> --jh-- >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From matthew.brett at gmail.com Sat Feb 13 14:59:32 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 13 Feb 2010 11:59:32 -0800 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <4B77034A.7060705@gmail.com> References: <4B77034A.7060705@gmail.com> Message-ID: <1e2af89e1002131159q9b0fe18j63b0aaab5faf288@mail.gmail.com> Hi, On Sat, Feb 13, 2010 at 11:53 AM, Xavier Gnata wrote: > IMHO 2.0 should support python3. > That would be a major step and a good reason to call it 2.0. I agree with Travis, I think we should try not to attach too much importance to the big number change, release 2.0 just taking care of the ABI compatibility with the usual feature-freeze for an upcoming release, and then we can release 3.0 with any major additions in due course, as the work gets done. Basically, the '2.0' label does not mean that there's open-season for feature changes at this point - that has to wait, if the release is going to be stable. Best, Matthew From charlesr.harris at gmail.com Sat Feb 13 15:01:19 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 13:01:19 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 12:49 PM, Travis Oliphant wrote: > This is exactly what I was worried about with calling the next release > 2.0. > > This is not the time to change all the things we wish were done > differently. > > Do you think it would be reasonable to make such changes in 2.x? They would be incompatible API changes, but at the python level, not the C level. 
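The sort of Python-level change in question is usually no more than a warning shim. A hypothetical sketch (the function and the module name are invented for illustration, not actual numpy code):

```python
import warnings

def warn_deprecated(name):
    # Emit a DeprecationWarning that points at the caller's import site.
    warnings.warn(
        "%s is deprecated and will be removed in a future release" % name,
        DeprecationWarning,
        stacklevel=2,
    )

# e.g. executed at the top of a hypothetical legacy compatibility module:
warn_deprecated("numpy.legacy_compat")
```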
Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi Sat Feb 13 15:28:03 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 13 Feb 2010 22:28:03 +0200
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <4B77034A.7060705@gmail.com>
References: <4B77034A.7060705@gmail.com>
Message-ID: <1266092883.4565.2.camel@Nokia-N900-42-11>

We will most likely have experimental py3 support in 2.0.

If you, or someone else wishes to help bringing 2.0 to fully work with Py3, now is a very good time to step up. How to give a hand:

1. Get my latest py3 branch from http://github.com/pv/numpy-work/tree/py3k
   Read doc/py3k.txt
2. Get py3 branch of nose (see doc/py3k.txt in the branch).
3. Build numpy, and run unit tests (build with "NPY_SEPARATE_BUILD=1 python3 setup.py build", numscons is not supported at the moment).
4. Fix bugs revealed by the unit tests. Currently, C code is mostly ported, so you can probably also help only by writing Python. There are about 100 test failures (of 2400) left.

Many test failures occur also because the tests are wrong. For instance: the numpy I/O requires bytes, but some tests supply it unicode strings -> need changes in tests.

One useful thing to do is to help with the str/bytes transition on the python side. Since the same code must work with pythons from 2.4 to 3.0 (for 3 it's automatically run through 2to3 on build), there are some helpers in numpy.compat.py3k for helping with this. See several previous commits on the branch on that.

Another useful thing could be to port an existing numpy-using code to py3 and test if it works with the current py3k branch, what fails, and if the failures are already revealed by unit tests. Even if it does not work at the moment, having it at hand will help testing the rc when it comes. This, because I wouldn't completely rely on our unit test coverage.

Finally, try to write some PEP 3118 using code, and check how it works.
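A tiny PEP 3118 exercise of the kind suggested here, using the stdlib `memoryview` on the consumer side (an illustrative sketch, not code from the branch):

```python
import numpy as np

a = np.arange(6, dtype=np.int32).reshape(2, 3)
m = memoryview(a)     # export side: goes through the PEP 3118 buffer protocol
print(m.ndim, m.shape, m.itemsize)   # 2 (2, 3) 4
b = np.asarray(m)     # import side: NumPy consumes the buffer again
assert (b == a).all()
```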
(You can use python >= 2.6 for this if you get numpy from the py3k branch.)

--
Pauli Virtanen

----- Original message -----
> IMHO 2.0 should support python3.
> That would be a major step and a good reason to call it 2.0.
>
> Xavier
>
> > This is exactly what I was worried about with calling the next release 2.0.
> >
> > This is not the time to change all the things we wish were done differently.
> >
> > The release is scheduled for 3 weeks.
> >
> > Travis
> >
> > --
> > (mobile phone of)
> > Travis Oliphant
> > Enthought, Inc.
> > 1-512-536-1057
> > http://www.enthought.com
> >
> > On Feb 13, 2010, at 12:23 PM, Joe Harrington wrote:
> >
> > > Chuck Harris writes (on numpy-discussion):
> > >
> > > > Since there has been talk of deprecating the numarray and numeric compatibility parts of numpy for the upcoming 2.0 release I thought maybe we could consider a few other changes. First, numpy imports a ton of stuff by default and this is maintained for backward compatibility. Would this be a reasonable time to change that and require explicit imports for things like fft? Second, Poly1D has problems that aren't likely to get fixed, I would like to both deprecate the old polynomial support and make it not be imported by default.
> > > >
> > > > Thoughts?
> > >
> > > I'd like to suggest that 2.0 include a fully-reviewed set of docstrings (except for the "unimportant" ones).
> > >
> > > Really, 1.0 should not have been released without documentation, but it was released prematurely anyway, and we've spent much of the 1.x release series fixing inconsistencies and other problems, as well as writing the draft docs now included in the releases. I look at 2.0 as our "real" 1.0, as do many others.
> > > I am posting a call for a (possibly paid) Django programmer who can add a second review capability to the doc wiki. That call is on scipy-dev, where discussion of the wiki and general documentation topics takes place. If you are interested, please respond there, not here. Discussion of whether to include reviewed docs in numpy 2.0 belongs here on numpy-discussion, of course.
> > >
> > > I think the main issue with regard to docs will be time frame. What is the time frame for a 2.0 release?
> > >
> > > Aside from docs and the things Chuck mentioned, I think a general design review would be a good idea, to root out things like any more lurking inconsistencies or disorganizations, such as the "median" problem. I guess that's what Chuck started, but should we formalize it by parceling out chunks of the package to 2-3 reviewers each for comment? The idea would be to root out problems, incompleteness, and disorganization, *not* to engage in a big rewrite that would massively break the API for everyone.
> > >
> > > Ideally, after 2.0 the changes would be improvements rather than API-breaking fixes.
> > >
> > > Thanks,
> > >
> > > --jh--
> > > _______________________________________________
> > > NumPy-Discussion mailing list
> > > NumPy-Discussion at scipy.org
> > > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion

From millman at berkeley.edu Sat Feb 13 15:34:57 2010
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 13 Feb 2010 14:34:57 -0600
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: 
References: 
Message-ID: 

On Sat, Feb 13, 2010 at 1:49 PM, Travis Oliphant wrote:
> This is exactly what I was worried about with calling the next release 2.0.
>
> This is not the time to change all the things we wish were done differently.
>
> The release is scheduled for 3 weeks.

Hey Travis,

I agree with your general sentiment (and I assume Chuck does too). And I don't think either of us is suggesting that we change all the things we want done differently. I do think that it is reasonable for us to suggest a few changes that could be implemented quickly to the list and just have a quick up or down vote on that specific issue without having to have a general discussion regarding what 2.0 is. So there are at least three suggestions on the table right now:

1. I would like to add deprecation warnings for the numarray and numeric support (but leave all the code in at least until the 3.0 release).
2. Chuck proposed requiring explicit imports for things like fft.
3. Chuck also suggested deprecating the old polynomial support and making it not be imported by default.

These things are relatively small and easy to implement. If someone is willing to do the work within, say a week, I think we should go for it. I am sure others may disagree. Why can't we just agree that the release is scheduled for 3 weeks from now? And if someone suggests a change that they commit to implementing in one week's time and that won't require very much new code (for instance a deprecation warning), let's just vote for or against it. If it seems like people are generally in favor of the change, let's include it.
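For suggestion 2, the change would be visible to users only in how subpackages are spelled at import time; an illustrative sketch:

```python
# Today a bare `import numpy` also makes numpy.fft available; under the
# proposal, user code would import such subpackages explicitly:
import numpy as np
from numpy import fft

x = fft.fft([0.0, 1.0, 0.0, -1.0])
print(x.shape)   # (4,)
```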
So without changing the timing of the next release, would you still be against the three changes suggested by Chuck and me? I am in favor of making the above three changes as long as they don't add a ton of new code, functionality, or delay the release in any way. What do other people think? Thanks, Jarrod From xavier.gnata at gmail.com Sat Feb 13 16:07:14 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 13 Feb 2010 22:07:14 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <1266092883.4565.2.camel@Nokia-N900-42-11> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> Message-ID: <4B771482.8080306@gmail.com> On 02/13/2010 09:28 PM, Pauli Virtanen wrote: > We will most likely have experimental py3 support in 2.0. > > If you, or someone else wishes to help bringing 2.0 to fully work with Py3, now is a very good time to step up. > > How to give a hand: > > 1. Get my latest py3 branch from http://github.com/pv/numpy-work/tree/py3k > > Read doc/py3k.txt > > 2. Get py3 branch of nose (see doc/py3k.txt in the branch). > > 3. Build numpy, and run unit tests (build with "NPY_SEPARATE_BUILD=1 python3 setup.py build", numscons is not supported at the moment). > > 4. Fix bugs revealed by the unit tests. Currently, C code is mostly ported, so you can probably also help only by writing Python. There are about 100 test failures (of 2400) left. > > Many test failures occur also because the tests are wrong. For instance: the numpy I/O requires bytes, but some tests supply it unicode strings -> need changes in tests. > > One useful thing to do is to help with the str/bytes transition on the python side. Since the same code must work with pythons from 2.4 to 3.0 (for 3 it's automatically run through 2to3 on build), there are some helpers in numpy.compat.py3k for helping with this. See several previous commits on the branch on that. 
> Another useful thing could be to port an existing numpy-using code to py3 and test if it works with the current py3k branch, what fails, and if the failures are already revealed by unit tests. Even if it does not work at the moment, having it at hand will help testing the rc when it comes. This, because I wouldn't completely rely on our unit test coverage.
>
> Finally, try to write some PEP 3118 using code, and check how it works. (You can use python >= 2.6 for this if you get numpy from the py3k branch.)

Well I don't know where I should report that but your branch doesn't compile with python3.1:

numpy/core/blasdot/_dotblas.c: In function 'dotblas_matrixproduct':
numpy/core/blasdot/_dotblas.c:404: error: 'PyArrayObject' has no member named 'ob_type'
numpy/core/blasdot/_dotblas.c:404: error: 'PyArrayObject' has no member named 'ob_type'
numpy/core/blasdot/_dotblas.c:407: error: 'PyArrayObject' has no member named 'ob_type'

and so on...

AFAICS, it is easy to fix using the Py_TYPE macro. For instance:

- if (ap1->ob_type != ap2->ob_type) {
+ if (Py_TYPE(ap1) != Py_TYPE(ap2)) {

Xavier

From charlesr.harris at gmail.com Sat Feb 13 16:15:11 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 13 Feb 2010 14:15:11 -0700
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <4B771482.8080306@gmail.com>
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com>
Message-ID: 

On Sat, Feb 13, 2010 at 2:07 PM, Xavier Gnata wrote:
> On 02/13/2010 09:28 PM, Pauli Virtanen wrote:
> > We will most likely have experimental py3 support in 2.0.
> >
> > If you, or someone else wishes to help bringing 2.0 to fully work with Py3, now is a very good time to step up.
> >
> > How to give a hand:
> >
> > 1. Get my latest py3 branch from http://github.com/pv/numpy-work/tree/py3k
> >
> > Read doc/py3k.txt
> >
> > 2. Get py3 branch of nose (see doc/py3k.txt in the branch).
> >
> > 3.
Build numpy, and run unit tests (build with "NPY_SEPARATE_BUILD=1 > python3 setup.py build", numscons is not supported at the moment). > > > > 4. Fix bugs revealed by the unit tests. Currently, C code is mostly > ported, so you can probably also help only by writing Python. There are > about 100 test failures (of 2400) left. > > > > Many test failures occur also because the tests are wrong. For instance: > the numpy I/O requires bytes, but some tests supply it unicode strings -> > need changes in tests. > > > > One useful thing to do is to help with the str/bytes transition on the > python side. Since the same code must work with pythons from 2.4 to 3.0 (for > 3 it's automatically run through 2to3 on build), there are some helpers in > numpy.compat.py3k for helping with this. See several previous commits on the > branch on that. > > > > Another useful thing could be to port an existing numpy-using code to py3 > and test if it works with the current py3k branch, what fails, and if the > failures are already revealed by unit tests. Even if it does not work at the > moment, having it at hand will help testing the rc when it comes. This, > because I wouldn't completely rely on our unit test coverage. > > > > Finally, try to write some PEP 3118 using code, and check how it works. > (You can use python >= 2.6 for this if you get numpy from the py3k branch.) > > > > > Well I don't know where I should report that but your branch doesn't > compile with python3.1: > numpy/core/blasdot/_dotblas.c: In function > ?dotblas_matrixproduct?: > numpy/core/blasdot/_dotblas.c:404: error: ?PyArrayObject? has no member > named ?ob_type? > numpy/core/blasdot/_dotblas.c:404: error: ?PyArrayObject? has no member > named ?ob_type? > numpy/core/blasdot/_dotblas.c:407: error: ?PyArrayObject? has no member > named ?ob_type? > > and so on... > > > AFAICS, it is easy to fix using the Py_TYPE macro. 
> > For instance: > - if (ap1->ob_type != ap2->ob_type) { > + if (Py_TYPE(ap1) != Py_TYPE(ap2)) { > > Pauli fixed a lot of those. Did you remove the old build directory and all that stuff? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sat Feb 13 16:27:58 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 13 Feb 2010 22:27:58 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> Message-ID: <4B77195E.5050308@gmail.com> On 02/13/2010 10:15 PM, Charles R Harris wrote: > > > On Sat, Feb 13, 2010 at 2:07 PM, Xavier Gnata > wrote: > > On 02/13/2010 09:28 PM, Pauli Virtanen wrote: > > We will most likely have experimental py3 support in 2.0. > > > > If you, or someone else wishes to help bringing 2.0 to fully > work with Py3, now is a very good time to step up. > > > > How to give a hand: > > > > 1. Get my latest py3 branch from > http://github.com/pv/numpy-work/tree/py3k > > > > Read doc/py3k.txt > > > > 2. Get py3 branch of nose (see doc/py3k.txt in the branch). > > > > 3. Build numpy, and run unit tests (build with > "NPY_SEPARATE_BUILD=1 python3 setup.py build", numscons is not > supported at the moment). > > > > 4. Fix bugs revealed by the unit tests. Currently, C code is > mostly ported, so you can probably also help only by writing > Python. There are about 100 test failures (of 2400) left. > > > > Many test failures occur also because the tests are wrong. For > instance: the numpy I/O requires bytes, but some tests supply it > unicode strings -> need changes in tests. > > > > One useful thing to do is to help with the str/bytes transition > on the python side. Since the same code must work with pythons > from 2.4 to 3.0 (for 3 it's automatically run through 2to3 on > build), there are some helpers in numpy.compat.py3k for helping > with this. 
See several previous commits on the branch on that. > > > > Another useful thing could be to port an existing numpy-using > code to py3 and test if it works with the current py3k branch, > what fails, and if the failures are already revealed by unit > tests. Even if it does not work at the moment, having it at hand > will help testing the rc when it comes. This, because I wouldn't > completely rely on our unit test coverage. > > > > Finally, try to write some PEP 3118 using code, and check how it > works. (You can use python >= 2.6 for this if you get numpy from > the py3k branch.) > > > > > Well I don't know where I should report that but your branch doesn't > compile with python3.1: > numpy/core/blasdot/_dotblas.c: In function > 'dotblas_matrixproduct': > numpy/core/blasdot/_dotblas.c:404: error: 'PyArrayObject' has no > member > named 'ob_type' > numpy/core/blasdot/_dotblas.c:404: error: 'PyArrayObject' has no > member > named 'ob_type' > numpy/core/blasdot/_dotblas.c:407: error: 'PyArrayObject' has no > member > named 'ob_type' > > and so on... > > > AFAICS, it is easy to fix using the Py_TYPE macro. > > For instance: > - if (ap1->ob_type != ap2->ob_type) { > + if (Py_TYPE(ap1) != Py_TYPE(ap2)) { > > > Pauli fixed a lot of those. Did you remove the old build directory and > all that stuff? > > Chuck > Well I ran git clone git://github.com/pv/numpy-work.git an hour ago (in an empty directory) Xavier -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedrichromstedt at gmail.com Sat Feb 13 16:40:49 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 13 Feb 2010 22:40:49 +0100 Subject: [Numpy-discussion] A New Coercion Model Message-ID: Hi, there recently were some problems occurring when coercing numpy.ndarrays with something else.
I would like to try to review the examples I've seen so far (there are two examples in only about one week), and also would try to summarise the approach taken up to now as far as I understand. Furthermore, I would like to start discussion about a more robust coercion model. I'm not sure whether I'm in the correct position for starting such a deep thread, but I simply hope you appreciate and excuse me for trying to do so. EXAMPLES 1. numpy.poly1d. The problem arises when coercing a numpy.poly1d instance as second operand with a numpy.ndarray (e.g., a scalar array) or a numpy scalar (like a numpy.int32 instance). It seems that numpy tries to create an array out of the poly1d instance, maybe via asarray(), and this results indeed in an ndarray. But it isn't clear to me, why. poly1d supports __len__() and __getitem__(), but one can easily check that a class supporting this is _not_ converted into an ndarray by numpy.asarray(). Also int32(2) * X([1,2]) where X is the resp. class does yield TypeError: unsupported operand type(s) for *: 'int' and 'instance'. Same applies for a class supporting __iter__(). Also same for a class supporting __iter__, __len__, __getitem__ simultaneously. isinstance(numpy.poly1d([1]), numpy.ndarray) returns False. 2. upy.undarray. As I reported myself, I encountered the same problem with my own code with also non-numpy.ndarray objects. The class also supports __len__() and __getitem__(), but I found that it is treated as a scalar. (Nevertheless, changing this behaviour wouldn't solve the problem.) numpy.asarray(upy.undarray(...)) returns a scalar array, opposed to the numpy.asarray(poly1d(...)) case, where it returns a corresponding ndarray. It seems not to matter that I never tried with numpy.int32 instances on the left hand side of, e.g., multiplication. 3. numpy.polynomial.Polynomial. I have no idea how this was solved for Polynomial. Even numpy.int32(2).__mul__(p) with p being a Polynomial instance works fine. Maybe it's a static route?
Polynomial([1, 2]).__array_priority__ does not exist. CURRENT APPROACH So, this seems easy: There is __array_priority__, there may be static routes hardcoded, and there is numpy.set_numeric_ops(). Are there more ways implemented to treat the problem that I'm not aware of? DISCUSSION __array_priority__ induces a linear order on all classes in the namespace. I think, I guess, this is not appropriate. At least my intuition tells me that this easily breaks down. Consider A and B having precedence over ndarray, for supporting an expression with A() as right operand. Now, C is introduced, which wants to have precedence over A, while B remains more precedent than the new C. Now everything depends on the definition of A and B. The aim is unreachable if the __array_priority__s of A and B are by coincidence not compatible with these new elements of the relation. (It's maybe a bit silly example, but I have no better at hand.) In fact, no one knows the precedence of something over anything else as long as there is no definition for that. There may even occur rings in the relation. Calling the relation >, although it's no longer assumed to be linear, there may hold A > B > C > A. What about implementing a simple class "Relation" in numpy. Users may register relations in the numpy.relations instance they are sure to exist. For instance, I would say numpy.relations.set(lower = numpy.ndarray, higher = upy.undarray). Same for the numpy.poly1d thing, and it could also be used for numpy.polynomial.Polynomial. Though there are subtleties with inheritance. E.g. when class X(numpy.ndarray): [...], it should be treated as numpy.ndarray as long as there is no definition involving X directly. One could simply say, the last defined relation rules first, "LIFO". I could code a Python module for this, but it would slow numpy down. Maybe a C implementation would be helpful. I have much experience with C++ and Python both, but no experience with building numpy ...
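As a concrete illustration of the mechanism under discussion (the `Quantity` class here is hypothetical): a non-ndarray object whose `__array_priority__` exceeds ndarray's and which defines the reflected operation makes ndarray's operator return NotImplemented, so Python falls back to the object's own method instead of coercing it to an array:

```python
import numpy as np

class Quantity:
    # A priority higher than ndarray's asks numpy to defer to our
    # reflected operator instead of coercing us with asarray().
    __array_priority__ = 1000

    def __init__(self, values):
        self.values = np.asarray(values, dtype=float)

    def __rmul__(self, other):
        # Called for `ndarray * Quantity` because of the priority above.
        return Quantity(np.asarray(other) * self.values)

result = np.array([1.0, 2.0]) * Quantity([3.0, 4.0])
# result is a Quantity wrapping [3.0, 8.0], not an object ndarray
```

Without the `__array_priority__` attribute, the same expression would instead go through numpy's coercion machinery, which is exactly the behavior the examples above complain about.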
But now, I would appreciate any response in discussion. Friedrich P.S. To me it's clear that this would apply to 3.0 (or whatever) ... From oliphant at enthought.com Sat Feb 13 17:02:37 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 13 Feb 2010 16:02:37 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <5D454CBF-C243-4C68-9C37-80DE46D6E835@enthought.com> -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 13, 2010, at 2:34 PM, Jarrod Millman wrote: > On Sat, Feb 13, 2010 at 1:49 PM, Travis Oliphant > wrote: >> This is exactly what I was worried about with calling the next >> release >> 2.0. >> >> This is not the time to change all the things we wish were done >> differently. >> >> The release is scheduled for 3 weeks. > > Hey Travis, > > I agree with your general sentiment (and I assume Chuck does too). > And I don't think either of us is suggesting that we change all the > things we want done differently. I do think that it is reasonable for > us to suggest a few changes that could be implemented quickly to the > list and just have a quick up or down vote on that specific issue > without having to have a general discussion regarding what 2.0 is. > OK. > So there are at least three suggestions on the table right now: > > 1. I would like to add deprecation warnings for the numarray and > numeric support (but leave all the code in at least until the 3.0 > release). > +1 > 2. Chuck proposed requiring explicit imports for things like fft. > +0 > 3. Chuck also suggested deprecating the old polynomial support and > make it not be imported by default. > I need to review his Poly class before giving a vote. I don't like the fact that it removes pretty printing by default. -Travis > These things are relatively small and easy to implement. If someone > is willing to do the work within, say a week, I think we should go for > it. I am sure others may disagree. 
> > Why can't we just agree that the release is scheduled for 3 weeks from > now. And if someone suggests a change that they commit to > implementing in one weeks time and that won't require very much new > code (for instance a deprecation warning), let's just vote for or > against it. If it seems like people are generally in favor of the > change, let's include it. > > So without changing the timing of the next release, would you still be > against the three changes suggested by Chuck and me? I am in favor of > making the above three changes as long as they don't add a ton of new > code, functionality, or delay the release in any way. What do other > people think? > > Thanks, > Jarrod > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From oliphant at enthought.com Sat Feb 13 17:04:29 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 13 Feb 2010 16:04:29 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <30903A45-E9B9-4326-9BBE-29DD6AC4540C@enthought.com> Yes such changes could be done in the 2.x series with appropriate transition aids like deprecation warnings. Travis -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 13, 2010, at 2:01 PM, Charles R Harris wrote: > > > On Sat, Feb 13, 2010 at 12:49 PM, Travis Oliphant > wrote: > This is exactly what I was worried about with calling the next release > 2.0. > > This is not the time to change all the things we wish were done > differently. > > > Do you think it would be reasonable to make such changes in 2.x? > They would be incompatible API changes, but at the python level, not > the C level. 
> > > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Sat Feb 13 17:06:59 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 13 Feb 2010 16:06:59 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <094B5327-8D52-4CE9-BD4E-B19C1A23E62B@enthought.com> References: <094B5327-8D52-4CE9-BD4E-B19C1A23E62B@enthought.com> Message-ID: <46887D01-1BF1-4306-9320-5E09D8E95CFB@enthought.com> I clarified my vote on these topics in a follow-up email to Jarrod's, separating the ideas. -- (mobile phone of) Travis Oliphant Enthought, Inc. 1-512-536-1057 http://www.enthought.com On Feb 13, 2010, at 1:44 PM, Travis Oliphant wrote: > -1 > > -- > (mobile phone of) > Travis Oliphant > Enthought, Inc. > 1-512-536-1057 > http://www.enthought.com > > On Feb 13, 2010, at 10:24 AM, Charles R Harris > wrote: > >> Hi All, >> >> Since there has been talk of deprecating the numarray and numeric >> compatibility parts of numpy for the upcoming 2.0 release I thought >> maybe we could consider a few other changes. First, numpy imports a >> ton of stuff by default and this is maintained for backward >> compatibility. Would this be a reasonable time to change that and >> require explicit imports for things like fft? Second, Poly1D has >> problems that aren't likely to get fixed, I would like to both >> deprecate the old polynomial support and make it not be imported by >> default. >> >> Thoughts?
>> >> Chuck >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From perry at stsci.edu Sat Feb 13 17:15:01 2010 From: perry at stsci.edu (Perry Greenfield) Date: Sat, 13 Feb 2010 17:15:01 -0500 Subject: [Numpy-discussion] numpy 2.0, what else to do? References: <0513909D-0A52-42F5-8140-AB02975DB77A@gmail.com> Message-ID: On Feb 13, 2010, at 11:24 AM, Charles R Harris wrote: > Hi All, > > Since there has been talk of deprecating the numarray and numeric > compatibility Can someone be explicit about what is meant by this deprecation? > parts of numpy for the upcoming 2.0 release I thought maybe we could > consider a From cournape at gmail.com Sat Feb 13 18:23:50 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 14 Feb 2010 08:23:50 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: Message-ID: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> On Sun, Feb 14, 2010 at 1:24 AM, Charles R Harris wrote: > Hi All, > > Since there has been talk of deprecating the numarray and numeric > compatibility parts of numpy for the upcoming 2.0 release I thought maybe we > could consider a few other changes. First, numpy imports a ton of stuff by > default and this is maintained for backward compatibility. Would this be a > reasonable time to change that and require explicit imports for things like > fft? Second, Poly1D has problems that aren't likely to get fixed, I would > like to both deprecate the old polynomial support and make it not be > imported by default. I think that there should be absolutely no change whatsoever, for two reasons: - the release is in a few weeks, it is too late to change much.
The whole datetime issue happened because the change came too late, I would hope that we avoid the same mistake. - there was an agreement that with py3k support, nothing would be changed in a backward incompatible way, that's also the official python policy for py3k transition. Deprecations are fine, though, cheers, David From fperez.net at gmail.com Sat Feb 13 22:02:56 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Feb 2010 22:02:56 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 12:24 PM, Charles R Harris wrote: >> One minor suggestion: ?I think it would be useful to have the new >> polys have some form of pretty-printing like the old ones. ?It is >> actually useful when working, to verify what one has at hand, to see >> an expanded printout like the old ones do: >> > > I thought about that, but decided it was best left to a derived class, say > PrettyPoly ;) Overriding __repr__ and __str__ is an example where > inheritance makes sense. I disagree, I think one of the advantages of having both str and repr is precisely to make it easy to have both a terse, implementation-oriented representation and a more human-friendly one out of the box. I don't like using 'training wheels' classes, people tend to learn one thing and use it for a long time, so I think objects should be as fully usable as possible from the get-go. I suspect I wouldn't use/teach a PrettyPoly if it existed. But it's ultimately your call. In any case, many thanks for the code! Best, f From dlc at halibut.com Sat Feb 13 22:04:10 2010 From: dlc at halibut.com (David Carmean) Date: Sat, 13 Feb 2010 19:04:10 -0800 Subject: [Numpy-discussion] Why does np.nan{min, max} clobber my array mask? 
Message-ID: <20100213190410.I26855@halibut.com> I'm just starting to work with masked arrays and I've found some behavior that definitely does not follow the Principle of Least Surprise: I've generated a 2-d array from a list of lists, where the elements are floats with a good number of NaNs. Inspections shows the expected numbers for ma.count() and ma.count_masked(). However, as soon as I run np.nanmin() or np.nanmax() over it, all of the mask elements are reset to False. (Pdb) flat = flatten(uut) # my own utility function (Pdb) len ( [ x for x in flat if x+0 == x ] ) # only way I could figure to detect 4086 (Pdb) len ( [ x for x in flat if x+0 != x ] ) # 1458 NaNs in the set. 1458 (Pdb) msk = ma.masked_invalid(uut) (Pdb) msk.shape (99, 56) (Pdb) ma.count(msk) 4086 (Pdb) ma.count_masked(msk) 1458 (Pdb) msk.hardmask False (Pdb) msk.harden_mask() # harden the mask first, for demo masked_array(data =.... (Pdb) msk.hardmask True (Pdb) rslt_hm = np.nanmin(msk, axis=1) (Pdb) rslt_hm.shape (99,) (Pdb) ma.count_masked(rslt_hm) 0 (Pdb) ma.count(rslt_hm) 99 # Is my original still OK? msk masked_array(data = ... ... [False False False ..., True True True]], fill_value = 1e+20) (Pdb) msk.soften_mask() # now re-soften the mask: masked_array(data = .... (Pdb) rslt_softmask = np.nanmin(msk, axis=1) (Pdb) rslt_softmask.shape (99,) (Pdb) msk.mask.any() False # BAM! note: 'control' is a hardmasked control copy: (Pdb) control.mask.any() True As the above shows, I discovered that I can work around this by setting the hardmask property, but ... there is no mention of such a side-effect in the docs (including the brand-new reference book). Have I found a bug? This is 1.4.0 running under 64-bit Windows 7 ( Python(x,y) distribution). 
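For readers following the pdb session above: the soft/hard mask distinction David works around can be reproduced in a few lines (this only illustrates mask hardening itself, not the np.nanmin behavior being reported, which may well be a genuine bug):

```python
import numpy as np
import numpy.ma as ma

# soft mask (the default): writing through the array unmasks the entry
soft = ma.masked_invalid([1.0, np.nan, 3.0])
soft[1] = 99.0          # data is set and the mask element is cleared

# hard mask: assignments to masked entries are silently ignored,
# so operations cannot "clobber" the mask
hard = ma.masked_invalid([1.0, np.nan, 3.0])
hard.harden_mask()
hard[1] = 99.0          # no effect: the cell stays masked
```

Hardening the mask before handing the array to mask-unaware code is exactly the workaround found above: any write-back into masked cells is discarded instead of resetting the mask to False.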
From pgmdevlist at gmail.com Sat Feb 13 22:31:13 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sat, 13 Feb 2010 22:31:13 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121229q5f858a3fn5e9408b8d1096e26@mail.gmail.com> <1cd32cbb1002121242m7034bf2fs1d326747c28218d7@mail.gmail.com> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> Message-ID: On Feb 12, 2010, at 11:01 PM, David Goldsmith wrote: > > On Fri, Feb 12, 2010 at 7:09 PM, Pierre GM wrote: > On Feb 12, 2010, at 8:14 PM, David Goldsmith wrote > > > Is the present issue an instance where Scott's second statement is invalid, an instance where its validity is resulting in a poor docstring for the function, or an instance in which Scott's "recommendation" was not followed? > > The methods' docstring are fine, but we could improve the way the corresponding function docstrings are created. > > Does anyone have an idea of how universal of a problem this is (i.e., is it just confined to ma)? Likely to be just a numpy.ma issue. I'll try to find some kind of fix. 
From charlesr.harris at gmail.com Sat Feb 13 22:32:12 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 20:32:12 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 8:02 PM, Fernando Perez wrote: > On Sat, Feb 13, 2010 at 12:24 PM, Charles R Harris > wrote: > >> One minor suggestion: I think it would be useful to have the new > >> polys have some form of pretty-printing like the old ones. It is > >> actually useful when working, to verify what one has at hand, to see > >> an expanded printout like the old ones do: > >> > > > > I thought about that, but decided it was best left to a derived class, > say > > PrettyPoly ;) Overriding __repr__ and __str__ is an example where > > inheritance makes sense. > > I disagree, I think one of the advantages of having both str and repr > is precisely to make it easy to have both a terse, > implementation-oriented representation and a more human-friendly one > Note that ipython calls __repr__ to print the output. __repr__ is supposed to provide a string that can be used to recreate the object, a pretty printed version of __repr__ doesn't provide that. Also, an array or list of polynomials, having pretty printed entries looks pretty ugly with the newlines and all -- try it with Poly1d. I was also thinking that someone might want to provide a better display at some point, drawing on a canvas, for instance. And what happens when the degree gets up over 100, which is quite reasonable with the Chebyshev polynomials? > out of the box. I don't like using 'training wheels' classes, people > tend to learn one thing and use it for a long time, so I think objects > should be as fully usable as possible from the get-go. I suspect I > wouldn't use/teach a PrettyPoly if it existed.
> > I thought the pretty print in the original was intended as a teaching aid, but I didn't think it was a good interface for programming work. That said, I could add a pretty print option, or a pretty print function. I would be happy to provide another method that ipython could look for and call for pretty printing if that seems reasonable to you. > But it's ultimately your call. In any case, many thanks for the code! > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Feb 13 22:51:10 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 20:51:10 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 8:32 PM, Charles R Harris wrote: > > > On Sat, Feb 13, 2010 at 8:02 PM, Fernando Perez wrote: > >> On Sat, Feb 13, 2010 at 12:24 PM, Charles R Harris >> wrote: >> >> One minor suggestion: I think it would be useful to have the new >> >> polys have some form of pretty-printing like the old ones. It is >> >> actually useful when working, to verify what one has at hand, to see >> >> an expanded printout like the old ones do: >> >> >> > >> > I thought about that, but decided it was best left to a derived class, >> say >> > PrettyPoly ;) Overriding __repr__ and __str__ is an example where >> > inheritance makes sense. >> >> I disagree, I think one of the advantages of having both str and repr >> is precisely to make it easy to have both a terse, >> implementation-oriented representation and a more human-friendly one >> > > Note that ipython calls __repr__ to print the output. __repr__ is supposed > to provide a string that can be used to recreate the object, a pretty > printed version of __repr__ doesn't provide that. Also, an array or list of > polynomials, having pretty printed entries looks pretty ugly with the > newlines and all -- try it with Poly1d. 
I was also thinking that someone > might want to provide a better display at some point, drawing on a canvas, > for instance. And what happens when the degree gets up over 100, which is > quite reasonable with the Cheybshev polynomials? > > Example: >>> a array([ 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3, 2 1 x + 2 x + 3], dtype=object) >>> print a [ 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3 2 1 x + 2 x + 3] Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sat Feb 13 22:52:15 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 19:52:15 -0800 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <1e2af89e1002131159q9b0fe18j63b0aaab5faf288@mail.gmail.com> References: <4B77034A.7060705@gmail.com> <1e2af89e1002131159q9b0fe18j63b0aaab5faf288@mail.gmail.com> Message-ID: <45d1ab481002131952g4b9244dsc846ce33dde61715@mail.gmail.com> On Sat, Feb 13, 2010 at 11:59 AM, Matthew Brett wrote: > Hi, > > On Sat, Feb 13, 2010 at 11:53 AM, Xavier Gnata > wrote: > > IMHO 2.0 should support python3. > > That would be a major step and a good reason to call it 2.0. > > I agree with Travis, I think we should try not to attach too much > importance to the big number change, release 2.0 just taking care of > Sounds to me like you don't fully agree w/ Travis - he said "This is exactly what I was worried about with calling the next release 2.0." Seems that Travis understands that the larger community, whether we want them to or not, _does_ "attach...much importance to [a] big number change" and wants to avoid calling the next release 2.0 precisely because he recognizes that the changes we do think we can make in three weeks don't warrant that magnitude of a number change. 
But then, perhaps I shouldn't speak for Travis, sorry Travis. ;-) DG > the ABI compatibility with the usual feature-freeze for an upcoming > release, and then we can release 3.0 with any major additions in due > course, as the work gets done. Basically, the '2.0' label does not > mean that there's open-season for feature changes at this point - that > has to wait, if the release is going to be stable. > > Best, > > Matthew > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sat Feb 13 23:56:05 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 20:56:05 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> Message-ID: <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> On Sat, Feb 13, 2010 at 7:31 PM, Pierre GM wrote: > On Feb 12, 2010, at 11:01 PM, David Goldsmith wrote: > > > > On Fri, Feb 12, 2010 at 7:09 PM, Pierre GM wrote: > > On Feb 12, 2010, at 8:14 PM, David Goldsmith wrote > > > > > Is the present issue an instance where Scott's second statement is > invalid, an instance where its validity is resulting in a poor docstring for > the function, or an instance in which Scott's "recommendation" was not > followed? 
> > > > The methods' docstring are fine, but we could improve the way the > corresponding function docstrings are created. > > > > Does anyone have an idea of how universal of a problem this is (i.e., is > it just confined to ma)? > > Likely to be just a numpy.ma issue. I'll try to find some kind of fix. > Please don't misinterpret my statements to mean that I think this isn't important and/or that you should feel solely responsible for a fix - I sincerely just wanted to uncover the nature and extent of the problem. Unfortunately, I still feel like I don't really understand the functional origin of the problem, otherwise I'd be the first to be offering to help - perhaps if you can explain to me what you think is happening... DG > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Sun Feb 14 00:53:19 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 00:53:19 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121247p7d8f5a47k47a2e1786d04f72e@mail.gmail.com> <1cd32cbb1002121258r24fc0b60g38e49231f2474f63@mail.gmail.com> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> Message-ID: On Feb 13, 2010, at 11:56 PM, David Goldsmith wrote: > > > Please don't misinterpret my statements to mean that I think this isn't important and/or that you should feel solely 
responsible for a fix - I sincerely just wanted to uncover the nature and extent of the problem. Unfortunately, I still feel like I don't really understand the functional origin of the problem, otherwise I'd be the first to be offering to help - perhaps if you can explain to me what you think is happening... In a nutshell: some functions in numpy.ma (like np.ma.compress) are actually instances of a factory class (_frommethod). This class implements a __call__ method, so its instances behave like functions. In practice, they just call a method of MaskedArray. Anyway, the __doc__ of the instance is created from the docstring of the corresponding method with _frommethod.getdoc. I'm sure that's where we can improve things (like substitute `self` by `a`). Because it's an instance, help(numpy.ma.compress) gives the docstring of numpy.ma._frommethod instead. In IPython, numpy.ma.compress? gives you the doc, twice (I don't get why). From charlesr.harris at gmail.com Sun Feb 14 01:26:11 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 13 Feb 2010 23:26:11 -0700 Subject: [Numpy-discussion] Buildbots in red meltdown. Message-ID: *All* the buildbots are showing errors.
Here are some: ====================================================================== ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype test = a[0].view([('A', float), ('B', float)]) File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ TypeError: attribute 'shape' of 'numpy.generic' objects is not writable ====================================================================== ERROR: test_view_to_subdtype (test_core.TestMaskedView) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3354, in test_view_to_subdtype test = a[0].view((float, 2)) File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ TypeError: attribute 'shape' of 'numpy.generic' objects is not writable ====================================================================== FAIL: test_buffer_hashlib (test_regression.TestRegression) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 1255, in test_buffer_hashlib assert_equal(md5(x).hexdigest(), '2a1dd1e1e59d0a384c26951e316cd7e6') File "../numpy-install/lib/python2.4/site-packages/numpy/testing/utils.py", line 305, in assert_equal AssertionError: Items are not equal: ACTUAL: 
'1264d4a9f74dc462700fd163e3ff09a6' DESIRED: '2a1dd1e1e59d0a384c26951e316cd7e6' Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Feb 14 01:42:56 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 22:42:56 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> Message-ID: <45d1ab481002132242x544f4f04na12a12907e9619da@mail.gmail.com> On Sat, Feb 13, 2010 at 9:53 PM, Pierre GM wrote: > On Feb 13, 2010, at 11:56 PM, David Goldsmith wrote: > > > > > > Please don't misinterpret my statements to mean that I think this isn't > important and/or that you should feel solely responsible for a fix - I > sincerely just wanted to uncover the nature and extent of the problem. > Unfortunately, I still feel like I don't really understand the functional > origin of the problem, otherwise I'd be the first to be offering to help - > perhaps if you can explain to me what you think is happening... > > In a nutshell: > some functions in numpy.ma (like np.ma.compress) are actually instances of > a factory class (_frommethod). This class implements a __call__ method, so > its instances behave like functions. In practice, they just call a method of > MaskedArray. Anyway, the __doc__ of the instance is created from the > docstring of the corresponding method with _frommethod.getdoc. I'm sure > that's where we can improve things (like substistute `self `by `a`. > Because it's an instance, help(numpy.ma.compress) gives the docstring of > numpy.ma._frommethod instead. 
In IPython, numpy.ma.compress? gives you the > doc, twice (I don't get why). > Excellent, thanks Pierre: w/ this in the thread, if I can't help (I'm no expert on factory classes, nor, certainly, on the why's and wherefore's of iPython) I'm all but certain we have the communal know-how to get this taken care of quickly. One final request, though, if I may: perhaps you could make the issue "official" by filing a ticket? Thanks again! DG PS: I can certainly take a look at _frommethod.getdoc and see what I can do with that... > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Feb 14 01:45:12 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 22:45:12 -0800 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: References: Message-ID: <45d1ab481002132245p41ee52e6uec038ed933d18ee5@mail.gmail.com> "When it rains, it pours..." :-( DG On Sat, Feb 13, 2010 at 10:26 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > *All* the buildbots are showing errors. 
Here are some: > > ====================================================================== > ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype > > test = a[0].view([('A', float), ('B', float)]) > > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > ERROR: test_view_to_subdtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3354, in test_view_to_subdtype > > test = a[0].view((float, 2)) > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > FAIL: test_buffer_hashlib (test_regression.TestRegression) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 1255, in test_buffer_hashlib > > assert_equal(md5(x).hexdigest(), '2a1dd1e1e59d0a384c26951e316cd7e6') > File 
"../numpy-install/lib/python2.4/site-packages/numpy/testing/utils.py", line 305, in assert_equal > AssertionError: > > Items are not equal: > ACTUAL: '1264d4a9f74dc462700fd163e3ff09a6' > DESIRED: '2a1dd1e1e59d0a384c26951e316cd7e6' > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Sun Feb 14 01:50:08 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 01:50:08 -0500 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: References: Message-ID: <53671882-2BC8-4819-AD25-120D2D45BE32@gmail.com> On Feb 14, 2010, at 1:26 AM, Charles R Harris wrote: > *All* the buildbots are showing errors. Here are some: Only with Python 2.4, right ? That's the ticket #1367 I haven't had time to deal with (because I need a Python2.4 to test it). 
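The "shape is not writable" failures in those tracebacks boil down to numpy scalars being immutable: indexing an array with a plain integer returns an instance of numpy.generic, not a 0-d ndarray, and its attributes cannot be reassigned. A minimal sketch of that failing pattern (an illustration only, not the actual numpy.ma core.py code; newer NumPy/Python report this as AttributeError rather than TypeError):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
s = a[0]                          # a numpy scalar, not a 0-d ndarray
print(isinstance(s, np.generic))  # True

try:
    # Assigning to .shape works on an ndarray but not on a scalar,
    # which is the kind of attribute assignment the __array_finalize__
    # tracebacks above are tripping over.
    s.shape = (1,)
except (AttributeError, TypeError) as exc:
    print("not writable:", exc)
```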
From pgmdevlist at gmail.com Sun Feb 14 01:52:01 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 01:52:01 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: <45d1ab481002132242x544f4f04na12a12907e9619da@mail.gmail.com> References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> <45d1ab481002132242x544f4f04na12a12907e9619da@mail.gmail.com> Message-ID: On Feb 14, 2010, at 1:42 AM, David Goldsmith wrote: > > On Sat, Feb 13, 2010 at 9:53 PM, Pierre GM wrote: > On Feb 13, 2010, at 11:56 PM, David Goldsmith wrote: > > > > > > Please don't misinterpret my statements to mean that I think this isn't important and/or that you should feel solely responsible for a fix - I sincerely just wanted to uncover the nature and extent of the problem. Unfortunately, I still feel like I don't really understand the functional origin of the problem, otherwise I'd be the first to be offering to help - perhaps if you can explain to me what you think is happening... > > In a nutshell: > some functions in numpy.ma (like np.ma.compress) are actually instances of a factory class (_frommethod). This class implements a __call__ method, so its instances behave like functions. In practice, they just call a method of MaskedArray. Anyway, the __doc__ of the instance is created from the docstring of the corresponding method with _frommethod.getdoc. I'm sure that's where we can improve things (like substistute `self `by `a`. > Because it's an instance, help(numpy.ma.compress) gives the docstring of numpy.ma._frommethod instead. In IPython, numpy.ma.compress? gives you the doc, twice (I don't get why). 
> > Excellent, thanks Pierre: w/ this in the thread, if I can't help (I'm no expert on factory classes, nor, certainly, on the why's and wherefore's of iPython) I'm all but certain we have the communal know-how to get this taken care of quickly. One final request, though, if I may: perhaps you could make the issue "official" by filing a ticket? Thanks again! Well, you're the one who started the conversation, so *you* should open the ticket ;) From fperez.net at gmail.com Sun Feb 14 01:55:26 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 14 Feb 2010 01:55:26 -0500 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: References: <201002121344.01439.meine@informatik.uni-hamburg.de> <3d375d731002121305j41f6c63aw3ce940441645d20d@mail.gmail.com> <45d1ab481002121324w43fc96adp49c3cc50f6d7558a@mail.gmail.com> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> Message-ID: On Sun, Feb 14, 2010 at 12:53 AM, Pierre GM wrote: > In IPython, numpy.ma.compress? gives you the doc, twice (I don't get why). I don't have a clue either, but it's now tracked at least: https://bugs.launchpad.net/ipython/+bug/521612 Thanks! f From charlesr.harris at gmail.com Sun Feb 14 02:00:46 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 00:00:46 -0700 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 11:26 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > *All* the buildbots are showing errors. 
Here are some: > > ====================================================================== > ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype > > test = a[0].view([('A', float), ('B', float)]) > > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > ERROR: test_view_to_subdtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3354, in test_view_to_subdtype > > test = a[0].view((float, 2)) > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > FAIL: test_buffer_hashlib (test_regression.TestRegression) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 1255, in test_buffer_hashlib > > assert_equal(md5(x).hexdigest(), '2a1dd1e1e59d0a384c26951e316cd7e6') > File 
"../numpy-install/lib/python2.4/site-packages/numpy/testing/utils.py", line 305, in assert_equal > AssertionError: > > Items are not equal: > ACTUAL: '1264d4a9f74dc462700fd163e3ff09a6' > DESIRED: '2a1dd1e1e59d0a384c26951e316cd7e6' > > There was a patch for the hash problem. I think that is fixed now. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Feb 14 02:03:28 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 00:03:28 -0700 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: <53671882-2BC8-4819-AD25-120D2D45BE32@gmail.com> References: <53671882-2BC8-4819-AD25-120D2D45BE32@gmail.com> Message-ID: On Sat, Feb 13, 2010 at 11:50 PM, Pierre GM wrote: > On Feb 14, 2010, at 1:26 AM, Charles R Harris wrote: > > *All* the buildbots are showing errors. Here are some: > > > Only with Python 2.4, right ? That's the ticket #1367 I haven't had time to > deal with (because I need a Python2.4 to test it). > __ > That's what the buildbots are for ;) What OS are you running these days? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sun Feb 14 02:10:13 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 14 Feb 2010 02:10:13 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 10:32 PM, Charles R Harris wrote: > Note that ipython calls __repr__ to print the output. __repr__ is supposed > to provide a string that can be used to recreate the object, a pretty > printed version of __repr__ doesn't provide that. Also, an array or list of IPython calls repr because that's the convention the standard python shell uses, and I decided long ago to follow suit. > polynomials, having pretty printed entries looks pretty ugly with the > newlines and all -- try it with Poly1d. 
I was also thinking that someone > might want to provide a better display at some point, drawing on a canvas, > for instance. And what happens when the degree gets up over 100, which is > quite reasonable with the Chebyshev polynomials? sympy has pretty remarkable pretty-printing support, perhaps some of that could be reused. Just a thought. I do agree that 2d printing is tricky, but it doesn't mean it's useless. For long and complicated expressions, getting the layout correct is not trivial. But even good ole' poly1d's display is actually useful for small polynomials, which can aid if one is debugging a more complex code with test cases that lead to small polys. I realize this isn't always viable, but it does happen in practice. But again, small nits, otherwise happy :) So if you don't see it as useful or don't have the time/interest, no worries. I don't see it as important enough to work on it myself, so I'm not going to complain further either :) >> out of the box. I don't like using 'training wheels' classes, people >> tend to learn one thing and use it for a long time, so I think objects >> should be as fully usable as possible from the get-go. I suspect I >> wouldn't use/teach a PrettyPoly if it existed. >> > > I thought the pretty print in the original was intended as a teaching aid, > but I didn't think it was a good interface for programming work. That said, > I could add a pretty print option, or a pretty print function. I would be > happy to provide another method that ipython could look for and call for > pretty printing if that seems reasonable to you. In IPython we're already shipping the 'pretty' extension: http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/head%3A/IPython/external/pretty.py So I guess we could just start adding __pretty__ to certain objects for such fancy representations.
Cheers, f From charlesr.harris at gmail.com Sun Feb 14 02:10:37 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 00:10:37 -0700 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: References: Message-ID: On Sat, Feb 13, 2010 at 11:26 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > *All* the buildbots are showing errors. Here are some: > > ====================================================================== > ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype > > test = a[0].view([('A', float), ('B', float)]) > > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > ERROR: test_view_to_subdtype (test_core.TestMaskedView) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3354, in test_view_to_subdtype > > test = a[0].view((float, 2)) > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2866, in view > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2786, in __array_finalize__ > > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable > > ====================================================================== > FAIL: test_buffer_hashlib (test_regression.TestRegression) > > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 1255, in test_buffer_hashlib > > assert_equal(md5(x).hexdigest(), '2a1dd1e1e59d0a384c26951e316cd7e6') > File "../numpy-install/lib/python2.4/site-packages/numpy/testing/utils.py", line 305, in assert_equal > AssertionError: > > Items are not equal: > ACTUAL: '1264d4a9f74dc462700fd163e3ff09a6' > DESIRED: '2a1dd1e1e59d0a384c26951e316cd7e6' > > More errors: ====================================================================== FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 721, in check_loss_of_precision check(x_basic, 2*eps/1e-3) File "/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 691, in check 'arcsinh') AssertionError: (0, 0.0010023052, 0.99711633, 'arcsinh') ====================================================================== FAIL: test_umath.TestComplexFunctions.test_precisions_consistent ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 602, in test_precisions_consistent assert_almost_equal(fcf, fcd, decimal=6, err_msg='fch-fcd %s'%f) File "../numpy-install/lib/python2.5/site-packages/numpy/testing/utils.py", line 435, in 
assert_almost_equal AssertionError: Arrays are not almost equal fch-fcd ACTUAL: (0.66623944+0.95530742j) DESIRED: (0.66623943249251527+1.0612750619050355j) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sun Feb 14 02:13:43 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 13 Feb 2010 23:13:43 -0800 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <45d1ab481002131952g4b9244dsc846ce33dde61715@mail.gmail.com> References: <4B77034A.7060705@gmail.com> <1e2af89e1002131159q9b0fe18j63b0aaab5faf288@mail.gmail.com> <45d1ab481002131952g4b9244dsc846ce33dde61715@mail.gmail.com> Message-ID: <1e2af89e1002132313r6da2fff6w7df276127ba06df7@mail.gmail.com> Hi, > Sounds to me like you don't fully agree w/ Travis - he said "This is exactly > what I was worried about with calling the next release 2.0." Seems that > Travis understands that the larger community, whether we want them to or > not, _does_ "attach...much importance to [a] big number change" and wants to > avoid calling the next release 2.0 precisely because he recognizes that the > changes we do think we can make in three weeks don't warrant that magnitude > of a number change. But then, perhaps I shouldn't speak for Travis, sorry > Travis. ;-) I think the wider community will be OK, as long as we stay calm about not getting overwhelmed with the number change, and just doing an ordinary release. I can't see us losing many users if they pick up 2.0 and don't see lots of new features, at least, that's never worried me in other people's releases. In any case, I think we're committed to the 2.0 version number at this point.
Best, Matthew From charlesr.harris at gmail.com Sun Feb 14 02:17:42 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 00:17:42 -0700 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sun, Feb 14, 2010 at 12:10 AM, Fernando Perez wrote: > On Sat, Feb 13, 2010 at 10:32 PM, Charles R Harris > wrote: > > Note that ipython calls __repr__ to print the output. __repr__ is > supposed > > to provide a string that can be used to recreate the object, a pretty > > printed version of __repr__ doesn't provide that. Also, an array or list > of > > IPython calls repr because that's the convention the standard python > shell uses, and I decided long ago to follow suit. > > > polynomials, having pretty printed entries looks pretty ugly with the > > newlines and all -- try it with Poly1d. I was also thinking that someone > > might want to provide a better display at some point, drawing on a > canvas, > > for instance. And what happens when the degree gets up over 100, which is > > quite reasonable with the Cheybshev polynomials? > > sympy has pretty remarkable pretty-printing support, perhaps some of > that could be reused. Just a thought. > > I do agree that 2d printing is tricky, but it doesn't mean it's > useless. For long and complicated expressions, getting the layout > correct is not trivial. > > But even good ole' poly1d's display is actually useful for small > polynomials, which can aid if one is debugging a more complex code > with test cases that lead to small polys. I realize this isn't always > viable, but it does happen in practice. > > But again, small nits, otherwise happy :) So if you don't see it as > useful or don't have the time/interest, no worries. I don't see it as > important enough to work on it myself, so I'm not going to complain > further either :) > > >> out of the box. 
I don't like using 'training wheels' classes, people > >> tend to learn one thing and use it for a long time, so I think objects > >> should be as fully usable as possible from the get-go. I suspect I > >> wouldn't use/teach a PrettyPoly if it existed. > >> > > > > I thought the pretty print in the original was intended as a teaching > aid, > > but I didn't think it was a good interface for programming work. That > said, > > I could add a pretty print option, or a pretty print function. I would be > > happy to provide another method that ipython could look for and call for > > pretty printing if that seems reasonable to you. > > In IPython we're already shipping the 'pretty' extension: > > > http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/head%3A/IPython/external/pretty.py > > So I guess we could just start adding __pretty__ to certain objects > for such fancy representations. > > That's what I was looking for. I see that it works for python >= 2.4 with some work. Does it work for python 3.1 also? Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From d.l.goldsmith at gmail.com Sun Feb 14 02:19:44 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 23:19:44 -0800 Subject: [Numpy-discussion] docstring suggestions In-Reply-To: References: <201002121344.01439.meine@informatik.uni-hamburg.de> <35E4310D-9200-4450-BBD8-6E7BC82D9EAF@gmail.com> <45d1ab481002121714r41a30949t3b50f1aab1bd5994@mail.gmail.com> <45d1ab481002122001h6d6b1d44k24e3d275de20eceb@mail.gmail.com> <45d1ab481002132056t3c69e5bcg7b709f43c1f24544@mail.gmail.com> <45d1ab481002132242x544f4f04na12a12907e9619da@mail.gmail.com> Message-ID: <45d1ab481002132319i618c4c9u5ccd61f0071a28b5@mail.gmail.com> On Sat, Feb 13, 2010 at 10:52 PM, Pierre GM wrote: > On Feb 14, 2010, at 1:42 AM, David Goldsmith wrote: > > > > On Sat, Feb 13, 2010 at 9:53 PM, Pierre GM wrote: > > On Feb 13, 2010, at 11:56 PM, David Goldsmith wrote: > > > > > > > > > Please don't misinterpret my statements to mean that I think this isn't > important and/or that you should feel solely responsible for a fix - I > sincerely just wanted to uncover the nature and extent of the problem. > Unfortunately, I still feel like I don't really understand the functional > origin of the problem, otherwise I'd be the first to be offering to help - > perhaps if you can explain to me what you think is happening... > > > > In a nutshell: > > some functions in numpy.ma (like np.ma.compress) are actually instances > of a factory class (_frommethod). This class implements a __call__ method, > so its instances behave like functions. In practice, they just call a method > of MaskedArray. Anyway, the __doc__ of the instance is created from the > docstring of the corresponding method with _frommethod.getdoc. I'm sure > that's where we can improve things (like substistute `self `by `a`. > > Because it's an instance, help(numpy.ma.compress) gives the docstring of > numpy.ma._frommethod instead. In IPython, numpy.ma.compress? gives you the > doc, twice (I don't get why). 
> > > > Excellent, thanks Pierre: w/ this in the thread, if I can't help (I'm no > expert on factory classes, nor, certainly, on the why's and wherefore's of > iPython) I'm all but certain we have the communal know-how to get this taken > care of quickly. One final request, though, if I may: perhaps you could > make the issue "official" by filing a ticket? Thanks again! > > Well, you're the one who started the conversation, so *you* should open the > ticket ;) > Actually, it was Hans Meine, but no matter, I'll file "the __doc__ of the instance is created from the docstring of the corresponding method with _frommethod.getdoc. I'm sure that's where we can improve things (like substistute `self `by `a`" and the "Because it's an instance, help(numpy.ma.compress) gives the docstring of numpy.ma._frommethod instead" as numpy tickets (I just thought you might be able to describe the problems better, and, due to a deeper understanding, be a better point of contact for the tickets - all I'll do is quote your above characterizations). Thanks, Fernando, for filing the iPython issue. DG Thanks, Fernando -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Feb 14 02:29:04 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 23:29:04 -0800 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <1e2af89e1002132313r6da2fff6w7df276127ba06df7@mail.gmail.com> References: <4B77034A.7060705@gmail.com> <1e2af89e1002131159q9b0fe18j63b0aaab5faf288@mail.gmail.com> <45d1ab481002131952g4b9244dsc846ce33dde61715@mail.gmail.com> <1e2af89e1002132313r6da2fff6w7df276127ba06df7@mail.gmail.com> Message-ID: <45d1ab481002132329q73e1540cufbaa66ba245c9d8f@mail.gmail.com> On Sat, Feb 13, 2010 at 11:13 PM, Matthew Brett wrote: > Hi, > > > Sounds to me like you don't fully agree w/ Travis - he said "This is > exactly > > what I was worried about with calling the next release 2.0." 
Seems that > > Travis understands that the larger community, whether we want them to or > > not, _does_ "attach...much importance to [a] big number change" and wants > to > > avoid calling the next release 2.0 precisely because he recognizes that > the > > changes we do think we can make in three weeks don't warrant that > magnitude > > of a number change. But then, perhaps I shouldn't speak for Travis, > sorry > > Travis. ;-) > > I think the wider community will be OK, as long as we stay calm about > not getting overwhelmed with the number change, and just doing an > ordinary release. I can't see us losing many users if they pick up > 2.0 and don't see lots of new features, at least, that's never worried > me in other people's releases. In any case, I think we're committed > to the 2.0 version number at this point. > I recognize this falls into the category of "too little, too late" (I stopped following the ABI breakage thread and thus didn't know that it had morphed into a new release/naming thread) but "-1" on calling the new release "2.0". DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Sun Feb 14 02:33:20 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 02:33:20 -0500 Subject: [Numpy-discussion] Buildbots in red meltdown. In-Reply-To: References: <53671882-2BC8-4819-AD25-120D2D45BE32@gmail.com> Message-ID: <391F42FC-8733-4367-B714-B7E162446EC3@gmail.com> On Feb 14, 2010, at 2:03 AM, Charles R Harris wrote: > > > > On Sat, Feb 13, 2010 at 11:50 PM, Pierre GM wrote: > On Feb 14, 2010, at 1:26 AM, Charles R Harris wrote: > > *All* the buildbots are showing errors. Here are some: > > > Only with Python 2.4, right ? That's the ticket #1367 I haven't had time to deal with (because I need a Python2.4 to test it). > __ > > That's what the buildbots are for ;) What OS are you running these days? OS 10.6.2. Looks like a patch was already suggested. Good call, Neil Muller ! 
So, does r8111 work for you ? From d.l.goldsmith at gmail.com Sun Feb 14 02:40:05 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 13 Feb 2010 23:40:05 -0800 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: <45d1ab481002132340v10ab1a77m4efe21acd1ca650f@mail.gmail.com> On Sat, Feb 13, 2010 at 11:10 PM, Fernando Perez wrote: > On Sat, Feb 13, 2010 at 10:32 PM, Charles R Harris > wrote: > > Note that ipython calls __repr__ to print the output. __repr__ is > supposed > > to provide a string that can be used to recreate the object, a pretty > > printed version of __repr__ doesn't provide that. Also, an array or list > of > > IPython calls repr because that's the convention the standard python > shell uses, and I decided long ago to follow suit. > > > polynomials, having pretty printed entries looks pretty ugly with the > > newlines and all -- try it with Poly1d. I was also thinking that someone > > might want to provide a better display at some point, drawing on a > canvas, > > for instance. And what happens when the degree gets up over 100, which is > > quite reasonable with the Chebyshev polynomials? > > sympy has pretty remarkable pretty-printing support, perhaps some of > that could be reused. Just a thought. > Curious: how is sympy at deducing recursion relations and/or index functions? Reason: my first thought about Chuck's high-degree issue was that in such cases perhaps PrettyPoly (or __pretty__) could attempt to use summation notation (of course, this would only be useful when the coefficients are formulaic functions of the index, but hey, it's something). DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Sun Feb 14 03:01:29 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 03:01:29 -0500 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ?
Message-ID: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> It has been suggested (ticket #1262) to change the default dtype=float to dtype=None in np.genfromtxt. Any thoughts ? From fperez.net at gmail.com Sun Feb 14 03:22:29 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 14 Feb 2010 03:22:29 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: <45d1ab481002132340v10ab1a77m4efe21acd1ca650f@mail.gmail.com> References: <45d1ab481002132340v10ab1a77m4efe21acd1ca650f@mail.gmail.com> Message-ID: On Sun, Feb 14, 2010 at 2:40 AM, David Goldsmith wrote: > > Curious: how is sympy at deducing recursion relations and/or index > functions?? Reason: my first thought about Chuck's high-degree issue was > that in such cases perhaps PrettyPoly (or __pretty__) could attempt to use > summation notation (of course, this would only be useful when the > coefficients are formulaic functions of the index, but hey, it's something). I don't think it has any such support, but I could be wrong. Cheers, f From fperez.net at gmail.com Sun Feb 14 03:24:27 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 14 Feb 2010 03:24:27 -0500 Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication In-Reply-To: References: Message-ID: On Sun, Feb 14, 2010 at 2:17 AM, Charles R Harris wrote: > That's what I was looking for. I see that it works for python >= 2.4 with > some work. Does it work for python 3.1 also? I haven't tried, but a quick scan of the code makes me think it would be pretty easy to port it to 3.1. It's all fairly straightforward code, so a 2to3 pass might be sufficient. Cheers, f From ralf.gommers at googlemail.com Sun Feb 14 06:03:55 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 14 Feb 2010 19:03:55 +0800 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? 
In-Reply-To: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> Message-ID: On Sun, Feb 14, 2010 at 4:01 PM, Pierre GM wrote: > It has been suggested (ticket #1262) to change the default dtype=float to > dtype=None in np.genfromtxt. Any thoughts ? > Comments in the ticket make sense, and I don't see a downside. Type inference should be done only once, so performance for large files should not be affected. +1 Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sun Feb 14 06:08:03 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 14 Feb 2010 12:08:03 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <1266092883.4565.2.camel@Nokia-N900-42-11> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> Message-ID: <4B77D993.4080204@gmail.com> On 02/13/2010 09:28 PM, Pauli Virtanen wrote: > We will most likely have experimental py3 support in 2.0. > > If you, or someone else wishes to help bringing 2.0 to fully work with Py3, now is a very good time to step up. > > How to give a hand: > > 1. Get my latest py3 branch from http://github.com/pv/numpy-work/tree/py3k > > Read doc/py3k.txt > > 2. Get py3 branch of nose (see doc/py3k.txt in the branch). > > 3. Build numpy, and run unit tests (build with "NPY_SEPARATE_BUILD=1 python3 setup.py build", numscons is not supported at the moment). > > 4. Fix bugs revealed by the unit tests. Currently, C code is mostly ported, so you can probably also help only by writing Python. There are about 100 test failures (of 2400) left. > > Many test failures occur also because the tests are wrong. For instance: the numpy I/O requires bytes, but some tests supply it unicode strings -> need changes in tests. > > One useful thing to do is to help with the str/bytes transition on the python side. 
Since the same code must work with pythons from 2.4 to 3.0 (for 3 it's automatically run through 2to3 on build), there are some helpers in numpy.compat.py3k for helping with this. See several previous commits on the branch on that. > > Another useful thing could be to port an existing numpy-using code to py3 and test if it works with the current py3k branch, what fails, and if the failures are already revealed by unit tests. Even if it does not work at the moment, having it at hand will help testing the rc when it comes. This, because I wouldn't completely rely on our unit test coverage. > > Finally, try to write some PEP 3118 using code, and check how it works. (You can use python >= 2.6 for this if you get numpy from the py3k branch.) > > Ok. With the up to date numpy and nose repositories, I get this result: numpy.test('full', 2) Ran 1978 tests in 3.966s FAILED (KNOWNFAIL=5, SKIP=4, errors=438, failures=72) Xavier From dagss at student.matnat.uio.no Sun Feb 14 07:06:09 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Sun, 14 Feb 2010 13:06:09 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <1266092883.4565.2.camel@Nokia-N900-42-11> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> Message-ID: <4B77E731.90101@student.matnat.uio.no> Pauli Virtanen wrote: > We will most likely have experimental py3 support in 2.0. > > If you, or someone else wishes to help bringing 2.0 to fully work with Py3, now is a very good time to step up. > ... > Finally, try to write some PEP 3118 using code, and check how it works. (You can use python >= 2.6 for this if you get numpy from the py3k branch.) > At least parts of this last point should be easily done with all the code out there using Cython and NumPy. 
If PEP 3118 is enabled in NumPy's ndarray, then Cython's special-casing of NumPy should magically disappear (this can be checked by putting a "raise AssertionError()" statement in __getbuffer__ in Cython/Includes/numpy.pxd). Then, one can recompile any code using Cython and NumPy to at least verify most basic functionality (access of basic types, structs/record arrays, and different striding and ordering). Indeed, just running python runtests.py numpy with a recent Cython and the Py3 NumPy trunk should be a rather good test of PEP3118 in NumPy. Things like non-native endian etc. must be done manually though. Dag Sverre From dagss at student.matnat.uio.no Sun Feb 14 07:07:47 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Sun, 14 Feb 2010 13:07:47 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <4B77E731.90101@student.matnat.uio.no> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B77E731.90101@student.matnat.uio.no> Message-ID: <4B77E793.7020402@student.matnat.uio.no> Dag Sverre Seljebotn wrote: > Pauli Virtanen wrote: >> We will most likely have experimental py3 support in 2.0. >> >> If you, or someone else wishes to help bringing 2.0 to fully work >> with Py3, now is a very good time to step up. >> > ... >> Finally, try to write some PEP 3118 using code, and check how it >> works. (You can use python >= 2.6 for this if you get numpy from the >> py3k branch.) >> > At least parts of this last point should be easily done with all the > code out there using Cython and NumPy. > > If PEP 3118 is enabled in NumPy's ndarray, then Cython's > special-casing of NumPy should magically disappear (this can be > checked by putting a "raise AssertionError()" statement in > __getbuffer__ in Cython/Includes/numpy.pxd). 
> > Then, one can recompile any code using Cython and NumPy to at least > verify most basic functionality (access of basic types, structs/record > arrays, and different striding and ordering). Even recompilation is only needed in order to raise that assertion error, I think. So once it is tested that things work, NumPy's PEP 3118 support should be used instead of emulation by already compiled Cython modules. Of course, that's never actually been tested. Dag Sverre From stefan at sun.ac.za Sun Feb 14 07:40:17 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 14 Feb 2010 14:40:17 +0200 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> Message-ID: <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> On 14 February 2010 01:23, David Cournapeau wrote: > I think that there should be absolutely no change whatsoever, for two reasons: > - the release is in a few weeks, it is too late to change much. The > whole datetime issue happened because the change came too late, I > would hope that we avoid the same mistake. I agree with David; we should not rush to include any new, untested features now. That is what 2.1 is for. Having seen the response to 2.0, I realise that the concerns raised by myself, Travis and others regarding the community's views on the significance of "2.0" were warranted (i.e., we have some of the data Robert Kern was referring to!). To quote an old war poster, let's "keep calm and carry on." Cheers Stéfan > - there was an agreement that with py3k support, nothing would be > changed in a backward incompatible way, that's also the official > python policy for py3k transition.
> > Deprecations are fine, though, From neilcrighton at gmail.com Sun Feb 14 07:47:50 2010 From: neilcrighton at gmail.com (Neil Crighton) Date: Sun, 14 Feb 2010 12:47:50 +0000 (UTC) Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> Message-ID: Pierre GM gmail.com> writes: > > It has been suggested (ticket #1262) to change the default dtype=float to dtype=None in np.genfromtxt. > Any thoughts ? > I agree dtype=None should be default for the reasons given in the ticket. How do we handle the backwards-incompatible change? A warning in the next release, then change it in the following release? Neil From pgmdevlist at gmail.com Sun Feb 14 07:54:57 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 07:54:57 -0500 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? In-Reply-To: References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> Message-ID: On Feb 14, 2010, at 7:47 AM, Neil Crighton wrote: > Pierre GM gmail.com> writes: > >> >> It has been suggested (ticket #1262) to change the default dtype=float to > dtype=None in np.genfromtxt. >> Any thoughts ? >> > > I agree dtype=None should be default for the reasons given in the ticket. > > How do we handle the backwards-incompatible change? A warning in the next > release, then change it in the following release? This backwards-incompatibility bugs me. Why don't we set dtype=None as the default for ndfromtxt & mafromtxt and tell people to use these functions instead (I'm pretty sure nobody knew they existed, so we can break them without upsetting anyone)(of course, there's gonna be a counterexample as soon as I post that). From stefan at sun.ac.za Sun Feb 14 09:27:45 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 14 Feb 2010 16:27:45 +0200 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? 
In-Reply-To: References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> Message-ID: <9457e7c81002140627v75d4cca5xd15516607fcf8052@mail.gmail.com> On 14 February 2010 14:54, Pierre GM wrote: > This backwards-incompatibility bugs me. Why don't we set dtype=None as the default for ndfromtxt & mafromtxt and tell people to use these functions instead (I'm pretty sure nobody knew they existed, so we can break them without upsetting anyone)(of course, there's gonna be a counterexample as soon as I post that). If `ndfromtxt' does the same as genfromtxt, why do we have it in the main numpy namespace? Regards Stéfan From charlesr.harris at gmail.com Sun Feb 14 12:02:52 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 10:02:52 -0700 Subject: [Numpy-discussion] Remaining buildbot errors. Message-ID: Python 2.4 ====================================================================== ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype test = a[0].view([('A', float), ('B', float)]) File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2877, in view TypeError: attribute 'shape' of 'numpy.generic' objects is not writable BSD ====================================================================== FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 721, in check_loss_of_precision check(x_basic, 2*eps/1e-3) File
"/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 691, in check 'arcsinh') AssertionError: (0, 0.0010023052, 0.99711633, 'arcsinh') ====================================================================== FAIL: test_umath.TestComplexFunctions.test_precisions_consistent ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/tmp/numpy-buildbot/b12/numpy-install/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 602, in test_precisions_consistent assert_almost_equal(fcf, fcd, decimal=6, err_msg='fch-fcd %s'%f) File "../numpy-install/lib/python2.5/site-packages/numpy/testing/utils.py", line 435, in assert_almost_equal AssertionError: Arrays are not almost equal fch-fcd ACTUAL: (0.66623944+0.95530742j) DESIRED: (0.66623943249251527+1.0612750619050355j) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Feb 14 13:17:46 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 14 Feb 2010 18:17:46 +0000 (UTC) Subject: [Numpy-discussion] numpy 2.0, what else to do? References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> Message-ID: Xavier Gnata gmail.com> writes: > Well I ran git clone git://github.com/pv/numpy-work.git an hour ago (in > an empty directory) That will give you the master branch, which indeed does not contain any Py3 stuff. You need also to switch to the py3k branch: git co origin/py3k To see all available branches, do git branch -r -- Pauli Virtanen From pav at iki.fi Sun Feb 14 13:32:10 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 14 Feb 2010 18:32:10 +0000 (UTC) Subject: [Numpy-discussion] numpy 2.0, what else to do? 
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> Message-ID: Charles R Harris gmail.com> writes: > - if (ap1->ob_type != ap2->ob_type) { > + if (Py_TYPE(ap1) != Py_TYPE(ap2)) { > > Pauli fixed a lot of those. Did you remove the old build directory and all that stuff? I thought I fixed all of those, but apparently missed that one. Builds fine for me, but maybe it didn't find Atlas on my system for some reason. -- Pauli Virtanen From charlesr.harris at gmail.com Sun Feb 14 13:38:34 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 11:38:34 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> Message-ID: On Sun, Feb 14, 2010 at 11:32 AM, Pauli Virtanen wrote: > Charles R Harris gmail.com> writes: > > - if (ap1->ob_type != ap2->ob_type) { > > + if (Py_TYPE(ap1) != Py_TYPE(ap2)) { > > > > Pauli fixed a lot of those. Did you remove the old build directory and > all > that stuff? > > I thought I fixed all of those, but apparently missed that one. Builds fine > for > me, but maybe it didn't find Atlas on my system for some reason. > > There are more - grep -r ob_type numpy/* - how do you want to go about fixing these things? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Feb 14 13:51:04 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Feb 2010 11:51:04 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> Message-ID: On Sun, Feb 14, 2010 at 11:38 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Feb 14, 2010 at 11:32 AM, Pauli Virtanen wrote: > >> Charles R Harris gmail.com> writes: >> > - if (ap1->ob_type != ap2->ob_type) { >> > + if (Py_TYPE(ap1) != Py_TYPE(ap2)) { >> > >> > Pauli fixed a lot of those. Did you remove the old build directory and >> all >> that stuff? >> >> I thought I fixed all of those, but apparently missed that one. Builds >> fine for >> me, but maybe it didn't find Atlas on my system for some reason. >> >> > There are more - grep -r ob_type numpy/* - how do you want to go about > fixing these things? > > The py3k branch doesn't compile: numpy/core/src/multiarray/buffer.h: At top level: numpy/core/src/multiarray/buffer.h:14: error: conflicting types for '_descriptor_from_pep3118_format' numpy/core/src/multiarray/common.c:220: note: previous implicit declaration of '_descriptor_from_pep3118_format' was here Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Sun Feb 14 14:10:55 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Sun, 14 Feb 2010 14:10:55 -0500 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? In-Reply-To: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> Message-ID: On Sun, Feb 14, 2010 at 3:01 AM, Pierre GM wrote: > It has been suggested (ticket #1262) to change the default dtype=float to dtype=None in np.genfromtxt. Any thoughts ? +1 For my case, I hardly used the default behavior at all. Skipper From pgmdevlist at gmail.com Sun Feb 14 15:13:57 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 15:13:57 -0500 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ?
In-Reply-To: <9457e7c81002140627v75d4cca5xd15516607fcf8052@mail.gmail.com> References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> <9457e7c81002140627v75d4cca5xd15516607fcf8052@mail.gmail.com> Message-ID: <0C870793-E970-40C1-B76D-B89A786BC25C@gmail.com> On Feb 14, 2010, at 9:27 AM, Stéfan van der Walt wrote: > On 14 February 2010 14:54, Pierre GM wrote: >> This backwards-incompatibility bugs me. Why don't we set dtype=None as the default for ndfromtxt & mafromtxt and tell people to use these functions instead (I'm pretty sure nobody knew they existed, so we can break them without upsetting anyone)(of course, there's gonna be a counterexample as soon as I post that). > > If `ndfromtxt' does the same as genfromtxt, why do we have it in the > main numpy namespace? Because it uses a slightly different set of defaults, so it's not exactly the same. From pgmdevlist at gmail.com Sun Feb 14 15:22:04 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 15:22:04 -0500 Subject: [Numpy-discussion] Why does np.nan{min, max} clobber my array mask? In-Reply-To: <20100213190410.I26855@halibut.com> References: <20100213190410.I26855@halibut.com> Message-ID: <5AFAA9DD-E54C-4EF4-B0FC-A4B62AA6401C@gmail.com> On Feb 13, 2010, at 10:04 PM, David Carmean wrote: > I'm just starting to work with masked arrays and I've found some behavior that > definitely does not follow the Principle of Least Surprise: A fuzzy concept ;) > > I've generated a 2-d array from a list of lists, where the elements are floats with > a good number of NaNs. Inspection shows the expected numbers for ma.count() and > ma.count_masked(). > > However, as soon as I run np.nanmin() or np.nanmax() over it, all of the mask elements > are reset to False. I'm sorry, I can't follow you. Can you post a simpler self-contained example I can play with ? Why use np.nanmin/max ? These functions are designed for ndarrays, to avoid using a masked array: can't you just use min/max on the masked array ?
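To make the suggestion above concrete, here is a small sketch (my own illustration, not from the original thread) of masking the NaNs once and then using the masked array's own reductions, which skip the masked entries and leave the mask intact:

```python
import numpy as np
import numpy.ma as ma

# A 2-d float array with NaNs standing in for missing values,
# as in the report above.
x = np.array([[1.0, np.nan, 3.0],
              [np.nan, 5.0, 0.5]])

# Mask the invalid entries once; count/count_masked behave as expected.
xm = ma.masked_invalid(x)
print(ma.count(xm), ma.count_masked(xm))   # 4 2

# The masked methods ignore masked entries and do not touch the mask,
# so there is no need to reach for np.nanmin/np.nanmax here.
lo, hi = xm.min(), xm.max()
print(lo, hi)                              # 0.5 5.0
```

Once the data is a masked array, the plain min/max already handle the missing entries, which is the point of Pierre's question.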
From stefan at sun.ac.za Sun Feb 14 15:58:16 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 14 Feb 2010 22:58:16 +0200 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? In-Reply-To: <0C870793-E970-40C1-B76D-B89A786BC25C@gmail.com> References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> <9457e7c81002140627v75d4cca5xd15516607fcf8052@mail.gmail.com> <0C870793-E970-40C1-B76D-B89A786BC25C@gmail.com> Message-ID: <9457e7c81002141258r14c7d601s5eec0fb32eb7383d@mail.gmail.com> On 14 February 2010 22:13, Pierre GM wrote: > On Feb 14, 2010, at 9:27 AM, Stéfan van der Walt wrote: >> On 14 February 2010 14:54, Pierre GM wrote: >>> This backwards-incompatibility bugs me. Why don't we set dtype=None as the default for ndfromtxt & mafromtxt and tell people to use these functions instead (I'm pretty sure nobody knew they existed, so we can break them without upsetting anyone)(of course, there's gonna be a counterexample as soon as I post that). >> >> If `ndfromtxt' does the same as genfromtxt, why do we have it in the >> main numpy namespace? > > Because it uses a slightly different set of defaults, so it's not exactly the same. Here is the content of ndfromtxt: kwargs['usemask'] = False return genfromtxt(fname, **kwargs) Here is the signature of genfromtxt: np.genfromtxt(fname, dtype=, comments='#', delimiter=None, skiprows=0, skip_header=0, skip_footer=0, converters=None, missing='', missing_values=None, filling_values=None, usecols=None, names=None, excludelist=None, deletechars=None, autostrip=False, case_sensitive=True, defaultfmt='f%i', unpack=None, usemask=False, loose=True, invalid_raise=True) All ndfromtxt does is to force usemask to False (but usemask is False by default). This isn't documented, nor is it reflected by the name. What am I missing?
Regards Stéfan From pgmdevlist at gmail.com Sun Feb 14 17:03:42 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 17:03:42 -0500 Subject: [Numpy-discussion] dtype=None as default for np.genfromtxt ? In-Reply-To: <9457e7c81002141258r14c7d601s5eec0fb32eb7383d@mail.gmail.com> References: <5C080678-C72D-47E5-A818-5E13DDA5631F@gmail.com> <9457e7c81002140627v75d4cca5xd15516607fcf8052@mail.gmail.com> <0C870793-E970-40C1-B76D-B89A786BC25C@gmail.com> <9457e7c81002141258r14c7d601s5eec0fb32eb7383d@mail.gmail.com> Message-ID: On Feb 14, 2010, at 3:58 PM, Stéfan van der Walt wrote: > All ndfromtxt does is to force usemask to False (but usemask is False > by default). This isn't documented, nor is it reflected by the name. > What am I missing? http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html#shortcut-functions In my mind, np.genfromtxt is a generic function that can be complemented with shortcut functions. These shortcuts can use different defaults which trigger different behavior. Here, ndfromtxt forces usemask to False, so you'll never have a MaskedArray (even if some entries are missing). It doesn't do more than that, true, but it's an example of a shortcut. I was suggesting that we could use dtype=None as a default for ndfromtxt, which would address the issue raised in the ticket without breaking backward compatibility (because nobody uses ndfromtxt anyway). Note that we already have recfromtxt that uses dtype=None by default, but it returns a recarray instead of a structured array. From friedrichromstedt at gmail.com Sun Feb 14 17:36:08 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 14 Feb 2010 23:36:08 +0100 Subject: [Numpy-discussion] Scalar-ndarray arguments passed to not_equal In-Reply-To: References: Message-ID: Ok, to come back to the original question I have pushed a branch "bugtracking.01.numpyops" to github.com/friedrichromstedt/upy.
If you want to inspect the problem more closely, please pull from this repo, and run demo.py in the branch mentioned. Today's version is tagged 10-02-14_GMT-22-20. Please install the upy directory in some place where Python can load it as a package. Something I found out: * The issue seems to apply both to numpy.equal and numpy.not_equal (or their equivalents overloaded by means of numpy.set_numeric_ops()). * It seems not to apply to anything else, but I didn't check all ops; precisely, I checked one out of each category. Furthermore, I would appreciate any help in explaining the behaviour of case 7. It is clear to me only up to the final result, all steps logged seem sensible, except numpy's last conclusion ... It is a bit tricky, but thanks for any help, Friedrich From pgmdevlist at gmail.com Sun Feb 14 18:05:07 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 14 Feb 2010 18:05:07 -0500 Subject: [Numpy-discussion] Remaining buildbot errors. In-Reply-To: References: Message-ID: <21D180E6-697F-4AD9-9077-B69172C369B3@gmail.com> On Feb 14, 2010, at 12:02 PM, Charles R Harris wrote: > Python 2.4 > > ====================================================================== > ERROR: test_view_to_flexible_dtype (test_core.TestMaskedView) > ---------------------------------------------------------------------- > > Traceback (most recent call last): > File "/home/numpybb/Buildbot/numpy/b13/numpy-install/lib/python2.4/site-packages/numpy/ma/tests/test_core.py", line 3333, in test_view_to_flexible_dtype > test = a[0].view([('A', float), ('B', float)]) > > File "../numpy-install/lib/python2.4/site-packages/numpy/ma/core.py", line 2877, in view > TypeError: attribute 'shape' of 'numpy.generic' objects is not writable Argh. OK, r8112 should do it. From xavier.gnata at gmail.com Sun Feb 14 18:38:57 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Mon, 15 Feb 2010 00:38:57 +0100 Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> Message-ID: <4B788991.3070605@gmail.com> Ok! git clone git://github.com/pv/numpy-work.git git checkout origin/py3k NPY_SEPARATE_BUILD=1 python3.1 setup.py build but now it fails during the build: In file included from numpy/core/src/multiarray/buffer.c:14, from numpy/core/src/multiarray/multiarraymodule_onefile.c:36: numpy/core/src/multiarray/buffer.h: At top level: numpy/core/src/multiarray/buffer.h:14: error: conflicting types for '_descriptor_from_pep3118_format' numpy/core/src/multiarray/common.c:220: note: previous implicit declaration of '_descriptor_from_pep3118_format' was here In file included from numpy/core/src/multiarray/multiarraymodule_onefile.c:36: numpy/core/src/multiarray/buffer.c: In function '_buffer_format_string': numpy/core/src/multiarray/buffer.c:151: warning: unused variable 'repr' In file included from numpy/core/src/multiarray/multiarraymodule_onefile.c:36: numpy/core/src/multiarray/buffer.c:204:2: warning: #warning XXX -- should it use UTF-8 here? error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-3.1/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/include/python3.1 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-3.1/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.linux-x86_64-3.1/numpy/core/src/multiarray/multiarraymodule_onefile.o" failed with exit status 1 BTW, is there a better place to discuss these "python3 only" related issues?
Xavier > Xavier Gnata gmail.com> writes: > >> Well I ran git clone git://github.com/pv/numpy-work.git an hour ago (in >> an empty directory) >> > That will give you the master branch, which indeed does not contain any Py3 > stuff. You need also to switch to the py3k branch: > > git co origin/py3k > > To see all available branches, do > > git branch -r > > From robert.kern at gmail.com Sun Feb 14 20:13:27 2010 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 14 Feb 2010 19:13:27 -0600 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <4B788991.3070605@gmail.com> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> Message-ID: <3d375d731002141713i344ca700lf710cc330a52de6f@mail.gmail.com> On Sun, Feb 14, 2010 at 17:38, Xavier Gnata wrote: > BTW, is there a better place to discuss these "python3 only" related issues? I suggest starting a new thread, but numpy-discussion is the right place. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sccolbert at gmail.com Sun Feb 14 21:07:16 2010 From: sccolbert at gmail.com (Chris Colbert) Date: Sun, 14 Feb 2010 21:07:16 -0500 Subject: [Numpy-discussion] Multithreading support In-Reply-To: <64ddb72c1002130425p24c62a8end72ecd7a87f1003b@mail.gmail.com> References: <5b8d13221002130420i17f634d1xe8ed114f7969d707@mail.gmail.com> <64ddb72c1002130425p24c62a8end72ecd7a87f1003b@mail.gmail.com> Message-ID: <7f014ea61002141807u7caec67ck8385f14ac16248d@mail.gmail.com> Perhaps it's my inability to properly use openmp, but when working on scikits.image on algorithms doing per-pixel manipulation with numpy arrays (using Cython), I saw better performance using Python threads and releasing the GIL than I did with openmp. I found the openmp overhead to be quite large, and for the size of the images I was working with (5 MP), the overhead wasn't worth it. I made a post to Cython-dev about it. cheers, Chris On Sat, Feb 13, 2010 at 7:25 AM, René Dudfield wrote: > hi, > > see: http://numcorepy.blogspot.com/ > > They see a benefit when working with large arrays. Otherwise you are > limited by memory - and the extra cores don't help with memory bandwidth. > > cheers, > > > > > On Sat, Feb 13, 2010 at 2:20 PM, David Cournapeau wrote: > >> On Sat, Feb 13, 2010 at 6:20 PM, Wolfgang Kerzendorf >> wrote: >> > Dear all, >> > >> > I don't know much about parallel programming so I don't know how easy it >> is to do that: When doing simple array operations like adding two arrays or >> adding a number to the array, is numpy able to put this on multiple cores? I >> have tried it but it doesn't seem to do that. Is there a special multithread >> implementation of numpy. >> >> Depending on your definition of simple operations, Numpy supports >> multithreaded execution or not. For ufuncs (which is used for things >> like adding two arrays together, etc...), there is no multithread >> support. >> >> > >> > IDL has this feature where it checks how many cores are available and uses >> them. This feature in numpy would make an already amazing package even >> better. >> >> AFAIK, using multi-thread at the core level of NumPy has been tried >> only once a few years ago, without much success (no significant >> performance improvement). Maybe the approach was flawed in some ways. >> Some people have suggested using OpenMP, but nobody has ever produced >> something significant AFAIK: >> >> http://mail.scipy.org/pipermail/numpy-discussion/2008-March/031897.html >> >> Note that Linear algebra operations can run in // depending on your >> libraries. In particular, the dot function runs in // if your >> blas/lapack does. >> >> cheers, >> >> David >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cycomanic at gmail.com Sun Feb 14 23:03:41 2010 From: cycomanic at gmail.com (Jochen Schroeder) Date: Mon, 15 Feb 2010 15:03:41 +1100 Subject: [Numpy-discussion] [ANN] pyfftw-0.2 released Message-ID: <20100215040339.GA2115@cudos0803> Hi all, I'm pleased to announce version 0.2 of pyfftw, a python module providing access to the FFTW3 library. New features: - pyfftw can now create advanced plans (needs testing) - provide location of fftw libraries at runtime with an environment variable - better detection of fftw location at install time Note: Pyfftw moved to launchpad, new url is http://launchpad.net/pyfftw. Cheers Jochen From pav at iki.fi Mon Feb 15 01:55:53 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Feb 2010 08:55:53 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <4B788991.3070605@gmail.com> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> Message-ID: <1266216953.2728.2.camel@talisman> Mon, 2010-02-15 at 00:38 +0100, Xavier Gnata wrote: > Ok!
I found the openmp overhead to be quite large, and for the size the images i
was working with (5 MP), the overhead wasn't worth it. I made a post to
Cython-dev about it.

cheers,

Chris

On Sat, Feb 13, 2010 at 7:25 AM, René Dudfield wrote:
> hi,
>
> see: http://numcorepy.blogspot.com/
>
> They see a benefit when working with large arrays. Otherwise you are
> limited by memory - and the extra cores don't help with memory bandwidth.
>
> cheers,
>
> On Sat, Feb 13, 2010 at 2:20 PM, David Cournapeau wrote:
>
>> On Sat, Feb 13, 2010 at 6:20 PM, Wolfgang Kerzendorf
>> wrote:
>> > Dear all,
>> >
>> > I don't know much about parallel programming so I don't know how easy it
>> is to do that: When doing simple array operations like adding two arrays or
>> adding a number to the array, is numpy able to put this on multiple cores? I
>> have tried it but it doesnt seem to do that. Is there a special multithread
>> implementation of numpy.
>>
>> Depending on your definition of simple operations, Numpy supports
>> multithreaded execution or not. For ufuncs (which is used for things
>> like adding two arrays together, etc...), there is no multithread
>> support.
>>
>> > IDL has this feature where it checks how many cores available and uses
>> them. This feature in numpy would make an already amazing package even
>> better.
>>
>> AFAIK, using multi-thread at the core level of NumPy has been tried
>> only once a few years ago, without much success (no significant
>> performance improvement). Maybe the approach was flawed in some ways.
>> Some people have suggested using OpenMP, but nobody has every produced
>> something significant AFAIK:
>>
>> http://mail.scipy.org/pipermail/numpy-discussion/2008-March/031897.html
>>
>> Note that Linear algebra operations can run in // depending on your
>> libraries. In particular, the dot function runs in // if your
>> blas/lapack does.
>>
>> cheers,
>>
>> David

From cycomanic at gmail.com  Sun Feb 14 23:03:41 2010
From: cycomanic at gmail.com (Jochen Schroeder)
Date: Mon, 15 Feb 2010 15:03:41 +1100
Subject: [Numpy-discussion] [ANN] pyfftw-0.2 released
Message-ID: <20100215040339.GA2115@cudos0803>

Hi all,

I'm pleased to announce version 0.2 of pyfftw, a python module providing
access to the FFTW3 library.

New features:
- pyfftw can now create advanced plans (needs testing)
- provide location of fftw libraries at runtime with an environment variable
- better detection of fftw location at install time

Note: Pyfftw moved to launchpad, new url is http://launchpad.net/pyfftw.

Cheers
Jochen

From pav at iki.fi  Mon Feb 15 01:55:53 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Feb 2010 08:55:53 +0200
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <4B788991.3070605@gmail.com>
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com>
Message-ID: <1266216953.2728.2.camel@talisman>

ma, 2010-02-15 kello 00:38 +0100, Xavier Gnata kirjoitti:
> Ok!
> git clone git://github.com/pv/numpy-work.git
> git checkout origin/py3k
> NPY_SEPARATE_BUILD=1 python3.1 setup.py build
>
> but now it fails during the build:
>
> In file included from numpy/core/src/multiarray/buffer.c:14,
> from numpy/core/src/multiarray/multiarraymodule_onefile.c:36:
> numpy/core/src/multiarray/buffer.h: At top level:
> numpy/core/src/multiarray/buffer.h:14: error: conflicting types for
> '_descriptor_from_pep3118_format'
> numpy/core/src/multiarray/common.c:220: note: previous implicit
> declaration of '_descriptor_from_pep3118_format' was here
> In file included from
> numpy/core/src/multiarray/multiarraymodule_onefile.c:36:
> numpy/core/src/multiarray/buffer.c: In function '_buffer_format_string':
> numpy/core/src/multiarray/buffer.c:151: warning: unused variable 'repr'

Hmm, I probably tested only the separate compilation properly as it
seems the single-file build is failing. The environment variable is
actually NPY_SEPARATE_COMPILATION=1, not *_BUILD.

-- 
Pauli Virtanen

From faltet at pytables.org  Mon Feb 15 05:17:05 2010
From: faltet at pytables.org (Francesc Alted)
Date: Mon, 15 Feb 2010 11:17:05 +0100
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com>
Message-ID: <201002151117.05646.faltet@pytables.org>

A Sunday 14 February 2010 13:40:17 Stéfan van der Walt escrigué:
> On 14 February 2010 01:23, David Cournapeau wrote:
> > I think that there should be absolutely no change whatsoever, for two
> > reasons: - the release is in a few weeks, it is too late to change much.
> > The whole datetime issue happened because the change came too late, I
> > would hope that we avoid the same mistake.
>
> I agree with David; we should not rush to include any new, untested
> features now. That is what 2.1 is for.
+1

As Travis pointed out, NumPy 2.0 should mostly be a release to contain all the
ABI changes we think we will need until NumPy 3.0 (hope that David and the
other core developers can figure out a good way to do this).

> To quote an old war poster, let's "keep calm and carry on."

Exactly :-)

-- 
Francesc Alted

From seb.haase at gmail.com  Mon Feb 15 05:20:08 2010
From: seb.haase at gmail.com (Sebastian Haase)
Date: Mon, 15 Feb 2010 11:20:08 +0100
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <201002151117.05646.faltet@pytables.org>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org>
Message-ID: 

On Mon, Feb 15, 2010 at 11:17 AM, Francesc Alted wrote:
> A Sunday 14 February 2010 13:40:17 Stéfan van der Walt escrigué:
>> On 14 February 2010 01:23, David Cournapeau wrote:
>> > I think that there should be absolutely no change whatsoever, for two
>> > reasons: - the release is in a few weeks, it is too late to change much.
>> > The whole datetime issue happened because the change came too late, I
>> > would hope that we avoid the same mistake.
>>
>> I agree with David; we should not rush to include any new, untested
>> features now. That is what 2.1 is for.
>
> +1
>
> As Travis pointed out, NumPy 2.0 should mostly be a release to contain all the
> ABI changes we think we will need until NumPy 3.0 (hope that David and the
> other core developers can figure out a good way to do this).
>
>> To quote an old war poster, let's "keep calm and carry on."
>
> Exactly :-)
>
> --

Is the addition of a dict-attribute (or just a pointer to one) an ABI change ?
-S.

From david at silveregg.co.jp  Mon Feb 15 05:42:54 2010
From: david at silveregg.co.jp (David Cournapeau)
Date: Mon, 15 Feb 2010 19:42:54 +0900
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: 
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org>
Message-ID: <4B79252E.9010607@silveregg.co.jp>

Sebastian Haase wrote:
> On Mon, Feb 15, 2010 at 11:17 AM, Francesc Alted wrote:
>> A Sunday 14 February 2010 13:40:17 Stéfan van der Walt escrigué:
>>> On 14 February 2010 01:23, David Cournapeau wrote:
>>>> I think that there should be absolutely no change whatsoever, for two
>>>> reasons: - the release is in a few weeks, it is too late to change much.
>>>> The whole datetime issue happened because the change came too late, I
>>>> would hope that we avoid the same mistake.
>>> I agree with David; we should not rush to include any new, untested
>>> features now. That is what 2.1 is for.
>> +1
>>
>> As Travis pointed out, NumPy 2.0 should mostly be a release to contain all the
>> ABI changes we think we will need until NumPy 3.0 (hope that David and the
>> other core developers can figure out a good way to do this).
>>
>>> To quote an old war poster, let's "keep calm and carry on."
>> Exactly :-)
>>
>> --
> Is the addition of a dict-attribute (or just a pointer to one) an ABI change ?

It is always an ABI change, but is mostly backward compatible (which is
neither the case of matplotlib or scipy AFAIK). But ABI changes for 2.0
are OK, that's the whole point of doing it in the first place.

cheers,

David

From david at silveregg.co.jp  Mon Feb 15 06:24:36 2010
From: david at silveregg.co.jp (David Cournapeau)
Date: Mon, 15 Feb 2010 20:24:36 +0900
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <4B79252E.9010607@silveregg.co.jp>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org> <4B79252E.9010607@silveregg.co.jp>
Message-ID: <4B792EF4.5040205@silveregg.co.jp>

David Cournapeau wrote:
>
> It is always an ABI change, but is mostly backward compatible (which is
> neither the case of matplotlib or scipy AFAIK).

This sentence does not make any sense: I meant that it is backward
compatible from an ABI POV, unless the structure PyArray_Array itself is
included in another structure (instead of merely being used).

Neither matplotlib nor scipy does that AFAIK - the main use-case for that
would be to inherit from numpy array at the C level, but I doubt many
extensions do that. For people who do C++, that's the same problem as
changing a base class, which always breaks the ABI,

cheers,

David

From dave.hirschfeld at gmail.com  Mon Feb 15 08:35:11 2010
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Mon, 15 Feb 2010 13:35:11 +0000 (UTC)
Subject: [Numpy-discussion] Unpleasant behavior with poly1d and numpy scalar multiplication
References: 
Message-ID: 

Charles R Harris gmail.com> writes:

> I was also thinking that someone might want to provide a better display at
> some point, drawing on a canvas, for instance. And what happens when the
> degree gets up over 100, which is quite reasonable with the Chebyshev
> polynomials?
There may well be better ways to do it but I've found the following function
to be quite handy for visualising latex equations:

from matplotlib import interactive, is_interactive
from matplotlib.pyplot import figure, figtext, show
from matplotlib.backends.backend_agg import RendererAgg
from sympy import latex as to_latex

def eqview(expr, fontsize=28, dpi=80):
    IS_INTERACTIVE = is_interactive()
    try:
        interactive(False)
        tex = '$%s$' % to_latex(expr)  # mathtext needs the $ delimiters
        fig = figure(dpi=dpi, facecolor='w')
        h = figtext(0.5, 0.5, tex,
                    fontsize=fontsize,
                    horizontalalignment='center',
                    verticalalignment='center')
        bbox = h.get_window_extent(RendererAgg(15, 15, dpi))
        fig.set_size_inches(1.1*bbox.width/dpi, 1.25*bbox.height/dpi)
        show()
    finally:
        interactive(IS_INTERACTIVE)

NB: Sympy provides the latex function to convert the equation objects into
latex as well as other ways to display the objects in the sympy.printing
module. It shouldn't be too hard to do something similar if someone was so
inclined!

HTH,
Dave

From aisaac at american.edu  Mon Feb 15 08:42:02 2010
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 15 Feb 2010 08:42:02 -0500
Subject: [Numpy-discussion] cov
Message-ID: <4B794F2A.8080207@american.edu>

1. Should `numpy.cov` use `ddof` instead of `bias`,
like `std` and `mean`?

2. Should the docs for scipy.cov state that it is deprecated?
http://docs.scipy.org/scipy/docs/scipy.stats.stats.cov/#scipy-stats-cov
(Use raises a deprecation warning.)

Thanks,
Alan Isaac

From robert.kern at gmail.com  Mon Feb 15 09:21:05 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 15 Feb 2010 08:21:05 -0600
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <4B792EF4.5040205@silveregg.co.jp>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org> <4B79252E.9010607@silveregg.co.jp> <4B792EF4.5040205@silveregg.co.jp>
Message-ID: <3d375d731002150621q5190fdfbva68a26e15e7493ac@mail.gmail.com>

On Mon, Feb 15, 2010 at 05:24, David Cournapeau wrote:
> David Cournapeau wrote:
>
>>
>> It is always an ABI change, but is mostly backward compatible (which is
>> neither the case of matplotlib or scipy AFAIK).
>
> This sentence does not make any sense: I meant that it is backward
> compatible from an ABI POV, unless the structure PyArray_Array itself is
> included in another structure (instead of merely being used).
>
> Neither matplotlib or scipy do that AFAIK - the main use-case for that
> would be to inherit from numpy array at the C level, but I doubt many
> extensions do that. For people who do C++, that's the same problem as
> changing a base class, which always break the ABI,

Actually, it's PyArray_Descr, which corresponds to numpy.dtype, that
has been extended. That has even fewer possible use cases for
subtyping. I know of none.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From josef.pktd at gmail.com  Mon Feb 15 09:31:00 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 15 Feb 2010 09:31:00 -0500
Subject: [Numpy-discussion] cov
In-Reply-To: <4B794F2A.8080207@american.edu>
References: <4B794F2A.8080207@american.edu>
Message-ID: <1cd32cbb1002150631u1178b49ey18d353bb5f4cef32@mail.gmail.com>

On Mon, Feb 15, 2010 at 8:42 AM, Alan G Isaac wrote:
> 1. Should `numpy.cov` use `ddof` instead of `bias`,
> like `std` and `mean`?

+1

(I just checked scipy stats and the usage of bias versus ddof is also
inconsistent.
Is there an interpretation for a general ddof for skew
and kurtosis, or does only the binary choice bias=True/False make
sense?)

> 2. Should the docs for scipy.cov state that it is deprecated?
> http://docs.scipy.org/scipy/docs/scipy.stats.stats.cov/#scipy-stats-cov
> (Use raises a deprecation warning.)

I think so, we have not yet done any doc cleaning for the removal or
deprecation of the descriptive statistics in scipy.stats, mean, var,
cov, ...

It would be a good policy in general to add more information about
deprecation and changes in behavior in the docstrings and not only in
the warnings.

Josef

> Thanks,
> Alan Isaac

From david.huard at gmail.com  Mon Feb 15 11:04:04 2010
From: david.huard at gmail.com (David Huard)
Date: Mon, 15 Feb 2010 11:04:04 -0500
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <3d375d731002150621q5190fdfbva68a26e15e7493ac@mail.gmail.com>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org> <4B79252E.9010607@silveregg.co.jp> <4B792EF4.5040205@silveregg.co.jp> <3d375d731002150621q5190fdfbva68a26e15e7493ac@mail.gmail.com>
Message-ID: <91cf711d1002150804h78d085bfpdf9b455f0581065a@mail.gmail.com>

In the list of things to do, I suggest deleting completely the old
histogram behaviour and the `new` keyword. The `new` keyword argument
has raised a deprecation warning since 1.3 and was set for removal in 1.4.

David H.

On Mon, Feb 15, 2010 at 9:21 AM, Robert Kern wrote:
> On Mon, Feb 15, 2010 at 05:24, David Cournapeau wrote:
>> David Cournapeau wrote:
>>
>>>
>>> It is always an ABI change, but is mostly backward compatible (which is
>>> neither the case of matplotlib or scipy AFAIK).
>>
>> This sentence does not make any sense: I meant that it is backward
>> compatible from an ABI POV, unless the structure PyArray_Array itself is
>> included in another structure (instead of merely being used).
>>
>> Neither matplotlib or scipy do that AFAIK - the main use-case for that
>> would be to inherit from numpy array at the C level, but I doubt many
>> extensions do that. For people who do C++, that's the same problem as
>> changing a base class, which always break the ABI,
>
> Actually, it's PyArray_Descr, which corresponds to numpy.dtype, that
> has been extended. That has even fewer possible use cases for
> subtyping. I know of none.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>  -- Umberto Eco

From bsouthey at gmail.com  Mon Feb 15 11:18:59 2010
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 15 Feb 2010 10:18:59 -0600
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <1266216953.2728.2.camel@talisman>
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman>
Message-ID: <4B7973F3.7040504@gmail.com>

On 02/15/2010 12:55 AM, Pauli Virtanen wrote:
> ma, 2010-02-15 kello 00:38 +0100, Xavier Gnata kirjoitti:
>> Ok!
>> git clone git://github.com/pv/numpy-work.git
>> git checkout origin/py3k
>> NPY_SEPARATE_BUILD=1 python3.1 setup.py build
>>
>> but now it fails during the build:
>>
>> In file included from numpy/core/src/multiarray/buffer.c:14,
>> from numpy/core/src/multiarray/multiarraymodule_onefile.c:36:
>> numpy/core/src/multiarray/buffer.h: At top level:
>> numpy/core/src/multiarray/buffer.h:14: error: conflicting types for
>> '_descriptor_from_pep3118_format'
>> numpy/core/src/multiarray/common.c:220: note: previous implicit
>> declaration of '_descriptor_from_pep3118_format' was here
>> In file included from
>> numpy/core/src/multiarray/multiarraymodule_onefile.c:36:
>> numpy/core/src/multiarray/buffer.c: In function '_buffer_format_string':
>> numpy/core/src/multiarray/buffer.c:151: warning: unused variable 'repr'
>>
> Hmm, I probably tested only the separate compilation properly as it
> seems the single-file build is failing. The environment variable is
> actually NPY_SEPARATE_COMPILATION=1, not *_BUILD.
>
Hi,
Is there a correct way to get Python3.1 to find the relative path on Linux?
I can change the import statement to work but I do not think that is viable.
I tried appending the directory with sys.path but that did not work.

Python 3.1.1 (r311:74480, Feb 15 2010, 09:08:21)
[GCC 4.4.1 20090725 (Red Hat 4.4.1-2)] on linux2
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.1/site-packages/numpy/__init__.py", line 136, in <module>
    from . import add_newdocs
  File "/usr/local/lib/python3.1/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from numpy.lib import add_newdoc
  File "/usr/local/lib/python3.1/site-packages/numpy/lib/__init__.py", line 1, in <module>
    from info import __doc__
ImportError: No module named info

Thanks
Bruce

From josef.pktd at gmail.com  Mon Feb 15 11:32:04 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 15 Feb 2010 11:32:04 -0500
Subject: [Numpy-discussion] numpy 1.4 distutils install_clib when there are none?
Message-ID: <1cd32cbb1002150832q19db4a27w5a154809b044c9a1@mail.gmail.com>

I was doing the final checks for a new release of statsmodels and ran
into a problem with install_clib

I do "setup.py install" from the source or sdist of scikits.statsmodels
and the install breaks with this message at the end

running install_clib
No module named msvccompiler in numpy.distutils; trying from distutils
error: Python was built with Visual Studio 2003; extensions must be built
with a compiler than can generate compatible binaries. Visual Studio 2003
was not found on this system. If you have Cygwin installed, you can try
compiling with MingW32, by passing "-c mingw32" to setup.py.

statsmodels is pure python and there are no extensions to compile, the
setup.py file is here
http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head%3A/setup.py
we don't have any setup.py on lower levels

setup.py install worked and works without problems with numpy 1.3.0

same story with "easy_install scikits.statsmodels" directly from pypi,
works for numpy 1.3.0, same error as above with numpy 1.4.0

Any ideas? Are there some configuration settings that I need to change
to install with numpy 1.4?

"setup.py build install" works, but does it mean it requires a c compiler
for a pure python package when using numpy distutils?
Josef

From aisaac at american.edu  Mon Feb 15 11:46:40 2010
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 15 Feb 2010 11:46:40 -0500
Subject: [Numpy-discussion] cov
In-Reply-To: <1cd32cbb1002150631u1178b49ey18d353bb5f4cef32@mail.gmail.com>
References: <4B794F2A.8080207@american.edu> <1cd32cbb1002150631u1178b49ey18d353bb5f4cef32@mail.gmail.com>
Message-ID: <4B797A70.9040400@american.edu>

> On Mon, Feb 15, 2010 at 8:42 AM, Alan G Isaac wrote:
>> > 1. Should `numpy.cov` use `ddof` instead of `bias`,
>> > like `std` and `mean`?

On 2/15/2010 9:31 AM, josef.pktd at gmail.com wrote:
> +1
>
> (I just checked scipy stats and the usage of bias versus ddof is also
> inconsistent. Is there an interpretation for a general ddof for skew
> and kurtosis, or does only the binary choice bias=True/False make
> sense?)

So then this should get into 2.0 and not wait, right?
Should I open a ticket for this?

As for the SciPy functions, I also thought they should change
to the new keyword, but this seems less pressing at the moment.

Alan

From pav at iki.fi  Mon Feb 15 11:55:18 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Feb 2010 18:55:18 +0200
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <4B7973F3.7040504@gmail.com>
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com>
Message-ID: <1266252918.6419.5.camel@idol>

ma, 2010-02-15 kello 10:18 -0600, Bruce Southey kirjoitti:
[clip]
> Is there a correct way to get Python3.1 to find the relative path on Linux?
> I can change the import statement to work but I do not think that is viable.

You need to use relative imports. 2to3 should be able to take care of
this.
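The rule can be illustrated with a throwaway package (the names pkg, pkg2 and info are invented): inside a Python 3 package, a bare `from info import ...` is an absolute import and fails, while `from .info import ...` resolves against the package.

```python
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()

# pkg uses the explicit relative form that 2to3 produces.
os.mkdir(os.path.join(tmp, "pkg"))
with open(os.path.join(tmp, "pkg", "info.py"), "w") as f:
    f.write('__doc__ = "info docstring"\n')
with open(os.path.join(tmp, "pkg", "__init__.py"), "w") as f:
    f.write("from .info import __doc__\n")

# pkg2 keeps the Python 2 spelling, which Python 3 reads as absolute.
os.mkdir(os.path.join(tmp, "pkg2"))
with open(os.path.join(tmp, "pkg2", "info.py"), "w") as f:
    f.write('__doc__ = "info docstring"\n')
with open(os.path.join(tmp, "pkg2", "__init__.py"), "w") as f:
    f.write("from info import __doc__\n")

sys.path.insert(0, tmp)

import pkg                  # works: the dot makes the import relative

try:
    import pkg2             # fails: there is no top-level module "info"
    bare_import_worked = True
except ImportError:
    bare_import_worked = False
```

This is also why the numpy/lib/__init__.py trick of re-exporting a docstring via `from .info import __doc__` keeps working on Python 3 once the dot is in place.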
[clip]
>    File "/usr/local/lib/python3.1/site-packages/numpy/lib/__init__.py",
> line 1, in <module>
>      from info import __doc__

That statement should read

        from .info import __doc__

and indeed, it reads like that for me. Check how it is in
build/py3k/numpy/lib/__init__.py

Most likely you interrupted the build by Ctrl+C and 2to3 did not finish
the conversion of the files to Python3 format. Try removing the build/
directory and trying again -- if you interrupt it, 2to3 may not have
finished running.

Of course, it should be more robust, but at the moment, it isn't
(patches welcome).

-- 
Pauli Virtanen

From charlesr.harris at gmail.com  Mon Feb 15 12:23:13 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 15 Feb 2010 10:23:13 -0700
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <1266252918.6419.5.camel@idol>
References: <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol>
Message-ID: 

On Mon, Feb 15, 2010 at 9:55 AM, Pauli Virtanen wrote:

> ma, 2010-02-15 kello 10:18 -0600, Bruce Southey kirjoitti:
> [clip]
> > Is there a correct way to get Python3.1 to find the relative path on
> Linux?
> > I can change the import statement to work but I do not think that is
> viable.
>
> You need to use relative imports. 2to3 should be able to take care of
> this.
>
> Try removing the build/
> directory and trying again -- if you interrupt it, 2to3 may not have
> finished running.
>
> Of course, it should be more robust, but at the moment, it isn't
> (patches welcome).
>

Segfaults:

test_multiarray.TestNewBufferProtocol.test_export_simple_1d ... FAIL
test_multiarray.TestNewBufferProtocol.test_export_simple_nd ... ok
test_multiarray.TestNewBufferProtocol.test_export_subarray ... FAIL
test_multiarray.TestNewBufferProtocol.test_roundtrip ... Segmentation fault

Are there changes you haven't pushed to github? I don't want to be making
fixes that already exist. It would also be easier to work on this if the
current state was in the main repository so that the rest of us could push
changes.

Chuck

From pav at iki.fi  Mon Feb 15 12:55:07 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Feb 2010 19:55:07 +0200
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: 
References: <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol>
Message-ID: <1266256506.6419.14.camel@idol>

ma, 2010-02-15 kello 10:23 -0700, Charles R Harris kirjoitti:
[clip]
> Segfaults:
>
> test_multiarray.TestNewBufferProtocol.test_export_simple_1d ... FAIL
> test_multiarray.TestNewBufferProtocol.test_export_simple_nd ... ok
> test_multiarray.TestNewBufferProtocol.test_export_subarray ... FAIL
> test_multiarray.TestNewBufferProtocol.test_roundtrip ... Segmentation
> fault

Worksforme, and the tests that FAIL above also pass for me... No idea
what could be different.

> Are there changes you haven't pushed to github?

No. The current is commit 2132bdf550d12af5c2198027182778a47d5d19ab

> I don't want to be making fixes that already exist. It would also be
> easier to work on this if the current state was in the main repository
> so that the rest of us could push changes.

I will push the changes to SVN once I clean up some parts of the commit
history. I'll try to do this ASAP.

Anyway, I will not rebase the py3k branch so it's safe to work on, and
I'll push any new stuff immediately there.

        Pauli

From millman at berkeley.edu  Mon Feb 15 13:32:38 2010
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 15 Feb 2010 12:32:38 -0600
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <91cf711d1002150804h78d085bfpdf9b455f0581065a@mail.gmail.com>
References: <5b8d13221002131523o5f3297acv6d66a7c7f6baed84@mail.gmail.com> <9457e7c81002140440u7906f274w5cffc96169d9dbb4@mail.gmail.com> <201002151117.05646.faltet@pytables.org> <4B79252E.9010607@silveregg.co.jp> <4B792EF4.5040205@silveregg.co.jp> <3d375d731002150621q5190fdfbva68a26e15e7493ac@mail.gmail.com> <91cf711d1002150804h78d085bfpdf9b455f0581065a@mail.gmail.com>
Message-ID: 

On Mon, Feb 15, 2010 at 10:04 AM, David Huard wrote:
> In the list of things to do, I suggest deleting completely the old
> histogram behaviour and the `new` keyword.

+1

From charlesr.harris at gmail.com  Mon Feb 15 13:44:26 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 15 Feb 2010 11:44:26 -0700
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <1266256506.6419.14.camel@idol>
References: <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol>
Message-ID: 

On Mon, Feb 15, 2010 at 10:55 AM, Pauli Virtanen wrote:

> ma, 2010-02-15 kello 10:23 -0700, Charles R Harris kirjoitti:
> [clip]
> > Segfaults:
> >
> > test_multiarray.TestNewBufferProtocol.test_export_simple_1d ... FAIL
> > test_multiarray.TestNewBufferProtocol.test_export_simple_nd ... ok
> > test_multiarray.TestNewBufferProtocol.test_export_subarray ... FAIL
> > test_multiarray.TestNewBufferProtocol.test_roundtrip ... Segmentation
> > fault
>
> Worksforme, and the tests that FAIL above also pass for me... No idea
> what could be different.
>

Metadata: this is on ubuntu karmic 64 bit with the distro version of
python3.1.

Chuck

> > Are there changes you haven't pushed to github?
>
> No. The current is commit 2132bdf550d12af5c2198027182778a47d5d19ab
>
> > I don't want to be making fixes that already exist. It would also be
> > easier to work on this if the current state was in the main repository
> > so that the rest of us could push changes.
>
> I will push the changes to SVN once I clean up some parts of the commit
> history. I'll try to do this ASAP.
>

Great.

Chuck

From bsouthey at gmail.com  Mon Feb 15 13:49:47 2010
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 15 Feb 2010 12:49:47 -0600
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <1266252918.6419.5.camel@idol>
References: <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol>
Message-ID: 

On Mon, Feb 15, 2010 at 10:55 AM, Pauli Virtanen wrote:
> ma, 2010-02-15 kello 10:18 -0600, Bruce Southey kirjoitti:
> [clip]
>> Is there a correct way to get Python3.1 to find the relative path on Linux?
>> I can change the import statement to work but I do not think that is viable.
>
> You need to use relative imports. 2to3 should be able to take care of
> this.
>
> [clip]
>>    File "/usr/local/lib/python3.1/site-packages/numpy/lib/__init__.py",
>> line 1, in <module>
>>      from info import __doc__
>
> That statement should read
>
>        from .info import __doc__
>
> and indeed, it reads like that for me. Check how it is in
> build/py3k/numpy/lib/__init__.py

Not for me

>
> Most likely you interrupted the build by Ctrl+C and 2to3 did not finish
> the conversion of the files to Python3 format. Try removing the build/
> directory and trying again -- if you interrupt it, 2to3 may not have
> finished running.

Nope.
I'll go through the log and see if anything looks weird.

> Of course, it should be more robust, but at the moment, it isn't
> (patches welcome).
>
Well, that is the whole point of trying this, as I would like to move to
Python 3.

Bruce

From pav at iki.fi  Mon Feb 15 14:03:34 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Feb 2010 21:03:34 +0200
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: 
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com>
Message-ID: <1266260613.6419.19.camel@idol>

su, 2010-02-14 kello 11:51 -0700, Charles R Harris kirjoitti:
[clip]
> There are more - grep -r ob_type numpy/* - how do you want to go about
> fixing these things?

self->ob_type is fine if `self` is a plain PyObject* and not a subclass
pointer.

The issue in changing all of them to Py_TYPE is that there's no Py_TYPE
on Python 2.4, and we define it locally only in core/src/private so it's
a bit icky to include npy_3kcompat.h outside that subtree. Anyway, with
suitable #ifndef's there should be no problem.

[clip]
> The py3k branch doesn't compile:
>
> numpy/core/src/multiarray/buffer.h: At top level:
> numpy/core/src/multiarray/buffer.h:14: error: conflicting types for
> '_descriptor_from_pep3118_format'
> numpy/core/src/multiarray/common.c:220: note: previous implicit
> declaration of '_descriptor_from_pep3118_format' was here

That was fixed in 48f8edfdc8fc24484b2c91d581e00b4024a341ac

        Pauli

From stefan at sun.ac.za  Mon Feb 15 14:07:03 2010
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 15 Feb 2010 21:07:03 +0200
Subject: [Numpy-discussion] Python 3 porting
In-Reply-To: <1266256506.6419.14.camel@idol>
References: <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol>
Message-ID: <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com>

Hi Pauli

Well done! You and Charles have made huge strides since last I looked
at the problem. After your latest changes, numpy builds on OSX,
although importing is still broken:

from . import multiarray
ImportError: dlopen(/Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so, 2): Symbol not found: __numpymemoryview_init
  Referenced from: /Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so
  Expected in: flat namespace
 in /Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so

Have you seen this before? Looks like something isn't linked properly,
but I'm not sure where memoryview would be defined. Is this part of
the new PEP implementation?

Regards
Stéfan

On 15 February 2010 19:55, Pauli Virtanen wrote:
> ma, 2010-02-15 kello 10:23 -0700, Charles R Harris kirjoitti:
> [clip]
>> Segfaults:
>>
>> test_multiarray.TestNewBufferProtocol.test_export_simple_1d ... FAIL
>> test_multiarray.TestNewBufferProtocol.test_export_simple_nd ... ok
>> test_multiarray.TestNewBufferProtocol.test_export_subarray ... FAIL
>> test_multiarray.TestNewBufferProtocol.test_roundtrip ... Segmentation
>> fault
>
> Worksforme, and the tests that FAIL above also pass for me... No idea
> what could be different.
>
>> Are there changes you haven't pushed to github?
>
> No. The current is commit 2132bdf550d12af5c2198027182778a47d5d19ab
>
>> I don't want to be making fixes that already exist. It would also be
>> easier to work on this if the current state was in the main repository
>> so that the rest of us could push changes.
>
> I will push the changes to SVN once I clean up some parts of the commit
> history. I'll try to do this ASAP.
>
> Anyway, I will not rebase the py3k branch so it's safe to work on, and
> I'll push any new stuff immediately there.
>
>        Pauli
>

From charlesr.harris at gmail.com  Mon Feb 15 14:08:41 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 15 Feb 2010 12:08:41 -0700
Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: <1266260613.6419.19.camel@idol>
References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <1266260613.6419.19.camel@idol>
Message-ID: 

On Mon, Feb 15, 2010 at 12:03 PM, Pauli Virtanen wrote:

> su, 2010-02-14 kello 11:51 -0700, Charles R Harris kirjoitti:
> [clip]
> > There are more - grep -r ob_type numpy/* - how do you want to go about
> > fixing these things?
>
> self->ob_type is fine if `self` is a plain PyObject* and not a subclass
> pointer.
>
> The issue in changing all of them to Py_TYPE is that there's no Py_TYPE
> on Python 2.4, and we define it locally only in core/src/private so it's
> a bit icky to include npy_3kcompat.h outside that subtree.
>

I was wondering about that. Why do we have a private include directory?
Would it make more sense to move it to core/include/numpy/private.

Chuck
URL: From pav at iki.fi Mon Feb 15 14:19:54 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Feb 2010 21:19:54 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> References: <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> Message-ID: <1266261593.6419.28.camel@idol> ma, 2010-02-15 kello 21:07 +0200, St?fan van der Walt kirjoitti: [clip] > After your latest changes, numpy builds on OSX, although importing is > still broken: > > from . import multiarray > ImportError: dlopen(/Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so, > 2): Symbol not found: __numpymemoryview_init > Referenced from: > /Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so > Expected in: flat namespace > in /Users/stefan/lib/python3.1/site-packages/numpy/core/multiarray.so Oh crap, nothing seems to work for anyone else ;) > Have you seen this before? Looks like something isn't linked > properly, but I'm not sure where memoryview would be defined. Is this > part of the new PEP implementation? Yep, it's a part of that, but it's only necessary on Python 2.6. The Memoryview object is a part of Python proper starting from Python 2.7. It was a huge convenience for the implementation to be able to keep track of buffers via refcounting, so I backported that bit. On Python 3.1, the numpymemoryview_init is a stub function that does nothing. I guess this is another single-file compilation issue -- the new file should be included in multiarraymodule_onefile.c. Should be fixed now. 
Pauli From stefan at sun.ac.za Mon Feb 15 14:41:56 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 15 Feb 2010 21:41:56 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266261593.6419.28.camel@idol> References: <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> Message-ID: <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> On 15 February 2010 21:19, Pauli Virtanen wrote: > Oh crap, nothing seems to work for anyone else ;) Don't speak too soon: we have import! > On Python 3.1, the numpymemoryview_init is a stub function that does > nothing. I guess this is another single-file compilation issue -- the > new file should be included in multiarraymodule_onefile.c. Should be > fixed now. Thanks, it works now. I wonder if 2to3 is doing its job, though. I had to make the attached changes before I could import. St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: py3k_imports.patch Type: application/octet-stream Size: 6621 bytes Desc: not available URL: From stefan at sun.ac.za Mon Feb 15 14:48:14 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 15 Feb 2010 21:48:14 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> References: <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> Message-ID: <9457e7c81002151148n536663d3r6e26dda48c0286e4@mail.gmail.com> 2010/2/15 St?fan van der Walt : > Thanks, it works now. 
Progress: the unit test suite starts to run, but fails soon after. .................Fatal Python error: Inconsistent interned string state. Program received signal SIGABRT, Aborted. 0x00007fff84e1efe6 in __kill () (gdb) bt #0 0x00007fff84e1efe6 in __kill () #1 0x00007fff84ebfe32 in abort () #2 0x00000001000cd715 in Py_FatalError () #3 0x0000000100069508 in unicode_dealloc () #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at arraytypes.c.src:1501 #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, mp=0x101ce5de8) at convert_datatype.c:336 #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, at=, fortran_=0) at convert_datatype.c:73 #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args=) at methods.c:760 Cheers St?fan From pav at iki.fi Mon Feb 15 14:58:59 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Feb 2010 21:58:59 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> References: <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> Message-ID: <1266263938.11444.4.camel@idol> ma, 2010-02-15 kello 21:41 +0200, St?fan van der Walt kirjoitti: [clip] > > On Python 3.1, the numpymemoryview_init is a stub function that does > > nothing. I guess this is another single-file compilation issue -- the > > new file should be included in multiarraymodule_onefile.c. Should be > > fixed now. > > Thanks, it works now. > > I wonder if 2to3 is doing its job, though. I had to make the attached > changes before I could import. This seems to be the same problem Bruce had. Doesn't seem like 2to3 is doing what it's supposed to do. 
In fact, the 2to3 shipped with Python 2.6 does those changes by itself, but the one with Python 3.1.1 does not. Obviously, I was all the time using Python 2.6 shipped 2to3. I'll try to find a workaround. > Progress: the unit test suite starts to run, but fails soon after. > > .................Fatal Python error: Inconsistent interned string > state. > > Program received signal SIGABRT, Aborted. > 0x00007fff84e1efe6 in __kill () > (gdb) bt > #0 0x00007fff84e1efe6 in __kill () > #1 0x00007fff84ebfe32 in abort () > #2 0x00000001000cd715 in Py_FatalError () > #3 0x0000000100069508 in unicode_dealloc () > #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", > op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at > arraytypes.c.src:1501 > #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, > mp=0x101ce5de8) at convert_datatype.c:336 > #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, > at=, fortran_=0) > at convert_datatype.c:73 > #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args= temporarily unavailable, due to optimizations>) at methods.c:760 The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or wide? Which test triggers the issue? 
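As an aside on the narrow/wide question raised here: one way to check from inside the interpreter, rather than grepping pyconfig.h, is `sys.maxunicode`. A minimal sketch follows; note that the distinction only exists on the interpreters discussed in this thread, since from Python 3.3 onward every build behaves as wide (PEP 393):

```python
import sys

def unicode_build_width():
    # Narrow (UCS2) builds report sys.maxunicode == 0xFFFF;
    # wide (UCS4) builds report 0x10FFFF. Python >= 3.3 is
    # always effectively wide, so modern builds print "wide".
    if sys.maxunicode == 0xFFFF:
        return "narrow (UCS2)"
    return "wide (UCS4)"

print(unicode_build_width())
```

This gives the same answer as the `Py_UNICODE_SIZE` define in pyconfig.h (2 for narrow, 4 for wide) without needing the build headers.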
Thanks, Pauli From charlesr.harris at gmail.com Mon Feb 15 15:41:44 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 13:41:44 -0700 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266263938.11444.4.camel@idol> References: <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> Message-ID: On Mon, Feb 15, 2010 at 12:58 PM, Pauli Virtanen wrote: > ma, 2010-02-15 kello 21:41 +0200, St?fan van der Walt kirjoitti: > [clip] > > > On Python 3.1, the numpymemoryview_init is a stub function that does > > > nothing. I guess this is another single-file compilation issue -- the > > > new file should be included in multiarraymodule_onefile.c. Should be > > > fixed now. > > > > Thanks, it works now. > > > > I wonder if 2to3 is doing its job, though. I had to make the attached > > changes before I could import. > > This seems to be the same problem Bruce had. > > Doesn't seem like 2to3 is doing what it's supposed to do. In fact, the > 2to3 shipped with Python 2.6 does those changes by itself, but the one > with Python 3.1.1 does not. Obviously, I was all the time using Python > 2.6 shipped 2to3. I'll try to find a workaround. > > > > Progress: the unit test suite starts to run, but fails soon after. > > > > .................Fatal Python error: Inconsistent interned string > > state. > > > > Program received signal SIGABRT, Aborted. 
> > 0x00007fff84e1efe6 in __kill () > > (gdb) bt > > #0 0x00007fff84e1efe6 in __kill () > > #1 0x00007fff84ebfe32 in abort () > > #2 0x00000001000cd715 in Py_FatalError () > > #3 0x0000000100069508 in unicode_dealloc () > > #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", > > op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at > > arraytypes.c.src:1501 > > #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, > > mp=0x101ce5de8) at convert_datatype.c:336 > > #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, > > at=, fortran_=0) > > at convert_datatype.c:73 > > #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args= > temporarily unavailable, due to optimizations>) at methods.c:760 > > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or > wide? Which test triggers the issue? > > Is there an easy way to discover what the unicode size is? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Mon Feb 15 15:44:17 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 15 Feb 2010 22:44:17 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266263938.11444.4.camel@idol> References: <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> Message-ID: <9457e7c81002151244u19d63b17lc23a93bf8818410b@mail.gmail.com> On 15 February 2010 21:58, Pauli Virtanen wrote: > ma, 2010-02-15 kello 21:41 +0200, St?fan van der Walt kirjoitti: > [clip] >> .................Fatal Python error: Inconsistent interned string >> state. >> > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or > wide? Which test triggers the issue? 
OSX, 64-bit with (I'm assuming) UCS2 (since I didn't specify the UCS4 variant when building). Is there an easy way to check the unicode width? The test that failed was "test_from_unicode_array (test_defchararray.TestBasic)". Cheers St?fan From charlesr.harris at gmail.com Mon Feb 15 16:11:45 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 14:11:45 -0700 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <9457e7c81002151244u19d63b17lc23a93bf8818410b@mail.gmail.com> References: <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> <9457e7c81002151244u19d63b17lc23a93bf8818410b@mail.gmail.com> Message-ID: 2010/2/15 St?fan van der Walt > On 15 February 2010 21:58, Pauli Virtanen wrote: > > ma, 2010-02-15 kello 21:41 +0200, St?fan van der Walt kirjoitti: > > [clip] > >> .................Fatal Python error: Inconsistent interned string > >> state. > >> > > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or > > wide? Which test triggers the issue? > > OSX, 64-bit with (I'm assuming) UCS2 (since I didn't specify the UCS4 > variant when building). Is there an easy way to check the unicode > width? > > I found it in the pyconfig.h file: pyconfig.h:#define Py_UNICODE_SIZE 4' Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Mon Feb 15 16:21:27 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 15 Feb 2010 23:21:27 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: References: <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> <9457e7c81002151244u19d63b17lc23a93bf8818410b@mail.gmail.com> Message-ID: <9457e7c81002151321o226b2a7dvaf0edd9b6204aac6@mail.gmail.com> On 15 February 2010 23:11, Charles R Harris wrote: >> OSX, 64-bit with (I'm assuming) UCS2 (since I didn't specify the UCS4 >> variant when building). ?Is there an easy way to check the unicode >> width? >> > I found it in the pyconfig.h file: > > pyconfig.h:#define Py_UNICODE_SIZE 4' Aha! #define Py_UNICODE_SIZE 2 St?fan From cournape at gmail.com Mon Feb 15 16:46:47 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 16 Feb 2010 06:46:47 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <1266260613.6419.19.camel@idol> Message-ID: <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris wrote: > > I was wondering about that. Why do we have a private include directory? > Would it make more sense to move it to core/include/numpy/private. No, the whole point is to avoid other packages to include that by mistake, to avoid namespace pollution. David From charlesr.harris at gmail.com Mon Feb 15 17:04:33 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 15:04:33 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? 
In-Reply-To: <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> References: <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> Message-ID: On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau wrote: > On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > wrote: > > > > > I was wondering about that. Why do we have a private include directory? > > Would it make more sense to move it to core/include/numpy/private. > > No, the whole point is to avoid other packages to include that by > mistake, to avoid namespace pollution. > Isn't that what the npy prefix is for? In any case, if it needs to be at a higher level for easy inclusion, then it should move up. Or else all the c code should move down. Having a mix of obj_ptr->ob_type and Py_TYPE(obj_ptr) is just asking for trouble. Mindless consistency is the safest policy in such things. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Feb 15 17:05:03 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 16 Feb 2010 00:05:03 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266263938.11444.4.camel@idol> References: <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> Message-ID: <1266271502.11444.58.camel@idol> ma, 2010-02-15 kello 21:58 +0200, Pauli Virtanen kirjoitti: [clip] > > Program received signal SIGABRT, Aborted. 
> > 0x00007fff84e1efe6 in __kill () > > (gdb) bt > > #0 0x00007fff84e1efe6 in __kill () > > #1 0x00007fff84ebfe32 in abort () > > #2 0x00000001000cd715 in Py_FatalError () > > #3 0x0000000100069508 in unicode_dealloc () > > #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", > > op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at > > arraytypes.c.src:1501 > > #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, > > mp=0x101ce5de8) at convert_datatype.c:336 > > #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, > > at=, fortran_=0) > > at convert_datatype.c:73 > > #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args= > temporarily unavailable, due to optimizations>) at methods.c:760 > > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or > wide? Which test triggers the issue? Ok, I think I managed to nail that and all other remaining 64-bit specific issues. Also Python 3.1.1's 2to3 should now work. Cheers, Pauli From charlesr.harris at gmail.com Mon Feb 15 17:19:21 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 15:19:21 -0700 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266271502.11444.58.camel@idol> References: <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> <1266271502.11444.58.camel@idol> Message-ID: On Mon, Feb 15, 2010 at 3:05 PM, Pauli Virtanen wrote: > ma, 2010-02-15 kello 21:58 +0200, Pauli Virtanen kirjoitti: > [clip] > > > Program received signal SIGABRT, Aborted. 
> > > 0x00007fff84e1efe6 in __kill () > > > (gdb) bt > > > #0 0x00007fff84e1efe6 in __kill () > > > #1 0x00007fff84ebfe32 in abort () > > > #2 0x00000001000cd715 in Py_FatalError () > > > #3 0x0000000100069508 in unicode_dealloc () > > > #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", > > > op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at > > > arraytypes.c.src:1501 > > > #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, > > > mp=0x101ce5de8) at convert_datatype.c:336 > > > #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, > > > at=, fortran_=0) > > > at convert_datatype.c:73 > > > #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args= > > temporarily unavailable, due to optimizations>) at methods.c:760 > > > > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or > > wide? Which test triggers the issue? > > Ok, I think I managed to nail that and all other remaining 64-bit > specific issues. Also Python 3.1.1's 2to3 should now work. > > Much better: FAILED (KNOWNFAIL=4, SKIP=4, errors=35, failures=51) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Feb 15 17:34:24 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 16 Feb 2010 07:34:24 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <4B771482.8080306@gmail.com> <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> Message-ID: <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris wrote: > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > wrote: >> >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris >> wrote: >> >> > >> > I was wondering about that. Why do we have a private include directory? >> > Would it make more sense to move it to core/include/numpy/private. 
>> >> No, the whole point is to avoid other packages to include that by >> mistake, to avoid namespace pollution. > > Isn't that what the npy prefix is for? No, npy_ is for public symbols. Anything in private should be private :) > In any case, if it needs to be at a > higher level for easy inclusion, then it should move up. It is not that easy - we should avoid putting this code into core/include, because then we have to keep it compatible across releases, but there is no easy way to share headers between modules without making it public. That's one of the numerous issues of having numpy/scipy organized as a multiple set of independent packages (even though they are not independent). > Or else all the c > code should move down. Having a mix of obj_ptr->ob_type and Py_TYPE(obj_ptr) > is just asking for trouble. Mindless consistency is the safest policy in > such things. Maybe npy_3kcompat.h could be copied across the modules which need it. I will look into distutils to check whether there is an easy way to add an include path from one package to the other, but since Pauli said it was complicated, I am not that hopeful. David From charlesr.harris at gmail.com Mon Feb 15 17:51:09 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 15:51:09 -0700 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: References: <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> <1266271502.11444.58.camel@idol> Message-ID: On Mon, Feb 15, 2010 at 3:19 PM, Charles R Harris wrote: > > > On Mon, Feb 15, 2010 at 3:05 PM, Pauli Virtanen wrote: > >> ma, 2010-02-15 kello 21:58 +0200, Pauli Virtanen kirjoitti: >> [clip] >> > > Program received signal SIGABRT, Aborted. 
>> > > 0x00007fff84e1efe6 in __kill () >> > > (gdb) bt >> > > #0 0x00007fff84e1efe6 in __kill () >> > > #1 0x00007fff84ebfe32 in abort () >> > > #2 0x00000001000cd715 in Py_FatalError () >> > > #3 0x0000000100069508 in unicode_dealloc () >> > > #4 0x000000010065cd1f in UNICODE_to_STRING (ip=0x101f06fd0 "a", >> > > op=0x101f01470 "abc", n=4, aip=0x101ce5de8, aop=0x101901a28) at >> > > arraytypes.c.src:1501 >> > > #5 0x00000001006771ca in PyArray_CastTo (out=0x101901a28, >> > > mp=0x101ce5de8) at convert_datatype.c:336 >> > > #6 0x00000001006772e1 in PyArray_CastToType (mp=0x101ce5de8, >> > > at=, fortran_=0) >> > > at convert_datatype.c:73 >> > > #7 0x0000000100681569 in array_cast (self=0x101ce5de8, args=> > > temporarily unavailable, due to optimizations>) at methods.c:760 >> > >> > The platform is OSX -- 32 or 64 bits? Is your Python unicode narrow or >> > wide? Which test triggers the issue? >> >> Ok, I think I managed to nail that and all other remaining 64-bit >> specific issues. Also Python 3.1.1's 2to3 should now work. >> >> > Much better: > > FAILED (KNOWNFAIL=4, SKIP=4, errors=35, failures=51) > > A lot of the remaining failures are of this sort: x: array([b'pi', b'pi', b'pi', b'four', b'five'], dtype='|S8') y: array(['pi', 'pi', 'pi', 'four', 'five'], dtype='>> np.array([b'pi']) array([b'pi'], dtype='|S2') >>> np.array(['pi']) array(['pi'], dtype='>> np.array(['pi'], dtype='|S2') array([b'pi'], dtype='|S2') I expect we will break a lot of code if b'pi' can't somehow be made the default. Hmm. The 'b' prefix is an undocumented feature of python 2.6 but doesn't work for earlier versions. But these tests can be fixed by being a bit more explicit about the type. More problematic are failing doctests, mostly because of unconverted print statements. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Mon Feb 15 17:58:19 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 15:58:19 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> References: <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> Message-ID: On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau wrote: > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris > wrote: > > > > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > > wrote: > >> > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > >> wrote: > >> > >> > > >> > I was wondering about that. Why do we have a private include > directory? > >> > Would it make more sense to move it to core/include/numpy/private. > >> > >> No, the whole point is to avoid other packages to include that by > >> mistake, to avoid namespace pollution. > > > > Isn't that what the npy prefix is for? > > No, npy_ is for public symbols. Anything in private should be private :) > > > In any case, if it needs to be at a > > higher level for easy inclusion, then it should move up. > > It is not that easy - we should avoid putting this code into > core/include, because then we have to keep it compatible across > releases, but there is no easy way to share headers between modules > without making it public. > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility problems across releases. > That's one of the numerous issues of having numpy/scipy organized as a > multiple set of independent packages (even though they are not > independent). > > > Or else all the c > > code should move down. Having a mix of obj_ptr->ob_type and > Py_TYPE(obj_ptr) > > is just asking for trouble. Mindless consistency is the safest policy in > > such things. > > Maybe npy_3kcompat.h could be copied across the modules which need it. 
> I will look into distutils to check whether there is an easy way to > add an include path from one package to the other, but since Pauli > said it was complicated, I am not that hopeful. > > Note that some of the macros in the public ndarrayobject.h use ob_type ndarrayobject.h:#define PyArray_DescrCheck(op) (((PyObject*)(op))->ob_type== PyArrayDescr_Type) ndarrayobject.h:#define PyArray_CheckExact(op) (((PyObject*)(op))->ob_type == &PyArray_Type) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Feb 15 18:16:05 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 16 Feb 2010 01:16:05 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: References: <1266252918.6419.5.camel@idol> <1266256506.6419.14.camel@idol> <9457e7c81002151107i7f8cf77dr81177643563067ce@mail.gmail.com> <1266261593.6419.28.camel@idol> <9457e7c81002151141o1aedcc62xc5f5ef10b54239fd@mail.gmail.com> <1266263938.11444.4.camel@idol> <1266271502.11444.58.camel@idol> Message-ID: <1266275764.11444.65.camel@idol> ma, 2010-02-15 kello 15:51 -0700, Charles R Harris kirjoitti: [clip] > A lot of the remaining failures are of this sort: > > x: array([b'pi', b'pi', b'pi', b'four', b'five'], > dtype='|S8') > y: array(['pi', 'pi', 'pi', 'four', 'five'], > dtype=' > > This looks fixable by specifying the dtype Specifying the dtype in the test changes the meaning of the test. Rather, the expected results should be made bytes on Py3. This is what I've done so far. There are asbytes() and asbytes_nested() macros available in numpy.compat that can be used to portably get bytes literals. > >>> np.array([b'pi']) > array([b'pi'], > dtype='|S2') > >>> np.array(['pi']) > array(['pi'], > dtype=' >>> np.array(['pi'], dtype='|S2') > array([b'pi'], > dtype='|S2') > > I expect we will break a lot of code if b'pi' can't somehow be made > the default. I don't think we should make the unicode str map to bytes_ dtype, it's just too magical. 
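For readers following along, the behaviour of an `asbytes()`-style helper can be sketched in a few lines of pure Python. This is an illustration of the idea, not the actual `numpy.compat` source:

```python
def asbytes(s):
    # Pass bytes through untouched; encode str as latin1 so the
    # result is byte-for-byte what a Python 2 str literal would be.
    if isinstance(s, bytes):
        return s
    return s.encode('latin1')

def asbytes_nested(x):
    # Recursively convert lists/tuples of strings, mirroring the
    # nested-literal case that shows up in test expectations.
    if isinstance(x, (list, tuple)):
        return [asbytes_nested(item) for item in x]
    return asbytes(x)

print(asbytes_nested(['pi', 'four', 'five']))  # [b'pi', b'four', b'five']
```

Writing test expectations through a helper like this lets the same literal produce `str` on Python 2 and `bytes` on Python 3, which is the portability point being made above.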
Any Python code being ported to Py3 will anyway need to go the str vs. bytes transition so there will be breakage in any case. > Hmm. The 'b' prefix is an undocumented feature of python 2.6 but > doesn't work for earlier versions. But these tests can be fixed by > being a bit more explicit about the type. I think the doctests can be partly fixed by using asstr() from numpy.compat. Probably not completely, though -- I've seen some complaints that doctests are a lot of work to convert to Py3. Pauli From charlesr.harris at gmail.com Mon Feb 15 19:09:47 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 17:09:47 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> Message-ID: On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris wrote: > > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau wrote: > >> On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris >> wrote: >> > >> > >> > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau >> > wrote: >> >> >> >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris >> >> wrote: >> >> >> >> > >> >> > I was wondering about that. Why do we have a private include >> directory? >> >> > Would it make more sense to move it to core/include/numpy/private. >> >> >> >> No, the whole point is to avoid other packages to include that by >> >> mistake, to avoid namespace pollution. >> > >> > Isn't that what the npy prefix is for? >> >> No, npy_ is for public symbols. Anything in private should be private :) >> >> > In any case, if it needs to be at a >> > higher level for easy inclusion, then it should move up. 
>> >> It is not that easy - we should avoid putting this code into >> core/include, because then we have to keep it compatible across >> releases, but there is no easy way to share headers between modules >> without making it public. >> >> > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility problems across > releases. > > In particular, I think #if (PY_VERSION_HEX < 0x02060000) #define Py_TYPE(o) (((PyObject*)(o))->ob_type) #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) #endif belongs somewhere near the top, maybe with a prefix (cython seems to define them also) Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dlc at halibut.com Mon Feb 15 20:51:44 2010 From: dlc at halibut.com (David Carmean) Date: Mon, 15 Feb 2010 17:51:44 -0800 Subject: [Numpy-discussion] Why does np.nan{min, max} clobber my array mask? In-Reply-To: <5AFAA9DD-E54C-4EF4-B0FC-A4B62AA6401C@gmail.com>; from pgmdevlist@gmail.com on Sun, Feb 14, 2010 at 03:22:04PM -0500 References: <20100213190410.I26855@halibut.com> <5AFAA9DD-E54C-4EF4-B0FC-A4B62AA6401C@gmail.com> Message-ID: <20100215175144.J26855@halibut.com> On Sun, Feb 14, 2010 at 03:22:04PM -0500, Pierre GM wrote: > > I'm sorry, I can't follow you. Can you post a simpler self-contained example I can play with ? > Why using np.nanmin/max ? These functions are designed for ndarrays, to avoid using a masked array: can't you just use min/max on the masked array ? I was using np.nanmin/max because I did not yet understand how masked arrays worked; perhaps the docs for those methods need a note indicating that "If you can take the (small?) memory hit, use Masked Arrays instead". Now that I know different... I'm going to drop it unless you really want to dig into it.
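The masked-array route Pierre suggests in this thread can be sketched as follows (illustrative; the point is that the masked array's own reductions leave the mask intact, unlike the np.nanmin behavior reported here):

```python
import numpy as np

a = np.array([[2.0, 1.0, 3.0, np.nan],
              [5.0, 2.0, 3.0, np.nan]])
m = np.ma.masked_invalid(a)

# Row-wise minima computed with the masked array's own method.
print(m.min(axis=1))   # [1.0 2.0]

# The NaN entries are still masked afterwards.
print(m.mask[:, 3])    # [ True  True]
```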
From pgmdevlist at gmail.com Mon Feb 15 21:35:05 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 15 Feb 2010 21:35:05 -0500 Subject: [Numpy-discussion] Why does np.nan{min, max} clobber my array mask? In-Reply-To: <20100215175144.J26855@halibut.com> References: <20100213190410.I26855@halibut.com> <5AFAA9DD-E54C-4EF4-B0FC-A4B62AA6401C@gmail.com> <20100215175144.J26855@halibut.com> Message-ID: <5A82E5A3-8D6C-4DC9-B890-80BF59CC61D7@gmail.com> On Feb 15, 2010, at 8:51 PM, David Carmean wrote: > On Sun, Feb 14, 2010 at 03:22:04PM -0500, Pierre GM wrote: > >> >> I'm sorry, I can't follow you. Can you post a simpler self-contained example I can play with ? >> Why using np.nanmin/max ? These functions are designed for ndarrays, to avoid using a masked array: can't you just use min/max on the masked array ? > > I was using np.nanmin/max because I did not yet understand how masked arrays worked; perhaps the > docs for those methods need a note indicating that "If you can take the (small?) memory hit, > use Masked Arrays instead". Now that I know different... I'm going to drop it unless you > reall want to dig into it. I'm curious. Can you post an excerpt of your array, so that I can check what goes wrong? From ramercer at gmail.com Mon Feb 15 22:00:01 2010 From: ramercer at gmail.com (Adam Mercer) Date: Mon, 15 Feb 2010 21:00:01 -0600 Subject: [Numpy-discussion] numpy-1.4.0 no longer available for download? Message-ID: <799406d61002151900h74369c03od5b32adf35e87718@mail.gmail.com> Hi According to the NumPy download page the latest available version is 1.3.0, what happened to 1.4.0? Apologies if I've missed some announcement. Cheers Adam From david at silveregg.co.jp Mon Feb 15 22:13:33 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 16 Feb 2010 12:13:33 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? 
In-Reply-To: References: <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> Message-ID: <4B7A0D5D.2070100@silveregg.co.jp> Charles R Harris wrote: > > > On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris > > wrote: > > > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau > > wrote: > > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris > > > wrote: > > > > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > > > > wrote: > >> > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > >> > wrote: > >> > >> > > >> > I was wondering about that. Why do we have a private > include directory? > >> > Would it make more sense to move it to > core/include/numpy/private. > >> > >> No, the whole point is to avoid other packages to include > that by > >> mistake, to avoid namespace pollution. > > > > Isn't that what the npy prefix is for? > > No, npy_ is for public symbols. Anything in private should be > private :) > > > In any case, if it needs to be at a > > higher level for easy inclusion, then it should move up. > > It is not that easy - we should avoid putting this code into > core/include, because then we have to keep it compatible across > releases, but there is no easy way to share headers between modules > without making it public. > > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility problems > across releases. > > > > In particular, I think > > #if (PY_VERSION_HEX < 0x02060000) > #define Py_TYPE(o) (((PyObject*)(o))->ob_type) > #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) > #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) > #endif > > belongs somewhere near the top, maybe with a prefix (cython seems to > define them also) The rule is easy: one should put in core/include/numpy whatever is public, and put in private what is not. 
Note that defining those macros above publicly is very likely to cause trouble because I am sure other people do define those macros, without caring about polluting the namespace as well. Given that it is temporary, and is small, I think copying the compat header is better than making it public, the best solution being to add something in distutils to share it between submodules, cheers, David From bsouthey at gmail.com Mon Feb 15 22:24:21 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 15 Feb 2010 21:24:21 -0600 Subject: [Numpy-discussion] Why does np.nan{min, max} clobber my array mask? In-Reply-To: <5A82E5A3-8D6C-4DC9-B890-80BF59CC61D7@gmail.com> References: <20100213190410.I26855@halibut.com> <5AFAA9DD-E54C-4EF4-B0FC-A4B62AA6401C@gmail.com> <20100215175144.J26855@halibut.com> <5A82E5A3-8D6C-4DC9-B890-80BF59CC61D7@gmail.com> Message-ID: On Mon, Feb 15, 2010 at 8:35 PM, Pierre GM wrote: > On Feb 15, 2010, at 8:51 PM, David Carmean wrote: >> On Sun, Feb 14, 2010 at 03:22:04PM -0500, Pierre GM wrote: >> >>> >>> I'm sorry, I can't follow you. Can you post a simpler self-contained example I can play with ? >>> Why using np.nanmin/max ? These functions are designed for ndarrays, to avoid using a masked array: can't you just use min/max on the masked array ? >> >> I was using np.nanmin/max because I did not yet understand how masked arrays worked; perhaps the >> docs for those methods need a note indicating that "If you can take the (small?) memory hit, >> use Masked Arrays instead". Now that I know different... I'm going to drop it unless you >> really want to dig into it. > > > I'm curious. Can you post an excerpt of your array, so that I can check what goes wrong? > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Hi, David, please file a bug report. I think it occurs with np.nansum, np.nanmin and np.nanmax.
Perhaps some thing with the C99 changes as I think it exists with numpy 1.3. I think this code shows the problem with Linux and recent numpy svn: import numpy as np uut = np.array([[2, 1, 3, np.nan], [5, 2, 3, np.nan]]) msk = np.ma.masked_invalid(uut) msk np.nanmin(msk, axis=1) msk $ python Python 2.6 (r26:66714, Nov 3 2009, 17:33:18) [GCC 4.4.1 20090725 (Red Hat 4.4.1-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> uut = np.array([[2, 1, 3, np.nan], [5, 2, 3, np.nan]]) >>> msk = np.ma.masked_invalid(uut) >>> msk masked_array(data = [[2.0 1.0 3.0 --] [5.0 2.0 3.0 --]], mask = [[False False False True] [False False False True]], fill_value = 1e+20) >>> np.nanmin(msk, axis=1) masked_array(data = [1.0 2.0], mask = [False False], fill_value = 1e+20) >>> msk masked_array(data = [[2.0 1.0 3.0 nan] [5.0 2.0 3.0 nan]], mask = [[False False False False] [False False False False]], fill_value = 1e+20) Bruce From charlesr.harris at gmail.com Mon Feb 15 22:36:41 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 20:36:41 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <4B7A0D5D.2070100@silveregg.co.jp> References: <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> Message-ID: On Mon, Feb 15, 2010 at 8:13 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris > > > wrote: > > > > > > > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau > > > wrote: > > > > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris > > > > > wrote: > > > > > > > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > > > > > > wrote: > > >> > > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > > >> > > wrote: > > >> > > >> > > > >> > I was wondering about that. 
Why do we have a private > > include directory? > > >> > Would it make more sense to move it to > > core/include/numpy/private. > > >> > > >> No, the whole point is to avoid other packages to include > > that by > > >> mistake, to avoid namespace pollution. > > > > > > Isn't that what the npy prefix is for? > > > > No, npy_ is for public symbols. Anything in private should be > > private :) > > > > > In any case, if it needs to be at a > > > higher level for easy inclusion, then it should move up. > > > > It is not that easy - we should avoid putting this code into > > core/include, because then we have to keep it compatible across > > releases, but there is no easy way to share headers between > modules > > without making it public. > > > > > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility problems > > across releases. > > > > > > > > In particular, I think > > > > #if (PY_VERSION_HEX < 0x02060000) > > #define Py_TYPE(o) (((PyObject*)(o))->ob_type) > > #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) > > #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) > > #endif > > > > belongs somewhere near the top, maybe with a prefix (cython seems to > > define them also) > > The rule is easy: one should put in core/include/numpy whatever is > public, and put in private what is not. > > Note that defining those macros above publicly is very likely to cause > trouble because I am sure other people do define those macros, without > caring about polluting the namespace as well. Given that it is > temporary, and is small, I think copying the compat header is better > than making it public, the best solution being to add something in > distutils to share it between submodules, > > You would prefer to fix the macros in ndarrayobject.h using #ifdef's then? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ramercer at gmail.com Mon Feb 15 23:26:02 2010 From: ramercer at gmail.com (Adam Mercer) Date: Mon, 15 Feb 2010 22:26:02 -0600 Subject: [Numpy-discussion] numpy-1.4.0 no longer available for download? In-Reply-To: <799406d61002151900h74369c03od5b32adf35e87718@mail.gmail.com> References: <799406d61002151900h74369c03od5b32adf35e87718@mail.gmail.com> Message-ID: <799406d61002152026gd276cdeiea2eb017392945f0@mail.gmail.com> Ahhh I see this is due to the ABI change, sorry for the noise. Cheers Adam On Mon, Feb 15, 2010 at 21:00, Adam Mercer wrote: > Hi > > According to the NumPy download page > the latest available > version is 1.3.0, what happened to 1.4.0? Apologies if I've missed > some announcement. > > Cheers > > Adam > From david at silveregg.co.jp Tue Feb 16 00:19:23 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 16 Feb 2010 14:19:23 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <1266260613.6419.19.camel@idol> <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> Message-ID: <4B7A2ADB.2000706@silveregg.co.jp> Charles R Harris wrote: > > > On Mon, Feb 15, 2010 at 8:13 PM, David Cournapeau > wrote: > > Charles R Harris wrote: > > > > > > On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris > > > >> wrote: > > > > > > > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau > > > >> wrote: > > > > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris > > >> > > wrote: > > > > > > > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > > > >> > > > wrote: > > >> > > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > > >> > > >> wrote: > > >> > > >> > > > >> > I was wondering about that. Why do we have a private > > include directory? > > >> > Would it make more sense to move it to > > core/include/numpy/private. 
> > >> > > >> No, the whole point is to avoid other packages to include > > that by > > >> mistake, to avoid namespace pollution. > > > > > > Isn't that what the npy prefix is for? > > > > No, npy_ is for public symbols. Anything in private should be > > private :) > > > > > In any case, if it needs to be at a > > > higher level for easy inclusion, then it should move up. > > > > It is not that easy - we should avoid putting this code into > > core/include, because then we have to keep it compatible > across > > releases, but there is no easy way to share headers > between modules > > without making it public. > > > > > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility > problems > > across releases. > > > > > > > > In particular, I think > > > > #if (PY_VERSION_HEX < 0x02060000) > > #define Py_TYPE(o) (((PyObject*)(o))->ob_type) > > #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) > > #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) > > #endif > > > > belongs somewhere near the top, maybe with a prefix (cython seems to > > define them also) > > The rule is easy: one should put in core/include/numpy whatever is > public, and put in private what is not. > > Note that defining those macros above publicly is very likely to cause > trouble because I am sure other people do define those macros, without > caring about polluting the namespace as well. Given that it is > temporary, and is small, I think copying the compat header is better > than making it public, the best solution being to add something in > distutils to share it between submodules, > > > You would prefer to fix the macros in ndarrayobject.h using #ifdef's then? In case what I am worried about is not clear: if ndarrayobject.h defines Py_TYPE, it means that every C extensions using the numpy C API will have Py_TYPE in the public namespace. Now, if another python extension with a C API does the same, you have issues. 
Having #ifdef/#endif around only make it worse because then you have strange interactions depending on the order of header inclusion (I really hate that behavior from the python headers). The numpy C headers are already pretty messy, let's not make it worse. Especially since the workaround is trivial. cheers, David From charlesr.harris at gmail.com Tue Feb 16 00:35:27 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 22:35:27 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: <4B7A2ADB.2000706@silveregg.co.jp> References: <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> <4B7A2ADB.2000706@silveregg.co.jp> Message-ID: On Mon, Feb 15, 2010 at 10:19 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Mon, Feb 15, 2010 at 8:13 PM, David Cournapeau > > wrote: > > > > Charles R Harris wrote: > > > > > > > > > On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris > > > > > > >> wrote: > > > > > > > > > > > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau > > > > > >> wrote: > > > > > > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris > > > > > >> > > > wrote: > > > > > > > > > > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau > > > > > >> > > > > wrote: > > > >> > > > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris > > > >> > > > > > >> wrote: > > > >> > > > >> > > > > >> > I was wondering about that. Why do we have a private > > > include directory? > > > >> > Would it make more sense to move it to > > > core/include/numpy/private. > > > >> > > > >> No, the whole point is to avoid other packages to > include > > > that by > > > >> mistake, to avoid namespace pollution. > > > > > > > > Isn't that what the npy prefix is for? > > > > > > No, npy_ is for public symbols. 
Anything in private should > be > > > private :) > > > > > > > In any case, if it needs to be at a > > > > higher level for easy inclusion, then it should move > up. > > > > > > It is not that easy - we should avoid putting this code > into > > > core/include, because then we have to keep it compatible > > across > > > releases, but there is no easy way to share headers > > between modules > > > without making it public. > > > > > > > > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility > > problems > > > across releases. > > > > > > > > > > > > In particular, I think > > > > > > #if (PY_VERSION_HEX < 0x02060000) > > > #define Py_TYPE(o) (((PyObject*)(o))->ob_type) > > > #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) > > > #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) > > > #endif > > > > > > belongs somewhere near the top, maybe with a prefix (cython seems > to > > > define them also) > > > > The rule is easy: one should put in core/include/numpy whatever is > > public, and put in private what is not. > > > > Note that defining those macros above publicly is very likely to > cause > > trouble because I am sure other people do define those macros, > without > > caring about polluting the namespace as well. Given that it is > > temporary, and is small, I think copying the compat header is better > > than making it public, the best solution being to add something in > > distutils to share it between submodules, > > > > > > You would prefer to fix the macros in ndarrayobject.h using #ifdef's > then? > > In case what I am worried about is not clear: if ndarrayobject.h defines > Py_TYPE, it means that every C extensions using the numpy C API will > have Py_TYPE in the public namespace. Now, if another python extension > with a C API does the same, you have issues. 
Having #ifdef/#endif around > only make it worse because then you have strange interactions depending > on the order of header inclusion (I really hate that behavior from the > python headers). > > The numpy C headers are already pretty messy, let's not make it worse. > Especially since the workaround is trivial. > > What is the work around? Mind, I think those macros need to be compatible with py3k just to make porting other applications easier. I still think we should call it NPY_Py_TYPE or some such. We also have some stray ob_refcnt. Note that the gnu headers also have implementation stuff hidden away in a folder. Whatever we do, I think it needs to be easy to discover for anyone coming new to the code, it shouldn't be hidden away somewhere in the distutils. That's like burying it on a small Caribbean island along with all the witnesses. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Feb 16 00:52:27 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 22:52:27 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do?
In-Reply-To: References: <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> <4B7A2ADB.2000706@silveregg.co.jp> Message-ID: On Mon, Feb 15, 2010 at 10:35 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Mon, Feb 15, 2010 at 10:19 PM, David Cournapeau wrote: > >> Charles R Harris wrote: >> > >> > >> > On Mon, Feb 15, 2010 at 8:13 PM, David Cournapeau < >> david at silveregg.co.jp >> > > wrote: >> > >> > Charles R Harris wrote: >> > > >> > > >> > > On Mon, Feb 15, 2010 at 3:58 PM, Charles R Harris >> > > >> > > > >> wrote: >> > > >> > > >> > > >> > > On Mon, Feb 15, 2010 at 3:34 PM, David Cournapeau >> > > >> > >> wrote: >> > > >> > > On Tue, Feb 16, 2010 at 7:04 AM, Charles R Harris >> > > > > > charlesr.harris at gmail.com >> > >> >> > > wrote: >> > > > >> > > > >> > > > On Mon, Feb 15, 2010 at 2:46 PM, David Cournapeau >> > > >> > >> >> > > > wrote: >> > > >> >> > > >> On Tue, Feb 16, 2010 at 4:08 AM, Charles R Harris >> > > >> > > >> > > > > >> wrote: >> > > >> >> > > >> > >> > > >> > I was wondering about that. Why do we have a >> private >> > > include directory? >> > > >> > Would it make more sense to move it to >> > > core/include/numpy/private. >> > > >> >> > > >> No, the whole point is to avoid other packages to >> include >> > > that by >> > > >> mistake, to avoid namespace pollution. >> > > > >> > > > Isn't that what the npy prefix is for? >> > > >> > > No, npy_ is for public symbols. Anything in private >> should be >> > > private :) >> > > >> > > > In any case, if it needs to be at a >> > > > higher level for easy inclusion, then it should move >> up. >> > > >> > > It is not that easy - we should avoid putting this code >> into >> > > core/include, because then we have to keep it compatible >> > across >> > > releases, but there is no easy way to share headers >> > between modules >> > > without making it public. 
>> > > >> > > >> > > Py_TYPE, Py_Size, etc. are unlikely to cause compatibility >> > problems >> > > across releases. >> > > >> > > >> > > >> > > In particular, I think >> > > >> > > #if (PY_VERSION_HEX < 0x02060000) >> > > #define Py_TYPE(o) (((PyObject*)(o))->ob_type) >> > > #define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) >> > > #define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) >> > > #endif >> > > >> > > belongs somewhere near the top, maybe with a prefix (cython seems >> to >> > > define them also) >> > >> > The rule is easy: one should put in core/include/numpy whatever is >> > public, and put in private what is not. >> > >> > Note that defining those macros above publicly is very likely to >> cause >> > trouble because I am sure other people do define those macros, >> without >> > caring about polluting the namespace as well. Given that it is >> > temporary, and is small, I think copying the compat header is better >> > than making it public, the best solution being to add something in >> > distutils to share it between submodules, >> > >> > >> > You would prefer to fix the macros in ndarrayobject.h using #ifdef's >> then? >> >> In case what I am worried about is not clear: if ndarrayobject.h defines >> Py_TYPE, it means that every C extensions using the numpy C API will >> have Py_TYPE in the public namespace. Now, if another python extension >> with a C API does the same, you have issues. Having #ifdef/#endif around >> only make it worse because then you have strange interactions depending >> on the order of header inclusion (I really hate that behavior from the >> python headers). >> >> The numpy C headers are already pretty messy, let's not make it worse. >> Especially since the workaround is trivial. >> >> > What is the work around? Mind, I think those macros need to be compatible > with py3k just to make porting other applications easier. I still think we > should call it NPY_Py_TYPE or some such. We also have some stray ob_refcnt. 
> Note that the gnu headers also have implementation stuff hidden away in a > folder. Whatever we do, I think it needs to be easy discover for anyone > coming new to the code, it shouldn't be hidden away in somewhere in the > distutils. That's like burying it on a small Caribbean island along with all > the witnesses. > > Just to be clear, there are *already* macros in the ndarrayobject.h file that aren't py3k compatible. How do you propose to fix those? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Tue Feb 16 01:00:14 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 16 Feb 2010 15:00:14 +0900 Subject: [Numpy-discussion] numpy 2.0, what else to do? In-Reply-To: References: <5b8d13221002151346u1ab13d12x67356e26abbc9777@mail.gmail.com> <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> <4B7A2ADB.2000706@silveregg.co.jp> Message-ID: <4B7A346E.8050402@silveregg.co.jp> Charles R Harris wrote: > > > > Just to be clear, there are *already* macros in the ndarrayobject.h file > that aren't py3k compatible. How do you propose to fix those? I don't understand the connection with the public vs private issue. If the py3k compatibility header is to be shared by several extensions, it has to be in a different header, included separately from any numpy headers. Fixing the ndarrayobject.h to be py3k-compatible can and should be done without adding extra public macros. David From charlesr.harris at gmail.com Tue Feb 16 01:21:40 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 15 Feb 2010 23:21:40 -0700 Subject: [Numpy-discussion] numpy 2.0, what else to do? 
In-Reply-To: <4B7A346E.8050402@silveregg.co.jp> References: <5b8d13221002151434p3f5e6f40v944693640e9cfd2e@mail.gmail.com> <4B7A0D5D.2070100@silveregg.co.jp> <4B7A2ADB.2000706@silveregg.co.jp> <4B7A346E.8050402@silveregg.co.jp> Message-ID: On Mon, Feb 15, 2010 at 11:00 PM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > > > Just to be clear, there are *already* macros in the ndarrayobject.h file > > that aren't py3k compatible. How do you propose to fix those? > > I don't understand the connection with the public vs private issue. If > the py3k compatibility header is to be shared by several extensions, it > has to be in a different header, included separately from any numpy > headers. > > So I had the impression that the compatibility header couldn't be included in some of the code that needed it. Is that the case. > Fixing the ndarrayobject.h to be py3k-compatible can and should be done > without adding extra public macros. > I wasn't adding extra public macros, I propose putting ifdefs around the *current* macros so that they are compatible with py3k. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.proeller at ifm.com Tue Feb 16 03:07:20 2010 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Tue, 16 Feb 2010 09:07:20 +0100 Subject: [Numpy-discussion] create dll from numpy code Message-ID: Hello, is there a possibility to create a dll from a numpy code? Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From brecht.machiels at esat.kuleuven.be Tue Feb 16 06:00:48 2010 From: brecht.machiels at esat.kuleuven.be (Brecht Machiels) Date: Tue, 16 Feb 2010 12:00:48 +0100 Subject: [Numpy-discussion] ndarray of complex-like data Message-ID: Hello, I have written a subclass of Python's complex type, which only adds a couple of properties that return values calculated from the real and imaginary parts (magnitude and angle, for example). 
Now I would like to store objects of this new type in an ndarray. As the new type doesn't store more information than a complex number, the ndarray's dtype can be 'complex'. I also assume that is is better for performance to use dtype=complex instead of dtype=object? Using dtype=complex also ensures that anything put into the array can be cast to a complex. However, I would like the array to return objects of my new type when I retrieve an item from the ndarray. I'm not sure how to do that. I would rather avoid having to define the new type in C. Kind regards, Brecht From ncreati at inogs.it Tue Feb 16 07:42:37 2010 From: ncreati at inogs.it (Nicola Creati) Date: Tue, 16 Feb 2010 13:42:37 +0100 Subject: [Numpy-discussion] Extract subset from an array Message-ID: <4B7A92BD.4050209@inogs.it> Hello, I need to extract a subset from a Nx3 array. Each row has x, y, and z coordinates. The subset is just a portion of the array in which the following condition realizes x_min < x < x_max and y_min < y < y_max The problem reduce to the extraction of points inside a rectangular box defined by x_min, x_max, y_min, y_max. I work with large arrays, the number or rows is always larger than 5x1e7. I'm looking for a fast way to extract the subset. At the moment I found a solution that seems the best. This is a small example: import numpy as np # Create a large 1e7x3 array of random numbers array = np.random.random((10000000, 3)) # Define rectangular box x_min = 0.3 x_max = 0.5 y_min = 0.4 y_max = 0.7 # Create bool array that indicates the elemnts of array to extract condition = (array[:,0]>x_min) & (array[:,0]y_min) & (array[:,1] References: Message-ID: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> On Feb 16, 2010, at 5:00 AM, Brecht Machiels wrote: > Hello, > > I have written a subclass of Python's complex type, which only adds a > couple of properties that return values calculated from the real and > imaginary parts (magnitude and angle, for example). 
> > Now I would like to store objects of this new type in an ndarray. As > the > new type doesn't store more information than a complex number, the > ndarray's dtype can be 'complex'. I also assume that is is better for > performance to use dtype=complex instead of dtype=object? Using > dtype=complex also ensures that anything put into the array can be > cast > to a complex. > > However, I would like the array to return objects of my new type > when I > retrieve an item from the ndarray. I'm not sure how to do that. I > would > rather avoid having to define the new type in C. I see two options: 1) Write a user defined type in C --- there is a floatint example in the doc directory you can use as guidance. 2) Subclass the ndarray to do what you want. -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 16 10:34:43 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Feb 2010 09:34:43 -0600 Subject: [Numpy-discussion] Extract subset from an array In-Reply-To: <4B7A92BD.4050209@inogs.it> References: <4B7A92BD.4050209@inogs.it> Message-ID: <3d375d731002160734n4e911a99kf6551ccb39d298d8@mail.gmail.com> On Tue, Feb 16, 2010 at 06:42, Nicola Creati wrote: > Hello, > I need to extract a subset from a Nx3 array. Each row has x, y, and z > coordinates. > The subset is just a portion of the array in which the following > condition realizes > > x_min < x < x_max and y_min < y < y_max > > The problem reduce to the extraction of points inside a rectangular box > defined by > x_min, x_max, y_min, y_max. > > I work with large arrays, the number or rows is always larger than 5x1e7. > I'm looking for a fast way to extract the subset. > > At the moment I found a solution that seems the best. 
This is a small > example: > > import numpy as np > > # Create a large 1e7x3 array of random numbers > array = np.random.random((10000000, 3)) > > # Define rectangular box > x_min = 0.3 > x_max = 0.5 > y_min = 0.4 > y_max = 0.7 > > # Create bool array that indicates the elements of array to extract > condition = (array[:,0]>x_min) & (array[:,0]<x_max) & (array[:,1]>y_min) & (array[:,1]<y_max) > > # Extract the subset > subset = array[condition] > > Are there any faster solutions? That's about as good as it gets. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Feb 16 10:38:33 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Feb 2010 09:38:33 -0600 Subject: [Numpy-discussion] create dll from numpy code In-Reply-To: References: Message-ID: <3d375d731002160738w27db5e55sce2a29d499b2a8ca@mail.gmail.com> On Tue, Feb 16, 2010 at 02:07, wrote: > > Hello, > > is there a possibility to create a dll from a numpy code? Not really, no. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sierra_mtnview at sbcglobal.net Tue Feb 16 10:39:04 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Tue, 16 Feb 2010 07:39:04 -0800 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters Message-ID: <4B7ABC18.7010908@sbcglobal.net>
I would guess that updating to a higher update does not mean I need to remove the old one, correct? In general for libraries like numpy or scipy, I use win32 updates, but I see win32-p3 updates too on download pages. Since I may be distributing this program to p3 machines, will I need to provide the win32-p3 updates to those users? -- "Crime is way down. War is declining. And that's far from the good news." -- Steven Pinker (and other sources) Why is this true, but yet the media says otherwise? The media knows very well how to manipulate us (see limbic, emotion, $$). -- WTW From brecht.machiels at esat.kuleuven.be Tue Feb 16 10:58:25 2010 From: brecht.machiels at esat.kuleuven.be (Brecht Machiels) Date: Tue, 16 Feb 2010 16:58:25 +0100 Subject: [Numpy-discussion] ndarray of complex-like data In-Reply-To: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> References: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> Message-ID: Travis Oliphant wrote: > On Feb 16, 2010, at 5:00 AM, Brecht Machiels wrote: >> I have written a subclass of Python's complex type, which only adds a >> couple of properties that return values calculated from the real and >> imaginary parts (magnitude and angle, for example). >> >> Now I would like to store objects of this new type in an ndarray. As the >> new type doesn't store more information than a complex number, the >> ndarray's dtype can be 'complex'. I also assume that is is better for >> performance to use dtype=complex instead of dtype=object? Using >> dtype=complex also ensures that anything put into the array can be cast >> to a complex. >> >> However, I would like the array to return objects of my new type when I >> retrieve an item from the ndarray. I'm not sure how to do that. I would >> rather avoid having to define the new type in C. > > 2) Subclass the ndarray to do what you want. I have subclassed ndarray, but I'm not sure how to continue from there. 
I was thinking of overriding __getitem__ and casting the complex to my complex subclass. Would that be the way to go? How would that work with slices? Kind regards, Brecht From robert.kern at gmail.com Tue Feb 16 11:02:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Feb 2010 10:02:17 -0600 Subject: [Numpy-discussion] ndarray of complex-like data In-Reply-To: References: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> Message-ID: <3d375d731002160802i6acd309dveffffff343a2b76e@mail.gmail.com> On Tue, Feb 16, 2010 at 09:58, Brecht Machiels wrote: > Travis Oliphant wrote: >> On Feb 16, 2010, at 5:00 AM, Brecht Machiels wrote: >>> I have written a subclass of Python's complex type, which only adds a >>> couple of properties that return values calculated from the real and >>> imaginary parts (magnitude and angle, for example). >>> >>> Now I would like to store objects of this new type in an ndarray. As the >>> new type doesn't store more information than a complex number, the >>> ndarray's dtype can be 'complex'. I also assume that is is better for >>> performance to use dtype=complex instead of dtype=object? Using >>> dtype=complex also ensures that anything put into the array can be cast >>> to a complex. >>> >>> However, I would like the array to return objects of my new type when I >>> retrieve an item from the ndarray. I'm not sure how to do that. I would >>> rather avoid having to define the new type in C. >> >> 2) Subclass the ndarray to do what you want. > > I have subclassed ndarray, but I'm not sure how to continue from there. > I was thinking of overriding __getitem__ and casting the complex to my > complex subclass. Would that be the way to go? How would that work with > slices? I strongly recommend simply implementing an arg() function that works on both arrays and and complex objects. Then just use abs() and arg() instead of trying to get instances of your class and using their attributes. 
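As an illustration of this suggestion, a sketch of such an `arg()` function — here just a thin wrapper around `numpy.angle`, with made-up sample values:

```python
import numpy as np

def arg(z):
    """Phase angle in radians; works on complex scalars and ndarrays alike."""
    return np.angle(z)

z = 3 + 4j
a = np.array([1 + 1j, -2 + 0j])
print(abs(z), arg(z))   # magnitude and angle of a plain complex scalar
print(abs(a), arg(a))   # elementwise magnitude and angle of a complex array
```

With free functions like these, a plain `dtype=complex` array suffices and no subclassing is needed.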
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bsouthey at gmail.com Tue Feb 16 13:11:01 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 16 Feb 2010 12:11:01 -0600 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266252918.6419.5.camel@idol> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> Message-ID: <4B7ADFB5.9050507@gmail.com> On 02/15/2010 10:55 AM, Pauli Virtanen wrote: > ma, 2010-02-15 kello 10:18 -0600, Bruce Southey kirjoitti: > [clip] > >> Is there a correct way to get Python3.1 to find the relative path on Linux? >> I can change the import statement to work but I do not think that is viable. >> > You need to use relative imports. 2to3 should be able to take care of > this. > > [clip] > >> File "/usr/local/lib/python3.1/site-packages/numpy/lib/__init__.py", >> line 1, in >> from info import __doc__ >> ImportError: No module named info >> > That statement should read > > from .info import __doc__ > > and indeed, it reads like that for me. Check how it is in > build/py3k/numpy/lib/__init__.py > > Most likely you interrupted the build by Ctrl+C and 2to3 did not finish > the conversion of the files to Python3 format. Try removing the build/ > directory and trying again -- if you interrupt it, 2to3 may not have > finished running. > > Of course, it should be more robust, but at the moment, it isn't > (patches welcome). 
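The relative-import point can be demonstrated with a tiny throwaway package (the names `mypkg` and `info.py` below are illustrative, not NumPy's real layout):

```python
import os
import sys
import tempfile

# Build a minimal package whose __init__ uses the explicit relative form
# that Python 3 requires; Python 2's implicit "from info import __doc__"
# raises ImportError under Python 3.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "info.py"), "w") as f:
    f.write("__doc__ = 'docs live here'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .info import __doc__\n")   # explicit relative import

sys.path.insert(0, root)
import mypkg
print(mypkg.__doc__)   # -> docs live here
```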
> > Hi, I managed to get 2to3 (I think from Python 3.1) to crash and isolated it to the file numpy-work/numpy/lib/arrayterator.py So I might hitting this ' assertion error in 2to3' bug: http://bugs.python.org/issue7824 I try to get the latest version of 2to3 and try again. Bruce $ 2to3 -w arrayterator.py RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: set_literal RefactoringTool: Skipping implicit fixer: ws_comma Traceback (most recent call last): File "/usr/local/bin/2to3", line 6, in sys.exit(main("lib2to3.fixes")) File "/usr/local/lib/python3.1/lib2to3/main.py", line 159, in main options.processes) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 616, in refactor items, write, doctests_only) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 276, in refactor self.refactor_file(dir_or_file, write, doctests_only) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 656, in refactor_file *args, **kwargs) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 328, in refactor_file tree = self.refactor_string(input, filename) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 358, in refactor_string self.refactor_tree(tree, name) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 392, in refactor_tree self.traverse_by(self.post_order_heads, tree.post_order()) File "/usr/local/lib/python3.1/lib2to3/refactor.py", line 418, in traverse_by node.replace(new) File "/usr/local/lib/python3.1/lib2to3/pytree.py", line 133, in replace assert self.parent is not None, str(self) AssertionError: def __init__(self, var, buf_size=None): self.var = var self.buf_size = buf_size self.start = [0 for dim in var.shape] self.stop = [dim for dim in var.shape] self.step = [1 for dim in var.shape] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xavier.gnata at gmail.com Mon Feb 15 14:42:12 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Mon, 15 Feb 2010 20:42:12 +0100 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266216953.2728.2.camel@talisman> References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> Message-ID: <4B79A394.9010907@gmail.com> New try new error: gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-x86_64-3.1/numpy/core/src/multiarray/multiarraymodule_onefile.o -Lbuild/temp.linux-x86_64-3.1 -lnpymath -lm -o build/lib.linux-x86_64-3.1/numpy/core/multiarray.so /usr/bin/ld: build/temp.linux-x86_64-3.1/numpy/core/src/multiarray/multiarraymodule_onefile.o: relocation R_X86_64_PC32 against undefined symbol `_numpymemoryview_init' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: Bad value collect2: ld returned 1 exit status /usr/bin/ld: build/temp.linux-x86_64-3.1/numpy/core/src/multiarray/multiarraymodule_onefile.o: relocation R_X86_64_PC32 against undefined symbol `_numpymemoryview_init' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: Bad value collect2: ld returned 1 exit status error: Command "gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-x86_64-3.1/numpy/core/src/multiarray/multiarraymodule_onefile.o -Lbuild/temp.linux-x86_64-3.1 -lnpymath -lm -o build/lib.linux-x86_64-3.1/numpy/core/multiarray.so" failed with exit status 1 No clue why :( Xavier >> Ok! 
>> git clone git://github.com/pv/numpy-work.git >> git checkout origin/py3k >> NPY_SEPARATE_BUILD=1 python3.1 setup.py build >> >> but now it fails during the build: >> >> In file included from numpy/core/src/multiarray/buffer.c:14, >> from numpy/core/src/multiarray/multiarraymodule_onefile.c:36: >> numpy/core/src/multiarray/buffer.h: At top level: >> numpy/core/src/multiarray/buffer.h:14: error: conflicting types for >> ?_descriptor_from_pep3118_format? >> numpy/core/src/multiarray/common.c:220: note: previous implicit >> declaration of ?_descriptor_from_pep3118_format? was here >> In file included from >> numpy/core/src/multiarray/multiarraymodule_onefile.c:36: >> numpy/core/src/multiarray/buffer.c: In function ?_buffer_format_string?: >> numpy/core/src/multiarray/buffer.c:151: warning: unused variable ?repr? >> > Hmm, I probably tested only the separate compilation properly as it > seems the single-file build is failing. The environment variable is > actually NPY_SEPARATE_COMPILATION=1, not *_BUILD. > > From faltet at pytables.org Tue Feb 16 13:26:15 2010 From: faltet at pytables.org (Francesc Alted) Date: Tue, 16 Feb 2010 19:26:15 +0100 Subject: [Numpy-discussion] Extract subset from an array In-Reply-To: <4B7A92BD.4050209@inogs.it> References: <4B7A92BD.4050209@inogs.it> Message-ID: <201002161926.15587.faltet@pytables.org> A Tuesday 16 February 2010 13:42:37 Nicola Creati escrigu?: > Hello, > I need to extract a subset from a Nx3 array. Each row has x, y, and z > coordinates. > The subset is just a portion of the array in which the following > condition realizes > > x_min < x < x_max and y_min < y < y_max > > The problem reduce to the extraction of points inside a rectangular box > defined by > x_min, x_max, y_min, y_max. > > I work with large arrays, the number or rows is always larger than 5x1e7. > I'm looking for a fast way to extract the subset. > > At the moment I found a solution that seems the best. 
This is a small > example: > > import numpy as np > > # Create a large 1e7x3 array of random numbers > array = np.random.random((10000000, 3)) > > # Define rectangular box > x_min = 0.3 > x_max = 0.5 > y_min = 0.4 > y_max = 0.7 > > # Create bool array that indicates the elemnts of array to extract > condition = (array[:,0]>x_min) & (array[:,0]y_min) > & (array[:,1] > # Extract the subset > subset = array[condition] > > Are there any faster solution? In the above condition you are walking strided arrays, and that hurts performance somewhat. If you can afford to transpose your array first, you can get some significant performance. For example, your original code takes: In [6]: x_min, x_max, y_min, y_max = .3, .5, .4, .7 In [7]: array = np.random.random((10000000, 3)) In [8]: time (array[:,0]>x_min) & (array[:,0]y_min) & (array[:,1]x_min) & (array[0]y_min) & (array[1] References: <4B77034A.7060705@gmail.com> <1266092883.4565.2.camel@Nokia-N900-42-11> <4B771482.8080306@gmail.com> <4B77195E.5050308@gmail.com> <4B788991.3070605@gmail.com> <1266216953.2728.2.camel@talisman> <4B7973F3.7040504@gmail.com> <1266252918.6419.5.camel@idol> <4B7ADFB5.9050507@gmail.com> Message-ID: <1266345078.2728.3.camel@talisman> ti, 2010-02-16 kello 12:11 -0600, Bruce Southey kirjoitti: [clip] > I managed to get 2to3 (I think from Python 3.1) to crash and isolated > it to the file numpy-work/numpy/lib/arrayterator.py > > So I might hitting this ' assertion error in 2to3' bug: > http://bugs.python.org/issue7824 > > I try to get the latest version of 2to3 and try again. Try the latest head of the py3k branch, this should be worked around there so that also earlier 2to3 work. -- Pauli Virtanen From oliphant at enthought.com Tue Feb 16 14:13:18 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Tue, 16 Feb 2010 13:13:18 -0600 Subject: [Numpy-discussion] ABI changes complete in trunk Message-ID: I've made the ABI changes I think are needed in the SVN trunk. 
Please feel free to speak up if you have concerns or problems (and if you want to change white-space, just do it...). If the release schedule needs to be delayed by several weeks in order to get Py3k support in NumPy 2.0, that seems like a worthwhile thing. I wish I had time to help, but Pauli and Chuck are doing a great job. Best, -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From neilcrighton at gmail.com Tue Feb 16 17:17:20 2010 From: neilcrighton at gmail.com (Neil Crighton) Date: Tue, 16 Feb 2010 22:17:20 +0000 (UTC) Subject: [Numpy-discussion] Extract subset from an array References: <4B7A92BD.4050209@inogs.it> <201002161926.15587.faltet@pytables.org> Message-ID: Francesc Alted pytables.org> writes: > In [10]: array = np.random.random((3, 10000000)) > > then the time drops significantly: > > In [11]: time (array[0]>x_min) & (array[0]y_min) & > (array[1] CPU times: user 0.15 s, sys: 0.01 s, total: 0.16 s > Wall time: 0.16 s > Out[12]: array([False, False, False, ..., False, False, False], dtype=bool) > > i.e. walking your arrays row-wise is around 1.7x faster in this case. > It saves some array creation if you use &=: In [29]: array = np.random.random((10000000, 3)) In [30]: x_min, x_max, y_min, y_max = .3, .5, .4, .7 In [31]: %timeit c = (array[:,0]>x_min) & (array[:,0]y_min) & (array[:,1]x_min); c &= (array[:,0]y_min); c &= (array[:,1] References: <4B7ABC18.7010908@sbcglobal.net> Message-ID: <4B7B3935.2060300@silveregg.co.jp> Hi Wayne, Wayne Watson wrote: > I normally use IDLE on Win, but recently needed to go to command prompt > to see all error messages. When I did, I was greeted by a host of > deprecation and Numpy messages before things got running. The program > otherwise functioned OK, after I found the problem I was after. Are > these messages a warning to get to the next update of numpy? > > I would guess that updating to a higher update does not mean I need to > remove the old one, correct? 
> > In general for libraries like numpy or scipy, I use win32 updates, but I > see win32-p3 updates too on download pages. Since I may be distributing > this program to p3 machines, will I need to provide the win32-p3 updates > to those users? I am not familiar with IDLE, so I don't really understand your problem: - what triggers numpy warnings ? You talked about a program, but without knowing which program, we can't really help you. - What warnings do you get ? - What is win32-p3 updates ? cheers, David From david at silveregg.co.jp Tue Feb 16 19:43:03 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 17 Feb 2010 09:43:03 +0900 Subject: [Numpy-discussion] create dll from numpy code In-Reply-To: References: Message-ID: <4B7B3B97.6040402@silveregg.co.jp> markus.proeller at ifm.com wrote: > > Hello, > > is there a possibility to create a dll from a numpy code? What do you want to create a dll for ? For distribution purpose, to hide your code, etc... ? cheers, David From sierra_mtnview at sbcglobal.net Wed Feb 17 00:10:32 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Tue, 16 Feb 2010 21:10:32 -0800 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <4B7B3935.2060300@silveregg.co.jp> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> Message-ID: <4B7B7A48.1010803@sbcglobal.net> Hi, I'm working on a 1800+ line program that uses tkinter. Here are the messages I started getting recently. (I finally figured out how to copy them.). The program goes merrily on its way despite them. 
s\sentuser>sentuser_20080716NoiseStudy7.py C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: DeprecationWarning: Num pyTest will be removed in the next release; please update your code to use nose or unittest test = NumpyTest().test C:\Python25\lib\site-packages\scipy\special\__init__.py:23: DeprecationWarning: NumpyTest will be removed in the next release; please update your code to use no se or unittest test = NumpyTest().test C:\Python25\lib\site-packages\scipy\linalg\__init__.py:32: DeprecationWarning: N umpyTest will be removed in the next release; please update your code to use nos e or unittest test = NumpyTest().test C:\Python25\lib\site-packages\scipy\optimize\__init__.py:19: DeprecationWarning: NumpyTest will be removed in the next release; please update your code to use n ose or unittest test = NumpyTest().test C:\Python25\lib\site-packages\scipy\stats\__init__.py:15: DeprecationWarning: Nu mpyTest will be removed in the next release; please update your code to use nose or unittest test = NumpyTest().test Traceback (most recent call last): File "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1993, in Process() File "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1990, in Process root.mainloop() File "C:\Python25\lib\lib-tk\Tkinter.py", line 1023, in mainloop On 2/16/2010 4:32 PM, David Cournapeau wrote: > Hi Wayne, > > Wayne Watson wrote: > >> I normally use IDLE on Win, but recently needed to go to command prompt >> to see all error messages. When I did, I was greeted by a host of >> deprecation and Numpy messages before things got running. The program >> otherwise functioned OK, after I found the problem I was after. Are >> these messages a warning to get to the next update of numpy? 
>> >> I would guess that updating to a higher update does not mean I need to >> remove the old one, correct? >> >> In general for libraries like numpy or scipy, I use win32 updates, but I >> see win32-p3 updates too on download pages. Since I may be distributing >> this program to p3 machines, will I need to provide the win32-p3 updates >> to those users? >> > I am not familiar with IDLE, so I don't really understand your problem: > - what triggers numpy warnings ? You talked about a program, but > without knowing which program, we can't really help you. > - What warnings do you get ? > - What is win32-p3 updates ? > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- "There is nothing so annoying as to have two people talking when you're busy interrupting." -- Mark Twain From josef.pktd at gmail.com Wed Feb 17 00:25:17 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 17 Feb 2010 00:25:17 -0500 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <4B7B7A48.1010803@sbcglobal.net> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> Message-ID: <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> On Wed, Feb 17, 2010 at 12:10 AM, Wayne Watson wrote: > Hi, I'm working on a 1800+ line program that uses tkinter. Here are the > messages I started getting recently. (I finally figured out how to copy > them.). The program goes merrily on its way despite them. > > > s\sentuser>sentuser_20080716NoiseStudy7.py > C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: > DeprecationWarning: Num > pyTest will be removed in the next release; please update your code to > use nose > or unittest > ? 
test = NumpyTest().test > C:\Python25\lib\site-packages\scipy\special\__init__.py:23: > DeprecationWarning: > NumpyTest will be removed in the next release; please update your code > to use no > se or unittest > ? test = NumpyTest().test > C:\Python25\lib\site-packages\scipy\linalg\__init__.py:32: > DeprecationWarning: N > umpyTest will be removed in the next release; please update your code to > use nos > e or unittest > ? test = NumpyTest().test > C:\Python25\lib\site-packages\scipy\optimize\__init__.py:19: > DeprecationWarning: > ?NumpyTest will be removed in the next release; please update your code > to use n > ose or unittest > ? test = NumpyTest().test > C:\Python25\lib\site-packages\scipy\stats\__init__.py:15: > DeprecationWarning: Nu > mpyTest will be removed in the next release; please update your code to > use nose > ?or unittest > ? test = NumpyTest().test > Traceback (most recent call last): > ? File > "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ > Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1993, in > ? ? Process() > ? File > "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ > Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1990, in Process > ? ? root.mainloop() > ? File "C:\Python25\lib\lib-tk\Tkinter.py", line 1023, in mainloop DeprecationWarnings mean some some functionality in numpy (or scipy) has changed and the old way of doing things will be removed and be invalid in the next version. During depreciation the old code still works, but before you upgrade you might want to check whether and how much you use these functions and switch to the new behavior. In the case of numpy.test, it means that if you have tests written that use the numpy testing module, then you need to switch them to the new nose based numpy.testing. And you need to install nose for running numpy.test() >> - What is win32-p3 updates ? 
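More generally, the standard `warnings` module can surface or silence such messages while code is being updated; a generic sketch, where `old_api` is a stand-in for a deprecated call, not a real NumPy/SciPy function:

```python
import warnings

def old_api():
    warnings.warn("old_api will be removed; use new_api", DeprecationWarning)
    return 42

# Promote DeprecationWarnings to errors to locate every offending call site...
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        old_api()
    except DeprecationWarning as w:
        print("caught:", w)

# ...or silence them once the migration plan is known.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    print(old_api())
```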
Josef > > On 2/16/2010 4:32 PM, David Cournapeau wrote: >> Hi Wayne, >> >> Wayne Watson wrote: >> >>> I normally use IDLE on Win, but recently needed to go to command prompt >>> to see all error messages. When I did, I was greeted by a host of >>> deprecation and Numpy messages before things got running. The program >>> otherwise functioned OK, after I found the problem I was after. Are >>> these messages a warning to get to the next update of numpy? >>> >>> I would guess that updating to a higher update does not mean I need to >>> remove the old one, correct? >>> >>> In general for libraries like numpy or scipy, I use win32 updates, but I >>> see win32-p3 updates too on download pages. Since I may be distributing >>> this program to p3 machines, will I need to provide the win32-p3 updates >>> to those users? >>> >> I am not familiar with IDLE, so I don't really understand your problem: >> ? ? ? - what triggers numpy warnings ? You talked about a program, but >> without knowing which program, we can't really help you. >> ? ? ? - What warnings do you get ? >> ? ? ? - What is win32-p3 updates ? >> >> cheers, >> >> David >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > -- > ? ? ? ? ? ? "There is nothing so annoying as to have two people > ? ? ? ? ? ? ?talking when you're busy interrupting." 
-- Mark Twain > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From scott.sinclair.za at gmail.com Wed Feb 17 01:01:31 2010 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Wed, 17 Feb 2010 08:01:31 +0200 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> Message-ID: <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> >On 17 February 2010 07:25, wrote: > On Wed, Feb 17, 2010 at 12:10 AM, Wayne Watson > wrote: >> Hi, I'm working on a 1800+ line program that uses tkinter. Here are the >> messages I started getting recently. (I finally figured out how to copy >> them.). The program goes merrily on its way despite them. >> >> >> s\sentuser>sentuser_20080716NoiseStudy7.py >> C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: >> DeprecationWarning: Num >> pyTest will be removed in the next release; please update your code to >> use nose >> or unittest >> ? test = NumpyTest().test > > DeprecationWarnings mean some some functionality in numpy (or scipy) > has changed and the old way of doing things will be removed and be > invalid in the next version. > > During depreciation the old code still works, but before you upgrade > you might want to check whether and how much you use these functions > and switch to the new behavior. > > In the case of numpy.test, it means that if you have tests written > that use the numpy testing module, then you need to switch them to the > new nose based numpy.testing. And you need to install nose for running > numpy.test() Wayne - The DeprecationWarnings are being raised by SciPy, not by your code. 
You probably don't have a recent version of SciPy installed. The most recent release of SciPy is 0.7.1 and works with NumPy 1.3.0. I don't think you will see the warnings if you upgrade SciPy and NumPy on your system. Check your NumPy and SciPy versions at a python prompt as follows: >>> import numpy as np >>> print np.__version__ >>> import scipy as sp >>> print sp.__version__ You will need to completely remove the old versions if you choose to upgrade. You should be able to do this from "Add/Remove Programs". Cheers, Scott From markus.proeller at ifm.com Wed Feb 17 01:05:43 2010 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Wed, 17 Feb 2010 07:05:43 +0100 Subject: [Numpy-discussion] Antwort: Re: create dll from numpy code In-Reply-To: <4B7B3B97.6040402@silveregg.co.jp> Message-ID: numpy-discussion-bounces at scipy.org schrieb am 17.02.2010 01:43:03: > markus.proeller at ifm.com wrote: > > > > Hello, > > > > is there a possibility to create a dll from a numpy code? > > What do you want to create a dll for ? For distribution purpose, to hide > your code, etc... ? > To replace a Matlab generated dll, which is part of a bigger project. Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Wed Feb 17 01:31:52 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 17 Feb 2010 15:31:52 +0900 Subject: [Numpy-discussion] Antwort: Re: create dll from numpy code In-Reply-To: References: Message-ID: <4B7B8D58.5020604@silveregg.co.jp> markus.proeller at ifm.com wrote: > > numpy-discussion-bounces at scipy.org schrieb am 17.02.2010 01:43:03: > > > markus.proeller at ifm.com wrote: > > > > > > Hello, > > > > > > is there a possibility to create a dll from a numpy code? > > > > What do you want to create a dll for ? For distribution purpose, to hide > > your code, etc... ? > > > To replace a Matlab generated dll, which is part of a bigger project. 
If you mean that it is a DLL generated from .m code compiled from the matlab compiler, then the equivalent of doing that in python is to embed the python interpreter in your code. It is possible to embed Python with numpy inside it, but you can't have everything in one dll - you need all the python files used by your program. Depending on your usecase, it may be important or not, but we need more informations on what you are trying to do to help you, cheers, David From ncreati at inogs.it Wed Feb 17 02:34:06 2010 From: ncreati at inogs.it (Nicola Creati) Date: Wed, 17 Feb 2010 08:34:06 +0100 Subject: [Numpy-discussion] Extract subset from an array In-Reply-To: References: <4B7A92BD.4050209@inogs.it> <201002161926.15587.faltet@pytables.org> Message-ID: <4B7B9BEE.600@inogs.it> Neil Crighton wrote: > Francesc Alted pytables.org> writes: > > >> In [10]: array = np.random.random((3, 10000000)) >> >> then the time drops significantly: >> >> In [11]: time (array[0]>x_min) & (array[0]y_min) & >> (array[1]> CPU times: user 0.15 s, sys: 0.01 s, total: 0.16 s >> Wall time: 0.16 s >> Out[12]: array([False, False, False, ..., False, False, False], dtype=bool) >> >> i.e. walking your arrays row-wise is around 1.7x faster in this case. >> >> > > It saves some array creation if you use &=: > > In [29]: array = np.random.random((10000000, 3)) > In [30]: x_min, x_max, y_min, y_max = .3, .5, .4, .7 > > In [31]: %timeit c = (array[:,0]>x_min) & (array[:,0] (array[:,1]>y_min) & (array[:,1] 1 loops, best of 3: 633 ms per loop > > In [32]: %timeit c = (array[:,0]>x_min); c &= (array[:,0] c &= (array[:,1]>y_min); c &= (array[:,1] 1 loops, best of 3: 604 ms per loop > > Only ~5% speedup though, so not a big deal. > > Neil Any kind of improvement is really appreciated. Thank you. 
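Pulling the thread's suggestions together, a self-contained version of the boolean-mask extraction using the in-place `&=` variant might look like this (array size reduced from the original 1e7 rows so it runs quickly):

```python
import numpy as np

np.random.seed(0)
pts = np.random.random((100000, 3))            # rows of (x, y, z)
x_min, x_max, y_min, y_max = 0.3, 0.5, 0.4, 0.7

# Build the mask with in-place &= so only one boolean temporary stays alive.
mask = pts[:, 0] > x_min
mask &= pts[:, 0] < x_max
mask &= pts[:, 1] > y_min
mask &= pts[:, 1] < y_max

subset = pts[mask]                             # points inside the box
print(len(subset), "of", len(pts), "points fall inside the box")
```

Transposing to shape (3, N) first, or evaluating the same expression with numexpr, are the faster alternatives discussed elsewhere in the thread.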
Nicola From faltet at pytables.org Wed Feb 17 03:09:22 2010 From: faltet at pytables.org (Francesc Alted) Date: Wed, 17 Feb 2010 09:09:22 +0100 Subject: [Numpy-discussion] Extract subset from an array In-Reply-To: <4B7B9BEE.600@inogs.it> References: <4B7A92BD.4050209@inogs.it> <4B7B9BEE.600@inogs.it> Message-ID: <201002170909.22441.faltet@pytables.org> A Wednesday 17 February 2010 08:34:06 Nicola Creati escrigu?: > Any kind of improvement is really appreciated. Well, if you cannot really transpose your matrix, numexpr can also serve as a good accelerator: In [1]: import numpy as np In [2]: import numexpr as ne In [3]: x_min, x_max, y_min, y_max = .3, .5, .4, .7 In [4]: array = np.random.random((10000000, 3)) In [5]: time (array[:,0]>x_min) & (array[:,0]y_min) & (array[:,1]x_min) & (ay_min) & (b References: <4B7A92BD.4050209@inogs.it> <4B7B9BEE.600@inogs.it> <201002170909.22441.faltet@pytables.org> Message-ID: <4B7BACE5.9080906@inogs.it> Francesc Alted wrote: > A Wednesday 17 February 2010 08:34:06 Nicola Creati escrigu?: > >> Any kind of improvement is really appreciated. >> > > Well, if you cannot really transpose your matrix, numexpr can also serve as a > good accelerator: > > In [1]: import numpy as np > > In [2]: import numexpr as ne > > In [3]: x_min, x_max, y_min, y_max = .3, .5, .4, .7 > > In [4]: array = np.random.random((10000000, 3)) > > In [5]: time (array[:,0]>x_min) & (array[:,0]y_min) & > (array[:,1] CPU times: user 0.23 s, sys: 0.03 s, total: 0.26 s > Wall time: 0.27 s > Out[6]: array([False, False, False, ..., False, False, False], dtype=bool) > > In [9]: time ne.evaluate("(a>x_min) & (ay_min) & (b {'a': array[:,0], 'b': array[:,1]}) > CPU times: user 0.16 s, sys: 0.00 s, total: 0.16 s > Wall time: 0.16 s > Out[10]: array([False, False, False, ..., False, False, False], dtype=bool) > > Again, an 1.7x of improvement, but without the need for transposing. > > Hi, this morning I tried numexpr and I got a good speed improvement as you just suggest. 
Thanks for keep helping me. :) Nicola -- Nicola Creati Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS www.inogs.it Dipartimento di Geofisica della Litosfera Geophysics of Lithosphere Department CARS (Cartography and Remote Sensing) Research Group http://www.inogs.it/Cars/ Borgo Grotta Gigante 42/c 34010 Sgonico - Trieste - ITALY ncreati at ogs.trieste.it off. +39 040 2140 213 fax. +39 040 327307 _____________________________________________________________________ This communication, that may contain confidential and/or legally privileged information, is intended solely for the use of the intended addressees. Opinions, conclusions and other information contained in this message, that do not relate to the official business of OGS, shall be considered as not given or endorsed by it. Every opinion or advice contained in this communication is subject to the terms and conditions provided by the agreement governing the engagement with such a client. Any use, disclosure, copying or distribution of the contents of this communication by a not-intended recipient or in violation of the purposes of this communication is strictly prohibited and may be unlawful. For Italy only: Ai sensi del D.Lgs.196/2003 - "T.U. sulla Privacy" si precisa che le informazioni contenute in questo messaggio sono riservate ed a uso esclusivo del destinatario. _____________________________________________________________________ From brecht.machiels at esat.kuleuven.be Wed Feb 17 03:43:37 2010 From: brecht.machiels at esat.kuleuven.be (Brecht Machiels) Date: Wed, 17 Feb 2010 09:43:37 +0100 Subject: [Numpy-discussion] ndarray of complex-like data In-Reply-To: <3d375d731002160802i6acd309dveffffff343a2b76e@mail.gmail.com> References: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> <3d375d731002160802i6acd309dveffffff343a2b76e@mail.gmail.com> Message-ID: Robert Kern wrote: >>> 2) Subclass the ndarray to do what you want. 
>> I have subclassed ndarray, but I'm not sure how to continue from there. >> I was thinking of overriding __getitem__ and casting the complex to my >> complex subclass. Would that be the way to go? How would that work with >> slices? > > I strongly recommend simply implementing an arg() function that works > on both arrays and and complex objects. Then just use abs() and arg() > instead of trying to get instances of your class and using their > attributes. Hmm. I hadn't thought of that option. I guess this makes more sense as this can than also operate on arrays like you say. Still, it would be interesting to know whether it's possible to have an ndarray subclass return instances of my complex subclass, preferably without having to copy the complex data (which would occur if I were to implement this behaviour by overriding __getitem__ as described above). But I assume that would require a user-defined type written in C? Regards, Brecht From ralf.gommers at googlemail.com Wed Feb 17 09:04:55 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 17 Feb 2010 22:04:55 +0800 Subject: [Numpy-discussion] doc wiki merge request Message-ID: Hi, Can we have a doc wiki merge before the next release? I reviewed everything, there's changes to about 150 docstrings. See http://docs.scipy.org/numpy/patch/ Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Wed Feb 17 10:33:54 2010 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 17 Feb 2010 10:33:54 -0500 Subject: [Numpy-discussion] Unit Tests failing daily for 2 months Message-ID: Hi, I've been informed by our build/installation person that 3 unit tests have been failing in the daily numpy svn installation for the last 2 months. The most recent output from the tests is as follows: ====================================================================== FAIL: Test generic loops. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_ufunc.py", line 86, in test_generic_loops assert_almost_equal(fone(x), fone_val, err_msg=msg) File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal PyUFunc_F_F ACTUAL: array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=complex64) DESIRED: 1 ====================================================================== FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex64'>,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest self.test(*self.arg) File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 721, in check_loss_of_precision check(x_basic, 2*eps/1e-3) File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 691, in check 'arcsinh') AssertionError: (0, 0.0010023052, 0.9987238, 'arcsinh') ====================================================================== FAIL: test_umath.TestComplexFunctions.test_precisions_consistent ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest self.test(*self.arg) File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 602, in test_precisions_consistent assert_almost_equal(fcf, fcd, decimal=6, err_msg='fch-fcd %s'%f) File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal fch-fcd ACTUAL: 2.3561945j DESIRED: (0.66623943249251527+1.0612750619050355j) ---------------------------------------------------------------------- Ran 2512 tests in 18.753s FAILED (KNOWNFAIL=4, failures=3) Running
unit tests for numpy NumPy version 2.0.0.dev8116 NumPy is installed in /usr/stsci/pyssgdev/2.5.4/numpy Python version 2.5.4 (r254:67916, Oct 26 2009, 14:36:20) [GCC 3.4.6 20060404 (Red Hat 3.4.6-11)] nose version 0.11.1 errors: failures: (Test(), 'Traceback (most recent call last):\n File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_ufunc.py", line 86, in test_generic_loops\n assert_almost_equal(fone(x), fone_val, err_msg=msg)\n File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, in assert_almost_equal\n raise AssertionError(msg)\nAssertionError: \nArrays are not almost equal PyUFunc_F_F\n ACTUAL: array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=complex64)\n DESIRED: 1\n') (Test(test_umath.TestComplexFunctions.test_loss_of_precision(,)), 'Traceback (most recent call last):\n File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest\n self.test(*self.arg)\n File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 721, in check_loss_of_precision\n check(x_basic, 2*eps/1e-3)\n File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 691, in check\n \'arcsinh\')\nAssertionError: (0, 0.0010023052, 0.9987238, \'arcsinh\')\n') (Test(test_umath.TestComplexFunctions.test_precisions_consistent), 'Traceback (most recent call last):\n File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest\n self.test(*self.arg)\n File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", line 602, in test_precisions_consistent\n assert_almost_equal(fcf, fcd, decimal=6, err_msg=\'fch-fcd %s\'%f)\n File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, in assert_almost_equal\n raise AssertionError(msg)\nAssertionError: \nArrays are not almost equal fch-fcd \n ACTUAL: 2.3561945j\n DESIRED: (0.66623943249251527+1.0612750619050355j)\n') -- These failing tests are logged in Trac ticket numbers 1323, 1324, and 1325 respectively. Is anyone else seeing these failures? Any idea what the problem may be? 
It appears this problem is limited to 64-bit RHE 4 systems. Thank you for your time and help, Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From charlesr.harris at gmail.com Wed Feb 17 10:46:38 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 17 Feb 2010 08:46:38 -0700 Subject: [Numpy-discussion] Unit Tests failing daily for 2 months In-Reply-To: References: Message-ID: On Wed, Feb 17, 2010 at 8:33 AM, Christopher Hanley wrote: > Hi, > > I've been informed by our build/installation person that 3 unit tests > have been failing in the daily numpy svn installation for the last 2 > months. The most recent output from the tests is as follows: > > I don't see these here. What architecture/compiler/os ? > ====================================================================== > FAIL: Test generic loops. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_ufunc.py", > line 86, in test_generic_loops > assert_almost_equal(fone(x), fone_val, err_msg=msg) > File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, > in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal PyUFunc_F_F > ACTUAL: array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], > dtype=complex64) > DESIRED: 1 > > That's new to me. 
> ====================================================================== > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision( 'numpy.complex64'>,) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest > self.test(*self.arg) > File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", > line 721, in check_loss_of_precision > check(x_basic, 2*eps/1e-3) > File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", > line 691, in check > 'arcsinh') > AssertionError: (0, 0.0010023052, 0.9987238, 'arcsinh') > > ====================================================================== > FAIL: test_umath.TestComplexFunctions.test_precisions_consistent > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/stsci/pyssgdev/2.5.4/nose/case.py", line 183, in runTest > self.test(*self.arg) > File "/usr/stsci/pyssgdev/2.5.4/numpy/core/tests/test_umath.py", > line 602, in test_precisions_consistent > assert_almost_equal(fcf, fcd, decimal=6, err_msg='fch-fcd %s'%f) > File "/usr/stsci/pyssgdev/2.5.4/numpy/testing/utils.py", line 435, > in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal fch-fcd > ACTUAL: 2.3561945j > DESIRED: (0.66623943249251527+1.0612750619050355j) > > ---------------------------------------------------------------------- > These two also fail on the buildbot Mac and have for some time. If you are able to reproduce them on an available machine that will be helpful in tracking them down. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chanley at stsci.edu Wed Feb 17 11:07:45 2010 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 17 Feb 2010 11:07:45 -0500 Subject: [Numpy-discussion] Unit Tests failing daily for 2 months In-Reply-To: References: Message-ID: > > I don't see these here. What architecture/compiler/os ? > The system architecture is 2 * Intel Xeon with hyperthreading. The OS is Red Hat Enterprise (RHE) 4 64-bit running Python 2.5.4. The C compiler being used is GCC 3.4.6. No Fortran compiler is being used. > > That's new to me. Darn. I was hoping this was old news. > > These two also fail on the buildbot Mac and have for some time. If you are > able to reproduce them on an available machine that will be helpful in > tracking them down. > We can reproduce these problems on our RHE machine. A quick check of our logs from last night doesn't seem to indicate a problem on our Intel Mac 10.5 systems. However if there is something you want me to try just let me know what you need. Thanks, Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From robert.kern at gmail.com Wed Feb 17 11:08:46 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Feb 2010 10:08:46 -0600 Subject: [Numpy-discussion] ndarray of complex-like data In-Reply-To: References: <9498932A-2F83-412D-9D16-FBBAA6C828AC@enthought.com> <3d375d731002160802i6acd309dveffffff343a2b76e@mail.gmail.com> Message-ID: <3d375d731002170808t16dfaec8v98b906fb3633b71c@mail.gmail.com> On Wed, Feb 17, 2010 at 02:43, Brecht Machiels wrote: > Robert Kern wrote: >>>> 2) Subclass the ndarray to do what you want. >>> I have subclassed ndarray, but I'm not sure how to continue from there. >>> I was thinking of overriding __getitem__ and casting the complex to my >>> complex subclass. Would that be the way to go? How would that work with >>> slices? 
>> >> I strongly recommend simply implementing an arg() function that works >> on both arrays and and complex objects. Then just use abs() and arg() >> instead of trying to get instances of your class and using their >> attributes. > > Hmm. I hadn't thought of that option. I guess this makes more sense as > this can than also operate on arrays like you say. > > Still, it would be interesting to know whether it's possible to have an > ndarray subclass return instances of my complex subclass, preferably > without having to copy the complex data (which would occur if I were to > implement this behaviour by overriding __getitem__ as described above). > But I assume that would require a user-defined type written in C? Well, when numpy indexes to get a scalar, the data is always copied (apart from object arrays, of course). It's not something you can or should try to avoid. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Wed Feb 17 12:18:25 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 17 Feb 2010 10:18:25 -0700 Subject: [Numpy-discussion] Unit Tests failing daily for 2 months In-Reply-To: References: Message-ID: On Wed, Feb 17, 2010 at 9:07 AM, Christopher Hanley wrote: > > > > I don't see these here. What architecture/compiler/os ? > > > > The system architecture is 2 * Intel Xeon with hyperthreading. The OS > is Red Hat Enterprise (RHE) 4 64-bit running Python 2.5.4. The C > compiler being used is GCC 3.4.6. No Fortran compiler is being used. > > My mistake on the buildbot os, it was FreeBSD. > > > > That's new to me. > > Darn. I was hoping this was old news. > > > > > These two also fail on the buildbot Mac and have for some time. If you > are > > able to reproduce them on an available machine that will be helpful in > > tracking them down. 
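Robert's suggestion of an arg() that accepts both scalars and arrays is, in essence, NumPy's built-in angle(); the following sketch (an illustration, not part of the original exchange) shows it working on a Python complex and on an ndarray:

```python
import numpy as np

def arg(z):
    """Phase angle of a complex scalar or array, in radians."""
    return np.angle(z)

# Works on a plain Python complex...
theta = arg(1 + 1j)  # pi/4

# ...and elementwise on an ndarray, alongside abs():
z = np.array([1 + 1j, -1 + 0j, 0 - 2j])
magnitudes = np.abs(z)
phases = arg(z)
```

np.angle(z, deg=True) returns degrees instead of radians, if that reads better for your application.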
> > > > We can reproduce these problems on our RHE machine. A quick check of > our logs from last night doesn't seem to indicate a problem on our > Intel Mac 10.5 systems. However if there is something you want me to > try just let me know what you need. > > It doesn't seem related to python version, but may be related to compiler version. For FreeBSD: Python version 2.5.2 (r252:60911, Dec 15 2008, 12:04:33) [GCC 3.4.6 [FreeBSD] 20060305] Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Wed Feb 17 13:14:23 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 17 Feb 2010 13:14:23 -0500 Subject: [Numpy-discussion] 1.4 still the candidate for easy_install Message-ID: <1E6D6C05-6F2C-4DC1-817B-49834DA7B41A@cs.toronto.edu> Hi, I'm pretty sure this is unintentional but I tried easy_install numpy the other day and it pulled down a 1.4 tarball from PyPI. David From millman at berkeley.edu Wed Feb 17 13:20:02 2010 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 17 Feb 2010 10:20:02 -0800 Subject: [Numpy-discussion] 1.4 still the candidate for easy_install In-Reply-To: <1E6D6C05-6F2C-4DC1-817B-49834DA7B41A@cs.toronto.edu> References: <1E6D6C05-6F2C-4DC1-817B-49834DA7B41A@cs.toronto.edu> Message-ID: Good catch. I just removed the 1.4.0 tarball from PyPI. 
Thanks, -- Jarrod Millman Helen Wills Neuroscience Institute 10 Giannini Hall, UC Berkeley http://cirl.berkeley.edu/ From christian at marquardt.sc Wed Feb 17 13:36:38 2010 From: christian at marquardt.sc (Christian Marquardt) Date: Wed, 17 Feb 2010 19:36:38 +0100 (CET) Subject: [Numpy-discussion] Sun Studio Compilers on Linux / atan2 regression In-Reply-To: <26364170.179.1266431605193.JavaMail.root@athene> Message-ID: <20652313.182.1266431798062.JavaMail.root@athene> Hi, when compiling numpy-1.4.0 with the Sun Studio Compilers (v12 Update 1) on Linux (an OpenSUSE 11.1 in my case), about 30 tests in numpy.test() fail; all failures are related to the arctan2 function. I've found that in r7732 a patch was applied to trunk/numpy/core/src/private/npy_config.h in response to #1201, #1202, and #1203, #undef'ing the HAVE_ATAN2 variable in order to fix a broken atan2() implementation on Solaris. It seems that this does no good with the most recent Sun compiler on Linux... The attached patch ensures that the original patch is only applied on Solaris platforms; with this applied, all tests are completed successfully under Linux. BTW, I did not observe #1204 or #1205... As I have no access to a Solaris machine, I also don't know if the original patch is required with Sun Studio 12.1 at all. Something different - I would've loved to enter this in the numpy-Trac, but registration didn't work (I was asked for another username/password at scipy.org during the registration process) :-(( Thanks, Christian. -------------- next part -------------- A non-text attachment was scrubbed...
Name: numpy-1.4.0-linux-11.1-sun-arctan2.patch Type: text/x-patch Size: 975 bytes Desc: not available URL: From silva at lma.cnrs-mrs.fr Wed Feb 17 16:29:20 2010 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 17 Feb 2010 22:29:20 +0100 Subject: [Numpy-discussion] f2py: variable number of arguments of variable lengths Message-ID: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> I previously coded a fortran function that needs a variable number of scalar arguments. This number is not known at compile time, but at call time. So I used to pass them within a vector, passing also the length of this vector subroutine systeme(inc,t,nm,Dinc,sn) C C evaluate the derivative of vector x at time t C with complex modes (sn). Used for the calculation C of auto-oscillations in resonator-valve coupled system. C integer nm,np,ny,ind double precision inc(1:2*nm+2), Dinc(1:2*nm+2) complex*16 sn(1:nm) Cf2py double precision, intent(in) :: t Cf2py integer, intent(in), optional :: nm Cf2py double precision, intent(in), dimension(2*nm+2) :: inc Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc Cf2py complex, intent(in), dimension(nm) :: sn I do now want to pass, not nm float values, but nm arrays of variables lengths. I expect to pass the following objects : - nm: number of arrays - L : a 1d-array (dimension nm) containing the lengths of each array - np: the sum of lengths - X : a 1d-array (dimension np) containing the concatenated arrays. Does anyone have an alternative to this suggestion ? any tip or example? 
Regards -- Fabrice Silva LMA UPR CNRS 7051 From robert.kern at gmail.com Wed Feb 17 16:43:00 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Feb 2010 15:43:00 -0600 Subject: [Numpy-discussion] f2py: variable number of arguments of variable lengths In-Reply-To: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> References: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <3d375d731002171343y5376f6edka30a1e5a7efb7203@mail.gmail.com> On Wed, Feb 17, 2010 at 15:29, Fabrice Silva wrote: > I previously coded a fortran function that needs a variable number of > scalar arguments. This number is not known at compile time, but at call > time. So I used to pass them within a vector, passing also the length of > this vector > > subroutine systeme(inc,t,nm,Dinc,sn) > C > C evaluate the derivative of vector x at time t > C with complex modes (sn). Used for the calculation > C of auto-oscillations in resonator-valve coupled system. > C > integer nm,np,ny,ind > double precision inc(1:2*nm+2), Dinc(1:2*nm+2) > complex*16 sn(1:nm) > > Cf2py double precision, intent(in) :: t > Cf2py integer, intent(in), optional :: nm > Cf2py double precision, intent(in), dimension(2*nm+2) :: inc > Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc > Cf2py complex, intent(in), dimension(nm) :: sn > > > I do now want to pass, not nm float values, but nm arrays of variables > lengths. I expect to pass the following objects : > - nm: number of arrays > - L : a 1d-array (dimension nm) containing the lengths of each array > - np: the sum of lengths > - X : a 1d-array (dimension np) containing the concatenated arrays. Yeah, that's pretty much what you would have to do.
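The packing scheme Robert confirms is straightforward on the Python side; here is a sketch with made-up helper names (the Fortran routine itself is not shown):

```python
import numpy as np

def pack_ragged(arrays):
    """Flatten a list of 1-d arrays into (nm, L, np_total, X) as in the thread."""
    nm = len(arrays)
    L = np.array([len(a) for a in arrays], dtype=int)   # per-array lengths
    X = np.concatenate([np.asarray(a, dtype=float) for a in arrays])
    return nm, L, L.sum(), X

def unpack_ragged(L, X):
    """Recover the individual arrays from the concatenated buffer."""
    offsets = np.concatenate(([0], np.cumsum(L)))
    return [X[offsets[i]:offsets[i + 1]] for i in range(len(L))]

nm, L, np_total, X = pack_ragged([[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]])
parts = unpack_ragged(L, X)
```

The same idea extends to 2-D arrays by also passing per-array row and column counts, as discussed later in the thread.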
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From silva at lma.cnrs-mrs.fr Wed Feb 17 16:55:48 2010 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 17 Feb 2010 22:55:48 +0100 Subject: [Numpy-discussion] f2py: variable number of arguments of variable lengths In-Reply-To: <3d375d731002171343y5376f6edka30a1e5a7efb7203@mail.gmail.com> References: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> <3d375d731002171343y5376f6edka30a1e5a7efb7203@mail.gmail.com> Message-ID: <1266443748.2096.23.camel@Portable-s2m.cnrs-mrs.fr> Le mercredi 17 f?vrier 2010 ? 15:43 -0600, Robert Kern a ?crit : > On Wed, Feb 17, 2010 at 15:29, Fabrice Silva wrote: > > I previously coded a fortran function that needs a variable number of > > scalar arguments. This number is not known at compile time, but at call > > time. So I used to pass them within a vector, passing also the length of > > this vector > > > > subroutine systeme(inc,t,nm,Dinc,sn) > > C > > C evaluate the derivative of vector x at time t > > C with complex modes (sn). Used for the calculation > > C of auto-oscillations in resonator-valve coupled system. > > C > > integer nm,np,ny,ind > > double precision inc(1:2*nm+2), Dinc(1:2*nm+2) > > complex*16 sn(1:nm) > > > > Cf2py double precision, intent(in) :: t > > Cf2py integer, intent(in), optional :: nm > > Cf2py double precision, intent(in), dimension(2*nm+2) :: inc > > Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc > > Cf2py complex, intent(in), dimension(nm) :: sn > > > > > > I do now want to pass, not nm float values, but nm arrays of variables > > lengths. I expect to pass the following objects : > > - nm: number of arrays > > - L : a 1d-array (dimension nm) containing the lengths of each array > > - np: the sum of lengths > > - X : a 1d-array (dimension np) containing the concatenated arrays. 
> > Yeah, that's pretty much what you would have to do. What about the next step: a variable number of arguments that are 2d-arrays with different shapes ? -- Fabrice Silva LMA UPR CNRS 7051 From robert.kern at gmail.com Wed Feb 17 17:21:37 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Feb 2010 16:21:37 -0600 Subject: [Numpy-discussion] f2py: variable number of arguments of variable lengths In-Reply-To: <1266443748.2096.23.camel@Portable-s2m.cnrs-mrs.fr> References: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> <3d375d731002171343y5376f6edka30a1e5a7efb7203@mail.gmail.com> <1266443748.2096.23.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <3d375d731002171421s580abefay301eee26f3945fb@mail.gmail.com> On Wed, Feb 17, 2010 at 15:55, Fabrice Silva wrote: > Le mercredi 17 f?vrier 2010 ? 15:43 -0600, Robert Kern a ?crit : >> On Wed, Feb 17, 2010 at 15:29, Fabrice Silva wrote: >> > I previously coded a fortran function that needs a variable number of >> > scalar arguments. This number is not known at compile time, but at call >> > time. So I used to pass them within a vector, passing also the length of >> > this vector >> > >> > ? ? ? ? ? ? ?subroutine systeme(inc,t,nm,Dinc,sn) >> > ? ? ? ?C >> > ? ? ? ?C ? ? ?evaluate the derivative of vector x at time t >> > ? ? ? ?C ? ? ?with complex modes (sn). Used for the calculation >> > ? ? ? ?C ? ? ?of auto-oscillations in resonator-valve coupled system. >> > ? ? ? ?C >> > ? ? ? ? ? ? ?integer nm,np,ny,ind >> > ? ? ? ? ? ? ?double precision inc(1:2*nm+2), Dinc(1:2*nm+2) >> > ? ? ? ? ? ? ?complex*16 sn(1:nm) >> > >> > ? ? ? ?Cf2py double precision, intent(in) :: t >> > ? ? ? ?Cf2py integer, intent(in), optional :: nm >> > ? ? ? ?Cf2py double precision, intent(in), dimension(2*nm+2) :: inc >> > ? ? ? ?Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc >> > ? ? ? 
?Cf2py complex, intent(in), dimension(nm) :: sn >> > >> > >> > I do now want to pass, not nm float values, but nm arrays of variables >> > lengths. I expect to pass the following objects : >> > - nm: number of arrays >> > - L : a 1d-array (dimension nm) containing the lengths of each array >> > - np: the sum of lengths >> > - X : a 1d-array (dimension np) containing the concatenated arrays. >> >> Yeah, that's pretty much what you would have to do. > > What about the next step: a variable number of arguments that are > 2d-arrays with different shapes ? - nm: number of arrays - ncols : a 1d-array (dimension nm) containing the number of columns in each array - nrows : a 1d-array (dimension nm) containing the number of rows in each array - np: the sum of array sizes [(ncols * nrows).sum() in numpy terms] - X : a 1d-array (dimension np) containing the concatenated arrays. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From silva at lma.cnrs-mrs.fr Wed Feb 17 17:32:56 2010 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 17 Feb 2010 23:32:56 +0100 Subject: [Numpy-discussion] f2py: variable number of arguments of variable lengths In-Reply-To: <3d375d731002171421s580abefay301eee26f3945fb@mail.gmail.com> References: <1266442161.2096.20.camel@Portable-s2m.cnrs-mrs.fr> <3d375d731002171343y5376f6edka30a1e5a7efb7203@mail.gmail.com> <1266443748.2096.23.camel@Portable-s2m.cnrs-mrs.fr> <3d375d731002171421s580abefay301eee26f3945fb@mail.gmail.com> Message-ID: <1266445976.2096.25.camel@Portable-s2m.cnrs-mrs.fr> Le mercredi 17 f?vrier 2010 ? 16:21 -0600, Robert Kern a ?crit : > > What about the next step: a variable number of arguments that are > > 2d-arrays with different shapes ? 
> > - nm: number of arrays > - ncols : a 1d-array (dimension nm) containing the number of columns > in each array > - nrows : a 1d-array (dimension nm) containing the number of rows in each array > - np: the sum of array sizes [(ncols * nrows).sum() in numpy terms] > - X : a 1d-array (dimension np) containing the concatenated arrays. > I guess I will need to be careful when building the arrays from X. Thanks! -- Fabrice Silva LMA UPR CNRS 7051 From touisteur at gmail.com Wed Feb 17 18:24:46 2010 From: touisteur at gmail.com (Touisteur EmporteUneVache) Date: Thu, 18 Feb 2010 00:24:46 +0100 Subject: [Numpy-discussion] Unable to install numpy-1.3.0 on WinXP (without Administrative rights) Message-ID: <29ec52431002171524u31fcf6e6xd2131f13facc46e3@mail.gmail.com> Hi, I'm trying to install numpy on a WinXP system, on which I have no administrative rights. Installation of Python-2.6 went OK, but the windows installer that I downloaded on sourceforge for numpy (numpy-1.3.0-win32 -superpack-python2.6.exe) gives me an error pop-up window "Executing numpy installer failed". And when I ask the installer for the details, here's the displayed log : Output folder: C:\Windows\Temp Install dir for actual installers is C:\DOCUME~1\user1\LOCALS~1\Temp "Target CPU handles SSE2" "Target CPU handles SSE3" "native install (arch value: native)" "Install SSE 3" Extract: *numpy*-1.3.0-sse3.exe... 100% Execute: "C:\Windows\Temp\*numpy*-1.3.0-sse3.exe" Completed And of course then, typing "import numpy" in a python shell will just give "ImportError: No module named numpy". Seems I'm not the only one that encountered the problem ( see http://old.nabble.com/Failed-installation-on-Windows-XP-td26316987.html#a26316987). I'm wondering if this is known and/or fixed issue (my search on the mailing-list and tickets archives has been fruitless, but I might have not looked at the right places), and if so, what can I do to solve it ? Cheers, T. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From sierra_mtnview at sbcglobal.net Wed Feb 17 22:13:13 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Wed, 17 Feb 2010 19:13:13 -0800 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> Message-ID: <4B7CB049.4050000@sbcglobal.net> I don't think I'm on the current version. Does it make sense to move ahead? Is there a way to suppress the messages? On 2/16/2010 9:25 PM, josef.pktd at gmail.com wrote: > On Wed, Feb 17, 2010 at 12:10 AM, Wayne Watson > wrote: > >> Hi, I'm working on a 1800+ line program that uses tkinter. Here are the >> messages I started getting recently. (I finally figured out how to copy >> them.). The program goes merrily on its way despite them. 
>> >> >> s\sentuser>sentuser_20080716NoiseStudy7.py >> C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: >> DeprecationWarning: Num >> pyTest will be removed in the next release; please update your code to >> use nose >> or unittest >> test = NumpyTest().test >> C:\Python25\lib\site-packages\scipy\special\__init__.py:23: >> DeprecationWarning: >> NumpyTest will be removed in the next release; please update your code >> to use no >> se or unittest >> test = NumpyTest().test >> C:\Python25\lib\site-packages\scipy\linalg\__init__.py:32: >> DeprecationWarning: N >> umpyTest will be removed in the next release; please update your code to >> use nos >> e or unittest >> test = NumpyTest().test >> C:\Python25\lib\site-packages\scipy\optimize\__init__.py:19: >> DeprecationWarning: >> NumpyTest will be removed in the next release; please update your code >> to use n >> ose or unittest >> test = NumpyTest().test >> C:\Python25\lib\site-packages\scipy\stats\__init__.py:15: >> DeprecationWarning: Nu >> mpyTest will be removed in the next release; please update your code to >> use nose >> or unittest >> test = NumpyTest().test >> Traceback (most recent call last): >> File >> "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ >> Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1993, in >> Process() >> File >> "C:\Users\Wayne\Sandia_Meteors\Sentinel_Development\Development_Sentuser+ >> Utilities\sentuser\sentuser_20080716NoiseStudy7.py", line 1990, in Process >> root.mainloop() >> File "C:\Python25\lib\lib-tk\Tkinter.py", line 1023, in mainloop >> > DeprecationWarnings mean some functionality in numpy (or scipy) > has changed and the old way of doing things will be removed and be > invalid in the next version. > > During deprecation the old code still works, but before you upgrade > you might want to check whether and how much you use these functions > and switch to the new behavior.
> > In the case of numpy.test, it means that if you have tests written > that use the numpy testing module, then you need to switch them to the > new nose based numpy.testing. And you need to install nose for running > numpy.test() > > >>> - What is win32-p3 updates ? >>> > Josef > > > >> On 2/16/2010 4:32 PM, David Cournapeau wrote: >> >>> Hi Wayne, >>> >>> Wayne Watson wrote: >>> >>> >>>> I normally use IDLE on Win, but recently needed to go to command prompt >>>> to see all error messages. When I did, I was greeted by a host of >>>> deprecation and Numpy messages before things got running. The program >>>> otherwise functioned OK, after I found the problem I was after. Are >>>> these messages a warning to get to the next update of numpy? >>>> >>>> I would guess that updating to a higher update does not mean I need to >>>> remove the old one, correct? >>>> >>>> In general for libraries like numpy or scipy, I use win32 updates, but I >>>> see win32-p3 updates too on download pages. Since I may be distributing >>>> this program to p3 machines, will I need to provide the win32-p3 updates >>>> to those users? >>>> >>>> >>> I am not familiar with IDLE, so I don't really understand your problem: >>> - what triggers numpy warnings ? You talked about a program, but >>> without knowing which program, we can't really help you. >>> - What warnings do you get ? >>> - What is win32-p3 updates ? >>> >>> cheers, >>> >>> David >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >>> >> -- >> "There is nothing so annoying as to have two people >> talking when you're busy interrupting." 
-- Mark Twain >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- "There is nothing so annoying as to have two people talking when you're busy interrupting." -- Mark Twain From cournape at gmail.com Wed Feb 17 22:30:13 2010 From: cournape at gmail.com (David Cournapeau) Date: Thu, 18 Feb 2010 12:30:13 +0900 Subject: [Numpy-discussion] Unable to install numpy-1.3.0 on WinXP (without Administrative rights) In-Reply-To: <29ec52431002171524u31fcf6e6xd2131f13facc46e3@mail.gmail.com> References: <29ec52431002171524u31fcf6e6xd2131f13facc46e3@mail.gmail.com> Message-ID: <5b8d13221002171930x74fc904u9aa0b90a35812ac2@mail.gmail.com> On Thu, Feb 18, 2010 at 8:24 AM, Touisteur EmporteUneVache wrote: > Hi, > > I'm trying to install numpy on a WinXP system, on which I have no > administrative rights. I think it is not possible to install NumPy for python 2.6 if you don't have admin privileges. I believe the root of the problem is the lack of the right C runtime, and there is no easy way to install it without admin privileges, and I have no idea how to fix this. The problem is specific to python 2.6 (more exactly because it was built with visual studio 2008), so using python 2.5 or 2.4 should not cause any issue if that's an option for you.
The other solution is to ask your administrator to install the redistributable runtime from VS 2008, cheers, David From sierra_mtnview at sbcglobal.net Wed Feb 17 22:30:54 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Wed, 17 Feb 2010 19:30:54 -0800 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> Message-ID: <4B7CB46E.7010209@sbcglobal.net> On 2/16/2010 10:01 PM, Scott Sinclair wrote: >> On 17 February 2010 07:25, wrote: >> On Wed, Feb 17, 2010 at 12:10 AM, Wayne Watson >> wrote: >> >>> Hi, I'm working on a 1800+ line program that uses tkinter. Here are the >>> messages I started getting recently. (I finally figured out how to copy >>> them.). The program goes merrily on its way despite them. >>> >>> >>> s\sentuser>sentuser_20080716NoiseStudy7.py >>> C:\Python25\lib\site-packages\scipy\misc\__init__.py:25: >>> DeprecationWarning: Num >>> pyTest will be removed in the next release; please update your code to >>> use nose >>> or unittest >>> test = NumpyTest().test >>> >> DeprecationWarnings mean some some functionality in numpy (or scipy) >> has changed and the old way of doing things will be removed and be >> invalid in the next version. >> >> During depreciation the old code still works, but before you upgrade >> you might want to check whether and how much you use these functions >> and switch to the new behavior. >> >> In the case of numpy.test, it means that if you have tests written >> that use the numpy testing module, then you need to switch them to the >> new nose based numpy.testing. 
And you need to install nose for running >> numpy.test() >> > Wayne - The DeprecationWarnings are being raised by SciPy, not by your > code. You probably don't have a recent version of SciPy installed. The > most recent release of SciPy is 0.7.1 and works with NumPy 1.3.0. I > don't think you will see the warnings if you upgrade SciPy and NumPy > on your system. > > Check your NumPy and SciPy versions at a python prompt as follows: > > >>>> import numpy as np >>>> print np.__version__ >>>> import scipy as sp >>>> print sp.__version__ >>>> > You will need to completely remove the old versions if you choose to > upgrade. You should be able to do this from "Add/Remove Programs". > > > You just answered the question I posted to josef moments ago. Interestingly, I'm on win7's Add/Remove numpy. No scipy. I just checked the version via import and it's 0.6.0. These are some of my imports. Note "from scipy" below.

from Tkinter import *
from numpy import *
import Image
import ImageChops
import ImageTk
import time
import binascii
import tkMessageBox
import tkSimpleDialog
from pylab import plot, xlabel, ylabel, title, show, xticks, bar
from scipy import stats as stats # scoreatpercentile <------------
from matplotlib.pyplot import figure, show
#from matplotlib.lines import Line2D
#from matplotlib.patches import Patch, Rectangle
#from matplotlib.text import Text
from matplotlib.image import AxesImage

-- "There is nothing so annoying as to have two people talking when you're busy interrupting."
-- Mark Twain From cgohlke at uci.edu Wed Feb 17 23:04:16 2010 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 17 Feb 2010 20:04:16 -0800 Subject: [Numpy-discussion] Unable to install numpy-1.3.0 on WinXP (without Administrative rights) In-Reply-To: <5b8d13221002171930x74fc904u9aa0b90a35812ac2@mail.gmail.com> References: <29ec52431002171524u31fcf6e6xd2131f13facc46e3@mail.gmail.com> <5b8d13221002171930x74fc904u9aa0b90a35812ac2@mail.gmail.com> Message-ID: <4B7CBC40.90100@uci.edu> On 2/17/2010 7:30 PM, David Cournapeau wrote: > On Thu, Feb 18, 2010 at 8:24 AM, Touisteur EmporteUneVache > wrote: >> Hi, >> >> I'm trying to install numpy on a WinXP system, on which I have no >> administrative rights. > > I think it is not possible to install NumPy for python 2.6 if you > don't have admin priviledges. I believe the root of the problem is the > lack of a right C runtime, and there is no easy way to install it > without admin priviledges, and I have no idea how to fix this. The > problem is specific to python 2.6 (more exactly because it was built > with visual studio 2008), so using python 2.5 or 2.4 should not cause > any issue if that's an option for you. > > The other solution is to ask your administrator to install the > redistributable runtime from VS 2008, > If everything else fails you can try to install numpy manually: the file numpy-1.3.0-sse3.exe, which is created in the %TEMP% directory during the numpy-1.3.0-win32-superpack-python2.6.exe installation, is an executable ZIP file and can be opened with any decent archive program, e.g. WinRAR. From numpy-1.3.0-sse3.exe copy PLATLIB\numpy\* to C:\Python26\Lib\site-packages\numpy\ and SCRIPTS\* to C:\Python26\Scripts\. Unlike many other packages, Numpy does not need to have the Microsoft Visual C++ 2008 redistributable package installed to work.
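The copy step described above can also be scripted, since the inner installer is an ordinary ZIP archive that Python's zipfile module can read. A minimal sketch — the archive built here is a synthetic stand-in for numpy-1.3.0-sse3.exe, and the two destination directories stand in for the Python installation's site-packages and Scripts directories:

```python
import os
import zipfile

# Synthetic stand-in for numpy-1.3.0-sse3.exe (a self-extracting ZIP);
# with the real file you would open it directly with ZipFile the same way.
with zipfile.ZipFile("fake-sse3.exe", "w") as z:
    z.writestr("PLATLIB/numpy/__init__.py", "# numpy package\n")
    z.writestr("SCRIPTS/f2py.py", "# f2py entry script\n")

targets = {
    "PLATLIB/": "site-packages",  # stands in for C:\Python26\Lib\site-packages
    "SCRIPTS/": "Scripts",        # stands in for C:\Python26\Scripts
}

with zipfile.ZipFile("fake-sse3.exe") as z:
    for name in z.namelist():
        for prefix, dest in targets.items():
            if name.startswith(prefix) and not name.endswith("/"):
                out = os.path.join(dest, *name[len(prefix):].split("/"))
                if not os.path.isdir(os.path.dirname(out)):
                    os.makedirs(os.path.dirname(out))
                with open(out, "wb") as f:
                    f.write(z.read(name))

print(os.path.exists(os.path.join("site-packages", "numpy", "__init__.py")))
```

zipfile locates the archive's central directory at the end of the file, which is why a self-extracting .exe with a ZIP payload can usually be opened as-is.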
Christoph From touisteur at gmail.com Wed Feb 17 23:33:42 2010 From: touisteur at gmail.com (Touisteur EmporteUneVache) Date: Thu, 18 Feb 2010 15:33:42 +1100 Subject: [Numpy-discussion] Unable to install numpy-1.3.0 on WinXP (without Administrative rights) In-Reply-To: <4B7CBC40.90100@uci.edu> References: <29ec52431002171524u31fcf6e6xd2131f13facc46e3@mail.gmail.com> <5b8d13221002171930x74fc904u9aa0b90a35812ac2@mail.gmail.com> <4B7CBC40.90100@uci.edu> Message-ID: On 2/18/2010 at 15:04, Christoph Gohlke wrote: > On 2/17/2010 7:30 PM, David Cournapeau wrote: >> On Thu, Feb 18, 2010 at 8:24 AM, Touisteur EmporteUneVache >> wrote: >>> Hi, >>> >>> I'm trying to install numpy on a WinXP system, on which I have no >>> administrative rights. >> >> I think it is not possible to install NumPy for python 2.6 if you >> don't have admin priviledges. I believe the root of the problem is >> the >> lack of a right C runtime, and there is no easy way to install it >> without admin priviledges, and I have no idea how to fix this. The >> problem is specific to python 2.6 (more exactly because it was built >> with visual studio 2008), so using python 2.5 or 2.4 should not cause >> any issue if that's an option for you. >> >> The other solution is to ask your administrator to install the >> redistributable runtime from VS 2008, >> > > > If everything else fails you can try to install numpy manually: the > file > numpy-1.3.0-sse3.exe, which is created in the %TEMP% directory during > the numpy-1.3.0-win32-superpack-python2.6.exe installation, is a > executable ZIP file and can be opened with any decent archive program, > e.g. WinRAR. From numpy-1.3.0-sse3.exe copy PLATLIB\numpy\* to > C:\Python26\sitepackages\numpy\ and SCRIPTS\* to C:\Python26\Scripts\. > Unlike many other packages, Numpy does not need to have the Microsoft > Visual C++ 2008 redistributable package installed to work. 
> > To avoid the problem, numpy-1.3.0-sse3.exe could probably be linked > statically to MSVCRT9 like the bdist_wininst installers created by > Python distutils. > > Christoph Hi David and Christoph, Thank you both very much for your suggestions. I went for Christoph's workaround (manual installation) and it seems to work like a charm. Thanks again. Cheers! From scott.sinclair.za at gmail.com Thu Feb 18 01:00:40 2010 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 18 Feb 2010 08:00:40 +0200 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <4B7CB46E.7010209@sbcglobal.net> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> <4B7CB46E.7010209@sbcglobal.net> Message-ID: <6a17e9ee1002172200h675ea571xdbe997d7e0408ea5@mail.gmail.com> >On 18 February 2010 05:30, Wayne Watson wrote: > >> On 2/16/2010 10:01 PM, Scott Sinclair wrote: >> >> Wayne - The DeprecationWarnings are being raised by SciPy, not by your >> code. You probably don't have a recent version of SciPy installed. The >> most recent release of SciPy is 0.7.1 and works with NumPy 1.3.0. I >> don't think you will see the warnings if you upgrade SciPy and NumPy >> on your system. >> >> Check your NumPy and SciPy versions at a python prompt as follows: >> >> >>>>> >>>>> import numpy as np >>>>> print np.__version__ >>>>> import scipy as sp >>>>> print sp.__version__ >>>>> >> >> You will need to completely remove the old versions if you choose to >> upgrade. You should be able to do this from "Add/Remove Programs". > > I'm on win7's Add/Remove numpy. No scipy. I just checked the version via > import and it's 0.6.0. 
You can download the latest NumPy and SciPy installers from: http://sourceforge.net/projects/numpy/files/ and http://sourceforge.net/projects/scipy/files/ You want the win32-superpack for your Python version. Use "Add/Remove" to remove your current NumPy install (if your version is not already 1.3.0). I'm not sure how SciPy was installed and why it doesn't appear in "Add/Remove". You should look in C:\Python25\Lib\site-packages for directories named numpy or scipy (numpy should have been removed already). It is safe to delete C:\Python25\Lib\site-packages\scipy. Then run the superpack installers and you should be good to go. Good luck. Cheers, Scott From ranavishal at gmail.com Thu Feb 18 01:12:00 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Wed, 17 Feb 2010 22:12:00 -0800 Subject: [Numpy-discussion] Accessing fields of the object stored in numpy array Message-ID: Hi, I have numpy arrays with datetime objects as:

a = np.array([dt.datetime(2010, 2, 17), dt.datetime(2010, 2, 16), dt.datetime(2010, 2, 15)])
b = np.array([dt.datetime(2010, 2, 14), dt.datetime(2010, 2, 13), dt.datetime(2010, 2, 12)])

Doing a-b should give me the days difference as a numpy array, but instead I get:

array([3 days, 0:00:00, 3 days, 0:00:00, 3 days, 0:00:00], dtype=object)

which is a numpy array of timedelta! I can extract the days using the days property of each element, but how do I do it in a numpy way, like:

c = a-b
c.days (a numpy array of days difference) like: array([3, 3, 3])

Any pointers? Thanks Vishal Rana Joan Crawford - "I, Joan Crawford, I believe in the dollar. Everything I earn, I spend." -------------- next part -------------- An HTML attachment was scrubbed...
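A runnable sketch of the subtraction Vishal describes, together with one way — an illustration, not the only idiom — to pull integer day counts out of the resulting object array:

```python
import datetime as dt
import numpy as np

a = np.array([dt.datetime(2010, 2, 17), dt.datetime(2010, 2, 16),
              dt.datetime(2010, 2, 15)])
b = np.array([dt.datetime(2010, 2, 14), dt.datetime(2010, 2, 13),
              dt.datetime(2010, 2, 12)])

c = a - b  # object array of datetime.timedelta instances

# Each element exposes the .days attribute; vectorize the lookup:
get_days = np.frompyfunc(lambda td: td.days, 1, 1)
days = get_days(c).astype(int)
print(days)  # [3 3 3]
```

np.vectorize would work the same way here; frompyfunc just skips the output-type machinery and always returns an object array, hence the final astype(int).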
URL: From nwagner at iam.uni-stuttgart.de Thu Feb 18 04:18:58 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 10:18:58 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python Message-ID: Hi all, I have a static library (*.a) compiled by gfortran but no source files. How can I call routines from that library using python ? Any pointer would be appreciated. Thanks in advance. Nils From david at silveregg.co.jp Thu Feb 18 04:32:18 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Thu, 18 Feb 2010 18:32:18 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: Message-ID: <4B7D0922.1010000@silveregg.co.jp> Nils Wagner wrote: > Hi all, > > I have a static library (*.a) compiled by gfortran but no > source files. > How can I call routines from that library using python ? Is there any kind of interface (.h, etc...) ? If this is a proprietary library, there has to be something so that it can be called from C, and anything that can be called from C can be called from python. If you don't know at least the functions signatures, it will be very difficult (you would have to disassemble the code to find how the functions are called, etc...). cheers, David From nwagner at iam.uni-stuttgart.de Thu Feb 18 05:07:23 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 11:07:23 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D0922.1010000@silveregg.co.jp> References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: On Thu, 18 Feb 2010 18:32:18 +0900 David Cournapeau wrote: > Nils Wagner wrote: >> Hi all, >> >> I have a static library (*.a) compiled by gfortran but >>no >> source files. >> How can I call routines from that library using python ? > > Is there any kind of interface (.h, etc...) ? 
If this is >a proprietary > library, there has to be something so that it can be >called from C, and > anything that can be called from C can be called from >python. If you > don't know at least the functions signatures, it will be >very difficult > (you would have to disassemble the code to find how the >functions are > called, etc...). > > cheers, > > David Hi David, you are right. It's a proprietary library. I found a header file (*.h) including prototype declarations of externally callable procedures. How can I proceed ? Thank you again. Cheers, Nils From neilcrighton at gmail.com Thu Feb 18 05:15:51 2010 From: neilcrighton at gmail.com (Neil Crighton) Date: Thu, 18 Feb 2010 10:15:51 +0000 (UTC) Subject: [Numpy-discussion] =?utf-8?q?Calling_routines_from_a_Fortran=09li?= =?utf-8?q?brary=09using_python?= References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: Nils Wagner iam.uni-stuttgart.de> writes: > Hi David, > > you are right. It's a proprietary library. > I found a header file (*.h) including prototype > declarations of externally callable procedures. > > How can I proceed ? Apparently you can use ctypes to access fortran libraries. See the first paragraph of: http://www.sagemath.org/doc/numerical_sage/ctypes.html You may have to convert the .a library to a .so library. Neil From david at silveregg.co.jp Thu Feb 18 05:21:03 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Thu, 18 Feb 2010 19:21:03 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: <4B7D148F.3090407@silveregg.co.jp> Nils Wagner wrote: > On Thu, 18 Feb 2010 18:32:18 +0900 > David Cournapeau wrote: >> Nils Wagner wrote: >>> Hi all, >>> >>> I have a static library (*.a) compiled by gfortran but >>> no >>> source files. >>> How can I call routines from that library using python ? >> Is there any kind of interface (.h, etc...) ? 
If this is >> a proprietary >> library, there has to be something so that it can be >> called from C, and >> anything that can be called from C can be called from >> python. If you >> don't know at least the functions signatures, it will be >> very difficult >> (you would have to disassemble the code to find how the >> functions are >> called, etc...). >> >> cheers, >> >> David > > Hi David, > > you are right. It's a proprietary library. > I found a header file (*.h) including prototype > declarations of externally callable procedures. > > How can I proceed ? Exactly as you would do for a C library (ctypes, cython, by hand, swig, etc...). Once you have the header (plus the C->Fortran ABI convention, which depend on your compilers and platforms), it is exactly as calling a C function in a C library, cheers, David From matthieu.brucher at gmail.com Thu Feb 18 05:24:47 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 18 Feb 2010 11:24:47 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: > You may have to convert the .a library to a .so library. And this is where I hope that the library is compiled with fPIC (which is generally not the case for static libraries). If it is not the case, you will not be able to compile it as a shared library and thus not be able to use it from Python :| Matthieu -- Information System Engineer, Ph.D. 
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Thu Feb 18 05:25:25 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 11:25:25 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: On Thu, 18 Feb 2010 10:15:51 +0000 (UTC) Neil Crighton wrote: > Nils Wagner iam.uni-stuttgart.de> writes: > >> Hi David, >> >> you are right. It's a proprietary library. >> I found a header file (*.h) including prototype >> declarations of externally callable procedures. >> >> How can I proceed ? > > Apparently you can use ctypes to access fortran >libraries. See the first > paragraph of: > > http://www.sagemath.org/doc/numerical_sage/ctypes.html > > You may have to convert the .a library to a .so library. > > > Neil How do I convert the .a library to a .so library ? Nils From david at silveregg.co.jp Thu Feb 18 05:30:10 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Thu, 18 Feb 2010 19:30:10 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: <4B7D16B2.1000009@silveregg.co.jp> Nils Wagner wrote: > How do I convert the .a library to a .so library ? You first "uncompress" the .a into a temporary directory, with ar x on Linux. Then, you group the .o together with gfortran -shared $LIST_OF_OBJECT + a few options. You can also look at how Atlas does it in its makefile. As Matthieu mentioned, if the .o are not compiled with -fPIC, you are screwed on 64-bit architectures (unless you statically link numpy in your python interpreter, but I doubt you want to go that road). It would be somewhat surprising if your vendor did not make shared libraries available, though.
cheers, David From nwagner at iam.uni-stuttgart.de Thu Feb 18 05:33:02 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 11:33:02 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D148F.3090407@silveregg.co.jp> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D148F.3090407@silveregg.co.jp> Message-ID: On Thu, 18 Feb 2010 19:21:03 +0900 David Cournapeau wrote: > Nils Wagner wrote: >> On Thu, 18 Feb 2010 18:32:18 +0900 >> David Cournapeau wrote: >>> Nils Wagner wrote: >>>> Hi all, >>>> >>>> I have a static library (*.a) compiled by gfortran but >>>> no >>>> source files. >>>> How can I call routines from that library using python ? >>> Is there any kind of interface (.h, etc...) ? If this is >>> a proprietary >>> library, there has to be something so that it can be >>> called from C, and >>> anything that can be called from C can be called from >>> python. If you >>> don't know at least the functions signatures, it will be >>> very difficult >>> (you would have to disassemble the code to find how the >>> functions are >>> called, etc...). >>> >>> cheers, >>> >>> David >> >> Hi David, >> >> you are right. It's a proprietary library. >> I found a header file (*.h) including prototype >> declarations of externally callable procedures. >> >> How can I proceed ? > > Exactly as you would do for a C library (ctypes, cython, >by hand, swig, > etc...). Once you have the header (plus the C->Fortran >ABI convention, > which depend on your compilers and platforms), it is >exactly as calling > a C function in a C library, > > cheers, > > David To be honest that's over my head. I mean I have never used C before. Where can I find a step-by-step example for my task ? 
Nils From friedrichromstedt at gmail.com Thu Feb 18 05:45:10 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 18 Feb 2010 11:45:10 +0100 Subject: [Numpy-discussion] Accessing fields of the object stored in numpy array In-Reply-To: References: Message-ID: Hello Vishal, 2010/2/18 Vishal Rana : > a = np.array([dt.datetime(2010, 2, 17), dt.datetime(2010, 2, 16), > dt.datetime(2010, 2, 15)]) > b = np.array([dt.datetime(2010, 2, 14), dt.datetime(2010, 2, 13), > dt.datetime(2010, 2, 12)]) > c=a-b > c.days (a numpy array of days difference) like: > array([3, 3, 3]) I think a (rather slow) solution would be to use: def days(timedelta): return timedelta.days udays = numpy.vectorize(days) and applying the ufunc udays() on your dtype = numpy.object array like: c_days = udays(c) numpy.vectorize() turns an ordinary function into an ufunc. This means, that the ufunc created can take ndarrays and the days() function will be applied to all elements. hth, Friedrich From markus.proeller at ifm.com Thu Feb 18 05:44:51 2010 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Thu, 18 Feb 2010 11:44:51 +0100 Subject: [Numpy-discussion] create dll from numpy code In-Reply-To: <4B7B3B97.6040402@silveregg.co.jp> Message-ID: numpy-discussion-bounces at scipy.org schrieb am 17.02.2010 01:43:03: > markus.proeller at ifm.com wrote: > > > > Hello, > > > > is there a possibility to create a dll from a numpy code? > > What do you want to create a dll for ? For distribution purpose, to hide > your code, etc... ? To replace an exisiting Matlab dll and not have to write it in pure c/c++. Markus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Thu Feb 18 05:51:57 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 11:51:57 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D16B2.1000009@silveregg.co.jp> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> Message-ID: On Thu, 18 Feb 2010 19:30:10 +0900 David Cournapeau wrote: > Nils Wagner wrote: > >> How do I convert the .a library to a .so library ? > > You first "uncompress" the .a into a temporary >directory, with ar x on > Linux. Then, you group the .o together with gfortran >-shared > $LIST_OF_OBJECT + a few options. You can also look at >how Atlas does it > in its makefile. > > As Matthieu mentioned, if the .o are not compiled with >-fPIC, you are > screwed on 64 bits architectures (unless you statically >link numpy in > your python interpreter, but I doubt you want to go that >road). It would > be somewhat surprising if your vendor did not shared >libraries > available, though. > > cheers, > > David Ok I have extracted the *.o files from the static library. Applying the file command to the object files yields ELF 64-bit LSB relocatable, AMD x86-64, version 1 (SYSV), not stripped What's that supposed to mean ? Nils From gnurser at googlemail.com Thu Feb 18 05:44:34 2010 From: gnurser at googlemail.com (George Nurser) Date: Thu, 18 Feb 2010 10:44:34 +0000 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: Message-ID: <1d1e6ea71002180244y758e7915n1fe6cf9192681d58@mail.gmail.com> Hi Nils, I've not tried it, but you might be able to interface with f2py your own fortran subroutine that calls the library. Then issue the f2py command with extra arguments -l -L. See section 5 of http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py --George. 
On 18 February 2010 09:18, Nils Wagner wrote: > Hi all, > > I have a static ?library (*.a) compiled by gfortran but no > source files. > How can I call routines from that library using python ? > > Any pointer would be appreciated. > > Thanks in advance. > > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Nils > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From matthieu.brucher at gmail.com Thu Feb 18 05:55:07 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 18 Feb 2010 11:55:07 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> Message-ID: > Ok I have extracted the *.o files from the static library. > > Applying the file command to the object files yields > > ELF 64-bit LSB relocatable, AMD x86-64, version 1 (SYSV), > not stripped > > What's that supposed to mean ? It means that each object file is an object file compiled with -fPIC, so you just have to make a shared library (gfortran -shared *.o -o libmysharedlibrary.so) Then, you can try to open the library with ctypes. If something is lacking, you may have to add -lsome_library to the gfortran line. Matthieu -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From matthieu.brucher at gmail.com Thu Feb 18 05:56:25 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 18 Feb 2010 11:56:25 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <1d1e6ea71002180244y758e7915n1fe6cf9192681d58@mail.gmail.com> References: <1d1e6ea71002180244y758e7915n1fe6cf9192681d58@mail.gmail.com> Message-ID: If header files are provided, the work done by f2py is almost done. 
But you don't know the real Fortran interface, so you still have to use ctypes over f2py. Matthieu 2010/2/18 George Nurser : > Hi Nils, > I've not tried it, but you might be able to interface with f2py your > own fortran subroutine that calls the library. > Then issue the f2py command with extra arguments -l > -L. > > See section 5 of > http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py > > --George. > > > On 18 February 2010 09:18, Nils Wagner wrote: >> Hi all, >> >> I have a static ?library (*.a) compiled by gfortran but no >> source files. >> How can I call routines from that library using python ? >> >> Any pointer would be appreciated. >> >> Thanks in advance. >> >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Nils >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From gnurser at googlemail.com Thu Feb 18 06:19:38 2010 From: gnurser at googlemail.com (George Nurser) Date: Thu, 18 Feb 2010 11:19:38 +0000 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <1d1e6ea71002180244y758e7915n1fe6cf9192681d58@mail.gmail.com> Message-ID: <1d1e6ea71002180319x7c4028fdrd4439345a38ff89e@mail.gmail.com> I'm suggesting writing a *new* Fortran interface, coupled with f2py. The original library just needs to be linked to the new .so generated by f2py. I am hoping (perhaps optimistically) that can be done in the Fortran compilation... --George. On 18 February 2010 10:56, Matthieu Brucher wrote: > If header files are provided, the work done by f2py is almost done. 
> But you don't know the real Fortran interface, so you still have to > use ctypes over f2py. > > Matthieu > > 2010/2/18 George Nurser : >> Hi Nils, >> I've not tried it, but you might be able to interface with f2py your >> own fortran subroutine that calls the library. >> Then issue the f2py command with extra arguments -l >> -L. >> >> See section 5 of >> http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py >> >> --George. >> >> >> On 18 February 2010 09:18, Nils Wagner wrote: >>> Hi all, >>> >>> I have a static ?library (*.a) compiled by gfortran but no >>> source files. >>> How can I call routines from that library using python ? >>> >>> Any pointer would be appreciated. >>> >>> Thanks in advance. >>> >>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Nils >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > > -- > Information System Engineer, Ph.D. 
> Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From matthieu.brucher at gmail.com Thu Feb 18 06:22:25 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 18 Feb 2010 12:22:25 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <1d1e6ea71002180319x7c4028fdrd4439345a38ff89e@mail.gmail.com> References: <1d1e6ea71002180244y758e7915n1fe6cf9192681d58@mail.gmail.com> <1d1e6ea71002180319x7c4028fdrd4439345a38ff89e@mail.gmail.com> Message-ID: If Nils has no access to the Fortran interface (and I don't think he has, unless there is some .mod file somewhere?), he shouldn't use f2py. Even if you know that the Fortran routine is named XXX, you don't know how the arguments must be given. Addressing the C interface directly is much safer. Matthieu 2010/2/18 George Nurser : > I'm suggesting writing a *new* Fortran interface, coupled with f2py. > The original library just needs to be linked to the new .so generated > by f2py. I am hoping (perhaps optimistically) that can be done in the > Fortran compilation... > > --George. > > On 18 February 2010 10:56, Matthieu Brucher wrote: >> If header files are provided, the work done by f2py is almost done. >> But you don't know the real Fortran interface, so you still have to >> use ctypes over f2py. >> >> Matthieu >> >> 2010/2/18 George Nurser : >>> Hi Nils, >>> I've not tried it, but you might be able to interface with f2py your >>> own fortran subroutine that calls the library. >>> Then issue the f2py command with extra arguments -l >>> -L. >>> >>> See section 5 of >>> http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py >>> >>> --George. 
>>> >>> >>> On 18 February 2010 09:18, Nils Wagner wrote: >>>> Hi all, >>>> >>>> I have a static ?library (*.a) compiled by gfortran but no >>>> source files. >>>> How can I call routines from that library using python ? >>>> >>>> Any pointer would be appreciated. >>>> >>>> Thanks in advance. >>>> >>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Nils >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> >> >> -- >> Information System Engineer, Ph.D. >> Blog: http://matt.eifelle.com >> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From aisaac at american.edu Thu Feb 18 07:31:27 2010 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 18 Feb 2010 07:31:27 -0500 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <4B7CB46E.7010209@sbcglobal.net> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> <4B7CB46E.7010209@sbcglobal.net> Message-ID: <4B7D331F.6080107@american.edu> Wayne wrote: > I just checked the version via import and it's 0.6.0. 
Try updating. Also, the SciPy Reference Guide explains how to turn off deprecation warnings. http://docs.scipy.org/doc/scipy/scipy-ref.pdf Alan Isaac From nwagner at iam.uni-stuttgart.de Thu Feb 18 08:22:38 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 14:22:38 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> Message-ID: On Thu, 18 Feb 2010 11:55:07 +0100 Matthieu Brucher wrote: >> Ok I have extracted the *.o files from the static >>library. >> >> Applying the file command to the object files yields >> >> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>(SYSV), >> not stripped >> >> What's that supposed to mean ? > > It means that each object file is an object file >compiled with -fPIC, > so you just have to make a shared library (gfortran >-shared *.o -o > libmysharedlibrary.so) > > Then, you can try to open the library with ctypes. If >something is > lacking, you may have to add -lsome_library to the >gfortran line. > > Matthieu > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher O.k. I tried gfortran -shared *.o -o libmysharedlibrary.so /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC dscpde.o: could not read symbols: Bad value Any idea ? 
Nils From cournape at gmail.com Thu Feb 18 08:29:39 2010 From: cournape at gmail.com (David Cournapeau) Date: Thu, 18 Feb 2010 22:29:39 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> Message-ID: <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> On Thu, Feb 18, 2010 at 10:22 PM, Nils Wagner wrote: > On Thu, 18 Feb 2010 11:55:07 +0100 > ?Matthieu Brucher wrote: >>> Ok I have extracted the *.o files from the static >>>library. >>> >>> Applying the file command to the object files yields >>> >>> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>>(SYSV), >>> not stripped >>> >>> What's that supposed to mean ? >> >> It means that each object file is an object file >>compiled with -fPIC, >> so you just have to make a shared library (gfortran >>-shared *.o -o >> libmysharedlibrary.so) >> >> Then, you can try to open the library with ctypes. If >>something is >> lacking, you may have to add -lsome_library to the >>gfortran line. >> >> Matthieu >> -- >> Information System Engineer, Ph.D. >> Blog: http://matt.eifelle.com >> LinkedIn: http://www.linkedin.com/in/matthieubrucher > > O.k. I tried > > gfortran -shared *.o -o libmysharedlibrary.so > > /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a > local symbol' can not be used when making a shared object; > recompile with -fPIC The message is pretty explicit: it is not compiled with -fPIC, there is nothing you can do, short of requesting a shared library from the software vendor. 
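For the case where the object files *are* PIC and the `gfortran -shared *.o -o libmysharedlibrary.so` step succeeds, the ctypes pattern Matthieu describes looks like the sketch below. Nils's Fortran library is not available here, so it loads the C math library (on POSIX systems) to show the same load/declare/call steps; `libmysharedlibrary.so` and any routine names in it are placeholders.

```python
import ctypes
import ctypes.util

# Real usage would be:  lib = ctypes.CDLL("./libmysharedlibrary.so")
# followed by declaring argtypes/restype on each routine you call.
# Demonstrate the identical pattern against libm instead:
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]  # declare the C signature
libm.cos.restype = ctypes.c_double     # default restype is c_int, so this matters
result = libm.cos(0.0)                 # -> 1.0
```

One gfortran-specific caveat: by default it lower-cases routine names and appends an underscore, and Fortran passes arguments by reference, so a `SUBROUTINE FOO(X)` would typically be reached as `lib.foo_(ctypes.byref(ctypes.c_double(1.0)))`.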
David From dagss at student.matnat.uio.no Thu Feb 18 09:32:12 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 18 Feb 2010 15:32:12 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> Message-ID: <4B7D4F6C.2020604@student.matnat.uio.no> David Cournapeau wrote: > On Thu, Feb 18, 2010 at 10:22 PM, Nils Wagner > wrote: > >> On Thu, 18 Feb 2010 11:55:07 +0100 >> Matthieu Brucher wrote: >> >>>> Ok I have extracted the *.o files from the static >>>> library. >>>> >>>> Applying the file command to the object files yields >>>> >>>> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>>> (SYSV), >>>> not stripped >>>> >>>> What's that supposed to mean ? >>>> >>> It means that each object file is an object file >>> compiled with -fPIC, >>> so you just have to make a shared library (gfortran >>> -shared *.o -o >>> libmysharedlibrary.so) >>> >>> Then, you can try to open the library with ctypes. If >>> something is >>> lacking, you may have to add -lsome_library to the >>> gfortran line. >>> >>> Matthieu >>> -- >>> Information System Engineer, Ph.D. >>> Blog: http://matt.eifelle.com >>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >>> >> O.k. I tried >> >> gfortran -shared *.o -o libmysharedlibrary.so >> >> /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a >> local symbol' can not be used when making a shared object; >> recompile with -fPIC >> > > The message is pretty explicit: it is not compiled with -fPIC, there > is nothing you can do, short of requesting a shared library from the > software vendor. > Well, I think one can make a static executable with C or Cython and embed the Python interpreter. But it is pretty complicated stuff, and requesting a shared library is vastly preferable. 
Dag Sverre From nwagner at iam.uni-stuttgart.de Thu Feb 18 09:47:01 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 18 Feb 2010 15:47:01 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D4F6C.2020604@student.matnat.uio.no> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> <4B7D4F6C.2020604@student.matnat.uio.no> Message-ID: On Thu, 18 Feb 2010 15:32:12 +0100 Dag Sverre Seljebotn wrote: > David Cournapeau wrote: >> On Thu, Feb 18, 2010 at 10:22 PM, Nils Wagner >> wrote: >> >>> On Thu, 18 Feb 2010 11:55:07 +0100 >>> Matthieu Brucher wrote: >>> >>>>> Ok I have extracted the *.o files from the static >>>>> library. >>>>> >>>>> Applying the file command to the object files yields >>>>> >>>>> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>>>> (SYSV), >>>>> not stripped >>>>> >>>>> What's that supposed to mean ? >>>>> >>>> It means that each object file is an object file >>>> compiled with -fPIC, >>>> so you just have to make a shared library (gfortran >>>> -shared *.o -o >>>> libmysharedlibrary.so) >>>> >>>> Then, you can try to open the library with ctypes. If >>>> something is >>>> lacking, you may have to add -lsome_library to the >>>> gfortran line. >>>> >>>> Matthieu >>>> -- >>>> Information System Engineer, Ph.D. >>>> Blog: http://matt.eifelle.com >>>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>> >>> O.k. I tried >>> >>> gfortran -shared *.o -o libmysharedlibrary.so >>> >>> /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a >>> local symbol' can not be used when making a shared >>>object; >>> recompile with -fPIC >>> >> >> The message is pretty explicit: it is not compiled with >>-fPIC, there >> is nothing you can do, short of requesting a shared >>library from the >> software vendor. 
>> > Well, I think one can make a static executable with C or >Cython and > embed the Python interpreter. But it is pretty >complicated stuff, and > requesting a shared library is vastly preferable. > > Dag Sverre > Can you shed light on your approach ? Nils From dagss at student.matnat.uio.no Thu Feb 18 09:56:37 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 18 Feb 2010 15:56:37 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> <4B7D4F6C.2020604@student.matnat.uio.no> Message-ID: <4B7D5525.8070902@student.matnat.uio.no> Nils Wagner wrote: > On Thu, 18 Feb 2010 15:32:12 +0100 > Dag Sverre Seljebotn > wrote: > >> David Cournapeau wrote: >> >>> On Thu, Feb 18, 2010 at 10:22 PM, Nils Wagner >>> wrote: >>> >>> >>>> On Thu, 18 Feb 2010 11:55:07 +0100 >>>> Matthieu Brucher wrote: >>>> >>>> >>>>>> Ok I have extracted the *.o files from the static >>>>>> library. >>>>>> >>>>>> Applying the file command to the object files yields >>>>>> >>>>>> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>>>>> (SYSV), >>>>>> not stripped >>>>>> >>>>>> What's that supposed to mean ? >>>>>> >>>>>> >>>>> It means that each object file is an object file >>>>> compiled with -fPIC, >>>>> so you just have to make a shared library (gfortran >>>>> -shared *.o -o >>>>> libmysharedlibrary.so) >>>>> >>>>> Then, you can try to open the library with ctypes. If >>>>> something is >>>>> lacking, you may have to add -lsome_library to the >>>>> gfortran line. >>>>> >>>>> Matthieu >>>>> -- >>>>> Information System Engineer, Ph.D. >>>>> Blog: http://matt.eifelle.com >>>>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>> >>>>> >>>> O.k. 
I tried >>>> >>>> gfortran -shared *.o -o libmysharedlibrary.so >>>> >>>> /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a >>>> local symbol' can not be used when making a shared >>>> object; >>>> recompile with -fPIC >>>> >>>> >>> The message is pretty explicit: it is not compiled with >>> -fPIC, there >>> is nothing you can do, short of requesting a shared >>> library from the >>> software vendor. >>> >>> >> Well, I think one can make a static executable with C or >> Cython and >> embed the Python interpreter. But it is pretty >> complicated stuff, and >> requesting a shared library is vastly preferable. >> >> Dag Sverre >> >> > > Can you shed light on your approach ? > If one searches the Cython lists (gmane.org) for "embedding python interpreter" it should give some hints as to how to compile a Cython .pyx module into an executable (so you get an executable which links in Python, and which has to be used instead of Python). There's even some flags in Cython to do this easily. Ask on the Cython list for more info, I don't know more myself. Then, one could link the static Fortran library into the resulting application statically, and use Cython to call the exported functions in the Fortran library. But, the result is a standalone application, one can't use it with the standard Python interpreter (although one can import in any .py files etc. as usual). Dag Sverre From ranavishal at gmail.com Thu Feb 18 12:04:52 2010 From: ranavishal at gmail.com (Vishal Rana) Date: Thu, 18 Feb 2010 09:04:52 -0800 Subject: [Numpy-discussion] Accessing fields of the object stored in numpy array In-Reply-To: References: Message-ID: Thanks Friedrich it helped. 
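Friedrich's `numpy.vectorize` suggestion, quoted below, can be sketched end-to-end like this (array contents taken from Vishal's original example):

```python
import datetime as dt
import numpy as np

a = np.array([dt.datetime(2010, 2, 17), dt.datetime(2010, 2, 16),
              dt.datetime(2010, 2, 15)])
b = np.array([dt.datetime(2010, 2, 14), dt.datetime(2010, 2, 13),
              dt.datetime(2010, 2, 12)])

def days(timedelta):
    return timedelta.days

udays = np.vectorize(days)  # ufunc-like wrapper, applied element-wise
c = udays(a - b)            # object-array subtraction yields timedeltas
                            # -> array([3, 3, 3])
```

As Friedrich notes, this is convenient but slow: `days()` is still called once per element in Python.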
On Thu, Feb 18, 2010 at 2:45 AM, Friedrich Romstedt < friedrichromstedt at gmail.com> wrote: > Hello Vishal, > > 2010/2/18 Vishal Rana : > > > a = np.array([dt.datetime(2010, 2, 17), dt.datetime(2010, 2, 16), > > dt.datetime(2010, 2, 15)]) > > b = np.array([dt.datetime(2010, 2, 14), dt.datetime(2010, 2, 13), > > dt.datetime(2010, 2, 12)]) > > > c=a-b > > c.days (a numpy array of days difference) like: > > array([3, 3, 3]) > > I think a (rather slow) solution would be to use: > > def days(timedelta): > return timedelta.days > > udays = numpy.vectorize(days) > > and applying the ufunc udays() on your dtype = numpy.object array like: > > c_days = udays(c) > > numpy.vectorize() turns an ordinary function into an ufunc. This > means, that the ufunc created can take ndarrays and the days() > function will be applied to all elements. > > hth, > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Feb 18 12:16:56 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 18 Feb 2010 09:16:56 -0800 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D0922.1010000@silveregg.co.jp> Message-ID: <4B7D7608.5010000@noaa.gov> Matthieu Brucher wrote: > If it is not the > case, you will not be able to compile it as a shared library and thus > not be able to use it from Python :| maybe not directly with ctypes, but you should be able to call it from Cython (or SWIG, or custom C code), and statically link it. What about f2py? -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From dagss at student.matnat.uio.no Thu Feb 18 12:13:41 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 18 Feb 2010 18:13:41 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D7608.5010000@noaa.gov> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D7608.5010000@noaa.gov> Message-ID: <4B7D7545.6040203@student.matnat.uio.no> Christopher Barker wrote: > Matthieu Brucher wrote: > >> If it is not the >> case, you will not be able to compile it as a shared library and thus >> not be able to use it from Python :| >> > > maybe not directly with ctypes, but you should be able to call it from > Cython (or SWIG, or custom C code), and statically link it. > If it is not compiled with -fPIC, you can't statically link it into any shared library, it has to be statically linked into the final executable (so the standard /usr/bin/python will never work). Dag Sverre From Chris.Barker at noaa.gov Thu Feb 18 12:22:00 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 18 Feb 2010 09:22:00 -0800 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D7545.6040203@student.matnat.uio.no> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D7608.5010000@noaa.gov> <4B7D7545.6040203@student.matnat.uio.no> Message-ID: <4B7D7738.5090705@noaa.gov> Dag Sverre Seljebotn wrote: > If it is not compiled with -fPIC, you can't statically link it into any > shared library, it has to be statically linked into the final executable > (so the standard /usr/bin/python will never work). Shows you what I (don't) know! The joys of closed-source software! On a similar topic -- is it possible to convert a *.so to a static lib? (on OS-X)? 
I did a bunch of googling a while back, and couldn't figure it out. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthieu.brucher at gmail.com Thu Feb 18 13:25:27 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 18 Feb 2010 19:25:27 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D7738.5090705@noaa.gov> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D7608.5010000@noaa.gov> <4B7D7545.6040203@student.matnat.uio.no> <4B7D7738.5090705@noaa.gov> Message-ID: 2010/2/18 Christopher Barker : > Dag Sverre Seljebotn wrote: >> If it is not compiled with -fPIC, you can't statically link it into any >> shared library, it has to be statically linked into the final executable >> (so the standard /usr/bin/python will never work). > > Shows you what I (don't) know! > > The joys of closed-source software! > > On a similar topic -- is it possible to convert a *.so to a static lib? > (on OS-X)? I did a bunch of googling a while back, and couldn't figure it > out. I don't think you can. A static library is nothing more than an archive of object files (a Fortran module file is the same BTW), a dynamic library is one big object with every link created. Going from the latter to the former cannot be easily done. Matthieu -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From edward.shishkin at gmail.com Thu Feb 18 13:29:02 2010 From: edward.shishkin at gmail.com (Edward Shishkin) Date: Thu, 18 Feb 2010 19:29:02 +0100 Subject: [Numpy-discussion] Numpy headers Message-ID: <4B7D86EE.4010406@gmail.com> Hello everyone.
After installing numpy (under linux) I have got a number of header files: /usr/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/cfunc.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/numcomplex.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/nummacro.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/ieeespecial.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/arraybase.h /usr/lib/python2.5/site-packages/numpy/numarray/numpy/libnumarray.h ... Is there a particular reason for including these headers in that location (/usr/lib/python2.5)? I maintain numpy for our distro and am planning to make a separate package (numpy-devel) for those headers to be delivered under /usr/include. Any ideas? Thanks in advance, Edward. From robert.kern at gmail.com Thu Feb 18 14:54:00 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Feb 2010 13:54:00 -0600 Subject: [Numpy-discussion] Numpy headers In-Reply-To: <4B7D86EE.4010406@gmail.com> References: <4B7D86EE.4010406@gmail.com> Message-ID: <3d375d731002181154r6d13053fn9fd2983e6eb3c457@mail.gmail.com> On Thu, Feb 18, 2010 at 12:29, Edward Shishkin wrote: > Hello everyone. > > After installing numpy (under linux) I have got a number of header files: > > /usr/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/cfunc.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/numcomplex.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/nummacro.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/ieeespecial.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/arraybase.h > /usr/lib/python2.5/site-packages/numpy/numarray/numpy/libnumarray.h > ... > > Is there a particular reason for including these headers in that location > (/usr/lib/python2.5)?
> > I maintain numpy for our distro and am planning to make a separate > package (numpy-devel) for those headers to be delivered under > /usr/include. > > Any ideas? There are distribution use cases where installing to /usr/include/python2.x is not an option. One can also install multiple numpy distributions in different locations. Keeping the headers inside the packages helps with these use cases. There is a function in numpy that returns the path where the main numpy headers are installed. We greatly prefer that you leave the numpy package intact in order to reduce the number of different configurations out there and to ensure that your users have a completely functional numpy installation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peck at us.ibm.com Thu Feb 18 17:59:13 2010 From: peck at us.ibm.com (Jon K Peck) Date: Thu, 18 Feb 2010 15:59:13 -0700 Subject: [Numpy-discussion] AUTO: Jon K Peck is out of the office (returning 02/21/2010) Message-ID: I am out of the office until 02/21/2010. I will be traveling through Sunday, Feb 21 and will be delayed responding to your email. I will have periodic email access. Note: This is an automated response to your message "NumPy-Discussion Digest, Vol 41, Issue 87" sent on 2/18/10 11:00:03. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed... 
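The function Robert Kern mentions above is `numpy.get_include()`; a distro or build script can locate the headers with it rather than assuming a path under /usr/include:

```python
import os
import numpy as np

inc = np.get_include()  # directory containing the numpy/ header tree
header = os.path.join(inc, "numpy", "arrayobject.h")  # main C-API header
# a setup.py would pass include_dirs=[np.get_include()] to Extension(...)
```

Because the path is queried at build time, this keeps working however and wherever the numpy package itself was installed.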
URL: From david at silveregg.co.jp Thu Feb 18 19:31:42 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 19 Feb 2010 09:31:42 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <4B7D4F6C.2020604@student.matnat.uio.no> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> <4B7D4F6C.2020604@student.matnat.uio.no> Message-ID: <4B7DDBEE.6010804@silveregg.co.jp> Dag Sverre Seljebotn wrote: > Well, I think one can make a static executable with C or Cython and > embed the Python interpreter. Yes, it is possible, but I think it is fair to say that if you don't know how to write a C extension, statically build numpy into python would be daunting :) David From cournape at gmail.com Thu Feb 18 19:53:56 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 19 Feb 2010 09:53:56 +0900 Subject: [Numpy-discussion] ABI changes complete in trunk In-Reply-To: References: Message-ID: <5b8d13221002181653n1f5e19fdqee9e313ccb9c7810@mail.gmail.com> Hi Travis, On Wed, Feb 17, 2010 at 4:13 AM, Travis Oliphant wrote: > > I've made the ABI changes I think are needed in the SVN trunk. ? ? Please > feel free to speak up if you have concerns or problems (and if you want to > change white-space, just do it...). Great, thanks for the effort. Just a nitpick: I think there is no need to jump the number for ABI to 0x0200... Just incrementing is OK, we only check when they are different, and there is not link between this number and the publicised numpy version number. 
David From sierra_mtnview at sbcglobal.net Thu Feb 18 23:33:20 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Thu, 18 Feb 2010 20:33:20 -0800 Subject: [Numpy-discussion] Updating Packages in 2.5 (win/numpy) and Related Matters In-Reply-To: <6a17e9ee1002172200h675ea571xdbe997d7e0408ea5@mail.gmail.com> References: <4B7ABC18.7010908@sbcglobal.net> <4B7B3935.2060300@silveregg.co.jp> <4B7B7A48.1010803@sbcglobal.net> <1cd32cbb1002162125t4db502b5g89f648f4cfd7b9fb@mail.gmail.com> <6a17e9ee1002162201j6d370992m62bbc204e8d3f9a1@mail.gmail.com> <4B7CB46E.7010209@sbcglobal.net> <6a17e9ee1002172200h675ea571xdbe997d7e0408ea5@mail.gmail.com> Message-ID: <4B7E1490.7050003@sbcglobal.net> On 2/17/2010 10:00 PM, Scott Sinclair wrote: >> On 18 February 2010 05:30, Wayne Watson wrote: >> ... >> >> I'm on win7's Add/Remove numpy. No scipy. I just checked the version via >> import and it's 0.6.0. >> > You can download the latest NumPy and SciPy installers from: > > http://sourceforge.net/projects/numpy/files/ > > and > > http://sourceforge.net/projects/scipy/files/ > > You want the win32-superpack for your Python version. > > Use "Add/Remove" to remove your current NumPy install (if your version > is not already 1.3.0). I'm not sure how SciPy was installed and why it > doesn't appear in "Add/Remove". You should look in > C:\Python25\Lib\site-packages for directories named numpy or scipy > (numpy should have been removed already). It is safe to delete > C:\Python25\Lib\site-packages\scipy. > > Then run the superpack installers and you should be good to go. Good luck. > Scipy is definitely in site-packages*, but not in Add/Remove. I also downloaded it from IDLE prompt. Numpy is both in site-pkg and Add/Remove. Numpy is 1.2.0. * If I said otherwise, it may be because I'm in the midst of going from XP to Win7, and am using two machines very close together. Well, I think it's time to update per your instructions. 
-- "There is nothing so annoying as to have two people talking when you're busy interrupting." -- Mark Twain From njs at pobox.com Fri Feb 19 03:08:50 2010 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 19 Feb 2010 00:08:50 -0800 Subject: [Numpy-discussion] Best interface for computing the logarithm of the determinant? In-Reply-To: <961fa2b41002161249g66dde100s251cf8874d2a996e@mail.gmail.com> References: <961fa2b41002161249g66dde100s251cf8874d2a996e@mail.gmail.com> Message-ID: <961fa2b41002190008v81f5cd1o57f20b6d27211623@mail.gmail.com> Thanks for your comments, all. Since it occurs to me that this is a general need, not just for sparse matrices, and it would be very annoying to settle on one API for scikits.sparse and then have another show up in one of the main packages later, I've just submitted a patch for option (1) to numpy: http://projects.scipy.org/numpy/ticket/1402 And we'll see what happens with that :-) On Tue, Feb 16, 2010 at 12:49 PM, Nathaniel Smith wrote: > So when you have a matrix whose determinant you want, it's often wise > to compute the logarithm of the determinant instead of the determinant > itself, because determinants involve lots and lots of multiplications > and the result might otherwise underflow/overflow. Therefore, in > scikits.sparse, I'd like to provide an API for doing this (and this is > well-supported by the underlying libraries). > > But the problem is that for a general matrix, the determinant may be > zero or negative. Obviously we can deal with this, but what's the best > API? I'd like to use one consistently across the different > factorizations in scikits.sparse, and perhaps eventually in numpy as > well. 
> > Some options: > > 1) Split off the sign into a separate return value ('sign' may be 1, -1, 0): > sign, value = logdet(A) > actual_determinant = sign * exp(value) > > 2) Allow complex/infinite return values, even when A is a real matrix: > logdet(eye(3)) == pi*1j > logdet(zeros((3, 3))) == -Inf > > 3) "Scientific notation" (This is what UMFPACK's API does): return a > mantissa and base-10 exponent: > mantissa, exponent = logdet(A) > actual_determinant = mantissa * 10 ** exponent > > 4) Have separate functions for computing the sign, and the log of the > absolute value (This is what GSL does, though it seems pointlessly > inefficient): > sign = sgndet(A) > value = logdet(A) > actual_determinant = sign * exp(value) > > These are all kind of ugly looking, unfortunately, but that seems > unavoidable, unless someone has a clever idea. > > Any preferences? > > -- Nathaniel > From robert.kern at gmail.com Fri Feb 19 11:17:35 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 19 Feb 2010 10:17:35 -0600 Subject: [Numpy-discussion] Numpy headers In-Reply-To: <4B7D86EE.4010406@gmail.com> References: <4B7D86EE.4010406@gmail.com> Message-ID: <3d375d731002190817x6159fbbay3c11799a5f97c8b6@mail.gmail.com> On Thu, Feb 18, 2010 at 12:29, Edward Shishkin wrote: > I maintain numpy for our distro and am planning to make a separate > package (numpy-devel) for those headers to be delivered under > /usr/include. If I sounded too harsh in my previous email, I apologize. I *do* want to thank you for coming to the list and asking instead of making changes without input. Most distro maintainers of numpy never make an appearance on the list, but we have to field the bug reports of the users of their broken packages regardless. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From edward at redhat.com Fri Feb 19 11:53:03 2010 From: edward at redhat.com (Edward Shishkin) Date: Fri, 19 Feb 2010 17:53:03 +0100 Subject: [Numpy-discussion] Numpy headers In-Reply-To: <3d375d731002190817x6159fbbay3c11799a5f97c8b6@mail.gmail.com> References: <4B7D86EE.4010406@gmail.com> <3d375d731002190817x6159fbbay3c11799a5f97c8b6@mail.gmail.com> Message-ID: <4B7EC1EF.2060907@redhat.com> Robert Kern wrote: > On Thu, Feb 18, 2010 at 12:29, Edward Shishkin > wrote: > > >> I maintain numpy for our distro and am planning to make a separate >> package (numpy-devel) for those headers to be delivered under >> /usr/include. >> > > If I sounded too harsh in my previous email, I apologize. I *do* want > to thank you for coming to the list and asking instead of making > changes without input. Most distro maintainers of numpy never make an > appearance on the list, but we have to field the bug reports of the > users of their broken packages regardless. > > Hello Robert, You are right, we'll leave the numpy package intact. Thank you. -- Edward Shishkin Principal Software Engineer Red Hat Czech s.r.o., Purkynova 99/71, 612 45 Brno, Czech Republic From matthew.brett at gmail.com Sat Feb 20 14:48:44 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 20 Feb 2010 11:48:44 -0800 Subject: [Numpy-discussion] Somewhat goofy warning in 'isfinite'? Message-ID: <1e2af89e1002201148oe03f02ajf12d2504344b4b8d@mail.gmail.com> Hi, I just noticed this: In [2]: np.isfinite(np.inf) Warning: invalid value encountered in isfinite Out[2]: False Maybe it would be worth not raising the warning, in the interests of tidiness? Matthew From dwf at cs.toronto.edu Sat Feb 20 15:22:47 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 20 Feb 2010 15:22:47 -0500 Subject: [Numpy-discussion] Somewhat goofy warning in 'isfinite'? 
In-Reply-To: <1e2af89e1002201148oe03f02ajf12d2504344b4b8d@mail.gmail.com> References: <1e2af89e1002201148oe03f02ajf12d2504344b4b8d@mail.gmail.com> Message-ID: <1268D353-C911-4368-BA9F-72AEFCC82576@cs.toronto.edu> On 20-Feb-10, at 2:48 PM, Matthew Brett wrote: > Hi, > > I just noticed this: > > In [2]: np.isfinite(np.inf) > Warning: invalid value encountered in isfinite > Out[2]: False > > Maybe it would be worth not raising the warning, in the interests of > tidiness? I think these warnings somehow got turned on recently in the trunk, I see tons of them when I run the tests despite what np.seterr says. David From oliphant at enthought.com Sat Feb 20 20:46:41 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Sat, 20 Feb 2010 20:46:41 -0500 Subject: [Numpy-discussion] ABI changes complete in trunk In-Reply-To: <5b8d13221002181653n1f5e19fdqee9e313ccb9c7810@mail.gmail.com> References: <5b8d13221002181653n1f5e19fdqee9e313ccb9c7810@mail.gmail.com> Message-ID: On Feb 18, 2010, at 7:53 PM, David Cournapeau wrote: > Hi Travis, > > On Wed, Feb 17, 2010 at 4:13 AM, Travis Oliphant > wrote: >> >> I've made the ABI changes I think are needed in the SVN trunk. >> Please >> feel free to speak up if you have concerns or problems (and if you >> want to >> change white-space, just do it...). > > Great, thanks for the effort. Just a nitpick: I think there is no need > to jump the number for ABI to 0x0200... Just incrementing is OK, we > only check when they are different, and there is not link between this > number and the publicised numpy version number. I have no opinion on this. We could also make it a smaller number of bytes because we aren't going to be changing it much. -Travis -- Travis Oliphant Enthought Inc. 
1-512-536-1057 http://www.enthought.com oliphant at enthought.com From charlesr.harris at gmail.com Sun Feb 21 04:13:15 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Feb 2010 02:13:15 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 Message-ID: Hi Travis, The warning is dep.py:6: PendingDeprecationWarning: The CObject API is deprecated as of Python 3.1. Please convert to using the Capsule API. np.dtype('M8[3M/40]') This doesn't happen with the old dtypes, so I assume it is associated with something introduced for datetime. Any ideas? I've attached a script that shows the warning. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dep.py Type: text/x-python Size: 142 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Sun Feb 21 05:27:59 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 21 Feb 2010 11:27:59 +0100 Subject: [Numpy-discussion] numpy.test() failures in 2.0.0.dev8233 Message-ID: ====================================================================== FAIL: test_multiarray.TestNewBufferProtocol.test_export_endian ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 1582, in test_export_endian assert y.format in '>l' AssertionError ====================================================================== FAIL: test_multiarray.TestNewBufferProtocol.test_export_record ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 
183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 1561, in test_export_record assert y.format == 'T{b:a:=h:b:=l:c:=q:d:B:e:=H:f:=L:g:=Q:h:=d:i:=d:j:=g:k:4s:l:=4w:m:3x:n:?:o:}' AssertionError ====================================================================== FAIL: test_multiarray.TestNewBufferProtocol.test_export_simple_1d ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 1514, in test_export_simple_1d assert y.format == '=l' AssertionError ====================================================================== FAIL: test_multiarray.TestNewBufferProtocol.test_export_subarray ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.11.2.dev-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 1576, in test_export_subarray assert y.itemsize == 16 AssertionError ---------------------------------------------------------------------- Ran 2519 tests in 21.494s FAILED (KNOWNFAIL=4, failures=4) From charlesr.harris at gmail.com Sun Feb 21 05:30:31 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Feb 2010 03:30:31 -0700 Subject: [Numpy-discussion] Request for testing Message-ID: Hi All, I would be much obliged if some folks would run the attached script and report the output, numpy version, and python version. It just runs np.isinf(np.inf), which raises an "invalid value" warning with current numpy. 
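[Editor's note: the attached isinf.py is quoted verbatim further down the thread; restated here as a runnable sketch, with the print statements updated from the original Python 2 syntax.]

```python
import warnings

import numpy as np

# Show every warning occurrence, not just the first one per location.
warnings.simplefilter('always')
# Report invalid floating-point operations by printing a message.
np.seterr(invalid='print')
print(np.isinf(np.inf))
```

On affected builds this prints "Warning: invalid value encountered in isinf" before the True; on unaffected builds it prints only True.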
As far as I can see the function itself hasn't changed since numpy1.3, yet numpy1.3 & python2.5 gives no such warning. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: isinf.py Type: text/x-python Size: 120 bytes Desc: not available URL: From cournape at gmail.com Sun Feb 21 05:33:48 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 21 Feb 2010 19:33:48 +0900 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <5b8d13221002210233g148d8b2at1872070111372d44@mail.gmail.com> On Sun, Feb 21, 2010 at 7:30 PM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. This is most likely a bug in isinf or how we use it - the warning is not new, but was hidden before because of the FPU error stage set to ignore instead of warning. I am afraid dealing with this correctly cannot be done in a short time frame: the issues are quite subtle, and very platform dependent. cheers, David From nwagner at iam.uni-stuttgart.de Sun Feb 21 05:37:04 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 21 Feb 2010 11:37:04 +0100 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: On Sun, 21 Feb 2010 03:30:31 -0700 Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the >attached script and > report the output, numpy version, and python version. It >just runs > np.isinf(np.inf), which raises an "invalid value" >warning with current > numpy. 
As far as I can see the function itself hasn't >changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such >warning. > > Chuck python -i isinf.py 2.0.0.dev8233 2.6.2 import numpy as np import warnings import platform print np.__version__ print platform.python_version() warnings.simplefilter('always') np.seterr(invalid='print') print (np.isinf(np.inf)) Nils From pav at iki.fi Sun Feb 21 06:22:41 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 21 Feb 2010 13:22:41 +0200 Subject: [Numpy-discussion] numpy.test() failures in 2.0.0.dev8233 In-Reply-To: References: Message-ID: <1266751361.5722.15.camel@idol> Hi, Please remind me what platform you are running on. Also, please update and re-run the tests, and check the output from import numpy as np from numpy.core.multiarray import memorysimpleview as memoryview dt = [('a', np.int8), ('b', np.int16), ('c', np.int32), ('d', np.int64), ('e', np.uint8), ('f', np.uint16), ('g', np.uint32), ('h', np.uint64), ('i', np.float), ('j', np.double), ('k', np.longdouble), ('l', 'S4'), ('m', 'U4'), ('n', 'V3'), ('o', '?')] x = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 'aaaa', 'bbbb', ' ', True)], dtype=dt) print memoryview(x).format x = np.array([1,2,3], dtype='>i4') print memoryview(x).format x = np.array(([[1,2],[3,4]],), dtype=[('a', (int, (2,2)))]) print memoryview(x).format print memoryview(x).itemsize From nwagner at iam.uni-stuttgart.de Sun Feb 21 06:37:26 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 21 Feb 2010 12:37:26 +0100 Subject: [Numpy-discussion] numpy.test() failures in 2.0.0.dev8233 In-Reply-To: <1266751361.5722.15.camel@idol> References: <1266751361.5722.15.camel@idol> Message-ID: On Sun, 21 Feb 2010 13:22:41 +0200 Pauli Virtanen wrote: > Hi, > > Please remind me what platform you are running on. 
Also, >please update > and re-run the tests, and check the output from > > import numpy as np > from numpy.core.multiarray import memorysimpleview as >memoryview > > dt = [('a', np.int8), ('b', np.int16), > ('c', np.int32), ('d', np.int64), > ('e', np.uint8), ('f', np.uint16), > ('g', np.uint32), ('h', np.uint64), > ('i', np.float), ('j', np.double), > ('k', np.longdouble), ('l', 'S4'), > ('m', 'U4'), ('n', 'V3'), ('o', '?')] > x = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 'aaaa', 'bbbb', ' ', True)], dtype=dt) > print memoryview(x).format > > x = np.array([1,2,3], dtype='>i4') > print memoryview(x).format > > x = np.array(([[1,2],[3,4]],), dtype=[('a', (int, >(2,2)))]) > print memoryview(x).format > print memoryview(x).itemsize > > > T{b:a:=h:b:=i:c:=l:d:B:e:=H:f:=I:g:=L:h:=d:i:=d:j:=g:k:4s:l:=4w:m:3x:n:?:o:} >i T{(2,2)=l:a:} 32 Linux-2.6.31.12-0.1-default-x86_64-with-SuSE-11.2-x86_64 2.0.0.dev8235 Nils From aisaac at american.edu Sun Feb 21 08:42:12 2010 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 21 Feb 2010 08:42:12 -0500 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <4B813834.4020506@american.edu> On 2/21/2010 5:30 AM, Charles R Harris wrote: > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. No problem with NumPy 1.3.0 (from superpack) on Python 2.6.4 under Vista. Alan Isaac Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy as np >>> import warnings >>> >>> warnings.simplefilter('always') >>> np.seterr(invalid='print') {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} >>> print (np.isinf(np.inf)) True From pav at iki.fi Sun Feb 21 08:43:57 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 21 Feb 2010 15:43:57 +0200 Subject: [Numpy-discussion] Python 3 porting Message-ID: <1266759837.5722.134.camel@idol> Hi, The test suite passes now on Pythons 2.4 - 3.1. Further testing is very welcome -- also on Python 2.x. Please check that your favourite software still builds and works with SVN trunk Numpy. Currently, Scipy has some known failures because of (i) removed new= keyword in numpy.histogram (ii) Cython supports only native size/alignment PEP 3118 buffers, and Numpy arrays are most naturally expressed in the standardized sizes. Supporting the full struct module alignment stuff appears to be a slight PITA. I'll try to take a look at how to address this. But everything else seems to work on Python 2.6. *** Python version 2.4.6 (#2, Jan 21 2010, 23:27:36) [GCC 4.4.1] Ran 2509 tests in 18.892s OK (KNOWNFAIL=4, SKIP=2) Python version 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) [GCC 4.4.1] Ran 2512 tests in 18.531s OK (KNOWNFAIL=4) Python version 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] Ran 2519 tests in 19.367s OK (KNOWNFAIL=4) Python version 3.1.1+ (r311:74480, Nov 2 2009, 14:49:22) [GCC 4.4.1] Ran 2518 tests in 23.239s OK (KNOWNFAIL=5) Cheers, Pauli From rpyle at post.harvard.edu Sun Feb 21 09:00:40 2010 From: rpyle at post.harvard.edu (Robert Pyle) Date: Sun, 21 Feb 2010 09:00:40 -0500 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <2771990E-CFAC-4C9E-9E5F-5446B8C90E6B@post.harvard.edu> My machine is a PPC dual G5, running Mac OS X 10.5.8 ~ $ python Python 2.5.4 (r254:67917, Dec 23 2008, 14:57:27) [GCC 4.0.1 (Apple Computer, Inc. 
build 5363)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> import warnings >>> >>> warnings.simplefilter('always') >>> np.seterr(invalid='print') {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} >>> print (np.isinf(np.inf)) True >>> np.__version__ '1.4.0.dev7577' >>> On Feb 21, 2010, at 5:30 AM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script > and report the output, numpy version, and python version. It just > runs np.isinf(np.inf), which raises an "invalid value" warning with > current numpy. As far as I can see the function itself hasn't > changed since numpy1.3, yet numpy1.3 & python2.5 gives no such > warning. > > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ralf.gommers at googlemail.com Sun Feb 21 09:24:30 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 21 Feb 2010 22:24:30 +0800 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: On Sun, Feb 21, 2010 at 6:30 PM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > $ python isinf.py Warning: invalid value encountered in isinf True Python 2.6.4, on Snow Leopard NumPy trunk r8233 Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From renesd at gmail.com Sun Feb 21 09:36:38 2010 From: renesd at gmail.com (=?ISO-8859-1?Q?Ren=E9_Dudfield?=) Date: Sun, 21 Feb 2010 14:36:38 +0000 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266759837.5722.134.camel@idol> References: <1266759837.5722.134.camel@idol> Message-ID: <64ddb72c1002210636u4e8357fx4240fe45ec17bb84@mail.gmail.com> AWESOME :) On Sun, Feb 21, 2010 at 1:43 PM, Pauli Virtanen wrote: > Hi, > > The test suite passes now on Pythons 2.4 - 3.1. Further testing is very > welcome -- also on Python 2.x. Please check that your favourite software > still builds and works with SVN trunk Numpy. > > Currently, Scipy has some known failures because of > > (i) removed new= keyword in numpy.histogram > (ii) Cython supports only native size/alignment PEP 3118 buffers, and > Numpy arrays are most naturally expressed in the standardized > sizes. Supporting the full struct module alignment stuff appears > to be a slight PITA. I'll try to take a look at how to address > this. > > But everything else seems to work on Python 2.6. > > *** > > Python version 2.4.6 (#2, Jan 21 2010, 23:27:36) [GCC 4.4.1] > Ran 2509 tests in 18.892s > OK (KNOWNFAIL=4, SKIP=2) > > Python version 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) [GCC 4.4.1] > Ran 2512 tests in 18.531s > OK (KNOWNFAIL=4) > > Python version 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] > Ran 2519 tests in 19.367s > OK (KNOWNFAIL=4) > > Python version 3.1.1+ (r311:74480, Nov 2 2009, 14:49:22) [GCC 4.4.1] > Ran 2518 tests in 23.239s > OK (KNOWNFAIL=5) > > > Cheers, > Pauli > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwgoodman at gmail.com Sun Feb 21 11:04:46 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 21 Feb 2010 08:04:46 -0800 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: On Sun, Feb 21, 2010 at 2:30 AM, Charles R Harris wrote: > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. >>> import isinf Warning: invalid value encountered in isinf True Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55) [GCC 4.4.1] on linux2 Numpy '1.4.0rc2' From charlesr.harris at gmail.com Sun Feb 21 11:05:51 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Feb 2010 09:05:51 -0700 Subject: [Numpy-discussion] Request for testing In-Reply-To: <5b8d13221002210233g148d8b2at1872070111372d44@mail.gmail.com> References: <5b8d13221002210233g148d8b2at1872070111372d44@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 3:33 AM, David Cournapeau wrote: > On Sun, Feb 21, 2010 at 7:30 PM, Charles R Harris > wrote: > > Hi All, > > > > I would be much obliged if some folks would run the attached script and > > report the output, numpy version, and python version. It just runs > > np.isinf(np.inf), which raises an "invalid value" warning with current > > numpy. As far as I can see the function itself hasn't changed since > > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > This is most likely a bug in isinf or how we use it - the warning is > not new, but was hidden before because of the FPU error stage set to > ignore instead of warning. I am afraid dealing with this correctly > cannot be done in a short time frame: the issues are quite subtle, and > very platform dependent. > > The script enables the warning so the difference shouldn't depend on the recent change in the warnings default. I was thinking it more likely had something to do with the build environment/python version/compiler flags/.etc. 
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Feb 21 11:27:07 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 21 Feb 2010 11:27:07 -0500 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: <5b8d13221002210233g148d8b2at1872070111372d44@mail.gmail.com> Message-ID: <1cd32cbb1002210827xa68bdc5ye143b8ceb148fea2@mail.gmail.com> On Sun, Feb 21, 2010 at 11:05 AM, Charles R Harris wrote: > > > On Sun, Feb 21, 2010 at 3:33 AM, David Cournapeau > wrote: >> >> On Sun, Feb 21, 2010 at 7:30 PM, Charles R Harris >> wrote: >> > Hi All, >> > >> > I would be much obliged if some folks would run the attached script and >> > report the output, numpy version, and python version. It just runs >> > np.isinf(np.inf), which raises an "invalid value" warning with current >> > numpy. As far as I can see the function itself hasn't changed since >> > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. no warning WinXP, numpy 1.4.0 superpack (I think) >isinf.py True Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.version.version '1.4.0' Josef >> >> This is most likely a bug in isinf or how we use it - the warning is >> not new, but was hidden before because of the FPU error stage set to >> ignore instead of warning. I am afraid dealing with this correctly >> cannot be done in a short time frame: the issues are quite subtle, and >> very platform dependent. >> > > The script enables the warning so the difference shouldn't depend on the > recent change in the warnings default. I was thinking it more likely had > something to do with the build environment/python version/compiler > flags/.etc. 
> > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From warren.weckesser at enthought.com Sun Feb 21 11:39:13 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 21 Feb 2010 10:39:13 -0600 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <4B8161B1.1040402@enthought.com> Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script > and report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > Python 2.5.4 on Mac OSX 10.5.8 (EPD 5.0.0): I do not get a warning with numpy 1.3.0 or 2.0.0.dev8233. Warren From jsseabold at gmail.com Sun Feb 21 11:49:45 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Sun, 21 Feb 2010 11:49:45 -0500 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: On Sun, Feb 21, 2010 at 5:30 AM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > Chuck > On Kubuntu 9.10 with recent trunk. $ python2.5 isinf.py True $ python2.6 isinf.py Warning: invalid value encountered in isinf True Skipper PS. I also see a lot of the divide by zero warnings now (which are helpful) and wondered if they weren't related. 
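[Editor's note: on Skipper's PS — the divide-by-zero warnings come from the same floating-point error-state machinery. A small illustrative sketch, not from the thread, showing how np.errstate scopes a change where np.seterr changes the state globally:]

```python
import numpy as np

old = np.geterr()  # remember the current global error state

# np.errstate is the context-manager counterpart of np.seterr:
# the change applies only inside the with-block.
with np.errstate(divide='ignore', invalid='ignore'):
    r = np.array([1.0, 0.0]) / np.array([0.0, 0.0])  # silently -> inf, nan

assert np.geterr() == old  # previous settings are restored on exit
print(r)  # [inf nan]
```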
From efiring at hawaii.edu Sun Feb 21 12:30:22 2010 From: efiring at hawaii.edu (Eric Firing) Date: Sun, 21 Feb 2010 07:30:22 -1000 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <4B816DAE.1030504@hawaii.edu> Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > Chuck efiring at manini:~$ python test/isinf.py Warning: invalid value encountered in isinf True In [4]:numpy.version.version Out[4]:'1.5.0.dev8042' Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) Ubuntu 9.10 Eric From nadavh at visionsense.com Sun Feb 21 12:42:01 2010 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 21 Feb 2010 19:42:01 +0200 Subject: [Numpy-discussion] Request for testing References: Message-ID: <710F2847B0018641891D9A21602763605AD305@ex3.envision.co.il> $ python isinf.py Warning: invalid value encountered in isinf True machine: gentoo linux on amd64 python 2.6.4 (64 bit) gcc 4.3.4 numpy.__version__ == '1.4.0' glibc 2.10.1 Nadav -----Original Message----- From: numpy-discussion-bounces at scipy.org on behalf of Charles R Harris Sent: Sun 21-Feb-10 12:30 To: numpy-discussion Subject: [Numpy-discussion] Request for testing Hi All, I would be much obliged if some folks would run the attached script and report the output, numpy version, and python version. It just runs np.isinf(np.inf), which raises an "invalid value" warning with current numpy. As far as I can see the function itself hasn't changed since numpy1.3, yet numpy1.3 & python2.5 gives no such warning. Chuck -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 2903 bytes Desc: not available URL: From pav at iki.fi Sun Feb 21 13:17:46 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 21 Feb 2010 20:17:46 +0200 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: Message-ID: <1266776266.5722.141.camel@idol> On Sun, 2010-02-21 at 02:13 -0700, Charles R Harris wrote: > The warning is > > dep.py:6: PendingDeprecationWarning: The CObject API is deprecated as > of Python 3.1. Please convert to using the Capsule API. > np.dtype('M8[3M/40]') > > This doesn't happen with the old dtypes, so I assume it is associated > with something introduced for datetime. Any ideas? I've attached a > script that shows the warning. The PyCObjects are used at least within the __array_struct__ interface, ufuncs, and apparently the datetime extra data is stored within one in the array metadata dict. The Capsule API seems pretty much the same as the CObject API. (Why the name change?) We can probably #define PyCapsule_* compatibility defines in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on 3.x.
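[Editor's note: the compatibility defines Pauli suggests could look roughly like the following. This is a hypothetical sketch, not the actual npy_3kcompat.h code — the NpyCapsule_* names and the NULL capsule name are assumptions. One real subtlety the plain defines gloss over: PyCObject destructors receive the raw void*, while PyCapsule destructors receive the capsule PyObject*, so destructor callbacks need adapting too.]

```c
#include <Python.h>

/* Hypothetical wrappers: PyCObject on Python 2.x, PyCapsule on 3.x.
 * Capsules are created with a NULL name so lookups stay anonymous,
 * mirroring CObject behaviour. */
#if PY_VERSION_HEX >= 0x03000000
#define NpyCapsule_FromVoidPtr(ptr, dtor)  PyCapsule_New((ptr), NULL, (dtor))
#define NpyCapsule_AsVoidPtr(obj)          PyCapsule_GetPointer((obj), NULL)
#define NpyCapsule_Check(obj)              PyCapsule_CheckExact(obj)
#else
#define NpyCapsule_FromVoidPtr(ptr, dtor)  PyCObject_FromVoidPtr((ptr), (dtor))
#define NpyCapsule_AsVoidPtr(obj)          PyCObject_AsVoidPtr(obj)
#define NpyCapsule_Check(obj)              PyCObject_Check(obj)
#endif
```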
-- Pauli Virtanen From sccolbert at gmail.com Sun Feb 21 13:24:11 2010 From: sccolbert at gmail.com (Chris Colbert) Date: Sun, 21 Feb 2010 13:24:11 -0500 Subject: [Numpy-discussion] Request for testing In-Reply-To: <710F2847B0018641891D9A21602763605AD305@ex3.envision.co.il> References: <710F2847B0018641891D9A21602763605AD305@ex3.envision.co.il> Message-ID: <7f014ea61002211024s794e1fd1u7f047ee3647ce8a0@mail.gmail.com> brucewayne at broo:~/Downloads$ python isinf.py True Kubuntu 9.10 NumPy 1.3.0 Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55) [GCC 4.4.1] on linux2 On Sun, Feb 21, 2010 at 12:42 PM, Nadav Horesh wrote: > > $ python isinf.py > Warning: invalid value encountered in isinf > True > > machine: gentoo linux on amd64 > python 2.6.4 (64 bit) > gcc 4.3.4 > numpy.__version__ == '1.4.0' > glibc 2.10.1 > > Nadav > > > -----Original Message----- > From: numpy-discussion-bounces at scipy.org on behalf of Charles R Harris > Sent: Sun 21-Feb-10 12:30 > To: numpy-discussion > Subject: [Numpy-discussion] Request for testing > > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gokhansever at gmail.com (Gökhan Sever) Date: Sun, 21 Feb 2010 12:32:46 -0600 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <49d6b3501002211032v2acbe6fdx62b0e3eb343a214f@mail.gmail.com> On Sun, Feb 21, 2010 at 4:30 AM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > [gsever at ccn ~]$ python isinf.py True [gsever at ccn various]$ python sysinfo.py ================================================================================ Platform : Linux-2.6.31.9-174.fc12.i686.PAE-i686-with-fedora-12-Constantine Python : ('CPython', 'tags/r262', '71600') NumPy : 1.5.0.dev8038 ================================================================================ -- Gökhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Feb 21 13:34:28 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 21 Feb 2010 20:36:28 +0200 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266776266.5722.141.camel@idol> References: <1266776266.5722.141.camel@idol> Message-ID: <1266777268.5722.142.camel@idol> On Sun, 2010-02-21 at 20:17 +0200, Pauli Virtanen wrote: [clip] > The Capsule API seems pretty much the same as the CObject API. (Why the > name change?)
We can probably #define PyCapsule_* compatibility defines > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on > 3.x. Btw, I read that PyCObjects are completely gone on Python 3.2, so apparently we *have* to make this transition. Pauli From charlesr.harris at gmail.com Sun Feb 21 13:37:05 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Feb 2010 11:37:05 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266777268.5722.142.camel@idol> References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: On Sun, Feb 21, 2010 at 11:34 AM, Pauli Virtanen wrote: > su, 2010-02-21 kello 20:17 +0200, Pauli Virtanen kirjoitti: > [clip] > > The Capsule API seems pretty much the same as the CObject API. (Why the > > name change?) We can probably #define PyCapsule_* compatibility defines > > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on > > 3.x. > > Btw, I read that PyCObjects are completely gone on Python 3.2, so > apparently we *have* to make this transition. > > I haven't looked closely at the new API. If you think the fix is as easy as some defines, go for it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dagss at student.matnat.uio.no Sun Feb 21 14:45:16 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Sun, 21 Feb 2010 20:45:16 +0100 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266759837.5722.134.camel@idol> References: <1266759837.5722.134.camel@idol> Message-ID: <4B818D4C.4020405@student.matnat.uio.no> Pauli Virtanen wrote: > Hi, > > The test suite passes now on Pythons 2.4 - 3.1. Further testing is very > welcome -- also on Python 2.x. Please check that your favourite software > still builds and works with SVN trunk Numpy. 
> > Currently, Scipy has some known failures because of > > (i) removed new= keyword in numpy.histogram > (ii) Cython supports only native size/alignment PEP 3118 buffers, and > Numpy arrays are most naturally expressed in the standardized > sizes. Supporting the full struct module alignment stuff appears > to be a slight PITA. I'll try Hmm. How much would it help if Cython dealt with standardized sizes when possible? Is Cython the only reason to have NumPy export native size/alignment? Also, wouldn't it be a pain to export align=True dtypes in standard size/alignment? (As a quick hack in SciPy, there's always np.ndarray[int, cast=True] to skip the format string checking (size is still checked).) Dag Sverre From cycomanic at gmail.com Sun Feb 21 16:37:52 2010 From: cycomanic at gmail.com (Jochen Schroeder) Date: Mon, 22 Feb 2010 08:37:52 +1100 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: <20100221213750.GA2106@cudos0803> No Warning for me: ??(08:26 $)?> python isinf.py True ??(08:26 $)?> python2.5 isinf.py True Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55) [GCC 4.4.1] on linux2 Python 2.5.4 (r254:67916, Jan 20 2010, 21:43:02) [GCC 4.4.1] on linux2 numpy.version.version '1.3.0' ??(08:33 $)?> uname -a Linux cudos0803 2.6.31-19-generic #56-Ubuntu SMP Thu Jan 28 02:39:34 UTC 2010 x86_64 GNU/Linux ??(08:31 $)?> lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description:Ubuntu 9.10 Release: 9.10 Codename: karmic On 02/21/10 03:30, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and report > the output, numpy version, and python version. It just runs np.isinf(np.inf), > which raises an "invalid value" warning with current numpy. As far as I can see > the function itself hasn't changed since numpy1.3, yet numpy1.3 & python2.5 > gives no such warning. 
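[Editor's note: the native-versus-standard distinction behind Pauli's point (ii) and Dag's question is the struct module's '@' versus '=' prefix. A small sketch, not from the thread:]

```python
import struct

import numpy as np

# '@' (the default): native sizes AND native alignment -> padding may appear.
# '=': standard sizes, no alignment padding.
print(struct.calcsize('=ihd'))  # 4 + 2 + 8 = 14, never padded
print(struct.calcsize('@ihd'))  # typically 16: pad bytes so 'd' is 8-aligned

# NumPy exposes its PEP 3118 format strings through memoryview:
m = memoryview(np.zeros(3, dtype=np.int32))
print(m.format)  # e.g. 'i' for a native-order int32 buffer
```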
> > Chuck > import numpy as np > import warnings > > warnings.simplefilter('always') > np.seterr(invalid='print') > print (np.isinf(np.inf)) > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From gokhansever at gmail.com Sun Feb 21 17:00:30 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 21 Feb 2010 16:00:30 -0600 Subject: [Numpy-discussion] ask.scipy.org Message-ID: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> Hello, Since after Robert Kern showed http://advice.mechanicalkern.com/ on SciPy09 there are many similar initiatives that uses stackoverflow.com (SO) layout. Some smart guys come up with this site http://stackexchange.com/ to those who want to have a simple but a paid solution. I don't have an intention of creating controversial discussion. It just to my eyes SO has a very appealing and easy to use interface and it's getting some number of posts related to scientific Python tools. I usually suggest my friends to use the mailing lists first and SO for their questions. Some prefer mailing lists some not. Mailing lists require more steps to get in however SO register step is much easier due to OpenID logging. Without belabouring further, It would be good to link R. Kern's advice site to either ask.scipy or advice.scipy or another alternative to attract new-comers easily. I am more in favor of the ask.scipy.org option. Thus I can refer the people (hear I mean mostly non-programmers or students/programmers without Python experience), simply to go ask.scipy.orgfor their first questions instead of telling them to search answers at many different mediums. What do you think? -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Sun Feb 21 17:06:09 2010 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 21 Feb 2010 16:06:09 -0600 Subject: [Numpy-discussion] ask.scipy.org In-Reply-To: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> References: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> Message-ID: <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> On Sun, Feb 21, 2010 at 16:00, G?khan Sever wrote: > Hello, > > Since after Robert Kern showed http://advice.mechanicalkern.com/ on SciPy09 > there are many similar initiatives that uses stackoverflow.com (SO) layout. > Some smart guys come up with this site http://stackexchange.com/ to those > who want to have a simple but a paid solution. Indeed, stackexchange.com is the paid hosting option from the Stack Overflow team. > I don't have an intention of creating controversial discussion. It just to > my eyes SO has a very appealing and easy to use interface and it's getting > some number of posts related to scientific Python tools. I usually suggest > my friends to use the mailing lists first and SO for their questions. Some > prefer mailing lists some not. Mailing lists require more steps to get in > however SO register step is much easier due to OpenID logging. > > Without belabouring further, It would be good to link R. Kern's advice site > to either ask.scipy or advice.scipy or another alternative to attract > new-comers easily. I am more in favor of the ask.scipy.org option. Thus I > can refer the people (hear I mean mostly non-programmers or > students/programmers without Python experience), simply to go ask.scipy.org > for their first questions instead of telling them to search answers at many > different mediums. > > What do you think? I spent some time on Friday getting Plurk's Solace tweaked for our use (for various reasons, it's much better code to deal with than the CNPROG software currently running advice.mechanicalkern.com). 
http://opensource.plurk.com/Solace/ I still need to investigate how to migrate the content from the old site over, but ask.scipy.org should be up and running quite soon. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Sun Feb 21 17:11:29 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 22 Feb 2010 07:11:29 +0900 Subject: [Numpy-discussion] ask.scipy.org In-Reply-To: <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> References: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> Message-ID: <5b8d13221002211411w4f56441fj1bc79e1a7948b55d@mail.gmail.com> On Mon, Feb 22, 2010 at 7:06 AM, Robert Kern wrote: > > I spent some time on Friday getting Plurk's Solace tweaked for our use > (for various reasons, it's much better code to deal with than the > CNPROG software currently running advice.mechanicalkern.com). > > ?http://opensource.plurk.com/Solace/ > > I still need to investigate how to migrate the content from the old > site over, but ask.scipy.org should be up and running quite soon. This is great news. Thank you very much for the effort ! 
David From gokhansever at gmail.com Sun Feb 21 18:08:03 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 21 Feb 2010 17:08:03 -0600 Subject: [Numpy-discussion] ask.scipy.org In-Reply-To: <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> References: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> Message-ID: <49d6b3501002211508k4cfb522cm33572b401ea891@mail.gmail.com> On Sun, Feb 21, 2010 at 4:06 PM, Robert Kern wrote: > On Sun, Feb 21, 2010 at 16:00, G?khan Sever wrote: > > Hello, > > > > Since after Robert Kern showed http://advice.mechanicalkern.com/ on > SciPy09 > > there are many similar initiatives that uses stackoverflow.com (SO) > layout. > > Some smart guys come up with this site http://stackexchange.com/ to > those > > who want to have a simple but a paid solution. > > Indeed, stackexchange.com is the paid hosting option from the Stack > Overflow team. > > > I don't have an intention of creating controversial discussion. It just > to > > my eyes SO has a very appealing and easy to use interface and it's > getting > > some number of posts related to scientific Python tools. I usually > suggest > > my friends to use the mailing lists first and SO for their questions. > Some > > prefer mailing lists some not. Mailing lists require more steps to get in > > however SO register step is much easier due to OpenID logging. > > > > Without belabouring further, It would be good to link R. Kern's advice > site > > to either ask.scipy or advice.scipy or another alternative to attract > > new-comers easily. I am more in favor of the ask.scipy.org option. Thus > I > > can refer the people (hear I mean mostly non-programmers or > > students/programmers without Python experience), simply to go > ask.scipy.org > > for their first questions instead of telling them to search answers at > many > > different mediums. > > > > What do you think? 
> > I spent some time on Friday getting Plurk's Solace tweaked for our use > (for various reasons, it's much better code to deal with than the > CNPROG software currently running advice.mechanicalkern.com). > > http://opensource.plurk.com/Solace/ > > I still need to investigate how to migrate the content from the old > site over, but ask.scipy.org should be up and running quite soon. > > Thanks for your efforts Robert. Please let us know when the new site is up. > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Feb 21 19:01:51 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 22 Feb 2010 02:01:51 +0200 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <4B818D4C.4020405@student.matnat.uio.no> References: <1266759837.5722.134.camel@idol> <4B818D4C.4020405@student.matnat.uio.no> Message-ID: <1266796911.5722.174.camel@idol> su, 2010-02-21 kello 20:45 +0100, Dag Sverre Seljebotn kirjoitti: > Pauli Virtanen wrote: [clip] > > Currently, Scipy has some known failures because of > > > > (i) removed new= keyword in numpy.histogram > > (ii) Cython supports only native size/alignment PEP 3118 buffers, and > > Numpy arrays are most naturally expressed in the standardized > > sizes. Supporting the full struct module alignment stuff appears > > to be a slight PITA. I'll try > > Hmm. How much would it help if Cython dealt with standardized sizes when > possible? Is Cython the only reason to have NumPy export native > size/alignment? Possibly. 
Anyway, I managed to implement this so that the format string is in the native+aligned '@' form when possible, and falls back to the unaligned alternatives when not. Now the question is: should it prefer the standard unaligned types ('=') or the native types ('^')? For non-native byte orders of course there is only the standard alternative. This also means that long doubles or 64-bit long longs in non-native byte order cannot be exported. Btw, do you know if the '@' format should include the padding xxx or not? And if not, does the implicit padding also pad the end of the structure to even alignment? Or is alignment <= itemsize always? > Also, wouldn't it be a pain to export align=True dtypes in standard > size/alignment? Not really, as the padding needed for alignment is computed at the time the dtype is constructed, so the necessary info is readily available. It's actually exporting '@' dtypes properly that's painful, since this requires thinking about what information must be omitted, and checking when it is possible to do. Notes to self: - I think I possibly forgot the possible padding at the end of the dtype in the provider. - '@' format strings should probably not include padding that is included in the alignment. I assumed this in the consumer interface, but forgot about it in the provider. Roundtrips probably work all right, though, since explicit padding causes zero implicit padding. Pauli From charlesr.harris at gmail.com Sun Feb 21 20:48:32 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Feb 2010 18:48:32 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266777268.5722.142.camel@idol> References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: On Sun, Feb 21, 2010 at 11:34 AM, Pauli Virtanen wrote: > su, 2010-02-21 kello 20:17 +0200, Pauli Virtanen kirjoitti: > [clip] > > The Capsule API seems pretty much the same as the CObject API. (Why the > > name change?) 
We can probably #define PyCapsule_* compatibility defines > > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on > > 3.x. > > Btw, I read that PyCObjects are completely gone on Python 3.2, so > apparently we *have* to make this transition. > > It does look like the old interface can be emulated with the new objects, but the need for a 'name' might cause trouble. I suppose that will depend on how the current objects are used. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Feb 21 21:27:12 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 22 Feb 2010 10:27:12 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 9:54 AM, David Cournapeau wrote: > On Mon, Feb 8, 2010 at 9:14 PM, Ralf Gommers > > Final question is about Atlas and friends. Is 3.8.3 the best version to > > install? Does it compile out of the box under Wine? Is this page > > http://www.scipy.org/Installing_SciPy/Windows still up-to-date with > regard > > to the Lapack/Atlas info and does it apply for Wine? > > Atlas 3.9.x should not be used, it is too unstable IMO (it is a dev > version after all, and windows receives little testing compared to > unix). I will put the Atlas binaries I am using somewhere - building > Atlas is already painful, but building it with a limited architecture > on windows takes it to a whole new level (it is not supported by > atlas, you have to patch the build system by yourself). > > Hi David, did you find time to put those Atlas binaries somewhere? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dwf at cs.toronto.edu Mon Feb 22 07:17:22 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 22 Feb 2010 07:17:22 -0500 Subject: [Numpy-discussion] ask.scipy.org In-Reply-To: <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> References: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> Message-ID: <20100222121722.GA29578@rodimus> On Sun, Feb 21, 2010 at 04:06:09PM -0600, Robert Kern wrote: > > I spent some time on Friday getting Plurk's Solace tweaked for our use > (for various reasons, it's much better code to deal with than the > CNPROG software currently running advice.mechanicalkern.com). > > http://opensource.plurk.com/Solace/ > > I still need to investigate how to migrate the content from the old > site over, but ask.scipy.org should be up and running quite soon. This is great news, thanks for your efforts Robert. I remember when we last discussed it, Solace didn't support OpenID and a bunch of other things. Are your changes in a public repository anywhere? David From lesserwhirls at gmail.com Mon Feb 22 07:30:08 2010 From: lesserwhirls at gmail.com (Sean Arms) Date: Mon, 22 Feb 2010 06:30:08 -0600 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: On Sun, Feb 21, 2010 at 4:30 AM, Charles R Harris wrote: > Hi All, > > I would be much obliged if some folks would run the attached script and > report the output, numpy version, and python version. It just runs > np.isinf(np.inf), which raises an "invalid value" warning with current > numpy. As far as I can see the function itself hasn't changed since > numpy1.3, yet numpy1.3 & python2.5 gives no such warning. 
> > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > lesserwhirls at microsat-xps ~/Desktop $ python isinf.py Warning: invalid value encountered in isinf True Platform : Linux microsat-xps 2.6.31-gentoo-r6 (x86_64 Intel(R) Core(TM)2 Duo CPU T5450) Python : Python 2.6.4 (r264:75706, Dec 7 2009, 11:36:55) NumPy : 2.0.0.dev8251 GCC : gcc (Gentoo 4.3.4 p1.0, pie-10.1.5) 4.3.4 glibc : 2.10.1 Sean From nwagner at iam.uni-stuttgart.de Mon Feb 22 08:01:28 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 22 Feb 2010 14:01:28 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> References: <4B7D0922.1010000@silveregg.co.jp> <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> Message-ID: On Thu, 18 Feb 2010 22:29:39 +0900 David Cournapeau wrote: > On Thu, Feb 18, 2010 at 10:22 PM, Nils Wagner > wrote: >> On Thu, 18 Feb 2010 11:55:07 +0100 >> ?Matthieu Brucher wrote: >>>> Ok I have extracted the *.o files from the static >>>>library. >>>> >>>> Applying the file command to the object files yields >>>> >>>> ELF 64-bit LSB relocatable, AMD x86-64, version 1 >>>>(SYSV), >>>> not stripped >>>> >>>> What's that supposed to mean ? >>> >>> It means that each object file is an object file >>>compiled with -fPIC, >>> so you just have to make a shared library (gfortran >>>-shared *.o -o >>> libmysharedlibrary.so) >>> >>> Then, you can try to open the library with ctypes. If >>>something is >>> lacking, you may have to add -lsome_library to the >>>gfortran line. >>> >>> Matthieu >>> -- >>> Information System Engineer, Ph.D. >>> Blog: http://matt.eifelle.com >>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> >> O.k. 
I tried >> >> gfortran -shared *.o -o libmysharedlibrary.so >> >> /usr/bin/ld: dxop.o: relocation R_X86_64_32 against `a >> local symbol' can not be used when making a shared >>object; >> recompile with -fPIC > > The message is pretty explicit: it is not compiled with >-fPIC, there > is nothing you can do, short of requesting a shared >library from the > software vendor. > > David Hi, Meanwhile I received a static library (including -fPIC support) from the software vendor. Now I have used ar x test.a gfortran -shared *.o -o libtest.so -lg2c to build a shared library. The additional option -lg2c was necessary due to an undefined symbol: s_cmp Now I am able to load the shared library from ctypes import * my_lib = CDLL('test.so') What are the next steps to use the library functions within python ? Nils From cournape at gmail.com Mon Feb 22 08:18:23 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 22 Feb 2010 22:18:23 +0900 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: References: <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> Message-ID: <5b8d13221002220518q7f9db23ey39c3722ad875b5a9@mail.gmail.com> On Mon, Feb 22, 2010 at 10:01 PM, Nils Wagner wrote: > > ar x test.a > gfortran -shared *.o -o libtest.so -lg2c > > to build a shared library. The additional option -lg2c was > necessary due to an undefined symbol: s_cmp You should avoid the -lg2c option at any cost if compiling with gfortran. I am afraid that you got a library compiled with g77. If that's the case, you should use g77 and not gfortran. You cannot mix libraries built with one with libraries with another. > > Now I am able to load the shared library > > from ctypes import * > my_lib = CDLL('test.so') > > What are the next steps to use the library functions > within python ? 
You use it as you would use a C library: http://python.net/crew/theller/ctypes/tutorial.html But the fortran ABI, at least for code built with g77 and gfortran, pass everything by reference. To make sure to pass the right arguments, I strongly suggest to double check with the .h you received. cheers, David From nwagner at iam.uni-stuttgart.de Mon Feb 22 08:57:14 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 22 Feb 2010 14:57:14 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <5b8d13221002220518q7f9db23ey39c3722ad875b5a9@mail.gmail.com> References: <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> <5b8d13221002220518q7f9db23ey39c3722ad875b5a9@mail.gmail.com> Message-ID: On Mon, 22 Feb 2010 22:18:23 +0900 David Cournapeau wrote: > On Mon, Feb 22, 2010 at 10:01 PM, Nils Wagner > wrote: > >> >> ar x test.a >> gfortran -shared *.o -o libtest.so -lg2c >> >> to build a shared library. The additional option -lg2c >>was >> necessary due to an undefined symbol: s_cmp > > You should avoid the -lg2c option at any cost if >compiling with > gfortran. I am afraid that you got a library compiled >with g77. If > that's the case, you should use g77 and not gfortran. >You cannot mix > libraries built with one with libraries with another. > g77 -shared *.o -o libtest.so -lg2c failed with /usr/lib/64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: cannot find -lgcc_s IIRC that is a known bug related to SuSE . Are you aware of a solution ? Cheers, Nils >> >> Now I am able to load the shared library >> >> from ctypes import * >> my_lib = CDLL('test.so') >> >> What are the next steps to use the library functions >> within python ? > > You use it as you would use a C library: > > http://python.net/crew/theller/ctypes/tutorial.html > > But the fortran ABI, at least for code built with g77 >and gfortran, > pass everything by reference. 
To make sure to pass the >right > arguments, I strongly suggest to double check with the >.h you > received. > > cheers, > > David From khamenya at gmail.com Mon Feb 22 09:46:07 2010 From: khamenya at gmail.com (Valery Khamenya) Date: Mon, 22 Feb 2010 15:46:07 +0100 Subject: [Numpy-discussion] numpy + ubuntu 9.10 (karmic) + unladen swallow Message-ID: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com> Hi all, I know the formula works, but fail to reproduce it :) Issue #1. the following entry from numpy installation docs is perhaps out-of-date, at least as for ubuntu karmic: sudo apt-get install gcc g77 python-dev atlas3-base-dev Neither g77 nor atlas3-base-dev are available. Perhaps, g77 from previous ubuntu distro could work, but it would be good to see what installation docs says about this trick. Issue #2. The following definition of include_dirs in site.cfg doesn't seem to be used by gcc: [DEFAULT] include_dirs = /usr/local/include:/home/me/wrk/unladen-trunk/Include I build numpy like that: PYTHONPATH= ~/wrk/unladen-trunk/python setup.py build Where the python executable is the one from the great "unladen-swallow" project. Such invocation leads fast to the following error: ... compile options: '-Inumpy/core/src -Inumpy/core/include -IInclude -I/home/vak/me/unladen-trunk -c' gcc: _configtest.c _configtest.c:1:20: error: Python.h: No such file or directory ... Indeed, the directory /home/me/wrk/unladen-trunk/Include isn't listed for "-I" flag Any hints? 
thanks in advance :) best regards -- Valery From robert.kern at gmail.com Mon Feb 22 10:38:48 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Feb 2010 09:38:48 -0600 Subject: [Numpy-discussion] ask.scipy.org In-Reply-To: <20100222121722.GA29578@rodimus> References: <49d6b3501002211400r43cbcc01y146c179e880851bf@mail.gmail.com> <3d375d731002211406p53f8eae5qbfe542b4b75a7dcc@mail.gmail.com> <20100222121722.GA29578@rodimus> Message-ID: <3d375d731002220738k5fd57fa0xf3ddc614459a64b2@mail.gmail.com> On Mon, Feb 22, 2010 at 06:17, David Warde-Farley wrote: > On Sun, Feb 21, 2010 at 04:06:09PM -0600, Robert Kern wrote: >> >> I spent some time on Friday getting Plurk's Solace tweaked for our use >> (for various reasons, it's much better code to deal with than the >> CNPROG software currently running advice.mechanicalkern.com). >> >> ? http://opensource.plurk.com/Solace/ >> >> I still need to investigate how to migrate the content from the old >> site over, but ask.scipy.org should be up and running quite soon. > > This is great news, thanks for your efforts Robert. I remember when we last > discussed it, Solace didn't support OpenID and a bunch of other things. Are > your changes in a public repository anywhere? Armin added OpenID support. My only changes have been to add a nice OpenID endpoint selector (e.g. so you can just click the Google button to use your Google login) and to change the text input format from WikiCreole (which has bad preformatted text support for code snippets) to GitHub-flavored Markdown (which has reasonable preformatted text support). http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/solace/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bsouthey at gmail.com Mon Feb 22 12:31:09 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 22 Feb 2010 11:31:09 -0600 Subject: [Numpy-discussion] numpy + ubuntu 9.10 (karmic) + unladen swallow In-Reply-To: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com> References: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com> Message-ID: <4B82BF5D.9020607@gmail.com> On 02/22/2010 08:46 AM, Valery Khamenya wrote: > Hi all, > > I know the formula works, but fail to reproduce it :) > > Issue #1. the following entry from numpy installation docs is perhaps > out-of-date, at least as for ubuntu karmic: > > sudo apt-get install gcc g77 python-dev atlas3-base-dev > > Neither g77 nor atlas3-base-dev are available. > > Perhaps, g77 from previous ubuntu distro could work, but it would be > good to see what installation docs says about this trick. > > You only need a C compiler for numpy. I do not use Ubuntu so I can not help with it. > Issue #2. The following definition of include_dirs in site.cfg doesn't > seem to be used by gcc: > > [DEFAULT] > include_dirs = /usr/local/include:/home/me/wrk/unladen-trunk/Include > > I build numpy like that: > PYTHONPATH= ~/wrk/unladen-trunk/python setup.py build > Where the python executable is the one from the great "unladen-swallow" project. > > Such invocation leads fast to the following error: > > ... > compile options: '-Inumpy/core/src -Inumpy/core/include -IInclude > -I/home/vak/me/unladen-trunk -c' > gcc: _configtest.c > _configtest.c:1:20: error: Python.h: No such file or directory > ... > > Indeed, the directory /home/me/wrk/unladen-trunk/Include isn't listed > for "-I" flag > > Any hints? > > thanks in advance :) > > best regards > -- > Valery > I do not have this problem when I use 'make altinstall'. 
So you probably need to use the --prefix syntax: $python setup.py install --prefix= See: http://docs.python.org/install/index.html#alternate-installation-unix-the-prefix-scheme Bruce From jsseabold at gmail.com Mon Feb 22 12:43:47 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 22 Feb 2010 12:43:47 -0500 Subject: [Numpy-discussion] numpy + ubuntu 9.10 (karmic) + unladen swallow In-Reply-To: <4B82BF5D.9020607@gmail.com> References: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com> <4B82BF5D.9020607@gmail.com> Message-ID: On Mon, Feb 22, 2010 at 12:31 PM, Bruce Southey wrote: > On 02/22/2010 08:46 AM, Valery Khamenya wrote: >> Hi all, >> >> I know the formula works, but fail to reproduce it :) >> >> Issue #1. the following entry from numpy installation docs is perhaps >> out-of-date, at least as for ubuntu karmic: >> >> ? ?sudo apt-get install gcc g77 python-dev atlas3-base-dev >> >> Neither g77 nor atlas3-base-dev are available. >> >> Perhaps, g77 from previous ubuntu distro could work, but it would be >> good to see what installation docs says about this trick. >> >> > You only need a C compiler for numpy. I do not use Ubuntu so I can not > help with it. > Also (someone correct me if I'm wrong), but I believe gfortran is used in place of g77 as part of gcc >= 4.0 on ubuntu. I think you want libatlas-base-dev (from a quick look at the *ubuntu repo), as well, though I've never used this package. Someone else will have to confirm if it works, as I know there have been problems with packages in the past (on suse for me). >> Issue #2. The following definition of include_dirs in site.cfg doesn't >> seem to be used by gcc: >> >> ? ?[DEFAULT] >> ? ?include_dirs = /usr/local/include:/home/me/wrk/unladen-trunk/Include >> Also, for Kubuntu (at least on my install), the dir is /usr/include. You might want to have a look and see which one contains your headers. Skipper >> I build numpy like that: >> ? 
?PYTHONPATH= ~/wrk/unladen-trunk/python setup.py build >> Where the python executable is the one from the great "unladen-swallow" project. >> >> Such invocation leads fast to the following error: >> >> ... >> compile options: '-Inumpy/core/src -Inumpy/core/include -IInclude >> -I/home/vak/me/unladen-trunk -c' >> gcc: _configtest.c >> _configtest.c:1:20: error: Python.h: No such file or directory >> ... >> >> Indeed, the directory /home/me/wrk/unladen-trunk/Include isn't listed >> for "-I" flag >> >> Any hints? >> >> thanks in advance :) >> >> best regards >> -- >> Valery >> > I do not have this problem when I use 'make altinstall'. So you probably > need to use the --prefix syntax: > $python setup.py install --prefix= > See: > http://docs.python.org/install/index.html#alternate-installation-unix-the-prefix-scheme > > Bruce > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Mon Feb 22 14:26:12 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 12:26:12 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: On Sun, Feb 21, 2010 at 6:48 PM, Charles R Harris wrote: > > > On Sun, Feb 21, 2010 at 11:34 AM, Pauli Virtanen wrote: > >> su, 2010-02-21 kello 20:17 +0200, Pauli Virtanen kirjoitti: >> [clip] >> > The Capsule API seems pretty much the same as the CObject API. (Why the >> > name change?) We can probably #define PyCapsule_* compatibility defines >> > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on >> > 3.x. >> >> Btw, I read that PyCObjects are completely gone on Python 3.2, so >> apparently we *have* to make this transition. 
>> >> > It does look like the old interface can be emulated with the new objects, > but the need for a 'name' might cause trouble. I suppose that will depend on > how the current objects are used. > > List of files containing string PyCObject numpy/numarray/include/numpy/libnumarray.h numpy/numarray/_capi.c numpy/core/include/numpy/ndarrayobject.h numpy/core/src/multiarray/common.c numpy/core/src/multiarray/descriptor.c numpy/core/src/multiarray/multiarraymodule.c numpy/core/src/multiarray/getset.c numpy/core/src/multiarray/convert_datatype.c numpy/core/src/multiarray/arraytypes.c.src numpy/core/src/multiarray/scalartypes.c.src numpy/core/src/multiarray/scalarapi-merge.c numpy/core/src/multiarray/ctors.c.save numpy/core/src/multiarray/usertypes.c numpy/core/src/multiarray/scalarapi.c numpy/core/src/multiarray/ctors.c numpy/core/src/umath/ufunc_object.c numpy/core/src/umath/umathmodule.c.src numpy/core/code_generators/generate_numpy_api.py numpy/core/code_generators/generate_ufunc_api.py numpy/lib/type_check.py numpy/random/mtrand/Python.pxi numpy/f2py/src/fortranobject.c numpy/f2py/cb_rules.py numpy/f2py/rules.py numpy/f2py/cfuncs.py It looks like context is the new name for desc, so that PyCObject_FromVoidPtrAndDesc can be implemented as two calls. I think it is a bit tricky to implement these as macros, getting the return back from a multi call substitution can be doable with a comma expression, but I think a better route is to define our own library of compatible functions, prepending npy_ to the current PyCObject functions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Mon Feb 22 15:03:01 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 13:03:01 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: On Mon, Feb 22, 2010 at 12:26 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Feb 21, 2010 at 6:48 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Feb 21, 2010 at 11:34 AM, Pauli Virtanen wrote: >> >>> su, 2010-02-21 kello 20:17 +0200, Pauli Virtanen kirjoitti: >>> [clip] >>> > The Capsule API seems pretty much the same as the CObject API. (Why the >>> > name change?) We can probably #define PyCapsule_* compatibility defines >>> > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on >>> > 3.x. >>> >>> Btw, I read that PyCObjects are completely gone on Python 3.2, so >>> apparently we *have* to make this transition. >>> >>> >> It does look like the old interface can be emulated with the new objects, >> but the need for a 'name' might cause trouble. I suppose that will depend on >> how the current objects are used. 
>> >> > List of files containing string PyCObject > > numpy/numarray/include/numpy/libnumarray.h > numpy/numarray/_capi.c > numpy/core/include/numpy/ndarrayobject.h > numpy/core/src/multiarray/common.c > numpy/core/src/multiarray/descriptor.c > numpy/core/src/multiarray/multiarraymodule.c > numpy/core/src/multiarray/getset.c > numpy/core/src/multiarray/convert_datatype.c > numpy/core/src/multiarray/arraytypes.c.src > numpy/core/src/multiarray/scalartypes.c.src > numpy/core/src/multiarray/scalarapi-merge.c > numpy/core/src/multiarray/ctors.c.save > numpy/core/src/multiarray/usertypes.c > numpy/core/src/multiarray/scalarapi.c > numpy/core/src/multiarray/ctors.c > numpy/core/src/umath/ufunc_object.c > numpy/core/src/umath/umathmodule.c.src > numpy/core/code_generators/generate_numpy_api.py > numpy/core/code_generators/generate_ufunc_api.py > numpy/lib/type_check.py > numpy/random/mtrand/Python.pxi > numpy/f2py/src/fortranobject.c > numpy/f2py/cb_rules.py > numpy/f2py/rules.py > numpy/f2py/cfuncs.py > > It looks like context is the new name for desc, so that > PyCObject_FromVoidPtrAndDesc can be implemented as two calls. > > I think it is a bit tricky to implement these as macros, getting the return > back from a multi call substitution can be doable with a comma expression, > but I think a better route is to define our own library of compatible > functions, prepending npy_ to the current PyCObject functions. > > But the destructor callbacks will differ: static void PyCObject_dealloc(PyCObject *self) { if (self->destructor) { if(self->desc) ((destructor2)(self->destructor))(self->cobject, self->desc); else (self->destructor)(self->cobject); } PyObject_DEL(self); } The PyCapsule callbacks only have the one argument form. static void capsule_dealloc(PyObject *o) { PyCapsule *capsule = (PyCapsule *)o; if (capsule->destructor) { capsule->destructor(o); } PyObject_DEL(o); } There are two places where numpy uses a desc. 
So I think we will have to have different destructors for py2k and py3k. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Feb 22 15:25:00 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 13:25:00 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: On Mon, Feb 22, 2010 at 1:03 PM, Charles R Harris wrote: > > > On Mon, Feb 22, 2010 at 12:26 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Feb 21, 2010 at 6:48 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Feb 21, 2010 at 11:34 AM, Pauli Virtanen wrote: >>> >>>> su, 2010-02-21 kello 20:17 +0200, Pauli Virtanen kirjoitti: >>>> [clip] >>>> > The Capsule API seems pretty much the same as the CObject API. (Why >>>> the >>>> > name change?) We can probably #define PyCapsule_* compatibility >>>> defines >>>> > in npy_3kcompat.h that use PyCObject on 2.x, and use the real thing on >>>> > 3.x. >>>> >>>> Btw, I read that PyCObjects are completely gone on Python 3.2, so >>>> apparently we *have* to make this transition. >>>> >>>> >>> It does look like the old interface can be emulated with the new objects, >>> but the need for a 'name' might cause trouble. I suppose that will depend on >>> how the current objects are used. 
>>> >>> >> List of files containing string PyCObject >> >> numpy/numarray/include/numpy/libnumarray.h >> numpy/numarray/_capi.c >> numpy/core/include/numpy/ndarrayobject.h >> numpy/core/src/multiarray/common.c >> numpy/core/src/multiarray/descriptor.c >> numpy/core/src/multiarray/multiarraymodule.c >> numpy/core/src/multiarray/getset.c >> numpy/core/src/multiarray/convert_datatype.c >> numpy/core/src/multiarray/arraytypes.c.src >> numpy/core/src/multiarray/scalartypes.c.src >> numpy/core/src/multiarray/scalarapi-merge.c >> numpy/core/src/multiarray/ctors.c.save >> numpy/core/src/multiarray/usertypes.c >> numpy/core/src/multiarray/scalarapi.c >> numpy/core/src/multiarray/ctors.c >> numpy/core/src/umath/ufunc_object.c >> numpy/core/src/umath/umathmodule.c.src >> numpy/core/code_generators/generate_numpy_api.py >> numpy/core/code_generators/generate_ufunc_api.py >> numpy/lib/type_check.py >> numpy/random/mtrand/Python.pxi >> numpy/f2py/src/fortranobject.c >> numpy/f2py/cb_rules.py >> numpy/f2py/rules.py >> numpy/f2py/cfuncs.py >> >> It looks like context is the new name for desc, so that >> PyCObject_FromVoidPtrAndDesc can be implemented as two calls. >> >> I think it is a bit tricky to implement these as macros, getting the >> return back from a multi call substitution can be doable with a comma >> expression, but I think a better route is to define our own library of >> compatible functions, prepending npy_ to the current PyCObject functions. >> >> > But the destructor callbacks will differ: > > static void > PyCObject_dealloc(PyCObject *self) > { > if (self->destructor) { > if(self->desc) > ((destructor2)(self->destructor))(self->cobject, self->desc); > else > (self->destructor)(self->cobject); > } > PyObject_DEL(self); > } > > The PyCapsule callbacks only have the one argument form. 
> > static void > capsule_dealloc(PyObject *o) > { > PyCapsule *capsule = (PyCapsule *)o; > if (capsule->destructor) { > capsule->destructor(o); > } > PyObject_DEL(o); > } > > There are two places where numpy uses a desc. So I think we will have to > have different destructors for py2k and py3k. > > So, I think it isn't a big problem to do this with #ifdef's in the code. That is the way I'm going unless you object. I'm not sure if using PyCapsule objects will make pickled arrays incompatible between py2k and py3k, but so it goes. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedrichromstedt at gmail.com Mon Feb 22 15:42:56 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Mon, 22 Feb 2010 21:42:56 +0100 Subject: [Numpy-discussion] Request for testing In-Reply-To: References: Message-ID: I have several Pythons with several numpys on it: (Ordered by version:) 1. > python-2.4 isinf.py True Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.1.1' 2. > python-2.5 isinf.py True Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.4.0' 3. > python-2.6 isinf.py True Python 2.6.3 (r263rc1:75186, Oct 2 2009, 20:40:30) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.__version__ '1.3.0' Friedrich From pav at iki.fi Mon Feb 22 15:45:59 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 22 Feb 2010 22:45:59 +0200 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> Message-ID: <1266871559.5645.32.camel@idol> ma, 2010-02-22 kello 13:25 -0700, Charles R Harris kirjoitti: [clip] > It looks like context is the new name for desc, so > that PyCObject_FromVoidPtrAndDesc can be implemented > as two calls. > > I think it is a bit tricky to implement these as > macros, getting the return back from a multi call > substitution can be doable with a comma expression, > but I think a better route is to define our own > library of compatible functions, prepending npy_ to > the current PyCObject functions. [clip] I think we can just put static functions into npy_3kcompat.h. They're anyway going to be short. [clip: destructors] > So, I think it isn't a big problem to do this with #ifdef's in the > code. That is the way I'm going unless you object. No objection here. I don't see any other way to deal with the destructor issue. > I'm not sure if using PyCapsule objects will make pickled arrays > incompatible between py2k and py3k, but so it goes. The pickled arrays are, IIRC, only backward compatible, so that Py2 pickles can be opened with Py3, but not vice versa. This is because of the str versus unicode issue. 
Pauli From charlesr.harris at gmail.com Mon Feb 22 15:53:56 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 13:53:56 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266871559.5645.32.camel@idol> References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> Message-ID: On Mon, Feb 22, 2010 at 1:45 PM, Pauli Virtanen wrote: > ma, 2010-02-22 kello 13:25 -0700, Charles R Harris kirjoitti: > [clip] > > It looks like context is the new name for desc, so > > that PyCObject_FromVoidPtrAndDesc can be implemented > > as two calls. > > > > I think it is a bit tricky to implement these as > > macros, getting the return back from a multi call > > substitution can be doable with a comma expression, > > but I think a better route is to define our own > > library of compatible functions, prepending npy_ to > > the current PyCObject functions. > [clip] > > I think we can just put static functions into npy_3kcompat.h. They're > anyway going to be short. > > [clip: destructors] > > So, I think it isn't a big problem to do this with #ifdef's in the > > code. That is the way I'm going unless you object. > > No objection here. I don't see any other way to deal with the destructor > I'm actually using #ifdefs for the whole change, no macros in the include files. It hasn't been a lot of work so far. The c_api is currently exported as a PyCObject, we might want to give it a name when it is a PyCapsule. The include file will need some #ifdefs too. > issue. > > > I'm not sure if using PyCapsule objects will make pickled arrays > > incompatible between py2k and py3k, but so it goes. > > The pickled arrays are, IIRC, only backward compatible, so that Py2 > pickles can be opened with Py3, but not vice versa. This is because of > the str versus unicode issue. > Backward compatibility will probably break with PyCapsules in the array instead of PyCObjects. 
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Feb 22 15:57:41 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 22 Feb 2010 22:57:41 +0200 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> Message-ID: <1266872261.5645.35.camel@idol> ma, 2010-02-22 kello 13:53 -0700, Charles R Harris kirjoitti: [clip] > I'm actually using #ifdefs for the whole change, no macros in the > include files. It hasn't been a lot of work so far. The c_api is > currently exported as a PyCObject, we might want to give it a name > when it is a PyCapsule. The include file will need some #ifdefs too. I'd perhaps prefer a compatibility layer with either CObject or Capsule API, to avoid #ifdef's sprinkled everywhere in the other Numpy code. Not sure how doable this is, though. Pauli From robert.kern at gmail.com Mon Feb 22 15:58:03 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Feb 2010 14:58:03 -0600 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> Message-ID: <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> On Mon, Feb 22, 2010 at 14:53, Charles R Harris wrote: > > > On Mon, Feb 22, 2010 at 1:45 PM, Pauli Virtanen wrote: >> >> ma, 2010-02-22 kello 13:25 -0700, Charles R Harris kirjoitti: >> > I'm not sure if using PyCapsule objects will make pickled arrays >> > incompatible between py2k and py3k, but so it goes. >> >> The pickled arrays are, IIRC, only backward compatible, so that Py2 >> pickles can be opened with Py3, but not vice versa. This is because of >> the str versus unicode issue. > > Backward compatibility will probably break with PyCapsules in the array > instead of PyCObjects. Why? 
PyCObjects don't serialize at all. They would never show up in a pickle to begin with. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Mon Feb 22 16:01:12 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 14:01:12 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> Message-ID: On Mon, Feb 22, 2010 at 1:58 PM, Robert Kern wrote: > On Mon, Feb 22, 2010 at 14:53, Charles R Harris > wrote: > > > > > > On Mon, Feb 22, 2010 at 1:45 PM, Pauli Virtanen wrote: > >> > >> ma, 2010-02-22 kello 13:25 -0700, Charles R Harris kirjoitti: > > >> > I'm not sure if using PyCapsule objects will make pickled arrays > >> > incompatible between py2k and py3k, but so it goes. > >> > >> The pickled arrays are, IIRC, only backward compatible, so that Py2 > >> pickles can be opened with Py3, but not vice versa. This is because of > >> the str versus unicode issue. > > > > Backward compatibility will probably break with PyCapsules in the array > > instead of PyCObjects. > > Why? PyCObjects don't serialize at all. They would never show up in a > pickle to begin with. > > So what happens to them? I'm not that familiar with pickles. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Mon Feb 22 16:02:18 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 22 Feb 2010 14:02:18 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266872261.5645.35.camel@idol> References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> <1266872261.5645.35.camel@idol> Message-ID: On Mon, Feb 22, 2010 at 1:57 PM, Pauli Virtanen wrote: > ma, 2010-02-22 kello 13:53 -0700, Charles R Harris kirjoitti: > [clip] > > I'm actually using #ifdefs for the whole change, no macros in the > > include files. It hasn't been a lot of work so far. The c_api is > > currently exported as a PyCObject, we might want to give it a name > > when it is a PyCapsule. The include file will need some #ifdefs too. > > I'd perhaps prefer a compatibility layer with either CObject or Capsule > API, to avoid #ifdef's sprinkled everywhere in the other Numpy code. Not > sure how doable this is, though. > > There are just a few #ifdefs/file. I've done about 1/3 already. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Feb 22 16:06:44 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 22 Feb 2010 23:06:44 +0200 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266776266.5722.141.camel@idol> <1266777268.5722.142.camel@idol> <1266871559.5645.32.camel@idol> <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> Message-ID: <1266872803.5645.37.camel@idol> ma, 2010-02-22 kello 14:01 -0700, Charles R Harris kirjoitti: > On Mon, Feb 22, 2010 at 1:58 PM, Robert Kern > wrote: [clip] > > Why? PyCObjects don't serialize at all. They would never show up in > > a pickle to begin with. > > So what happens to them? 
I'm not that familiar with pickles. arraydescr_reduce pulls out the datetime info from the metadata dict, and converts it to a tuple containing something pickleable. And everything in reverse in *_setstate Pauli From d.l.goldsmith at gmail.com Mon Feb 22 22:56:02 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 22 Feb 2010 19:56:02 -0800 Subject: [Numpy-discussion] int-ifying a float array Message-ID: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> Hi! Is there a less cumbersome way (e.g., one that has a "cast-like" syntax and/or leverages broadcasting) than what follows to convert an array of floats to an array of ints? Here's what works: >>> import numpy as N >>> t = N.array([0.0, 1.0]); t.dtype dtype('float64') >>> t = N.array(t, dtype=int); t; t.dtype array([0, 1]) dtype('int32') Here's three ways that don't: >>> t = N.array([0.0, 1.0]) >>> int(t) Traceback (most recent call last): File "", line 1, in TypeError: only length-1 arrays can be converted to Python scalars >>> N.int(t) Traceback (most recent call last): File "", line 1, in TypeError: only length-1 arrays can be converted to Python scalars >>> t.dtype = N.int >>> t array([ 0, 0, 0, 1072693248]) It doesn't really surprise me that none of these cast-like (or attribute change in the last case) ways work (though it might be nice if at least one of them did), but perhaps I'm just not guessing the syntax right... Thanks, DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Feb 22 22:58:23 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Feb 2010 21:58:23 -0600 Subject: [Numpy-discussion] int-ifying a float array In-Reply-To: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> References: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> Message-ID: <3d375d731002221958s46c3a670xa722915d64b57808@mail.gmail.com> On Mon, Feb 22, 2010 at 21:56, David Goldsmith wrote: > Hi!
Is there a less cumbersome way (e.g., one that has a "cast-like" syntax > and/or leverages broadcasting) than what follows to convert an array of > floats to an array of ints?? Here's what works: > >>>> import numpy as N >>>> t = N.array([0.0, 1.0]); t.dtype > dtype('float64') >>>> t = N.array(t, dtype=int); t; t.dtype > array([0, 1]) > dtype('int32') t.astype(int) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From warren.weckesser at enthought.com Mon Feb 22 22:58:57 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 22 Feb 2010 21:58:57 -0600 Subject: [Numpy-discussion] int-ifying a float array In-Reply-To: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> References: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> Message-ID: <4B835281.6090804@enthought.com> Here's another way, using 'astype': In [1]: import numpy as np In [2]: x = np.array([1.0, 2.0, 3.0]) In [3]: y = x.astype(int) In [4]: y Out[4]: array([1, 2, 3]) Warren David Goldsmith wrote: > Hi! Is there a less cumbersome way (e.g., one that has a "cast-like" > syntax and/or leverages broadcasting) than what follows to convert an > array of floats to an array of ints? 
Here's what works: > > >>> import numpy as N > >>> t = N.array([0.0, 1.0]); t.dtype > dtype('float64') > >>> t = N.array(t, dtype=int); t; t.dtype > array([0, 1]) > dtype('int32') > > Here's three ways that don't: > > >>> t = N.array([0.0, 1.0]) > >>> int(t) > Traceback (most recent call last): > File "", line 1, in > TypeError: only length-1 arrays can be converted to Python scalars > >>> N.int(t) > Traceback (most recent call last): > File "", line 1, in > TypeError: only length-1 arrays can be converted to Python scalars > >>> t.dtype = N.int > >>> t > array([ 0, 0, 0, 1072693248]) > > It doesn't really surprise me that none of these cast-like (or > attribute change in the last case) ways work (though it might be nice > if at least one of them did), but perhaps I'm just not guessing the > syntax right... > > Thanks, > > DG > ------------------------------------------------------------------------ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From d.l.goldsmith at gmail.com Tue Feb 23 00:57:00 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 22 Feb 2010 21:57:00 -0800 Subject: [Numpy-discussion] int-ifying a float array In-Reply-To: <4B835281.6090804@enthought.com> References: <45d1ab481002221956l56ec2d4bld20fd0afd33743f8@mail.gmail.com> <4B835281.6090804@enthought.com> Message-ID: <45d1ab481002222157j7142d6e6m498f62d5475e7186@mail.gmail.com> Thanks, both, I knew there had to be a better way. :-) DG On Mon, Feb 22, 2010 at 7:58 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > Here's another way, using 'astype': > > In [1]: import numpy as np > > In [2]: x = np.array([1.0, 2.0, 3.0]) > > In [3]: y = x.astype(int) > > In [4]: y > Out[4]: array([1, 2, 3]) > > > > Warren > > David Goldsmith wrote: > > Hi! 
Is there a less cumbersome way (e.g., one that has a "cast-like" > > syntax and/or leverages broadcasting) than what follows to convert an > > array of floats to an array of ints? Here's what works: > > > > >>> import numpy as N > > >>> t = N.array([0.0, 1.0]); t.dtype > > dtype('float64') > > >>> t = N.array(t, dtype=int); t; t.dtype > > array([0, 1]) > > dtype('int32') > > > > Here's three ways that don't: > > > > >>> t = N.array([0.0, 1.0]) > > >>> int(t) > > Traceback (most recent call last): > > File "", line 1, in > > TypeError: only length-1 arrays can be converted to Python scalars > > >>> N.int(t) > > Traceback (most recent call last): > > File "", line 1, in > > TypeError: only length-1 arrays can be converted to Python scalars > > >>> t.dtype = N.int > > >>> t > > array([ 0, 0, 0, 1072693248]) > > > > It doesn't really surprise me that none of these cast-like (or > > attribute change in the last case) ways work (though it might be nice > > if at least one of them did), but perhaps I'm just not guessing the > > syntax right... > > > > Thanks, > > > > DG > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Tue Feb 23 02:03:42 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 00:03:42 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <1266872803.5645.37.camel@idol> References: <1266871559.5645.32.camel@idol> <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> <1266872803.5645.37.camel@idol> Message-ID: On Mon, Feb 22, 2010 at 2:06 PM, Pauli Virtanen wrote: > ma, 2010-02-22 kello 14:01 -0700, Charles R Harris kirjoitti: > > On Mon, Feb 22, 2010 at 1:58 PM, Robert Kern > > wrote: > [clip] > > > Why? PyCObjects don't serialize at all. They would never show up in > > > a pickle to begin with. > > > > So what happens to them? I'm not that familiar with pickles > > arraydescr_reduce pulls out the datetime info from the metadata dict, > and converts it to a tuple containing something pickleable. And > everything in reverse in *_setstate > > Everything works except the import of the {ufunc, multiarray} api's from the modules. If the api's are stored as PyCObjects then all the tests pass. I'll try to get that last bit fixed up tomorrow. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Feb 23 03:43:04 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 23 Feb 2010 09:43:04 +0100 Subject: [Numpy-discussion] Calling routines from a Fortran library using python In-Reply-To: <5b8d13221002220518q7f9db23ey39c3722ad875b5a9@mail.gmail.com> References: <4B7D16B2.1000009@silveregg.co.jp> <5b8d13221002180529g2c725bfel8ca8366b0a11b91b@mail.gmail.com> <5b8d13221002220518q7f9db23ey39c3722ad875b5a9@mail.gmail.com> Message-ID: On Mon, 22 Feb 2010 22:18:23 +0900 David Cournapeau wrote: > On Mon, Feb 22, 2010 at 10:01 PM, Nils Wagner > wrote: > >> >> ar x test.a >> gfortran -shared *.o -o libtest.so -lg2c >> >> to build a shared library. 
The additional option -lg2c was >> necessary due to an undefined symbol: s_cmp > > You should avoid the -lg2c option at any cost if > compiling with > gfortran. I am afraid that you got a library compiled > with g77. If > that's the case, you should use g77 and not gfortran. > You cannot mix > libraries built with one with libraries with another. > >> >> Now I am able to load the shared library >> >> from ctypes import * >> my_lib = CDLL('test.so') >> >> What are the next steps to use the library functions >> within python ? > > You use it as you would use a C library: > > http://python.net/crew/theller/ctypes/tutorial.html > > But the fortran ABI, at least for code built with g77 > and gfortran, > pass everything by reference. To make sure to pass the > right > arguments, I strongly suggest to double check with the > .h you > received. > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion Just to play it safe, consider extern void dsio (int* const,const char* const, int* const,const size_t); extern void dsrhed (const int* const,const int* const,void* const, const int* const,const int* const,int* const, int* const,int* const,int* const,int* const, int* const,int* const,int* const); from ctypes import * my_lib = CDLL('libtest.so') How do I call the functions within python? I mean, what arguments are needed? my_lib.dsio( ) my_lib.dsrhed( ) Thank you very much for your help. Cheers, Nils From aisaac at american.edu Tue Feb 23 09:21:32 2010 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 23 Feb 2010 09:21:32 -0500 Subject: [Numpy-discussion] random.uniform documentation bug? Message-ID: <4B83E46C.60301@american.edu> This behavior does not match the current documentation.
>>> np.random.uniform(low=0.5,high=0.5) 0.5 >>> np.random.uniform(low=0.5,high=0.4) 0.48796883601707464 I assume this behavior is intentional and it is the documentation that is in error (for the case when high<=low)? fwiw, Alan Isaac From eavventi at yahoo.it Tue Feb 23 11:32:44 2010 From: eavventi at yahoo.it (enrico avventi) Date: Tue, 23 Feb 2010 16:32:44 +0000 (GMT) Subject: [Numpy-discussion] double free or corruption after calling a Slicot routine wrapped with f2py Message-ID: <779870.3295.qm@web26705.mail.ukl.yahoo.com> hello, first of all, as i'm new here, i would like to greet everyone in the list and thank the developers of numpy/scipy. i'm transitioning my work from matlab to python and this software is very helpful indeed. the reason i'm writing is that i got to a stumbling block last week. i tried to write some wrappers of Slicot routines aided by f2py that i keep on github (http://github.com/avventi/Slycot). the latest routine i tried to wrap, SB02OD, makes python crash with the glibc error double free or corruption. precisely i get this error if the wrapper slycot.sb02od is called within a method, i.e. Python 2.5.2 (r252:60911, Jan 24 2010, 14:53:14) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import slycot >>> slycot.examples.sb02od_example() --- Example for sb01od ... The solution X is [[ 1. 0.] [ 0. 2.]] rcond = 0.632455532034 *** glibc detected *** python: double free or corruption (!prev): 0x082ec3b8 *** as you can see the routine does indeed return the correct output but makes python crash afterwards. On the other hand if i type each step interactively Python 2.5.2 (r252:60911, Jan 24 2010, 14:53:14) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import * >>> import slycot >>> A = array([[0,1],[0,0]]) >>> B = array([[0],[1]]) >>> C = array([[1,0],[0,1],[0,0]]) >>> Q = dot(C.T,C) >>> R = zeros((1,1)) >>> L = zeros((2,1)) >>> out = slycot.sb02od('D',2,1,3,A,B,Q,R,L) >>> out[1] array([[ 1., 0.], [ 0., 2.]]) >>> out[0] 0.63245553203367577 >>> out = slycot.sb02od('D',2,1,3,A,B,Q,R,L) *** glibc detected *** python: double free or corruption (!prev): 0x0832b428 *** it works the first time but not the second. i tried Debian lenny 32/64bit and a virtualized Fedora 12 32bit and the error persists. do you have any ideas why this happens or where i should look to start solving it? thanks in advance. regards, /Enrico -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Feb 23 12:26:22 2010 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Tue, 23 Feb 2010 19:26:22 +0200 Subject: [Numpy-discussion] SciPy2010 Call for Papers Message-ID: <9457e7c81002230926n57e80af9l7e8262ed3ae26e24@mail.gmail.com> ========================== SciPy 2010 Call for Papers ========================== SciPy 2010, the 9th Python in Science Conference, will be held from June 28th - July 3rd, 2010 in Austin, Texas. At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is preceded by two days of paid tutorials, during which community experts provide training on several scientific Python packages.
We invite you to take part by submitting a talk abstract on the conference website at: http://conference.scipy.org Talk/Paper Submission ===================== We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research. Papers are included in the peer-reviewed conference proceedings, published online. Please note that submissions primarily aimed at the promotion of a commercial product or service will not be considered. Important dates for authors include: * 11 April: Talk abstracts due * 20 April: Notification of acceptance * 13 June: Papers due * 15 August: Publication of proceedings Further detail will be made available on http://conference.scipy.org Conference Dates ================ * Friday, 10 May: Early registration ends * Monday-Tuesday, 28-29 June: Tutorials * Wednesday-Thursday, June 30-July 1: Conference * Friday-Saturday, July 2-3: Coding Sprints Executive Committee =================== * Conference: Jarrod Millman & Eric Jones * Program: Stefan van der Walt & Ondrej Certik * Student Sponsorship: Travis Oliphant For more information on Python, visit http://www.python.org. From d.l.goldsmith at gmail.com Tue Feb 23 13:14:58 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 10:14:58 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <4B83E46C.60301@american.edu> References: <4B83E46C.60301@american.edu> Message-ID: <45d1ab481002231014jb72f9afv356a7ac24458d41@mail.gmail.com> On Tue, Feb 23, 2010 at 6:21 AM, Alan G Isaac wrote: > This behavior does not match the current documentation. > > >>> np.random.uniform(low=0.5,high=0.5) > 0.5 > >>> np.random.uniform(low=0.5,high=0.4) > 0.48796883601707464 > > I assume this behavior is intentional and it is > Why do you assume that? DG > the documentation that is in error (for the case > when high<=low)? 
> > fwiw, > Alan Isaac > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 23 13:29:29 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 12:29:29 -0600 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <4B83E46C.60301@american.edu> References: <4B83E46C.60301@american.edu> Message-ID: <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> On Tue, Feb 23, 2010 at 08:21, Alan G Isaac wrote: > This behavior does not match the current documentation. > >>>> np.random.uniform(low=0.5,high=0.5) > 0.5 >>>> np.random.uniform(low=0.5,high=0.4) > 0.48796883601707464 > > I assume this behavior is intentional and it is > the documentation that is in error (for the case > when high<=low)? Well, the documentation just doesn't really address high<=low. In any case, the claim that the results are in [low, high) is wrong thanks to floating point arithmetic. It has exactly the same issues as the standard library's random.uniform() and should be updated to reflect that fact: random.uniform(a, b) Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a. The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random(). We should address the high < low case in the documentation because we're not going to bother raising an exception when high < low. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From d.l.goldsmith at gmail.com Tue Feb 23 14:05:37 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 11:05:37 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> Message-ID: <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> On Tue, Feb 23, 2010 at 10:29 AM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 08:21, Alan G Isaac wrote: > > This behavior does not match the current documentation. > > > >>>> np.random.uniform(low=0.5,high=0.5) > > 0.5 > >>>> np.random.uniform(low=0.5,high=0.4) > > 0.48796883601707464 > > > > I assume this behavior is intentional and it is > > the documentation that is in error (for the case > > when high<=low)? > > Well, the documentation just doesn't really address high<=low. In any > case, the claim that the results are in [low, high) is wrong thanks to > floating point arithmetic. It has exactly the same issues as the > standard library's random.uniform() and should be updated to reflect > that fact: > > random.uniform(a, b) > Return a random floating point number N such that a <= N <= b for a > <= b and b <= N <= a for b < a. > > The end-point value b may or may not be included in the range > depending on floating-point rounding in the equation a + (b-a) * > random(). > > > We should address the high < low case in the documentation because > we're not going to bother raising an exception when high < low. > Well, an exception isn't the only option (e.g., it could return NaN), but does everyone agree (or at least not block) that this is acceptable behavior? 
IMO, if this function is going to allow high < low, then the doc should _also_ be _quite_ clear that if this "feature" might mess up the user's program in some way, then the user will have to implement their own safeguard against such parameters being fed to the monster. ;-) DG > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From loredo at astro.cornell.edu Tue Feb 23 14:18:56 2010 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 23 Feb 2010 14:18:56 -0500 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) Message-ID: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> Hi- I've been testing Python-2.7a3 on Mac OS 10.6.2. NumPy-1.4.0 will not install; it appears something has changed within distutils that breaks it: $ export MACOSX_DEPLOYMENT_TARGET=10.6 $ export CFLAGS="-arch x86_64" $ export FFLAGS="-m64" $ export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch x86_64" $ time python setup.py build --fcompiler=gnu95 Running from numpy source directory. 
Traceback (most recent call last): File "setup.py", line 187, in setup_package() File "setup.py", line 155, in setup_package from numpy.distutils.core import setup File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/__init__.py", line 6, in import ccompiler File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in _old_init_posix = distutils.sysconfig._init_posix AttributeError: 'module' object has no attribute '_init_posix' I realize NumPy makes no claim to be compatible with 2.7(alpha); I'm reporting this as a heads-up. -Tom PS: For testing purposes: To get nose to install for 2.7a3, you need to use the current hg branch. The last release (including the out-of-date dev branch on PyPI) is not compatible with 2.7 changes to unittest internals. ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Tue Feb 23 14:25:23 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 13:25:23 -0600 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> Message-ID: <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> On Tue, Feb 23, 2010 at 13:05, David Goldsmith wrote: > On Tue, Feb 23, 2010 at 10:29 AM, Robert Kern wrote: >> >> On Tue, Feb 23, 2010 at 08:21, Alan G Isaac wrote: >> > This behavior does not match the current documentation. >> > >> >>>> np.random.uniform(low=0.5,high=0.5) >> > 0.5 >> >>>> np.random.uniform(low=0.5,high=0.4) >> > 0.48796883601707464 >> > >> > I assume this behavior is intentional and it is >> > the documentation that is in error (for the case >> > when high<=low)? >> >> Well, the documentation just doesn't really address high<=low. 
In any >> case, the claim that the results are in [low, high) is wrong thanks to >> floating point arithmetic. It has exactly the same issues as the >> standard library's random.uniform() and should be updated to reflect >> that fact: >> >> random.uniform(a, b) >> ?Return a random floating point number N such that a <= N <= b for a >> <= b and b <= N <= a for b < a. >> >> ?The end-point value b may or may not be included in the range >> depending on floating-point rounding in the equation a + (b-a) * >> random(). >> >> >> We should address the high < low case in the documentation because >> we're not going to bother raising an exception when high < low. > > Well, an exception isn't the only option (e.g., it could return NaN), > but > does everyone agree (or at least not block) that this is acceptable > behavior? It's a useful feature. Whenever there is a low/high pair of arguments, a user frequently has to write code like so: low, high = min(a, b), max(a, b) just to satisfy the argument spec of the function. This function does not really require knowing which is which for its implementation, so requiring them to be one way is simply arbitrariness for the sake of arbitrariness. > IMO, if this function is going to allow high < low, then the doc > should _also_ be _quite_ clear that if this "feature" might mess up the > user's program in some way, then the user will have to implement their own > safeguard against such parameters being fed to the monster. ;-) So do it. But please, don't use frightening terminology like you are here. Just state the fact clearly and succinctly as in the random.uniform() docs. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bsouthey at gmail.com Tue Feb 23 14:32:08 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 23 Feb 2010 13:32:08 -0600 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) In-Reply-To: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> Message-ID: <4B842D38.3090904@gmail.com> On 02/23/2010 01:18 PM, Tom Loredo wrote: > Hi- > > I've been testing Python-2.7a3 on Mac OS 10.6.2. NumPy-1.4.0 will > not install; it appears something has changed within distutils that > breaks it: > > $ export MACOSX_DEPLOYMENT_TARGET=10.6 > $ export CFLAGS="-arch x86_64" > $ export FFLAGS="-m64" > $ export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch x86_64" > $ time python setup.py build --fcompiler=gnu95 > Running from numpy source directory. > Traceback (most recent call last): > File "setup.py", line 187, in > setup_package() > File "setup.py", line 155, in setup_package > from numpy.distutils.core import setup > File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/__init__.py", line 6, in > import ccompiler > File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in > _old_init_posix = distutils.sysconfig._init_posix > AttributeError: 'module' object has no attribute '_init_posix' > > I realize NumPy makes no claim to be compatible with 2.7(alpha); I'm > reporting this as a heads-up. > > -Tom > > PS: For testing purposes: To get nose to install for 2.7a3, > you need to use the current hg branch. The last release > (including the out-of-date dev branch on PyPI) is not > compatible with 2.7 changes to unittest internals. 
> > > > > ------------------------------------------------- > This mail sent through IMP: http://horde.org/imp/ > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Hi, I think it is Python related as I did a grep of my Python versions installed from source for this function and got: python2.5/distutils/sysconfig.py:def _init_posix(): python2.6/distutils/sysconfig.py:def _init_posix(): python2.7/sysconfig.py:def _init_posix(vars): python3.1/distutils/sysconfig.py:def _init_posix(): I have not had time to check why Python2.7 is different from the other versions (both location and call). Bruce From d.l.goldsmith at gmail.com Tue Feb 23 14:39:16 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 11:39:16 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> Message-ID: <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> On Tue, Feb 23, 2010 at 11:25 AM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 13:05, David Goldsmith > wrote: > > On Tue, Feb 23, 2010 at 10:29 AM, Robert Kern > wrote: > >> > >> On Tue, Feb 23, 2010 at 08:21, Alan G Isaac > wrote: > >> > This behavior does not match the current documentation. > >> > > >> >>>> np.random.uniform(low=0.5,high=0.5) > >> > 0.5 > >> >>>> np.random.uniform(low=0.5,high=0.4) > >> > 0.48796883601707464 > >> > > >> > I assume this behavior is intentional and it is > >> > the documentation that is in error (for the case > >> > when high<=low)? > >> > >> Well, the documentation just doesn't really address high<=low. 
In any > >> case, the claim that the results are in [low, high) is wrong thanks to > >> floating point arithmetic. It has exactly the same issues as the > >> standard library's random.uniform() and should be updated to reflect > >> that fact: > >> > >> random.uniform(a, b) > >> Return a random floating point number N such that a <= N <= b for a > >> <= b and b <= N <= a for b < a. > >> > >> The end-point value b may or may not be included in the range > >> depending on floating-point rounding in the equation a + (b-a) * > >> random(). > >> > >> > >> We should address the high < low case in the documentation because > >> we're not going to bother raising an exception when high < low. > > > > Well, an exception isn't the only option (e.g., it could return NaN), > > > > > but > > does everyone agree (or at least not block) that this is acceptable > > behavior? > > It's a useful feature. Whenever there is a low/high pair of arguments, > a user frequently has to write code like so: > > low, high = min(a, b), max(a, b) > > just to satisfy the argument spec of the function. This function does > not really require knowing which is which for its implementation, so > requiring them to be one way is simply arbitrariness for the sake of > arbitrariness. > OK. > > IMO, if this function is going to allow high < low, then the doc > > should _also_ be _quite_ clear that if this "feature" might mess up the > > user's program in some way, then the user will have to implement their > own > > safeguard against such parameters being fed to the monster. ;-) > > So do it. But please, don't use frightening terminology like you are > here. Just state the fact clearly and succinctly as in the > random.uniform() docs. > Aw shucks, these docstrings are so dry. (Just kidding.) ;-) DG > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From khamenya at gmail.com Tue Feb 23 15:00:03 2010
From: khamenya at gmail.com (Valery Khamenya)
Date: Tue, 23 Feb 2010 21:00:03 +0100
Subject: [Numpy-discussion] numpy + ubuntu 9.10 (karmic) + unladen swallow
In-Reply-To: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com>
References: <84fecab1002220646g3d1eed14w9b5806926812d1f1@mail.gmail.com>
Message-ID: <84fecab1002231200i576ec872y5be4af2d878ef7a9@mail.gmail.com>

Hi all,

After getting the answers above on the maillist I played a bit more with building numpy, but without success. Nevertheless I've found the way to install it using unladen-swallow itself :)

http://groups.google.com/group/unladen-swallow/browse_thread/thread/80f7ccb68a9dcea3#b015202752197989

So, the question is currently closed for me.

Kind regards
-- Valery

From friedrichromstedt at gmail.com Tue Feb 23 15:04:43 2010
From: friedrichromstedt at gmail.com (Friedrich Romstedt)
Date: Tue, 23 Feb 2010 21:04:43 +0100
Subject: [Numpy-discussion] random.uniform documentation bug?
In-Reply-To: <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com>
References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com>
Message-ID:

Why not rewriting the definition of uniform() to:

def uniform(start, stop, low = None, high = None):
    if low is not None:
        start = low
    if high is not None:
        stop = high
    [and here what matters]

This makes no trouble when a user uses either non-keyword or keyword specification.
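A runnable rendering of the wrapper sketched above, with the stdlib's random.random() standing in for NumPy's generator. Two liberties taken here are assumptions, not part of the proposal: start and stop default to None so that pure-keyword calls like uniform(low=0.5, high=0.4) also resolve, and the size argument of the real np.random.uniform is omitted.

```python
import random

def uniform(start=None, stop=None, low=None, high=None):
    # Accept the old keyword names by aliasing them onto the new ones.
    if low is not None:
        start = low
    if high is not None:
        stop = high
    # Same formula as the stdlib's random.uniform(a, b): valid for either
    # ordering of the bounds; the endpoint may or may not be included
    # because of floating-point rounding in start + (stop-start)*random().
    return start + (stop - start) * random.random()

# Either calling convention yields a draw between the two bounds.
print(uniform(0.5, 0.4))           # positional start/stop, "reversed" order
print(uniform(low=0.5, high=0.4))  # backward-compatible keyword names
```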
The second pair of keywords is just for backward compatibility. As after a keyword there is no positional argument allowed, the only call mixing keywords and non-keywords would be uniform(low, high = high), and this is also maintained. Friedrich 2010/2/23 David Goldsmith : > On Tue, Feb 23, 2010 at 11:25 AM, Robert Kern wrote: >> >> On Tue, Feb 23, 2010 at 13:05, David Goldsmith >> wrote: >> > On Tue, Feb 23, 2010 at 10:29 AM, Robert Kern >> > wrote: >> >> >> >> On Tue, Feb 23, 2010 at 08:21, Alan G Isaac >> >> wrote: >> >> > This behavior does not match the current documentation. >> >> > >> >> >>>> np.random.uniform(low=0.5,high=0.5) >> >> > 0.5 >> >> >>>> np.random.uniform(low=0.5,high=0.4) >> >> > 0.48796883601707464 >> >> > >> >> > I assume this behavior is intentional and it is >> >> > the documentation that is in error (for the case >> >> > when high<=low)? >> >> >> >> Well, the documentation just doesn't really address high<=low. In any >> >> case, the claim that the results are in [low, high) is wrong thanks to >> >> floating point arithmetic. It has exactly the same issues as the >> >> standard library's random.uniform() and should be updated to reflect >> >> that fact: >> >> >> >> random.uniform(a, b) >> >> ?Return a random floating point number N such that a <= N <= b for a >> >> <= b and b <= N <= a for b < a. >> >> >> >> ?The end-point value b may or may not be included in the range >> >> depending on floating-point rounding in the equation a + (b-a) * >> >> random(). >> >> >> >> >> >> We should address the high < low case in the documentation because >> >> we're not going to bother raising an exception when high < low. >> > >> > Well, an exception isn't the only option (e.g., it could return NaN), >> >> >> >> > but >> > does everyone agree (or at least not block) that this is acceptable >> > behavior? >> >> It's a useful feature. 
Whenever there is a low/high pair of arguments, >> a user frequently has to write code like so: >> >> ?low, high = min(a, b), max(a, b) >> >> just to satisfy the argument spec of the function. This function does >> not really require knowing which is which for its implementation, so >> requiring them to be one way is simply arbitrariness for the sake of >> arbitrariness. > > OK. > >> >> > IMO, if this function is going to allow high < low, then the doc >> > should _also_ be _quite_ clear that if this "feature" might mess up the >> > user's program in some way, then the user will have to implement their >> > own >> > safeguard against such parameters being fed to the monster. ;-) >> >> So do it. But please, don't use frightening terminology like you are >> here. Just state the fact clearly and succinctly as in the >> random.uniform() docs. > > Aw shucks, these docstrings are so dry. (Just kidding.) ;-) > > DG From robert.kern at gmail.com Tue Feb 23 15:08:28 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 14:08:28 -0600 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) In-Reply-To: <4B842D38.3090904@gmail.com> References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> <4B842D38.3090904@gmail.com> Message-ID: <3d375d731002231208x73f67922g6c91c63f608ad70d@mail.gmail.com> On Tue, Feb 23, 2010 at 13:32, Bruce Southey wrote: > On 02/23/2010 01:18 PM, Tom Loredo wrote: >> Hi- >> >> I've been testing Python-2.7a3 on Mac OS 10.6.2. ?NumPy-1.4.0 will >> not install; it appears something has changed within distutils that >> breaks it: >> >> $ export MACOSX_DEPLOYMENT_TARGET=10.6 >> $ export CFLAGS="-arch x86_64" >> $ export FFLAGS="-m64" >> $ export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch x86_64" >> $ time python setup.py build --fcompiler=gnu95 >> Running from numpy source directory. >> Traceback (most recent call last): >> ? ?File "setup.py", line 187, in >> ? ? ?setup_package() >> ? 
File "setup.py", line 155, in setup_package
>>     from numpy.distutils.core import setup
>>   File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/__init__.py", line 6, in
>>     import ccompiler
>>   File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in
>>     _old_init_posix = distutils.sysconfig._init_posix
>> AttributeError: 'module' object has no attribute '_init_posix'
>>
>> I realize NumPy makes no claim to be compatible with 2.7(alpha); I'm
>> reporting this as a heads-up.
>>
>> -Tom
>>
>> PS: For testing purposes: To get nose to install for 2.7a3,
>> you need to use the current hg branch. The last release
>> (including the out-of-date dev branch on PyPI) is not
>> compatible with 2.7 changes to unittest internals.
>>
>> -------------------------------------------------
>> This mail sent through IMP: http://horde.org/imp/
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
> Hi,
> I think it is Python related as I did a grep of my Python versions
> installed from source for this function and got:
> python2.5/distutils/sysconfig.py:def _init_posix():
> python2.6/distutils/sysconfig.py:def _init_posix():
> python2.7/sysconfig.py:def _init_posix(vars):
> python3.1/distutils/sysconfig.py:def _init_posix():
>
> I have not had time to check why Python2.7 is different from the other
> versions (both location and call).

sysconfig was deemed useful outside of distutils and was promoted to the top level. Unfortunately, they didn't leave a backwards compatibility stub. Feel free to create a bug ticket on the Python bug tracker.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Tue Feb 23 15:12:35 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 14:12:35 -0600 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> Message-ID: <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> On Tue, Feb 23, 2010 at 14:04, Friedrich Romstedt wrote: > Why not rewriting the definition of uniform() to: > > def uniform(start, stop, low = None, high = None): > ? ?if low is not None: > ? ? ? ?start = low > ? ?if high is not None: > ? ? ? ?stop = high > ? ?[and here what matters] > > This makes no trouble when a user uses either non-keyword or keyword > specification. ?The second pair of keywords is just for backward > compatibility. ?As after a keyword there is no positional argument > allowed, the only call mixing keywords and non-keywords would be > uniform(low, high = high), and this is also maintained. Except for someone calling uniform(low, high, size). In any case, why would you make this change? It doesn't seem to solve any problem or clear up any semantics. "start" and "stop" imply a stop > start relationship, too, albeit not as strongly. If someone wants to pass in a high < low, let them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From friedrichromstedt at gmail.com Tue Feb 23 15:26:04 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Tue, 23 Feb 2010 21:26:04 +0100 Subject: [Numpy-discussion] random.uniform documentation bug? 
In-Reply-To: <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> Message-ID: > Except for someone calling uniform(low, high, size). Ah, sorry, I didn't know about that. In that case, everything I wrote is superfluous and I apologise for a non-helping comment. But, one could incorporate SIZE simply in the calling convention. > In any case, why > would you make this change? It doesn't seem to solve any problem or > clear up any semantics. "start" and "stop" imply a stop > start > relationship, too, albeit not as strongly. Hmm, I thought that start is where the thing starts, and stop where it stops, so it's in "virtual time" stop > start, but it can travel downwards. I thought it would help making the semantics more clear. But I see it depends on interpretation. With "low" and "high", my interpretation is on the contrary impossible. The ugly doubling was just intended for compatibility, resulting in a note "for backward compatibility reasons, you can also pass ..." or something like that. > If someone wants to pass in > a high < low, let them. It's possible, of course. Friedrich From robert.kern at gmail.com Tue Feb 23 15:41:56 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 14:41:56 -0600 Subject: [Numpy-discussion] random.uniform documentation bug? 
In-Reply-To: References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> Message-ID: <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> On Tue, Feb 23, 2010 at 14:26, Friedrich Romstedt wrote: >> In any case, why >> would you make this change? It doesn't seem to solve any problem or >> clear up any semantics. "start" and "stop" imply a stop > start >> relationship, too, albeit not as strongly. > Hmm, I thought that start is where the thing starts, and stop where it > stops, so it's in "virtual time" stop > start, but it can travel > downwards. ?I thought it would help making the semantics more clear. It helps a little, I agree, but not as much as simply changing the names to something neutral like (a, b) as in the standard library. The necessity for a backwards compatibility hack imposes additional costs to making any such change. I don't think those costs are warranted by the semantic confusion of allowing high < low. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From friedrichromstedt at gmail.com Tue Feb 23 15:51:36 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Tue, 23 Feb 2010 21:51:36 +0100 Subject: [Numpy-discussion] random.uniform documentation bug? 
In-Reply-To: <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> Message-ID: 2010/2/23 Robert Kern : > It helps a little, I agree, but not as much as simply changing the > names to something neutral like (a, b) as in the standard library. The > necessity for a backwards compatibility hack imposes additional costs > to making any such change. I don't think those costs are warranted by > the semantic confusion of allowing high < low. I agree fully. The (a, b) thing is the most elegant. And I also agree that the overhead renders it nearly useless, when one focuses on speed. Sorry for making noise again with an unmature thought. It just came into my mind and looked so cute ... :-( Friedrich From robert.kern at gmail.com Tue Feb 23 15:55:02 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 14:55:02 -0600 Subject: [Numpy-discussion] random.uniform documentation bug? 
In-Reply-To: References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> Message-ID: <3d375d731002231255k2d43eaccg25e992c1bf40d656@mail.gmail.com> On Tue, Feb 23, 2010 at 14:51, Friedrich Romstedt wrote: > 2010/2/23 Robert Kern : >> It helps a little, I agree, but not as much as simply changing the >> names to something neutral like (a, b) as in the standard library. The >> necessity for a backwards compatibility hack imposes additional costs >> to making any such change. I don't think those costs are warranted by >> the semantic confusion of allowing high < low. > > I agree fully. ?The (a, b) thing is the most elegant. ?And I also > agree that the overhead renders it nearly useless, when one focuses on > speed. > > Sorry for making noise again with an unmature thought. ?It just came > into my mind and looked so cute ... :-( No worries! I'm not trying to discourage you from posting half-baked thoughts. They're often correct! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From friedrichromstedt at gmail.com Tue Feb 23 16:02:47 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Tue, 23 Feb 2010 22:02:47 +0100 Subject: [Numpy-discussion] random.uniform documentation bug? 
In-Reply-To: <3d375d731002231255k2d43eaccg25e992c1bf40d656@mail.gmail.com> References: <4B83E46C.60301@american.edu> <45d1ab481002231105j652e7e8esb1de6b42660610e0@mail.gmail.com> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> <3d375d731002231255k2d43eaccg25e992c1bf40d656@mail.gmail.com> Message-ID: 2010/2/23 Robert Kern : > No worries! I'm not trying to discourage you from posting half-baked > thoughts. They're often correct! Thank you :-) *smiling and laughing* ! Friedrich P.S.: But my reply obviously does no longer belong to the mailing list ... From d.l.goldsmith at gmail.com Tue Feb 23 16:05:09 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 13:05:09 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: References: <4B83E46C.60301@american.edu> <3d375d731002231125u27e2a4b2t88dcc1cc0073ed4d@mail.gmail.com> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> <3d375d731002231255k2d43eaccg25e992c1bf40d656@mail.gmail.com> Message-ID: <45d1ab481002231305i3af5431cw974a1424bee78cc0@mail.gmail.com> On Tue, Feb 23, 2010 at 1:02 PM, Friedrich Romstedt < friedrichromstedt at gmail.com> wrote: > 2010/2/23 Robert Kern : > > No worries! I'm not trying to discourage you from posting half-baked > > thoughts. They're often correct! > > Thank you :-) *smiling and laughing* ! > > Friedrich > > P.S.: But my reply obviously does no longer belong to the mailing list ... > For better or worse, institutional memory, be it baked, half-baked, or raw, is best preserved. 
:-) DG > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Tue Feb 23 16:10:32 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 13:10:32 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <45d1ab481002231305i3af5431cw974a1424bee78cc0@mail.gmail.com> References: <4B83E46C.60301@american.edu> <45d1ab481002231139k303166cfh787cefed212f293a@mail.gmail.com> <3d375d731002231212h6e3f4b77le5f87fc92c6edf3e@mail.gmail.com> <3d375d731002231241m4eb33430peaca4696a112ee94@mail.gmail.com> <3d375d731002231255k2d43eaccg25e992c1bf40d656@mail.gmail.com> <45d1ab481002231305i3af5431cw974a1424bee78cc0@mail.gmail.com> Message-ID: <45d1ab481002231310l6581b1cbm35029a18de015653@mail.gmail.com> Incidentally, I noted the following in the discussion, but since those don't get as much viewership (and since I'm about to edit the docstring anyway): presently, the Example in uniform's docstring generates a plot using matplotlib.pyplot - is generating a plot really consistent w/ the spirit of wanting our examples to pass automated testing? DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Tue Feb 23 16:44:32 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 23 Feb 2010 13:44:32 -0800 Subject: [Numpy-discussion] random.uniform documentation bug? In-Reply-To: <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> References: <4B83E46C.60301@american.edu> <3d375d731002231029k5da6837va0762db93e4f5f51@mail.gmail.com> Message-ID: <45d1ab481002231344s4ce6fb81u232fa76d49f1032c@mail.gmail.com> OK, fixed in Wiki. 
(& "OK to apply" set to "Yes") DG On Tue, Feb 23, 2010 at 10:29 AM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 08:21, Alan G Isaac wrote: > > This behavior does not match the current documentation. > > > >>>> np.random.uniform(low=0.5,high=0.5) > > 0.5 > >>>> np.random.uniform(low=0.5,high=0.4) > > 0.48796883601707464 > > > > I assume this behavior is intentional and it is > > the documentation that is in error (for the case > > when high<=low)? > > Well, the documentation just doesn't really address high<=low. In any > case, the claim that the results are in [low, high) is wrong thanks to > floating point arithmetic. It has exactly the same issues as the > standard library's random.uniform() and should be updated to reflect > that fact: > > > random.uniform(a, b) > Return a random floating point number N such that a <= N <= b for a > <= b and b <= N <= a for b < a. > > The end-point value b may or may not be included in the range > depending on floating-point rounding in the equation a + (b-a) * > random(). > > > We should address the high < low case in the documentation because > we're not going to bother raising an exception when high < low. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at enthought.com Tue Feb 23 17:40:25 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Tue, 23 Feb 2010 16:40:25 -0600 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: References: <1266871559.5645.32.camel@idol> <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> <1266872803.5645.37.camel@idol> Message-ID: <534C7B63-A445-4710-853B-5531AE37C03D@enthought.com> On Feb 23, 2010, at 1:03 AM, Charles R Harris wrote: > > > On Mon, Feb 22, 2010 at 2:06 PM, Pauli Virtanen wrote: > ma, 2010-02-22 kello 14:01 -0700, Charles R Harris kirjoitti: > > On Mon, Feb 22, 2010 at 1:58 PM, Robert Kern > > wrote: > [clip] > > > Why? PyCObjects don't serialize at all. They would never show > up in > > > a pickle to begin with. > > > > So what happens to them? I'm not that familiar with pickles > > arraydescr_reduce pulls out the datetime info from the metadata dict, > and converts it to a tuple containing something pickleable. And > everything in reverse in *_setstate > > > Everything works except the import of the {ufunc, multiarray} api's > from the modules. If the api's are stored as PyCObjects then all the > tests pass. I'll try to get that last bit fixed up tomorrow. Just back from PyCon. It is useful to know that the Python core team feels that NumPy porting to 3k is a *big* deal. Lots of people would be interested in your experiences with porting NumPy to Python 3k. In particular, the fact that they removed APIs and the extra pain that causes is useful information in their decision making. I'm not sure how big a deal it is that we have to change the API to handle PyCapsules instead of PyCObjects, but if you have any feedback to the python core dev team, they would be interested in hearing it --- particularly right after PyCon. 
-Travis > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 23 17:47:44 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 16:47:44 -0600 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) In-Reply-To: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> Message-ID: <3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com> On Tue, Feb 23, 2010 at 13:18, Tom Loredo wrote: > > Hi- > > I've been testing Python-2.7a3 on Mac OS 10.6.2. ?NumPy-1.4.0 will > not install; it appears something has changed within distutils that > breaks it: > ?File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in > ? ?_old_init_posix = distutils.sysconfig._init_posix > AttributeError: 'module' object has no attribute '_init_posix' This line is actually unused. You may delete it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From charlesr.harris at gmail.com Tue Feb 23 17:54:46 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 15:54:46 -0700 Subject: [Numpy-discussion] datetime uses API deprecated in python3.1 In-Reply-To: <534C7B63-A445-4710-853B-5531AE37C03D@enthought.com> References: <1266871559.5645.32.camel@idol> <3d375d731002221258s5994348dg7abf7cd24577ade4@mail.gmail.com> <1266872803.5645.37.camel@idol> <534C7B63-A445-4710-853B-5531AE37C03D@enthought.com> Message-ID: On Tue, Feb 23, 2010 at 3:40 PM, Travis Oliphant wrote: > > On Feb 23, 2010, at 1:03 AM, Charles R Harris wrote: > > > > On Mon, Feb 22, 2010 at 2:06 PM, Pauli Virtanen wrote: > >> ma, 2010-02-22 kello 14:01 -0700, Charles R Harris kirjoitti: >> > On Mon, Feb 22, 2010 at 1:58 PM, Robert Kern >> > wrote: >> [clip] >> > > Why? PyCObjects don't serialize at all. They would never show up in >> > > a pickle to begin with. >> > >> > So what happens to them? I'm not that familiar with pickles >> >> arraydescr_reduce pulls out the datetime info from the metadata dict, >> and converts it to a tuple containing something pickleable. And >> everything in reverse in *_setstate >> >> > Everything works except the import of the {ufunc, multiarray} api's from > the modules. If the api's are stored as PyCObjects then all the tests pass. > I'll try to get that last bit fixed up tomorrow. > > > Just back from PyCon. It is useful to know that the Python core team > feels that NumPy porting to 3k is a *big* deal. > > Lots of people would be interested in your experiences with porting NumPy > to Python 3k. In particular, the fact that they removed APIs and the extra > pain that causes is useful information in their decision making. > > I'm not sure how big a deal it is that we have to change the API to handle > PyCapsules instead of PyCObjects, but if you have any feedback to the python > core dev team, they would be interested in hearing it --- particularly right > after PyCon. 
> > The PyCapsule transition is done, but needs some cleanup. I'm thinking about the best approach to the latter. I put some functions in compat_py3k.h that are drop in replacements for our needs, but they hide the improved error handling of PyCapsule. I'm thinking a better approach might be to use the replacement functions to bring the error support of PyCObject closer to PyCapsule. Because the current fix is a bunch of #ifdefs in the code the substitution could be made bit by bit, rewriting the surrounding code to support the new error handling. f2py still needs fixing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Feb 23 20:52:15 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 10:52:15 +0900 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> Message-ID: <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> On Mon, Feb 22, 2010 at 11:27 AM, Ralf Gommers wrote: > >> > Hi David, did you find time to put those Atlas binaries somewhere? I am putting them into numpy subversion as we speak (in vendor: http://svn.scipy.org/svn/numpy/vendor). cheers, David From bsouthey at gmail.com Tue Feb 23 21:04:40 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 23 Feb 2010 20:04:40 -0600 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) In-Reply-To: <3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com> References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> <3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 4:47 PM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 13:18, Tom Loredo wrote: >> >> Hi- >> >> I've been testing Python-2.7a3 on Mac OS 10.6.2. 
NumPy-1.4.0 will >> not install; it appears something has changed within distutils that >> breaks it: >> File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in >> _old_init_posix = distutils.sysconfig._init_posix >> AttributeError: 'module' object has no attribute '_init_posix' > > This line is actually unused. You may delete it. > > -- > Robert Kern > Do you want this as a numpy bug report? Bruce From dlc at halibut.com Tue Feb 23 21:04:49 2010 From: dlc at halibut.com (David Carmean) Date: Tue, 23 Feb 2010 18:04:49 -0800 Subject: [Numpy-discussion] RHEL 5.3+ build? Message-ID: <20100223180448.A21089@halibut.com> Does anyone use/build this stuff on RHEL 5.3+ (x64)? :) Seems not so much. I'd like to use numpy (and PyTables) for a few tasks where it would be much more efficient to have much of the processing performed on the servers generating the data (about 400 systems) than backhauling the huge amount of input data across our WAN around the continent. However, the vast majority of these systems are 64-bit RedHat EL 5.3 and 5.4, and I'm having trouble building numpy 1.3.0 with gcc. I found an RPM for 1.2.0 so that will get me through most of the R&D, and I'd rather wait for the next stable release before spending any more time trying to build. But I'm wondering if there's anybody on the team or in the active contributors/users world who is regularly building numpy on various flavors of CentOS/RHEL5.x. Thanks.
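[Editorial note: the `_init_posix` failure in Tom's traceback above comes from reading a private distutils attribute that Python 2.7a3 removed. Robert's fix is simply to delete the line; a defensive alternative is to guard the lookup. The sketch below is hypothetical — `safe_hook` is not anything numpy.distutils provides — and just illustrates the guard:]

```python
import types

def safe_hook(module, name):
    # Hypothetical guard: return a private hook if the module still
    # provides it, else None instead of raising AttributeError --
    # what a defensive version of ccompiler.py line 17 could do.
    return getattr(module, name, None)

# Simulate the 2.7a3 situation with a bare module object that,
# like the new distutils.sysconfig, has no _init_posix:
sysconfig_27a3 = types.ModuleType("distutils.sysconfig")
assert safe_hook(sysconfig_27a3, "_init_posix") is None
```

[The same one-line `getattr(..., None)` pattern works for any private attribute that may disappear between interpreter versions.]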
From josef.pktd at gmail.com Tue Feb 23 21:05:32 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 23 Feb 2010 21:05:32 -0500 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> Message-ID: <1cd32cbb1002231805n4ce6af7cn192380b2742a66a6@mail.gmail.com> On Tue, Feb 23, 2010 at 8:52 PM, David Cournapeau wrote: > On Mon, Feb 22, 2010 at 11:27 AM, Ralf Gommers > wrote: >> >>> >> Hi David, did you find time to put those Atlas binaries somewhere? > > I am putting them into numpy subversion as we speak (in vendor: > http://svn.scipy.org/svn/numpy/vendor). Thank you, Are they ok to link to as an update in http://scipy.org/Installing_SciPy/Windows#head-cd37d819e333227e327079e4c2a2298daf625624 the old Atlas is 3.6.0 Josef > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From cournape at gmail.com Tue Feb 23 21:08:56 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 11:08:56 +0900 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <1cd32cbb1002231805n4ce6af7cn192380b2742a66a6@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> <1cd32cbb1002231805n4ce6af7cn192380b2742a66a6@mail.gmail.com> Message-ID: <5b8d13221002231808p2a3d842g3d897fb6c3153eca@mail.gmail.com> On Wed, Feb 24, 2010 at 11:05 AM, wrote: > On Tue, Feb 23, 2010 at 8:52 PM, David Cournapeau wrote: >> On Mon, Feb 22, 2010 at 11:27 AM, Ralf Gommers >> wrote: >>> >>>> >>> Hi David, did you find time to put those Atlas binaries somewhere? 
>> >> I am putting them into numpy subversion as we speak (in vendor: >> http://svn.scipy.org/svn/numpy/vendor). > > Thank you, > > Are they ok to link to as an update in > ?http://scipy.org/Installing_SciPy/Windows#head-cd37d819e333227e327079e4c2a2298daf625624 Maybe we should put them also somewhere on the website directly - I am not sure whether it is good idea to download relatively large binaries directly from svn. cheers, David From ralf.gommers at googlemail.com Tue Feb 23 21:19:04 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 24 Feb 2010 10:19:04 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> Message-ID: On Wed, Feb 24, 2010 at 9:52 AM, David Cournapeau wrote: > On Mon, Feb 22, 2010 at 11:27 AM, Ralf Gommers > wrote: > > > >> > > Hi David, did you find time to put those Atlas binaries somewhere? > > I am putting them into numpy subversion as we speak (in vendor: > http://svn.scipy.org/svn/numpy/vendor). > > Thanks a lot! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Feb 23 22:01:39 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 12:01:39 +0900 Subject: [Numpy-discussion] RHEL 5.3+ build? In-Reply-To: <20100223180448.A21089@halibut.com> References: <20100223180448.A21089@halibut.com> Message-ID: <5b8d13221002231901v637c047bwf7f4ea622d2eb8ae@mail.gmail.com> On Wed, Feb 24, 2010 at 11:04 AM, David Carmean wrote: > > > Does anyone use/build this stuff on RHEL 5.3+ (x64)? ?:) ?Seems not so much. 
> > I'd like to use numpy (and PyTables) for a few tasks where it would be much > more efficient to have much of the processing performed on the servers generating > the data (about 400 systems) than backhauling the huge amount of input data > across our WAN around the continent. ?However, the vast majority of these systems > are 64-bit RedHat EL 5.3 and 5.4, and I'm having trouble building numpy 1.3.0 > with gcc. > > I found an RPM for 1.2.0 so that will get me through most of the R&D, and I'd > rather wait for the next stable release before spending any more time trying > to build. ?But I'm wondering if there's anybody on the team or in the active > contributors/users world who is regularly building numpy on various flavors of > CentOS/RHEL5.x. Please tell us what does not work, and what you did to build numpy before it fails. David From charlesr.harris at gmail.com Tue Feb 23 22:14:19 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 20:14:19 -0700 Subject: [Numpy-discussion] How to test f2py? Message-ID: Hi All, I've made PyCObject -> PyCapsule changes to f2py for python3.1. How can I check that f2py still works as advertised before making a commit? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 23 22:31:25 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 21:31:25 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: References: Message-ID: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> On Tue, Feb 23, 2010 at 21:14, Charles R Harris wrote: > Hi All, > > I've made PyCObject -> PyCapsule changes to f2py for python3.1. How can I > check that f2py still works as advertised before making a commit? 
numpy/f2py/tests/run_all.py -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Tue Feb 23 22:51:51 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 20:51:51 -0700 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 8:31 PM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 21:14, Charles R Harris > wrote: > > Hi All, > > > > I've made PyCObject -> PyCapsule changes to f2py for python3.1. How can I > > check that f2py still works as advertised before making a commit? > > numpy/f2py/tests/run_all.py > > It's not py3k compatible... also it doesn't find the f2py2e module even though it has been installed with numpy. ? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Feb 23 22:54:59 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 21:54:59 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> Message-ID: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> On Tue, Feb 23, 2010 at 21:51, Charles R Harris wrote: > > On Tue, Feb 23, 2010 at 8:31 PM, Robert Kern wrote: >> >> On Tue, Feb 23, 2010 at 21:14, Charles R Harris >> wrote: >> > Hi All, >> > >> > I've made PyCObject -> PyCapsule changes to f2py for python3.1. How can >> > I >> > check that f2py still works as advertised before making a commit? >> >> numpy/f2py/tests/run_all.py > > It's not py3k compatible... So make it py3k compatible. > also it doesn't find the f2py2e module even > though it has been installed with numpy. ? 
I don't understand this. Error message? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Tue Feb 23 23:12:30 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 21:12:30 -0700 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 8:54 PM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 21:51, Charles R Harris > wrote: > > > > On Tue, Feb 23, 2010 at 8:31 PM, Robert Kern > wrote: > >> > >> On Tue, Feb 23, 2010 at 21:14, Charles R Harris > >> wrote: > >> > Hi All, > >> > > >> > I've made PyCObject -> PyCapsule changes to f2py for python3.1. How > can > >> > I > >> > check that f2py still works as advertised before making a commit? > >> > >> numpy/f2py/tests/run_all.py > > > > It's not py3k compatible... > > So make it py3k compatible. > It's autoconverted in build/py3k. It is, however, not installed anywhere. > > > also it doesn't find the f2py2e module even > > though it has been installed with numpy. ? > > I don't understand this. Error message? > > Running /usr/bin/python /home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py 10 --quiet Traceback (most recent call last): File "/home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py", line 10, in import f2py2e ImportError: No module named f2py2e TEST FAILURE (status=1) So the import is wrong. The question is: did this used to work? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
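[Editorial note: the `ImportError: No module named f2py2e` above is the old standalone module name; after the merge the code lives at `numpy.f2py`. One way to keep the legacy test scripts importable under both layouts is a small shim — a sketch, not anything f2py itself ships:]

```python
import importlib

def load_f2py():
    """Return the f2py module, trying the old standalone name first
    and then the location inside NumPy (compatibility sketch)."""
    for name in ("f2py2e", "numpy.f2py"):
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("f2py not importable under any known name")
```

[With NumPy installed, a script can then do `f2py2e = load_f2py()` at the top and leave the rest of its `f2py2e.` references untouched.]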
URL: From robert.kern at gmail.com Tue Feb 23 23:19:32 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Feb 2010 22:19:32 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> Message-ID: <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> On Tue, Feb 23, 2010 at 22:12, Charles R Harris wrote: > > On Tue, Feb 23, 2010 at 8:54 PM, Robert Kern wrote: >> >> On Tue, Feb 23, 2010 at 21:51, Charles R Harris >> wrote: >> > >> > On Tue, Feb 23, 2010 at 8:31 PM, Robert Kern >> > wrote: >> >> >> >> On Tue, Feb 23, 2010 at 21:14, Charles R Harris >> >> wrote: >> >> > Hi All, >> >> > >> >> > I've made PyCObject -> PyCapsule changes to f2py for python3.1. How >> >> > can >> >> > I >> >> > check that f2py still works as advertised before making a commit? >> >> >> >> numpy/f2py/tests/run_all.py >> > >> > It's not py3k compatible... >> >> So make it py3k compatible. > > It's autoconverted in build/py3k. It is, however, not installed anywhere. > >> >> > also it doesn't find the f2py2e module even >> > though it has been installed with numpy. >> >> I don't understand this. Error message? >> > > Running /usr/bin/python > /home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py > 10 --quiet > Traceback (most recent call last): > File > "/home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py", > line 10, in > import f2py2e > ImportError: No module named f2py2e > TEST FAILURE (status=1) > > So the import is wrong. The question is: did this used to work? From the independent f2py2e days, yes. Just change those import lines to "from numpy import f2py as f2py2e". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From cournape at gmail.com Tue Feb 23 23:27:53 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 13:27:53 +0900 Subject: [Numpy-discussion] long(a) vs a.__long__() for scalar arrays In-Reply-To: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> References: <5b8d13221002092212v49b488c9uc0b9a8a97588bc3a@mail.gmail.com> Message-ID: <5b8d13221002232027o3c4d8e22j98a24751ccf005ec@mail.gmail.com> On Wed, Feb 10, 2010 at 3:12 PM, David Cournapeau wrote: > Hi, > > I am a bit puzzled by the protocol for long(a) where a is a scalar > array. For example, for a = np.float128(1), I was expecting long(a) to > call a.__long__, but it does not look like it is the case. int(a) does > not call a.__int__ either. Where does the long conversion happen in > numpy for scalar arrays ? For the record, this happens in the PyNumber machinery (the exact C function doing it is longdouble_long in scalarmath module). cheers, David From charlesr.harris at gmail.com Tue Feb 23 23:51:18 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 23 Feb 2010 21:51:18 -0700 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 9:19 PM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 22:12, Charles R Harris > wrote: > > > > On Tue, Feb 23, 2010 at 8:54 PM, Robert Kern > wrote: > >> > >> On Tue, Feb 23, 2010 at 21:51, Charles R Harris > >> wrote: > >> > > >> > On Tue, Feb 23, 2010 at 8:31 PM, Robert Kern > >> > wrote: > >> >> > >> >> On Tue, Feb 23, 2010 at 21:14, Charles R Harris > >> >> wrote: > >> >> > Hi All, > >> >> > > >> >> > I've made PyCObject -> PyCapsule changes to f2py for python3.1. 
How > >> >> > can > >> >> > I > >> >> > check that f2py still works as advertised before making a commit? > >> >> > >> >> numpy/f2py/tests/run_all.py > >> > > >> > It's not py3k compatible... > >> > >> So make it py3k compatible. > > > > It's autoconverted in build/py3k. It is, however, not installed anywhere. > > > >> > >> > also it doesn't find the f2py2e module even > >> > though it has been installed with numpy. ? > >> > >> I don't understand this. Error message? > >> > > > > Running /usr/bin/python > > > /home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py > > 10 --quiet > > Traceback (most recent call last): > > File > > > "/home/charris/Workspace/numpy.git/numpy/f2py/tests/f77/return_character.py", > > line 10, in > > import f2py2e > > ImportError: No module named f2py2e > > TEST FAILURE (status=1) > > > > So the import is wrong. The question is: did this used to work? > > From the independent f2py2e days, yes. Just change those import lines > to "from numpy import f2py as f2py2e" > Boy, that code is *old*, it still uses Numeric ;) I don't think it can really be considered a test suite, it needs lotsa love and it needs to get installed. Anyway, f2py with py3k turns out to have string problems, and I expect other type problems, so there is considerable work that needs to be done to bring it up to snuff. Sounds like gsoc material. I'm not going to worry about it any more until later. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Wed Feb 24 02:42:34 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 16:42:34 +0900 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> Message-ID: <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> On Wed, Feb 24, 2010 at 11:19 AM, Ralf Gommers wrote: > > > On Wed, Feb 24, 2010 at 9:52 AM, David Cournapeau > wrote: >> >> On Mon, Feb 22, 2010 at 11:27 AM, Ralf Gommers >> wrote: >> > >> >> >> > Hi David, did you find time to put those Atlas binaries somewhere? >> >> I am putting them into numpy subversion as we speak (in vendor: >> http://svn.scipy.org/svn/numpy/vendor). >> > Thanks a lot! So here is how I see things in the near future for release: - compile a simple binary installer for mac os x and windows (no need for doc or multiple archs) from 1.4.x - test this with the scipy binary out there (running the full test suites), ideally other well known packages as well (matplotlib, pytables, etc...). - if it works for you, or you cannot easily test it, put it for wide testing as a basis for the 1.4.0.1 binary - if it works, make a RC1 for Numpy 1.4.0.1 ("full" binaries). I think we need to push this ASAP to recover from the current confusion w.r.t. binaries. cheers, David From cournape at gmail.com Wed Feb 24 03:15:44 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 17:15:44 +0900 Subject: [Numpy-discussion] How to test f2py? 
In-Reply-To: References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> Message-ID: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris wrote: > > Boy, that code is *old*, it still uses Numeric ;) I don't think it can > really be considered a test suite, it needs lotsa love and it needs to get > installed. Anyway, f2py with py3k turns out to have string problems, and I > expect other type problems, so there is considerable work that needs to be > done to bring it up to snuff. Sounds like gsoc material. I'm not going to > worry about it any more until later. If it would take a GSoC to make it up to work, it may be time better spent on improving fwrap. Maybe Pearu would have some ideas, but the problem I see with f2py today is that it is pretty much Pearu's work only, and given that the code has a relatively low unit test suite, the code is not that easy to dive in for someone else. cheers, David From pav at iki.fi Wed Feb 24 03:19:08 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 24 Feb 2010 10:19:08 +0200 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> Message-ID: <1266999548.11747.2.camel@Nokia-N900-42-11> I don't think the situation is that bad with f2py. I suppose it will be enough to erect unicode vs. Bytes barrier where the file i/o is done, and let f2py work internally with unicode. Doesn't sound so bad, but I'd have to take a closer look. 
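[Editorial note: the bytes/unicode barrier Pauli describes — decode once where file I/O happens, and let everything downstream work with `str` — can be sketched in a few lines. The function name and the choice of latin-1 (a byte-transparent codec) are assumptions for illustration, not f2py API:]

```python
import os
import tempfile

def read_fortran_source(path, encoding="latin-1"):
    # Bytes exist only at the I/O boundary; the caller always gets str.
    with open(path, "rb") as f:
        return f.read().decode(encoding)

# Round-trip through a scratch file:
fd, path = tempfile.mkstemp(suffix=".f")
os.write(fd, b"      subroutine foo(a)\n")
os.close(fd)
src = read_fortran_source(path)
os.remove(path)
assert isinstance(src, str) and "subroutine foo" in src
```

[Writing output would mirror this: encode once, at the single point where the generated file is written.]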
-- Pauli Virtanen ----- Alkuper?inen viesti ----- > On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris > wrote: > > > > > Boy, that code is *old*, it still uses Numeric ;) I don't think it can > > really be considered a test suite, it needs lotsa love and it needs to get > > installed. Anyway, f2py with py3k turns out to have string problems, and I > > expect other type problems, so there is considerable work that needs to be > > done to bring it up to snuff. Sounds like gsoc material. I'm not going to > > worry about it any more until later. > > If it would take a GSoC to make it up to work, it may be time better > spent on improving fwrap. > > Maybe Pearu would have some ideas, but the problem I see with f2py > today is that it is pretty much Pearu's work only, and given that the > code has a relatively low unit test suite, the code is not that easy > to dive in for someone else. > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From cournape at gmail.com Wed Feb 24 03:33:05 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 24 Feb 2010 17:33:05 +0900 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <1266999548.11747.2.camel@Nokia-N900-42-11> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <1266999548.11747.2.camel@Nokia-N900-42-11> Message-ID: <5b8d13221002240033h626df36ud33ba10cdebf222@mail.gmail.com> On Wed, Feb 24, 2010 at 5:19 PM, Pauli Virtanen wrote: > I don't think the situation is that bad with f2py. I suppose it will be enough to erect unicode vs. Bytes barrier where the file i/o is done, and let f2py work internally with unicode. 
Doesn't sound so bad, but I'd have to take a closer look. How did you handle name clash in numpy for 2to3 ? For example, f2py uses things like dict quite a lot as argument for functions, and it does not look like 2to3 handles this (or does it ?). Of course, I could try a brute force sed script as a pre-processing step, but maybe you got a better way of doing this, cheers, David From pav at iki.fi Wed Feb 24 03:54:05 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 24 Feb 2010 10:54:05 +0200 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <5b8d13221002240033h626df36ud33ba10cdebf222@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <1266999548.11747.2.camel@Nokia-N900-42-11> <5b8d13221002240033h626df36ud33ba10cdebf222@mail.gmail.com> Message-ID: <1267001645.2728.317.camel@talisman> ke, 2010-02-24 kello 17:33 +0900, David Cournapeau kirjoitti: > On Wed, Feb 24, 2010 at 5:19 PM, Pauli Virtanen wrote: > > I don't think the situation is that bad with f2py. I suppose it will > > be enough to erect unicode vs. Bytes barrier where the file i/o is > > done, and let f2py work internally with unicode. Doesn't sound so > > bad, but I'd have to take a closer look. > > How did you handle name clash in numpy for 2to3? For example, f2py > uses things like dict quite a lot as argument for functions, and it > does not look like 2to3 handles this (or does it ?). I suppose you mean using "dict" as a variable name, and f2py using types.DictType which is "dict" in Py3? 2to3 does not handle those. Using builtin names as variable names is bad practice, so when I met that in SVN, I just changed the variable names to something more sane. 
> Of course, I > could try a brute force sed script as a pre-processing step, but maybe > you got a better way of doing this, The best alternative, imho, is not to use "dict" as a variable name at all. We should make that change manually in SVN sources, both for Py2 and Py3. Grepping the f2py source shows that this problem occurs only in auxfunc.replace, so changing that shouldn't be too much work. Pauli From david at silveregg.co.jp Wed Feb 24 04:04:33 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 24 Feb 2010 18:04:33 +0900 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <1267001645.2728.317.camel@talisman> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <1266999548.11747.2.camel@Nokia-N900-42-11> <5b8d13221002240033h626df36ud33ba10cdebf222@mail.gmail.com> <1267001645.2728.317.camel@talisman> Message-ID: <4B84EBA1.7070508@silveregg.co.jp> Pauli Virtanen wrote: > The best alternative, imho, is not to use "dict" as a variable name at > all. We should make that change manually in SVN sources, both for Py2 > and Py3. Agreed - the changes should be put in the sources. Will do so tonight after work unless someone beats me to it. > Grepping the f2py source shows that this problem occurs only in > auxfunc.replace, so changing that shouldn't be too much work. I am playing a bit with fftpack, and very few modifications are needed for f2py to run on it. Now, I "just" have to fix the generated C code to see what's going on. David P.S: is it expected that numpy cannot be built in-place correctly under py3k ? From pav at iki.fi Wed Feb 24 04:10:45 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 24 Feb 2010 11:10:45 +0200 Subject: [Numpy-discussion] How to test f2py? 
In-Reply-To: <4B84EBA1.7070508@silveregg.co.jp> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <1266999548.11747.2.camel@Nokia-N900-42-11> <5b8d13221002240033h626df36ud33ba10cdebf222@mail.gmail.com> <1267001645.2728.317.camel@talisman> <4B84EBA1.7070508@silveregg.co.jp> Message-ID: <1267002645.2728.320.camel@talisman> ke, 2010-02-24 kello 18:04 +0900, David Cournapeau kirjoitti: [clip] > P.S: is it expected that numpy cannot be built in-place correctly under > py3k? Yes, unfortunately. 2to3 cannot really be run in-place, and I did not want to engage distutils in a fight how to read the sources from a different location. So how it works now is that first a complete Py3 converted copy is made in "build/py3k", and then the setup.py there is run. Pauli From ralf.gommers at googlemail.com Wed Feb 24 05:45:13 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 24 Feb 2010 18:45:13 +0800 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> Message-ID: On Wed, Feb 24, 2010 at 3:42 PM, David Cournapeau wrote: > > So here is how I see things in the near future for release: > - compile a simple binary installer for mac os x and windows (no need > for doc or multiple archs) from 1.4.x > - test this with the scipy binary out there (running the full test > suites), ideally other well known packages as well (matplotlib, > pytables, etc...). 
> - if it works for you, or you cannot easily test it, put it for wide > testing as a basis for the 1.4.0.1 binary > - if it works, make a RC1 for Numpy 1.4.0.1 ("full" binaries). > > I think we need to push this ASAP to recover from the current > confusion w.r.t. binaries. > > That's a sensible plan, I'll start on it right away. Just to double-check, can the 1.4.x branch be released as-is? How about the version, the version scheme major.minor.micro does not allow for your proposed 1.4.0.1. Do you want to just drop the last .1 or make this 1.4.1? Patrick, are you okay with David's plan as well? Do you want to do this in parallel so we both generate a complete set of binaries? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Wed Feb 24 06:19:51 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 24 Feb 2010 20:19:51 +0900 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> Message-ID: <4B850B57.8050600@silveregg.co.jp> Ralf Gommers wrote: > On Wed, Feb 24, 2010 at 3:42 PM, David Cournapeau > wrote: > > > So here is how I see things in the near future for release: > - compile a simple binary installer for mac os x and windows (no need > for doc or multiple archs) from 1.4.x > - test this with the scipy binary out there (running the full test > suites), ideally other well known packages as well (matplotlib, > pytables, etc...). > - if it works for you, or you cannot easily test it, put it for wide > testing as a basis for the 1.4.0.1 binary > - if it works, make a RC1 for Numpy 1.4.0.1 ("full" binaries). > > I think we need to push this ASAP to recover from the current > confusion w.r.t. binaries. > > That's a sensible plan, I'll start on it right away. Great. 
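[Editorial note: the out-of-place py3k build Pauli described earlier in the thread — copy the sources into build/py3k, run 2to3 on the copy, then build there — reduces to something like the sketch below. The helper name is hypothetical and the 2to3 invocation is left as a comment, since the real logic lives in numpy's build scripts:]

```python
import os
import shutil

def make_py3k_tree(src, build_dir="build"):
    """Mirror src into build/py3k so 2to3 can rewrite the copy in
    place, leaving the original tree untouched (sketch only)."""
    dest = os.path.join(build_dir, "py3k")
    if os.path.exists(dest):
        shutil.rmtree(dest)
    shutil.copytree(src, dest)
    # subprocess.call(["2to3", "-w", dest])  # conversion step, elided
    return dest
```

[This is also why an in-place build cannot work: the converted sources and the originals would collide in the same directory.]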
Let me know of any glitch. > Just to double-check, can the 1.4.x branch be released as-is? How about > the version, the version scheme major.minor.micro does not allow for > your proposed 1.4.0.1. Do you want to just drop the last .1 or make this > 1.4.1? Yes, 1.4.1 is fine. There are a few fixes besides the ABI fix now, so no need to complicate things further. I think 1.4.x can serve as the basis for 1.4.1 as is. I have not checked recently if it builds OK on MS compiler, but not much has changed. cheers, David From mdroe at stsci.edu Wed Feb 24 08:59:05 2010 From: mdroe at stsci.edu (Michael Droettboom) Date: Wed, 24 Feb 2010 08:59:05 -0500 Subject: [Numpy-discussion] RHEL 5.3+ build? In-Reply-To: <20100223180448.A21089@halibut.com> References: <20100223180448.A21089@halibut.com> Message-ID: <4B8530A9.2060707@stsci.edu> David Carmean wrote: > Does anyone use/build this stuff on RHEL 5.3+ (x64)? :) Seems not so much. > > I'd like to use numpy (and PyTables) for a few tasks where it would be much > more efficient to have much of the processing performed on the servers generating > the data (about 400 systems) than backhauling the huge amount of input data > across our WAN around the continent. However, the vast majority of these systems > are 64-bit RedHat EL 5.3 and 5.4, and I'm having trouble building numpy 1.3.0 > with gcc. > > I found an RPM for 1.2.0 so that will get me through most of the R&D, and I'd > rather wait for the next stable release before spending any more time trying > to build. But I'm wondering if there's anybody on the team or in the active > contributors/users world who is regularly building numpy on various flavors of > CentOS/RHEL5.x. > We (STScI) routinely build Numpy on RHEL5.x 64-bit systems for our internal use. We need more detail about what you're doing and what errors you're seeing to diagnose the problem. 
Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA From patrickmarshwx at gmail.com Wed Feb 24 09:17:07 2010 From: patrickmarshwx at gmail.com (Patrick Marsh) Date: Wed, 24 Feb 2010 08:17:07 -0600 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> Message-ID: This sounds good to me. I also like the idea of doing this in parallel so we both have a complete set of binaries - at least on the Windows side. I'm still having issues with my MBP, but hope to have those resolved later today. Patrick On Wed, Feb 24, 2010 at 4:45 AM, Ralf Gommers wrote: > On Wed, Feb 24, 2010 at 3:42 PM, David Cournapeau wrote: > >> >> So here is how I see things in the near future for release: >> - compile a simple binary installer for mac os x and windows (no need >> for doc or multiple archs) from 1.4.x >> - test this with the scipy binary out there (running the full test >> suites), ideally other well known packages as well (matplotlib, >> pytables, etc...). >> - if it works for you, or you cannot easily test it, put it for wide >> testing as a basis for the 1.4.0.1 binary >> - if it works, make a RC1 for Numpy 1.4.0.1 ("full" binaries). >> >> I think we need to push this ASAP to recover from the current >> confusion w.r.t. binaries. >> >> That's a sensible plan, I'll start on it right away. > > Just to double-check, can the 1.4.x branch be released as-is? How about the > version, the version scheme major.minor.micro does not allow for your > proposed 1.4.0.1. Do you want to just drop the last .1 or make this 1.4.1? > > > Patrick, are you okay with David's plan as well? Do you want to do this in > parallel so we both generate a complete set of binaries? 
> > Cheers, > Ralf > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- Patrick Marsh Ph.D. Student / NSSL Liaison to the HWT School of Meteorology / University of Oklahoma Cooperative Institute for Mesoscale Meteorological Studies National Severe Storms Laboratory http://www.patricktmarsh.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrickmarshwx at gmail.com Wed Feb 24 09:20:07 2010 From: patrickmarshwx at gmail.com (Patrick Marsh) Date: Wed, 24 Feb 2010 08:20:07 -0600 Subject: [Numpy-discussion] Building Windows binaries on OS X In-Reply-To: <4B850B57.8050600@silveregg.co.jp> References: <5b8d13221002081754j4b266b6dq50bfa98fda271ac1@mail.gmail.com> <5b8d13221002231752s1f5ede67p486e984e19a1dd8c@mail.gmail.com> <5b8d13221002232342r4fe58324i5e357a06dfa76894@mail.gmail.com> <4B850B57.8050600@silveregg.co.jp> Message-ID: On Wed, Feb 24, 2010 at 5:19 AM, David Cournapeau wrote: > Ralf Gommers wrote: > > On Wed, Feb 24, 2010 at 3:42 PM, David Cournapeau > > wrote: > > > > > > So here is how I see things in the near future for release: > > - compile a simple binary installer for mac os x and windows (no need > > for doc or multiple archs) from 1.4.x > > - test this with the scipy binary out there (running the full test > > suites), ideally other well known packages as well (matplotlib, > > pytables, etc...). > > - if it works for you, or you cannot easily test it, put it for wide > > testing as a basis for the 1.4.0.1 binary > > - if it works, make a RC1 for Numpy 1.4.0.1 ("full" binaries). > > > > I think we need to push this ASAP to recover from the current > > confusion w.r.t. binaries. > > > > That's a sensible plan, I'll start on it right away. > > Great. Let me know of any glitch. > > > Just to double-check, can the 1.4.x branch be released as-is? 
How about > > the version, the version scheme major.minor.micro does not allow for > > your proposed 1.4.0.1. Do you want to just drop the last .1 or make this > > 1.4.1? > > Yes, 1.4.1 is fine. There are a few fixes besides the ABI fix now, so no > need to complicate things further. > > I think 1.4.x can serve as the basis for 1.4.1 as is. I have not checked > recently if it builds OK on MS compiler, but not much has changed. > I have the 2008 MSVC compiler already installed and can test building with Python 2.6 that this afternoon. I have an old 2003 MSVC disc that I can use to install MSVC 7.1 in parallel to allow me to test earlier versions as well. Cheers, Patrick > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Patrick Marsh Ph.D. Student / NSSL Liaison to the HWT School of Meteorology / University of Oklahoma Cooperative Institute for Mesoscale Meteorological Studies National Severe Storms Laboratory http://www.patricktmarsh.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dlc at halibut.com Wed Feb 24 09:37:08 2010 From: dlc at halibut.com (David Carmean) Date: Wed, 24 Feb 2010 06:37:08 -0800 Subject: [Numpy-discussion] RHEL 5.3+ build? In-Reply-To: <4B8530A9.2060707@stsci.edu>; from mdroe@stsci.edu on Wed, Feb 24, 2010 at 08:59:05AM -0500 References: <20100223180448.A21089@halibut.com> <4B8530A9.2060707@stsci.edu> Message-ID: <20100224063707.B21089@halibut.com> On Wed, Feb 24, 2010 at 08:59:05AM -0500, Michael Droettboom wrote: > We (STScI) routinely build Numpy on RHEL5.x 64-bit systems for our internal > use. We need more detail about what you're doing and what errors you're > seeing to diagnose the problem. 
OK, that's encouraging; it may take a few days before I have time to get
back to this and provide a good narrative, but it had to do with a tool
not knowing what to do with the numpy/core/src/.src files.

From bacmsantos at gmail.com  Wed Feb 24 10:55:51 2010
From: bacmsantos at gmail.com (Bruno Santos)
Date: Wed, 24 Feb 2010 15:55:51 +0000
Subject: [Numpy-discussion] Numpy array performance issue
Message-ID: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>

Hello everyone,
I am using numpy arrays whenever I demand performance from my algorithms.
Nevertheless, I am having a performance issue at the moment, mainly
because I am iterating several times over numpy arrays. For that reason I
decided to use timeit to see the performance of different versions of the
same procedure. What surprised me was that in fact Python lists are
performing almost ten times faster than numpy. Why is this happening?
My test code is this:

list1 = [random.randint(0,20) for i in xrange(100)]
list2 = numpy.zeros(100,dtype='Int64')
for i in xrange(100):list2[i]=random.randint(0,20)

def test1(listx):
    return len([elem for elem in listx if elem >= 10])

t = timeit.Timer("test1(list1)","from __main__ import *")
>>> t.timeit()
6.4516620635986328
t = timeit.Timer("test1(list2)","from __main__ import *")
>>> t.timeit()
76.807533979415894

Thanks in advance,
Bruno
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Wed Feb 24 11:02:10 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 24 Feb 2010 10:02:10 -0600
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
Message-ID: <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>

On Wed, Feb 24, 2010 at 09:55, Bruno Santos wrote:
> Hello everyone,
> I am using numpy arrays whenever I demand performance from my
> algorithms. Nevertheless, I am having a performance issue at the moment
> mainly because I am iterating several times over numpy arrays. For that
> reason I decided to use timeit to see the performance of different
> versions of the same procedure. What surprised me was that in fact
> Python lists are performing almost ten times faster than numpy. Why is
> this happening?

Pulling items out of an array (either explicitly, or via iteration as
you are doing here) is expensive because numpy needs to make a new
object for each item. numpy stores integers and floats efficiently as
their underlying C data, not the Python object. numpy is optimized for
bulk operations on arrays, not for iteration over the items of an
array with Python for loops.

> My test code is this:
> list1 = [random.randint(0,20) for i in xrange(100)]
> list2 = numpy.zeros(100,dtype='Int64')
> for i in xrange(100):list2[i]=random.randint(0,20)
> def test1(listx):
>     return len([elem for elem in listx if elem >= 10])

The idiomatic way of doing this for numpy arrays would be:

def test2(arrx):
    return (arrx >= 10).sum()

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From bacmsantos at gmail.com  Wed Feb 24 11:21:46 2010
From: bacmsantos at gmail.com (Bruno Santos)
Date: Wed, 24 Feb 2010 16:21:46 +0000
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
Message-ID: <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>

> The idiomatic way of doing this for numpy arrays would be:
>
> def test2(arrx):
>     return (arrx >= 10).sum()
>
Even this version takes more time to run than my original Python version
with lists.

>>> def test3(listx):
...     return (listx>=10).sum()
>>> t = timeit.Timer("test3(list2)","from __main__ import *")
>>> t.timeit()
7.146049976348877

My fastest version at the moment is:

>>> def test3(listx):
...     return len(numpy.where(listx>=10)[0])
>>> t = timeit.Timer("test3(list2)","from __main__ import *")
:2: SyntaxWarning: import * only allowed at module level
>>> t.timeit()
5.8264470100402832

Thank you.
All the best,
Bruno
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Wed Feb 24 11:28:45 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 24 Feb 2010 10:28:45 -0600
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
	<699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
Message-ID: <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>

On Wed, Feb 24, 2010 at 10:21, Bruno Santos wrote:
>> The idiomatic way of doing this for numpy arrays would be:
>>
>> def test2(arrx):
>>     return (arrx >= 10).sum()
>>
> Even this version takes more time to run than my original Python version
> with lists.

Works fine for me, and gets better as the size increases:

In [1]: N = 100

In [2]: import numpy as np

In [3]: A = np.random.randint(0, 21, N)

In [4]: L = A.tolist()

In [5]: %timeit len([e for e in L if e >= 10])
100000 loops, best of 3: 15 us per loop

In [6]: %timeit (A >= 10).sum()
100000 loops, best of 3: 12.7 us per loop

In [7]: N = 1000

In [8]: %macro mm 3 4 5 6
Macro `mm` created. To execute, type its name (without quotes).
Macro contents:
A = np.random.randint(0, 21, N)
L = A.tolist()
_ip.magic("timeit len([e for e in L if e >= 10])")
_ip.magic("timeit (A >= 10).sum()")

In [9]: mm
------> mm()
10000 loops, best of 3: 103 us per loop
100000 loops, best of 3: 17.6 us per loop

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From bacmsantos at gmail.com  Wed Feb 24 11:40:09 2010
From: bacmsantos at gmail.com (Bruno Santos)
Date: Wed, 24 Feb 2010 16:40:09 +0000
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
	<699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
	<3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>
Message-ID: <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com>

Funny. Which version of python are you using? My python is still better
for small lists.
But you are right, it gets better with size; here is how the same code
performs on my computer:

In [1]: N = 100

In [2]: import numpy as np

In [3]: A = np.random.randint(0, 21, N)
   ...:

In [4]: L = A.tolist()
   ...:

In [5]: %timeit len([e for e in L if e >= 10])
100000 loops, best of 3: 6.12 us per loop

In [6]: %timeit (A >= 10).sum()
100000 loops, best of 3: 6.34 us per loop

In [7]: N = 10000

In [8]: %macro mm 3 4 5 6
Macro `mm` created. To execute, type its name (without quotes).
Macro contents:
A = np.random.randint(0, 21, N)
L = A.tolist()
_ip.magic("timeit len([e for e in L if e >= 10])")
_ip.magic("timeit (A >= 10).sum()")

In [9]: mm
------> mm()
1000 loops, best of 3: 544 us per loop
10000 loops, best of 3: 98.3 us per loop

Anyway, thank you very much for your help. I will try to change my code to
replace my for loop. I might need to come back to the mailing list if I
run into problems in the future.
All the best,
Bruno

2010/2/24 Robert Kern 

> On Wed, Feb 24, 2010 at 10:21, Bruno Santos wrote:
> >> The idiomatic way of doing this for numpy arrays would be:
> >>
> >> def test2(arrx):
> >>     return (arrx >= 10).sum()
> >>
> > Even this version takes more time to run than my original Python
> > version with lists.
>
> Works fine for me, and gets better as the size increases:
>
> In [1]: N = 100
>
> In [2]: import numpy as np
>
> In [3]: A = np.random.randint(0, 21, N)
>
> In [4]: L = A.tolist()
>
> In [5]: %timeit len([e for e in L if e >= 10])
> 100000 loops, best of 3: 15 us per loop
>
> In [6]: %timeit (A >= 10).sum()
> 100000 loops, best of 3: 12.7 us per loop
>
> In [7]: N = 1000
>
> In [8]: %macro mm 3 4 5 6
> Macro `mm` created. To execute, type its name (without quotes).
> Macro contents: > A = np.random.randint(0, 21, N) > L = A.tolist() > _ip.magic("timeit len([e for e in L if e >= 10])") > _ip.magic("timeit (A >= 10).sum()") > > > In [9]: mm > ------> mm() > 10000 loops, best of 3: 103 us per loop > 100000 loops, best of 3: 17.6 us per loop > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Feb 24 11:41:46 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Feb 2010 10:41:46 -0600 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com> <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> Message-ID: <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> On Wed, Feb 24, 2010 at 10:40, Bruno Santos wrote: > Funny. Which version of python are you using? Python 2.5.4 on OS X. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From bacmsantos at gmail.com  Wed Feb 24 12:19:17 2010
From: bacmsantos at gmail.com (Bruno Santos)
Date: Wed, 24 Feb 2010 17:19:17 +0000
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
	<699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
	<3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>
	<699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com>
	<3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com>
Message-ID: <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com>

It seems that Python 2.6.4 has a more efficient implementation of lists.
It runs faster on this version and slower on 2.5.4 on the same machine
with Debian. A lot faster, in fact.
I have been trying to fix this headache for the last couple of weeks, but
you might be able to give me a lot more optimizations that I can pick up.
I am trying to optimize the following function:

def hypergeometric(self,lindex,rindex):
    """
    loc.hypergeometric(lindex,rindex)
    Performs the hypergeometric test for the loci between lindex and
    rindex.
    Returns the minimum p-Value
    """
    aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize]
    #Create the subarray to test
    aLoci = numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]])
    #Get the values to test
    length = len(aLoci)
    lsPhasedValues = set([aLoci[i] for i in xrange(length) if i%nSize==0 and aLoci[i]>0])
    m = length/nSize
    n = (length-1)-(length/nSize-1)
    #Create a list to store the Pvalues
    lsPvalues = []
    append = lsPvalues.append
    #Calculate matches in Phased and non Phased positions
    for r in lsPhasedValues:
        #Initiate number of matches to 0
        q = sum([1 for j in xrange(length) if j%nSize==0 and aLoci[j]>=r])
        k = sum([1 for j in xrange(length) if aLoci[j]>=r])
        key = '%i,%i,%i,%i'%(q-1,m,n,k)
        try:append(dtPhyper[key])
        except KeyError:
            value = self.lphyper(q-1, m, n, k)
            append(value)
            dtPhyper[key]=value
    return min(lsPvalues)

Is there any efficient way to test the array simultaneously for two
different conditions?
All the best,
Bruno

2010/2/24 Robert Kern 

> On Wed, Feb 24, 2010 at 10:40, Bruno Santos wrote:
> > Funny. Which version of python are you using?
>
> Python 2.5.4 on OS X.
>
> -- 
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
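The two counting comprehensions inside that loop can be cross-checked against boolean-array equivalents on synthetic data; this is a sketch, with `nSize`, `aLoci`, and `r` as stand-ins for the values used in the function:

```python
import numpy as np

nSize = 21                       # stand-in for the module-level nSize
rng = np.random.RandomState(0)   # fixed seed so the check is repeatable
aLoci = rng.randint(0, 50, 210)  # stand-in for the stacked counts array
length = len(aLoci)
r = 10

# Comprehension versions, as in the post above.
q_loop = sum([1 for j in range(length) if j % nSize == 0 and aLoci[j] >= r])
k_loop = sum([1 for j in range(length) if aLoci[j] >= r])

# Boolean-array versions: a strided slice picks the phased positions
# (j % nSize == 0), and comparisons give masks that .sum() can count.
q_vec = int((aLoci[::nSize] >= r).sum())
k_vec = int((aLoci >= r).sum())

assert q_loop == q_vec and k_loop == k_vec
print(q_vec, k_vec)
```

The strided slice `aLoci[::nSize]` selects exactly the positions with `j % nSize == 0`, so both conditions are tested without a Python-level loop.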
URL: From bsouthey at gmail.com Wed Feb 24 12:21:39 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 24 Feb 2010 11:21:39 -0600 Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard) In-Reply-To: <3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com> References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu> <3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com> Message-ID: <4B856023.1070300@gmail.com> On 02/23/2010 04:47 PM, Robert Kern wrote: > On Tue, Feb 23, 2010 at 13:18, Tom Loredo wrote: > >> Hi- >> >> I've been testing Python-2.7a3 on Mac OS 10.6.2. NumPy-1.4.0 will >> not install; it appears something has changed within distutils that >> breaks it: >> File "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py", line 17, in >> _old_init_posix = distutils.sysconfig._init_posix >> AttributeError: 'module' object has no attribute '_init_posix' >> > This line is actually unused. You may delete it. > > Hi, I have created ticket 1409 with a patch to remove the associated code in ccompiler.py file. http://projects.scipy.org/numpy/ticket/1409 Also, I added ticket 1410 to get DistutilsExecError imported by Python 2.7 in the same file http://projects.scipy.org/numpy/ticket/1410 With these two patches Python 2.7 alpha 3 should build numpy. 
Bruce

From robert.kern at gmail.com  Wed Feb 24 12:26:39 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 24 Feb 2010 11:26:39 -0600
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
	<699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
	<3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>
	<699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com>
	<3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com>
	<699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com>
Message-ID: <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com>

On Wed, Feb 24, 2010 at 11:19, Bruno Santos wrote:
> It seems that Python 2.6.4 has a more efficient implementation of lists.
> It runs faster on this version and slower on 2.5.4 on the same machine
> with Debian. A lot faster, in fact.
> I have been trying to fix this headache for the last couple of weeks,
> but you might be able to give me a lot more optimizations that I can
> pick up. I am trying to optimize the following function:
>
> def hypergeometric(self,lindex,rindex):
>     """
>     loc.hypergeometric(lindex,rindex)
>     Performs the hypergeometric test for the loci between lindex and
>     rindex.
>     Returns the minimum p-Value
>     """
>     aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize]
>     #Create the subarray to test
>     aLoci = numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]])
>     #Get the values to test
>     length = len(aLoci)
>     lsPhasedValues = set([aLoci[i] for i in xrange(length) if i%nSize==0 and aLoci[i]>0])
>     m = length/nSize
>     n = (length-1)-(length/nSize-1)
>     #Create a list to store the Pvalues
>     lsPvalues = []
>     append = lsPvalues.append
>     #Calculate matches in Phased and non Phased positions
>     for r in lsPhasedValues:
>         #Initiate number of matches to 0
>         q = sum([1 for j in xrange(length) if j%nSize==0 and aLoci[j]>=r])
>         k = sum([1 for j in xrange(length) if aLoci[j]>=r])
>         key = '%i,%i,%i,%i'%(q-1,m,n,k)
>         try:append(dtPhyper[key])
>         except KeyError:
>             value = self.lphyper(q-1, m, n, k)
>             append(value)
>             dtPhyper[key]=value
>     return min(lsPvalues)
>
> Is there any efficient way to test the array simultaneously for two
> different conditions?

j = np.arange(length)
j_nSize_mask = ((j % nSize) == 0)
lsPhasedValues = (j_nSize_mask & (aLoci >= 0)).sum()
...
bigALoci = (aLoci >= r)
q = (j_nSize_mask & bigALoci).sum()
k = bigALoci.sum()


Another way to do it:

j_nSize = np.arange(0, length, nSize)
lsPhasedValues = (aLoci[j_nSize] >= 0).sum()
...
q = (aLoci[j_nSize] >= r).sum()
k = (aLoci >= r).sum()


-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From bsouthey at gmail.com  Wed Feb 24 12:44:24 2010
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 24 Feb 2010 11:44:24 -0600
Subject: [Numpy-discussion] distutils problem with NumPy-1.4 & Py-2.7a3
	(Snow Leopard)
In-Reply-To: <4B856023.1070300@gmail.com>
References: <1266952736.4b842a202de1e@astrosun2.astro.cornell.edu>
	<3d375d731002231447p4cab4c3aqa2b3596d60efb1e6@mail.gmail.com>
	<4B856023.1070300@gmail.com>
Message-ID: 

On Wed, Feb 24, 2010 at 11:21 AM, Bruce Southey wrote:
> On 02/23/2010 04:47 PM, Robert Kern wrote:
>> On Tue, Feb 23, 2010 at 13:18, Tom Loredo wrote:
>>>
>>> Hi-
>>>
>>> I've been testing Python-2.7a3 on Mac OS 10.6.2.
>>> NumPy-1.4.0 will
>>> not install; it appears something has changed within distutils that
>>> breaks it:
>>>   File
>>> "/Volumes/System/Users/loredo/Downloads/numpy-1.4.0-OSX/numpy/distutils/ccompiler.py",
>>> line 17, in
>>>     _old_init_posix = distutils.sysconfig._init_posix
>>> AttributeError: 'module' object has no attribute '_init_posix'
>>>
>>
>> This line is actually unused. You may delete it.
>>
>
> Hi,
> I have created ticket 1409 with a patch to remove the associated code in
> the ccompiler.py file.
> http://projects.scipy.org/numpy/ticket/1409
>
> Also, I added ticket 1410 to get DistutilsExecError imported by Python
> 2.7 in the same file:
> http://projects.scipy.org/numpy/ticket/1410
>
> With these two patches Python 2.7 alpha 3 should build numpy.
>
> Bruce

Sorry, the last ticket is redundant given ticket 1355, which includes
additional import changes that need to be applied:
http://projects.scipy.org/numpy/ticket/1355

Also, ticket 1345 is still relevant to Python 2.7:
http://projects.scipy.org/numpy/ticket/1345

Bruce

From bacmsantos at gmail.com  Wed Feb 24 12:50:56 2010
From: bacmsantos at gmail.com (Bruno Santos)
Date: Wed, 24 Feb 2010 17:50:56 +0000
Subject: [Numpy-discussion] Numpy array performance issue
In-Reply-To: <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com>
References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com>
	<3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com>
	<699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com>
	<3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com>
	<699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com>
	<3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com>
	<699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com>
	<3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com>
Message-ID: <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com>

In both versions your lsPhasedValues contains the number of positions in
the array that match a
certain criteria. What I need in that step is the unique values and not
their positions.

2010/2/24 Robert Kern 

> On Wed, Feb 24, 2010 at 11:19, Bruno Santos wrote:
> > It seems that Python 2.6.4 has a more efficient implementation of
> > lists. It runs faster on this version and slower on 2.5.4 on the same
> > machine with Debian. A lot faster, in fact.
> > I have been trying to fix this headache for the last couple of weeks,
> > but you might be able to give me a lot more optimizations that I can
> > pick up. I am trying to optimize the following function:
> >
> > def hypergeometric(self,lindex,rindex):
> >     """
> >     loc.hypergeometric(lindex,rindex)
> >     Performs the hypergeometric test for the loci between lindex and
> >     rindex.
> >     Returns the minimum p-Value
> >     """
> >     aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize]
> >     #Create the subarray to test
> >     aLoci = numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]])
> >     #Get the values to test
> >     length = len(aLoci)
> >     lsPhasedValues = set([aLoci[i] for i in xrange(length) if i%nSize==0 and aLoci[i]>0])
> >     m = length/nSize
> >     n = (length-1)-(length/nSize-1)
> >     #Create a list to store the Pvalues
> >     lsPvalues = []
> >     append = lsPvalues.append
> >     #Calculate matches in Phased and non Phased positions
> >     for r in lsPhasedValues:
> >         #Initiate number of matches to 0
> >         q = sum([1 for j in xrange(length) if j%nSize==0 and aLoci[j]>=r])
> >         k = sum([1 for j in xrange(length) if aLoci[j]>=r])
> >         key = '%i,%i,%i,%i'%(q-1,m,n,k)
> >         try:append(dtPhyper[key])
> >         except KeyError:
> >             value = self.lphyper(q-1, m, n, k)
> >             append(value)
> >             dtPhyper[key]=value
> >     return min(lsPvalues)
> >
> > Is there any efficient way to test the array simultaneously for two
> > different conditions?
>
> j = np.arange(length)
> j_nSize_mask = ((j % nSize) == 0)
> lsPhasedValues = (j_nSize_mask & (aLoci >= 0)).sum()
> ...
> bigALoci = (aLoci >= r) > q = (j_nSize_mask & bigALoci).sum() > k = bigALoci.sum() > > > Another way to do it: > > j_nSize = np.arange(0, length, nSize) > lsPhasedValues = (aLoci[j_nSize] >= 0).sum() > ... > q = (aLoci[j_nSize] >= r).sum() > k = (aLoci >= r).sum() > > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Wed Feb 24 12:53:06 2010 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 24 Feb 2010 12:53:06 -0500 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com> <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> Message-ID: <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> In [4]: %timeit a = np.random.randint(0, 20, 100) 100000 loops, best of 3: 4.32 us per loop In [5]: %timeit (a>=10).sum() 100000 loops, best of 3: 7.32 us per loop In [8]: %timeit np.where(a>=10) 100000 loops, best of 3: 5.36 us per loop am i missing something? 
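The three counting idioms in this exchange can be compared directly; a minimal sketch (size and threshold follow the session above; note that `np.where` must materialize the array of matching indices, so it does strictly more work than summing the boolean mask, whatever small-array timings suggest):

```python
import timeit

import numpy as np

a = np.random.randint(0, 20, 100)

def by_sum():
    # Boolean mask, counted directly.
    return int((a >= 10).sum())

def by_where():
    # np.where builds the array of matching indices first.
    return len(np.where(a >= 10)[0])

def by_list():
    # Pure-Python loop over a list copy of the data
    # (the tolist() conversion is part of what is timed here).
    return len([e for e in a.tolist() if e >= 10])

# All three idioms agree on the count.
assert by_sum() == by_where() == by_list()

# Rough relative timings; absolute numbers depend on the machine.
for f in (by_sum, by_where, by_list):
    print(f.__name__, timeit.timeit(f, number=10000))
```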
On Wed, Feb 24, 2010 at 12:50 PM, Bruno Santos wrote:
> In both versions your lsPhasedValues contains the number of positions in
> the array that match a certain criteria. What I need in that step is the
> unique values and not their positions.
>
> 2010/2/24 Robert Kern 
>
>> On Wed, Feb 24, 2010 at 11:19, Bruno Santos wrote:
>> > It seems that Python 2.6.4 has a more efficient implementation of
>> > lists. It runs faster on this version and slower on 2.5.4 on the same
>> > machine with Debian. A lot faster, in fact.
>> > I have been trying to fix this headache for the last couple of weeks,
>> > but you might be able to give me a lot more optimizations that I can
>> > pick up. I am trying to optimize the following function:
>> >
>> > def hypergeometric(self,lindex,rindex):
>> >     """
>> >     loc.hypergeometric(lindex,rindex)
>> >     Performs the hypergeometric test for the loci between lindex and
>> >     rindex.
>> >     Returns the minimum p-Value
>> >     """
>> >     aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize]
>> >     #Create the subarray to test
>> >     aLoci = numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]])
>> >     #Get the values to test
>> >     length = len(aLoci)
>> >     lsPhasedValues = set([aLoci[i] for i in xrange(length) if i%nSize==0 and aLoci[i]>0])
>> >     m = length/nSize
>> >     n = (length-1)-(length/nSize-1)
>> >     #Create a list to store the Pvalues
>> >     lsPvalues = []
>> >     append = lsPvalues.append
>> >     #Calculate matches in Phased and non Phased positions
>> >     for r in lsPhasedValues:
>> >         #Initiate number of matches to 0
>> >         q = sum([1 for j in xrange(length) if j%nSize==0 and aLoci[j]>=r])
>> >         k = sum([1 for j in xrange(length) if aLoci[j]>=r])
>> >         key = '%i,%i,%i,%i'%(q-1,m,n,k)
>> >         try:append(dtPhyper[key])
>> >         except KeyError:
>> >             value = self.lphyper(q-1, m, n, k)
>> >             append(value)
>> >             dtPhyper[key]=value
>> >     return min(lsPvalues)
>> >
>> > Is there any efficient way to test the array simultaneously for two
>> > different conditions?
>>
>> j = np.arange(length)
>> j_nSize_mask = ((j % nSize) == 0)
>> lsPhasedValues = (j_nSize_mask & (aLoci >= 0)).sum()
>> ...
>> bigALoci = (aLoci >= r)
>> q = (j_nSize_mask & bigALoci).sum()
>> k = bigALoci.sum()
>>
>> Another way to do it:
>>
>> j_nSize = np.arange(0, length, nSize)
>> lsPhasedValues = (aLoci[j_nSize] >= 0).sum()
>> ...
>> q = (aLoci[j_nSize] >= r).sum()
>> k = (aLoci >= r).sum()
>>
>> --
>> Robert Kern
>>
>> "I have come to believe that the whole world is an enigma, a harmless
>> enigma that is made terrible by our own mad attempt to interpret it as
>> though it had an underlying truth."
>> -- Umberto Eco
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Wed Feb 24 12:53:59 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Feb 2010 11:53:59 -0600 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com> <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> Message-ID: <3d375d731002240953j1fe24280q1f836218ef318b12@mail.gmail.com> On Wed, Feb 24, 2010 at 11:50, Bruno Santos wrote: > In both versions your lsPhasedValues contains the number of positions in the > array that match a certain criteria. What I need in that step is the unique > values and not their positions. Oops! lsPhasedValues = np.unique1d(aLoci[j_nSize_mask & (aLoci >= 0)]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bacmsantos at gmail.com Wed Feb 24 12:59:28 2010 From: bacmsantos at gmail.com (Bruno Santos) Date: Wed, 24 Feb 2010 17:59:28 +0000 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240802x707c917l7ccea47170b874f5@mail.gmail.com> <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> Message-ID: <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> 2010/2/24 Chris Colbert > In [4]: %timeit a = np.random.randint(0, 20, 100) > 100000 loops, best of 3: 4.32 us per loop > > In [5]: %timeit (a>=10).sum() > 100000 loops, best of 3: 7.32 us per loop > > In [8]: %timeit np.where(a>=10) > 100000 loops, best of 3: 5.36 us per loop > > > am i missing something? > I guess you are. In [23]: a = np.random.randint(0, 20, 1000) In [24]: %timeit np.where(a>=10) 10000 loops, best of 3: 22.4 us per loop In [25]: %timeit (a>=10).sum() 100000 loops, best of 3: 11.7 us per loop np.where doesn't scale very well. > > On Wed, Feb 24, 2010 at 12:50 PM, Bruno Santos wrote: > >> In both versions your lsPhasedValues contains the number of positions in >> the array that match a certain criteria. What I need in that step is the >> unique values and not their positions. >> >> 2010/2/24 Robert Kern >> >>> On Wed, Feb 24, 2010 at 11:19, Bruno Santos >>> wrote: >>> >>> > It seems that the python 2.6.4 has a more efficient implementation of >>> the >>> > lists.
It runs faster on this version and slower on 2.5.4 on the same >>> > machine with debian. A lot faster in fact. >>> > I was trying to change my headche for the last couple of weeks. But you >>> > migth give me a lot more optimizations that I can pick. I am trying to >>> > optimize the following function >>> > def hypergeometric(self,lindex,rindex): >>> > """ >>> > loc.hypergeometric(lindex,rindex) >>> > Performs the hypergeometric test for the loci between lindex >>> and >>> > rindex. >>> > Returns the minimum p-Value >>> > """ >>> > aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize] >>> > #Create the subarray to test >>> > aLoci = >>> > >>> numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]]) >>> > #Get the values to test >>> > length = len(aLoci) >>> > lsPhasedValues = set([aLoci[i] for i in xrange(length) if >>> i%nSize==0 >>> > and aLoci[i]>0]) >>> > m = length/nSize >>> > n = (length-1)-(length/nSize-1) >>> > #Create an array to store the Pvalues >>> > lsPvalues = [] >>> > append = lsPvalues.append >>> > #Calculate matches in Phased and non Phased position >>> > for r in lsPhasedValues: >>> > #Initiate number of matches to 0 >>> > q = sum([1 for j in xrange(length) if j%nSize==0 and >>> > aLoci[j]>=r]) >>> > k = sum([1 for j in xrange(length) if aLoci[j]>=r]) >>> > key = '%i,%i,%i,%i'%(q-1,m,n,k) >>> > try:append(dtPhyper[key]) >>> > except KeyError: >>> > value = self.lphyper(q-1, m, n, k) >>> > append(value) >>> > dtPhyper[key]=value >>> > return min(lsPvalues) >>> > Is there any efficient way to test the array simultaneous for two >>> different >>> > conditions? >>> >>> j = np.arange(length) >>> j_nSize_mask = ((j % nSize) == 0) >>> lsPhasedValues = (j_nSize_mask & (aLoci >= 0)).sum() >>> ... >>> bigALoci = (aLoci >= r) >>> q = (j_nSize_mask & bigALoci).sum() >>> k = bigALoci.sum() >>> >>> >>> Another way to do it: >>> >>> j_nSize = np.arange(0, length, nSize) >>> lsPhasedValues = (aLoci[j_nSize] >= 0).sum() >>> ... 
>>> q = (aLoci[j_nSize] >= r).sum() >>> k = (aLoci >= r).sum() >>> >>> >>> -- >>> Robert Kern >>> >>> "I have come to believe that the whole world is an enigma, a harmless >>> enigma that is made terrible by our own mad attempt to interpret it as >>> though it had an underlying truth." >>> -- Umberto Eco >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bacmsantos at gmail.com Wed Feb 24 13:38:46 2010 From: bacmsantos at gmail.com (Bruno Santos) Date: Wed, 24 Feb 2010 18:38:46 +0000 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <699044521002240821i9186d7aqe471136e6959b66b@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> Message-ID: <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> This is probably me just being stupid. 
But what is the reason for this peace of code not to be working: index_nSize=numpy.arange(0,length,nSize) lsPhasedValues = set([aLoci[i] for i in xrange(length) if (i%nSize==0 and aLoci[i]>0)]) lsPhasedValues1 = numpy.where(aLoci[index_nSize]>0) print aLoci[index_nSize] print lsPhasedValues==lsPhasedValues1,lsPhasedValues,lsPhasedValues1 [0 0 6 0 0 3] False set([3, 6]) (array([2, 5]),) 2010/2/24 Bruno Santos > > > 2010/2/24 Chris Colbert > > In [4]: %timeit a = np.random.randint(0, 20, 100) >> 100000 loops, best of 3: 4.32 us per loop >> >> In [5]: %timeit (a>=10).sum() >> 100000 loops, best of 3: 7.32 us per loop >> >> In [8]: %timeit np.where(a>=10) >> 100000 loops, best of 3: 5.36 us per loop >> >> >> am i missing something? >> > > I guess you are. > In [23]: a = np.random.randint(0, 20, 1000) > > In [24]: %timeit np.where(a>=10) > 10000 loops, best of 3: 22.4 us per loop > > In [25]: %timeit (a>=10).sum() > 100000 loops, best of 3: 11.7 us per loop > > np.random.where doesn't scale very well. > >> >> On Wed, Feb 24, 2010 at 12:50 PM, Bruno Santos wrote: >> >>> In both versions your lsPhasedValues contains the number of positions in >>> the array that match a certain criteria. What I need in that step is the >>> unique values and not their positions. >>> >>> 2010/2/24 Robert Kern >>> >>>> On Wed, Feb 24, 2010 at 11:19, Bruno Santos >>>> wrote: >>>> >>>> > It seems that the python 2.6.4 has a more efficient implementation of >>>> the >>>> > lists. It runs faster on this version and slower on 2.5.4 on the same >>>> > machine with debian. A lot faster in fact. >>>> > I was trying to change my headche for the last couple of weeks. But >>>> you >>>> > migth give me a lot more optimizations that I can pick. I am trying to >>>> > optimize the following function >>>> > def hypergeometric(self,lindex,rindex): >>>> > """ >>>> > loc.hypergeometric(lindex,rindex) >>>> > Performs the hypergeometric test for the loci between lindex >>>> and >>>> > rindex. 
>>>> > Returns the minimum p-Value >>>> > """ >>>> > aASense = self.aASCounts[lindex*nSize:(rindex+1)*nSize] >>>> > #Create the subarray to test >>>> > aLoci = >>>> > >>>> numpy.hstack([self.aSCounts[lindex*nSize:(rindex+1)*nSize],aASense[::-1]]) >>>> > #Get the values to test >>>> > length = len(aLoci) >>>> > lsPhasedValues = set([aLoci[i] for i in xrange(length) if >>>> i%nSize==0 >>>> > and aLoci[i]>0]) >>>> > m = length/nSize >>>> > n = (length-1)-(length/nSize-1) >>>> > #Create an array to store the Pvalues >>>> > lsPvalues = [] >>>> > append = lsPvalues.append >>>> > #Calculate matches in Phased and non Phased position >>>> > for r in lsPhasedValues: >>>> > #Initiate number of matches to 0 >>>> > q = sum([1 for j in xrange(length) if j%nSize==0 and >>>> > aLoci[j]>=r]) >>>> > k = sum([1 for j in xrange(length) if aLoci[j]>=r]) >>>> > key = '%i,%i,%i,%i'%(q-1,m,n,k) >>>> > try:append(dtPhyper[key]) >>>> > except KeyError: >>>> > value = self.lphyper(q-1, m, n, k) >>>> > append(value) >>>> > dtPhyper[key]=value >>>> > return min(lsPvalues) >>>> > Is there any efficient way to test the array simultaneous for two >>>> different >>>> > conditions? >>>> >>>> j = np.arange(length) >>>> j_nSize_mask = ((j % nSize) == 0) >>>> lsPhasedValues = (j_nSize_mask & (aLoci >= 0)).sum() >>>> ... >>>> bigALoci = (aLoci >= r) >>>> q = (j_nSize_mask & bigALoci).sum() >>>> k = bigALoci.sum() >>>> >>>> >>>> Another way to do it: >>>> >>>> j_nSize = np.arange(0, length, nSize) >>>> lsPhasedValues = (aLoci[j_nSize] >= 0).sum() >>>> ... >>>> q = (aLoci[j_nSize] >= r).sum() >>>> k = (aLoci >= r).sum() >>>> >>>> >>>> -- >>>> Robert Kern >>>> >>>> "I have come to believe that the whole world is an enigma, a harmless >>>> enigma that is made terrible by our own mad attempt to interpret it as >>>> though it had an underlying truth." 
>>>> -- Umberto Eco >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Feb 24 13:58:52 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Feb 2010 12:58:52 -0600 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240828raa4f1ebw5ba7b2e7c10beee8@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> Message-ID: <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> On Wed, Feb 24, 2010 at 12:38, Bruno Santos wrote: > This is probably me just being stupid. 
But what is the reason for this peace > of code not to be working: > index_nSize=numpy.arange(0,length,nSize) > lsPhasedValues = set([aLoci[i] for i in xrange(length) if (i%nSize==0 and > aLoci[i]>0)]) > lsPhasedValues1 = numpy.where(aLoci[index_nSize]>0) Because this is not correct. where() gives you indices where the argument is True; you want the values in aLoci. Chris misunderstood your request. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Wed Feb 24 22:46:42 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 24 Feb 2010 20:46:42 -0700 Subject: [Numpy-discussion] What are the 'p', 'P' types? Message-ID: They are now typecodes but have no entries in the typename dictionary. The 'm', 'M' types also lack dictionary entries. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dlc at halibut.com Wed Feb 24 23:13:20 2010 From: dlc at halibut.com (David Carmean) Date: Wed, 24 Feb 2010 20:13:20 -0800 Subject: [Numpy-discussion] RHEL 5.3+ build? In-Reply-To: <20100224063707.B21089@halibut.com>; from dlc@halibut.com on Wed, Feb 24, 2010 at 06:37:08AM -0800 References: <20100223180448.A21089@halibut.com> <4B8530A9.2060707@stsci.edu> <20100224063707.B21089@halibut.com> Message-ID: <20100224201320.C21089@halibut.com> On Wed, Feb 24, 2010 at 06:37:08AM -0800, David Carmean wrote: > On Wed, Feb 24, 2010 at 08:59:05AM -0500, Michael Droettboom wrote: > > > > We (STScI) routinely build Numpy on RHEL5.x 64-bit systems for our internal > > use. We need more detail about what you're doing and what errors you're > > seeing to diagnose the problem. 
> > OK, that's encouraging; it may take a few days before I have time to get back > to this and provide a good narrative, but it had to do with a tool not knowing > what to do with the numpy/core/src/.src files. Well, I found a bit of time today and after reading some bits about fortran compiler choices, I tried gnu95 instead of gnu77 and the build succeedes, so, thank. From pete at shinners.org Wed Feb 24 23:53:50 2010 From: pete at shinners.org (Peter Shinners) Date: Wed, 24 Feb 2010 20:53:50 -0800 Subject: [Numpy-discussion] Want cumsum-like function Message-ID: <4B86025E.90506@shinners.org> I want a function that works like cumsum, but starts at zero, instead of starting with the first actual value. For example; I have an array with [4,3,3,1]. Cumsum will give me an array with [4,7,10,11]. I want an array that is like [0,4,7,8]. It looks like I could indirectly do this: tallies = np.cumsum(initial_array) np.subtract(tallies, tallies[0], tallies) But is there a more efficient way to get this specific result? From robert.kern at gmail.com Thu Feb 25 00:00:40 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Feb 2010 23:00:40 -0600 Subject: [Numpy-discussion] Want cumsum-like function In-Reply-To: <4B86025E.90506@shinners.org> References: <4B86025E.90506@shinners.org> Message-ID: <3d375d731002242100x28c5c344vc54aebef97453ba3@mail.gmail.com> On Wed, Feb 24, 2010 at 22:53, Peter Shinners wrote: > I want a function that works like cumsum, but starts at zero, instead of > starting with the first actual value. > > For example; I have an array with [4,3,3,1]. > Cumsum will give me an array with [4,7,10,11]. > I want an array that is like [0,4,7,8]. I don't understand what process would give you [0, 4, 7, 8] rather than [0, 4, 7, 10]. Can you explain a bit more? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pete at shinners.org Thu Feb 25 00:03:27 2010 From: pete at shinners.org (Peter Shinners) Date: Wed, 24 Feb 2010 21:03:27 -0800 Subject: [Numpy-discussion] Want cumsum-like function In-Reply-To: <3d375d731002242100x28c5c344vc54aebef97453ba3@mail.gmail.com> References: <4B86025E.90506@shinners.org> <3d375d731002242100x28c5c344vc54aebef97453ba3@mail.gmail.com> Message-ID: <4B86049F.8070408@shinners.org> On 02/24/2010 09:00 PM, Robert Kern wrote: > On Wed, Feb 24, 2010 at 22:53, Peter Shinners wrote: > >> I want a function that works like cumsum, but starts at zero, instead of >> starting with the first actual value. >> >> For example; I have an array with [4,3,3,1]. >> Cumsum will give me an array with [4,7,10,11]. >> I want an array that is like [0,4,7,8]. >> > I don't understand what process would give you [0, 4, 7, 8] rather > than [0, 4, 7, 10]. Can you explain a bit more? > > Just brain failure. Yes, [0,4,7,10] is the desired result. From robert.kern at gmail.com Thu Feb 25 00:11:28 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Feb 2010 23:11:28 -0600 Subject: [Numpy-discussion] Want cumsum-like function In-Reply-To: <4B86049F.8070408@shinners.org> References: <4B86025E.90506@shinners.org> <3d375d731002242100x28c5c344vc54aebef97453ba3@mail.gmail.com> <4B86049F.8070408@shinners.org> Message-ID: <3d375d731002242111l1aca317at6b41f19b533a3ecd@mail.gmail.com> On Wed, Feb 24, 2010 at 23:03, Peter Shinners wrote: > On 02/24/2010 09:00 PM, Robert Kern wrote: >> On Wed, Feb 24, 2010 at 22:53, Peter Shinners ?wrote: >> >>> I want a function that works like cumsum, but starts at zero, instead of >>> starting with the first actual value. >>> >>> For example; I have an array with [4,3,3,1]. >>> Cumsum will give me an array with [4,7,10,11]. >>> I want an array that is like [0,4,7,8]. >>> >> I don't understand what process would give you [0, 4, 7, 8] rather >> than [0, 4, 7, 10]. Can you explain a bit more? >> > Just brain failure. 
Yes, [0,4,7,10] is the desired result. np.cumsum(np.hstack((0, initial_array[:-1]))) Correct, but not necessarily the most efficient. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Thu Feb 25 00:47:34 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 24 Feb 2010 22:47:34 -0700 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> Message-ID: On Wed, Feb 24, 2010 at 1:15 AM, David Cournapeau wrote: > On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris > wrote: > > > > > Boy, that code is *old*, it still uses Numeric ;) I don't think it can > > really be considered a test suite, it needs lotsa love and it needs to get > > installed. Anyway, f2py with py3k turns out to have string problems, and I > > expect other type problems, so there is considerable work that needs to be > > done to bring it up to snuff. Sounds like gsoc material. I'm not going to > > worry about it any more until later. > > If it would take a GSoC to make it up to work, it may be time better > spent on improving fwrap. > How far along is fwrap? It looks like f2py2e was a project that got dropped half way through an update, some exceptions are of the wrong type, the tests need a complete rewrite, etc. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gokhansever at gmail.com Thu Feb 25 00:52:54 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 24 Feb 2010 23:52:54 -0600 Subject: [Numpy-discussion] For-less code Message-ID: <49d6b3501002242152v5c3c2319g8ec2333f2b177abf@mail.gmail.com> Hello, I am working on a code shown at http://code.google.com/p/ccnworks/source/browse/trunk/thesis/part1/logn-fit.py I use the code to analyse a couple dataset also placed in the same directory. In the first part I use for-loops all over, but later decided to write them without using for loops. The script runs correctly for the two section, however I have a few question regarding to the for-less structures: This part is taken after line 371: gm10 = np.exp((1/Nt10)*(h10*np.log(Dp)).sum(axis=1)) gsd10 = np.exp(((1/Nt10)*(h10*np.log(Dp/gm10[:,np.newaxis])**2).sum(axis=-1))**0.5) dN_dDp10 = (Nt10[:,np.newaxis]/((2*np.pi)**0.5*np.log(gsd10[:,np.newaxis])*d))*np.exp(-(np.log(d)-\ np.log(gm10[:,np.newaxis]))**2/(2*np.log(gsd10[:,np.newaxis])**2)) a10 = (dN_dDp10[0:300,d >= dc10u]*0.001).sum(axis=1) Shape informations for the arrays as follow: I[306]: gm10.shape; gsd10.shape, Dp.shape, d.shape, dN_dDp10.shape, a10.shape O[306]: (300,), (300,), (11,), (991,), (300, 991), (300,) 1-) In gsd10 line what does axis=-1 really means, and why negative axis value is allowed? 2-) [:,np.newaxis] decreases the readability of the code. Is there any alternative for it? 3-) In the last line (dN_dDp10[0:300,d >= dc10u]) could I achieve the same result with different syntax? I have found it somewhat not in accordance with the previous lines. Thanks for your time and comments. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jsseabold at gmail.com Thu Feb 25 01:06:54 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 25 Feb 2010 01:06:54 -0500 Subject: [Numpy-discussion] For-less code In-Reply-To: <49d6b3501002242152v5c3c2319g8ec2333f2b177abf@mail.gmail.com> References: <49d6b3501002242152v5c3c2319g8ec2333f2b177abf@mail.gmail.com> Message-ID: On Thu, Feb 25, 2010 at 12:52 AM, G?khan Sever wrote: > Hello, > > I am working on a code shown at > http://code.google.com/p/ccnworks/source/browse/trunk/thesis/part1/logn-fit.py > > I use the code to analyse a couple dataset also placed in the same > directory. In the first part I use for-loops all over, but later decided to > write them without using for loops. The script runs correctly for the two > section, however I have a few question regarding to the for-less structures: > > This part is taken after line 371: > > gm10 = np.exp((1/Nt10)*(h10*np.log(Dp)).sum(axis=1)) > gsd10 = > np.exp(((1/Nt10)*(h10*np.log(Dp/gm10[:,np.newaxis])**2).sum(axis=-1))**0.5) > > dN_dDp10 = > (Nt10[:,np.newaxis]/((2*np.pi)**0.5*np.log(gsd10[:,np.newaxis])*d))*np.exp(-(np.log(d)-\ > > np.log(gm10[:,np.newaxis]))**2/(2*np.log(gsd10[:,np.newaxis])**2)) > > a10 = (dN_dDp10[0:300,d >= dc10u]*0.001).sum(axis=1) > > Shape informations for the arrays as follow: > > I[306]: gm10.shape; gsd10.shape, Dp.shape, d.shape, dN_dDp10.shape, > a10.shape > O[306]: (300,), (300,), (11,), (991,), (300, 991), (300,) > > 1-) In gsd10 line what does axis=-1 really means, and why negative axis > value is allowed? > -1 in this case is the last axis I believe. gsd10.shape[-1] Or just like in python [1,2,3,4][-1] 4 > 2-) [:,np.newaxis] decreases the readability of the code. Is there any > alternative for it? > I find [:,None] to be the easiest unless I want to be really explicit about a reshape. It's probably just as bad though. 
np.ones(8).shape (8,) np.expand_dims(np.ones(8),1).shape (8, 1) np.ones(8).reshape(8,-1).shape (8, 1) > 3-) In the last line (dN_dDp10[0:300,d >= dc10u]) could I achieve the same > result with different syntax? I have found it somewhat not in accordance > with the previous lines. > If it's an array you could just define it once as index_d or something beforehand to increase readability? Skipper > Thanks for your time and comments. > > -- > G?khan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From friedrichromstedt at gmail.com Thu Feb 25 02:48:49 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Thu, 25 Feb 2010 08:48:49 +0100 Subject: [Numpy-discussion] Want cumsum-like function In-Reply-To: <4B86025E.90506@shinners.org> References: <4B86025E.90506@shinners.org> Message-ID: 2010/2/25 Peter Shinners : > I want a function that works like cumsum, but starts at zero, instead of > starting with the first actual value. > > [...] > > tallies = np.cumsum(initial_array) > np.subtract(tallies, tallies[0], tallies) Also note that this wouln't work as the example result [0, 3, 6, 7] (= [4, 7, 10, 11] - 4) with initial_array = [4, 3, 3, 1] is different from [0, 4, 7, 10]. Note that you want always leave out the last term in the sum result[k] = \sum_{i = 0}^{k - 1} initial[i], thus the following expression should work: tallies = np.cumsum(initial_array) - initial_array. Indeed, for initial_array = [4, 3, 3, 1], np.cumsum() = [4, 7, 10, 11], np.cumsum() - initial_array = [4, 7, 10, 11] - [4, 3, 3, 1] = [0, 4, 7, 10] as intended. Friedrich From david at silveregg.co.jp Thu Feb 25 03:07:24 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Thu, 25 Feb 2010 17:07:24 +0900 Subject: [Numpy-discussion] How to test f2py? 
In-Reply-To: References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> Message-ID: <4B862FBC.1020205@silveregg.co.jp> Charles R Harris wrote: > > > On Wed, Feb 24, 2010 at 1:15 AM, David Cournapeau > wrote: > > On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris > > wrote: > > > > > Boy, that code is *old*, it still uses Numeric ;) I don't think > it can > > really be considered a test suite, it needs lotsa love and it > needs to get > > installed. Anyway, f2py with py3k turns out to have string > problems, and I > > expect other type problems, so there is considerable work that > needs to be > > done to bring it up to snuff. Sounds like gsoc material. I'm not > going to > > worry about it any more until later. > > If it would take a GSoC to make it up to work, it may be time better > spent on improving fwrap. > > > How far along is fwrap? It looks like f2py2e was a project that got > dropped half way through an update, some exceptions are of the wrong > type, the tests need a complete rewrite, etc. Well, the f2py as included in numpy is at least stable, since it has been used with little to no change for scipy the last few years, whereas fwrap is largely untested on the scale of something like scipy. I was suggesting to look into fwrap *if* f2py would be really hard to make to work for python 3.x. What worries me for f2py is not so much the python code (at worst, we could hack something to call f2py through python 2.x for the 3.x build - numscons runs f2py out of process for // build) as much as the generated C code. 
Debugging code generators is rarely fun in my experience :) cheers, David From pete at shinners.org Thu Feb 25 03:16:21 2010 From: pete at shinners.org (Peter Shinners) Date: Thu, 25 Feb 2010 00:16:21 -0800 Subject: [Numpy-discussion] Want cumsum-like function In-Reply-To: References: <4B86025E.90506@shinners.org> Message-ID: <4B8631D5.4050500@shinners.org> On 02/24/2010 11:48 PM, Friedrich Romstedt wrote: > 2010/2/25 Peter Shinners: > >> I want a function that works like cumsum, but starts at zero, instead of >> starting with the first actual value. >> >> [...] >> >> tallies = np.cumsum(initial_array) >> np.subtract(tallies, tallies[0], tallies) >> > Also note that this wouln't work as the example result [0, 3, 6, 7] (= > [4, 7, 10, 11] - 4) with initial_array = [4, 3, 3, 1] is different > from [0, 4, 7, 10]. > > Note that you want always leave out the last term in the sum result[k] > = \sum_{i = 0}^{k - 1} initial[i], thus the following expression > should work: > > tallies = np.cumsum(initial_array) - initial_array. > > Indeed, for initial_array = [4, 3, 3, 1], np.cumsum() = [4, 7, 10, > 11], np.cumsum() - initial_array = [4, 7, 10, 11] - [4, 3, 3, 1] = [0, > 4, 7, 10] as intended. > > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > I noticed my version with subtract wasn't right. I see now why it works with yours. Excellent. 
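The two working recipes from this thread — prepending a zero before accumulating (note that np.hstack takes its pieces as a single tuple argument), and subtracting the array from its inclusive cumsum — agree on Peter's example:

```python
import numpy as np

a = np.array([4, 3, 3, 1])

# Prepend 0, drop the last element, then accumulate.
prepended = np.cumsum(np.hstack((0, a[:-1])))

# Inclusive cumsum minus each element: the sum of everything before it.
subtracted = np.cumsum(a) - a

assert prepended.tolist() == [0, 4, 7, 10]
assert (prepended == subtracted).all()
```

The subtraction form avoids building the shifted intermediate array, which matches Friedrich's reasoning above.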
From bacmsantos at gmail.com Thu Feb 25 07:04:20 2010 From: bacmsantos at gmail.com (Bruno Santos) Date: Thu, 25 Feb 2010 12:04:20 +0000 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <699044521002240840q48cd9eddu5ce79538c884eff8@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> Message-ID: <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> After implementing all the possibilities we discussed yesterday, my fastest version is this one: index_nSize=numpy.arange(0,length,nSize) lsPhasedValues = numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) ... bigaLoci = (aLoci>=r) k = (aLoci>=r).sum() This is taking around 0.12s for my test cases. The other version you proposed: j = numpy.arange(length) j_nSize_mask = ((j%nSize)==0) lsPhasedValues = numpy.unique1d(aLoci[j_nSize_mask&aLoci>=0]) bigaLoci = (aLoci>=r) q = (j_nSize_mask&bigaLoci).sum() k = bigaLoci.sum() This takes 0.75s for the same input. With this I was able to speed up my code in an afternoon more than in the two previous weeks. I don't have enough words to thank you. All the best, Bruno 2010/2/24 Robert Kern > On Wed, Feb 24, 2010 at 12:38, Bruno Santos wrote: > > This is probably me just being stupid. 
But what is the reason for this > peace > > of code not to be working: > > index_nSize=numpy.arange(0,length,nSize) > > lsPhasedValues = set([aLoci[i] for i in xrange(length) if (i%nSize==0 and > > aLoci[i]>0)]) > > lsPhasedValues1 = numpy.where(aLoci[index_nSize]>0) > > Because this is not correct. where() gives you indices where the > argument is True; you want the values in aLoci. Chris misunderstood > your request. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bacmsantos at gmail.com Thu Feb 25 08:51:38 2010 From: bacmsantos at gmail.com (Bruno Santos) Date: Thu, 25 Feb 2010 13:51:38 +0000 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240841u4891f324iba0db2597dd74903@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> Message-ID: <699044521002250551q51b3def6kcced16a3baa78461@mail.gmail.com> I just realized that the line lsPhasedValues = numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) does not work 
properly. How can I get the unique values of an array based on their indexes? 2010/2/25 Bruno Santos > After implementation all the possibilities we discuss yesterday mi fastest > version is this one: > index_nSize=numpy.arange(0,length,nSize) > lsPhasedValues = numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) > ... > > bigaLoci = (aLoci>=r) > k = (aLoci>=r).sum() > > > This is taking around 0.12s for my test cases. > The other version you proposed: > > j = numpy.arange(length) > j_nSize_mask = ((j%nSize)==0) > lsPhasedValues = numpy.unique1d(aLoci[j_nSize_mask&aLoci>=0]) > > bigaLoci = (aLoci>=r) > q = (j_nSize_mask&bigaLoci).sum() > k = bigaLoci.sum() > > This takes 0.75s for the same input. > > With this I was able to speed up my code in a afternoon more than in the > two previous weeks. I don't have enough words to thank you. > > All the best, > Bruno > > 2010/2/24 Robert Kern > >> On Wed, Feb 24, 2010 at 12:38, Bruno Santos wrote: >> >> > This is probably me just being stupid. But what is the reason for this >> peace >> > of code not to be working: >> > index_nSize=numpy.arange(0,length,nSize) >> > lsPhasedValues = set([aLoci[i] for i in xrange(length) if (i%nSize==0 >> and >> > aLoci[i]>0)]) >> > lsPhasedValues1 = numpy.where(aLoci[index_nSize]>0) >> >> Because this is not correct. where() gives you indices where the >> argument is True; you want the values in aLoci. Chris misunderstood >> your request. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Thu Feb 25 09:39:34 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 25 Feb 2010 07:39:34 -0700 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B862FBC.1020205@silveregg.co.jp> References: <3d375d731002231931y20eec786n64096d3c701d3bc4@mail.gmail.com> <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> Message-ID: On Thu, Feb 25, 2010 at 1:07 AM, David Cournapeau wrote: > Charles R Harris wrote: > > > > > > On Wed, Feb 24, 2010 at 1:15 AM, David Cournapeau > > wrote: > > > > On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris > > > > wrote: > > > > > > > > Boy, that code is *old*, it still uses Numeric ;) I don't think > > it can > > > really be considered a test suite, it needs lotsa love and it > > needs to get > > > installed. Anyway, f2py with py3k turns out to have string > > problems, and I > > > expect other type problems, so there is considerable work that > > needs to be > > > done to bring it up to snuff. Sounds like gsoc material. I'm not > > going to > > > worry about it any more until later. > > > > If it would take a GSoC to make it up to work, it may be time better > > spent on improving fwrap. > > > > > > How far along is fwrap? It looks like f2py2e was a project that got > > dropped half way through an update, some exceptions are of the wrong > > type, the tests need a complete rewrite, etc. > > Well, the f2py as included in numpy is at least stable, since it has > been used with little to no change for scipy the last few years, whereas > fwrap is largely untested on the scale of something like scipy. I was > suggesting to look into fwrap *if* f2py would be really hard to make to > work for python 3.x. 
> > What worries me for f2py is not so much the python code (at worst, we > could hack something to call f2py through python 2.x for the 3.x build - > numscons runs f2py out of process for // build) as much as the generated > C code. Debugging code generators is rarely fun in my experience :) > > It might not be too difficult to get f2py running with Python3.x. At first try there were some places in the generated code that called Python string functions that have gone away, but those should be fixable without too much trouble. There may be a few other troublesome spots, but I don't think things will be that difficult. I'm more concerned for the long run. The code needs a fixed up test suite, it needs documentation, and it needs a maintainer, at least for a while. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Feb 25 10:50:01 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Feb 2010 09:50:01 -0600 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002250551q51b3def6kcced16a3baa78461@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <699044521002240919x45660149i24cd915d5d00b868@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> <699044521002250551q51b3def6kcced16a3baa78461@mail.gmail.com> Message-ID: <3d375d731002250750r3e0df664m89a00f03670ac9f0@mail.gmail.com> On Thu, Feb 25, 2010 at 07:51, Bruno Santos wrote: > I just realized that the line?lsPhasedValues = > numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) does 
not work > properly. > How can I get the unique values of an array based on their indexes? I don't know what that sentence means. Please show us some complete code that gives you a result that you do not expect and show us the result that you do expect. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bacmsantos at gmail.com Thu Feb 25 11:20:57 2010 From: bacmsantos at gmail.com (Bruno Santos) Date: Thu, 25 Feb 2010 16:20:57 +0000 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <3d375d731002250750r3e0df664m89a00f03670ac9f0@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <3d375d731002240926v615d0ec7l95e4bf7ec9d2bfe3@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> <699044521002250551q51b3def6kcced16a3baa78461@mail.gmail.com> <3d375d731002250750r3e0df664m89a00f03670ac9f0@mail.gmail.com> Message-ID: <699044521002250820h2fc4d9d1u1193e19495f3a6ff@mail.gmail.com> This is the same example we discussed yesterday. The working code is this one: lsPhasedValues = [aLoci[i] for i in xrange(length) if i%21==0 and aLoci[i]>0] I was able to get the same result after a while: aAux =aLoci[index_nSize] lsPhasedValues = numpy.unique1d(aAux[numpy.where(aAux>0)[0]]) I couldn't come up with a better solution.
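A self-contained illustration of the two forms, with made-up data (numpy.unique1d was the numpy-1.3-era name for what is now numpy.unique, used here): indexing with np.where(...)[0] and indexing with the boolean condition itself select exactly the same elements.

```python
import numpy as np

aAux = np.array([0, 5, 0, 3, 5, -1, 3])

# np.where(cond)[0] yields the indices where cond holds; indexing with
# the boolean mask directly picks out the same elements without the
# intermediate index array.
via_where = np.unique(aAux[np.where(aAux > 0)[0]])
via_mask = np.unique(aAux[aAux > 0])

print(via_where.tolist())  # [3, 5]
print(via_mask.tolist())   # [3, 5]
```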
Thank you in advance, Bruno 2010/2/25 Robert Kern > On Thu, Feb 25, 2010 at 07:51, Bruno Santos wrote: > > I just realized that the line lsPhasedValues = > > numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) does not work > > properly. > > How can I get the unique values of an array based on their indexes? > > I don't know what that sentence means. Please show us some complete > code that gives you a result that you do not expect and show us the > result that you do expect. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Feb 25 11:25:36 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Feb 2010 10:25:36 -0600 Subject: [Numpy-discussion] Numpy array performance issue In-Reply-To: <699044521002250820h2fc4d9d1u1193e19495f3a6ff@mail.gmail.com> References: <699044521002240755i590db526n1f6b23fdacd20b40@mail.gmail.com> <699044521002240950m4e7c50d0w7702740d3eef34e6@mail.gmail.com> <7f014ea61002240953k7b8b24b0k21f1af841470d28@mail.gmail.com> <699044521002240959o1f323402j14746afa3810529c@mail.gmail.com> <699044521002241038h18f70e06h9fb137ef235cd600@mail.gmail.com> <3d375d731002241058u2a2d1331pd72b3cd684a9074f@mail.gmail.com> <699044521002250404s3fbfbcb5v1689d0af92706655@mail.gmail.com> <699044521002250551q51b3def6kcced16a3baa78461@mail.gmail.com> <3d375d731002250750r3e0df664m89a00f03670ac9f0@mail.gmail.com> <699044521002250820h2fc4d9d1u1193e19495f3a6ff@mail.gmail.com> Message-ID: <3d375d731002250825v2dfd6a04v60834aa546c98dc2@mail.gmail.com> On Thu, Feb 25, 2010 at 10:20, Bruno Santos wrote: > This is 
the same example we discuss yesterday. I think I can help you this time, but when we ask for complete code, we mean complete, self-contained code that we can run immediately, not a fragment of code that needs variables to be initialized. We also ask for the result that you get, so you should copy-and-paste the exact result from running that code and also show us the result that you were expecting. > The working code is this one: > lsPhasedValues = [aLoci[i] for i in xrange(length) if i%21==0 and > aLoci[i]>0] > I was able to get the same result after a while: > aAux =aLoci[index_nSize] > lsPhasedValues = numpy.unique1d(aAux[numpy.where(aAux>0)[0]]) > I couldn't came up with a better solution. You don't need the where(). lsPhasedValues = numpy.unique1d(aAux[aAux > 0]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Sam.Tygier at hep.manchester.ac.uk Thu Feb 25 12:10:32 2010 From: Sam.Tygier at hep.manchester.ac.uk (Sam Tygier) Date: Thu, 25 Feb 2010 17:10:32 +0000 Subject: [Numpy-discussion] read ascii file with quote delimited strings Message-ID: <1267117832.29535.60.camel@hydrogen> Hi I am trying to read an ascii file which mixes ints, floats and strings. eg. 1 2.3 'a' 'abc ' 2 3.2 'b' ' ' 3 3.4 ' ' 'hello' Within a column the data is always of the same type. The strings sometimes contain spaces. I have tried giving loadtxt a dtype that specifies the length of the strings: [('a', int), ('b', float), ("c", "a1"), ("d", "a5")] or including the quotes: [('a', int), ('b', float), ("c", "a3"), ("d", "a7")] but it seems that loadtxt uses split() before looking at the dtype, so for example line 3 becomes ["3", "3.4", "'", "'", "'hello'"] and my 2 string elements contain only quotes. Would it be possible for a dtype of "a3" to force reading 3 chars?
Or would it make more sense for loadtxt to have a quote_char, that i could set to "'". This would make it ignore whitespace between quote_chars. Sam From kwmsmith at gmail.com Thu Feb 25 13:15:16 2010 From: kwmsmith at gmail.com (Kurt Smith) Date: Thu, 25 Feb 2010 12:15:16 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: References: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> Message-ID: On Thu, Feb 25, 2010 at 8:39 AM, Charles R Harris wrote: > > > On Thu, Feb 25, 2010 at 1:07 AM, David Cournapeau > wrote: >> >> Charles R Harris wrote: >> > >> > >> > On Wed, Feb 24, 2010 at 1:15 AM, David Cournapeau > > > wrote: >> > >> > ? ? On Wed, Feb 24, 2010 at 1:51 PM, Charles R Harris >> > ? ? > >> > wrote: >> > >> > ? ? ?> >> > ? ? ?> Boy, that code is *old*, it still uses Numeric ;) I don't think >> > ? ? it can >> > ? ? ?> really be considered a test suite, it needs lotsa love and it >> > ? ? needs to get >> > ? ? ?> installed. Anyway, f2py with py3k turns out to have string >> > ? ? problems, and I >> > ? ? ?> expect other type problems, so there is considerable work that >> > ? ? needs to be >> > ? ? ?> done to bring it up to snuff. Sounds like gsoc material. I'm not >> > ? ? going to >> > ? ? ?> worry about it any more until later. >> > >> > ? ? If it would take a GSoC to make it up to work, it may be time better >> > ? ? spent on improving fwrap. >> > >> > >> > How far along is fwrap? It looks like f2py2e was a project that got >> > dropped half way through an update, some exceptions are of the wrong >> > type, the tests need a complete rewrite, etc. >> >> Well, the f2py as included in numpy is at least stable, since it has >> been used with little to no change for scipy the last few years, whereas >> fwrap is largely untested on the scale of something like scipy. 
I was >> suggesting to look into fwrap *if* f2py would be really hard to make to >> work for python 3.x. >> >> What worries me for f2py is not so much the python code (at worst, we >> could hack something to call f2py through python 2.x for the 3.x build - >> numscons runs f2py out of process for // build) as much as the generated >> C code. Debugging code generators is rarely fun in my experience :) >> > > It might not be too difficult to get f2py running with Python3.x. At first > try there were some places in the generated code that called Python string > functions that have gone away, but those should be fixable without too much > trouble. There may be a few other troublesome spots, but I don't think > things will be that difficult. > > I'm more concerned for the long run. The code needs a fixed up test suite, > it needs documentation, and it needs a maintainer, at least for a while. Glad I came across this thread :) I'm the developer of fwrap. It is coming along, but will be at least a month, likely two before the first release. (The main areas that need some TLC are the fortran parser and the build system; the build system will leverage numpy's distutils unless waf is easy to get working.) The first release will cover a large portion of f2py's current functionality, but I don't plan on having python callbacks working then. Callbacks will be a part of the second release. An issue that you should be aware of is that fwrap will not work with f77, and requires gfortran 4.3.3 or greater, since it uses C interoperability features not available in f77. (Fwrap will work with any 'modern' fortran compiler that has the basic C interoperability features implemented. Looking around it appears that all of them do have the basic set necessary, see [1]. So this only excludes f77.) Fwrap by design will work seamlessly with numpy arrays and PEP 3118 buffers [2], and will support Python 2.4 - 3.x (thanks in large part to its leveraging Cython for the C wrappers). 
It has an expanding testsuite (unittests & acceptance tests) and I hope it's designed clearly enough to encourage contributions after the first release. You can trust its C generation as far as you trust Cython's C generation. Although I guess you'll have to trust fwrap's Fortran & Cython generation :) But like I said, it has a testsuite that has everything covered. So, to recap: fwrap will do much of what you want, but it will exclude f77, and has a Cython dependency. It won't have callbacks working right away, but will by the second release. When it's matured enough I'd like to get fwrap to generate the bindings for scipy. Once it gets to that point (sometime this summer) we can talk :) As far as the 'long run' goes: fwrap has and will have a test suite, documentation and a maintainer, so those are covered. For more information: Fwrap's blog: http://fortrancython.wordpress.com/ Fwrap's bitbucket repository: http://bitbucket.org/kwmsmith/fwrap-dev/ ...which is a mirror of its Cython repo: http://hg.cython.org/fwrap-dev/ Feel free to contact me with any more questions. Thanks, Kurt [1] http://tinyurl.com/yjgtpqp [2] http://www.python.org/dev/peps/pep-3118/ From warren.weckesser at enthought.com Thu Feb 25 15:30:33 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 25 Feb 2010 14:30:33 -0600 Subject: [Numpy-discussion] read ascii file with quote delimited strings In-Reply-To: <1267117832.29535.60.camel@hydrogen> References: <1267117832.29535.60.camel@hydrogen> Message-ID: <4B86DDE9.7000809@enthought.com> Sam Tygier wrote: > Hi > > I am trying to read an ascii file which mixes ints, floats and stings. > eg. > 1 2.3 'a' 'abc ' > 2 3.2 'b' ' ' > 3 3.4 ' ' 'hello' > > Within a column that data is always the same. the strings are sometimes > contain with spaces. > Does each column always contain the same number of characters? That is, are the field widths always the same?
If so, you can give the 'delimiter' argument of numpy.genfromtxt a list of field widths. (This is true even in numpy 1.3.0, though it does not appear to be documented.) An example is attached to this post to scipy-user: http://mail.scipy.org/pipermail/scipy-user/2010-February/024333.html Warren > I have tried giving loadtxt a dtype that specifies the length of the > strings: > [('a', int), ('b', float), ("c", "a1"), ("d", "a5")] > or including the quotes: > [('a', int), ('b', float), ("c", "a3"), ("d", "a7")] > > but it seems that loadtxt uses split() before looking at the dtype, so > for example line 3 becomes > ["3", "3.4", "'", "'", "'hello'"] > and my 2 string elements contain only quotes. > > Would it be possible a dtype of "a3" to force reading 3 chars? > > Or would it make more sense for loadtxt to have a quote_char, that i > could set to "'". This would make it ignore whitespace between > quote_chars. > > Sam > > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From Chris.Barker at noaa.gov Thu Feb 25 16:56:43 2010 From: Chris.Barker at noaa.gov (Chris Barker) Date: Thu, 25 Feb 2010 13:56:43 -0800 Subject: [Numpy-discussion] read ascii file with quote delimited strings In-Reply-To: <4B86DDE9.7000809@enthought.com> References: <1267117832.29535.60.camel@hydrogen> <4B86DDE9.7000809@enthought.com> Message-ID: <4B86F21B.5070507@noaa.gov> Warren Weckesser wrote: > Does each column always contain the same number of characters? That >is, are the field widths always the same? If so, you can ... if not, I'd use the std lib csv module, then convert to numpy arrays, not as efficient, but it should be easy. -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From bergstrj at iro.umontreal.ca Thu Feb 25 17:59:20 2010 From: bergstrj at iro.umontreal.ca (James Bergstra) Date: Thu, 25 Feb 2010 17:59:20 -0500 Subject: [Numpy-discussion] problem w 32bit binomial? Message-ID: <7f1eaee31002251459s3184e2a6g4337fe3056b468e9@mail.gmail.com> In case this hasn't been solved in more recent numpy... I've tried the following lines on two installations of numpy 1.3 with python 2.6 numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'), p=numpy.asarray([.1, .2, .3], dtype='float64')) A 64bit computer gives an output of array length 3. A 32bit computer gives an error: TypeError: array cannot be safely cast to required type If I change the int64 cast to an int32 cast then it works on both machines. Thanks, James -- http://www-etud.iro.umontreal.ca/~bergstrj From dwf at cs.toronto.edu Thu Feb 25 19:32:54 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 25 Feb 2010 19:32:54 -0500 Subject: [Numpy-discussion] problem w 32bit binomial? In-Reply-To: <7f1eaee31002251459s3184e2a6g4337fe3056b468e9@mail.gmail.com> References: <7f1eaee31002251459s3184e2a6g4337fe3056b468e9@mail.gmail.com> Message-ID: <65D32DCE-F4AD-4D05-9C16-BB2F33A533CF@cs.toronto.edu> Hey James, On 25-Feb-10, at 5:59 PM, James Bergstra wrote: > In case this hasn't been solved in more recent numpy... > > I've tried the following lines on two installations of numpy 1.3 > with python 2.6 > > numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'), > p=numpy.asarray([.1, .2, .3], dtype='float64')) > > A 64bit computer gives an output of array length 3. > A 32bit computer gives an error: > > TypeError: array cannot be safely cast to required type It seems to be not only 32-bit specific but x86-specific.
On a ppc machine, 32-bit mode: dwf at morrislab:~$ python-32 Python 2.6.4 (r264:75706, Feb 16 2010, 21:03:46) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.3.0' >>> numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'), p=numpy.asarray([.1,.2,.3], dtype='float64')) array([1, 1, 2]) But I can confirm the bug on OS X/Intel 32-bit, and Linux x86-32 (both 1.3.0 and most recent svn trunk), as well as its absence on Linux x86-64. The problem seems to be with this line in mtrand.pyx, line 3306 in the trunk: on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) I recall there being some consistency problems with NPY_LONG across architectures. I thought it was only an issue for Python 2.4, though... Perhaps Chuck or David C. know what's going on. David From dwf at cs.toronto.edu Thu Feb 25 20:18:26 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 25 Feb 2010 20:18:26 -0500 Subject: [Numpy-discussion] problem w 32bit binomial? In-Reply-To: <7f1eaee31002251459s3184e2a6g4337fe3056b468e9@mail.gmail.com> References: <7f1eaee31002251459s3184e2a6g4337fe3056b468e9@mail.gmail.com> Message-ID: <7556470B-EE39-4310-9BEF-007200B8FCE1@cs.toronto.edu> On 25-Feb-10, at 5:59 PM, James Bergstra wrote: > In case this hasn't been solved in more recent numpy... > > I've tried the following lines on two installations of numpy 1.3 > with python 2.6 > > numpy.random.binomial(n=numpy.asarray([2,3,4], dtype='int64'), > p=numpy.asarray([.1, .2, .3], dtype='float64')) > > A 64bit computer gives an output of array length 3. > A 32bit computer gives an error: > > TypeError: array cannot be safely cast to required type > > If I change the int64 cast to an int32 cast then it works on both > machines. 
Alright, filed as a ticket at http://projects.scipy.org/numpy/ticket/1413 David From david at silveregg.co.jp Thu Feb 25 20:18:16 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 26 Feb 2010 10:18:16 +0900 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: References: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> Message-ID: <4B872158.9090902@silveregg.co.jp> Kurt Smith wrote: > I'm the developer of fwrap. It is coming along, but will be at least > a month, likely two before the first release. (The main areas that > need some TLC are the fortran parser and the build system; the build > system will leverage numpy's distutils unless waf is easy to get > working.) The first release will cover a large portion of f2py's > current functionality, but I don't plan on having python callbacks > working then. Callbacks will be a part of the second release. > > An issue that you should be aware of is that fwrap will not work with > f77, and requires gfortran 4.3.3 or greater, since it uses C > interoperability features not available in f77. (Fwrap will work with > any 'modern' fortran compiler that has the basic C interoperability > features implemented. Looking around it appears that all of them do > have the basic set necessary, see [1]. So this only excludes f77.) By f77, do you mean g77, i.e. the fortran compiler in the GNU gcc 3.x suite ? If so, that's quite a bummer for scipy. I don't see us removing support for g77 in the short or even mid term (many distributions depend on it, and that's not counting windows where there is still no gcc 4.x official support from MinGW). Do you have a list somewhere of what exactly is required for fwrap from the fortran compiler ? 
cheers, David From ralf.gommers at googlemail.com Fri Feb 26 01:50:54 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 26 Feb 2010 14:50:54 +0800 Subject: [Numpy-discussion] odd ascii format and genfromtxt Message-ID: Hi all, I'm trying to read in data from text files with genfromtxt, and have some trouble figuring out the right combination of keywords. The format is: ['0\t\t4.000000000000000e+007,0.000000000000000e+000\n', '\t9.860280631554179e-001,-1.902586503306264e-002\n', '\t9.860280631554179e-001,-1.902586503306264e-002'] Note that there are two delimiters, tab and comma. Also, the first line has an extra integer plus tab (this is a repeating pattern). The files are large, there's a lot of them, and they're generated by a binary I can't modify. Here are some things I've tried: In [216]: np.genfromtxt('ascii2test.raw', invalid_raise=False) Out[216]: array([ 0., NaN]) In [217]: np.genfromtxt('ascii2test.raw', invalid_raise=False, delimiter=['\t', ',']) TypeError: cannot perform accumulate with flexible type In [228]: np.genfromtxt('ascii2test.raw', delimiter=['\t', ','], dtype=[('intvar', ' From njs at pobox.com Fri Feb 26 02:56:34 2010 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 25 Feb 2010 23:56:34 -0800 Subject: [Numpy-discussion] anyone to look at #1402? Message-ID: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> So there's this patch I submitted: http://projects.scipy.org/numpy/ticket/1402 Obviously not that high a priority in the grand scheme of things (it adds a function to compute the log-determinant directly), but I don't want to release a version of scikits.sparse with this functionality while the numpy patch is hanging in needs-review status (since the API might change), so it's a bit of a blocker for me. Anyone have a minute to take a look? Thanks! 
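For context on why a dedicated log-determinant helper is worth having: det() of even a moderately large matrix overflows or underflows float64, while the logarithm of the determinant stays well within range. One standard workaround, sketched here under the assumption that the matrix is symmetric positive-definite (so a Cholesky factorization exists), sums the logs of the Cholesky diagonal instead of forming the product:

```python
import numpy as np

def logdet_spd(a):
    """log(det(a)) for a symmetric positive-definite matrix a.

    With a = L L^T, det(a) = prod(diag(L))**2, hence
    log(det(a)) = 2 * sum(log(diag(L))) -- no overflowing product.
    """
    chol = np.linalg.cholesky(a)
    return 2.0 * np.sum(np.log(np.diag(chol)))

# det(2 * I) for a 1100 x 1100 identity is 2**1100, far beyond float64
# range, but its log is simply 1100 * log(2).
a = 2.0 * np.eye(1100)
print(np.linalg.det(a))  # inf: the plain determinant overflows
print(np.allclose(logdet_spd(a), 1100 * np.log(2.0)))  # True
```

For general (non-symmetric) matrices the same idea goes through an LU factorization with the sign of the determinant tracked separately, which appears to be why the patch returns the sign alongside the log-determinant.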
-- Nathaniel From warren.weckesser at enthought.com Fri Feb 26 03:29:26 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 26 Feb 2010 02:29:26 -0600 Subject: [Numpy-discussion] odd ascii format and genfromtxt In-Reply-To: References: Message-ID: <4B878666.4030903@enthought.com> Ralf Gommers wrote: > Hi all, > > I'm trying to read in data from text files with genfromtxt, and have > some trouble figuring out the right combination of keywords. The > format is: > > ['0\t\t4.000000000000000e+007,0.000000000000000e+000\n', > '\t9.860280631554179e-001,-1.902586503306264e-002\n', > '\t9.860280631554179e-001,-1.902586503306264e-002'] > > Note that there are two delimiters, tab and comma. Also, the first > line has an extra integer plus tab (this is a repeating pattern). The > files are large, there's a lot of them, and they're generated by a > binary I can't modify. > > Here are some things I've tried: > > In [216]: np.genfromtxt('ascii2test.raw', invalid_raise=False) > Out[216]: array([ 0., NaN]) > > In [217]: np.genfromtxt('ascii2test.raw', invalid_raise=False, > delimiter=['\t', ',']) > TypeError: cannot perform accumulate with flexible type > > In [228]: np.genfromtxt('ascii2test.raw', delimiter=['\t', ','], > dtype=[('intvar', ' TypeError: cannot perform accumulate with flexible type > > > Any suggestions? The 'delimiter' keyword does not accept a list of strings. If it is a list, it must be a list of integers that are the field widths. In your case, that won't work. 
You could try fromregex: ----- In [1]: import numpy as np In [2]: cat sample.raw 0 4.000e+007,0.00000e+000 9.8602806e-001,-1.9025e-002 9.8602806e-001,-1.9025e-002 123 5.0e6,100.0 10.1,-2.0e-3 10.2,-2.1e-3 In [3]: a = np.fromregex('sample.raw', '(.*?)\t+(.*),(.*)', np.dtype([('extra', 'S8'), ('x', float), ('y', float)])) In [4]: a Out[4]: array([('0', 40000000.0, 0.0), ('', 0.98602805999999998, -0.019025), ('', 0.98602805999999998, -0.019025), ('123', 5000000.0, 100.0), ('', 10.1, -0.002), ('', 10.199999999999999, -0.0020999999999999999)], dtype=[('extra', '|S8'), ('x', ' > Thanks, > Ralf > > ------------------------------------------------------------------------ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From dagss at student.matnat.uio.no Fri Feb 26 03:44:44 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 26 Feb 2010 09:44:44 +0100 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B872158.9090902@silveregg.co.jp> References: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> Message-ID: <4B8789FC.3080804@student.matnat.uio.no> David Cournapeau wrote: > Kurt Smith wrote: > > >> I'm the developer of fwrap. It is coming along, but will be at least >> a month, likely two before the first release. (The main areas that >> need some TLC are the fortran parser and the build system; the build >> system will leverage numpy's distutils unless waf is easy to get >> working.) The first release will cover a large portion of f2py's >> current functionality, but I don't plan on having python callbacks >> working then. Callbacks will be a part of the second release. 
>> >> An issue that you should be aware of is that fwrap will not work with >> f77, and requires gfortran 4.3.3 or greater, since it uses C >> interoperability features not available in f77. (Fwrap will work with >> any 'modern' fortran compiler that has the basic C interoperability >> features implemented. Looking around it appears that all of them do >> have the basic set necessary, see [1]. So this only excludes f77.) >> > > By f77, do you mean g77, i.e. the fortran compiler in the GNU gcc 3.x > suite ? > > If so, that's quite a bummer for scipy. I don't see us removing support > for g77 in the short or even mid term (many distributions depend on it, > and that's not counting windows where there is still no gcc 4.x official > support from MinGW). > > Do you have a list somewhere of what exactly is required for fwrap from > the fortran compiler ? > I think f77 means Fortran 77 in general, including g77. (Of course, g77 that might be the only compiler left in daily use which only supports Fortran 77 and not also more modern Fortran.) Long-term: While Fortran 77 is not something fwrap targets today, I think it should be possible to add in some special-casing for f77-only support. Basically a command-line flag to fwrap to tell it not to use ISO C BINDING and assume Fortran 77 feature-level only. Kurt, what do you say? Do you chance on an estimate on how long would it take for (somebody else) to do that? (I'd think one basically has to make the same blatant assumptions that f2py makes about type conversion, name mangling etc., but that is also much less dangerous/error-prone for Fortran 77.) Dag Sverre From dwf at cs.toronto.edu Fri Feb 26 03:52:10 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 26 Feb 2010 03:52:10 -0500 Subject: [Numpy-discussion] anyone to look at #1402? 
In-Reply-To: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> Message-ID: <20100226085210.GB13538@rodimus> On Thu, Feb 25, 2010 at 11:56:34PM -0800, Nathaniel Smith wrote: > So there's this patch I submitted: > http://projects.scipy.org/numpy/ticket/1402 > Obviously not that high a priority in the grand scheme of things (it > adds a function to compute the log-determinant directly), but I don't > want to release a version of scikits.sparse with this functionality > while the numpy patch is hanging in needs-review status (since the API > might change), so it's a bit of a blocker for me. Anyone have a minute > to take a look? I'm not someone who can act on it, but FWIW I am very much +1 on this addition, and the patch looks solid to me. It'd definitely be useful in scikits.learn, maybe scipy.stats/scikits.statsmodels too. The name is a bit awkward, but I can't think of a better one. My first instinct would be to look for "logdet", but I would also not expect such a function to return the log determinant *and* the sign of the determinant. David From ralf.gommers at googlemail.com Fri Feb 26 04:10:08 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 26 Feb 2010 17:10:08 +0800 Subject: [Numpy-discussion] odd ascii format and genfromtxt In-Reply-To: <4B878666.4030903@enthought.com> References: <4B878666.4030903@enthought.com> Message-ID: On Fri, Feb 26, 2010 at 4:29 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > Ralf Gommers wrote: > > Hi all, > > > > I'm trying to read in data from text files with genfromtxt, and have > > some trouble figuring out the right combination of keywords. The > > format is: > > > > ['0\t\t4.000000000000000e+007,0.000000000000000e+000\n', > > '\t9.860280631554179e-001,-1.902586503306264e-002\n', > > '\t9.860280631554179e-001,-1.902586503306264e-002'] > > > > Note that there are two delimiters, tab and comma. 
Also, the first > > line has an extra integer plus tab (this is a repeating pattern). > > > > The 'delimiter' keyword does not accept a list of strings. If it is a > list, it must be a list of integers that are the field widths. In your > case, that won't work. > > You could try fromregex: > > ----- > In [1]: import numpy as np > > In [2]: cat sample.raw > 0 4.000e+007,0.00000e+000 > 9.8602806e-001,-1.9025e-002 > 9.8602806e-001,-1.9025e-002 > 123 5.0e6,100.0 > 10.1,-2.0e-3 > 10.2,-2.1e-3 > > > In [3]: a = np.fromregex('sample.raw', '(.*?)\t+(.*),(.*)', > np.dtype([('extra', 'S8'), ('x', float), ('y', float)])) > > In [4]: a > Out[4]: > array([('0', 40000000.0, 0.0), ('', 0.98602805999999998, -0.019025), > ('', 0.98602805999999998, -0.019025), ('123', 5000000.0, 100.0), > ('', 10.1, -0.002), ('', 10.199999999999999, > -0.0020999999999999999)], > dtype=[('extra', '|S8'), ('x', ' > > Note that the first field of the array is a string, not an integer. The > string will be empty in rows that did not have the initial integer. I > don't know if that will work for you. > > That works, thanks. I had hoped that genfromtxt could do it because it can skip the header and is presumably faster. But I'll take what I can get. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Feb 26 05:21:16 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 26 Feb 2010 11:21:16 +0100 Subject: [Numpy-discussion] anyone to look at #1402? 
In-Reply-To: <20100226085210.GB13538@rodimus> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> <20100226085210.GB13538@rodimus> Message-ID: <20100226102116.GB6638@phare.normalesup.org> On Fri, Feb 26, 2010 at 03:52:10AM -0500, David Warde-Farley wrote: > On Thu, Feb 25, 2010 at 11:56:34PM -0800, Nathaniel Smith wrote: > > So there's this patch I submitted: > > http://projects.scipy.org/numpy/ticket/1402 > > Obviously not that high a priority in the grand scheme of things (it > > adds a function to compute the log-determinant directly), but I don't > > want to release a version of scikits.sparse with this functionality > > while the numpy patch is hanging in needs-review status (since the API > > might change), so it's a bit of a blocker for me. Anyone have a minute > > to take a look? > I'm not someone who can act on it, but FWIW I am very much +1 on this > addition, and the patch looks solid to me. It'd definitely be useful in > scikits.learn, maybe scipy.stats/scikits.statsmodels too. Indeed. The patch looks good at a first glance. Gael From ole-usenet-spam at gmx.net Fri Feb 26 05:23:11 2010 From: ole-usenet-spam at gmx.net (Ole Streicher) Date: Fri, 26 Feb 2010 11:23:11 +0100 Subject: [Numpy-discussion] Apply a function to all indices Message-ID: Hi, I want to apply a function to all indices of an array that fullfill a certain condition. What I tried: ---------------------8<-------------------------------- import numpy def myfunc(x): print 'myfunc of', x a = numpy.random.random((2,3,4)) numpy.apply_along_axis(myfunc, 0, numpy.where(a > 0.8)) ---------------------8<-------------------------------- But this prints just the first index vector and then shows a TypeError: object of type 'NoneType' has no len() What is wrong with my code and how can I do it right? 
Best regards Ole From david at silveregg.co.jp Fri Feb 26 05:23:54 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 26 Feb 2010 19:23:54 +0900 Subject: [Numpy-discussion] anyone to look at #1402? In-Reply-To: <20100226085210.GB13538@rodimus> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> <20100226085210.GB13538@rodimus> Message-ID: <4B87A13A.9050706@silveregg.co.jp> David Warde-Farley wrote: > My first > instinct would be to look for "logdet", but I would also not expect such > a function to return the log determinant *and* the sign of the > determinant. What about having logadet for the (common) case where log |A| only is needed, and having the more complex function when the sign is needed as well ? cheers, David From gael.varoquaux at normalesup.org Fri Feb 26 05:26:28 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 26 Feb 2010 11:26:28 +0100 Subject: [Numpy-discussion] anyone to look at #1402? In-Reply-To: <4B87A13A.9050706@silveregg.co.jp> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> <20100226085210.GB13538@rodimus> <4B87A13A.9050706@silveregg.co.jp> Message-ID: <20100226102628.GC6638@phare.normalesup.org> On Fri, Feb 26, 2010 at 07:23:54PM +0900, David Cournapeau wrote: > David Warde-Farley wrote: > > My first > > instinct would be to look for "logdet", but I would also not expect such > > a function to return the log determinant *and* the sign of the > > determinant. > What about having logadet for the (common) case where log |A| only is > needed, and having the more complex function when the sign is needed as > well ? I was more thinking of a 'return_sign=False' keyword argument. Gaël From david at silveregg.co.jp Fri Feb 26 05:42:58 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 26 Feb 2010 19:42:58 +0900 Subject: [Numpy-discussion] How to test f2py?
In-Reply-To: <4B8789FC.3080804@student.matnat.uio.no> References: <3d375d731002231954r43520b3bydac3e41e87a4e92a@mail.gmail.com> <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> <4B8789FC.3080804@student.matnat.uio.no> Message-ID: <4B87A5B2.5090205@silveregg.co.jp> Dag Sverre Seljebotn wrote: > (I'd think one basically has to make the same blatant assumptions that > f2py makes about type conversion, name mangling etc., but that is also > much less dangerous/error-prone for Fortran 77.) Everything related to name mangling can be handled by distutils/numscons, so this is not an issue (I am ready to help if necessary on that side). I don't really know the assumptions made by f2py otherwise: is it prevalent for most fortran compilers to pass most things by reference ? g77 uses the f2c convention I believe, but I don't know much about other compilers, especially proprietary ones, cheers, David From dwf at cs.toronto.edu Fri Feb 26 05:45:17 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 26 Feb 2010 05:45:17 -0500 Subject: [Numpy-discussion] anyone to look at #1402? In-Reply-To: <20100226102628.GC6638@phare.normalesup.org> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> <20100226085210.GB13538@rodimus> <4B87A13A.9050706@silveregg.co.jp> <20100226102628.GC6638@phare.normalesup.org> Message-ID: <20100226104516.GB5906@rodimus> On Fri, Feb 26, 2010 at 11:26:28AM +0100, Gael Varoquaux wrote: > I was more thinking of a 'return_sign=False' keyword argument. My thoughts exactly. David From david at silveregg.co.jp Fri Feb 26 05:47:51 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Fri, 26 Feb 2010 19:47:51 +0900 Subject: [Numpy-discussion] anyone to look at #1402? 
In-Reply-To: <20100226102628.GC6638@phare.normalesup.org> References: <961fa2b41002252356y7f10dba5yf886740aa5ae8053@mail.gmail.com> <20100226085210.GB13538@rodimus> <4B87A13A.9050706@silveregg.co.jp> <20100226102628.GC6638@phare.normalesup.org> Message-ID: <4B87A6D7.6070606@silveregg.co.jp> Gael Varoquaux wrote: > On Fri, Feb 26, 2010 at 07:23:54PM +0900, David Cournapeau wrote: >> David Warde-Farley wrote: >>> My first >>> instinct would be to look for "logdet", but I would also not expect such >>> a function to return the log determinant *and* the sign of the >>> determinant. > >> What about having logadet for the (common) case where log |A| only is >> needed, and having the more complex function when the sign is needed as >> well ? > > I was more thinking of a 'return_sign=False' keyword argument. I think the consensus in python community is to actually create two functions when the returned values' kind differ depending on a boolean. cheers, David From faltet at pytables.org Fri Feb 26 06:38:29 2010 From: faltet at pytables.org (Francesc Alted) Date: Fri, 26 Feb 2010 12:38:29 +0100 Subject: [Numpy-discussion] ANN: PyTables 2.2b3 released Message-ID: <201002261238.29323.faltet@pytables.org> =========================== Announcing PyTables 2.2b3 =========================== PyTables is a library for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data with support for full 64-bit file addressing. PyTables runs on top of the HDF5 library and NumPy package for achieving maximum throughput and convenient use. This is the third, and most probably last, beta version of 2.2 release. The main addition in this beta version is the addition of Blosc (http://blosc.pytables.org), a high-speed compressor that is meant to work at similar speeds, or higher, than the memory-cache bandwidth in modern processors. This will allow for very high performance in internal, in-memory PyTables computations while still using compression. 
Remember that Blosc is still in *beta* and it is not meant for production purposes yet. You have been warned! In case you want to know more in detail what has changed in this version, have a look at: http://www.pytables.org/moin/ReleaseNotes/Release_2.2b3 You can download a source package with generated PDF and HTML docs, as well as binaries for Windows, from: http://www.pytables.org/download/preliminary For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.2b3 Resources ========= About PyTables: http://www.pytables.org About the HDF5 library: http://hdfgroup.org/HDF5/ About NumPy: http://numpy.scipy.org/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for a (incomplete) list of contributors. Most specially, a lot of kudos go to the HDF5 and NumPy (and numarray!) makers. Without them, PyTables simply would not exist. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- Francesc Alted From eadrogue at gmx.net Fri Feb 26 06:43:00 2010 From: eadrogue at gmx.net (Ernest =?iso-8859-1?Q?Adrogu=E9?=) Date: Fri, 26 Feb 2010 12:43:00 +0100 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: References: Message-ID: <20100226114300.GA12362@doriath.local> Hi, 26/02/10 @ 11:23 (+0100), thus spake Ole Streicher: > Hi, > > I want to apply a function to all indices of an array that fullfill a > certain condition. 
> > What I tried: > > ---------------------8<-------------------------------- > import numpy > > def myfunc(x): > print 'myfunc of', x > > a = numpy.random.random((2,3,4)) > numpy.apply_along_axis(myfunc, 0, numpy.where(a > 0.8)) > ---------------------8<-------------------------------- > > But this prints just the first index vector and then shows a > TypeError: object of type 'NoneType' has no len() > > What is wrong with my code and how can I do it right? Your function returns nothing (i.e. None), and the numpy function was expecting a scalar or an array-like object, that's why it fails. It depends on what exactly you want to do. If you just want to iterate over the array, try something like this for element in a[a > 0.8]: myfunc(element) Or if you want to produce a different array of the same shape as the original, then you probably need a vectorised function. def myfunc(x): print 'myfunc of', x if x > 0.8: return x + 2 else: return x vect_func = numpy.frompyfunc(myfunc, 1, 1) vect_func(a) But in this case, myfunc() has to return a scalar value for each element in a. > Best regards > > Ole > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From pav at iki.fi Fri Feb 26 06:51:35 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 26 Feb 2010 13:51:35 +0200 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: <20100226114300.GA12362@doriath.local> References: <20100226114300.GA12362@doriath.local> Message-ID: <1267185095.2728.464.camel@talisman> pe, 2010-02-26 kello 12:43 +0100, Ernest Adrogué kirjoitti: [clip] > Or if you want to produce a different array of the same shape > as the original, then you probably need a vectorised function.
> > def myfunc(x): > > print 'myfunc of', x > > if x > 0.8: > > return x + 2 > > else: > > return x > > vect_func = numpy.frompyfunc(myfunc, 1, 1) > > vect_func(a) Note that frompyfunc always makes the vectorized function return object arrays, which may not be what is wanted. Instead, one can use numpy.vectorize. -- Pauli Virtanen From ole-usenet-spam at gmx.net Fri Feb 26 07:31:41 2010 From: ole-usenet-spam at gmx.net (Ole Streicher) Date: Fri, 26 Feb 2010 13:31:41 +0100 Subject: [Numpy-discussion] Apply a function to all indices References: <20100226114300.GA12362@doriath.local> Message-ID: Hello Ernest, Ernest Adrogué writes: > It depends on what exactly you want to do. If you just want > to iterate over the array, try something like this > for element in a[a > 0.8]: > myfunc(element) No; I need to iterate over the *indices*, not over the elements. a = numpy.random.random((2,3,4)) for index in ???(a > 0.5): print index[0], index[1], index[2] Best regards Ole From eadrogue at gmx.net Fri Feb 26 08:02:41 2010 From: eadrogue at gmx.net (Ernest =?iso-8859-1?Q?Adrogu=E9?=) Date: Fri, 26 Feb 2010 14:02:41 +0100 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: References: <20100226114300.GA12362@doriath.local> Message-ID: <20100226130241.GA12636@doriath.local> 26/02/10 @ 13:31 (+0100), thus spake Ole Streicher: > Hello Ernest, > > Ernest Adrogué writes: > > It depends on what exactly you want to do. If you just want > > to iterate over the array, try something like this > > for element in a[a > 0.8]: > > myfunc(element) > > No; I need to iterate over the *indices*, not over the elements. > > a = numpy.random.random((2,3,4)) > > for index in ???(a > 0.5): > print index[0], index[1], index[2] Ah, I think np.argwhere does that: for index in numpy.argwhere(a > 0.5): print index index is an array, so it can be indexed, for example: for index in numpy.argwhere(a > 0.5): print a[index[0], index[1], index[2]] prints all elements in a that are greater than 0.5. Bye.
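The argwhere pattern from this exchange, condensed into one self-contained sketch (the array shape and threshold follow Ole's example; `tuple(index)` is simply one convenient way to turn an index row back into a multi-dimensional index, not something from the thread itself):

```python
import numpy as np

a = np.random.random((2, 3, 4))

# np.argwhere returns one row of indices per element matching the condition
for index in np.argwhere(a > 0.5):
    # tuple(index) turns the index row into a proper multi-dimensional index
    assert a[tuple(index)] > 0.5

# there is exactly one index row per matching element
assert len(np.argwhere(a > 0.5)) == (a > 0.5).sum()
```

The same elements, without their indices, are reachable directly as `a[a > 0.5]`, which is what the earlier boolean-mask suggestion amounts to.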
> Best regards > > Ole > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From nicolas.fauchereau at gmail.com Fri Feb 26 08:06:54 2010 From: nicolas.fauchereau at gmail.com (Nicolas) Date: Fri, 26 Feb 2010 15:06:54 +0200 Subject: [Numpy-discussion] Sea Ice Concentrations from Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data Message-ID: Hello a VERY specific question, but I thought someone in the list might have used this dataset before and could give me some advice I am trying to read the daily Antarctic Sea Ice Concentrations from Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data, as provided by the national snow and ice data center (http://nsidc.org), more specifically, these are the files as downloaded from their ftp site (ftp://sidads.colorado.edu/pub/DATASETS/seaice/polar-stereo/nasateam/final-gsfc/south/daily) they are provided in binary files (e.g. nt_19980101_f13_v01_s.bin for the 1st of Jan. 1998) the metadata information (http://nsidc.org/cgi-bin/get_metadata.pl?id=nsidc-0051) gives the following information (for the polar stereographic projection): """ Data are scaled and stored as one-byte integers in flat binary arrays geographical coordinates N: 90? S: 30.98? E: 180? W: -180? Latitude Resolution: 25 km Longitude Resolution: 25 km Distribution Size: 105212 bytes per southern file """ I am unfamiliar with non self-documented files (used to hdf and netcdf !) 
and struggle to make sense of how to read these files and plot the corresponding maps, I've tried using the array module and the fromstring function file=open(filename,'rb') a=array.array('B',file.read()) var=numpy.fromstring(a,dtype=np.int) or directly the fromfile function var = numpy.fromfile(filename,np.int) with various array types and numpy.dtypes, but I dont make sense of these values, anyone has read those files before and plot the fields using python / numpy / matplotlib ? I am using python2.6 / numpy 1.3.0 / thanks in advance Nicolas -- _/\/??????\/\_ 33?49'45.24"S & 18?28'45.60"E Dr. Nicolas Fauchereau Senior Researcher CSIR - NRE Research Group: Ocean systems and climate 15 Lower Hope street, Rosebank 7700 South Africa tel: 021 658 2764 _/\/??????\/\_ 33?49'45.24"S & 18?28'45.60"E From dagss at student.matnat.uio.no Fri Feb 26 08:11:45 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 26 Feb 2010 14:11:45 +0100 Subject: [Numpy-discussion] Sea Ice Concentrations from Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data In-Reply-To: References: Message-ID: <4B87C891.1050800@student.matnat.uio.no> Nicolas wrote: > Hello > > a VERY specific question, but I thought someone in the list might have > used this dataset before and could give me some advice > > I am trying to read the daily Antarctic Sea Ice Concentrations from > Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data, as provided by > the national snow and ice data center (http://nsidc.org), more > specifically, these are the files as downloaded from their ftp site > (ftp://sidads.colorado.edu/pub/DATASETS/seaice/polar-stereo/nasateam/final-gsfc/south/daily) > > they are provided in binary files (e.g. nt_19980101_f13_v01_s.bin for > the 1st of Jan. 
1998) > > the metadata information > (http://nsidc.org/cgi-bin/get_metadata.pl?id=nsidc-0051) gives the > following information (for the polar stereographic projection): > """ > Data are scaled and stored as one-byte integers in flat binary arrays > > geographical coordinates > N: 90? S: 30.98? E: 180? W: -180? > > Latitude Resolution: 25 km > Longitude Resolution: 25 km > > Distribution Size: 105212 bytes per southern file > """ > > I am unfamiliar with non self-documented files (used to hdf and netcdf > !) and struggle to make sense of how to read these files and plot the > corresponding maps, I've tried using the array module and the > fromstring function > > file=open(filename,'rb') > a=array.array('B',file.read()) > var=numpy.fromstring(a,dtype=np.int) > Try numpy.fromfile(f, dtype=np.int8) or uint8, depending on whether the data is signed or not. What you did is also correct except that the final dtype must be np.int8. (You can always do var = var.astype(np.int) *afterwards* to convert to a bigger integer type.) Dag Sverre From eadrogue at gmx.net Fri Feb 26 08:12:58 2010 From: eadrogue at gmx.net (Ernest =?iso-8859-1?Q?Adrogu=E9?=) Date: Fri, 26 Feb 2010 14:12:58 +0100 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: <1267185095.2728.464.camel@talisman> References: <20100226114300.GA12362@doriath.local> <1267185095.2728.464.camel@talisman> Message-ID: <20100226131258.GB12636@doriath.local> 26/02/10 @ 13:51 (+0200), thus spake Pauli Virtanen: > pe, 2010-02-26 kello 12:43 +0100, Ernest Adrogu? kirjoitti: > [clip] > > Or if you want to produce a different array of the same shape > > as the original, then you probably need a vectorised function. > > > > def myfunc(x): > > print 'myfunc of', x > > if x > 0.8: > > return x + 2 > > else: > > return x > > vect_func = numpy.frompyfunc(myfunc, 1, 1) > > vect_func(a) > > Note that frompyfunc always makes the vectorized function return object > arrays, which may not be what is wanted. 
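Dag's fromfile suggestion for the sea-ice files can be checked end to end against a synthetic stand-in. The 300-byte header and the 332 x 316 grid shape used below are assumptions inferred from the stated 105212-byte file size (300 + 332 * 316 = 105212); verify them against the NSIDC documentation before relying on them for real data:

```python
import os
import tempfile

import numpy as np

# Synthetic stand-in for an nt_YYYYMMDD_*_s.bin file.  The 300-byte
# header and the 332x316 one-byte grid are assumptions inferred from
# the documented 105212-byte size: 300 + 332 * 316 = 105212.
header = b"\x00" * 300
grid = np.random.randint(0, 251, size=(332, 316)).astype(np.uint8)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(header + grid.tobytes())
    fname = f.name

# Read it back as suggested: one-byte unsigned integers, header skipped.
raw = np.fromfile(fname, dtype=np.uint8)
data = raw[300:].reshape(332, 316)

assert (data == grid).all()
os.remove(fname)
```

This only demonstrates the byte-level round trip; interpreting the values (concentration scaling, land masks, flag values) still requires the dataset documentation.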
Instead, one can use > numpy.vectorize. Thanks for the tip. I didn't know that... Also, frompyfunc appears to crash python when the last argument is 0: In [9]: func=np.frompyfunc(lambda x: x, 1, 0) In [10]: func(np.arange(5)) Violació de segment This with Python 2.5.5, Numpy 1.3.0 on GNU/Linux. Cheers. > -- > Pauli Virtanen > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From robert.kern at gmail.com Fri Feb 26 10:06:58 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 26 Feb 2010 09:06:58 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B87A5B2.5090205@silveregg.co.jp> References: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> <4B8789FC.3080804@student.matnat.uio.no> <4B87A5B2.5090205@silveregg.co.jp> Message-ID: <3d375d731002260706q6f7d7e06ga9de20fe4ab9ace0@mail.gmail.com> On Fri, Feb 26, 2010 at 04:42, David Cournapeau wrote: > I don't really know the assumptions made by f2py otherwise: is it > prevalent for most fortran compilers to pass most things by reference ? I think that's part of the standardized semantics, yes. That said, some things in the C API like how to pass the length of a character string are not standardized and are sometimes passed by value. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Fri Feb 26 10:20:11 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 26 Feb 2010 08:20:11 -0700 Subject: [Numpy-discussion] How to test f2py?
In-Reply-To: <3d375d731002260706q6f7d7e06ga9de20fe4ab9ace0@mail.gmail.com> References: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> <4B8789FC.3080804@student.matnat.uio.no> <4B87A5B2.5090205@silveregg.co.jp> <3d375d731002260706q6f7d7e06ga9de20fe4ab9ace0@mail.gmail.com> Message-ID: On Fri, Feb 26, 2010 at 8:06 AM, Robert Kern wrote: > On Fri, Feb 26, 2010 at 04:42, David Cournapeau > wrote: > > > I don't really know the assumptions made by f2py otherwise: is it > > prevalent for most fortran compilers to pass most things by reference ? > > I think that's part of the standardized semantics, yes. That said, > some things in the C API like how to pass the length of a character > string are not standardized and are sometimes passed by value. > > Old Fortran didn't use stacks, all the data was collected into a section of memory and the subroutine referenced that memory location; think of it as an anonymous common. The addition of stack variables and the possibility of recursion was an innovation and originally required a special declaration that the variable was local, i.e., on a stack. I don't know how modern Fortran is implemented but using references has a long tradition in the language. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwmsmith at gmail.com Fri Feb 26 11:01:31 2010 From: kwmsmith at gmail.com (Kurt Smith) Date: Fri, 26 Feb 2010 10:01:31 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B872158.9090902@silveregg.co.jp> References: <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> Message-ID: On Thu, Feb 25, 2010 at 7:18 PM, David Cournapeau wrote: > Kurt Smith wrote: > >> I'm the developer of fwrap. 
?It is coming along, but will be at least >> a month, likely two before the first release. ?(The main areas that >> need some TLC are the fortran parser and the build system; the build >> system will leverage numpy's distutils unless waf is easy to get >> working.) The first release will cover a large portion of f2py's >> current functionality, but I don't plan on having python callbacks >> working then. ?Callbacks will be a part of the second release. >> >> An issue that you should be aware of is that fwrap will not work with >> f77, and requires gfortran 4.3.3 or greater, since it uses C >> interoperability features not available in f77. ?(Fwrap will work with >> any 'modern' fortran compiler that has the basic C interoperability >> features implemented. ?Looking around it appears that all of them do >> have the basic set necessary, see [1]. ?So this only excludes f77.) > > By f77, do you mean g77, i.e. the fortran compiler in the GNU gcc 3.x > suite ? Yes, my bad. For some reason 'apt-cache search' finds 'f77' but not 'g77' here. Fwrap makes use of the ISO C BINDING intrinsic module and BIND(C) attributes heavily -- compilers that handle *only* Fortran 77 code don't have this. > > If so, that's quite a bummer for scipy. I don't see us removing support > for g77 in the short or even mid term (many distributions depend on it, > and that's not counting windows where there is still no gcc 4.x official > support from MinGW). > > Do you have a list somewhere of what exactly is required for fwrap from > the fortran compiler ? Not a comprehensive list, although that would be good to draw up. I'll do that soon. It is entirely possible to add in support for FORTRAN 77-only stuff, to allow g77 to be supported. Since g77 is important to support in scipy it's probably worth the effort. But that will have to come later (over the summer); it wouldn't be too hard. 
The reason for the new stuff isn't purely academic, either -- it allows interoperability between C structs and Fortran derived types, and greatly improves portability, to the point that fwrap doesn't need to be installed for the wrapped code to compile (similar to Cython-generated code). Kurt > > cheers, > > David From kwmsmith at gmail.com Fri Feb 26 11:18:52 2010 From: kwmsmith at gmail.com (Kurt Smith) Date: Fri, 26 Feb 2010 10:18:52 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B8789FC.3080804@student.matnat.uio.no> References: <3d375d731002232019l6c275f6eve37deb3773dd248d@mail.gmail.com> <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> <4B8789FC.3080804@student.matnat.uio.no> Message-ID: On Fri, Feb 26, 2010 at 2:44 AM, Dag Sverre Seljebotn wrote: > David Cournapeau wrote: >> Kurt Smith wrote: >> >> >>> I'm the developer of fwrap. ?It is coming along, but will be at least >>> a month, likely two before the first release. ?(The main areas that >>> need some TLC are the fortran parser and the build system; the build >>> system will leverage numpy's distutils unless waf is easy to get >>> working.) The first release will cover a large portion of f2py's >>> current functionality, but I don't plan on having python callbacks >>> working then. ?Callbacks will be a part of the second release. >>> >>> An issue that you should be aware of is that fwrap will not work with >>> f77, and requires gfortran 4.3.3 or greater, since it uses C >>> interoperability features not available in f77. ?(Fwrap will work with >>> any 'modern' fortran compiler that has the basic C interoperability >>> features implemented. ?Looking around it appears that all of them do >>> have the basic set necessary, see [1]. ?So this only excludes f77.) >>> >> >> By f77, do you mean g77, i.e. the fortran compiler in the GNU gcc 3.x >> suite ? >> >> If so, that's quite a bummer for scipy. 
I don't see us removing support >> for g77 in the short or even mid term (many distributions depend on it, >> and that's not counting windows where there is still no gcc 4.x official >> support from MinGW). >> >> Do you have a list somewhere of what exactly is required for fwrap from >> the fortran compiler ? >> > I think f77 means Fortran 77 in general, including g77. (Of course, g77 > that might be the only compiler left in daily use which only supports > Fortran 77 and not also more modern Fortran.) > > Long-term: While Fortran 77 is not something fwrap targets today, I > think it should be possible to add in some special-casing for f77-only > support. Basically a command-line flag to fwrap to tell it not to use > ISO C BINDING and assume Fortran 77 feature-level only. Kurt, what do > you say? Do you chance on an estimate on how long would it take for > (somebody else) to do that? Once fwrap has blazed a trail for all the main features, including procedure arguments, then it wouldn't be too hard, since 'nice' 77 code is a subset. I'd prefer not to get into the dark corners of 77 stuff, though, like ENTRY statements and statement functions. But if patches come to support these features, who am I to refuse :) How long would it take? I'd rather not hazard a guess. If it is critical for scipy then it will be bumped up on the priority list, though. I don't mean to sound like I assume fwrap will be *the* fortran wrapper for scipy. f2py has worked very well as proven by scipy's success. There is much overlap between their functionality, but they take different approaches. I realize that if fwrap doesn't work well with numpy/scipy then it really isn't useful. The intersection between python users and numerical programmers using fortran ain't that big. If at some point fwrap is used as the fortran wrapper for scipy, then it's fine by me. But that's up to David, Chuck, Travis, et alia. 
> > (I'd think one basically has to make the same blatant assumptions that > f2py makes about type conversion, name mangling etc., but that is also > much less dangerous/error-prone for Fortran 77.) > > Dag Sverre > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From kwmsmith at gmail.com Fri Feb 26 11:30:02 2010 From: kwmsmith at gmail.com (Kurt Smith) Date: Fri, 26 Feb 2010 10:30:02 -0600 Subject: [Numpy-discussion] How to test f2py? In-Reply-To: <4B87A5B2.5090205@silveregg.co.jp> References: <5b8d13221002240015l6e8060cdm9765b338ef5cad20@mail.gmail.com> <4B862FBC.1020205@silveregg.co.jp> <4B872158.9090902@silveregg.co.jp> <4B8789FC.3080804@student.matnat.uio.no> <4B87A5B2.5090205@silveregg.co.jp> Message-ID: On Fri, Feb 26, 2010 at 4:42 AM, David Cournapeau wrote: > Dag Sverre Seljebotn wrote: > >> (I'd think one basically has to make the same blatant assumptions that >> f2py makes about type conversion, name mangling etc., but that is also >> much less dangerous/error-prone for Fortran 77.) > > Everything related to name mangling can be handled by > distutils/numscons, so this is not an issue (I am ready to help if > necessary on that side). > That's good and will make it easier to support 77-only code in fwrap. > I don't really know the assumptions made by f2py otherwise: is it > prevalent for most fortran compilers to pass most things by reference ? > g77 uses the f2c convention I believe, but I don't know much about other > compilers, especially proprietary ones, I'm no expert on 77 stuff, but at least with the standardization of the C interoperability features in the 2003 standard, everything is passed by reference by default between Fortran and C (for the non-opaque types, which are the only allowed arguments between Fortran & C). The 'VALUE' keyword was added to the language to allow arguments to be passed by value. 
Some compilers supported the VALUE kw before the 2003 standard came out (Sun's fortran compiler, for one). Kurt From ralf.gommers at googlemail.com Fri Feb 26 12:00:25 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 27 Feb 2010 01:00:25 +0800 Subject: [Numpy-discussion] testing binary installer for OS X Message-ID: Hi, I built an installer for OS X and did some testing on a clean computer. All NumPy tests pass. SciPy (0.7.1 binary) gives a number of errors and failures, I copied one of each type below. For full output see http://pastebin.com/eEcwkzKr . To me it looks like the failures are harmless, and the kdtree errors are not related to changes in NumPy. Is that right? I also installed Matplotlib (0.99.1.1 binary), but I did not find a way to test just the binary install except manually. Created some plots, looked fine. Then I ran the test script examples/tests/backend_driver.py from an svn checkout, but my laptop died before the tests finished (at least 25 mins). Basic output was: 1.123 0 0.987 0 ... Can anyone tell me what the best way is to test the MPL binary? 
Cheers, Ralf ====================================================================== ERROR: Failure: ValueError (numpy.dtype does not appear to be the correct type object) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/__init__.py", line 9, in import vq, hierarchy File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/hierarchy.py", line 199, in import scipy.spatial.distance as distance File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/__init__.py", line 7, in from ckdtree import * File "numpy.pxd", line 30, in scipy.spatial.ckdtree (scipy/spatial/ckdtree.c:6087) ValueError: numpy.dtype does not appear to be the correct type object ====================================================================== ERROR: test_kdtree.test_random_compiled.test_approx ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py", line 364, in setUp try_run(self.inst, ('setup', 'setUp')) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", 
line 487, in try_run return func() File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/tests/test_kdtree.py", line 133, in setUp self.kdtree = cKDTree(self.data) File "ckdtree.pyx", line 214, in scipy.spatial.ckdtree.cKDTree.__init__ (scipy/spatial/ckdtree.c:1563) NameError: np ====================================================================== FAIL: test_asfptype (test_base.TestBSR) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/tests/test_base.py", line 242, in test_asfptype assert_equal( A.dtype , 'int32' ) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", line 284, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: dtype('int32') DESIRED: 'int32' ====================================================================== FAIL: test_nrdtrisd (test_basic.TestCephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/tests/test_basic.py", line 349, in test_nrdtrisd assert_equal(cephes.nrdtrisd(0.5,0.5,0.5),0.0) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", line 301, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: -0 DESIRED: 0.0 ---------------------------------------------------------------------- Ran 2585 tests in 46.196s FAILED (KNOWNFAIL=4, SKIP=31, errors=28, failures=17) Out[2]: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Fri Feb 26 12:09:24 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 26 Feb 2010 12:09:24 -0500 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: Message-ID: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers wrote: > Hi, > > I built an installer for OS X and did some testing on a clean computer. All > NumPy tests pass. SciPy (0.7.1 binary) gives a number of errors and > failures, I copied one of each type below. For full output see > http://pastebin.com/eEcwkzKr . To me it looks like the failures are > harmless, and the kdtree errors are not related to changes in NumPy. Is that > right? > > I also installed Matplotlib (0.99.1.1 binary), but I did not find a way to > test just the binary install except manually. Created some plots, looked > fine. Then I ran the test script examples/tests/backend_driver.py from an > svn checkout, but my laptop died before the tests finished (at least 25 > mins). Basic output was: > ???? 1.123 0 > ?? 0.987 0 > ... > Can anyone tell me what the best way is to test the MPL binary? > > Cheers, > Ralf > > > > > ====================================================================== > ERROR: Failure: ValueError (numpy.dtype does not appear to be the correct > type object) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", > line 379, in loadTestsFromName > ??? addr.filename, addr.module) > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", > line 39, in importFromPath > ??? return self.importFromDir(dir_path, fqname) > ? 
File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", > line 86, in importFromDir > ??? mod = load_module(part_fqname, fh, filename, desc) > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/__init__.py", > line 9, in > ??? import vq, hierarchy > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/hierarchy.py", > line 199, in > ??? import scipy.spatial.distance as distance > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/__init__.py", > line 7, in > ??? from ckdtree import * > ? File "numpy.pxd", line 30, in scipy.spatial.ckdtree > (scipy/spatial/ckdtree.c:6087) > ValueError: numpy.dtype does not appear to be the correct type object this looks like the cython type check problem, ckdtree.c doesn't look compatible with your numpy version In this case the next errors might be follow-up errors because of an incomplete import Josef > > ====================================================================== > ERROR: test_kdtree.test_random_compiled.test_approx > ---------------------------------------------------------------------- > Traceback (most recent call last): > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py", > line 364, in setUp > ??? try_run(self.inst, ('setup', 'setUp')) > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", > line 487, in try_run > ??? return func() > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/tests/test_kdtree.py", > line 133, in setUp > ??? self.kdtree = cKDTree(self.data) > ? 
File "ckdtree.pyx", line 214, in scipy.spatial.ckdtree.cKDTree.__init__ > (scipy/spatial/ckdtree.c:1563) > NameError: np > > > ====================================================================== > FAIL: test_asfptype (test_base.TestBSR) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/tests/test_base.py", > line 242, in test_asfptype > ??? assert_equal( A.dtype , 'int32' ) > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", > line 284, in assert_equal > ??? raise AssertionError(msg) > AssertionError: > Items are not equal: > ?ACTUAL: dtype('int32') > ?DESIRED: 'int32' > > > ====================================================================== > FAIL: test_nrdtrisd (test_basic.TestCephes) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/tests/test_basic.py", > line 349, in test_nrdtrisd > ??? assert_equal(cephes.nrdtrisd(0.5,0.5,0.5),0.0) > ? File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", > line 301, in assert_equal > ??? 
raise AssertionError(msg) > AssertionError: > Items are not equal: > ?ACTUAL: -0 > ?DESIRED: 0.0 > > ---------------------------------------------------------------------- > Ran 2585 tests in 46.196s > > FAILED (KNOWNFAIL=4, SKIP=31, errors=28, failures=17) > Out[2]: > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From pav at iki.fi Fri Feb 26 12:19:58 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 26 Feb 2010 19:19:58 +0200 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> Message-ID: <1267204798.2728.497.camel@talisman> pe, 2010-02-26 kello 12:09 -0500, josef.pktd at gmail.com kirjoitti: > On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers [clip] > > ValueError: numpy.dtype does not appear to be the correct type object > > This looks like the cython type check problem, ckdtree.c doesn't look > compatible with your numpy version Or, rather, the Scipy binary is not compatible with the Numpy you built, because of a differing size of the PyArray_Descr structure. Recompilation of Scipy would fix this, but if the aim is to produce a binary-compatible release, then something is still wrong. 
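As an aside on Pauli's explanation: the size being checked is visible from Python itself, because the dtype type object reports the C-level sizeof(PyArray_Descr) as its __basicsize__, which is (roughly) what a Cython module's compile-time value gets compared against. A quick sketch; the exact number depends on the numpy version and platform:

```python
import numpy as np

# tp_basicsize of the dtype type object, i.e. sizeof(PyArray_Descr)
# in the numpy that is actually running.  An extension compiled against
# a numpy with a different value fails Cython's import-time type check.
print(np.dtype.__basicsize__)  # value varies by numpy version/platform
```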
-- Pauli Virtanen From josef.pktd at gmail.com Fri Feb 26 12:26:15 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 26 Feb 2010 12:26:15 -0500 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1267204798.2728.497.camel@talisman> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> Message-ID: <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> On Fri, Feb 26, 2010 at 12:19 PM, Pauli Virtanen wrote: > pe, 2010-02-26 kello 12:09 -0500, josef.pktd at gmail.com kirjoitti: >> On Fri, Feb 26, 2010 at 12:00 PM, Ralf Gommers > [clip] >> > ValueError: numpy.dtype does not appear to be the correct type object >> >> This looks like the cython type check problem, ckdtree.c ?doesn't look >> compatible with your numpy version > > Or, rather, the Scipy binary is not compatible with the Numpy you built, > because of a differing size of the PyArray_Descr structure. > Recompilation of Scipy would fix this, but if the aim is to produce a > binary-compatible release, then something is still wrong. recompiling wouldn't be enough, the cython c files also need to be regenerated for a different numpy version. (If I understand the problem correctly.) 
Josef > > -- > Pauli Virtanen > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From pav at iki.fi Fri Feb 26 12:34:08 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 26 Feb 2010 19:34:08 +0200 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> Message-ID: <1267205648.2728.500.camel@talisman> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: [clip] > recompiling wouldn't be enough, the cython c files also need to be > regenerated for a different numpy version. > (If I understand the problem correctly.) No. The Cython-generated sources just use sizeof(PyArray_Descr), the value is not hardcoded, so it's a compile-time issue. -- Pauli Virtanen From charlesr.harris at gmail.com Fri Feb 26 12:41:26 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 26 Feb 2010 10:41:26 -0700 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1267205648.2728.500.camel@talisman> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> Message-ID: On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: > pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: > [clip] > > recompiling wouldn't be enough, the cython c files also need to be > > regenerated for a different numpy version. > > (If I understand the problem correctly.) > > No. The Cython-generated sources just use sizeof(PyArray_Descr), the > value is not hardcoded, so it's a compile-time issue. 
> So Ralf needs to be sure that scipy was compiled against, say, numpy1.3. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sam.Tygier at hep.manchester.ac.uk Fri Feb 26 12:42:36 2010 From: Sam.Tygier at hep.manchester.ac.uk (Sam Tygier) Date: Fri, 26 Feb 2010 17:42:36 +0000 Subject: [Numpy-discussion] read ascii file with quote delimited strings In-Reply-To: References: Message-ID: <1267206156.29535.99.camel@hydrogen> On Fri, 2010-02-26 at 07:56 +0000, numpy-discussion-request at scipy.org wrote: > Date: Thu, 25 Feb 2010 13:56:43 -0800 > From: Chris Barker > Subject: Re: [Numpy-discussion] read ascii file with quote delimited > strings > To: Discussion of Numerical Python > Message-ID: <4B86F21B.5070507 at noaa.gov> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Warren Weckesser wrote: > > Does each column always contain the same number of characters? That > >is, are the field widths always the same? If so, you can ... > > > if not, I'd use the std lib csv module, then convert to numpy arrays, > not as efficient, but it should be easy. > > -Chris Thanks, using csv works. Do you think it would be useful for this functionality to be added to loadtxt(), or would it be good to have a loadcsv() function?
sam From josef.pktd at gmail.com Fri Feb 26 12:44:48 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 26 Feb 2010 12:44:48 -0500 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> Message-ID: <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris wrote: > > > On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: >> >> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: >> [clip] >> > recompiling wouldn't be enough, the cython c files also need to be >> > regenerated for a different numpy version. >> > (If I understand the problem correctly.) >> >> No. The Cython-generated sources just use sizeof(PyArray_Descr), the >> value is not hardcoded, so it's a compile-time issue. > > So Ralf need to be sure that scipy was compiled against, say, numpy1.3. I think I mixed up some things then, scipy 0.7.1 cython files should be regenerated with the latest cython release so that it doesn't check the sizeof anymore. Then, a scipy 0.7.1 build against numpy 1.3 would also work without recompiling against numpy 1.4.1 Is this correct? 
Josef > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From charlesr.harris at gmail.com Fri Feb 26 12:50:22 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 26 Feb 2010 10:50:22 -0700 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> Message-ID: On Fri, Feb 26, 2010 at 10:44 AM, wrote: > On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris > wrote: > > > > > > On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: > >> > >> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: > >> [clip] > >> > recompiling wouldn't be enough, the cython c files also need to be > >> > regenerated for a different numpy version. > >> > (If I understand the problem correctly.) > >> > >> No. The Cython-generated sources just use sizeof(PyArray_Descr), the > >> value is not hardcoded, so it's a compile-time issue. > > > > So Ralf need to be sure that scipy was compiled against, say, numpy1.3. > > I think I mixed up some things then, > scipy 0.7.1 cython files should be regenerated with the latest cython > release so that it doesn't check the sizeof anymore. > Then, a scipy 0.7.1 build against numpy 1.3 would also work without > recompiling against numpy 1.4.1 > > Is this correct? > > Yes, but the aim of 1.4.1 is that it should work with the existing binaries of scipy, i.e., it should be backward compatible with no changes in dtype sizes and such so that even files generated with the old cython shouldn't cause problems. 
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Feb 26 12:53:32 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 26 Feb 2010 12:53:32 -0500 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> Message-ID: <1cd32cbb1002260953v4671ccc2u1e874480c03323ee@mail.gmail.com> On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris wrote: > > > On Fri, Feb 26, 2010 at 10:44 AM, wrote: >> >> On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris >> wrote: >> > >> > >> > On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: >> >> >> >> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: >> >> [clip] >> >> > recompiling wouldn't be enough, the cython c files also need to be >> >> > regenerated for a different numpy version. >> >> > (If I understand the problem correctly.) >> >> >> >> No. The Cython-generated sources just use sizeof(PyArray_Descr), the >> >> value is not hardcoded, so it's a compile-time issue. >> > >> > So Ralf need to be sure that scipy was compiled against, say, numpy1.3. >> >> I think I mixed up some things then, >> scipy 0.7.1 cython files should be regenerated with the latest cython >> release so that it doesn't check the sizeof anymore. >> Then, a scipy 0.7.1 build against numpy 1.3 would also work without >> recompiling against numpy 1.4.1 >> >> Is this correct? >> > > Yes, but the aim of 1.4.1 is that it should work with the existing binaries > of scipy, i.e., it should be backward compatible with no changes in dtype > sizes and such so that even files generated with the old cython shouldn't > cause problems. 
We had this discussion but David said that this is impossible, binary compatibility doesn't remove the (old) cython problem. Josef > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From charlesr.harris at gmail.com Fri Feb 26 13:26:27 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 26 Feb 2010 11:26:27 -0700 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1cd32cbb1002260953v4671ccc2u1e874480c03323ee@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <1cd32cbb1002260953v4671ccc2u1e874480c03323ee@mail.gmail.com> Message-ID: On Fri, Feb 26, 2010 at 10:53 AM, wrote: > On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris > wrote: > > > > > > On Fri, Feb 26, 2010 at 10:44 AM, wrote: > >> > >> On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris > >> wrote: > >> > > >> > > >> > On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: > >> >> > >> >> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: > >> >> [clip] > >> >> > recompiling wouldn't be enough, the cython c files also need to be > >> >> > regenerated for a different numpy version. > >> >> > (If I understand the problem correctly.) > >> >> > >> >> No. The Cython-generated sources just use sizeof(PyArray_Descr), the > >> >> value is not hardcoded, so it's a compile-time issue. > >> > > >> > So Ralf need to be sure that scipy was compiled against, say, > numpy1.3. > >> > >> I think I mixed up some things then, > >> scipy 0.7.1 cython files should be regenerated with the latest cython > >> release so that it doesn't check the sizeof anymore. 
> >> Then, a scipy 0.7.1 build against numpy 1.3 would also work without > >> recompiling against numpy 1.4.1 > >> > >> Is this correct? > >> > > > > Yes, but the aim of 1.4.1 is that it should work with the existing > binaries > > of scipy, i.e., it should be backward compatible with no changes in dtype > > sizes and such so that even files generated with the old cython shouldn't > > cause problems. > > We had this discussion but David said that this is impossible, binary > compatibility doesn't remove the (old) cython problem. > > Depends on what you mean by binary compatibility. If something is added to the end of a structure, then it is still backwards compatible because the offsets of old entries don't change, but the old cython will fail in that case because the size changes. If the sizes are all the same, then there should be no problem and that is what we are shooting for. There are additions to the c_api in 1.4 but I think that structure is private. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Feb 26 15:23:59 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 26 Feb 2010 13:23:59 -0700 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <1cd32cbb1002260953v4671ccc2u1e874480c03323ee@mail.gmail.com> Message-ID: On Fri, Feb 26, 2010 at 11:26 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Fri, Feb 26, 2010 at 10:53 AM, wrote: > >> On Fri, Feb 26, 2010 at 12:50 PM, Charles R Harris >> wrote: >> > >> > >> > On Fri, Feb 26, 2010 at 10:44 AM, wrote: >> >> >> >> On Fri, Feb 26, 2010 at 12:41 PM, Charles R Harris >> >> wrote: >> >> > >> >> > >> >> > On Fri, Feb 26, 2010 at 10:34 AM, Pauli Virtanen wrote: >> >> >> >> >> >> pe, 2010-02-26 kello 12:26 -0500, josef.pktd at gmail.com kirjoitti: >> >> >> [clip] >> >> >> > recompiling wouldn't be enough, the cython c files also need to be >> >> >> > regenerated for a different numpy version. >> >> >> > (If I understand the problem correctly.) >> >> >> >> >> >> No. The Cython-generated sources just use sizeof(PyArray_Descr), the >> >> >> value is not hardcoded, so it's a compile-time issue. >> >> > >> >> > So Ralf need to be sure that scipy was compiled against, say, >> numpy1.3. >> >> >> >> I think I mixed up some things then, >> >> scipy 0.7.1 cython files should be regenerated with the latest cython >> >> release so that it doesn't check the sizeof anymore. >> >> Then, a scipy 0.7.1 build against numpy 1.3 would also work without >> >> recompiling against numpy 1.4.1 >> >> >> >> Is this correct? 
>> >> >> > >> > Yes, but the aim of 1.4.1 is that it should work with the existing >> binaries >> > of scipy, i.e., it should be backward compatible with no changes in >> dtype >> > sizes and such so that even files generated with the old cython >> shouldn't >> > cause problems. >> >> We had this discussion but David said that this is impossible, binary >> compatibility doesn't remove the (old) cython problem. >> >> > Depends on what you mean by binary compatibility. If something is added to > the end of a structure, then it is still backwards compatible because the > offsets of old entries don't change, but the old cython will fail in that > case because the size changes. If the sizes are all the same, then there > should be no problem and that is what we are shooting for. There are > additions to the c_api in 1.4 but I think that structure is private. > > I note that there are still traces of datetime in the 1.4.x public include files, although the desc size looks right. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Fri Feb 26 15:33:13 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 26 Feb 2010 15:33:13 -0500 Subject: [Numpy-discussion] read ascii file with quote delimited strings In-Reply-To: <1267206156.29535.99.camel@hydrogen> References: <1267206156.29535.99.camel@hydrogen> Message-ID: On Feb 26, 2010, at 12:42 PM, Sam Tygier wrote: > On Fri, 2010-02-26 at 07:56 +0000, numpy-discussion-request at scipy.org > wrote: >> Date: Thu, 25 Feb 2010 13:56:43 -0800 >> From: Chris Barker >> >> if not, I'd use the std lib csv module, then convert to numpy arrays, >> not as efficient, but it should be easy. >> >> -Chris > > thanks, using cvs works. > > do you think it would be useful for this functionality to be added to > loadtxt(), or would it be good to have a loadcsv() function? I'd favor a genfromcsv. I'll see what I can do in the next few weeks. 
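To make the csv-then-convert suggestion from the thread above concrete, here is a minimal sketch (the sample data, field names and dtype are made up for illustration):

```python
import csv
import io

import numpy as np

# Quote-delimited ASCII data that loadtxt() cannot split on its own,
# because a quoted field may itself contain the delimiter.
text = '"alpha",1.5\n"beta, with comma",2.5\n'

# Let the stdlib csv module handle the quoting, then build a
# structured array from the parsed rows.
rows = list(csv.reader(io.StringIO(text)))
dt = np.dtype([('name', 'U32'), ('value', 'f8')])
arr = np.array([(name, float(value)) for name, value in rows], dtype=dt)

print(arr['name'][1])   # beta, with comma
print(arr['value'])     # [1.5 2.5]
```

For a real file, replace the io.StringIO wrapper with an open file object; the csv pass is slower than a pure-numpy reader but keeps the quoting rules correct.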
From dalcinl at gmail.com Fri Feb 26 15:38:22 2010 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Fri, 26 Feb 2010 17:38:22 -0300 Subject: [Numpy-discussion] What are the 'p', 'P' types? In-Reply-To: References: Message-ID: On 25 February 2010 00:46, Charles R Harris wrote: > They are now typecodes but have no entries in the typename dictionary. IIUC, p/P should map to signed/unsigned integers large enough to hold a pointer on the platform. So perhaps their names should be intptr_t and uintptr_t ? > > The 'm', 'M' types also lack dictionary entries. > Map to 'timedelta' and 'datetime' ? -- Lisandro Dalcin --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From rpmuller at gmail.com Fri Feb 26 16:01:35 2010 From: rpmuller at gmail.com (Rick Muller) Date: Fri, 26 Feb 2010 14:01:35 -0700 Subject: [Numpy-discussion] Can you help me find a dumb matrix multiply mistake Message-ID: I'm making a mistake here, one that I suspect is a dumb error. I'm not as familiar with the math of complex hermitian matrices as I am with real symmetric matrices. I want to diagonalize the matrix: Y = matrix([[0,-1j],[1j,0]]) # this is the Y Pauli spin matrix Ey,Uy = eigh(Y) When I try to do: print Uy.H * diag(Ey) * Uy rather than getting Y back, I get: [[ 0.+0.j -1.+0.j] [-1.+0.j 0.+0.j]] I also tried dot(Uy.H,dot(diag(Ey),Uy)) to make sure this isn't a matrix/array problem with the same result. Can someone spot what I'm doing wrong? -- Rick Muller rpmuller at gmail.com 505-750-7557 -------------- next part -------------- An HTML attachment was scrubbed...
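Going back to the 'p'/'P' typecode question above: Lisandro's reading is easy to check at the interpreter. They are numpy's pointer-sized integer typecodes (intp/uintp), so their itemsize should match the platform's pointer width. A quick sketch:

```python
import ctypes

import numpy as np

# 'p' and 'P' are the typecodes for np.intp / np.uintp: signed and
# unsigned integers wide enough to hold a pointer on this platform.
ptr_size = ctypes.sizeof(ctypes.c_void_p)
print(np.dtype('p'), np.dtype('p').itemsize == ptr_size)
print(np.dtype('P'), np.dtype('P').itemsize == ptr_size)
```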
URL: From david.huard at gmail.com Fri Feb 26 16:08:50 2010 From: david.huard at gmail.com (David Huard) Date: Fri, 26 Feb 2010 16:08:50 -0500 Subject: [Numpy-discussion] Sea Ice Concentrations from Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data In-Reply-To: <4B87C891.1050800@student.matnat.uio.no> References: <4B87C891.1050800@student.matnat.uio.no> Message-ID: <91cf711d1002261308n27b27252kcb621a5133b3ea17@mail.gmail.com> Nicolas, I've attached a script I used to load the files, metadata and coordinates. You owe me a donut. David On Fri, Feb 26, 2010 at 8:11 AM, Dag Sverre Seljebotn wrote: > Nicolas wrote: >> Hello >> >> a VERY specific question, but I thought someone in the list might have >> used this dataset before and could give me some advice >> >> I am trying to read the daily Antarctic Sea Ice Concentrations from >> Nimbus-7 SMMR and DMSP SSM/I Passive Microwave Data, as provided by >> the national snow and ice data center (http://nsidc.org), more >> specifically, these are the files as downloaded from their ftp site >> (ftp://sidads.colorado.edu/pub/DATASETS/seaice/polar-stereo/nasateam/final-gsfc/south/daily) >> >> they are provided in binary files (e.g. nt_19980101_f13_v01_s.bin for >> the 1st of Jan. 1998) >> >> the metadata information >> (http://nsidc.org/cgi-bin/get_metadata.pl?id=nsidc-0051) gives the >> following information (for the polar stereographic projection): >> """ >> Data are scaled and stored as one-byte integers in flat binary arrays >> >> geographical coordinates >> N: 90? ? ? S: 30.98? ? ? E: 180? ? ? W: -180? >> >> Latitude Resolution: 25 km >> Longitude Resolution: 25 km >> >> Distribution Size: 105212 bytes per southern file >> """ >> >> I am unfamiliar with non self-documented files (used to hdf and netcdf >> !) 
and struggle to make sense of how to read these files and plot the >> corresponding maps, I've tried using the array module and the >> fromstring function >> >> file=open(filename,'rb') >> a=array.array('B',file.read()) >> var=numpy.fromstring(a,dtype=np.int) >> > Try > > numpy.fromfile(f, dtype=np.int8) > > or uint8, depending on whether the data is signed or not. What you did > is also correct except that the final dtype must be np.int8. > > (You can always do var = var.astype(np.int) *afterwards* to convert to a > bigger integer type.) > > Dag Sverre > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- A non-text attachment was scrubbed... Name: showice.py Type: text/x-python Size: 3241 bytes Desc: not available URL: From josef.pktd at gmail.com Fri Feb 26 16:18:27 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 26 Feb 2010 16:18:27 -0500 Subject: [Numpy-discussion] Can you help me find a dumb matrix multiply mistake In-Reply-To: References: Message-ID: <1cd32cbb1002261318w7d45a735t8cc97131297bdef4@mail.gmail.com> On Fri, Feb 26, 2010 at 4:01 PM, Rick Muller wrote: > I'm making a mistake here, one that I suspect is a dumb error. I'm not as > familiar with the math of complex hermetian matrices as I am with real > symmetry matrices. > > I want to diagonalize the matrix: > > Y = matrix([[0,-1j],[1j,0]])???? # this is the Y Pauli spin matrix > > Ey,Uy = eigh(Y) > > When I try to do: > > print Uy.H * diag(Ey) * Uy > > rather than getting Y back, I get: > > [[ 0.+0.j -1.+0.j] > ?[-1.+0.j? 0.+0.j]] to get Y back: >>> Uy * np.diag(Ey) * Uy.H matrix([[ 0.+0.j, 0.-1.j], [ 0.+1.j, 0.+0.j]]) >>> Uy * np.diag(Ey) * Uy.H - Y matrix([[ 0. +0.00000000e+00j, 0. +2.22044605e-16j], [ 0. -2.22044605e-16j, 0. 
+0.00000000e+00j]]) Josef > > I also tried > > dot(Uy.H,dot(diag(Ey),Uy)) > > to make sure this isn't a matrix/array problem with the same result. Can > someone spot what I'm doing wrong? > > > > -- > Rick Muller > rpmuller at gmail.com > 505-750-7557 > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From numpy at mspacek.mm.st Fri Feb 26 17:41:25 2010 From: numpy at mspacek.mm.st (Martin Spacek) Date: Fri, 26 Feb 2010 14:41:25 -0800 Subject: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing Message-ID: I have a 1D structured ndarray with several different fields in the dtype. I'm using multiprocessing.Pool.map() to iterate over this structured ndarray, passing one entry (of type numpy.void) at a time to the function to be called by each process in the pool. After much confusion about why this wasn't working, I finally realized that unpickling a previously pickled numpy.void results in garbage data. 
Here's an example: >>> import numpy as np >>> x = np.zeros((2,), dtype=('i4,f4,a10')) >>> x[:] = [(1,2.,'Hello'), (2,3.,"World")] >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) >>> x[0] (1, 2.0, 'Hello') >>> type(x[0]) <type 'numpy.void'> >>> import pickle >>> s = pickle.dumps(x[0]) >>> newx0 = pickle.loads(s) >>> newx0 (30917960, 1.6904535998413144e-38, '\xd0\xef\x1c\x1eZ\x03\x00d') >>> s "cnumpy.core.multiarray\nscalar\np0\n(cnumpy\ndtype\np1\n(S'V18'\np2\nI0\nI1\ntp3\nRp4\n(I4\nS'|'\np5\nN(S'f0'\np6\nS'f1'\np7\nS'f2'\np8\ntp9\n(dp10\ng6\n(g1\n(S'i4'\np11\nI0\nI1\ntp12\nRp13\n(I4\nS'<'\np14\nNNNI-1\nI-1\nI0\nNtp15\nbI0\ntp16\nsg7\n(g1\n(S'f4'\np17\nI0\nI1\ntp18\nRp19\n(I4\nS'<'\np20\nNNNI-1\nI-1\nI0\nNtp21\nbI4\ntp22\nsg8\n(g1\n(S'S10'\np23\nI0\nI1\ntp24\nRp25\n(I4\nS'|'\np26\nNNNI10\nI1\nI0\nNtp27\nbI8\ntp28\nsI18\nI1\nI0\nNtp29\nbS'\\x01\\x00\\x00\\x00\\x00\\x00\\x00@Hello\\x00\\x00\\x00\\x00\\x00'\np30\ntp31\nRp32\n." >>> type(newx0) <type 'numpy.void'> >>> newx0.dtype dtype([('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) >>> x[0].dtype dtype([('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) >>> np.version.version '1.4.0' This also seems to be the case for recarrays with their numpy.record entries. I've tried using pickle and cPickle, with both the oldest and the newest pickling protocol. This is in numpy 1.4 on win32 and win64, and numpy 1.3 on 32-bit linux. I'm using Python 2.6.4 in all cases. I also just tried it on Python 2.5.2 with numpy 1.0.4. All have the same result, although the garbage data is different each time. I suppose numpy.void is as it suggests, a pointer to a specific place in memory. I'm just surprised that this pointer isn't dereferenced before pickling. Or is it? I'm not skilled in interpreting the strings returned by pickle.dumps(). I do see the word "Hello" in the string, so maybe the problem is during unpickling. I've tried doing a copy, and even a deepcopy of a structured array numpy.void entry, with no luck. Is this a known limitation? Any suggestions on how I might get around this? 
Pool.map() pickles each numpy.void entry as it iterates over the structured array, before sending it to the next available process. My structured array only needs to be read from by my multiple processes (one per core), so perhaps there's a better way than sending copies of entries. Multithreading (using an implementation of a ThreadPool I found somewhere) doesn't work because I'm calling scipy.optimize.leastsq, which doesn't seem to release the GIL. Thanks! Martin From robert.kern at gmail.com Fri Feb 26 18:02:36 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 26 Feb 2010 17:02:36 -0600 Subject: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing In-Reply-To: References: Message-ID: <3d375d731002261502l17d31301uf7607c8c8e790143@mail.gmail.com> On Fri, Feb 26, 2010 at 16:41, Martin Spacek wrote: > I have a 1D structured ndarray with several different fields in the dtype. I'm > using multiprocessing.Pool.map() to iterate over this structured ndarray, > passing one entry (of type numpy.void) at a time to the function to be called by > each process in the pool. After much confusion about why this wasn't working, I > finally realized that unpickling a previously pickled numpy.void results in > garbage data. Here's an example: > > ?>>> import numpy as np > ?>>> x = np.zeros((2,), dtype=('i4,f4,a10')) > ?>>> x[:] = [(1,2.,'Hello'), (2,3.,"World")] > ?>>> x > array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], > ? ? ? 
dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) > ?>>> x[0] > (1, 2.0, 'Hello') > ?>>> type(x[0]) > <type 'numpy.void'> > ?>>> import pickle > ?>>> s = pickle.dumps(x[0]) > ?>>> newx0 = pickle.loads(s) > ?>>> newx0 > (30917960, 1.6904535998413144e-38, '\xd0\xef\x1c\x1eZ\x03\x00d') > ?>>> s > "cnumpy.core.multiarray\nscalar\np0\n(cnumpy\ndtype\np1\n(S'V18'\np2\nI0\nI1\ntp3\nRp4\n(I4\nS'|'\np5\nN(S'f0'\np6\nS'f1'\np7\nS'f2'\np8\ntp9\n(dp10\ng6\n(g1\n(S'i4'\np11\nI0\nI1\ntp12\nRp13\n(I4\nS'<'\np14\nNNNI-1\nI-1\nI0\nNtp15\nbI0\ntp16\nsg7\n(g1\n(S'f4'\np17\nI0\nI1\ntp18\nRp19\n(I4\nS'<'\np20\nNNNI-1\nI-1\nI0\nNtp21\nbI4\ntp22\nsg8\n(g1\n(S'S10'\np23\nI0\nI1\ntp24\nRp25\n(I4\nS'|'\np26\nNNNI10\nI1\nI0\nNtp27\nbI8\ntp28\nsI18\nI1\nI0\nNtp29\nbS'\\x01\\x00\\x00\\x00\\x00\\x00\\x00@Hello\\x00\\x00\\x00\\x00\\x00'\np30\ntp31\nRp32\n." > ?>>> type(newx0) > <type 'numpy.void'> > ?>>> newx0.dtype > dtype([('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) > ?>>> x[0].dtype > dtype([('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) > ?>>> np.version.version > '1.4.0' > > This also seems to be the case for recarrays with their numpy.record entries. > I've tried using pickle and cPickle, with both the oldest and the newest > pickling protocol. This is in numpy 1.4 on win32 and win64, and numpy 1.3 on > 32-bit linux. I'm using Python 2.6.4 in all cases. I also just tried it on > Python 2.5.2 with numpy 1.0.4. All have the same result, although the garbage > data is different each time. > > I suppose numpy.void is as it suggests, a pointer to a specific place in memory. No, it isn't. It's just a base dtype for all of the ad-hoc dtypes that are created, for example, for record arrays. > I'm just surprised that this pointer isn't dereferenced before pickling Or is > it? I'm not skilled in interpreting the strings returned by pickle.dumps(). I do > see the word "Hello" in the string, so maybe the problem is during unpickling. Use pickletools.dis() on the string. It helps to understand what is going on. 
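For instance, a minimal sketch of such an inspection, using a plain tuple in place of the numpy.void scalar and assuming only the standard-library pickletools module:

```python
import pickle
import pickletools

# Pickle a tuple analogous to one record, (1, 2.0, 'Hello'), and
# disassemble the stream: pickletools.dis() prints one opcode per line
# with its byte offset and argument, which makes it easy to spot where
# the raw data bytes live in the dump.
s = pickle.dumps((1, 2.0, 'Hello'), protocol=0)
pickletools.dis(s)
```

The record values are the ones from the example earlier in the thread; any picklable object can be inspected this way.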
The data string is definitely correct: In [25]: t = '\x01\x00\x00\x00\x00\x00\x00@Hello\x00\x00\x00\x00\x00' In [29]: np.fromstring(t, x.dtype) Out[29]: array([(1, 2.0, 'Hello')], dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '|S10')]) > I've tried doing a copy, and even a deepcopy of a structured array numpy.void > entry, with no luck. > > Is this a known limitation? Nope. New bug! Thanks! > Any suggestions on how I might get around this? > Pool.map() pickles each numpy.void entry as it iterates over the structured > array, before sending it to the next available process. My structured array only > needs to be read from by my multiple processes (one per core), so perhaps > there's a better way than sending copies of entries. Multithreading (using an > implementation of a ThreadPool I found somewhere) doesn't work because I'm > calling scipy.optimize.leastsq, which doesn't seem to release the GIL. Pickling of complete arrays works. A quick workaround would be to send rank-0 scalars: Pool.map(map(np.asarray, x)) Or just tuples: Pool.map(map(tuple, x)) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Fri Feb 26 18:26:00 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 27 Feb 2010 01:26:00 +0200 Subject: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing In-Reply-To: References: Message-ID: <1267226760.9131.25.camel@idol> pe, 2010-02-26 kello 14:41 -0800, Martin Spacek kirjoitti: [clip: pickling/unpickling numpy.void scalar objects] > I suppose numpy.void is as it suggests, a pointer to a specific place in memory. > I'm just surprised that this pointer isn't dereferenced before pickling Or is > it? I'm not skilled in interpreting the strings returned by pickle.dumps(). I do > see the word "Hello" in the string, so maybe the problem is during unpickling. 
No, the unpickled void scalar will own its data. The problem is that either the data is not saved correctly (unlikely), or it is unpickled incorrectly. The relevant code path to look at is multiarraymodule:array_scalar -> scalarapi.c:PyArray_Scalar. Needs some cgdb'ing to find out what's going on there. Please file a bug report on this. > Is this a known limitation? Any suggestions on how I might get around this? > Pool.map() pickles each numpy.void entry as it iterates over the structured > array, before sending it to the next available process. Use 1-element arrays instead of void scalars. Those will pickle correctly. Perhaps reshaping your array to (N, 1) will be enough. -- Pauli Virtanen From cournape at gmail.com Fri Feb 26 19:17:59 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 27 Feb 2010 09:17:59 +0900 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> Message-ID: <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> On Sat, Feb 27, 2010 at 2:44 AM, wrote: > > I think I mixed up some things then, > scipy 0.7.1 cython files should be regenerated with the latest cython > release so that it doesn't check the sizeof anymore. > Then, a scipy 0.7.1 build against numpy 1.3 would also work without > recompiling against numpy 1.4.1 > > Is this correct? Yes, this is correct. It is impossible to create a numpy 1.4.x which is compatible with the *existing* scipy binary, because several structures have been growing (not only because of datetime). 
The cython changes have already been incorporated in scipy 0.7.x branch, so in the end, what should be done is a new 0.7.2 scipy binary built against numpy 1.3.0, which will then be compatible with both numpy 1.3 and 1.4 binaries, cheers, David From rpmuller at gmail.com Fri Feb 26 19:43:04 2010 From: rpmuller at gmail.com (Rick Muller) Date: Fri, 26 Feb 2010 17:43:04 -0700 Subject: [Numpy-discussion] Can you help me find a dumb matrix multiply mistake In-Reply-To: <1cd32cbb1002261318w7d45a735t8cc97131297bdef4@mail.gmail.com> References: <1cd32cbb1002261318w7d45a735t8cc97131297bdef4@mail.gmail.com> Message-ID: Argh! I mixed up where the .H went!! Thanks for pointing out the mistake. Thought it was something mindless. On Fri, Feb 26, 2010 at 2:18 PM, wrote: > On Fri, Feb 26, 2010 at 4:01 PM, Rick Muller wrote: > > I'm making a mistake here, one that I suspect is a dumb error. I'm not as > > familiar with the math of complex hermetian matrices as I am with real > > symmetry matrices. > > > > I want to diagonalize the matrix: > > > > Y = matrix([[0,-1j],[1j,0]]) # this is the Y Pauli spin matrix > > > > Ey,Uy = eigh(Y) > > > > When I try to do: > > > > print Uy.H * diag(Ey) * Uy > > > > rather than getting Y back, I get: > > > > [[ 0.+0.j -1.+0.j] > > [-1.+0.j 0.+0.j]] > > to get Y back: > > >>> Uy * np.diag(Ey) * Uy.H > matrix([[ 0.+0.j, 0.-1.j], > [ 0.+1.j, 0.+0.j]]) > > >>> Uy * np.diag(Ey) * Uy.H - Y > matrix([[ 0. +0.00000000e+00j, 0. +2.22044605e-16j], > [ 0. -2.22044605e-16j, 0. +0.00000000e+00j]]) > > Josef > > > > > I also tried > > > > dot(Uy.H,dot(diag(Ey),Uy)) > > > > to make sure this isn't a matrix/array problem with the same result. Can > > someone spot what I'm doing wrong? 
> > > > > > > > -- > > Rick Muller > > rpmuller at gmail.com > > 505-750-7557 > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Rick Muller rpmuller at gmail.com 505-750-7557 -------------- next part -------------- An HTML attachment was scrubbed... URL: From doutriaux1 at llnl.gov Fri Feb 26 19:43:23 2010 From: doutriaux1 at llnl.gov (=?utf-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Fri, 26 Feb 2010 16:43:23 -0800 Subject: [Numpy-discussion] Snow Leopard Message-ID: <54208F16-C60C-4711-BB97-B9D794079300@llnl.gov> Hi there, I'm having a lot of trouble on Snow Leopard. I'm trying to build a 32bit only version of our system.... Turns out I had to use python 2.7a3 in order to get Python --universalsdk to work.... But now numpy fails to build as well... 
First because of the: >> _old_init_posix = distutils.sysconfig._init_posix error I got it to use sysconfig instead But now when running python setup.py build install I get: creating build/src.macosx-10.3-fat-2.7 creating build/src.macosx-10.3-fat-2.7/numpy creating build/src.macosx-10.3-fat-2.7/numpy/distutils building library "npymath" sources customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Could not locate executable gfortran customize G95FCompiler Could not locate executable g95 don't know how to compile Fortran code on platform 'posix' C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch i386 -m32 compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/lgm/cdat/trunk/include/python2.7 -c' gcc: _configtest.c gcc _configtest.o -o _configtest ld: warning: in _configtest.o, file is not of required architecture Undefined symbols: "_main", referenced from: __start in crt1.o ld: symbol(s) not found collect2: ld returned 1 exit status ld: warning: in _configtest.o, file is not of required architecture Undefined symbols: "_main", referenced from: __start in crt1.o ld: symbol(s) not found collect2: ld returned 1 exit status failure. 
removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 210, in <module> setup_package() File "setup.py", line 203, in setup_package configuration=configuration ) File "/svn/cdat/trunk/build/numpy/numpy/distutils/core.py", line 186, in setup return old_setup(**new_attr) File "/lgm/cdat/trunk/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File "/lgm/cdat/trunk/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/lgm/cdat/trunk/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/svn/cdat/trunk/build/numpy/numpy/distutils/command/build.py", line 37, in run old_build.run(self) File "/lgm/cdat/trunk/lib/python2.7/distutils/command/build.py", line 127, in run self.run_command(cmd_name) File "/lgm/cdat/trunk/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/lgm/cdat/trunk/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/svn/cdat/trunk/build/numpy/numpy/distutils/command/build_src.py", line 152, in run self.build_sources() File "/svn/cdat/trunk/build/numpy/numpy/distutils/command/build_src.py", line 163, in build_sources self.build_library_sources(*libname_info) File "/svn/cdat/trunk/build/numpy/numpy/distutils/command/build_src.py", line 298, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "/svn/cdat/trunk/build/numpy/numpy/distutils/command/build_src.py", line 385, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 670, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program Note: it's saying something about mac 10.3.... Any idea on how to build a pure 32bit numpy on snow leopard? Thx, C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dwf at cs.toronto.edu Fri Feb 26 19:59:15 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 26 Feb 2010 19:59:15 -0500 Subject: [Numpy-discussion] Snow Leopard In-Reply-To: <54208F16-C60C-4711-BB97-B9D794079300@llnl.gov> References: <54208F16-C60C-4711-BB97-B9D794079300@llnl.gov> Message-ID: <7E340A39-8F92-44F4-BE54-A6A49150DE47@cs.toronto.edu> On 26-Feb-10, at 7:43 PM, Charles ???? Doutriaux wrote: > Any idea on how to build a pure 32bit numpy on snow leopard? If I'm not mistaken you'll probably want to build against the Python.org Python rather than the wacky version that comes installed on the system. The Python.org installer is a 32-bit Python that installs itself in /Library. David From numpy at mspacek.mm.st Fri Feb 26 20:37:08 2010 From: numpy at mspacek.mm.st (Martin Spacek) Date: Fri, 26 Feb 2010 17:37:08 -0800 Subject: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing In-Reply-To: <3d375d731002261502l17d31301uf7607c8c8e790143@mail.gmail.com> References: <3d375d731002261502l17d31301uf7607c8c8e790143@mail.gmail.com> Message-ID: On 2010-02-26 15:02, Robert Kern wrote: >> Is this a known limitation? > > Nope. New bug! Thanks! Good. I'm not crazy after all :) > Pickling of complete arrays works. A quick workaround would be to send > rank-0 scalars: > > Pool.map(map(np.asarray, x)) > > Or just tuples: > > Pool.map(map(tuple, x)) Excellent! The first method works as a drop-in replacement for me. Seems better than the second, because it conserves named field access. The only slight difference is this: >>> a = map(np.asarray, x) >>> a[0]['f0'] array(1) >>> x[0]['f0'] 1 ...but that doesn't seem to affect my code. Thanks a bunch for the quick solution! 
Martin From numpy at mspacek.mm.st Fri Feb 26 20:40:17 2010 From: numpy at mspacek.mm.st (Martin Spacek) Date: Fri, 26 Feb 2010 17:40:17 -0800 Subject: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing In-Reply-To: <1267226760.9131.25.camel@idol> References: <1267226760.9131.25.camel@idol> Message-ID: On 2010-02-26 15:26, Pauli Virtanen wrote: > No, the unpickled void scalar will own its data. The problem is that > either the data is not saved correctly (unlikely), or it is unpickled > incorrectly. > > The relevant code path to look at is multiarraymodule:array_scalar -> > scalarapi.c:PyArray_Scalar. Needs some cgdb'ing to find out what's going > on there. > > Please file a bug report on this. OK, Done. See http://projects.scipy.org/numpy/ticket/1415 Martin From bsouthey at gmail.com Fri Feb 26 21:17:17 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 26 Feb 2010 20:17:17 -0600 Subject: [Numpy-discussion] Snow Leopard In-Reply-To: <7E340A39-8F92-44F4-BE54-A6A49150DE47@cs.toronto.edu> References: <54208F16-C60C-4711-BB97-B9D794079300@llnl.gov> <7E340A39-8F92-44F4-BE54-A6A49150DE47@cs.toronto.edu> Message-ID: On Fri, Feb 26, 2010 at 6:59 PM, David Warde-Farley wrote: > On 26-Feb-10, at 7:43 PM, Charles ???? Doutriaux wrote: > >> Any idea on how to build a pure 32bit numpy on snow leopard? > > If I'm not mistaken you'll probably want to build against the > Python.org Python rather than the wacky version that comes installed > on the system. The Python.org installer is a 32-bit Python that > installs itself in /Library. 
> > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > If you remain with 2.7 then you should also view the thread started 3 days ago: 'distutils problem with NumPy-1.4 & Py-2.7a3 (Snow Leopard)' http://mail.scipy.org/pipermail/numpy-discussion/2010-February/048882.html In particular: Ticket 1355 - that should be resolved with r8260 (thanks Stefan): http://projects.scipy.org/numpy/ticket/1355 Ticket 1409 http://projects.scipy.org/numpy/ticket/1409 Ticket 1345: http://projects.scipy.org/numpy/ticket/1345 Bruce From ralf.gommers at googlemail.com Fri Feb 26 21:59:52 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 27 Feb 2010 10:59:52 +0800 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> Message-ID: On Sat, Feb 27, 2010 at 8:17 AM, David Cournapeau wrote: > On Sat, Feb 27, 2010 at 2:44 AM, wrote: > > > > > I think I mixed up some things then, > > scipy 0.7.1 cython files should be regenerated with the latest cython > > release so that it doesn't check the sizeof anymore. > > Then, a scipy 0.7.1 build against numpy 1.3 would also work without > > recompiling against numpy 1.4.1 > > > > Is this correct? > > Yes, this is correct. It is impossible to create a numpy 1.4.x which > is compatible with the *existing* scipy binary, because several > structures have been growing (not only because of datetime). 
> > The cython changes have already been incorporated in scipy 0.7.x > branch, so in the end, what should be done is a new 0.7.2 scipy binary > built against numpy 1.3.0, which will then be compatible with both > numpy 1.3 and 1.4 binaries, > > Hmm, I remember you saying this a while ago and I'm sure you're right. But it got lost in the noise, and like Charles I thought the aim was to produce a 1.4.x binary compatible with what's out there now. This is also what you said on Wednesday: So here is how I see things in the near future for release: - compile a simple binary installer for mac os x and windows (no need for doc or multiple archs) from 1.4.x - test this with the scipy binary out there (running the full test suites), ideally other well known packages as well (matplotlib, pytables, etc...). So now this seems to be impossible. I'm not so sure then we're not confusing even more confusing with yet another incompatible binary... Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From peck at us.ibm.com Fri Feb 26 22:49:17 2010 From: peck at us.ibm.com (Jon K Peck) Date: Fri, 26 Feb 2010 20:49:17 -0700 Subject: [Numpy-discussion] AUTO: Jon K Peck is out of the office (returning 03/06/2010) Message-ID: I am out of the office until 03/06/2010. I will be traveling through Saturday, March 6 and will be delayed responding to your email. I will have periodic email access, but I will be many time zones away from my usual location. Note: This is an automated response to your message "NumPy-Discussion Digest, Vol 41, Issue 118" sent on 2/26/10 17:43:30. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Sat Feb 27 01:33:11 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 27 Feb 2010 15:33:11 +0900 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> Message-ID: <5b8d13221002262233j636273f8ic78683d97085b837@mail.gmail.com> On Sat, Feb 27, 2010 at 11:59 AM, Ralf Gommers wrote: > > > So here is how I see things in the near future for release: > - compile a simple binary installer for mac os x and windows (no need > for doc or multiple archs) from 1.4.x > - test this with the scipy binary out there (running the full test > suites), ideally other well known packages as well (matplotlib, > pytables, etc...). > > > So now this seems to be impossible. I'm not so sure then we're not confusing > even more confusing with yet another incompatible binary... Sorry, I should have been clearer in the above quoted list. There were two issues with numpy 1.4.0, one caused by datetime, and one caused by other changes to growing structures. The second one is ok for most cases, but cython < 0.12.1 was too strict in checking some structure size, meaning any extension built from cython < 0.12.1 will refuse to import. There is nothing we can do for this one. So the "plan" I had in mind was: - release fixed numpy 1.4.1 - release a new scipy 0.7.2 built against numpy 1.3.0, which would be compatible with both existing 1.3.0 and the new 1.4.1 Is this clearer ? 
David From ralf.gommers at googlemail.com Sat Feb 27 01:43:27 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 27 Feb 2010 14:43:27 +0800 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: <5b8d13221002262233j636273f8ic78683d97085b837@mail.gmail.com> References: <1cd32cbb1002260909y53b5c30asb3a51c225a02e971@mail.gmail.com> <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> <5b8d13221002262233j636273f8ic78683d97085b837@mail.gmail.com> Message-ID: On Sat, Feb 27, 2010 at 2:33 PM, David Cournapeau wrote: > > Sorry, I should have been clearer in the above quoted list. There were > two issues with numpy 1.4.0, one caused by datetime, and one caused by > other changes to growing structures. The second one is ok for most > cases, but cython < 0.12.1 was too strict in checking some structure > size, meaning any extension built from cython < 0.12.1 will refuse to > import. There is nothing we can do for this one. > > So the "plan" I had in mind was: > - release fixed numpy 1.4.1 > - release a new scipy 0.7.2 built against numpy 1.3.0, which would be > compatible with both existing 1.3.0 and the new 1.4.1 > > Is this clearer ? > > Yes that is clear. Would it make sense to first release scipy 0.7.2 though? Then numpy 1.4.1 can be tested against it and we can be sure it works. The other way around it's not possible to test. Or can you tell from the test output I posted that it should be okay? Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Sat Feb 27 02:21:07 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 27 Feb 2010 16:21:07 +0900 Subject: [Numpy-discussion] testing binary installer for OS X In-Reply-To: References: <1267204798.2728.497.camel@talisman> <1cd32cbb1002260926n683a35bve120bff5acca587f@mail.gmail.com> <1267205648.2728.500.camel@talisman> <1cd32cbb1002260944x2206c2e3vd2497a9c7fc4ec1c@mail.gmail.com> <5b8d13221002261617t68224835lb0b121916f2466e@mail.gmail.com> <5b8d13221002262233j636273f8ic78683d97085b837@mail.gmail.com> Message-ID: <5b8d13221002262321p1ecc7e79peed2492e0f4242a9@mail.gmail.com> On Sat, Feb 27, 2010 at 3:43 PM, Ralf Gommers wrote: > > Yes that is clear. Would it make sense to first release scipy 0.7.2 though? > Then numpy 1.4.1 can be tested against it and we can be sure it works. The > other way around it's not possible to test. Yes it is, you just have to build scipy against numpy 1.3.0. David From sebastian.walter at gmail.com Sat Feb 27 09:04:41 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sat, 27 Feb 2010 15:04:41 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions Message-ID: Announcement: ----------------------- I have started to implement vectorized univariate truncated Taylor polynomial operations (add,sub,mul,div,sin,exp,...) in ANSI-C. The interface to python is implemented by using numpy.ndarray's ctypes functionality. Unit tests are implement using nose. It is BSD licencsed and hosted on github: http://github.com/b45ch1/taylorpoly Rationale: ------------------------ A truncated Taylor polynomial is of the form [x]_d = \sum_{d=0}^{D-1} x_d t^d where x_d a real number and t an external parameter. Truncated Taylor polynomials should not be confused with polynomials for interpolation, as it is implemented in numpy.polynomial. 
The difference is that Taylor polynomials, as implemented in `taylorpoly`, are polynomials in an "external parameter". I.e. they are *never* evaluated for some t. One should think of this algebraic class as an extension of the real numbers ( e.g. similarly to the complex numbers). At the moment, I am not aware of any such algorithms available in Python. I believe algorithms for this algebraic structure are a very important building block for more sophisticated algorithms, e.g. for Algorithmic Differentiation (AD) tools and Taylor polynomial integrators. More Detailed Description: --------------------------------------- The operations add,mul,etc. are extended to compute on truncated Taylor polynomials, e.g. for the multiplication [z]_{D+1} &=_{D+1}& [x]_{D+1} y_{D+1} \\ \sum_{d=0}^D z_d t^d & =_{D+1} & \left( \sum_{d=0}^D x_d t^d \right) \left( \sum_{c=0}^D y_c t^c \right) where D+1 is the degree of the polynomial. The coefficients z_d are only evaluated up to degree D+1, i.e. higher orders are truncated. Request for Opinions: ------------------------------- Before I continue implementing more algorithms, it would be nice to have some feedback. There are things I'm still not sure about: 1) data structure for vectorized operations: For the non-vectorized algorithms, it is quite clear that the coefficients of a Taylor polynomial \sum_{d=0}^{D-1} x_d t^d are stored in an 1D array [x_0,x_1,...,x_{D-1}]: In the vectorized version, P different Taylor polynomials are computed at once, but the base coefficient x_0 is the same for all of them. At the moment I have implemented the data structure: [x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}]. Another possibility would be: [x] = [x_0, x_{1,1}, ..., x_{1,P}, x_{2, 1}, ..., x_{D-1, P}] Any thoughts about which to choose? The first version is much easier to implement. The second is possibly easier to vectorize by a compiler. 
2) implementation of binary operators in Python: I have defined a class UTPS (univariate Taylor polynomial over Scalar) that basically only stores the above array [x]_{D,P} in the attribute `UTPS.data`, and the algorithms add,sub,mul,div take instances of the class UTPS. I.e. the functions are not implemented as member functions. I plan to add member functions later for convenience that call those functions. Is this a good idea? I think it would be good to be consistent with numpy. 3) coding style of the algorithms in C: I'm making heavy use of pointer arithmetic, rendering the algorithms a little hard to write and to understand. The alternative would be array indexing with address computations. Does anyone know how good the compilers are nowadays at optimizing away the address computations? I'd be very happy to get some feedback. regards, Sebastian From friedrichromstedt at gmail.com Sat Feb 27 16:02:03 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 27 Feb 2010 22:02:03 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: To the core developers (of numpy.polynomial e.g.): Skip the mess and read the last paragraph. The other things I will post back to the list, where they belong to. I just didn't want to have off-topic discussion there. > I wanted to stress that one can do arithmetic on Taylor polynomials in > a very similar way as with complex numbers. I do not understand completely. What is the analogy with complex numbers which one cannot draw to real numbers? Or, more precise: What /is/ actually the analogy besides that there are operations? With i, i * i = -1, thus one has not to discard terms, contrary to the polynomial product as you defined it, no? > I guess there are also situations when you have polynomials z = > \sum_{d=0}^{D-1} z_d t^d, where z_d are complex numbers. 
>> I like it more to implement operators as overloads of the __mul__ > > I thought about something like > def __mul__(self, other): > return mul(self,other) Yes, I know! And in fact, it may symmetrise the thing a bit. >> etc., but this is a matter of taste. >> In fact, you /have/ to provide >> external binary operators, because I guess you also want to have >> numpy.ndarrays as left operand. In that case, the undarray will have >> higher precedence, and will treat your data structure as a scalar, >> applying it to all the undarray's elements. > > well, actually it should treat it as a scalar since the Taylor > polynomial is something like a real or complex number. Maybe I misunderstood you completely, but didn't you want to implement arrays of polynomials using numpy? So I guess you want to apply a vector from numpy pairwise to the polynomials in the P-object? > [z]_{D+1} &=_{D+1}& [x]_{D+1} y_{D+1} \\ > \sum_{d=0}^D z_d t^d & =_{D+1} & \left( \sum_{d=0}^D x_d t^d \right) > \left( \sum_{c=0}^D y_c t^c \right) Did you forget the [] around the y in line 1, or is this intentional? Actually, I had to compile the things before I could imagine what you could mean. Why don't you use multidimensional arrays? Has it reasons in the C accessibility? Now, as I see it, you implement your strides manually. With a multidimensional array, you could even create arrays of shape (10, 12) of D-polynomials by storing an ndarray of shape (10, 12, D) or (D, 10, 12). Just because of curiosity: Why do you set X_{0, 1} = ... X_{0, P} ? Furthermore, I believe there is some elegant way formulating the product for a single polynomial. Let me think a bit ... For one entry of [z]_E, you want to sum up all pairs: x_{0} y{E} + ... + x{D} y{E - D} , (1) right? In a matrix, containing all mutual products x_{i} y{j}, these are diagonals. Rotating the matrix by 45° counterclockwise, they are sums in columns. Hmmm... how to rotate a matrix by 45°? 
Another fresh look: (1) looks pretty much like the discrete convolution of x_{i} and y_{i} at argument E. Going into Fourier space, this becomes a product. x_i and y_i have finite support, because one sets x_{i} and y_{i} = 0 outside 0 <= i <= D. The support of (1) in the running index is at most [0, D]. The support in E is at most [0, 2 D]. Thus you don't make mistakes by using DFT, when you pad x and y by D zeros on the right. My strategy is: Push the padded versions of x and y into Fourier space, calculate their product, transform back, cut away the D last entries, and you're done! I guess you mainly want to parallelise the calculation instead of improving the singleton calculation itself? So then, even non-FFT would incorporate 2 D explicit python sums for Fourier transform, and again 2 D sums for transforming back, is this an improvement over the method you are using currently? And you could code it in pure Python :-) I will investigate this tonight. I'm curious. Irrespective of that I have also other fish to fry ... ;-) iirc, in fact DFT is used for efficient polynomial multiplication. Maybe Chuck or another core developer of numpy can tell whether numpy.polynomial does it actually that way? Friedrich From friedrichromstedt at gmail.com Sat Feb 27 16:14:39 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 27 Feb 2010 22:14:39 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: 2010/2/27 Sebastian Walter : > IMO this kind of discussion is not offtopic since it is directly > related to the original question. Ok, but I say it's not my responsibility now if the numpy-discussion namespace is polluted now. >> 2010/2/27 Sebastian Walter : >>> On Sat, Feb 27, 2010 at 3:59 PM, Friedrich Romstedt >>> wrote: >>>> I'm working currently on upy, uncertainty-python, dealing with real >>>> numbers. 
github.com/friedrichromstedt/upy . I want in mid-term extend >>>> it to complex numbers, where the concepts of "uncertainty" are >>>> necessarily more elaborate. Do you think the concept of truncated >>>> Taylor polynomials could help in understanding or even generalising >>>> the uncertainties? >>> I'm not sure what you mean by uncertainties. Could you elaborate? >>> For all I know you can use Taylor series for nonlinear error propagation. >> >> I mean Gaussian error propagation. I currently am not intending to >> cover regimes where one has to consider "higher order" effects. If it >> is not very easy. On the contrary, I want to find a way to describe >> complex numbers which consist of a deterministic value and as many >> superposed "complex Gaussian variables" as needed. > > what is a Gaussian variable? a formula would help ;) Oh, I wasn't precise: en.wikipedia.org/wiki/Gaussian_random_variable (aka normal distribution) >>>> Are complex numbers and truncated Taylor >>>> polynomials in some way isomorphic or something similar? >>> >>> Taylor polynomials are not a field but just a commutative ring (there >>> are zero divisors) so I guess it's not possible to find an >>> isomorphism. >> >> Ok. >> >> The other things I will post back to the list, where they belong to. >> I just didn't want to have off-topic discussion there. Friedrich From friedrichromstedt at gmail.com Sat Feb 27 17:11:32 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 27 Feb 2010 23:11:32 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: Ok, it took me about one hour, but here they are: Fourier-accelerated polynomials. > python Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import gdft_polynomial >>> p1 = gdft_polynomial.Polynomial([1]) >>> p2 = gdft_polynomial.Polynomial([2]) >>> p1 * p2 >>> print p1 * p2 [ 2.+0.j] >>> p1 = gdft_polynomial.Polynomial([1, 1]) >>> p2 = gdft_polynomial.Polynomial([1]) >>> print p1 * p2 [ 1. +6.12303177e-17j 1. -6.12303177e-17j] >>> p2 = gdft_polynomial.Polynomial([1, 2]) >>> print p1 * p2 [ 1. +8.51170986e-16j 3. +3.70074342e-17j 2. -4.44089210e-16j] >>> p1 = gdft_polynomial.Polynomial([1, 2, 3, 4, 3, 2, 1]) >>> p2 = gdft_polynomial.Polynomial([4, 3, 2, 1, 2, 3, 4]) >>> print (p1 * p2).coefficients.real [ 4. 11. 20. 30. 34. 35. 36. 35. 34. 30. 20. 11. 4.] >>> github.com/friedrichromstedt/gdft_polynomials It's open for bug hunting :-) Haven't checked the last result. I used my own gdft module. Maybe one could incorporate numpy.fft easily. But that's your job, Sebastian, isn't it? Feel free to push to the repo, and don't forget to add your name to the copyright notice, hope you are happy with MIT. Anyway, I don't know whether numpy.fft supports transforming only one coordinate and using the others for "parallelisation"? Friedrich From sebastian.walter at gmail.com Sat Feb 27 17:39:03 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sat, 27 Feb 2010 23:39:03 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: On Sat, Feb 27, 2010 at 10:02 PM, Friedrich Romstedt wrote: > To the core developers (of numpy.polynomial e.g.): Skip the mess and > read the last paragraph. > > The other things I will post back to the list, where they belong to. > I just didn't want to have off-topic discussion there. > >> I wanted to stress that one can do arithmetic on Taylor polynomials in >> a very similar was as with complex numbers. > > I do not understand completely. ?What is the analogy with complex > numbers which one cannot draw to real numbers? 
?Or, more precise: What > /is/ actually the analogy besides that there are operations? ?With i, > i * i = -1, thus one has not to discard terms, contrary to the > polynomial product as you defined it, no? I'm sorry this comment turns out to be confusing. It has apparently quite the contrary effect of what I wanted to achieve: Since there is already a polynomial module in numpy I wanted to highlight their difference and stress that they are used to do arithmetic, e.g. compute f([x],[y]) = [x] * (sin([x])**2 + [y]) in Taylor arithmetic. > >> I guess there are also situations when you have polynomials ? z = >> \sum_{d=0}^{D-1} z_d t^d, where z_d are complex numbers. > >>> I like it more to implement operators as overloads of the __mul__ >> >> I thought about something like >> def __mul__(self, other): >> ? ?return mul(self,other) > > Yes, I know! ?And in fact, it may symmetrise the thing a bit. > >>> etc., but this is a matter of taste. >>> ?In fact, you /have/ to provide >>> external binary operators, because I guess you also want to have >>> numpy.ndarrays as left operand. ?In that case, the undarray will have >>> higher precedence, and will treat your data structure as a scalar, >>> applying it to all the undarray's elements. >> >> well, actually it should treat it as a scalar since the Taylor >> polynomial is something like a real or complex number. > > Maybe I misunderstood you completely, but didn't you want to implement > arrays of polynomials using numpy? ?So I guess you want to apply a > vector from numpy pairwise to the polynomials in the P-object? no, the vectorization is something different. It's purpose becomes only clear when applied in Algorithmic Differentiation. E.g. 
if you have a function f: R^N -> R x -> y=f(x) where x = [x1,...,xN] and you want to compute the gradient g(x) of f(x), then you can compute df(x)/dxn by propagating the following array of Taylor polynomials: x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., UTPS([xN_0,0]), dtype=object) y = f(x) if you want to have the complete gradient, you will have to repeat N times. Each time for the same zeroth coefficients [x1,...,xN]. Using the vectorized version, you would do only one propagation x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0, 0,...,1,...0]), ..., UTPS([xN_0,0,....,1]), dtype=object) y = f(x) i.e. you save the overhead of calling the same function N times. > >> [z]_{D+1} &=_{D+1}& [x]_{D+1} y_{D+1} \\ >> \sum_{d=0}^D z_d t^d & =_{D+1} & \left( \sum_{d=0}^D x_d t^d \right) >> \left( \sum_{c=0}^D y_c t^c \right) > > Did you forget the [] around the y in line 1, or is this intentional? > Actually, I had to compile the things before I could imagine what you > could mean. Yes, I'm sorry, this is a typo. > > Why don't you use multidimensional arrays? Has it reasons in the C > accessibility? Now, as I see it, you implement your strides manually. > With a multidimensional array, you could even create arrays of shape > (10, 12) of D-polynomials by storing an ndarray of shape (10, 12, D) > or (D, 10, 12). the goal is to have several Taylor polynomials evaluated in the same base point, e.g. [x_0, x_{11}, x_{21}, x_{31}] [x_0, x_{12}, x_{22}, x_{32}] [x_0, x_{13}, x_{23}, x_{33}] i.e. P=3, D=3 One could use a (P,D) array. However, one would do unnecessary computations since x_0 is the same for all P polynomials. I.e. one implements the data structure as [x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}]. This results in a non-const stride access. > > Just because of curiosity: Why do you set X_{0, 1} = ... X_{0, P} ? > > Furthermore, I believe there is some elegant way formulating the > product for a single polynomial. 
Let me think a bit ... > > For one entry of [z]_E, you want to sum up all pairs: >    x_{0} y{E} + ... + x{D} y{E - D} ,     (1) > right? In a matrix, containing all mutual products x_{i} y{j}, these > are diagonals. Rotating the matrix by 45° counterclockwise, they are > sums in columns. Hmmm... how to rotate a matrix by 45°? > > Another fresh look: (1) looks pretty much like the discrete > convolution of x_{i} and y_{i} at argument E. Going into Fourier > space, this becomes a product. x_i and y_i have finite support, > because one sets x_{i} and y_{i} = 0 outside 0 <= i <= D. The support > of (1) in the running index is at most [0, D]. The support in E is at > most [0, 2 D]. Thus you don't make mistakes by using DFT, when you > pad x and y by D zeros on the right. My strategy is: Push the padded > versions of x and y into Fourier space, calculate their product, > transform back, cut away the D last entries, and you're done! > > I guess you mainly want to parallelise the calculation instead of > improving the singleton calculation itself? So then, even non-FFT > would incorporate 2 D explicit python sums for Fourier transform, and > again 2 D sums for transforming back, is this an improvement over the > method you are using currently? And you could code it in pure Python > :-) > > I will investigate this tonight. I'm curious. Irrespective of that I > have also other fish to fry ... ;-) > > iirc, in fact DFT is used for efficient polynomial multiplication. > Maybe Chuck or another core developer of numpy can tell whether > numpy.polynomial does it actually that way? I believe the degree D is typically much too small (i.e. D <= 4) to justify the additional overhead of using FFT, though there may be cases when really high order polynomials are used. 
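Sebastian's gradient propagation can be made concrete with a stripped-down, first-order, non-vectorized analogue of UTPS, i.e. a "dual number". The class below is a sketch for illustration only; the real UTPS class and its API may differ:

```python
import math

class Dual:
    """Truncated Taylor polynomial x_0 + x_1 t with D = 2 coefficients."""
    def __init__(self, x0, x1=0.0):
        self.x0, self.x1 = x0, x1
    def __add__(self, other):
        return Dual(self.x0 + other.x0, self.x1 + other.x1)
    def __mul__(self, other):
        # (x0 + x1 t)(y0 + y1 t) =_2 x0 y0 + (x0 y1 + x1 y0) t
        return Dual(self.x0 * other.x0,
                    self.x0 * other.x1 + self.x1 * other.x0)
    def sin(self):
        # first-order coefficient of sin(x(t)) is cos(x_0) * x_1
        return Dual(math.sin(self.x0), math.cos(self.x0) * self.x1)

def f(x, y):
    return x * (x.sin() * x.sin() + y)

# df/dx at (3, 5): seed x's t-coefficient with 1, y's with 0
x, y = Dual(3.0, 1.0), Dual(5.0, 0.0)
print(f(x, y).x1)   # equals the analytic df/dx = sin(3)^2 + 5 + 3*sin(6)
```

Propagating N such seeds (or, in the vectorized layout, one polynomial with P = N directions) recovers the whole gradient without any analytical derivative.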
> > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sebastian.walter at gmail.com Sat Feb 27 17:54:59 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sat, 27 Feb 2010 23:54:59 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: On Sat, Feb 27, 2010 at 11:11 PM, Friedrich Romstedt wrote: > Ok, it took me about one hour, but here they are: Fourier-accelerated > polynomials. that's the spirit! ;) > >> python > Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. >>>> import gdft_polynomial >>>> p1 = gdft_polynomial.Polynomial([1]) >>>> p2 = gdft_polynomial.Polynomial([2]) >>>> p1 * p2 > >>>> print p1 * p2 > [ 2.+0.j] >>>> p1 = gdft_polynomial.Polynomial([1, 1]) >>>> p2 = gdft_polynomial.Polynomial([1]) >>>> print p1 * p2 > [ 1. +6.12303177e-17j ?1. -6.12303177e-17j] >>>> p2 = gdft_polynomial.Polynomial([1, 2]) >>>> print p1 * p2 > [ 1. +8.51170986e-16j ?3. +3.70074342e-17j ?2. -4.44089210e-16j] >>>> p1 = gdft_polynomial.Polynomial([1, 2, 3, 4, 3, 2, 1]) >>>> p2 = gdft_polynomial.Polynomial([4, 3, 2, 1, 2, 3, 4]) >>>> print (p1 * p2).coefficients.real > [ ?4. ?11. ?20. ?30. ?34. ?35. ?36. ?35. ?34. ?30. ?20. ?11. ? 4.] >>>> > > github.com/friedrichromstedt/gdft_polynomials > > It's open for bug hunting :-) > > Haven't checked the last result. looks correct > > I used my own gdft module. ?Maybe one could incorporate numpy.fft > easily. ?But that's your job, Sebastian, isn't it? ?Feel free to push > to the repo, and don't forget to add your name to the copyright > notice, hope you are happy with MIT. i'll have a look at it. 
> > Anyway, I don't know whether numpy.fft supports transforming only one > coordinate and using the others for "parallelisation"? > > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From friedrichromstedt at gmail.com Sat Feb 27 18:30:43 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 00:30:43 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: 2010/2/27 Sebastian Walter : > I'm sorry this comment turns out to be confusing. Maybe it's not important. > It has apparently quite the contrary effect of what I wanted to achieve: > Since there is already a polynomial module ?in numpy I wanted to > highlight their difference > and stress that they are used to do arithmetic, e.g. compute > > f([x],[y]) = [x] * (sin([x])**2 + [y]) > > in Taylor arithmetic. That's cool! You didn't mention that. Now I step by step find out what your module (package?) is for. You are a mathematician? Many physicists complain that mathematicians cannot make their point ;-) I think I can use that to make my upy accept arbitrary functions, but how do you apply sin() to a TTP? One more question: You said, t is an "external paramter". I, and maybe not only me, interpreted this as a complicated name for "variable". So I assumed it will be a parameter to some method of the TTP. But it isn't? It's just the way to define the ring? You could define it the same in Fourier space, except that you have to make the array large enough from the beginning? Why not doing that, and saying, your computation relies on the Fourier transform of the representation? Can this give insight why TTPs are a ring and why they have zero divisors? 
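The zero divisors asked about here are easy to exhibit: in the ring of polynomials truncated after degree D-1, the element t is nilpotent. A two-coefficient check, using the coefficient-array convention from the announcement (nothing below is taylorpoly code):

```python
import numpy as np

t = np.array([0.0, 1.0])    # coefficients of the polynomial t, D = 2
full = np.convolve(t, t)    # untruncated square: t^2 -> [0, 0, 1]
truncated = full[:len(t)]   # keep only degrees 0 and 1
print(truncated)            # all zeros: t * t =_2 0 although t != 0
```

So unlike i with i * i = -1, truncation really discards information, and the ring cannot be a field.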
>>>> ?In fact, you /have/ to provide >>>> external binary operators, because I guess you also want to have >>>> numpy.ndarrays as left operand. ?In that case, the undarray will have >>>> higher precedence, and will treat your data structure as a scalar, >>>> applying it to all the undarray's elements. >>> >>> well, actually it should treat it as a scalar since the Taylor >>> polynomial is something like a real or complex number. >> >> Maybe I misunderstood you completely, but didn't you want to implement >> arrays of polynomials using numpy? ?So I guess you want to apply a >> vector from numpy pairwise to the polynomials in the P-object? > > no, the vectorization is something different. It's purpose becomes > only clear when applied in Algorithmic Differentiation. Hey folks, here's a cool package, but the maintainer didn't tell us! ;-) > ?E.g. if you have a function > f: R^N -> R > x -> y=f(x) > where x = [x1,...,xN] > > and you want to compute the gradient g(x) of f(x), then you can compute > df(x)/dxn by propagating ?the following array of Taylor polynomials: > > x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., > UTPS([xN_0,0]), dtype=object) > y = f(x) So what is the result of applying f to some UTPS instance, is it a plain number, is an UTPS again? How do you calculate? Can one calculate the derivative of some function using your method at a certain point without knowledge of the analytical derivative? I guess that's the purpose. > if you want to have the complete gradient, you will have to repeat N > times. Each time for the same zero'th coefficients [x1,...,xN]. > > Using the vecorized version, you would do only one propagation > x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0, > 0,...,1,...0]), ..., UTPS([xN_0,0,....,1]), dtype=object) > y = f(x) > > i.e. you save the overhead of calling the same function N times. Ok, I understand. Today it's too late, I will reason tomorrow about it. >> Why don't you use multidimensional arrays? 
?Has it reasons in the C >> accessibility? ?Now, as I see it, you implement your strides manually. >> ?With a multidimensional array, you could even create arrays of shape >> (10, 12) of D-polynomials by storing an ndarray of shape (10, 12, D) >> or (D, 10, 12). > > the goal is to have several Taylor polynomials evaluated in the same > base point, e.g. > [x_0, x_{11}, x_{21}, x_{31}] > [x_0, x_{12}, x_{22}, x_{32}] > [x_0, x_{13}, x_{23}, x_{33}] > > i.e. P=3, D=3 > One could use an (P,D) array. However, one would do unnecessary > computations since x_0 is the same for all P polynomials. > I.e. one implements the data structure as > [x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}]. > > This results in a non-const stride access. No? I think, the D axis stride shold be P with offset 1 and the P axis stride 1? Is there a specific reason to store it raveled? And not x_0 and ndarray(shape = (P, D - 1))? >> Furthermore, I believe there is some elegant way formulating the >> product for a single polynomial. ?Let me think a bit ... >> >> For one entry of [z]_E, you want to sum up all pairs: >> ? ?x_{0} y{E} + ... + x{D} y{E - D} , ? ? (1) >> right? ?In a matrix, containing all mutual products x_{i} y{j}, this >> are diagonals. ?Rotating the matrix by 45? counterclockwise, they are >> sums in columns. ?Hmmm... how to rotate a matrix by 45?? >> >> Another fresh look: (1) looks pretty much like the discrete >> convolution of x_{i} and y_{i} at argument E. ?Going into Fourier >> space, this becomes a product. ?x_i and y_i have finite support, >> because one sets x_{i} and y_{i} = 0 outside 0 <= i <= D. ?The support >> of (1) in the running index is at most [0, D]. ?The support in E is at >> most [0, 2 D]. ?Thus you don't make mistakes by using DFT, when you >> pad x and y by D zeros on the right. ?My strategy is: Push the padded >> versions of x and y into Fourier space, calculate their product, >> transform back, cut away the D last entries, and you're done! 
>> >> I guess you mainly want to parallelise the calculation instead of >> improving the singleton calculation itself? ?So then, even non-FFT >> would incorporate 2 D explicit python sums for Fourier transform, and >> again 2 D sums for transforming back, is this an improvement over the >> method you are using currently? ?And you could code it in pure Python >> :-) >> >> I will investigate this tonight. ?I'm curious. ?Irrespective of that I >> have also other fish to fry ... ;-) >> >> iirc, in fact DFT is used for efficient polynomial multiplication. >> Maybe Chuck or another core developer of numpy can tell whether >> numpy.polynomial does it actually that way? > > I believe the degree D is typically much to small (i.e. D <= 4) to > justify the additional overhead of using FFT, > though there may be cases when really high order polynomials are used. I guess it's not overhead. The number of calculations should be in equilibrium at very low D, am I wrong? And you win to not have to compile a C library but only native text Python code. E.g., your optimisation package is quite interesting for me, but I'm using Windows as my main system, so it will be painful to compile. And the code is more straightforward, more concise, easier to maintain and easier to understand, ... :-) I really do not want to diminish your programming skills, please do not misunderstand! I only mean the subject. Friedrich From friedrichromstedt at gmail.com Sat Feb 27 18:36:09 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 00:36:09 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: 2010/2/27 Sebastian Walter : > On Sat, Feb 27, 2010 at 11:11 PM, Friedrich Romstedt > wrote: >> Ok, it took me about one hour, but here they are: Fourier-accelerated >> polynomials. > > that's the spirit! ;) Yes! I like it! 
:-) >>> python >> Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 >> Type "help", "copyright", "credits" or "license" for more information. >>>>> import gdft_polynomial >>>>> p1 = gdft_polynomial.Polynomial([1]) >>>>> p2 = gdft_polynomial.Polynomial([2]) >>>>> p1 * p2 >> >>>>> print p1 * p2 >> [ 2.+0.j] >>>>> p1 = gdft_polynomial.Polynomial([1, 1]) >>>>> p2 = gdft_polynomial.Polynomial([1]) >>>>> print p1 * p2 >> [ 1. +6.12303177e-17j ?1. -6.12303177e-17j] >>>>> p2 = gdft_polynomial.Polynomial([1, 2]) >>>>> print p1 * p2 >> [ 1. +8.51170986e-16j ?3. +3.70074342e-17j ?2. -4.44089210e-16j] >>>>> p1 = gdft_polynomial.Polynomial([1, 2, 3, 4, 3, 2, 1]) >>>>> p2 = gdft_polynomial.Polynomial([4, 3, 2, 1, 2, 3, 4]) >>>>> print (p1 * p2).coefficients.real >> [ ?4. ?11. ?20. ?30. ?34. ?35. ?36. ?35. ?34. ?30. ?20. ?11. ? 4.] >>>>> >> >> github.com/friedrichromstedt/gdft_polynomials >> >> It's open for bug hunting :-) >> >> Haven't checked the last result. > looks correct We should check, simply using numpy.polynomial >> I used my own gdft module. ?Maybe one could incorporate numpy.fft >> easily. ?But that's your job, Sebastian, isn't it? ?Feel free to push >> to the repo, and don't forget to add your name to the copyright >> notice, hope you are happy with MIT. > i'll have a look at it. I will be obliged. >> Anyway, I don't know whether numpy.fft supports transforming only one >> coordinate and using the others for "parallelisation"? I will check tomorrow. Suggestion: The other thread is the main thread, please reply there. (Gmane shows also the thread structure ...) If it's not related to this one ... 
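"We should check, simply using numpy.polynomial" — the quoted gdft result can indeed be cross-checked with a plain convolution, and the zero-padding trick described earlier reproduces it through numpy.fft. A sketch (np.convolve computes the full, untruncated product):

```python
import numpy as np

x = [1, 2, 3, 4, 3, 2, 1]
y = [4, 3, 2, 1, 2, 3, 4]

direct = np.convolve(x, y)   # coefficient-wise polynomial product

# FFT route: pad to length 2D+1 so the circular convolution equals the linear one
n = len(x) + len(y) - 1
via_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

print(direct)                        # 4 11 20 30 34 35 36 35 34 30 20 11 4
print(np.allclose(via_fft, direct))  # True, up to rounding
```

Both routes agree with the coefficients Friedrich posted, so the gdft result checks out.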
From friedrichromstedt at gmail.com Sat Feb 27 18:47:05 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 00:47:05 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: Sebastian, and, please, be not offended by what I wrote. I regret a bit my jokes ... It's simply too late at night. Friedrich From rblove_lists at comcast.net Sat Feb 27 21:43:25 2010 From: rblove_lists at comcast.net (Robert Love) Date: Sat, 27 Feb 2010 20:43:25 -0600 Subject: [Numpy-discussion] Reading and Comparing Two Files Message-ID: <90485910-C1B8-49AC-9EF5-452670BFB182@comcast.net> What is the efficient numpy way to compare data from two different files? For the nth line in each file I want to operate on the numbers. I've been using loadtxt() data_5x5 = N.loadtxt("file5") data_8x8 = N.loadtxt("file8") for line in data_5x5: pos5 = N.array([line[0], line[1], line[2]]) This works fine for one file but how to I get the same line's worth of data from the other file? From patrickmarshwx at gmail.com Sat Feb 27 23:35:02 2010 From: patrickmarshwx at gmail.com (Patrick Marsh) Date: Sat, 27 Feb 2010 22:35:02 -0600 Subject: [Numpy-discussion] Building Numpy Windows Superpack Message-ID: Greetings, I have been trying to build the numpy superpack on windows using the binaries posted by David. Unfortunately, I haven't even been able to correctly write the site.cfg file to locate all three sets of binaries needed for the superpack. When I manually specify to use only the sse3 binaries, I can get numpy to build from trunk, but it fails miserably when running the test suite. In fact, in python26, the tests freeze python and causes it to exit. I figured I'd try to get this set up correctly before even trying to compile the cpucaps nsis plugin. If someone has successfully used David's binaries would they be willing to share their site.cfg? 
Thanks in advance. Patrick -- Patrick Marsh Ph.D. Student / NSSL Liaison to the HWT School of Meteorology / University of Oklahoma Cooperative Institute for Mesoscale Meteorological Studies National Severe Storms Laboratory http://www.patricktmarsh.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Sun Feb 28 01:56:17 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 28 Feb 2010 01:56:17 -0500 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: <20100226131258.GB12636@doriath.local> References: <20100226114300.GA12362@doriath.local> <1267185095.2728.464.camel@talisman> <20100226131258.GB12636@doriath.local> Message-ID: On 26-Feb-10, at 8:12 AM, Ernest Adrogué wrote: > Thanks for the tip. I didn't know that... > Also, frompyfunc appears to crash python when the last argument is 0: > > In [9]: func=np.frompyfunc(lambda x: x, 1, 0) > > In [10]: func(np.arange(5)) > Violació de segment > > This with Python 2.5.5, Numpy 1.3.0 on GNU/Linux. (previous reply mysteriously didn't make it to the list...) Still happening to me in latest svn. Can you file a ticket? http://projects.scipy.org/numpy/report Thanks, David From gael.varoquaux at normalesup.org Sun Feb 28 04:25:24 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 28 Feb 2010 10:25:24 +0100 Subject: [Numpy-discussion] Sorting objects with ndarrays Message-ID: <20100228092524.GA32162@phare.normalesup.org> Hi, I need to have a list of objects that contain ndarrays to be sorted. The reason that I want them sorted is that these lists are populated in an arbitrary order, but their order really doesn't matter, and I am trying to make it reproducible for debugging and hashing. The problem is that ndarrays cannot be compared. 
So I have tried to override the 'cmp' in the 'sorted' function, however I am comparing fairly complex objects, and I am having a hard time predicting which member of the object will contain the array. So I am building a more and more complex 'cmp' replacement. Does anybody have a good idea what a better strategy would be? Cheers, Gaël From cournape at gmail.com Sun Feb 28 05:31:04 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 28 Feb 2010 19:31:04 +0900 Subject: [Numpy-discussion] Building Numpy Windows Superpack In-Reply-To: References: Message-ID: <5b8d13221002280231q4c3eb8fm8b8f25b8bbc36962@mail.gmail.com> Hi Patrick, On Sun, Feb 28, 2010 at 1:35 PM, Patrick Marsh wrote: > Greetings, > I have been trying to build the numpy superpack on windows using the > binaries posted by David. Could you post *exactly* the sequence of commands you executed? Especially at the beginning, building things can be frustrating because the cause of failures can be hard to diagnose. FWIW, I've just built the nosse version with mingw on windows 7, there was no issue at all, cheers, David From pav at iki.fi Sun Feb 28 06:05:15 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 28 Feb 2010 13:05:15 +0200 Subject: [Numpy-discussion] Sorting objects with ndarrays In-Reply-To: <20100228092524.GA32162@phare.normalesup.org> References: <20100228092524.GA32162@phare.normalesup.org> Message-ID: <1267355114.8666.3.camel@idol> On Sun, 2010-02-28 at 10:25 +0100, Gael Varoquaux wrote: [clip] > The problem is that ndarrays cannot be compared. So I have tried to > override the 'cmp' in the 'sorted' function, however I am comparing > fairly complex objects, and I am having a hard time predicting which > member of the object will contain the array. I don't understand what "predicting which member of the object" means? Do you mean that in the array, you have classes that contain ndarrays as their attributes, and the classes have __cmp__ implemented? 
If not, can you tell why

def xcmp(a, b):
    a_nd = isinstance(a, ndarray)
    b_nd = isinstance(b, ndarray)

    if a_nd and b_nd:
        pass # compare ndarrays in some way
    elif a_nd:
        return 1  # sort ndarrays first
    elif b_nd:
        return -1 # sort ndarrays first
    else:
        return cmp(a, b) # ordinary compare

does not work? Cheers, Pauli
From gael.varoquaux at normalesup.org Sun Feb 28 06:10:17 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 28 Feb 2010 12:10:17 +0100 Subject: [Numpy-discussion] Sorting objects with ndarrays In-Reply-To: <1267355114.8666.3.camel@idol> References: <20100228092524.GA32162@phare.normalesup.org> <1267355114.8666.3.camel@idol> Message-ID: <20100228111017.GB32162@phare.normalesup.org> On Sun, Feb 28, 2010 at 01:05:15PM +0200, Pauli Virtanen wrote: > su, 2010-02-28 kello 10:25 +0100, Gael Varoquaux kirjoitti: > [clip] > > The problem is that ndarrays cannot be compared. So I have tried to > > override the 'cmp' in the 'sorted' function, however I am comparing > > fairly complex objects, and I am having a hard time predicting which > > member of the object will contain the array. > I don't understand what "predicting which member of the object" means? > Do you mean that in the array, you have classes that contain ndarrays as > their attributes, and the classes have __cmp__ implemented? Well, I might not have to compare ndarrays, but fairly arbitrary structures (dictionaries, classes and lists) as I am dealing with semi-structured data coming from a stack of unorganised experimental data. Python has some logic for comparing these structures by comparing their members, but if these are ndarrays, I am back to my original problem.
Because I have things like lists of ndarrays, on which this fails. If I could say: use recursively xcmp instead of cmp for this sort, it would work, but the only way I can think of doing this is by monkey-patching temporarily __builtins__.cmp, which I'd like to avoid, as it is not thread safe. Cheers, Ga?l From friedrichromstedt at gmail.com Sun Feb 28 07:24:36 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 13:24:36 +0100 Subject: [Numpy-discussion] Reading and Comparing Two Files In-Reply-To: <90485910-C1B8-49AC-9EF5-452670BFB182@comcast.net> References: <90485910-C1B8-49AC-9EF5-452670BFB182@comcast.net> Message-ID: 2010/2/28 Robert Love : > What is the efficient numpy way to compare data from two different files? ?For the nth line in each file I want to operate on the numbers. ? I've been using loadtxt() > > data_5x5 = N.loadtxt("file5") > > data_8x8 = N.loadtxt("file8") > > for line in data_5x5: > ? ? ? ?pos5 = N.array([line[0], line[1], ?line[2]]) I believe there are several ways of doing that, and mine might not be the most efficient at all: for line5, line8 in zip(data_5x5, data_8x8): # line5 and line8 are row vectors of paired lines pass complete = numpy.hstack(data_5x5, data_8x8) # If data_5x5.shape[0] == data_8x8.shape[0], i.e., same number of rows. for line in complete: # complete is comprised of concatenated row vectors. pass for idx in xrange(0, min(data_5x5.shape[0], data_8x8.shape[0])): line5 = data_5x5[idx] line8 = data_8x8[idx] # Do sth with the vectors. Or: a1 = data_5x5[idx, (0, 1, 2)] # Extract items 0, 1, 2 of line idx of first file. a2 = data_8x8[idx, (0, 42)] # Extract items 0, 42 of line idx of second file. ... 
Friedrich From josef.pktd at gmail.com Sun Feb 28 08:52:00 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 28 Feb 2010 08:52:00 -0500 Subject: [Numpy-discussion] Reading and Comparing Two Files In-Reply-To: References: <90485910-C1B8-49AC-9EF5-452670BFB182@comcast.net> Message-ID: <1cd32cbb1002280552o2af4220ft9cb6797dd24f351a@mail.gmail.com> On Sun, Feb 28, 2010 at 7:24 AM, Friedrich Romstedt wrote: > 2010/2/28 Robert Love : >> What is the efficient numpy way to compare data from two different files? ?For the nth line in each file I want to operate on the numbers. ? I've been using loadtxt() >> >> data_5x5 = N.loadtxt("file5") >> >> data_8x8 = N.loadtxt("file8") >> >> for line in data_5x5: >> ? ? ? ?pos5 = N.array([line[0], line[1], ?line[2]]) If you just want to compare row by row when you already have the arrays, you can just use numpy, e.g. based on first 3 columns: (data_8x8[:,:3] == data_5x5[:,:3]).all(1) but from your question it's not clear to me what you actually want to compare Josef > > I believe there are several ways of doing that, and mine might not be > the most efficient at all: > > for line5, line8 in zip(data_5x5, data_8x8): > ? ?# line5 and line8 are row vectors of paired lines > ? ?pass > > complete = numpy.hstack(data_5x5, data_8x8) ?# If data_5x5.shape[0] == > data_8x8.shape[0], i.e., same number of rows. > for line in complete: > ? ?# complete is comprised of concatenated row vectors. > ? ?pass > > for idx in xrange(0, min(data_5x5.shape[0], data_8x8.shape[0])): > ? ?line5 = data_5x5[idx] > ? ?line8 = data_8x8[idx] > ? ?# Do sth with the vectors. Or: > ? ?a1 = data_5x5[idx, (0, 1, 2)] ?# Extract items 0, 1, 2 of line idx > of first file. > ? ?a2 = data_8x8[idx, (0, 42)] ? # Extract items 0, 42 of line idx of > second file. > > ... 
> > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From friedrichromstedt at gmail.com Sun Feb 28 09:01:18 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 15:01:18 +0100 Subject: [Numpy-discussion] Sorting objects with ndarrays In-Reply-To: <20100228111017.GB32162@phare.normalesup.org> References: <20100228092524.GA32162@phare.normalesup.org> <1267355114.8666.3.camel@idol> <20100228111017.GB32162@phare.normalesup.org> Message-ID: > Well, I might not have to compare ndarrays, but fairly arbitrary > structures (dictionnaries, classes and lists) as I am dealing with > semi-structured data coming from a stack of unorganised experimental > data. Python has some logic for comparing these structures by comparing > their members, but if these are ndarrays, I am back to my original > problem. I also do not understand how to build an oder on such a thing at all, maybe you can give a simple example? >> If not, can you tell why > >> ? ? ? ? def xcmp(a, b): >> ? ? ? ? ? ? a_nd = isinstance(a, ndarray) >> ? ? ? ? ? ? b_nd = isinstance(b, ndarray) > >> ? ? ? ? ? ? if a_nd and b_nd: >> ? ? ? ? ? ? ? ? pass # compare ndarrays in some way >> ? ? ? ? ? ? elif a_nd: >> ? ? ? ? ? ? ? ? return 1 ?# sort ndarrays first >> ? ? ? ? ? ? elif b_nd: >> ? ? ? ? ? ? ? ? return -1 # sort ndarrays first >> ? ? ? ? ? ? else: >> ? ? ? ? ? ? ? ? return cmp(a, b) # ordinary compare > >> does not work? > > Because I have things like lists of ndarrays, on which this fails. If I > could say: use recursively xcmp instead of cmp for this sort, it would > work, but the only way I can think of doing this is by monkey-patching > temporarily __builtins__.cmp, which I'd like to avoid, as it is not > thread safe. 
Hmm, you could also replace numpy.greater and similar temporarily with a with-statement like:

# Everything as usual, comparing ndarrays results in ndarrays here.
with monkeypatched_operators:
    # Comparing ndarrays may result in scalars or what you need.
    pass # Perform the sorting
# Everything as usual ...

Though that's maybe not threadsafe too. I think I'm lacking knowledge of what you want to achieve. Ahh, I think you want to order them like in a telephone dictionary? Then you could use ndarray.flatten().tolist() to compare them using usual Python semantics? my 2 cents, Friedrich
From gael.varoquaux at normalesup.org Sun Feb 28 09:07:43 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 28 Feb 2010 15:07:43 +0100 Subject: [Numpy-discussion] Sorting objects with ndarrays In-Reply-To: References: <20100228092524.GA32162@phare.normalesup.org> <1267355114.8666.3.camel@idol> <20100228111017.GB32162@phare.normalesup.org> Message-ID: <20100228140743.GC32162@phare.normalesup.org> On Sun, Feb 28, 2010 at 03:01:18PM +0100, Friedrich Romstedt wrote: > > Well, I might not have to compare ndarrays, but fairly arbitrary > > structures (dictionaries, classes and lists) as I am dealing with > > semi-structured data coming from a stack of unorganised experimental > > data. Python has some logic for comparing these structures by comparing > > their members, but if these are ndarrays, I am back to my original > > problem. > I also do not understand how to build an order on such a thing at all, > maybe you can give a simple example? Well, you can't really build an order in the mathematical sense of ordering. All I care about is that if you give me the same shuffled list of elements twice, it comes out identical. I am fighting the fact that dictionaries in Python have no order, and thus shuffle the data from run to run.
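One cmp-free way to get the reproducible-but-arbitrary ordering discussed in this thread is a recursive key function rather than a cmp replacement: map every object to a tuple that Python can always order, and hand it to sorted(key=...). This is thread-safe and also survives Python 3, where the cmp= argument is gone. A minimal sketch only — the canonical_key helper, its type tags, and the sample data below are invented for illustration, not part of the thread (and on the NumPy 1.3 of this era, tobytes() was spelled tostring()):

```python
import numpy as np

def canonical_key(obj):
    """Recursively map nested dicts/lists/tuples/ndarrays/scalars to a
    tuple Python can always order, so sorting is reproducible even
    though the order itself is arbitrary.  The leading integer tags
    keep objects of different kinds from being compared element-wise."""
    if isinstance(obj, np.ndarray):
        # Arrays compare by shape, dtype name, then raw bytes.
        return (2, obj.shape, str(obj.dtype), obj.tobytes())
    if isinstance(obj, dict):
        # Dicts compare by their sorted (key, value) canonical keys.
        return (1, tuple(sorted((canonical_key(k), canonical_key(v))
                                for k, v in obj.items())))
    if isinstance(obj, (list, tuple)):
        return (0, tuple(canonical_key(x) for x in obj))
    # Fallback for scalars and anything else: type name plus repr.
    return (3, str(type(obj)), repr(obj))

# Same elements fed in two different orders come out identically sorted.
items = [{'a': np.arange(3)}, [np.ones(2), 5], {'a': np.zeros(3)}]
stable   = sorted(items, key=canonical_key)
stable_r = sorted(items[::-1], key=canonical_key)
```

Real objects with unsortable attributes would need their own branch in canonical_key; the fallback on repr() is only safe when repr is deterministic.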
> Hmm, you could also replace numpy.greater and similar temporarily with > an with statement like: > # Everything as usual, comparing ndarrays results in ndarrays here. > with monkeypatched_operators: > # Comparing ndarrays may result in scalars or what you need. > pass # Perform the sorting > # Everything as usual ... > Though that's maybe not threadsafe too. Yes, it's not threadsafe either. > Then you could use ndarray.flatten().tolist() to compare them using > usual Python semantics? That solves the local problem of comparing 2 arrays (though will be quite slow), but not the general problem of sorting in a reproducible order (may it be arbitary) objects containing arrays. Anyhow, I solved the problem implementing a subclass of dict and using it everywhere in my code. Right now it seems to be working for what I need. Cheers, Ga?l From sebastian.walter at gmail.com Sun Feb 28 09:22:16 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sun, 28 Feb 2010 15:22:16 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 12:30 AM, Friedrich Romstedt wrote: > 2010/2/27 Sebastian Walter : >> I'm sorry this comment turns out to be confusing. > > Maybe it's not important. > >> It has apparently quite the contrary effect of what I wanted to achieve: >> Since there is already a polynomial module ?in numpy I wanted to >> highlight their difference >> and stress that they are used to do arithmetic, e.g. compute >> >> f([x],[y]) = [x] * (sin([x])**2 + [y]) >> >> in Taylor arithmetic. > > That's cool! ?You didn't mention that. ?Now I step by step find out > what your module (package?) is for. ?You are a mathematician? ?Many > physicists complain that mathematicians cannot make their point ;-) I studied physics but switchted to applied maths. 
> > I think I can use that to make my upy accept arbitrary functions, but > how do you apply sin() to a TTP? perform truncated Taylor expansion of [y]_D = sin([x]_D), i.e. y_d = d^d/dt^d sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0} to obtain an explicit algorithm. > > One more question: You said, t is an "external paramter". ?I, and > maybe not only me, interpreted this as a complicated name for > "variable". ?So I assumed it will be a parameter to some method of the > TTP. ?But it isn't? ?It's just the way to define the ring? ?You could > define it the same in Fourier space, except that you have to make the > array large enough from the beginning? ?Why not doing that, and > saying, your computation relies on the Fourier transform of the > representation? ?Can this give insight why TTPs are a ring and why > they have zero divisors? it has zero divisors because for instance multiplication of the two polynomials t*t^{D-1} truncated at t^{D-1} yields is zero. > >>>>> ?In fact, you /have/ to provide >>>>> external binary operators, because I guess you also want to have >>>>> numpy.ndarrays as left operand. ?In that case, the undarray will have >>>>> higher precedence, and will treat your data structure as a scalar, >>>>> applying it to all the undarray's elements. >>>> >>>> well, actually it should treat it as a scalar since the Taylor >>>> polynomial is something like a real or complex number. >>> >>> Maybe I misunderstood you completely, but didn't you want to implement >>> arrays of polynomials using numpy? ?So I guess you want to apply a >>> vector from numpy pairwise to the polynomials in the P-object? >> >> no, the vectorization is something different. It's purpose becomes >> only clear when applied in Algorithmic Differentiation. > > Hey folks, here's a cool package, but the maintainer didn't tell us! ;-) well, thanks :) > >> ?E.g. 
if you have a function >> f: R^N -> R >> x -> y=f(x) >> where x = [x1,...,xN] >> >> and you want to compute the gradient g(x) of f(x), then you can compute >> df(x)/dxn by propagating ?the following array of Taylor polynomials: >> >> x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., >> UTPS([xN_0,0]), dtype=object) >> y = f(x) > > So what is the result of applying f to some UTPS instance, is it a > plain number, is an UTPS again? ?How do you calculate? > > Can one calculate the derivative of some function using your method at > a certain point without knowledge of the analytical derivative? ?I > guess that's the purpose. Yes, that's the whole point: Obtaining (higher order) derivatives of functions at machine precision for which no symbolic representation is readily available. That includes computer codes with recursions (e.g. for loops) that are a no-go for symbolic differentiation. Supposedly (I've never done that) you can even differentiate Monte Carlo simulations in that way. > >> if you want to have the complete gradient, you will have to repeat N >> times. Each time for the same zero'th coefficients [x1,...,xN]. >> >> Using the vecorized version, you would do only one propagation >> x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0, >> 0,...,1,...0]), ..., UTPS([xN_0,0,....,1]), dtype=object) >> y = f(x) >> >> i.e. you save the overhead of calling the same function N times. > > Ok, I understand. ?Today it's too late, I will reason tomorrow about it. > >>> Why don't you use multidimensional arrays? ?Has it reasons in the C >>> accessibility? ?Now, as I see it, you implement your strides manually. >>> ?With a multidimensional array, you could even create arrays of shape >>> (10, 12) of D-polynomials by storing an ndarray of shape (10, 12, D) >>> or (D, 10, 12). >> >> the goal is to have several Taylor polynomials evaluated in the same >> base point, e.g. 
>> [x_0, x_{11}, x_{21}, x_{31}] >> [x_0, x_{12}, x_{22}, x_{32}] >> [x_0, x_{13}, x_{23}, x_{33}] >> >> i.e. P=3, D=3 >> One could use an (P,D) array. However, one would do unnecessary >> computations since x_0 is the same for all P polynomials. >> I.e. one implements the data structure as >> [x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}]. >> >> This results in a non-const stride access. > > No? ?I think, the D axis stride shold be P with offset 1 and the P > axis stride 1? ?Is there a specific reason to store it raveled? ?And > not x_0 and ndarray(shape = (P, D - 1))? 1) cosmetic reasons 2) easier to interface C to Python. > >>> Furthermore, I believe there is some elegant way formulating the >>> product for a single polynomial. ?Let me think a bit ... >>> >>> For one entry of [z]_E, you want to sum up all pairs: >>> ? ?x_{0} y{E} + ... + x{D} y{E - D} , ? ? (1) >>> right? ?In a matrix, containing all mutual products x_{i} y{j}, this >>> are diagonals. ?Rotating the matrix by 45? counterclockwise, they are >>> sums in columns. ?Hmmm... how to rotate a matrix by 45?? >>> >>> Another fresh look: (1) looks pretty much like the discrete >>> convolution of x_{i} and y_{i} at argument E. ?Going into Fourier >>> space, this becomes a product. ?x_i and y_i have finite support, >>> because one sets x_{i} and y_{i} = 0 outside 0 <= i <= D. ?The support >>> of (1) in the running index is at most [0, D]. ?The support in E is at >>> most [0, 2 D]. ?Thus you don't make mistakes by using DFT, when you >>> pad x and y by D zeros on the right. ?My strategy is: Push the padded >>> versions of x and y into Fourier space, calculate their product, >>> transform back, cut away the D last entries, and you're done! >>> >>> I guess you mainly want to parallelise the calculation instead of >>> improving the singleton calculation itself? 
?So then, even non-FFT >>> would incorporate 2 D explicit python sums for Fourier transform, and >>> again 2 D sums for transforming back, is this an improvement over the >>> method you are using currently? ?And you could code it in pure Python >>> :-) >>> >>> I will investigate this tonight. ?I'm curious. ?Irrespective of that I >>> have also other fish to fry ... ;-) >>> >>> iirc, in fact DFT is used for efficient polynomial multiplication. >>> Maybe Chuck or another core developer of numpy can tell whether >>> numpy.polynomial does it actually that way? >> >> I believe the degree D is typically much to small (i.e. D <= 4) to >> justify the additional overhead of using FFT, >> though there may be cases when really high order polynomials are used. > > I guess it's not overhead. ?The number of calculations should be in > equilibrium at very low D, am I wrong? ?And you win to not have to > compile a C library but only native text Python code. Well, I'm really no expert on the DFT. But doesn't the DFT compute on the complex numbers? you'll have extra overhead (let's say factor >= 2?) And as far as I can tell, you do the computations on padded arrays which possibly introduces cache misses (maybe another factor 2?) Isn't the advantage of the DFT not that you can use the FFT which would reduce the runtime from O(D^2) to O(D log(D))? I'm pretty sure that only pays off for D larger than 10. >?E.g., your > optimisation package is quite interesting for me, but I'm using > Windows as my main system, so it will be painful to compile. ?And the > code is more straightforward, more concise, easier to maintain and > easier to understand, ... :-) ?I really do not want to diminish your > programming skills, please do not misunderstand! ?I only mean the > subject. The project uses scons which is available for windows as binaries. I haven't tried it myself but I'm confident that it's a 1 minutes job on windows. 
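The truncated multiplication this thread keeps returning to fits in a few lines of plain NumPy — this is only a sketch of the O(D^2) convolution route, not Sebastian's C kernels — and it makes the zero-divisor example t * t^(D-1) = 0 from earlier in the thread concrete:

```python
import numpy as np

def ttp_mul(x, y):
    """Multiply two truncated Taylor polynomials given as coefficient
    arrays of equal length D, truncating the product back to degree
    D-1.  np.convolve computes exactly the Cauchy-product sums
    z_E = sum_d x_d * y_{E-d}; we keep only the first D of them."""
    D = len(x)
    return np.convolve(x, y)[:D]

D = 4
t     = np.array([0., 1., 0., 0.])   # the polynomial t
t_Dm1 = np.array([0., 0., 0., 1.])   # the polynomial t**(D-1)
zero  = ttp_mul(t, t_Dm1)            # t**D truncates to the zero polynomial
```

So two nonzero elements multiply to zero, which is why the truncated polynomials form a ring with zero divisors rather than a field.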
I have implemented some of the algorithms just as you explained in another package (http://github.com/b45ch1/algopy/blob/master/algopy/utp/utps.py). But I don't think the code looks easier to maintain than the C code and it's also slower. > > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sebastian.walter at gmail.com Sun Feb 28 09:23:40 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sun, 28 Feb 2010 15:23:40 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 12:47 AM, Friedrich Romstedt wrote: > Sebastian, and, please, be not offended by what I wrote. ?I regret a > bit my jokes ... It's simply too late at night. no offense taken > > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From ralf.gommers at googlemail.com Sun Feb 28 09:49:21 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 28 Feb 2010 22:49:21 +0800 Subject: [Numpy-discussion] Building Numpy Windows Superpack In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 12:35 PM, Patrick Marsh wrote: > Greetings, > > I have been trying to build the numpy superpack on windows using the > binaries posted by David. Unfortunately, I haven't even been able to > correctly write the site.cfg file to locate all three sets of binaries > needed for the superpack. When I manually specify to use only the sse3 > binaries, I can get numpy to build from trunk, but it fails miserably when > running the test suite. In fact, in python26, the tests freeze python and > causes it to exit. 
I figured I'd try to get this set up correctly before > even trying to compile the cpucaps nsis plugin. > > If someone has successfully used David's binaries would they be willing to > share their site.cfg? Thanks in advance. > > I haven't been able to finish the binaries just yet, but I got this to work. Without needing a site.cfg file, the paver script should be enough. In pavement.py: NOSSE_CFG = {'BLAS': r'/Users/rgommers/.wine/drive_c/local/bin/yop/nosse', 'LAPACK': r'/Users/rgommers/.wine/drive_c/local/bin/yop/nosse'} Then: $ paver bdist_superpack FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/Users/rgommers/.wine/drive_c/local/bin/yop/nosse'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sun Feb 28 10:50:07 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 28 Feb 2010 16:50:07 +0100 Subject: [Numpy-discussion] Python 3 porting In-Reply-To: <1266759837.5722.134.camel@idol> References: <1266759837.5722.134.camel@idol> Message-ID: <4B8A90AF.4020006@gmail.com> Hi, Do you plan to make some noise about that when numpy2.0 will be release? IMHO you should. Do you for instance plan to have a clear announcement on the scipy web site? Xavier > Hi, > > The test suite passes now on Pythons 2.4 - 3.1. Further testing is very > welcome -- also on Python 2.x. Please check that your favourite software > still builds and works with SVN trunk Numpy. > > Currently, Scipy has some known failures because of > > (i) removed new= keyword in numpy.histogram > (ii) Cython supports only native size/alignment PEP 3118 buffers, and > Numpy arrays are most naturally expressed in the standardized > sizes. Supporting the full struct module alignment stuff appears > to be a slight PITA. I'll try to take a look at how to address > this. > > But everything else seems to work on Python 2.6. 
> > *** > > Python version 2.4.6 (#2, Jan 21 2010, 23:27:36) [GCC 4.4.1] > Ran 2509 tests in 18.892s > OK (KNOWNFAIL=4, SKIP=2) > > Python version 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) [GCC 4.4.1] > Ran 2512 tests in 18.531s > OK (KNOWNFAIL=4) > > Python version 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] > Ran 2519 tests in 19.367s > OK (KNOWNFAIL=4) > > Python version 3.1.1+ (r311:74480, Nov 2 2009, 14:49:22) [GCC 4.4.1] > Ran 2518 tests in 23.239s > OK (KNOWNFAIL=5) > > > Cheers, > Pauli > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From eadrogue at gmx.net Sun Feb 28 11:05:35 2010 From: eadrogue at gmx.net (Ernest =?iso-8859-1?Q?Adrogu=E9?=) Date: Sun, 28 Feb 2010 17:05:35 +0100 Subject: [Numpy-discussion] Apply a function to all indices In-Reply-To: References: <20100226114300.GA12362@doriath.local> <1267185095.2728.464.camel@talisman> <20100226131258.GB12636@doriath.local> Message-ID: <20100228160535.GA7474@doriath.local> 28/02/10 @ 01:56 (-0500), thus spake David Warde-Farley: > On 26-Feb-10, at 8:12 AM, Ernest Adrogu? wrote: > > > Thanks for the tip. I didn't know that... > > Also, frompyfunc appears to crash python when the last argument is 0: > > > > In [9]: func=np.frompyfunc(lambda x: x, 1, 0) > > > > In [10]: func(np.arange(5)) > > Violaci? de segment > > > > This with Python 2.5.5, Numpy 1.3.0 on GNU/Linux. > > > (previous reply mysteriously didn't make it to the list...) > > Still happening to me in latest svn. Can you file a ticket? http://projects.scipy.org/numpy/report Filed. http://projects.scipy.org/numpy/ticket/1416 Cheers. 
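For context, the segfault in ticket #1416 above is specific to asking frompyfunc for zero outputs (nout=0); the ordinary one-output form behaves normally. A minimal illustration (the doubling function is just made up for the example):

```python
import numpy as np

# frompyfunc(func, nin, nout) wraps a Python callable as an
# object-dtype ufunc.  The reported crash was for nout=0; with
# nout=1 the broadcast call works as expected.
double = np.frompyfunc(lambda x: 2 * x, 1, 1)
result = double(np.arange(5))   # object array containing 0, 2, 4, 6, 8
```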
From sierra_mtnview at sbcglobal.net Sun Feb 28 11:37:11 2010 From: sierra_mtnview at sbcglobal.net (Wayne Watson) Date: Sun, 28 Feb 2010 08:37:11 -0800 Subject: [Numpy-discussion] SciPy Mail List and Contacting Dave Kuhlman Message-ID: <4B8A9BB7.7040701@sbcglobal.net> An HTML attachment was scrubbed... URL: From rblove_lists at comcast.net Sun Feb 28 12:05:02 2010 From: rblove_lists at comcast.net (Robert Love) Date: Sun, 28 Feb 2010 11:05:02 -0600 Subject: [Numpy-discussion] Reading and Comparing Two Files In-Reply-To: References: <90485910-C1B8-49AC-9EF5-452670BFB182@comcast.net> Message-ID: On Feb 28, 2010, at 6:24 AM, Friedrich Romstedt wrote: > 2010/2/28 Robert Love : >> What is the efficient numpy way to compare data from two different files? For the nth line in each file I want to operate on the numbers. I've been using loadtxt() >> >> data_5x5 = N.loadtxt("file5") >> >> data_8x8 = N.loadtxt("file8") >> >> for line in data_5x5: >> pos5 = N.array([line[0], line[1], line[2]]) > > I believe there are several ways of doing that, and mine might not be > the most efficient at all: > > for line5, line8 in zip(data_5x5, data_8x8): > # line5 and line8 are row vectors of paired lines > pass > > complete = numpy.hstack(data_5x5, data_8x8) # If data_5x5.shape[0] == > data_8x8.shape[0], i.e., same number of rows. > for line in complete: > # complete is comprised of concatenated row vectors. > pass > > for idx in xrange(0, min(data_5x5.shape[0], data_8x8.shape[0])): > line5 = data_5x5[idx] > line8 = data_8x8[idx] > # Do sth with the vectors. Or: > a1 = data_5x5[idx, (0, 1, 2)] # Extract items 0, 1, 2 of line idx > of first file. > a2 = data_8x8[idx, (0, 42)] # Extract items 0, 42 of line idx of > second file. > Thank you, I will try this last method listed. I need to actually compute with the values from the two files to perform my comparison and the time tag is in different formats. Your method will get me access to the contents of two files. 
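Combining loadtxt() with josef's vectorized row comparison gives a loop-free version of the two-file task. A self-contained sketch — the file contents and the choice of the first three columns are hypothetical stand-ins, with StringIO used in place of real files:

```python
import numpy as np
from io import StringIO

# Stand-ins for the two text files (invented contents).
file5 = StringIO("1 2 3 9\n4 5 6 9\n7 8 9 9\n")
file8 = StringIO("1 2 3 0\n4 5 0 0\n7 8 9 0\n")

data_5x5 = np.loadtxt(file5)
data_8x8 = np.loadtxt(file8)

# For each row pair, do the first three columns agree?  One boolean
# per row, no Python loop.
same = (data_5x5[:, :3] == data_8x8[:, :3]).all(axis=1)
```

For values that were computed rather than written exactly, np.isclose(a, b).all(axis=1) is the safer test than ==.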
From josef.pktd at gmail.com Sun Feb 28 12:24:22 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 28 Feb 2010 12:24:22 -0500 Subject: [Numpy-discussion] SciPy Mail List and Contacting Dave Kuhlman In-Reply-To: <4B8A9BB7.7040701@sbcglobal.net> References: <4B8A9BB7.7040701@sbcglobal.net> Message-ID: <1cd32cbb1002280924k26d6ca5ela9db2bd0767dff87@mail.gmail.com> On Sun, Feb 28, 2010 at 11:37 AM, Wayne Watson wrote: > Google shows there is a mail list for SciPy, but when I go to the web page > it shows GMANE, and various feeds for SciPy-Dev and User. Maybe I'm missing > something? > > Information about gmane.comp.python.scientific.user that's the gmane mirror/interface to scipy-user; the original location of the scipy lists is here: http://mail.scipy.org/mailman/listinfo > > The archive for this list can be read the following ways: > > On the web, using frames and threads. > On the web, using a blog-like, flat interface. > Using an NNTP newsreader. > RSS feeds: > > All messages from the list, with excerpted texts. > Topics from the list, with excerpted texts. > All messages from the list, with complete texts. > Topics from the list, with complete texts. > > D. Kuhlman wrote an interesting Tutorial about SciPy (course outline) in > June 2006. Has it ever been updated? Not that I know of; the basics haven't changed much, but the scipy part is largely an index to content which is out of date. The most up-to-date documentation and index is http://docs.scipy.org/doc/ As a general scipy tutorial, I also like http://johnstachurski.net/lectures/scipy.html (also the other parts with numpy tutorials) Josef > > -- > "There is nothing so annoying as to have two people > talking when you're busy interrupting."
-- Mark Twain > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From xavier.gnata at gmail.com Sun Feb 28 13:51:59 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 28 Feb 2010 19:51:59 +0100 Subject: [Numpy-discussion] Speedup a code using apply_along_axis Message-ID: <4B8ABB4F.6060608@gmail.com> Hi, I'm sure I reinventing the wheel with the following code: from numpy import * from scipy import polyfit,stats def f(x,y,z): return x+y+z M=fromfunction(f,(2000,2000,10)) def foo(M): ramp=where(M<1000)[0] l=len(ramp) t=arange(l) if(l>1): return polyfit(t,ramp,1)[0] else: return 0 print apply_along_axis(foo,2,M) In real life M is not the result of one fromfunction call but it does not matter. The basic idea is to compute the slope (and only the slope) along one axis of 3D array. Only the values below a given threshold should be taken into account. The current code is ugly and slow. How to remove the len and the if statement? How to rewrite the code in a numpy oriented way? Xavier From robert.kern at gmail.com Sun Feb 28 14:03:35 2010 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Feb 2010 13:03:35 -0600 Subject: [Numpy-discussion] SciPy Mail List and Contacting Dave Kuhlman In-Reply-To: <4B8A9BB7.7040701@sbcglobal.net> References: <4B8A9BB7.7040701@sbcglobal.net> Message-ID: <3d375d731002281103q66200c8am5aecfc4f51a4656d@mail.gmail.com> On Sun, Feb 28, 2010 at 10:37, Wayne Watson wrote: > Google shows there is a mail list for SciPy, but when I go to the web page When you say "the web page", please include the URL. Are you talking about this page: http://www.scipy.org/Mailing_Lists ? > it shows GMANE, and various feeds for SciPy-Dev and User. Maybe I'm missing > something? In order to subscribe to one of the lists, click on the "Subscribe" link next to the list. 
That will show you all the information necessary to post to the list and receive replies. scipy-user is probably the one you are after. > D. Kuhlman wrote an interesting Tutorial about SciPy (course outline) in > June 2006. Has it ever been updated? If it's not updated on his own site, then no. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Sun Feb 28 14:17:07 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 28 Feb 2010 14:17:07 -0500 Subject: [Numpy-discussion] Speedup a code using apply_along_axis In-Reply-To: <4B8ABB4F.6060608@gmail.com> References: <4B8ABB4F.6060608@gmail.com> Message-ID: <1cd32cbb1002281117g2ccc1bn8a7f66fba6507203@mail.gmail.com> On Sun, Feb 28, 2010 at 1:51 PM, Xavier Gnata wrote: > Hi, > > I'm sure I reinventing the wheel with the following code: > from numpy import * > from scipy import polyfit,stats > > def f(x,y,z): > ? ?return x+y+z > M=fromfunction(f,(2000,2000,10)) > > def foo(M): > ? ?ramp=where(M<1000)[0] is this really what you want? I think this returns the indices not the values > ? ?l=len(ramp) > ? ?t=arange(l) > ? ?if(l>1): > ? ? ? ?return polyfit(t,ramp,1)[0] > ? ?else: > ? ? ? ?return 0 > > print apply_along_axis(foo,2,M) > > > In real life M is not the result of one fromfunction call but it does > not matter. > The basic idea is to compute the slope (and only the slope) along one > axis of 3D array. > Only the values below a given threshold should be taken into account. > > The current code is ugly and slow. > How to remove the len and the if statement? > How to rewrite the code in a numpy oriented way? 
Getting the slope or the linear fit can be done completely vectorized see numpy-discussion threads last April with titles "polyfit on multiple data points" "polyfit performance" Josef > Xavier > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From xavier.gnata at gmail.com Sun Feb 28 14:43:19 2010 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 28 Feb 2010 20:43:19 +0100 Subject: [Numpy-discussion] Speedup a code using apply_along_axis In-Reply-To: <1cd32cbb1002281117g2ccc1bn8a7f66fba6507203@mail.gmail.com> References: <4B8ABB4F.6060608@gmail.com> <1cd32cbb1002281117g2ccc1bn8a7f66fba6507203@mail.gmail.com> Message-ID: <4B8AC757.3060701@gmail.com> On 02/28/2010 08:17 PM, josef.pktd at gmail.com wrote: > On Sun, Feb 28, 2010 at 1:51 PM, Xavier Gnata wrote: > >> Hi, >> >> I'm sure I reinventing the wheel with the following code: >> from numpy import * >> from scipy import polyfit,stats >> >> def f(x,y,z): >> return x+y+z >> M=fromfunction(f,(2000,2000,10)) >> >> def foo(M): >> ramp=where(M<1000)[0] >> > is this really what you want? I think this returns the indices not the values > > Correct! It should be M[where(M<1000)] >> l=len(ramp) >> t=arange(l) >> if(l>1): >> return polyfit(t,ramp,1)[0] >> else: >> return 0 >> >> print apply_along_axis(foo,2,M) >> >> >> In real life M is not the result of one fromfunction call but it does >> not matter. >> The basic idea is to compute the slope (and only the slope) along one >> axis of 3D array. >> Only the values below a given threshold should be taken into account. >> >> The current code is ugly and slow. >> How to remove the len and the if statement? >> How to rewrite the code in a numpy oriented way? 
>> > Getting the slope or the linear fit can be done completely vectorized > see numpy-discussion threads last April with titles > "polyfit on multiple data points" "polyfit performance" > > Josef > > > Ok but the problem is that I also want to apply a threshold. In some cases, I end up less than 2 values below the threshold: There is nothing to fit and it should return 0. Hum....sounds like masked arrays could help...but I'm not familiar with masked arrays... Xavier From friedrichromstedt at gmail.com Sun Feb 28 15:06:43 2010 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sun, 28 Feb 2010 21:06:43 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: 2010/2/28 Sebastian Walter : >> I think I can use that to make my upy accept arbitrary functions, but >> how do you apply sin() to a TTP? > > perform truncated Taylor expansion of ?[y]_D = sin([x]_D), i.e. > y_d = d^d/dt^d ?sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0} > to obtain an explicit algorithm. > >> >> One more question: You said, t is an "external paramter". ?I, and >> maybe not only me, interpreted this as a complicated name for >> "variable". ?So I assumed it will be a parameter to some method of the >> TTP. ?But it isn't? ?It's just the way to define the ring? I guess you overlooked this question? >> You could >> define it the same in Fourier space, except that you have to make the >> array large enough from the beginning? ?Why not doing that, and >> saying, your computation relies on the Fourier transform of the >> representation? ?Can this give insight why TTPs are a ring and why >> they have zero divisors? > it has zero divisors because for instance multiplication of the two > polynomials t*t^{D-1} > truncated at t^{D-1} yields is zero. Yes, but I wanted to have a look from Fourier space of view. 
Because there everything is just a multiplication, and one does not have to perform the convolution in mind. I have to give up here. In fact, I do not really understand why my approach also works with DFT and not only analytically with steady FT. Consider the snippet: >>> p1 = gdft_polynomials.Polynomial([1]) >>> p1.get_dft(3) array([ 1.+0.j, 1.+0.j, 1.+0.j]) >>> p2 = gdft_polynomials.Polynomial([0, 1]) >>> p2.get_dft(3) array([ 1.0+0.j , -0.5+0.8660254j, -0.5-0.8660254j]) >>> p2.get_dft(4) array([ 1.00000000e+00 +0.00000000e+00j, 6.12303177e-17 +1.00000000e+00j, -1.00000000e+00 +1.22460635e-16j, -1.83690953e-16 -1.00000000e+00j]) As one increases the number of padding zeros, one increases Fourier space resolution, without affecting result: >>> p3 = gdft_polynomials.Polynomial([0, 1, 0, 0, 0]) >>> print p2 * p2 Polynomial(real part = [ 1.85037171e-16 -7.40148683e-17 1.00000000e+00] imaginary part = [ 2.59052039e-16 7.40148683e-17 0.00000000e+00]) >>> print p2 * p3 Polynomial(real part = [ 1.66533454e-16 1.48029737e-16 1.00000000e+00 -7.40148683e-17 -4.44089210e-16 -3.70074342e-17] imaginary part = [ 9.25185854e-17 1.48029737e-16 2.96059473e-16 1.11022302e-16 -3.70074342e-16 -1.44497045e-16]) >>> It's a bit of mystery to me. Of course, one can argue, well, DFT is information maintaining, and thus one can "feel" that it should work, but this is only a gut feeling. >>> ?E.g. if you have a function >>> f: R^N -> R >>> x -> y=f(x) >>> where x = [x1,...,xN] >>> >>> and you want to compute the gradient g(x) of f(x), then you can compute >>> df(x)/dxn by propagating ?the following array of Taylor polynomials: >>> >>> x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., >>> UTPS([xN_0,0]), dtype=object) >>> y = f(x) >> >> So what is the result of applying f to some UTPS instance, is it a >> plain number, is an UTPS again? ?How do you calculate? 
>> >> Can one calculate the derivative of some function using your method at >> a certain point without knowledge of the analytical derivative? ?I >> guess that's the purpose. > Yes, that's the whole point: Obtaining (higher order) derivatives of > functions at machine precision for which no symbolic representation is > readily available. > That includes computer codes with recursions (e.g. for loops) that are > a no-go for symbolic differentiation. Supposedly (I've never done > that) you can even differentiate Monte Carlo simulations in that way. http://en.wikipedia.org/wiki/Automatic_differentiation: "Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems." Yeah. Note at this point, that there is no chance for your project to be integrated in scipy, because you maybe HAVE TO PUBLISH UNDER GPL/CPAL (the ADOL-C is licensed GPL or CPAL). I cannot find CPL on www.opensource.org, but I guess it has been renamed to CPAL? Anyway, CPAL looks long enough to be GPL style ;-). I also published my projects under GPL first, and switched now to MIT, because Python, numpy, scipy, matplotlib, ... are published under BSD kind too, and in fact I like MIT/BSD more. Please check if your aren't violating GPL style licenses with publishing under BSD style. >>> E.g. if you have a function >>> f: R^N -> R >>> x -> y=f(x) >>> where x = [x1,...,xN] >>> >>> and you want to compute the gradient g(x) of f(x), then you can compute >>> df(x)/dxn by propagating the following array of Taylor polynomials: >>> >>> x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., >>> UTPS([xN_0,0]), dtype=object) >>> y = f(x) But doesn't the call f(x) with x.shape = (N,) result in an array too? 
But you want a scalar number? >>> if you want to have the complete gradient, you will have to repeat N >>> times. Each time for the same zero'th coefficients [x1,...,xN]. >>> >>> Using the vecorized version, you would do only one propagation >>> x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0, >>> 0,...,1,...0]), ..., UTPS([xN_0,0,....,1]), dtype=object) >>> y = f(x) >>> >>> i.e. you save the overhead of calling the same function N times. >> >> Ok, I understand. ?Today it's too late, I will reason tomorrow about it. I think I grasped the idea. But the thing is really tricky. I thought: So UTPS is not the thing you implemented, but you implemented rather the complete array. Right? But it's maybe wrong: UTPS([x1_0, 1, 0, ..., 0]) is with D = 1 and P = N (f: R^N -> R). I.e., P = N polynomials of degree 1, for calculating the first-order derivative? That's why your question (1) from Feb 27: What to hand over? I would say, make it possible to hand over an (P, N) ndarray. It will increase impact of your module (and graspableness) dramatically. And you have an indication how to interpret the array handed over without additional init args to UTPS(). I think the nominal value is always stored in x_{i, 0}, am I wrong? I'm not shure what to use as initial UTPSs. Can you make it possible that one doesn't have to think about that? What would be great, if I have a target function f(a, b, c, ...) stored, to hand over instead of ordinary numbers objects from your package, and everything works out such that I end up in the result with an object where both the nominal value is stored as also the gradient. Is that feasible? You could also monkey-patch numpy.sin etc. by a replacement calling the original numpy.sin with the nominal values but also doing the ttp job. >> I guess it's not overhead. ?The number of calculations should be in >> equilibrium at very low D, am I wrong? ?And you win to not have to >> compile a C library but only native text Python code. 
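For readers who want to try the DFT route Friedrich describes, the underlying identity — multiplying polynomials is convolving their coefficients, and F(conv(x, y)) = F(x)·F(y) once you pad enough — can be checked with plain numpy.fft. A sketch with a hypothetical helper name, not the gdft_polynomials API:

```python
import numpy as np

def poly_mul_dft(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) via
    the DFT: zero-pad to the full product length so the pointwise product of
    the transforms corresponds to the linear (not circular) convolution."""
    n = len(a) + len(b) - 1            # the product has degree len(a)+len(b)-2
    fa = np.fft.fft(a, n)              # fft(x, n) zero-pads x to length n
    fb = np.fft.fft(b, n)
    return np.fft.ifft(fa * fb).real   # real inputs -> real coefficients

# t * t = t^2, and (1 + t)^2 = 1 + 2t + t^2
print(poly_mul_dft([0, 1], [0, 1]))    # ≈ [0, 0, 1]
print(poly_mul_dft([1, 1], [1, 1]))    # ≈ [1, 2, 1]
```

Padding beyond n only samples the same transform more finely, which is consistent with Friedrich's observation that extra zeros leave the product unchanged; truncating the result at t^{D-1} is a separate step, and that truncation is what makes t·t^{D-1} vanish in the truncated ring.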
> > Well, I'm really no expert on the DFT. But doesn't the ?DFT compute on > the complex numbers? you'll have extra overhead (let's say factor >= > 2?) > And as far as I can tell, ?you do the computations on padded arrays > which possibly introduces cache misses (maybe another factor 2?) What are "cache misses"? > Isn't the advantage of the DFT not that you can use the FFT which > would reduce the runtime from O(D^2) to O(D log(D))? > I'm pretty sure that only pays off for D larger than 10. Your algorithm stays at O(D^2) as you do the convolution by hand, no? >>?E.g., your >> optimisation package is quite interesting for me, but I'm using >> Windows as my main system, so it will be painful to compile. ?And the >> code is more straightforward, more concise, easier to maintain and >> easier to understand, ... :-) ?I really do not want to diminish your >> programming skills, please do not misunderstand! ?I only mean the >> subject. > The project uses scons which is available for windows as binaries. > I haven't tried it myself but I'm confident that it's a 1 minutes job > on windows. The optimisation package or utp? I want to give utp a try. > I have implemented some of the algorithms just as you explained in > another package > (http://github.com/b45ch1/algopy/blob/master/algopy/utp/utps.py). > But I don't think the code looks easier to maintain than the C code > and it's also slower. Can you explain which repo I should clone at best? What are the differences between algopy, taylorpoly and pyadolc? I'm getting a bit confused slowly with all the mails in this thread, and also the subject isn't that easy ... 
http://en.wikipedia.org/wiki/Automatic_differentiation also refers to TTP only in a marginal note :-( Friedrich From sebastian.walter at gmail.com Sun Feb 28 17:52:50 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sun, 28 Feb 2010 23:52:50 +0100 Subject: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 9:06 PM, Friedrich Romstedt wrote: > 2010/2/28 Sebastian Walter : >>> I think I can use that to make my upy accept arbitrary functions, but >>> how do you apply sin() to a TTP? >> >> perform truncated Taylor expansion of ?[y]_D = sin([x]_D), i.e. >> y_d = d^d/dt^d ?sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0} >> to obtain an explicit algorithm. >> >>> >>> One more question: You said, t is an "external paramter". ?I, and >>> maybe not only me, interpreted this as a complicated name for >>> "variable". ?So I assumed it will be a parameter to some method of the >>> TTP. ?But it isn't? ?It's just the way to define the ring? > > I guess you overlooked this question? thought this is a rhetorical question. Tbh I don't know what the standard name for such a "formal variable" is. > >>> You could >>> define it the same in Fourier space, except that you have to make the >>> array large enough from the beginning? ?Why not doing that, and >>> saying, your computation relies on the Fourier transform of the >>> representation? ?Can this give insight why TTPs are a ring and why >>> they have zero divisors? >> it has zero divisors because for instance multiplication of the two >> polynomials t*t^{D-1} >> truncated at t^{D-1} yields is zero. > > Yes, but I wanted to have a look from Fourier space of view. ?Because > there everything is just a multiplication, and one does not have to > perform the convolution in mind. ?I have to give up here. 
?In fact, I > do not really understand why my approach also works with DFT and not > only analytically with steady FT. ?Consider the snippet: > >>>> p1 = gdft_polynomials.Polynomial([1]) >>>> p1.get_dft(3) > array([ 1.+0.j, ?1.+0.j, ?1.+0.j]) >>>> p2 = gdft_polynomials.Polynomial([0, 1]) >>>> p2.get_dft(3) > array([ 1.0+0.j ? ? ? , -0.5+0.8660254j, -0.5-0.8660254j]) >>>> p2.get_dft(4) > array([ ?1.00000000e+00 +0.00000000e+00j, > ? ? ? ? 6.12303177e-17 +1.00000000e+00j, > ? ? ? ?-1.00000000e+00 +1.22460635e-16j, ?-1.83690953e-16 -1.00000000e+00j]) > > As one increases the number of padding zeros, one increases Fourier > space resolution, without affecting result: > >>>> p3 = gdft_polynomials.Polynomial([0, 1, 0, 0, 0]) >>>> print p2 * p2 > Polynomial(real part = > [ ?1.85037171e-16 ?-7.40148683e-17 ? 1.00000000e+00] > imaginary part = > [ ?2.59052039e-16 ? 7.40148683e-17 ? 0.00000000e+00]) >>>> print p2 * p3 > Polynomial(real part = > [ ?1.66533454e-16 ? 1.48029737e-16 ? 1.00000000e+00 ?-7.40148683e-17 > ?-4.44089210e-16 ?-3.70074342e-17] > imaginary part = > [ ?9.25185854e-17 ? 1.48029737e-16 ? 2.96059473e-16 ? 1.11022302e-16 > ?-3.70074342e-16 ?-1.44497045e-16]) >>>> > > It's a bit of mystery to me. ?Of course, one can argue, well, DFT is > information maintaining, and thus one can "feel" that it should work, > but this is only a gut feeling. I'm of no help here, I'm not familiar enough with the DFT. All I know is that F( conv(x,y)) = F(x) * F(y) and that one can speed up the convolution in that way. And most operations on truncated Taylor polynomials result in algorithms that contain convolutions. > >>>> ?E.g. 
if you have a function >>>> f: R^N -> R >>>> x -> y=f(x) >>>> where x = [x1,...,xN] >>>> >>>> and you want to compute the gradient g(x) of f(x), then you can compute >>>> df(x)/dxn by propagating ?the following array of Taylor polynomials: >>>> >>>> x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., >>>> UTPS([xN_0,0]), dtype=object) >>>> y = f(x) >>> >>> So what is the result of applying f to some UTPS instance, is it a >>> plain number, is an UTPS again? ?How do you calculate? >>> >>> Can one calculate the derivative of some function using your method at >>> a certain point without knowledge of the analytical derivative? ?I >>> guess that's the purpose. >> Yes, that's the whole point: Obtaining (higher order) derivatives of >> functions at machine precision for which no symbolic representation is >> readily available. >> That includes computer codes with recursions (e.g. for loops) that are >> a no-go for symbolic differentiation. Supposedly (I've never done >> that) you can even differentiate Monte Carlo simulations in that way. > > http://en.wikipedia.org/wiki/Automatic_differentiation: > "Both classical methods have problems with calculating higher > derivatives, where the complexity and errors increase. Finally, both > classical methods are slow at computing the partial derivatives of a > function with respect to many inputs, as is needed for gradient-based > optimization algorithms. Automatic differentiation solves all of these > problems." > > Yeah. > > Note at this point, that there is no chance for your project to be > integrated in scipy, because you maybe HAVE TO PUBLISH UNDER GPL/CPAL > (the ADOL-C is licensed GPL or CPAL). ?I cannot find CPL on > www.opensource.org, but I guess it has been renamed to CPAL? ?Anyway, > CPAL looks long enough to be GPL style ;-). ?I also published my > projects under GPL first, and switched now to MIT, because Python, > numpy, scipy, matplotlib, ... 
are published under BSD kind too, and in > fact I like MIT/BSD more. ?Please check if your aren't violating GPL > style licenses with publishing under BSD style. AFAIK the CPL is basically the Eclipse Public License ( http://en.wikipedia.org/wiki/Eclipse_Public_License) which explicitly allows software under another licence to link to CPL code. The Python wrapper of ADOL-C is only linking to ADOL-C thus it can be BSD licensed. How is this related to the Taylor polynomials? > >>>> ?E.g. if you have a function >>>> f: R^N -> R >>>> x -> y=f(x) >>>> where x = [x1,...,xN] >>>> >>>> and you want to compute the gradient g(x) of f(x), then you can compute >>>> df(x)/dxn by propagating ?the following array of Taylor polynomials: >>>> >>>> x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ..., >>>> UTPS([xN_0,0]), dtype=object) >>>> y = f(x) > > But doesn't the call f(x) with x.shape = (N,) result in an array too? > But you want a scalar number? Since x[0] is a scalar and not a 0-dimensional ndarray I conjecture that a function mapping to the real numbers should be implemented to return a scalar, not an array. > >>>> if you want to have the complete gradient, you will have to repeat N >>>> times. Each time for the same zero'th coefficients [x1,...,xN]. >>>> >>>> Using the vecorized version, you would do only one propagation >>>> x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0, >>>> 0,...,1,...0]), ..., UTPS([xN_0,0,....,1]), dtype=object) >>>> y = f(x) >>>> >>>> i.e. you save the overhead of calling the same function N times. >>> >>> Ok, I understand. ?Today it's too late, I will reason tomorrow about it. > > I think I grasped the idea. ?But the thing is really tricky. > > I thought: > So UTPS is not the thing you implemented, but you implemented rather > the complete array. ?Right? I don't get your question. > But it's maybe wrong: > > UTPS([x1_0, 1, 0, ..., 0]) is with D = 1 and P = N (f: R^N -> R). 
> I.e., P = N polynomials of degree 1, for calculating the first-order > derivative? ?That's why your question (1) from Feb 27: What to hand > over? ?I would say, make it possible to hand over an (P, N) ndarray. > It will increase impact of your module (and graspableness) > dramatically. ?And you have an indication how to interpret the array > handed over without additional init args to UTPS(). Well, there is a problem with the (P,D) approach. What if there is an comparison operator, e.g. [x]_D > 0 in your code? It is defined as x_0 > 0. Now, if a single UTPS contains P zero coefficients x_{10},...,x_{P0} then at this point the computer program would have to branch. Hence there must be only one x_0 in an UTPS instance. If it wasn't for this branching I'd implemented it as (P,D) array. I thought about providing the possibility to the user to use an (P,D) array as input. However, this would mean to hide crucial information about the inner workings of the algorithms from the users. This is almost always a very bad idea. Also, it would violate the "only one way to do it" principle of Python. As Einstein said: Make things as simple as possible, but not any simpler ;) But it's good that you point that out. I agree that it would be nice to be more elegant. > > I think the nominal value is always stored in x_{i, 0}, am I wrong? there is only one zero'th coefficient x_0 for all directions P. > > I'm not shure what to use as initial UTPSs. ?Can you make it possible > that one doesn't have to think about that? ?What would be great, if I > have a target function f(a, b, c, ...) stored, to hand over instead of > ordinary numbers objects from your package, and everything works out > such that I end up in the result with an object where both the nominal > value is stored as also the gradient. ?Is that feasible? ?You could > also monkey-patch numpy.sin etc. by a replacement calling the original > numpy.sin with the nominal values but also doing the ttp job. 
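What Friedrich asks for above — handing objects into an ordinary stored target function and getting back both the nominal value and the derivative — can be sketched with a minimal first-order forward-mode class. The names are hypothetical and this is not the taylorpoly/algopy interface; it keeps only D = 2 coefficients (value plus one directional derivative):

```python
import numpy as np

class UTPS1:
    """Minimal first-order truncated Taylor polynomial x0 + x1*t (D = 2)."""
    def __init__(self, x0, x1=0.0):
        self.x0, self.x1 = float(x0), float(x1)
    def _coerce(self, other):
        return other if isinstance(other, UTPS1) else UTPS1(other)
    def __add__(self, other):
        other = self._coerce(other)
        return UTPS1(self.x0 + other.x0, self.x1 + other.x1)
    __radd__ = __add__
    def __mul__(self, other):
        # truncated product: the x1*y1*t^2 term is dropped
        other = self._coerce(other)
        return UTPS1(self.x0 * other.x0,
                     self.x0 * other.x1 + self.x1 * other.x0)
    __rmul__ = __mul__
    def sin(self):
        # chain rule: d/dt sin(x0 + x1*t) |_{t=0} = cos(x0) * x1
        return UTPS1(np.sin(self.x0), np.cos(self.x0) * self.x1)

def f(a, b):                    # an ordinary target function
    return a * b + np.sin(a)    # numpy ufuncs fall back to a.sin() for objects

a = UTPS1(2.0, 1.0)             # seed direction: differentiate w.r.t. a
b = UTPS1(3.0, 0.0)
y = f(a, b)
print(y.x0, y.x1)               # value f(2, 3) and df/da = b + cos(a) = 3 + cos(2)
```

The np.sin call works because numpy ufuncs applied to an unknown object call the object's method of the same name, so f never needs to know it is being differentiated.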
The taylorpoly module is supposed to be a basic building block for AD tools and other software that relies on local polynomial approximations (e.g. Taylor series intergrators for ODEs/DAEs). There are quite a lot of AD tools available in Python. I use pyadolc on a regular basis and it turned out to be very reliable so far. It is also quite fast (probably factor 100 faster than pure Python based AD tools). numpy.sin(x) is smart enough to call x.sin() if x is an object. > >>> I guess it's not overhead. ?The number of calculations should be in >>> equilibrium at very low D, am I wrong? ?And you win to not have to >>> compile a C library but only native text Python code. >> >> Well, I'm really no expert on the DFT. But doesn't the ?DFT compute on >> the complex numbers? you'll have extra overhead (let's say factor >= >> 2?) >> And as far as I can tell, ?you do the computations on padded arrays >> which possibly introduces cache misses (maybe another factor 2?) > > What are "cache misses"? now that I think of it, the size of the polynomials are much to small not to fit into the cache... A cache miss occurs when a required memory block for the next operation on the CPU is not in the cache and has to be transfered from the main memory. Getting from the main memory is a process with high latency and relatively low bandwidth. I.e. the cpu is waiting, i.e. your algorithm operates significantly below peak performance. But anyway, forget about it. In this case it should be irrelevant. > >> Isn't the advantage of the DFT not that you can use the FFT which >> would reduce the runtime from O(D^2) to O(D log(D))? >> I'm pretty sure that only pays off for D larger than 10. > > Your algorithm stays at O(D^2) as you do the convolution by hand, no? Yes, its O(D^2). > >>>?E.g., your >>> optimisation package is quite interesting for me, but I'm using >>> Windows as my main system, so it will be painful to compile. 
?And the >>> code is more straightforward, more concise, easier to maintain and >>> easier to understand, ... :-) ?I really do not want to diminish your >>> programming skills, please do not misunderstand! ?I only mean the >>> subject. >> The project uses scons which is available for windows as binaries. >> I haven't tried it myself but I'm confident that it's a 1 minutes job >> on windows. > > The optimisation package or utp? ?I want to give utp a try. to what do you refer to by "optimization package"? > >> I have implemented some of the algorithms just as you explained in >> another package >> (http://github.com/b45ch1/algopy/blob/master/algopy/utp/utps.py). >> But I don't think the code looks easier to maintain than the C code >> and it's also slower. > > Can you explain which repo I should clone at best? ?What are the > differences between algopy, taylorpoly and pyadolc? pyadolc is the wrapper of the very mature C++ software ADOL-C. It's quite fast and reliable. algopy is a research prototype with focus on higher order differentation of linear algebra functions (dot, inv, qr, eigh, trace, det, ...). This is done by propagation of univariate Taylor polynomials with numpy.arrays as coefficients. taylorpoly is supposed to provide building blocks. E.g. I plan to use these algorithms also in algopy. I hope that the software I write is deemed valuable enough to be included into scipy or preferably numpy. > > I'm getting a bit confused slowly with all the mails in this thread, > and also the subject isn't that easy ... > http://en.wikipedia.org/wiki/Automatic_differentiation also refers to > TTP only in a marginal note :-( One could/should improve the wikipedia article, I guess. 
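For readers following the algebra in this thread: the truncated product [z]_D = [x]_D·[y]_D underlying all of these operations is just a convolution cut off at degree D-1, and the cut-off is exactly what produces the zero divisors Sebastian mentions. A direct O(D^2) sketch (hypothetical helper name):

```python
import numpy as np

def ttp_mul(x, y):
    """Truncated Taylor product: z_d = sum_{k=0}^{d} x_k * y_{d-k} for d < D.
    x, y are length-D coefficient arrays with x[d] the coefficient of t^d."""
    D = len(x)
    z = np.zeros(D)
    for d in range(D):
        z[d] = np.dot(x[:d + 1], y[d::-1])   # y[d::-1] = y_d, y_{d-1}, ..., y_0
    return z

D = 4
t1 = np.zeros(D); t1[1] = 1.0                # the polynomial t
tD1 = np.zeros(D); tD1[D - 1] = 1.0          # the polynomial t^(D-1)
print(ttp_mul(t1, tD1))                      # all zeros: t * t^(D-1) == 0 truncated
```

The only nonzero coefficient of t·t^(D-1) would sit at degree D, which is dropped, so two nonzero polynomials multiply to zero in the ring.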
> > Friedrich > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From geometrian at gmail.com Sun Feb 28 19:53:38 2010 From: geometrian at gmail.com (Ian Mallett) Date: Sun, 28 Feb 2010 16:53:38 -0800 Subject: [Numpy-discussion] Iterative Matrix Multiplication Message-ID: Hi, I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every single one of the vec3 sublists, I am currently applying transformations. I need to optimize this with numpy. To get proper results, as far as I can tell, the vec3 lists must be expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] -> [[1,2,3,1],[4,5,6,1],[7,8,9,1],...]. Each of these needs to be multiplied by either a translation matrix, or a rotation and translation matrix. I don't really know how to approach the problem . . . Thanks, Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrickmarshwx at gmail.com Sun Feb 28 20:22:41 2010 From: patrickmarshwx at gmail.com (Patrick Marsh) Date: Sun, 28 Feb 2010 19:22:41 -0600 Subject: [Numpy-discussion] Building Numpy Windows Superpack In-Reply-To: <5b8d13221002280231q4c3eb8fm8b8f25b8bbc36962@mail.gmail.com> References: <5b8d13221002280231q4c3eb8fm8b8f25b8bbc36962@mail.gmail.com> Message-ID: Hi David, There really isn't much in the way of commands that I've used - I haven't gotten that far. So far, I've downloaded your binaries and then attempted to set up my numpy site.cfg file to use your binaries. I used the following as my site.cfg [atlas] library_dirs = d:\svn\BlasLapack\binaries\nosse,d:\svn\BlasLapack\binaries\sse2,d:\svn\BlasLapack\binaries\sse3 atlas_libs = lapack, f77blas, cblas, atlas However, when invoking 'setup.py config' it won't recognize a list of directories, even though the example site.cfg has an example with one. 
As soon as I don't use a list of paths and only use one of them, I can get setup.py bdist_wininst to run without error. I'm going to play around with the paver script and follow Ralf's instructions in the previous example and see what happens. Patrick On Sun, Feb 28, 2010 at 4:31 AM, David Cournapeau wrote: > Hi Patrick, > > On Sun, Feb 28, 2010 at 1:35 PM, Patrick Marsh > wrote: > > Greetings, > > I have been trying to build the numpy superpack on windows using the > > binaries posted by David. > > Could you post *exactly* the sequence of commands you executed ? > Especially at the beginning, building things can be frustrating > because the cause of failures can be hard to diagnose. > > FWIW, I've just built the nosse version with mingw on windows 7, there > was no issue at all, > > cheers, > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Patrick Marsh Ph.D. Student / NSSL Liaison to the HWT School of Meteorology / University of Oklahoma Cooperative Institute for Mesoscale Meteorological Studies National Severe Storms Laboratory http://www.patricktmarsh.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Feb 28 20:45:29 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 28 Feb 2010 20:45:29 -0500 Subject: [Numpy-discussion] Iterative Matrix Multiplication In-Reply-To: References: Message-ID: <1cd32cbb1002281745t5dda8192ncc92396cb1c61e37@mail.gmail.com> On Sun, Feb 28, 2010 at 7:53 PM, Ian Mallett wrote: > Hi, > > I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every > single one of the vec3 sublists, I am currently applying transformations.? I > need to optimize this with numpy. > > To get proper results, as far as I can tell, the vec3 lists must be > expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] 
-> > [[1,2,3,1],[4,5,6,1],[7,8,9,1],...].?? Each of these needs to be multiplied > by either a translation matrix, or a rotation and translation matrix. > > I don't really know how to approach the problem . . . I'm not sure what exactly you need but it sounds similar to "Row-wise dot product" in numpy-discussion Sept 2009 there are several threads on rotation matrix, which might be useful depending on the structure of your arrays Josef > > Thanks, > Ian > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From david at silveregg.co.jp Sun Feb 28 20:45:53 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 01 Mar 2010 10:45:53 +0900 Subject: [Numpy-discussion] Building Numpy Windows Superpack In-Reply-To: References: <5b8d13221002280231q4c3eb8fm8b8f25b8bbc36962@mail.gmail.com> Message-ID: <4B8B1C51.60604@silveregg.co.jp> Patrick Marsh wrote: > Hi David, > > There really isn't much in the way of commands that I've used - I > haven't gotten that far. So far, I've downloaded your binaries and then > attempted to set up my numpy site.cfg file to use your binaries. I used > the following as my site.cfg > > [atlas] > library_dirs > = d:\svn\BlasLapack\binaries\nosse,d:\svn\BlasLapack\binaries\sse2,d:\svn\BlasLapack\binaries\sse3 > atlas_libs = lapack, f77blas, cblas, atlas > > However, when invoking 'setup.py config' it won't recognize a list of > directories, even though the example site.cfg has an example with one. First, you should not put the three paths into library_dirs, it does not make much sense here (I am not sure what distutils does exactly in this case, whether it took the first path or the last one, but it will only take into account one). Then, I would advise to bypass site.cfg altogether, and just use env variables, as done in the paver script. 
E.g.: set LAPACK=d:\svn\BlasLapack\binaries\nosse python setup.py build -c mingw32 bdist_wininst because then you can easily control which one gets included from the command line. It is also much easier to script it this way. > I'm going to play around with the paver script and follow Ralf's > instructions in the previous example and see what happens. In general, you should use the paver script as a reference. It contains a lot of small best-practice things I have ended up after quite a while. cheers, David From pete at shinners.org Sun Feb 28 20:59:14 2010 From: pete at shinners.org (Peter Shinners) Date: Sun, 28 Feb 2010 17:59:14 -0800 Subject: [Numpy-discussion] take not respecting masked arrays? Message-ID: <4B8B1F72.5050709@shinners.org> I have a 2D masked array that has indices into a 1D array. I want to use some form of "take" to fetch the values into the 2D array. I've tried both numpy.take and numpy.ma.take, but they both return a new unmasked array. I can get it working by converting the take results into a masked array and applying the original mask. But the values that are masked are actually illegal indices. This means I need to switch the take mode away from "raise", but I actually want raise. I'm still new to numpy, so it's likely I've overlooked something. Is there a masked take? From charlesr.harris at gmail.com Sun Feb 28 21:31:05 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 28 Feb 2010 19:31:05 -0700 Subject: [Numpy-discussion] Iterative Matrix Multiplication In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 5:53 PM, Ian Mallett wrote: > Hi, > > I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every > single one of the vec3 sublists, I am currently applying transformations. I > need to optimize this with numpy. > > To get proper results, as far as I can tell, the vec3 lists must be > expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] -> > [[1,2,3,1],[4,5,6,1],[7,8,9,1],...]. 
Each of these needs to be > multiplied by either a translation matrix, or a rotation and translation > matrix. > > I don't really know how to approach the problem . . . > > As I understand it, you want *different* matrices applied to each vector? There are generalized ufuncs, which I haven't tried, but for small vectors there is a trick. Let's see... heck, gmane looks to be dead at the moment. Anyway, I posted the method on the list here a couple of years ago and I'll put up a link if I can find it when gmane comes back. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From geometrian at gmail.com Sun Feb 28 21:35:22 2010 From: geometrian at gmail.com (Ian Mallett) Date: Sun, 28 Feb 2010 18:35:22 -0800 Subject: [Numpy-discussion] Iterative Matrix Multiplication In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 6:31 PM, Charles R Harris wrote: > As I understand it, you want *different* matrices applied to each vector? Nope--I need the same matrix applied to each vector. Because 3D translation matrices must, if I understand correctly be 4x4, the vectors must first be changed to length 4 (adding a 1 for the last term). Then, the matrices would be applied. Then, the resulting n*4 array would be converted back into a n*3 array (simply by dropping the last term). Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Feb 28 21:54:40 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 28 Feb 2010 19:54:40 -0700 Subject: [Numpy-discussion] Iterative Matrix Multiplication In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 7:35 PM, Ian Mallett wrote: > On Sun, Feb 28, 2010 at 6:31 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> As I understand it, you want *different* matrices applied to each vector? > > Nope--I need the same matrix applied to each vector. 
> Because 3D translation matrices must, if I understand correctly, be 4x4,
> the vectors must first be changed to length 4 (adding a 1 for the last
> term). Then, the matrices would be applied. Then, the resulting n*4 array
> would be converted back into an n*3 array (simply by dropping the last
> term).

Why not just add a vector to get translation? There is no need to go to the
homogeneous form. Or you can just leave the vectors at length 4 and use a
slice to access the first three components. That way you can leave the ones
in place.

Chuck

From geometrian at gmail.com  Sun Feb 28 21:58:50 2010
From: geometrian at gmail.com (Ian Mallett)
Date: Sun, 28 Feb 2010 18:58:50 -0800
Subject: [Numpy-discussion] Iterative Matrix Multiplication
In-Reply-To:
References:
Message-ID:

On Sun, Feb 28, 2010 at 6:54 PM, Charles R Harris wrote:

> Why not just add a vector to get translation? There is no need to go to the
> homogeneous form. Or you can just leave the vectors at length 4 and use a
> slice to access the first three components. That way you can leave the ones
> in place.

Oh . . . duh :D

Excellent--and a 3D rotation matrix is 3x3--so the list can remain n*3.
Now the question is how to apply a rotation matrix to the array of vec3?

Ian

From charlesr.harris at gmail.com  Sun Feb 28 22:08:55 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 28 Feb 2010 20:08:55 -0700
Subject: [Numpy-discussion] Iterative Matrix Multiplication
In-Reply-To:
References:
Message-ID:

On Sun, Feb 28, 2010 at 7:58 PM, Ian Mallett wrote:

> On Sun, Feb 28, 2010 at 6:54 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>> Why not just add a vector to get translation? There is no need to go to the
>> homogeneous form.
Or you can just leave the vectors at length 4 and use a
>> slice to access the first three components. That way you can leave the ones
>> in place.
>
> Oh . . . duh :D
>
> Excellent--and a 3D rotation matrix is 3x3--so the list can remain n*3.
> Now the question is how to apply a rotation matrix to the array of vec3?

It looks like you want something like

    res = dot(vec, rot) + tran

You can avoid an extra copy being made by separating the parts

    res = dot(vec, rot)
    res += tran

where I've used arrays, not matrices. Note that the rotation matrix
multiplies every vector in the array.

Chuck

From pgmdevlist at gmail.com  Sun Feb 28 23:01:16 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sun, 28 Feb 2010 23:01:16 -0500
Subject: [Numpy-discussion] take not respecting masked arrays?
In-Reply-To: <4B8B1F72.5050709@shinners.org>
References: <4B8B1F72.5050709@shinners.org>
Message-ID:

On Feb 28, 2010, at 8:59 PM, Peter Shinners wrote:
> I have a 2D masked array that has indices into a 1D array. I want to use
> some form of "take" to fetch the values into the 2D array. I've tried
> both numpy.take and numpy.ma.take, but they both return a new unmasked
> array.

Mmh. Surprising. np.ma.take should return a masked array if it's given a
masked array as input. Can you pastebin the array that gives you trouble? I
need to investigate that.

As a temporary workaround, use np.take on first the _data, then the _mask,
and construct a new masked array from the two results.

From charlesr.harris at gmail.com  Sun Feb 28 23:12:13 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 28 Feb 2010 21:12:13 -0700
Subject: [Numpy-discussion] take not respecting masked arrays?
In-Reply-To:
References: <4B8B1F72.5050709@shinners.org>
Message-ID:

On Sun, Feb 28, 2010 at 9:01 PM, Pierre GM wrote:

> On Feb 28, 2010, at 8:59 PM, Peter Shinners wrote:
> > I have a 2D masked array that has indices into a 1D array. I want to use
> > some form of "take" to fetch the values into the 2D array. I've tried
> > both numpy.take and numpy.ma.take, but they both return a new unmasked
> > array.
>
> Mmh. Surprising. np.ma.take should return a masked array if it's given a
> masked array as input. Can you pastebin the array that gives you trouble? I
> need to investigate that.
> As a temporary workaround, use np.take on first the _data, then the _mask
> and construct a new masked array from the two results.

Ah, Pierre, now that you are here... ;) Can you take a look at the invalid
value warnings in the masked array tests and maybe fix them up by turning
off the warnings where appropriate? I'd do it myself except that I hesitate
to touch masked array stuff.

Chuck

From bergstrj at iro.umontreal.ca  Sun Feb 28 23:35:04 2010
From: bergstrj at iro.umontreal.ca (James Bergstra)
Date: Sun, 28 Feb 2010 23:35:04 -0500
Subject: [Numpy-discussion] how to work with numpy.int8 in c
Message-ID: <7f1eaee31002282035m1bd9dcc7p110accad7dbc1756@mail.gmail.com>

Could someone point me to documentation (or even numpy src) that shows how
to allocate a numpy.int8 in C, or check to see if a PyObject is a
numpy.int8?

Thanks,
James

--
http://www-etud.iro.umontreal.ca/~bergstrj
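[Archive editor's note] Pierre's temporary workaround in the "take not
respecting masked arrays?" thread above can be sketched as follows. The
arrays here are hypothetical, not taken from the original posts, and
instead of indexing `._data` directly the masked indices are first filled
with a valid dummy index -- a variant that also sidesteps the
illegal-index problem Peter raises -- before the mask is reapplied:

```python
import numpy as np

# 1D array of values, and a 2D masked array of indices into it.
# The masked entry (99) would be an illegal index if used directly.
data = np.array([10.0, 20.0, 30.0, 40.0])
idx = np.ma.masked_array([[0, 3], [99, 1]],
                         mask=[[False, False], [True, False]])

# Replace masked (possibly out-of-range) indices with a valid dummy index,
# so a plain take in mode="raise" is safe.
safe_idx = idx.filled(0)

# Plain take on the underlying integers, then reapply the original mask.
values = np.take(data, safe_idx)
result = np.ma.masked_array(values, mask=np.ma.getmaskarray(idx))
```

The values fetched through dummy indices are garbage, but they are masked
in `result`, so they never surface in later masked-array operations.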