From toddrjen at gmail.com Mon Jun 1 08:54:20 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 1 Jun 2015 14:54:20 +0200
Subject: [Numpy-discussion] Verify your sourceforge windows installer downloads
In-Reply-To:
References: <31698217454514411.075227sturla.molden-gmail.com@news.gmane.org>
Message-ID:

On Mon, Jun 1, 2015 at 3:43 AM, Ralf Gommers wrote:

>
> On Fri, May 29, 2015 at 7:28 PM, Benjamin Root wrote:
>
>> Speaking from the matplotlib project, our binaries are substantial due to
>> our suite of test images. Pypi worked with us on relaxing size constraints.
>> Also, I think the new cheese shop/warehouse server they are using scales
>> better, so size is not nearly the same concern as before.
>>
>> Ben Root
>> On May 29, 2015 1:43 AM, "Todd" wrote:
>>
>>> On May 28, 2015 7:06 PM, "David Cournapeau" wrote:
>>> > On Fri, May 29, 2015 at 2:00 AM, Andrew Collette <
>>> andrew.collette at gmail.com> wrote:
>>> >>
>>> >> In any case I've always been surprised that NumPy is distributed
>>> >> through SourceForge, which has been sketchy for years now. Could it
>>> >> simply be hosted on PyPI?
>>> >
>>> > They don't accept arbitrary binaries like SF does, and some of our
>>> installer formats can't be uploaded there.
>>> >
>>> > David
>>>
>>> Is that something that could be fixed?
>>
> For the current .exe installers that cannot be fixed, because neither pip
> nor easy_install can handle those. We actually have to ensure that we don't
> link from pypi directly to the sourceforge folder with the latest release,
> because then easy_install will follow the link, download the .exe and fail.
>
> Dmg's were another non-supported format, but we'll stop using those. So
> if/when it's SSE2 .exe installers only (made with bdist_wininst and no
> NSIS) then PyPi works. Size constraints are not an issue for Numpy I think.
>
> Ralf

What about adding some mechanism in pypi to flag that certain files should
not be downloaded with pip?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Jun 1 11:54:13 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 1 Jun 2015 11:54:13 -0400
Subject: [Numpy-discussion] checking S versus U dtype
Message-ID:

What's the best way to check whether a numpy array is string or bytes on
python3?

using char?

>>> A = np.asarray([[1, 0, 0], ['E', 1, 0], ['E', 'E', 1]], dtype='<U1')
>>> A
array([['1', '0', '0'],
       ['E', '1', '0'],
       ['E', 'E', '1']], dtype='<U1')
>>> A.dtype
dtype('<U1')
>>> A.dtype.char
'U'
>>> A.dtype.char == 'U'
True
>>> A.dtype.char == 'S'
False

>>> A.astype('<S1')
array([[b'1', b'0', b'0'],
       [b'E', b'1', b'0'],
       [b'E', b'E', b'1']], dtype='|S1')
>>> A.astype('<S1').dtype.char
'S'

background: I don't know why sometimes I got S and sometimes U on Python
3.4, and I want the code to work with both

>>> A == 'E'
array([[False, False, False],
       [ True, False, False],
       [ True,  True, False]], dtype=bool)
>>> A.astype('<S1') == 'E'
False
>>> A.astype('<S1') == b'E'
array([[False, False, False],
       [ True, False, False],
       [ True,  True, False]], dtype=bool)
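As an aside for anyone hitting the same question: dtype.kind gives the same
one-character code ('S' for bytes, 'U' for str) and can drive a small
normalizing helper. A minimal sketch, assuming ASCII-only content for the
bytes-to-str cast; the helper name as_text is made up for the example:

    import numpy as np

    def as_text(a):
        # Hypothetical helper: return a str ('U') version of a string/bytes
        # array so that comparisons like a == 'E' behave the same on
        # Python 3 whichever dtype showed up. Assumes ASCII content.
        if a.dtype.kind == 'S':      # bytes array
            return a.astype('U')     # cast to unicode
        return a                     # already 'U' (or not a string array)

    A = np.asarray([['1', '0', '0'], ['E', '1', '0']], dtype='S1')
    print(as_text(A) == 'E')         # elementwise result for 'S' and 'U' alike

dtype.kind and dtype.char agree for these two cases, so either attribute
works for the test itself.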
From ndarray at mac.com Wed Jun 3 15:38:19 2015
From: ndarray at mac.com (Alexander Belopolsky)
Date: Wed, 3 Jun 2015 15:38:19 -0400
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Sat, May 30, 2015 at 6:23 PM, Charles R Harris wrote:

> The problem arises when multiplying a stack of matrices times a vector.
> PEP465 defines this as appending a '1' to the dimensions of the vector and
> doing the defined stacked matrix multiply, then removing the last dimension
> from the result. Note that in the middle step we have a stack of matrices
> and after removing the last dimension we will still have a stack of
> matrices. What we want is a stack of vectors, but we can't have those with
> our conventions. This makes the result somewhat unexpected. How should we
> resolve this?

I think that before tackling the @ operator, we should implement the pure
dot of stacks of matrices and dot of stacks of vectors generalized ufuncs.
The first will have a 2d "core" and the second - 1d. Let's tentatively call
them matmul and vecmul. Hopefully matrix vector product can be reduced to
the vecmul, but I have not fully figured this out. If not - we may need a
third ufunc.

Once we have these ufuncs, we can decide what the @ operator should do in
terms of them and possibly some axes manipulation.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shoyer at gmail.com Wed Jun 3 16:25:57 2015
From: shoyer at gmail.com (Stephan Hoyer)
Date: Wed, 3 Jun 2015 13:25:57 -0700
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Sat, May 30, 2015 at 3:23 PM, Charles R Harris wrote:

> The problem arises when multiplying a stack of matrices times a vector.
> PEP465 defines this as appending a '1' to the dimensions of the vector and
> doing the defined stacked matrix multiply, then removing the last dimension
> from the result. Note that in the middle step we have a stack of matrices
> and after removing the last dimension we will still have a stack of
> matrices. What we want is a stack of vectors, but we can't have those with
> our conventions. This makes the result somewhat unexpected. How should we
> resolve this?

I'm afraid I don't quite understand the issue. Maybe a more specific
example of the shapes you have in mind would help? Here's my attempt.

Suppose we have two arrays:
a with shape (i, j, k)
b with shape (k,)

Following the logic you describe from PEP465, for a @ b we have shapes
transform like so:
(i, j, k) @ (k, 1) -> (i, j, 1) -> (i, j)

This makes sense to me as a stack of vectors, as long as you are imagining
the original stack of matrices as along the first dimension. Which I'll
note is the default behavior for the new np.stack
(https://github.com/numpy/numpy/pull/5605).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Wed Jun 3 17:08:58 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 3 Jun 2015 15:08:58 -0600
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Wed, Jun 3, 2015 at 2:25 PM, Stephan Hoyer wrote:

> On Sat, May 30, 2015 at 3:23 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>> The problem arises when multiplying a stack of matrices times a vector.
>> PEP465 defines this as appending a '1' to the dimensions of the vector and
>> doing the defined stacked matrix multiply, then removing the last dimension
>> from the result. Note that in the middle step we have a stack of matrices
>> and after removing the last dimension we will still have a stack of
>> matrices. What we want is a stack of vectors, but we can't have those with
>> our conventions. This makes the result somewhat unexpected. How should we
>> resolve this?
>
> I'm afraid I don't quite understand the issue. Maybe a more specific
> example of the shapes you have in mind would help? Here's my attempt.
>
> Suppose we have two arrays:
> a with shape (i, j, k)
> b with shape (k,)
>
> Following the logic you describe from PEP465, for a @ b we have shapes
> transform like so:
> (i, j, k) @ (k, 1) -> (i, j, 1) -> (i, j)
>
> This makes sense to me as a stack of vectors, as long as you are imagining
> the original stack of matrices as along the first dimension. Which I'll
> note is the default behavior for the new np.stack
> (https://github.com/numpy/numpy/pull/5605).

Yes, you end up with a stack of vectors, but matmul will interpret them as
a stack of matrices. I decided there is nothing to be done there and just
documented it as a potential gotcha. The other possibility would be to
prohibit or warn on stacked matrices and vectors for the `@` operator and
that might limit what some folks want to do.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Wed Jun 3 17:12:48 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 3 Jun 2015 15:12:48 -0600
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Wed, Jun 3, 2015 at 1:38 PM, Alexander Belopolsky wrote:

> On Sat, May 30, 2015 at 6:23 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>> The problem arises when multiplying a stack of matrices times a vector.
>> PEP465 defines this as appending a '1' to the dimensions of the vector and
>> doing the defined stacked matrix multiply, then removing the last dimension
>> from the result. Note that in the middle step we have a stack of matrices
>> and after removing the last dimension we will still have a stack of
>> matrices. What we want is a stack of vectors, but we can't have those with
>> our conventions. This makes the result somewhat unexpected. How should we
>> resolve this?
>
> I think that before tackling the @ operator, we should implement the pure
> dot of stacks of matrices and dot of stacks of vectors generalized ufuncs.
> The first will have a 2d "core" and the second - 1d. Let's tentatively
> call them matmul and vecmul. Hopefully matrix vector product can be
> reduced to the vecmul, but I have not fully figured this out. If not - we
> may need a third ufunc.

The `@` operator is done. I originally started with four ufuncs, mulvecvec,
mulmatvec, etc, but decided to wait on that until we merged the ufunc and
multiarray packages and did some other ufunc work. The matmul function can
certainly be upgraded in the future, but is as good as dot right now except
it doesn't handle object arrays.

> Once we have these ufuncs, we can decide what the @ operator should do in
> terms of them and possibly some axes manipulation.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
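For readers skimming the thread, the middle-step rule under discussion can
be spelled out on a released numpy with einsum (the @ operator itself needs
Python 3.5); the shapes here are arbitrary:

    import numpy as np

    a = np.ones((4, 3, 2))   # a stack of four 3x2 matrices
    v = np.ones(2)           # a single vector

    # PEP 465 rule: treat v as (2, 1), do the stacked matrix multiply,
    # then drop the appended axis:
    #   (4, 3, 2) @ (2, 1) -> (4, 3, 1) -> (4, 3)
    r = np.einsum('ijk,k->ij', a, v)
    print(r.shape)           # (4, 3): a "stack of vectors", but any further
                             # matmul will read it as a single 4x3 matrix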
From charlesr.harris at gmail.com Thu Jun 4 20:26:06 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 4 Jun 2015 18:26:06 -0600
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
Message-ID:

Hi All,

I've not strong feelings one way or the other on this proposed deprecation
for numpy 1.10 and would like some feedback from interested users.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Thu Jun 4 20:27:34 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 4 Jun 2015 18:27:34 -0600
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 6:26 PM, Charles R Harris <
charlesr.harris at gmail.com> wrote:

> Hi All,
>
> I've not strong feelings one way or the other on this proposed deprecation
> for numpy 1.10 and would like some feedback from interested users.

Umm, link is #4353 .

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ndarray at mac.com Thu Jun 4 20:50:56 2015
From: ndarray at mac.com (Alexander Belopolsky)
Date: Thu, 4 Jun 2015 20:50:56 -0400
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Wed, Jun 3, 2015 at 5:12 PM, Charles R Harris wrote:

> but is as good as dot right now except it doesn't handle object arrays.

This is a fairly low standard. :-(
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Thu Jun 4 20:57:35 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 4 Jun 2015 17:57:35 -0700
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

So specifically the question is -- if you have an array with five items,
and a Boolean array with three items, then currently you can use the
latter to index the former:

arr = np.arange(5)
mask = np.asarray([True, False, True])
arr[mask]  # returns array([0, 2])

This is justified by the rule that indexing with a Boolean array should be
the same as indexing with the same array that's been passed to
np.nonzero(). Empirically, though, this causes constant confusion and does
not seem very useful, so the question is whether we should deprecate it.

-n
On Jun 4, 2015 5:30 PM, "Charles R Harris" wrote:

>
> On Thu, Jun 4, 2015 at 6:26 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>> Hi All,
>>
>> I've not strong feelings one way or the other on this proposed
>> deprecation for numpy 1.10 and would like some feedback from interested
>> users.
>
> Umm, link is #4353 .
>
> Chuck
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
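A minimal reproduction of the behaviour Nathaniel describes, runnable on
current releases, showing the nonzero() equivalence directly:

    import numpy as np

    arr = np.arange(5)
    mask = np.asarray([True, False, True])   # three items against five

    # A boolean index currently behaves like indexing with mask.nonzero(),
    # so the length mismatch passes silently instead of raising:
    print(arr[mask])                  # array([0, 2])
    print(arr[mask.nonzero()])        # array([0, 2]) -- same thing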
From njs at pobox.com Thu Jun 4 21:04:35 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 4 Jun 2015 18:04:35 -0700
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
> So specifically the question is -- if you have an array with five items,
> and a Boolean array with three items, then currently you can use the
> latter to index the former:
>
> arr = np.arange(5)
> mask = np.asarray([True, False, True])
> arr[mask]  # returns array([0, 2])
>
> This is justified by the rule that indexing with a Boolean array should
> be the same as indexing with the same array that's been passed to
> np.nonzero(). Empirically, though, this causes constant confusion and
> does not seem very useful, so the question is whether we should
> deprecate it.

One place where the current behavior is particularly baffling and annoying
is when you have multiple boolean masks in the same indexing operation. I
think everyone would expect this to index separately on each axis ("outer
product indexing" style, like slices do), and that's really the only useful
interpretation, but that's not what it does...:

In [3]: a = np.arange(9).reshape((3, 3))

In [4]: a
Out[4]:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])

In [6]: a[np.asarray([True, False, True]), np.asarray([False, True, True])]
Out[6]: array([1, 8])

In [7]: a[np.asarray([True, False, True]), np.asarray([False, False, True])]
Out[7]: array([2, 8])

In [8]: a[np.asarray([True, False, True]), np.asarray([True, True, True])]
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
 in ()
----> 1 a[np.asarray([True, False, True]), np.asarray([True, True, True])]

IndexError: shape mismatch: indexing arrays could not be broadcast together
with shapes (2,) (3,)

-n

--
Nathaniel J. Smith -- http://vorpus.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.root at ou.edu Thu Jun 4 21:22:36 2015
From: ben.root at ou.edu (Benjamin Root)
Date: Thu, 4 Jun 2015 21:22:36 -0400
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 9:04 PM, Nathaniel Smith wrote:

> On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
>
> One place where the current behavior is particularly baffling and annoying
> is when you have multiple boolean masks in the same indexing operation. I
> think everyone would expect this to index separately on each axis ("outer
> product indexing" style, like slices do), and that's really the only useful
> interpretation, but that's not what it does...:

As a huge user of boolean indexes, I have never expected this to work in
any way, shape or form. I don't think it works in matlab (but someone
should probably check that), so you wouldn't have to worry about converts
missing a feature from there. I have always been told that boolean indexing
will produce a flattened array, and I wouldn't want to be dealing with
magic when the array does not match up right.

Now, what if the boolean array is broadcastable (dimension-wise, not
length-wise)? I do see some uses there. Modulo that, my vote is to
deprecate.

Ben Root
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Thu Jun 4 21:33:14 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 4 Jun 2015 19:33:14 -0600
Subject: [Numpy-discussion] matmul needs some clarification.
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 6:50 PM, Alexander Belopolsky wrote:

> On Wed, Jun 3, 2015 at 5:12 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>> but is as good as dot right now except it doesn't handle object arrays.
>
> This is a fairly low standard. :-(

Meaning as fast. I expect ufuncs to have more call overhead and they need
to use blas to be competitive for float et al.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Thu Jun 4 22:41:36 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 4 Jun 2015 19:41:36 -0700
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 6:22 PM, Benjamin Root wrote:
>
> On Thu, Jun 4, 2015 at 9:04 PM, Nathaniel Smith wrote:
>>
>> On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
>>
>> One place where the current behavior is particularly baffling and
>> annoying is when you have multiple boolean masks in the same indexing
>> operation. I think everyone would expect this to index separately on each
>> axis ("outer product indexing" style, like slices do), and that's really
>> the only useful interpretation, but that's not what it does...:
>
> As a huge user of boolean indexes, I have never expected this to work in
> any way, shape or form. I don't think it works in matlab (but someone
> should probably check that), so you wouldn't have to worry about converts
> missing a feature from there. I have always been told that boolean
> indexing will produce a flattened array, and I wouldn't want to be dealing
> with magic when the array does not match up right.

Note that there are two types of boolean indexing:

type 1: arr[mask] where mask is n-d (ideally the same shape as "arr", but
I think that it *is* broadcast if not). This always produces 1-d output.

type 2: arr[..., mask, ...], where mask is 1-d and only applies to the
given dimension.

My comment was about the second type. Are your comments about the second
type? The second type definitely does not produce a flattened array:

In [7]: a = np.arange(9).reshape(3, 3)

In [8]: a[np.asarray([True, False, True]), :]
Out[8]:
array([[0, 1, 2],
       [6, 7, 8]])

-n

--
Nathaniel J. Smith -- http://vorpus.org

From sebastian at sipsolutions.net Fri Jun 5 03:16:51 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 05 Jun 2015 09:16:51 +0200
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID: <1433488611.23793.2.camel@sipsolutions.net>

On Do, 2015-06-04 at 18:04 -0700, Nathaniel Smith wrote:
> On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
> > So specifically the question is -- if you have an array with five
> > items, and a Boolean array with three items, then currently you can
> > use the latter to index the former:
> >
> > arr = np.arange(5)
> > mask = np.asarray([True, False, True])
> > arr[mask]  # returns array([0, 2])
> >
> > This is justified by the rule that indexing with a Boolean array
> > should be the same as indexing with the same array that's been passed
> > to np.nonzero(). Empirically, though, this causes constant confusion
> > and does not seem very useful, so the question is whether we should
> > deprecate it.
>
> One place where the current behavior is particularly baffling and
> annoying is when you have multiple boolean masks in the same indexing
> operation. I think everyone would expect this to index separately on
> each axis ("outer product indexing" style, like slices do), and that's
> really the only useful interpretation, but that's not what it does...:

This is not being deprecated in there for the moment, it is a different
discussion. Though maybe we can improve the error message to mention
that the array was originally boolean; that has always been bugging me
a bit (it used to mention it for some cases, but not anymore).

- Sebastian

> In [3]: a = np.arange(9).reshape((3, 3))
>
> In [4]: a
> Out[4]:
> array([[0, 1, 2],
>        [3, 4, 5],
>        [6, 7, 8]])
>
> In [6]: a[np.asarray([True, False, True]), np.asarray([False, True,
> True])]
> Out[6]: array([1, 8])
>
> In [7]: a[np.asarray([True, False, True]), np.asarray([False, False,
> True])]
> Out[7]: array([2, 8])
>
> In [8]: a[np.asarray([True, False, True]), np.asarray([True, True,
> True])]
> ---------------------------------------------------------------------------
> IndexError                                Traceback (most recent call
> last)
>  in ()
> ----> 1 a[np.asarray([True, False, True]), np.asarray([True, True,
> True])]
>
> IndexError: shape mismatch: indexing arrays could not be broadcast
> together with shapes (2,) (3,)
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:
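Nathaniel's two cases side by side, with arbitrary shapes, for anyone
skimming the thread:

    import numpy as np

    a = np.arange(9).reshape(3, 3)

    # Type 1: mask has the array's full shape -> result is flattened to 1-d
    full_mask = a > 4
    print(a[full_mask])                   # array([5, 6, 7, 8])

    # Type 2: 1-d mask applied to a single axis -> that axis is filtered,
    # and the result keeps its other dimensions
    row_mask = np.array([True, False, True])
    print(a[row_mask, :])                 # array([[0, 1, 2], [6, 7, 8]])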
From pnavarre at gmail.com Fri Jun 5 03:17:13 2015
From: pnavarre at gmail.com (Pablo)
Date: Fri, 05 Jun 2015 15:17:13 +0800
Subject: [Numpy-discussion] variable end border in arrays
Message-ID: <55714CF9.1020802@gmail.com>

Hi,
If I want to remove 1 element at the beginning and the end of a numpy
array "x", we do:

x[1:-1]

Now, if we have a border variable, and borders are allowed to be zero
(which means no border), numpy syntax is inconvenient. For example if
border=numpy.asarray([1,0]) and we try

x[border[0]:-border[1]]

it will produce an empty array, because border[1]==0 is no longer
interpreted as an offset from the end of the array.
Is it possible to solve this without if/else's? (I work with images and
tensors, so if/else's have too many possible combinations)

Thanks,
Pablo

From sebastian at sipsolutions.net Fri Jun 5 03:55:16 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 05 Jun 2015 09:55:16 +0200
Subject: [Numpy-discussion] variable end border in arrays
In-Reply-To: <55714CF9.1020802@gmail.com>
References: <55714CF9.1020802@gmail.com>
Message-ID: <1433490916.23793.4.camel@sipsolutions.net>

On Fr, 2015-06-05 at 15:17 +0800, Pablo wrote:
> Hi,
> If I want to remove 1 element at the beginning and the end of a numpy
> array "x", we do:
>
> x[1:-1]
>
> Now, if we have a border variable, and borders are allowed to be zero
> (which means no border), numpy syntax is inconvenient. For example if
> border=numpy.asarray([1,0]) and we try
>
> x[border[0]:-border[1]]
>
> it will produce an empty array, because border[1]==0 is no longer
> interpreted as an offset from the end of the array.
> Is it possible to solve this without if/else's? (I work with images and
> tensors, so if/else's have too many possible combinations)

Yes and no. You could do if/else and use a None. Or just use the positive
index: x[border[0]:x.shape[0] - border[1]].

- Sebastian

> Thanks,
> Pablo
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:
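Sebastian's positive-index suggestion generalizes to any number of axes
without branching. A small sketch; the helper name trim and the
(before, after) border layout are invented for the example:

    import numpy as np

    def trim(x, borders):
        # borders: one (before, after) pair per axis; zeros mean "no border".
        # Equivalent to x[b0 : x.shape[0]-a0, b1 : x.shape[1]-a1, ...]
        index = tuple(slice(before, size - after)
                      for (before, after), size in zip(borders, x.shape))
        return x[index]

    img = np.arange(20).reshape(4, 5)
    print(trim(img, [(1, 0), (0, 2)]).shape)   # (3, 3): drops the first row
                                               # and the last two columns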
From josef.pktd at gmail.com Fri Jun 5 08:36:02 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 5 Jun 2015 08:36:02 -0400
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To: <1433488611.23793.2.camel@sipsolutions.net>
References: <1433488611.23793.2.camel@sipsolutions.net>
Message-ID:

On Fri, Jun 5, 2015 at 3:16 AM, Sebastian Berg wrote:

> On Do, 2015-06-04 at 18:04 -0700, Nathaniel Smith wrote:
> > On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
> > > So specifically the question is -- if you have an array with five
> > > items, and a Boolean array with three items, then currently you can
> > > use the latter to index the former:
> > >
> > > arr = np.arange(5)
> > > mask = np.asarray([True, False, True])
> > > arr[mask]  # returns array([0, 2])
> > >
> > > This is justified by the rule that indexing with a Boolean array
> > > should be the same as indexing with the same array that's been passed
> > > to np.nonzero(). Empirically, though, this causes constant confusion
> > > and does not seem very useful, so the question is whether we should
> > > deprecate it.
> >
> > One place where the current behavior is particularly baffling and
> > annoying is when you have multiple boolean masks in the same indexing
> > operation. I think everyone would expect this to index separately on
> > each axis ("outer product indexing" style, like slices do), and that's
> > really the only useful interpretation, but that's not what it does...:
>
> This is not being deprecated in there for the moment, it is a different
> discussion. Though maybe we can improve the error message to mention
> that the array was originally boolean; that has always been bugging me
> a bit (it used to mention it for some cases, but not anymore).
>
> - Sebastian
>
> > In [3]: a = np.arange(9).reshape((3, 3))
> >
> > In [4]: a
> > Out[4]:
> > array([[0, 1, 2],
> >        [3, 4, 5],
> >        [6, 7, 8]])
> >
> > In [6]: a[np.asarray([True, False, True]), np.asarray([False, True,
> > True])]
> > Out[6]: array([1, 8])
> >
> > In [7]: a[np.asarray([True, False, True]), np.asarray([False, False,
> > True])]
> > Out[7]: array([2, 8])
> >
> > In [8]: a[np.asarray([True, False, True]), np.asarray([True, True,
> > True])]
> > ---------------------------------------------------------------------------
> > IndexError                                Traceback (most recent call
> > last)
> >  in ()
> > ----> 1 a[np.asarray([True, False, True]), np.asarray([True, True,
> > True])]
> >
> > IndexError: shape mismatch: indexing arrays could not be broadcast
> > together with shapes (2,) (3,)
> >
> > -n
> >
> > --
> > Nathaniel J. Smith -- http://vorpus.org
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

What is actually being deprecated?
It looks like there are different examples.

wrong length: Nathaniel's first example above, where the mask is not
broadcastable to the original array because the mask is longer or shorter
than shape[axis].
I also wouldn't have expected this to work, although I use np.nonzero and
boolean mask indexing interchangeably, I would assume we need the correct
length for the mask.

The second case, where the boolean mask has an extra dimension of length
one, or several boolean arrays, might need more checking.
I'm pretty sure I used various versions, assuming they are a feature, and
when I see arrays, I usually don't assume "outer product indexing" (that
might lead to a similar discussion as the recent fancy versus orthogonal
indexing)

Josef
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sebastian at sipsolutions.net Fri Jun 5 11:45:27 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 05 Jun 2015 17:45:27 +0200
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References: <1433488611.23793.2.camel@sipsolutions.net>
Message-ID: <1433519127.23793.13.camel@sipsolutions.net>

On Fr, 2015-06-05 at 08:36 -0400, josef.pktd at gmail.com wrote:
>
> What is actually being deprecated?
> It looks like there are different examples.
>
> wrong length: Nathaniel's first example above, where the mask is not
> broadcastable to the original array because the mask is longer or
> shorter than shape[axis].
> I also wouldn't have expected this to work, although I use np.nonzero
> and boolean mask indexing interchangeably, I would assume we need the
> correct length for the mask.

For the moment we are only talking about wrong length (along a given
dimension). Not about wrong number of dimensions or multiple boolean
indices.
As a side note: I don't think the single boolean index behaviour needs
change, it is ok. Yes, it is not quite broadcasting, but there is no help
considering transparent multidimensional indexing.
As for multiple booleans, I think it is more part of the "outer" indexing
discussion, which is interesting but not here :).

- Sebastian

> The second case, where the boolean mask has an extra dimension of
> length one, or several boolean arrays, might need more checking.
> I'm pretty sure I used various versions, assuming they are a feature,
> and when I see arrays, I usually don't assume "outer product
> indexing" (that might lead to a similar discussion as the recent
> fancy versus orthogonal indexing)
>
> Josef
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From archibald at astron.nl Fri Jun 5 11:50:04 2015
From: archibald at astron.nl (Anne Archibald)
Date: Fri, 05 Jun 2015 15:50:04 +0000
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To: <1433519127.23793.13.camel@sipsolutions.net>
References: <1433488611.23793.2.camel@sipsolutions.net>
	<1433519127.23793.13.camel@sipsolutions.net>
Message-ID:

On Fri, Jun 5, 2015 at 5:45 PM Sebastian Berg wrote:

> On Fr, 2015-06-05 at 08:36 -0400, josef.pktd at gmail.com wrote:
> >
> > What is actually being deprecated?
> > It looks like there are different examples.
> >
> > wrong length: Nathaniel's first example above, where the mask is not
> > broadcastable to the original array because the mask is longer or
> > shorter than shape[axis].
> > I also wouldn't have expected this to work, although I use np.nonzero
> > and boolean mask indexing interchangeably, I would assume we need the
> > correct length for the mask.
>
> For the moment we are only talking about wrong length (along a given
> dimension). Not about wrong number of dimensions or multiple boolean
> indices.

I am pro-deprecation then, definitely. I don't see a use case for padding
a wrong-shaped boolean array with Falses, and the padding has burned me in
the past.

It's not orthogonal to the wrong-number-of-dimensions issue, though,
because if your Boolean array has a dimension of length 1, broadcasting
says duplicate it along that axis to match the indexee, and wrong-length
says pad it with Falses. This ambiguity/pitfall disappears if the padding
never happens, and that kind of broadcasting is very useful.

Anne
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.root at ou.edu Fri Jun 5 11:50:17 2015
From: ben.root at ou.edu (Benjamin Root)
Date: Fri, 5 Jun 2015 11:50:17 -0400
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References:
Message-ID:

On Thu, Jun 4, 2015 at 10:41 PM, Nathaniel Smith wrote:

> My comment was about the second type. Are your comments about the second
> type? The second type definitely does not produce a flattened array:

I was talking about the second type in that I never even knew it existed.
My understanding of boolean indexing has always been that it flattens, so
the second type is a surprise to me.

Ben Root
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Fri Jun 5 12:57:10 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 5 Jun 2015 12:57:10 -0400
Subject: [Numpy-discussion] DEP: Deprecate boolean array indices with
	non-matching shape #4353
In-Reply-To:
References: <1433488611.23793.2.camel@sipsolutions.net>
	<1433519127.23793.13.camel@sipsolutions.net>
Message-ID:

On Fri, Jun 5, 2015 at 11:50 AM, Anne Archibald wrote:

> I am pro-deprecation then, definitely. I don't see a use case for padding
> a wrong-shaped boolean array with Falses, and the padding has burned me in
> the past.
>
> It's not orthogonal to the wrong-number-of-dimensions issue, though,
> because if your Boolean array has a dimension of length 1, broadcasting
> says duplicate it along that axis to match the indexee, and wrong-length
> says pad it with Falses. This ambiguity/pitfall disappears if the padding
> never happens, and that kind of broadcasting is very useful.

Good argument, now I understand why we only get a single column

>>> x = np.arange(4*5).reshape(4,5)
>>> mask = np.array([1,0,1,0,1], bool)

padding with False, this would also be deprecated AFAIU, and Anne pointed
out

>>> x[mask[:4][:,None]]
array([ 0, 10])
>>> x[mask[None,:]]
array([0, 2, 4])

masks can only be combined with slices, so no "fancy masking" allowed nor
defined (yet)

>>> x[mask[:4][:,None], mask[None,:]]
Traceback (most recent call last):
  File "", line 1, in
    x[mask[:4][:,None], mask[None,:]]
IndexError: too many indices for array

I'm using 1d masks quite often to select rows or columns, which seems to
work in more than two dimensions (Benjamin's surprise)

>>> x[:, mask]
array([[ 0,  2,  4],
       [ 5,  7,  9],
       [10, 12, 14],
       [15, 17, 19]])

>>> x[mask[:4][:,None] * mask[None,:]]
array([ 0,  2,  4, 10, 12, 14])
>>> x[:,:,None][mask[:4][:,None] * mask[None,:]]
array([[ 0],
       [ 2],
       [ 4],
       [10],
       [12],
       [14]])

Josef

> Anne
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaime.frio at gmail.com Mon Jun 8 18:11:47 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Mon, 8 Jun 2015 15:11:47 -0700
Subject: [Numpy-discussion] NumPy + Python 3.5 + Windows + VS2015
Message-ID:

I have just unsuccessfully tried to build numpy under Windows for Python
3.5, using the latest release candidate for Visual Studio 2015.

A very early failure with a:

RuntimeError: Broken toolchain: cannot link a simple C program

even though repeating the sequence of commands that led to the failure
manually seems to work.

Anyway, before diving deeper into this, has anyone tried this out already
and have some success or failure stories to share?

Thanks,

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From honi at brandeis.edu Mon Jun 8 21:54:18 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Mon, 8 Jun 2015 21:54:18 -0400
Subject: [Numpy-discussion] How to limit cross correlation window width in
	Numpy?
Message-ID:

I am learning numpy/scipy, coming from a MATLAB background. The xcorr
function in Matlab has an optional argument "maxlag" that limits the lag
range from -maxlag to maxlag. This is very useful if you are looking at the
cross-correlation between two very long time series but are only interested
in the correlation within a certain time range. The performance increases
are enormous considering that cross-correlation is incredibly expensive to
compute.

What is troubling me is that numpy.correlate does not have a maxlag
feature. This means that even if I only want to see correlations between
two time series with lags between -100 and +100 ms, for example, it will
still calculate the correlation for every lag between -20000 and +20000 ms
(which is the length of the time series). This (theoretically) gives a 200x
performance hit! Is it possible that I could contribute this feature?

I have introduced this question as a scipy issue
https://github.com/scipy/scipy/issues/4940 and on the scipy-dev list
(http://mail.scipy.org/pipermail/scipy-dev/2015-June/020757.html). It seems
the best place to start is with numpy.correlate, so that is what I am
requesting.

I have done a simple implementation
(https://gist.github.com/bringingheavendown/b4ce18aa007118e4e084) which
gives 50x speedup under my conditions
(https://github.com/scipy/scipy/issues/4940#issuecomment-110187847). This
is my first experience with contributing to open-source software, so any
pointers are appreciated.
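For readers who want the flavor of such a maxlag argument without following
the gist, here is a rough sketch (not Honi's implementation; it assumes
equal-length 1-d inputs and maxlag < len(x), with the lag convention
spelled out in the comments):

    import numpy as np

    def correlate_maxlag(x, y, maxlag):
        # Cross-correlation restricted to lags -maxlag..maxlag, where
        # out[lag] = sum_t x[t + lag] * y[t]. Hypothetical helper, not a
        # numpy API; O(n * maxlag) instead of O(n**2).
        n = len(x)
        lags = np.arange(-maxlag, maxlag + 1)
        out = np.empty(len(lags))
        for i, lag in enumerate(lags):
            if lag < 0:
                out[i] = np.dot(x[:n + lag], y[-lag:])
            else:
                out[i] = np.dot(x[lag:], y[:n - lag])
        return out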
From antony.lee at berkeley.edu Tue Jun 9 13:07:59 2015
From: antony.lee at berkeley.edu (Antony Lee)
Date: Tue, 9 Jun 2015 10:07:59 -0700
Subject: [Numpy-discussion] Backwards-incompatible improvements to
	numpy.random.RandomState
In-Reply-To:
References:
Message-ID:

2015-05-29 14:06 GMT-07:00 Antony Lee :

>
> A proof-of-concept implementation, still missing tests, is tracked as
>> #5911. It includes the patch proposed in #5158 as an example of how to
>> include an improved version of random.choice.
>
> Tests are in now (whether we should bundle in pickles of old versions to
> make sure they are still unpickled correctly and outputs of old random
> streams to make sure they are still reproduced is a good question,
> though). Comments welcome.

Kindly bumping the issue.

Antony
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jjhelmus at gmail.com Tue Jun 9 16:38:08 2015
From: jjhelmus at gmail.com (Jonathan Helmus)
Date: Tue, 09 Jun 2015 15:38:08 -0500
Subject: [Numpy-discussion] ANN: Py-ART v1.4.0 released
Message-ID: <55774EB0.3080705@gmail.com>

I am happy to announce the release of Py-ART version 1.4.0. Py-ART is an
open source Python module for reading, visualizing, correcting and
analysis of weather radar data.

Documentation : http://arm-doe.github.io/pyart/dev/index.html
GitHub : https://github.com/ARM-DOE/pyart
Pre-built conda binaries:
https://binstar.org/jjhelmus/pyart/files?version=1.4.0

Version 1.4.0 is the result of 4 months of work by 7 contributors. Thanks
to all contributors, especially those who have made their first
contribution to Py-ART.

Highlights of this release:

* Support for reading and writing MDV Grid files. (thanks to Anderson Gama)
* Support for reading GCPEX D3R files. (thanks to Steve Nesbitt)
* Support for reading NEXRAD Level 3 files.
* Optional loading of radar field data upon use rather than initial read.
* Significantly faster gridding method, "map_gates_to_grid".
* Improvements to the speed and bug fixes to the region based dealiasing
  algorithm.
* Textures of differential phase fields. (thanks to Scott Collis)
* Py-ART now can be used with Python 3.3 and 3.4

Cheers,

- Jonathan Helmus

From jaime.frio at gmail.com Tue Jun 9 19:12:10 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Tue, 9 Jun 2015 16:12:10 -0700
Subject: [Numpy-discussion] NumPy + Python 3.5 + Windows + VS2015
In-Reply-To:
References:
Message-ID:

On Mon, Jun 8, 2015 at 3:11 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> I have just unsuccessfully tried to build numpy under Windows for Python
> 3.5, using the latest release candidate for Visual Studio 2015.
>
> A very early failure with a:
>
> RuntimeError: Broken toolchain: cannot link a simple C program
>
> even though repeating the sequence of commands that led to the failure
> manually seems to work.
>
> Anyway, before diving deeper into this, has anyone tried this out already
> and have some success or failure stories to share?

I have finally managed to get this to compile. There are two places at
which things go wrong:

1. The call to check_long_double_representation in numpy/core/setup.py.
This tries to figure out the representation used by the compiler for long
double by compiling C code declaring a struct with a char array, a long
double, and another char array, initializing them to specific values, then
parsing the obj file byte by byte to detect the sequences in the first and
second char arrays. The sequences are there, but not in contiguous bytes;
for some reason the compiler is adding 3 bytes between each of the bytes in
the sequence. I bypassed this by hardcoding the long double representation
to IEEE_DOUBLE_LE.

2. The call to generate_libraries in numpy/random/setup.py. This is
supposed to compile and run a small C program to check if _WIN32 is defined
by the compiler, in which case the 'Advapi32' library is linked. Haven't
gone into the details, but that compile and run also fails, so the library
was never getting added. I simply unconditionally added the library to get
it working.

Once compiled there is something like 20 or 30 test failures, which I
haven't looked into in any detail. I was also getting a handful of
segfaults while running the tests, but it has stopped segfaulting now, even
though I have run the tests in a loop 100 times.

Not sure if we want to try to fix any of this for 1.10. It will probably be
the first release that people try to make work with Python 3.5 when the
final release comes out in September. But it is also hard to figure out how
many of these problems are caused by Python 3.5 itself, or by MSVC 2015,
which is still in RC phase.

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Tue Jun 9 19:48:19 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 9 Jun 2015 17:48:19 -0600
Subject: [Numpy-discussion] NumPy + Python 3.5 + Windows + VS2015
In-Reply-To:
References:
Message-ID:

On Tue, Jun 9, 2015 at 5:12 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Mon, Jun 8, 2015 at 3:11 PM, Jaime Fernández del Río <
> jaime.frio at gmail.com> wrote:
>
>> I have just unsuccessfully tried to build numpy under Windows for Python
>> 3.5, using the latest release candidate for Visual Studio 2015.
>>
>> A very early failure with a:
>>
>> RuntimeError: Broken toolchain: cannot link a simple C program
>>
>> even though repeating the sequence of commands that led to the failure
>> manually seems to work.
>>
>> Anyway, before diving deeper into this, has anyone tried this out already
>> and have some success or failure stories to share?
>
> I have finally managed to get this to compile. There are two places at
> which things go wrong:
>
> 1. The call to check_long_double_representation in numpy/core/setup.py.
> This tries to figure out the representation used by the compiler for long
> double by compiling C code declaring a struct with a char array, a long
> double, and another char array, initializing them to specific values, then
> parsing the obj file byte by byte to detect the sequences in the first and
> second char arrays. The sequences are there, but not in contiguous bytes;
> for some reason the compiler is adding 3 bytes between each of the bytes
> in the sequence. I bypassed this by hardcoding the long double
> representation to IEEE_DOUBLE_LE.
>
> 2. The call to generate_libraries in numpy/random/setup.py. This is
> supposed to compile and run a small C program to check if _WIN32 is
> defined by the compiler, in which case the 'Advapi32' library is linked.
> Haven't gone into the details, but that compile and run also fails, so the
> library was never getting added. I simply unconditionally added the
> library to get it working.
>
> Once compiled there is something like 20 or 30 test failures, which I
> haven't looked into in any detail. I was also getting a handful of
> segfaults while running the tests, but it has stopped segfaulting now,
> even though I have run the tests in a loop 100 times.
>
> Not sure if we want to try to fix any of this for 1.10. It will probably
> be the first release that people try to make work with Python 3.5 when the
> final release comes out in September. But it is also hard to figure out
> how many of these problems are caused by Python 3.5 itself, or by MSVC
> 2015, which is still in RC phase.

Thanks for looking into this. It is depressing that Windows is so difficult
to support. There might be some Microsoft pragmas/flags that will help.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov Wed Jun 10 13:24:25 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 10 Jun 2015 10:24:25 -0700
Subject: [Numpy-discussion] NumPy + Python 3.5 + Windows + VS2015
In-Reply-To:
References:
Message-ID:

On Tue, Jun 9, 2015 at 4:48 PM, Charles R Harris wrote:

> Thanks for looking into this. It is depressing that Windows is so
> difficult to support.

yes, thanks!

You might try posting on python-dev -- there is at least one person on that
list trying to help get Windows builds working better!

It seems to me that these are general compiler issues, not numpy-specific
ones -- though numpy clearly stresses the system!

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaime.frio at gmail.com Wed Jun 10 20:53:25 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Wed, 10 Jun 2015 17:53:25 -0700
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
Message-ID:

I'm in the midst of a Python 3.5 + MSVS 2015 compilation frenzy. Today it
was time for Open CV 3.0, where I found a nasty bug that I have eventually
tracked down to using a development version of NumPy, and Open CV 3.0
choking on relaxed strides, as it does a check that every stride is a
multiple of the itemsize.

I was thinking of submitting a patch to opencv to fix this, but was
wondering whether we have plans to eventually have relaxed strides out in
the wild in user releases, or is it just a testing tool for development?

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaime.frio at gmail.com Wed Jun 10 21:02:26 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Wed, 10 Jun 2015 18:02:26 -0700
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To:
References:
Message-ID:

On Wed, Jun 10, 2015 at 5:53 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> I'm in the midst of a Python 3.5 + MSVS 2015 compilation frenzy.
> Today it was time for Open CV 3.0, where I found a nasty bug that I have
> eventually tracked down to using a development version of NumPy, and Open
> CV 3.0 choking on relaxed strides, as it does a check that every stride is
> a multiple of the itemsize.
>
> I was thinking of submitting a patch to opencv to fix this, but was
> wondering whether we have plans to eventually have relaxed strides out in
> the wild in user releases, or is it just a testing tool for development?

I see that in the release notes of 1.9 we had the following:

- Relaxed stride checking will be the default in 1.10.0

Is this still the plan?

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Wed Jun 10 21:21:05 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 10 Jun 2015 18:21:05 -0700
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To:
References:
Message-ID:

On Wed, Jun 10, 2015 at 5:53 PM, Jaime Fernández del Río wrote:
> I'm in the midst of a Python 3.5 + MSVS 2015 compilation frenzy. Today it
> was time for Open CV 3.0, where I found a nasty bug that I have eventually
> tracked down to using a development version of NumPy, and Open CV 3.0
> choking on relaxed strides, as it does a check that every stride is a
> multiple of the itemsize.
>
> I was thinking of submitting a patch to opencv to fix this, but was
> wondering whether we have plans to eventually have relaxed strides out in
> the wild in user releases, or is it just a testing tool for development?

The ultimate goal is certainly to get it out into the wild, as not having
relaxed strides creates other weird bugs instead. (Mostly spurious copies
because of arrays being considered discontiguous when they actually were
contiguous all along, but also fun stuff like tiny irrelevant changes in
numpy breaking people's code because they are expecting an array with F
contiguity and numpy has started describing the output array as C
contiguity, when in fact it is both and the bug is entirely in the
arbitrary assignment of these flags.)

-n

--
Nathaniel J. Smith -- http://vorpus.org

From charlesr.harris at gmail.com Wed Jun 10 23:03:02 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 10 Jun 2015 21:03:02 -0600
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To:
References:
Message-ID:

On Wed, Jun 10, 2015 at 7:02 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Wed, Jun 10, 2015 at 5:53 PM, Jaime Fernández del Río <
> jaime.frio at gmail.com> wrote:
>
>> I'm in the midst of a Python 3.5 + MSVS 2015 compilation frenzy. Today it
>> was time for Open CV 3.0, where I found a nasty bug that I have eventually
>> tracked down to using a development version of NumPy, and Open CV 3.0
>> choking on relaxed strides, as it does a check that every stride is a
>> multiple of the itemsize.
>
> I see that in the release notes of 1.9 we had the following:
>
> - Relaxed stride checking will be the default in 1.10.0
>
> Is this still the plan?

Yes, but it won't be quite the same as the master branch.
Currently an unusual value for the stride (?) is used in order to smoke out
misuse, but that value will be more rational in the release.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sebastian at sipsolutions.net Thu Jun 11 05:39:03 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Thu, 11 Jun 2015 11:39:03 +0200
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To:
References:
Message-ID: <1434015543.1352.3.camel@sipsolutions.net>

On Mi, 2015-06-10 at 21:03 -0600, Charles R Harris wrote:
>
> > * Relaxed stride checking will be the default in 1.10.0
> > Is this still the plan?
>
> Yes, but it won't be quite the same as the master branch. Currently
> an unusual value for the stride (?) is used in order to smoke out
> misuse, but that value will be more rational in the release.

+1, it should not be as bad/common in practice once rolled out. That
said, I do not mind delaying things beyond 1.10, it might be better for
compatibility if someone gets a new numpy on top of oldish other
packages.
So I am good with planning to go ahead for the moment. But if anyone
complains, I would back down for 1.10 probably.

- Sebastian

> Chuck
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From jtaylor.debian at googlemail.com Thu Jun 11 05:44:57 2015
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Thu, 11 Jun 2015 11:44:57 +0200
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To: <1434015543.1352.3.camel@sipsolutions.net>
References: <1434015543.1352.3.camel@sipsolutions.net>
Message-ID:

On Thu, Jun 11, 2015 at 11:39 AM, Sebastian Berg wrote:
> On Mi, 2015-06-10 at 21:03 -0600, Charles R Harris wrote:
>>
>> > * Relaxed stride checking will be the default in 1.10.0
>> > Is this still the plan?
>>
>> Yes, but it won't be quite the same as the master branch. Currently
>> an unusual value for the stride (?) is used in order to smoke out
>> misuse, but that value will be more rational in the release.
>
> +1, it should not be as bad/common in practice once rolled out. That
> said, I do not mind delaying things beyond 1.10, it might be better for
> compatibility if someone gets a new numpy on top of oldish other
> packages.
> So I am good with planning to go ahead for the moment. But if anyone
> complains, I would back down for 1.10 probably.
>
> - Sebastian

With scipy.ndimage and also opencv broken, I think we will have to delay
it beyond 1.10, though we should have at least an alpha, maybe even a beta
with it enabled to induce some panic that hopefully will spur some fixes.
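For downstream projects in OpenCV's position, the robust pattern is to ask
numpy about contiguity via the flags rather than inspecting the strides
directly; the flag stays valid whatever value numpy reports for the strides
of length-1 axes. A minimal sketch of the two styles of check:

    import numpy as np

    a = np.zeros((3, 1))

    # Fragile, OpenCV-3.0-style test: assumes every stride is a "clean"
    # multiple of the itemsize, which the relaxed-strides debug builds
    # deliberately violate for dimensions of length 1
    fragile_ok = all(s % a.itemsize == 0 for s in a.strides)

    # Robust test: ask numpy for the flag instead of inspecting strides
    robust_ok = a.flags['C_CONTIGUOUS']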
From jaime.frio at gmail.com Thu Jun 11 13:10:49 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Thu, 11 Jun 2015 10:10:49 -0700
Subject: [Numpy-discussion] Open CV 3.0 + NPY_RELAXED_STRIDES
In-Reply-To:
References: <1434015543.1352.3.camel@sipsolutions.net>
Message-ID:

On Thu, Jun 11, 2015 at 2:44 AM, Julian Taylor <
jtaylor.debian at googlemail.com> wrote:

> With scipy.ndimage and also opencv broken, I think we will have to delay
> it beyond 1.10, though we should have at least an alpha, maybe even a
> beta with it enabled to induce some panic that hopefully will spur some
> fixes.

OpenCV shouldn't be broken any more if they merge this:

https://github.com/Itseez/opencv/pull/4117

I would appreciate a second set of eyes looking over the logic in that PR.

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shoyer at gmail.com Thu Jun 11 17:18:10 2015
From: shoyer at gmail.com (Stephan Hoyer)
Date: Thu, 11 Jun 2015 16:18:10 -0500
Subject: [Numpy-discussion] ANN: xray v0.5
Message-ID:

I'm pleased to announce version 0.5 of xray, N-D labeled arrays and
datasets in Python.

xray is an open source project and Python package that aims to bring the
labeled data power of pandas to the physical sciences, by providing
N-dimensional variants of the core pandas data structures. These data
structures are based on the data model of the netCDF file format.

Highlights of this release:

* Support for parallel computation on arrays that don't fit in memory using
  dask.array (see http://continuum.io/blog/xray-dask for more details)
* Support for multi-file datasets
* assign and fillna methods, based on the pandas methods of the same name.
* to_array and to_dataset methods for easier conversion between xray
  Dataset and DataArray objects.
* Label based indexing with nearest neighbor lookups

For more details, read the full release notes:
http://xray.readthedocs.org/en/stable/whats-new.html

Best,
Stephan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dasssj at 126.com Fri Jun 12 12:23:03 2015
From: dasssj at 126.com (=?GBK?B?yq/Ntw==?=)
Date: Sat, 13 Jun 2015 00:23:03 +0800 (CST)
Subject: [Numpy-discussion] f2py problem with multiple fortran source files
Message-ID: <56fc103d.26166.14de894ecf1.Coremail.dasssj@126.com>

Hi, everybody,
I'm new to f2py, and I ran into some trouble when wrapping some Fortran
files for Python.
I have downloaded a Fortran library (https://github.com/brianlockwood/ForK).
I want to compile these files into a library and call that library from
another Fortran file written by myself. Here are my questions:
1. How should I compile the library (in this case, ForK), and what command
should I use?
2. How can I use the library and my own Fortran source file (All.f90) with
the f2py command to generate a module I can use in Python?
Thanks!
Shishijie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From allanhaldane at gmail.com  Fri Jun 12 13:46:39 2015
From: allanhaldane at gmail.com (Allan Haldane)
Date: Fri, 12 Jun 2015 13:46:39 -0400
Subject: [Numpy-discussion] changing ValueError to KeyError for bad field access
Message-ID: <557B1AFF.9050502@gmail.com>

Hi all,

I think it would be very nice to make access to invalid fields of a
structured array give a KeyError instead of a ValueError. Like:

    >>> a = np.ones(3, dtype=[('a', 'f4'), ('b', 'f4')])
    >>> a['c']
    KeyError: 'c'

A commit in my PR https://github.com/numpy/numpy/pull/5636 does this.
As discussed there, backwards compatibility is a problem but it seems
like the impact might be fairly small.

Any opinions?

Allan

From sturla.molden at gmail.com  Fri Jun 12 13:59:54 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 12 Jun 2015 19:59:54 +0200
Subject: [Numpy-discussion] changing ValueError to KeyError for bad field access
In-Reply-To: <557B1AFF.9050502@gmail.com>
References: <557B1AFF.9050502@gmail.com>
Message-ID:

On 12/06/15 19:46, Allan Haldane wrote:

> >>> a = np.ones(3, dtype=[('a', 'f4'), ('b', 'f4')])
> >>> a['c']
> KeyError: 'c'

> Any opinions?

Sounds good to me. But it will probably break someone's code.

Sturla

From hodge at stsci.edu  Fri Jun 12 14:06:44 2015
From: hodge at stsci.edu (Phil Hodge)
Date: Fri, 12 Jun 2015 14:06:44 -0400
Subject: [Numpy-discussion] changing ValueError to KeyError for bad field access
In-Reply-To: <557B1AFF.9050502@gmail.com>
References: <557B1AFF.9050502@gmail.com>
Message-ID: <557B1FB4.7060607@stsci.edu>

On 06/12/2015 01:46 PM, Allan Haldane wrote:
> I think it would be very nice to make access to invalid fields of a
> structured array give a KeyError instead of a ValueError. Like:
>
> >>> a = np.ones(3, dtype=[('a', 'f4'), ('b', 'f4')])
> >>> a['c']
> KeyError: 'c'

This will break code, but it should do so in a way that's visible.
KeyError does seem like a more appropriate exception to me.

Phil

From Jerome.Kieffer at esrf.fr  Sat Jun 13 04:53:06 2015
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Sat, 13 Jun 2015 10:53:06 +0200
Subject: [Numpy-discussion] [JOBs] Data analysis position at the European synchrotron
Message-ID: <20150613105306.4f8b3ed9@patagonia>

Dear Pythonistas,

The European Synchrotron, ESRF, located in the French Alps, has just had
a large upgrade approved in which data analysis is a key element. I am
pleased to announce that this strategy is built around Python: all code
developed in this framework will be based on Python and made open-source.
Feel free to distribute this around.
1 metadata manager position:
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=413

2 data scientist positions:
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=421
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=414

3 software engineer positions
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=418
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=417
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=419

Other related data analysis positions
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=420
http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=411

Best regards,

Jerome Kieffer

From sole at esrf.fr  Sat Jun 13 06:34:57 2015
From: sole at esrf.fr (V. Armando Sole)
Date: Sat, 13 Jun 2015 12:34:57 +0200
Subject: [Numpy-discussion] [JOBs] Data analysis position at the European synchrotron
In-Reply-To: <20150613105306.4f8b3ed9@patagonia>
References: <20150613105306.4f8b3ed9@patagonia>
Message-ID: <09d1e63343c31d33fc2c254ffdfa7e3b@esrf.fr>

Hi,

The English versions:

http://www.esrf.eu/Jobs/english/recruitment-portal

Best regards,

Armando

On 13.06.2015 10:53, Jerome Kieffer wrote:
> Dear Pythonistas,
>
> The European Synchrotron, ESRF, located in the French Alps, has just
> had a large upgrade approved in which data analysis is a key element.
> I am pleased to announce that this strategy is built around Python:
> all code developed in this framework will be based on Python and made
> open-source. Feel free to distribute this around.
>
> 1 metadata manager position:
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=413
>
> 2 data scientist positions:
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=421
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=414
>
> 3 software engineer positions
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=418
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=417
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=419
>
> Other related data analysis positions
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=420
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=411
>
> Best regards,
>
> Jerome Kieffer
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From pearu.peterson at gmail.com  Sat Jun 13 13:25:00 2015
From: pearu.peterson at gmail.com (Pearu Peterson)
Date: Sat, 13 Jun 2015 20:25:00 +0300
Subject: [Numpy-discussion] f2py problem with multiple fortran source files
In-Reply-To: <56fc103d.26166.14de894ecf1.Coremail.dasssj@126.com>
References: <56fc103d.26166.14de894ecf1.Coremail.dasssj@126.com>
Message-ID:

Hi,

On Fri, Jun 12, 2015 at 7:23 PM, 石头 wrote:

> Hi, everybody,
> I'm new to f2py, and I got some trouble when wrapping some fortran files
> to Python.
> I have downloaded a Fortran library (
> https://github.com/brianlockwood/ForK). I want to compile these files
> into a library and call that library from another Fortran file written
> by myself. Here are my questions:
> 1. How should I compile the library (in this case, ForK), and what
> command should I use?

In ForK, try

  make all

which should produce kriginglib.a. You might need to add the -fPIC option
to FFLAGS in the Makefile before executing make.

> 2.
> How can I use the library and my own Fortran source file (All.f90)
> with the f2py command to generate a module I can use in Python?

Try

  f2py -c -m mylib All.f90 /path/to/kriginglib.a

python
>>> import mylib
>>> print mylib.__doc__

HTH,
Pearu

> Thanks!
> Shishijie
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jeffreback at gmail.com  Sat Jun 13 13:47:48 2015
From: jeffreback at gmail.com (Jeff Reback)
Date: Sat, 13 Jun 2015 13:47:48 -0400
Subject: [Numpy-discussion] ANN: pandas v0.16.2 released
Message-ID:

Hello,

We are proud to announce v0.16.2 of pandas, a minor release from 0.16.1.

This release includes a small number of API changes, several new features,
enhancements, and performance improvements along with a large number of
bug fixes.

This was a release of 4 weeks with 105 commits by 32 authors encompassing
48 issues and 71 pull-requests. We recommend that all users upgrade to
this version.

*What is it:*

*pandas* is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data
both easy and intuitive. It aims to be the fundamental high-level building
block for doing practical, real world data analysis in Python.
Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any
language.

Highlights of this release include:

- A new *pipe* method, see here
- Documentation on how to use numba with *pandas*, see here

See the Whatsnew in v0.16.2

Documentation:
http://pandas.pydata.org/pandas-docs/stable/

Source tarballs, windows binaries are available on PyPI:
https://pypi.python.org/pypi/pandas

windows binaries are courtesy of Christoph Gohlke and are built on Numpy 1.9

macosx wheels are courtesy of Matthew Brett

Please report any issues here:
https://github.com/pydata/pandas/issues

Thanks

The Pandas Development Team

Contributors to the 0.16.2 release

- Andrew Rosenfeld
- Artemy Kolchinsky
- Bernard Willers
- Christer van der Meeren
- Christian Hudon
- Constantine Glen Evans
- Daniel Julius Lasiman
- Evan Wright
- Francesco Brundu
- Gaëtan de Menten
- Jake VanderPlas
- James Hiebert
- Jeff Reback
- Joris Van den Bossche
- Justin Lecher
- Ka Wo Chen
- Kevin Sheppard
- Mortada Mehyar
- Morton Fox
- Robin Wilson
- Thomas Grainger
- Tom Ajamian
- Tom Augspurger
- Yoshiki Vázquez Baeza
- Younggun Kim
- austinc
- behzad nouri
- jreback
- lexual
- rekcahpassyla
- scls19fr
- sinhrks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dasssj at 126.com  Sat Jun 13 23:27:06 2015
From: dasssj at 126.com (=?GBK?B?yq/Ntw==?=)
Date: Sun, 14 Jun 2015 11:27:06 +0800 (CST)
Subject: [Numpy-discussion] f2py problem with multiple fortran source files
Message-ID: <2701a8be.d219.14df01b3ec9.Coremail.dasssj@126.com>

Dear Pearu Peterson,
Thank you for your reply! I did as you said, and I got the module mylib.so
successfully, but I got another problem when I tried to import this module
in Python. Here is the message I got in Python:

Enthought Canopy Python 2.7.6 | 64-bit | (default, Sep 15 2014, 17:36:10)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mylib
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /home/ssj/Enthought/Canopy_64bit/User/bin/../lib/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by ./mylib.so)
>>>

The version of gfortran I use is 4.9.2, and I'm using Debian 8 "Jessie".
How can I fix this problem?
Thanks again for your reply!
Shishijie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla.molden at gmail.com  Sun Jun 14 22:33:15 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 15 Jun 2015 02:33:15 +0000 (UTC)
Subject: [Numpy-discussion] Aternative to PyArray_SetBaseObject in NumPy 1.6?
Message-ID: <521899772456028078.134896sturla.molden-gmail.com@news.gmane.org>

What would be the best alternative to PyArray_SetBaseObject in NumPy 1.6?

Purpose: Keep alive an object owning data passed to
PyArray_SimpleNewFromData.

Sturla

From njs at pobox.com  Mon Jun 15 05:00:18 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 15 Jun 2015 02:00:18 -0700
Subject: [Numpy-discussion] Homu
Message-ID:

Hi all,

As an experiment, I just enabled @homu on the main numpy repository.
Basically what this means is that there's a bot named @homu, and if
someone with appropriate permissions posts a comment on a pull request
that says:

  @homu r+

then homu will (a) doublecheck that the pull request still passes tests
when merged into current master, and (b) if it does, then go ahead and
hit the green merge button for you. ("r+" is mozilla-ese for "I approve
this patch"; @homu comes out of the rust/mozilla community.)

So you can still hit the big green button if you want, no change there,
but this provides a second option with a few advantages:

- Normally, a green light from Travis just means that the PR passed the
tests when it was submitted. If master has changed since then, things
might have become broken, but you'll never know until after you merge it
and master turns red.

More minor advantages:

- You can approve a PR before Travis has even finished running, and it
will automatically be merged iff the tests pass.

- In theory, it should be possible to put someone on the @homu
permissions list without adding them to github proper, which would mean
that they have the ability to push to the repository via
PRs-that-pass-tests-and-trigger-notifications, but can't make a direct
commit into master that creates no notifications. Not sure if this is
really useful, but hey.

- You don't have to merge-and-then-comment-saying-thanks, you can just
post a single comment, saving two entire mouse clicks. Efficiency!

Anyway, seemed worth taking for a spin and seeing whether we liked it;
we can always turn it off again if not. I think that everyone who has
commit access to numpy/numpy is also listed on @homu's access list -- if
I missed anyone just let me know.

Links:
http://homu.io/
https://www.reddit.com/r/rust/comments/39sogp/homu_a_gatekeeper_for_your_commits/
http://graydon.livejournal.com/186550.html
http://homu.io/q/numpy/numpy

-n

--
Nathaniel J. Smith -- http://vorpus.org

From pav at iki.fi  Mon Jun 15 12:00:12 2015
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Jun 2015 19:00:12 +0300
Subject: [Numpy-discussion] Homu
In-Reply-To:
References:
Message-ID:

15.06.2015, 12:00, Nathaniel Smith kirjoitti:
[clip]
> http://homu.io/

One thing to consider is the disadvantage from security POV: this gives
full write access to the Numpy repository to that someone who is running
the bot.
I don't see information on who this person (or these persons) is and how
access to the bot and the bot account is controlled. (Travis-CI doesn't
have that AFAIK, it can only change the passed/not-passed icons.)

Pauli

From njs at pobox.com  Mon Jun 15 15:30:34 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 15 Jun 2015 12:30:34 -0700
Subject: [Numpy-discussion] Homu
In-Reply-To:
References:
Message-ID:

On Jun 15, 2015 9:03 AM, "Pauli Virtanen" wrote:
>
> 15.06.2015, 12:00, Nathaniel Smith kirjoitti:
> [clip]
> > http://homu.io/
>
> One thing to consider is the disadvantage from security POV: this gives
> full write access to the Numpy repository to that someone who is running
> the bot. I don't see information on who this person (or these persons)
> is and how access to the bot and the bot account is controlled.
> (Travis-CI doesn't have that AFAIK, it can only change the
> passed/not-passed icons.)

That's a fair point. The person running the bot is Barosl Lee (@barosl),
who is also the author of the homu bot (https://github.com/barosl/homu)
that the homu.io hosted service is based on. The Mozilla rust and servo
teams are using this code to manage all their merges, e.g.:

  http://buildbot.rust-lang.org/homu/queue/rust

though they are running a self-hosted version, not using homu.io.

If we're uncomfortable with the hosted service then hosting it ourselves
wouldn't be hard -- I've actually had "set up a homu instance" as a todo
item for most of a year now (check out Graydon's last comment on the LJ
post I linked to upthread, and who he's replying to ;-)). I literally sat
down to get this done last night, got half way through, and then
discovered that @barosl had finally announced their hosted service 18
hours earlier, so I figured I'd be lazy and just use that instead :-).

Personally I'm not worried about the security issues -- I think the
chances that @barosl is malicious are basically zero, and while every
account that gets access to a repository increases the risk that someone
might steal their credentials and do something naughty with them, the
additional risk seems minimal to me. (Right now there are 16 accounts
that have full admin access to numpy/numpy; @homu is not one of them.)
But if people prefer I'm happy to self-host too.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ewm at redtetrahedron.org  Tue Jun 16 10:53:23 2015
From: ewm at redtetrahedron.org (Eric Moore)
Date: Tue, 16 Jun 2015 10:53:23 -0400
Subject: [Numpy-discussion] Aternative to PyArray_SetBaseObject in NumPy 1.6?
In-Reply-To: <521899772456028078.134896sturla.molden-gmail.com@news.gmane.org>
References: <521899772456028078.134896sturla.molden-gmail.com@news.gmane.org>
Message-ID:

You have to do it by hand in numpy 1.6. For example see
https://github.com/scipy/scipy/blob/master/scipy/signal/lfilter.c.src#L285-L292

-Eric

On Sun, Jun 14, 2015 at 10:33 PM, Sturla Molden wrote:

> What would be the best alternative to PyArray_SetBaseObject in NumPy 1.6?
>
> Purpose: Keep alive an object owning data passed to
> PyArray_SimpleNewFromData.
>
> Sturla
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla.molden at gmail.com  Tue Jun 16 16:35:03 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 16 Jun 2015 20:35:03 +0000 (UTC)
Subject: [Numpy-discussion] Aternative to PyArray_SetBaseObject in NumPy 1.6?
References: <521899772456028078.134896sturla.molden-gmail.com@news.gmane.org>
Message-ID: <1807397358456177813.811223sturla.molden-gmail.com@news.gmane.org>

Eric Moore wrote:

> You have to do it by hand in numpy 1.6. For example see
> https://github.com/scipy/scipy/blob/master/scipy/signal/lfilter.c.src#L285-L292

Thank you :)

Sturla

From honi at brandeis.edu  Tue Jun 16 22:38:40 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Tue, 16 Jun 2015 22:38:40 -0400
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
In-Reply-To:
References:
Message-ID: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu>

I have now implemented this functionality in numpy.correlate() and
numpy.convolve(). https://github.com/bringingheavendown/numpy. The files
that were edited are:
numpy/core/src/multiarray/multiarraymodule.c
numpy/core/numeric.py
numpy/core/tests/test_numeric.py
Please look over the code, my design decisions, and the unit tests I have
written. This is my first time contributing, so I am not confident about
any of these and welcome feedback.

> On Jun 8, 2015, at 9:54 PM, Honi Sanders wrote:
>
> I am learning numpy/scipy, coming from a MATLAB background. The xcorr
function in Matlab has an optional argument "maxlag" that limits the lag
range from -maxlag to maxlag. This is very useful if you are looking at
the cross-correlation between two very long time series but are only
interested in the correlation within a certain time range. The performance
increases are enormous considering that cross-correlation is incredibly
expensive to compute.
>
> What is troubling me is that numpy.correlate does not have a maxlag
feature. This means that even if I only want to see correlations between
two time series with lags between -100 and +100 ms, for example, it will
still calculate the correlation for every lag between -20000 and +20000 ms
(which is the length of the time series). This (theoretically) gives a
200x performance hit! Is it possible that I could contribute this feature?
>
> I have introduced this question as a scipy issue
https://github.com/scipy/scipy/issues/4940 and on the scipy-dev list
(http://mail.scipy.org/pipermail/scipy-dev/2015-June/020757.html). It
seems the best place to start is with numpy.correlate, so that is what I
am requesting. I have done a simple implementation
(https://gist.github.com/bringingheavendown/b4ce18aa007118e4e084) which
gives 50x speedup under my conditions
(https://github.com/scipy/scipy/issues/4940#issuecomment-110187847).
>
> This is my first experience with contributing to open-source software,
so any pointers are appreciated.
>

From sturla.molden at gmail.com  Wed Jun 17 18:13:25 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 18 Jun 2015 00:13:25 +0200
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
In-Reply-To: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu>
References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu>
Message-ID:

On 17/06/15 04:38, Honi Sanders wrote:

> I have now implemented this functionality in numpy.correlate() and numpy.convolve(). https://github.com/bringingheavendown/numpy.
> The files that were edited are:
> numpy/core/src/multiarray/multiarraymodule.c
> numpy/core/numeric.py
> numpy/core/tests/test_numeric.py
> Please look over the code, my design decisions, and the unit tests I have written. This is my first time contributing, so I am not confident about any of these and welcome feedback.

I'll just repeat here what I already said on Github.

I think this stems from the need to compute cross-correlograms as used
in statistical signal analysis, whereas numpy.correlate and
scipy.signal.correlate are better suited for matched filtering.

I think the best solution would be to add a function called
scipy.signal.correlogram, which would return a cross-correlation and an
array of time lags. It could take minlag and maxlag as optional arguments.

Adding maxlag and minlag arguments to numpy.convolve makes very little
sense, as far as I am concerned.

Sturla

From honi at brandeis.edu  Wed Jun 17 18:22:33 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Wed, 17 Jun 2015 18:22:33 -0400
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
In-Reply-To:
References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu>
Message-ID: <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu>

I will also repeat what I said in response on Github (discussions at:
https://github.com/scipy/scipy/issues/4940,
https://github.com/numpy/numpy/issues/5954):

I do want a function that computes cross-correlograms; however, the
implementation for cross-correlograms is exactly the same as for
convolution. Not only that, is the numpy.correlate() function not for
computing cross-correlograms?

Maxlag and lagstep still make sense in the context of convolution. Say you
have a time series (this is not the best example) of rain amounts and you
have a kernel for plant growth given rain in the recent past. Your time
series is the entire year, but you are only interested in the plant growth
during the months of April through August. Not only that, you do not need
a daily readout of plant growth; weekly resolution is enough for your
needs. You wouldn't want to compute the convolution for the entire time
series; instead you would do:

  numpy.convolve(rain, growth_kernel, (april, september, 7), lagvec)

and get lagvec back with the indices of the Sundays in April through
August, and a return vector with the amount of plant growth on those days.

I don't really think it would be good to add an entirely new function to
scipy.signal. It was already hard enough as a new user trying to figure
out which of the five seemingly identical functions in numpy, scipy, and
matplotlib I should be using. Besides, if all of these functions are
essentially doing the same computation, there should only be a single base
implementation that they all use, so that 1) the learning curve is
decreased and 2) any optimizations are passed on to all of the functions
instead of having to be independently reimplemented several times. So,
even if we do decide that scipy.signal should have a new correlogram
command, it should be a wrapper for numpy.correlate. But why wouldn't one
just use scipy.signal.correlate for the 1d case as well?

Also, see https://github.com/numpy/numpy/pull/5978 for the pull request
with a list of specific issues in my implementation that may need
attention.

Honi

> On Jun 17, 2015, at 6:13 PM, Sturla Molden wrote:
>
> On 17/06/15 04:38, Honi Sanders wrote:
>
>> I have now implemented this functionality in numpy.correlate() and numpy.convolve(). https://github.com/bringingheavendown/numpy.
>> The files that were edited are:
>> numpy/core/src/multiarray/multiarraymodule.c
>> numpy/core/numeric.py
>> numpy/core/tests/test_numeric.py
>> Please look over the code, my design decisions, and the unit tests I have written. This is my first time contributing, so I am not confident about any of these and welcome feedback.
>
> I'll just repeat here what I already said on Github.
>
> I think this stems from the need to compute cross-correlograms as used
> in statistical signal analysis, whereas numpy.correlate and
> scipy.signal.correlate are better suited for matched filtering.
>
> I think the best solution would be to add a function called
> scipy.signal.correlogram, which would return a cross-correlation and an
> array of time lags. It could take minlag and maxlag as optional arguments.
>
> Adding maxlag and minlag arguments to numpy.convolve makes very little
> sense, as far as I am concerned.
>
> Sturla
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From charlesr.harris at gmail.com  Wed Jun 17 23:25:35 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 17 Jun 2015 21:25:35 -0600
Subject: [Numpy-discussion] 1.10 release
Message-ID:

Hi All,

I plan to branch the 1.10 release next Monday. I expect this to be a
difficult release; much time has passed since 1.9, and there have been
significant enhancements/changes to masked arrays, structured array
assignment, and record arrays. In addition, most ufuncs no longer return
`NotImplemented`, so there have been changes associated with the handling
of ndarray operators. There still remain some tasks to do, including some
PEP8 cleanup before the branch, but I would much appreciate it if folks
could start testing now, especially if you make use of the parts that have
undergone significant change. This is also the time to complain if you
think some needed PR has been left out. I would also appreciate help with
the release notes, as I am sure there are changes that should be noted but
are currently omitted.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mansourmoufid at gmail.com  Thu Jun 18 00:16:04 2015
From: mansourmoufid at gmail.com (Mansour Moufid)
Date: Thu, 18 Jun 2015 00:16:04 -0400
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
In-Reply-To: <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu>
References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu> <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu>
Message-ID:

Hello,

There is a simple solution. The cross-correlation of two arrays of lengths
m and n is of length m + n - 1, where m is usually much larger than n.

If you need to compute the cross-correlation with a bound on the lag of k,
then truncate the longer array to length k - n + 1. That is,

import numpy

def _correlate(x, y, maxlag):
    # Correlate y against only the head of x, so that far fewer
    # lags need to be computed.
    n = y.shape[0]
    return numpy.correlate(x[:maxlag - n + 1], y)

As for the lag array, it is defined as

    -n + 1, ..., 0, ..., m - 1

so truncate it too,

    -n + 1, ..., 0, ..., maxlag - n + 2

By the way, you should truncate to a power of two.

Yours,
Mansour

From jensj at fysik.dtu.dk  Thu Jun 18 01:53:31 2015
From: jensj at fysik.dtu.dk (=?UTF-8?B?SmVucyBKw7hyZ2VuIE1vcnRlbnNlbg==?=)
Date: Thu, 18 Jun 2015 07:53:31 +0200
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
Message-ID: <55825CDB.80002@fysik.dtu.dk>

Hi!
I just finished porting a large code-base to Python 3 (making it work on
2.6, 2.7 and 3.4). It wasn't that difficult, but one thing gave me a hard
time and it was this:

Python 2.7.9 (default, Apr  2 2015, 15:33:21)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = np.zeros(7, int)
>>> n = a[3]
>>> type(n)
<type 'numpy.int64'>
>>> isinstance(n, int)
True

With Python 3.4 you get False. I think I understand why (np.int64 is no
longer a subclass of int). So, I did this instead:

import numbers
isinstance(n, numbers.Integral)

which works fine (with numpy-1.9). Is this the "correct" way or is
there a better way to do it?

I would imagine that a lot of code will break because of this - so it
would be nice if isinstance(n, int) could be made to work the same way in
2 and 3, but I don't know if this is possible (or desirable).

Jens Jørgen

From njs at pobox.com  Thu Jun 18 02:13:39 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 17 Jun 2015 23:13:39 -0700
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
In-Reply-To: <55825CDB.80002@fysik.dtu.dk>
References: <55825CDB.80002@fysik.dtu.dk>
Message-ID:

On Wed, Jun 17, 2015 at 10:53 PM, Jens Jørgen Mortensen wrote:
> >>> type(n)
> <type 'numpy.int64'>
>
> >>> isinstance(n, int)
> True
>
> With Python 3.4 you get False. I think I understand why (np.int64 is no
> longer a subclass of int).

Yep, that's correct.

> So, I did this instead:
>
> import numbers
> isinstance(n, numbers.Integral)
>
> which works fine (with numpy-1.9). Is this the "correct" way or is
> there a better way to do it?

That's the correct way to check whether an arbitrary object is of some
integer-like type, yes :-). There are alternatives, and there's some
argument that in Python, doing explicit type checks like this is
usually a sign that one is doing something awkward, but that's a more
general issue that it's hard to comment on here without more detail
about what exactly you're trying to accomplish.

> I would imagine that a lot of code will
> break because of this - so it would be nice if isinstance(n, int) could
> be made to work the same way in 2 and 3, but I don't know if this is
> possible (or desirable).

It's not possible, unfortunately. In py2, 'int' is a 32- or 64-bit
integer type, so we can arrange for numpy's int32 or int64 objects to
be laid out the same in memory, so everything in python that expects
an int (including C API functions) can handle a numpy int. In py3,
'int' is an arbitrary width integer bignum, like py2 'long', which is
fundamentally different from int32 and int64 in both semantics and
implementation.

-n

--
Nathaniel J. Smith -- http://vorpus.org

From sebastian at sipsolutions.net  Thu Jun 18 04:38:43 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Thu, 18 Jun 2015 10:38:43 +0200
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
In-Reply-To:
References: <55825CDB.80002@fysik.dtu.dk>
Message-ID: <1434616723.3641.3.camel@sipsolutions.net>

In some cases calling operator.index(n) may yield the desired result. I
like operator.index, but maybe it is just me :). That uses duck typing
instead of instance checking to ask if it represents an integer. But it
also has some awkward corner cases in numpy, since arrays with a single
element (deprecation pending) and 0D arrays (will continue) say they are
integers when asked that way.
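For concreteness, a quick sketch of that duck-typed check (the exact
TypeError message varies between Python versions):

>>> import operator
>>> import numpy as np
>>> operator.index(np.int64(42))     # integer-like: accepted
42
>>> operator.index(np.float64(42))   # not integer-like: rejected
Traceback (most recent call last):
  ...
TypeError: 'numpy.float64' object cannot be interpreted as an index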
- Sebastian

On Mi, 2015-06-17 at 23:13 -0700, Nathaniel Smith wrote:
> On Wed, Jun 17, 2015 at 10:53 PM, Jens Jørgen Mortensen
> wrote:
> > >>> type(n)
> > <type 'numpy.int64'>
> >
> > >>> isinstance(n, int)
> > True
> >
> > With Python 3.4 you get False. I think I understand why (np.int64 is no
> > longer a subclass of int).
>
> Yep, that's correct.
>
> > So, I did this instead:
> >
> > import numbers
> > isinstance(n, numbers.Integral)
> >
> > which works fine (with numpy-1.9). Is this the "correct" way or is
> > there a better way to do it?
>
> That's the correct way to check whether an arbitrary object is of some
> integer-like type, yes :-). There are alternatives, and there's some
> argument that in Python, doing explicit type checks like this is
> usually a sign that one is doing something awkward, but that's a more
> general issue that it's hard to comment on here without more detail
> about what exactly you're trying to accomplish.
>
> > I would imagine that a lot of code will
> > break because of this - so it would be nice if isinstance(n, int) could
> > be made to work the same way in 2 and 3, but I don't know if this is
> > possible (or desirable).
>
> It's not possible, unfortunately. In py2, 'int' is a 32- or 64-bit
> integer type, so we can arrange for numpy's int32 or int64 objects to
> be laid out the same in memory, so everything in python that expects
> an int (including C API functions) can handle a numpy int. In py3,
> 'int' is an arbitrary width integer bignum, like py2 'long', which is
> fundamentally different from int32 and int64 in both semantics and
> implementation.
>
> -n
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From sturla.molden at gmail.com  Thu Jun 18 07:49:38 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 18 Jun 2015 11:49:38 +0000 (UTC)
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu> <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu>
Message-ID: <1986779260456320291.558031sturla.molden-gmail.com@news.gmane.org>

Mansour Moufid wrote:

> The cross-correlation of two arrays of lengths m and n is of length
> m + n - 1, where m is usually much larger than n.

He is thinking about the situation where m == n and m is much larger than
maxlag. Truncating the input arrays would also throw away data. This is
about correlating two long signals, not about correlating a signal with a
much shorter template.

Sturla

From sturla.molden at gmail.com  Thu Jun 18 07:55:46 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 18 Jun 2015 11:55:46 +0000 (UTC)
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
References: <55825CDB.80002@fysik.dtu.dk>
Message-ID: <402705248456321161.281941sturla.molden-gmail.com@news.gmane.org>

Nathaniel Smith wrote:

> In py3,
> 'int' is an arbitrary width integer bignum, like py2 'long', which is
> fundamentally different from int32 and int64 in both semantics and
> implementation.

Only when stored in an ndarray. An array scalar object does not need to
care about the exact number of bits used for storage as long as the
storage is large enough, which a python int always is.
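For code that must run on both 2 and 3, one portable idiom is simply to
convert explicitly whenever a true Python int is required -- a trivial
sketch:

>>> import numpy as np
>>> n = np.int64(42)
>>> int(n) == 42                 # int() round-trips an array scalar losslessly
True
>>> isinstance(int(n), int)      # the converted value is a real Python int
True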
Sturla

From freddyrietdijk at fridh.nl  Fri Jun 19 04:06:36 2015
From: freddyrietdijk at fridh.nl (Freddy Rietdijk)
Date: Fri, 19 Jun 2015 10:06:36 +0200
Subject: [Numpy-discussion] Flag for np.tile to use as_strided to reduce memory
Message-ID:

Hi,

Having read that it is possible to basically 'copy' elements along an axis
without actually copying the values by making use of the strides, I wonder
whether it is possible to add this as an option to np.tile. It would be
easier than having to use as_strided or broadcast_arrays to repeat data
without actually replicating it.

http://stackoverflow.com/questions/23695851/python-repeating-numpy-array-without-replicating-data
https://scipy-lectures.github.io/advanced/advanced_numpy/#example-fake-dimensions-with-strides

Frederik
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sebastian at sipsolutions.net  Fri Jun 19 13:39:49 2015
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 19 Jun 2015 19:39:49 +0200
Subject: [Numpy-discussion] Flag for np.tile to use as_strided to reduce memory
In-Reply-To:
References:
Message-ID: <1434735589.2035.1.camel@sipsolutions.net>

On Fr, 2015-06-19 at 10:06 +0200, Freddy Rietdijk wrote:
> Hi,
>
> Having read that it is possible to basically 'copy' elements along an
> axis without actually copying the values by making use of the strides,
> I wonder whether it is possible to add this as an option to np.tile.
>

No, what tile does cannot be represented that way. If it were possible,
you could achieve the same using `np.broadcast_to` basically, which was
just added though. There are some other things you can do, like rolling
window (adding dimensions), maybe some day we should add that (or you
want to take a shot ;)).

- Sebastian

> It would be easier than having to use as_strided or broadcast_arrays
> to repeat data without actually replicating it.
>
> http://stackoverflow.com/questions/23695851/python-repeating-numpy-array-without-replicating-data
> https://scipy-lectures.github.io/advanced/advanced_numpy/#example-fake-dimensions-with-strides
>
> Frederik
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From shoyer at gmail.com  Fri Jun 19 13:47:18 2015
From: shoyer at gmail.com (Stephan Hoyer)
Date: Fri, 19 Jun 2015 10:47:18 -0700
Subject: [Numpy-discussion] Flag for np.tile to use as_strided to reduce memory
In-Reply-To: <1434735589.2035.1.camel@sipsolutions.net>
References: <1434735589.2035.1.camel@sipsolutions.net>
Message-ID:

On Fri, Jun 19, 2015 at 10:39 AM, Sebastian Berg wrote:

> No, what tile does cannot be represented that way. If it were possible,
> you could achieve the same using `np.broadcast_to` basically, which was
> just added though. There are some other things you can do, like rolling
> window (adding dimensions), maybe some day we should add that (or you
> want to take a shot ;)).
>
> - Sebastian
>

The one case where np.tile could be done using stride tricks is if the
dimension you want to repeat has size 1 or currently does not exist.
np.broadcast_to was an attempt to make this stuff less awkward, though it
still requires mixing in transposes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Fri Jun 19 16:08:10 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 19 Jun 2015 14:08:10 -0600
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
Message-ID:

Hi All,

I'm looking to change some numpy deprecations into errors as well as remove
some deprecated functions. The problem I see is that
SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old.
So the question is, does "support" mean compiles with earlier versions
of Numpy ? If that is the case there is very little that can be done about
deprecation. OTOH, if it means Scipy can be compiled with more recent numpy
versions but used with earlier Numpy versions (which is a good feat), I'd
like to know. I'd also like to know what the interface requirements are, as
I'd like to remove old_defines.h

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov  Fri Jun 19 16:15:26 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 19 Jun 2015 13:15:26 -0700
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
In-Reply-To:
References: <55825CDB.80002@fysik.dtu.dk>
Message-ID:

On Wed, Jun 17, 2015 at 11:13 PM, Nathaniel Smith wrote:

> there's some
> argument that in Python, doing explicit type checks like this is
> usually a sign that one is doing something awkward,

I tend to agree with that.

On the other hand, numpy itself is kind-of sort-of statically typed. But
in that case, if you need to know the type of an array -- check the
array's dtype.

Also:

>>> a = np.zeros(7, int)
>>> n = a[3]
>>> type(n)
<type 'numpy.int64'>

I never liked declaring numpy arrays with the python types like "int" or
"float" -- in numpy you usually care more about the type, so you should
simply use "int64" if you want a 64 bit int. And "float64" if you want a
64 bit float. Granted, python floats have always been float64 (on all
platforms??), and python ints used to be a reasonable int type, but now
that python ints are bigints in py3, it really makes sense to be clear.

And now that I think about it, in py2, int is 32 bit on win64 and 64 bit
on *nix64 -- so you're really better off being explicit with your numpy
arrays.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla.molden at gmail.com  Fri Jun 19 17:05:24 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 19 Jun 2015 21:05:24 +0000 (UTC)
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
References:
Message-ID: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>

Charles R Harris wrote:

> I'm looking to change some numpy deprecations into errors as well as remove
> some deprecated functions. The problem I see is that
> SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old.
> So the question is, does "support" mean compiles with earlier versions
> of Numpy ?

It means there is a Travis CI build with NumPy 1.6.2. So any change to the
SciPy source code must compile with NumPy 1.6 and any later version of
NumPy.

There is no Travis CI build with NumPy 1.5. I don't think we know for sure
if it is really compatible with the current SciPy.
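For what a run-time floor would look like, something like the following
guard (a sketch only -- the check and version string are illustrative,
not a statement of scipy's actual code):

    import numpy as np
    from distutils.version import LooseVersion

    # Refuse to run on top of a numpy older than the one tested on Travis CI.
    if LooseVersion(np.__version__) < LooseVersion('1.6.2'):
        raise ImportError("this build of scipy requires numpy >= 1.6.2")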
Sturla From Permafacture at gmail.com Fri Jun 19 17:19:56 2015 From: Permafacture at gmail.com (Elliot Hallmark) Date: Fri, 19 Jun 2015 16:19:56 -0500 Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config() Message-ID: Debian Sid, 64-bit. I was trying to fix the problem of np.dot running very slow. I ended up uninstalling numpy, installing libatlas3-base through apt-get and re-installing numpy. The performance of dot is greatly improved! But I can't tell from any other method whether numpy is set up correctly. Consider comparing the faster one to another in a virtual env that is still slow: ### fast one ### In [1]: import time, numpy In [2]: n=1000 In [3]: A = numpy.random.rand(n,n) In [4]: B = numpy.random.rand(n,n) In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then 0.306427001953 In [6]: numpy.show_config() blas_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 lapack_info: libraries = ['lapack'] library_dirs = ['/usr/lib'] language = f77 atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_blas_threads_info: NOT AVAILABLE openblas_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: NOT AVAILABLE mkl_info: NOT AVAILABLE ### slow one ### In [1]: import time, numpy In [2]: n=1000 In [3]: A = numpy.random.rand(n,n) In [4]: B = numpy.random.rand(n,n) In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then 7.88430500031 In [6]: numpy.show_config() blas_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 lapack_info: libraries = ['lapack'] library_dirs = ['/usr/lib'] language = f77 atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_blas_threads_info: NOT AVAILABLE openblas_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: NOT AVAILABLE mkl_info: NOT AVAILABLE ##### Further, in the following comparison between Cpython and converting to numpy array for one operation, I get Cpython being faster by the same amount in both environments. But another user got numpy being faster. In [1]: import numpy as np In [2]: pts = range(100,1000) In [3]: pts[100] = 0 In [4]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) 10000 loops, best of 3: 129 ?s per loop In [5]: %timeit mini = sorted(enumerate(pts))[0][1] 10000 loops, best of 3: 89.2 ?s per loop The other user got In [29]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) 10000 loops, best of 3: 37.7 ?s per loop In [30]: %timeit mini = sorted(enumerate(pts))[0][1] 10000 loops, best of 3: 69.2 ?s per loop And I can't help but wonder if there is further configuration I need to make numpy faster, or if this is just a difference between out machines In the future, should I ignore show_config() and just do this dot product test? Any guidance would be appreciated. Thanks, Elliot -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sturla.molden at gmail.com Fri Jun 19 17:33:56 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Fri, 19 Jun 2015 21:33:56 +0000 (UTC) Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config() References: Message-ID: <1210280065456442194.470778sturla.molden-gmail.com@news.gmane.org> Elliot Hallmark wrote: > And I can't help but wonder if there is further configuration I need > to make numpy faster, or if this is just a difference between out > machines Try to build NumPy with Intel MKL or OpenBLAS instead. ATLAS is only efficient on the host computer on which it is built, and even there it is not very fast (but far better than the reference BLAS). Sturla From josef.pktd at gmail.com Fri Jun 19 17:51:22 2015 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 19 Jun 2015 17:51:22 -0400 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: References: Message-ID: On Fri, Jun 19, 2015 at 4:08 PM, Charles R Harris wrote: > Hi All, > > I'm looking to change some numpy deprecations into errors as well as > remove some deprecated functions. The problem I see is that > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old. > So the question is, does "support" mean compiles with earlier versions > of Numpy ? If that is the case there is very little that can be done about > deprecation. OTOH, if it means Scipy can be compiled with more recent numpy > versions but used with earlier Numpy versions (which is a good feat), I'd > like to know. I'd also like to know what the interface requirements are, as > I'd like to remove old_defines.h > numpy 1.6 I think is still accurate https://github.com/scipy/scipy/pull/4265 As far as I know, you can never compile against a newer and run with an older version. We had the discussion recently about backwards versus forwards binary compatibility Josef > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Jun 19 17:52:06 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 19 Jun 2015 15:52:06 -0600 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden wrote: > Charles R Harris wrote: > > > I'm looking to change some numpy deprecations into errors as well as > remove > > some deprecated functions. The problem I see is that > > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, > old. > > So the question is, does "support" mean compiles with earlier versions > > of Numpy ? > > It means there is a Travis CI build with NumPy 1.6.2. So any change to the > SciPy source code must compile with NumPy 1.6 and any later version of > NumPy. > > There is no Travis CI build with NumPy 1.5. I don't think we know for sure > if it is really compatible with the current SciPy. > I guess this also raises the question of what versions of Scipy Numpy needs to support. I'm thinking of removing the noprefix.h, but it doesn't cost to leave it in as it must be explicitly included by anyone who needs it. 
Hmm, maybe best to leave it be, although I suspect anyone using it could just as well use an earlier version of Numpy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Sat Jun 20 05:09:58 2015 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sat, 20 Jun 2015 11:09:58 +0200 Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config() In-Reply-To: References: Message-ID: <1434791398.6659.5.camel@sipsolutions.net> On Fr, 2015-06-19 at 16:19 -0500, Elliot Hallmark wrote: > Debian Sid, 64-bit. I was trying to fix the problem of np.dot running > very slow. > > > I ended up uninstalling numpy, installing libatlas3-base through > apt-get and re-installing numpy. The performance of dot is greatly > improved! But I can't tell from any other method whether numpy is set > up correctly. Consider comparing the faster one to another in a > virtual env that is still slow: > Not that I really know this stuff, but one thing to be sure is probably checking `ldd /usr/lib/python2.7/dist-packages/numpy/core/_dotblas.so`. That is probably silly (I really never cared to learn this stuff), but I think it can't go wrong.... About the other difference. Aside from CPU, etc. differences, I expect you got a newer numpy version then the other user. Not sure which part got much faster, but there were for example quite a few speedups in the code converting to array, so I expect it is very likely that this is the reason. - Sebastian > ### > > fast one > ### > > In [1]: import time, numpy > > In [2]: n=1000 > > In [3]: A = numpy.random.rand(n,n) > > In [4]: B = numpy.random.rand(n,n) > > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then > 0.306427001953 > > In [6]: numpy.show_config() > blas_info: > libraries = ['blas'] > library_dirs = ['/usr/lib'] > language = f77 > lapack_info: > libraries = ['lapack'] > library_dirs = ['/usr/lib'] > language = f77 > atlas_threads_info: > NOT AVAILABLE > blas_opt_info: > libraries = ['blas'] > library_dirs = ['/usr/lib'] > language = f77 > define_macros = [('NO_ATLAS_INFO', 1)] > atlas_blas_threads_info: > NOT AVAILABLE > openblas_info: > NOT AVAILABLE > lapack_opt_info: > libraries = ['lapack', 'blas'] > library_dirs = ['/usr/lib'] > language = f77 > define_macros = [('NO_ATLAS_INFO', 1)] > atlas_info: > NOT AVAILABLE > lapack_mkl_info: > NOT AVAILABLE > blas_mkl_info: > NOT AVAILABLE > atlas_blas_info: > NOT AVAILABLE > mkl_info: > NOT AVAILABLE > > ### > > slow one > ### > > In [1]: import time, numpy > > In [2]: n=1000 > > In [3]: A = numpy.random.rand(n,n) > > In [4]: B = numpy.random.rand(n,n) > > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then > 7.88430500031 > > In [6]: numpy.show_config() > blas_info: > libraries = ['blas'] > library_dirs = ['/usr/lib'] > language = f77 > lapack_info: > libraries = ['lapack'] > library_dirs = ['/usr/lib'] > language = f77 > atlas_threads_info: > NOT AVAILABLE > blas_opt_info: > libraries = ['blas'] > library_dirs = ['/usr/lib'] > language = f77 > define_macros = [('NO_ATLAS_INFO', 1)] > atlas_blas_threads_info: > NOT AVAILABLE > openblas_info: > NOT AVAILABLE > lapack_opt_info: > libraries = ['lapack', 'blas'] > library_dirs = ['/usr/lib'] > language = f77 > define_macros = [('NO_ATLAS_INFO', 1)] > atlas_info: > NOT AVAILABLE > lapack_mkl_info: > NOT AVAILABLE > blas_mkl_info: > NOT AVAILABLE > atlas_blas_info: > NOT AVAILABLE > mkl_info: > NOT AVAILABLE > > ##### > > > Further, in the following 
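A related quick check from inside Python, as a sketch -- on the numpy
versions in this thread, np.dot is swapped for the BLAS-backed
implementation in numpy.core._dotblas when a BLAS was found at build time:

>>> import numpy as np
>>> # True means np.dot is the plain multiarray fallback;
>>> # False means the BLAS-accelerated _dotblas version is in use.
>>> id(np.dot) == id(np.core.multiarray.dot)
False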
comparison between Cpython and converting to > numpy array for one operation, I get Cpython being faster by the same > amount in both environments. But another user got numpy being faster. > > In [1]: import numpy as np > > In [2]: pts = range(100,1000) > > In [3]: pts[100] = 0 > > In [4]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) > 10000 loops, best of 3: 129 ?s per loop > > In [5]: %timeit mini = sorted(enumerate(pts))[0][1] > 10000 loops, best of 3: 89.2 ?s per loop > > The other user got > > In [29]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) > 10000 loops, best of 3: 37.7 ?s per loop > > In [30]: %timeit mini = sorted(enumerate(pts))[0][1] > 10000 loops, best of 3: 69.2 ?s per loop > > > And I can't help but wonder if there is further configuration I need to make numpy faster, or if this is just a difference between out machines > In the future, should I ignore show_config() and just do this dot > product test? > > > Any guidance would be appreciated. > > > Thanks, > > Elliot > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From Permafacture at gmail.com Sat Jun 20 16:02:45 2015 From: Permafacture at gmail.com (Elliot Hallmark) Date: Sat, 20 Jun 2015 15:02:45 -0500 Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config() In-Reply-To: <1434791398.6659.5.camel@sipsolutions.net> References: <1434791398.6659.5.camel@sipsolutions.net> Message-ID: Well, here is the question that started this all. In the slow environment, blas seems to be there and work well, but numpy doesn't use it! In [1]: import time, numpy, scipy In [2]: from scipy import linalg In [3]: n=1000 In [4]: A = numpy.random.rand(n,n) In [5]: B = numpy.random.rand(n,n) In [6]: then = time.time(); C=scipy.dot(A,B); print time.time()-then 7.62005901337 In [7]: begin = time.time(); C=linalg.blas.dgemm(1.0,A,B);print time.time() - begin 0.325305938721 In [8]: begin = time.time(); C=linalg.blas.ddot(A,B);print time.time() - begin 0.0363020896912 On Sat, Jun 20, 2015 at 4:09 AM, Sebastian Berg wrote: > On Fr, 2015-06-19 at 16:19 -0500, Elliot Hallmark wrote: > > Debian Sid, 64-bit. I was trying to fix the problem of np.dot running > > very slow. > > > > > > I ended up uninstalling numpy, installing libatlas3-base through > > apt-get and re-installing numpy. The performance of dot is greatly > > improved! But I can't tell from any other method whether numpy is set > > up correctly. Consider comparing the faster one to another in a > > virtual env that is still slow: > > > > Not that I really know this stuff, but one thing to be sure is probably > checking `ldd /usr/lib/python2.7/dist-packages/numpy/core/_dotblas.so`. > That is probably silly (I really never cared to learn this stuff), but I > think it can't go wrong.... > > About the other difference. Aside from CPU, etc. differences, I expect > you got a newer numpy version then the other user. Not sure which part > got much faster, but there were for example quite a few speedups in the > code converting to array, so I expect it is very likely that this is the > reason. 
> > - Sebastian > > > > ### > > > > fast one > > ### > > > > In [1]: import time, numpy > > > > In [2]: n=1000 > > > > In [3]: A = numpy.random.rand(n,n) > > > > In [4]: B = numpy.random.rand(n,n) > > > > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then > > 0.306427001953 > > > > In [6]: numpy.show_config() > > blas_info: > > libraries = ['blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > lapack_info: > > libraries = ['lapack'] > > library_dirs = ['/usr/lib'] > > language = f77 > > atlas_threads_info: > > NOT AVAILABLE > > blas_opt_info: > > libraries = ['blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > define_macros = [('NO_ATLAS_INFO', 1)] > > atlas_blas_threads_info: > > NOT AVAILABLE > > openblas_info: > > NOT AVAILABLE > > lapack_opt_info: > > libraries = ['lapack', 'blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > define_macros = [('NO_ATLAS_INFO', 1)] > > atlas_info: > > NOT AVAILABLE > > lapack_mkl_info: > > NOT AVAILABLE > > blas_mkl_info: > > NOT AVAILABLE > > atlas_blas_info: > > NOT AVAILABLE > > mkl_info: > > NOT AVAILABLE > > > > ### > > > > slow one > > ### > > > > In [1]: import time, numpy > > > > In [2]: n=1000 > > > > In [3]: A = numpy.random.rand(n,n) > > > > In [4]: B = numpy.random.rand(n,n) > > > > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then > > 7.88430500031 > > > > In [6]: numpy.show_config() > > blas_info: > > libraries = ['blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > lapack_info: > > libraries = ['lapack'] > > library_dirs = ['/usr/lib'] > > language = f77 > > atlas_threads_info: > > NOT AVAILABLE > > blas_opt_info: > > libraries = ['blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > define_macros = [('NO_ATLAS_INFO', 1)] > > atlas_blas_threads_info: > > NOT AVAILABLE > > openblas_info: > > NOT AVAILABLE > > lapack_opt_info: > > libraries = ['lapack', 'blas'] > > library_dirs = ['/usr/lib'] > > language = f77 > > define_macros = [('NO_ATLAS_INFO', 1)] > > atlas_info: > > NOT AVAILABLE > > lapack_mkl_info: > > NOT AVAILABLE > > blas_mkl_info: > > NOT AVAILABLE > > atlas_blas_info: > > NOT AVAILABLE > > mkl_info: > > NOT AVAILABLE > > > > ##### > > > > > > Further, in the following comparison between Cpython and converting to > > numpy array for one operation, I get Cpython being faster by the same > > amount in both environments. But another user got numpy being faster. > > > > In [1]: import numpy as np > > > > In [2]: pts = range(100,1000) > > > > In [3]: pts[100] = 0 > > > > In [4]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) > > 10000 loops, best of 3: 129 ?s per loop > > > > In [5]: %timeit mini = sorted(enumerate(pts))[0][1] > > 10000 loops, best of 3: 89.2 ?s per loop > > > > The other user got > > > > In [29]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) > > 10000 loops, best of 3: 37.7 ?s per loop > > > > In [30]: %timeit mini = sorted(enumerate(pts))[0][1] > > 10000 loops, best of 3: 69.2 ?s per loop > > > > > > And I can't help but wonder if there is further configuration I need to > make numpy faster, or if this is just a difference between out machines > > In the future, should I ignore show_config() and just do this dot > > product test? > > > > > > Any guidance would be appreciated. 
> > > > > > Thanks, > > > > Elliot > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Permafacture at gmail.com Sat Jun 20 16:08:40 2015 From: Permafacture at gmail.com (Elliot Hallmark) Date: Sat, 20 Jun 2015 15:08:40 -0500 Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config() In-Reply-To: References: <1434791398.6659.5.camel@sipsolutions.net> Message-ID: Sebastian, in the slow virtual-env, _dotblas.so isn't there. I only have _dummy.so On Sat, Jun 20, 2015 at 3:02 PM, Elliot Hallmark wrote: > Well, here is the question that started this all. In the slow > environment, blas seems to be there and work well, but numpy doesn't use > it! > > In [1]: import time, numpy, scipy > > In [2]: from scipy import linalg > > In [3]: n=1000 > > In [4]: A = numpy.random.rand(n,n) > > In [5]: B = numpy.random.rand(n,n) > > In [6]: then = time.time(); C=scipy.dot(A,B); print time.time()-then > 7.62005901337 > > In [7]: begin = time.time(); C=linalg.blas.dgemm(1.0,A,B);print > time.time() - begin > 0.325305938721 > > In [8]: begin = time.time(); C=linalg.blas.ddot(A,B);print time.time() - > begin > 0.0363020896912 > > > On Sat, Jun 20, 2015 at 4:09 AM, Sebastian Berg < > sebastian at sipsolutions.net> wrote: > >> On Fr, 2015-06-19 at 16:19 -0500, Elliot Hallmark wrote: >> > Debian Sid, 64-bit. I was trying to fix the problem of np.dot running >> > very slow. >> > >> > >> > I ended up uninstalling numpy, installing libatlas3-base through >> > apt-get and re-installing numpy. The performance of dot is greatly >> > improved! But I can't tell from any other method whether numpy is set >> > up correctly. Consider comparing the faster one to another in a >> > virtual env that is still slow: >> > >> >> Not that I really know this stuff, but one thing to be sure is probably >> checking `ldd /usr/lib/python2.7/dist-packages/numpy/core/_dotblas.so`. >> That is probably silly (I really never cared to learn this stuff), but I >> think it can't go wrong.... >> >> About the other difference. Aside from CPU, etc. differences, I expect >> you got a newer numpy version then the other user. Not sure which part >> got much faster, but there were for example quite a few speedups in the >> code converting to array, so I expect it is very likely that this is the >> reason. 
>> >> - Sebastian >> >> >> > ### >> > >> > fast one >> > ### >> > >> > In [1]: import time, numpy >> > >> > In [2]: n=1000 >> > >> > In [3]: A = numpy.random.rand(n,n) >> > >> > In [4]: B = numpy.random.rand(n,n) >> > >> > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then >> > 0.306427001953 >> > >> > In [6]: numpy.show_config() >> > blas_info: >> > libraries = ['blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > lapack_info: >> > libraries = ['lapack'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > atlas_threads_info: >> > NOT AVAILABLE >> > blas_opt_info: >> > libraries = ['blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > define_macros = [('NO_ATLAS_INFO', 1)] >> > atlas_blas_threads_info: >> > NOT AVAILABLE >> > openblas_info: >> > NOT AVAILABLE >> > lapack_opt_info: >> > libraries = ['lapack', 'blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > define_macros = [('NO_ATLAS_INFO', 1)] >> > atlas_info: >> > NOT AVAILABLE >> > lapack_mkl_info: >> > NOT AVAILABLE >> > blas_mkl_info: >> > NOT AVAILABLE >> > atlas_blas_info: >> > NOT AVAILABLE >> > mkl_info: >> > NOT AVAILABLE >> > >> > ### >> > >> > slow one >> > ### >> > >> > In [1]: import time, numpy >> > >> > In [2]: n=1000 >> > >> > In [3]: A = numpy.random.rand(n,n) >> > >> > In [4]: B = numpy.random.rand(n,n) >> > >> > In [5]: then = time.time(); C=numpy.dot(A,B); print time.time()-then >> > 7.88430500031 >> > >> > In [6]: numpy.show_config() >> > blas_info: >> > libraries = ['blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > lapack_info: >> > libraries = ['lapack'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > atlas_threads_info: >> > NOT AVAILABLE >> > blas_opt_info: >> > libraries = ['blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > define_macros = [('NO_ATLAS_INFO', 1)] >> > atlas_blas_threads_info: >> > NOT AVAILABLE >> > openblas_info: >> > NOT AVAILABLE >> > lapack_opt_info: >> > libraries = ['lapack', 'blas'] >> > library_dirs = ['/usr/lib'] >> > language = f77 >> > define_macros = [('NO_ATLAS_INFO', 1)] >> > atlas_info: >> > NOT AVAILABLE >> > lapack_mkl_info: >> > NOT AVAILABLE >> > blas_mkl_info: >> > NOT AVAILABLE >> > atlas_blas_info: >> > NOT AVAILABLE >> > mkl_info: >> > NOT AVAILABLE >> > >> > ##### >> > >> > >> > Further, in the following comparison between Cpython and converting to >> > numpy array for one operation, I get Cpython being faster by the same >> > amount in both environments. But another user got numpy being faster. >> > >> > In [1]: import numpy as np >> > >> > In [2]: pts = range(100,1000) >> > >> > In [3]: pts[100] = 0 >> > >> > In [4]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) >> > 10000 loops, best of 3: 129 ?s per loop >> > >> > In [5]: %timeit mini = sorted(enumerate(pts))[0][1] >> > 10000 loops, best of 3: 89.2 ?s per loop >> > >> > The other user got >> > >> > In [29]: %timeit pts_arr = np.array(pts); mini = np.argmin(pts_arr) >> > 10000 loops, best of 3: 37.7 ?s per loop >> > >> > In [30]: %timeit mini = sorted(enumerate(pts))[0][1] >> > 10000 loops, best of 3: 69.2 ?s per loop >> > >> > >> > And I can't help but wonder if there is further configuration I need to >> make numpy faster, or if this is just a difference between out machines >> > In the future, should I ignore show_config() and just do this dot >> > product test? >> > >> > >> > Any guidance would be appreciated. 
>> >
>> > Thanks,
>> >
>> > Elliot
>> > _______________________________________________
>> > NumPy-Discussion mailing list
>> > NumPy-Discussion at scipy.org
>> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Sat Jun 20 16:40:05 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 20 Jun 2015 14:40:05 -0600
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
Message-ID: 

Hi All,

There are three long ago deprecations that I am not sure how to handle.

- keywords skiprows and missing in genfromtxt, deprecated in 1.5.
- keyword old_behavior (default False) in correlate. added in 1.5 at
least, but default value changed later.

The documentation says they will be removed in numpy 2.0, but we might
want to try earlier. The case of the correlation function is trickier, as
we probably need to provide a function with the old behavior before
removing the keyword.

I've left these cases as is, but the more old stuff hanging about the
greater our technical debt.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Sat Jun 20 16:45:42 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 20 Jun 2015 14:45:42 -0600
Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config()
In-Reply-To: 
References: <1434791398.6659.5.camel@sipsolutions.net>
Message-ID: 

On Sat, Jun 20, 2015 at 2:08 PM, Elliot Hallmark
wrote:

> Sebastian, in the slow virtual-env, _dotblas.so isn't there. I only have
> _dummy.so
>
> On Sat, Jun 20, 2015 at 3:02 PM, Elliot Hallmark
> wrote:
>
>> Well, here is the question that started this all. In the slow
>> environment, blas seems to be there and work well, but numpy doesn't use
>> it!
>>
>> In [1]: import time, numpy, scipy
>>
>> In [2]: from scipy import linalg
>>
>> In [3]: n=1000
>>
>> In [4]: A = numpy.random.rand(n,n)
>>
>> In [5]: B = numpy.random.rand(n,n)
>>
>> In [6]: then = time.time(); C=scipy.dot(A,B); print time.time()-then
>> 7.62005901337
>>
>> In [7]: begin = time.time(); C=linalg.blas.dgemm(1.0,A,B);print
>> time.time() - begin
>> 0.325305938721
>>
>> In [8]: begin = time.time(); C=linalg.blas.ddot(A,B);print time.time() -
>> begin
>> 0.0363020896912
>>
>

What numpy version?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Permafacture at gmail.com Sat Jun 20 17:06:00 2015
From: Permafacture at gmail.com (Elliot Hallmark)
Date: Sat, 20 Jun 2015 16:06:00 -0500
Subject: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config()
In-Reply-To: 
References: <1434791398.6659.5.camel@sipsolutions.net>
Message-ID: 

>What numpy version?

1.8.1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From njs at pobox.com Sat Jun 20 17:32:11 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 20 Jun 2015 14:32:11 -0700
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: 
References: 
Message-ID: 

On Jun 20, 2015 1:43 PM, "Charles R Harris" wrote:
>
> Hi All,
>
> There are three long ago deprecations that I am not sure how to handle.
>
> keywords skiprows and missing in genfromtxt, deprecated in 1.5.
> keyword old_behavior (default False) in correlate. added in 1.5 at
> least, but default value changed later.
>
> The documentation says they will be removed in numpy 2.0, but we might
> want to try earlier. The case of the correlation function is trickier, as
> we probably need to provide a function with the old behavior before
> removing the keyword.

Wouldn't this function just be
correlate(a, conj(b))
? Surely just writing that is easier and clearer than any function call we
could provide.

> I've left these cases as is, but the more old stuff hanging about the
> greater our technical debt.

I guess we could try dropping them from the first release candidate and at
least get some data on whether anyone notices.

1.5 was a lonnnng time ago.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Sat Jun 20 19:07:49 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 20 Jun 2015 17:07:49 -0600
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: 
References: 
Message-ID: 

On Sat, Jun 20, 2015 at 3:32 PM, Nathaniel Smith wrote:

> On Jun 20, 2015 1:43 PM, "Charles R Harris"
> wrote:
> >
> > Hi All,
> >
> > There are three long ago deprecations that I am not sure how to handle.
> >
> > keywords skiprows and missing in genfromtxt, deprecated in 1.5.
> > keyword old_behavior (default False) in correlate. added in 1.5 at
> least, but default value changed later.
> >
> > The documentation says they will be removed in numpy 2.0, but we might
> want to try earlier. The case of the correlation function is trickier, as
> we probably need to provide a function with the old behavior before
> removing the keyword.
>
> Wouldn't this function just be
> correlate(a, conj(b))
> ? Surely just writing that is easier and clearer than any function call we
> could provide.
>
> > I've left these cases as is, but the more old stuff hanging about the
> greater our technical debt.
>
> I guess we could try dropping them from the first release candidate and at
> least get some data on whether anyone notices.
>
> 1.5 was a lonnnng time ago.
>

I just removed all of these cases (in separate commits). We will see what
happens.

Note that the "skiprows" keyword is still used in loadtxt. It should
probably be deprecated there for consistency, but it is possible that some
use it as a positional argument.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sebix at sebix.at Sun Jun 21 05:23:55 2015
From: sebix at sebix.at (Sebastian)
Date: Sun, 21 Jun 2015 11:23:55 +0200
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: 
References: 
Message-ID: <558682AB.9020208@sebix.at>

Hi,

> Note that the "skiprows" keyword is still used in loadtxt. It should
> probably be deprecated there for consistency, but it is possible that
> some use it as a positional argument.

skiprows is the only argument of loadtxt that allows skipping a header or
other data at the beginning that does not start with #. This is often the
case with data from measurement devices and software. Sometimes these
lines are also used to give information about the circumstances or the
probe in a non-CSV and non-tab-separated style.

Sebastian

> --
> python programming - mail server - photo - video - https://sebix.at
> To verify my cryptographic signature or send me encrypted mails, get my
> key at https://sebix.at/DC9B463B.asc and on public keyservers.
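For the record, a minimal sketch of that use case (the file contents below
are invented for illustration; Python 3 spelling; genfromtxt spells the
same option skip_header):

import io
import numpy as np

# Two invented instrument-generated header lines that do not start with
# '#', followed by the actual data.
raw = ("FooCorp Model X, probe 2\n"
       "acquired 2015-06-21, 25.0 C\n"
       "1.0 2.0\n"
       "3.0 4.0\n")

data = np.loadtxt(io.StringIO(raw), skiprows=2)  # skip the two header lines
print(data)  # 2x2 array of the numeric rows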
From daniele at grinta.net Sun Jun 21 05:29:45 2015
From: daniele at grinta.net (Daniele Nicolodi)
Date: Sun, 21 Jun 2015 11:29:45 +0200
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: 
References: 
Message-ID: <55868409.6080804@grinta.net>

On 21/06/15 01:07, Charles R Harris wrote:
>
> On Sat, Jun 20, 2015 at 3:32 PM, Nathaniel Smith
> wrote:
>
> On Jun 20, 2015 1:43 PM, "Charles R Harris"
> > wrote:
> >
> > Hi All,
> >
> > There are three long ago deprecations that I am not sure how to handle.
> >
> > keywords skiprows and missing in genfromtxt, deprecated in 1.5.

I believe you mean skip_rows here, which got replaced by skip_header.

> Note that the "skiprows" keyword is still used in loadtxt. It should
> probably be deprecated there for consistency, but it is possible that
> some use it as a positional argument.

The interfaces of loadtxt and genfromtxt are rather different, why would
only this discrepancy be a problem? As far as I know there is no
replacement for the skiprows argument in loadtxt and it is definitely a
useful feature.

Cheers,
Daniele

From ralf.gommers at gmail.com Sun Jun 21 09:14:22 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 21 Jun 2015 15:14:22 +0200
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
In-Reply-To: 
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris <
charlesr.harris at gmail.com> wrote:

>
>
> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden
> wrote:
>
>> Charles R Harris wrote:
>>
>> > I'm looking to change some numpy deprecations into errors as well as
>> remove
>> > some deprecated functions. The problem I see is that
>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really,
>> old.
>> > So the question is, does "support" mean compiles with earlier versions
>> > of Numpy ?
>>
>> It means there is a Travis CI build with NumPy 1.6.2. So any change to the
>> SciPy source code must compile with NumPy 1.6 and any later version of
>> NumPy.
>>
>> There is no Travis CI build with NumPy 1.5. I don't think we know for sure
>> if it is really compatible with the current SciPy.
>>
>
> I guess this also raises the question of what versions of Scipy Numpy
> needs to support.
>

I'd treat Scipy like any other popular package that depends on Numpy. If a
change in Numpy would break a Scipy version released in say the last 1.5
years, then that's a problem. If it's a quite old Scipy, then it may be OK.

> I'm thinking of removing the noprefix.h, but it doesn't cost to leave it
> in as it must be explicitly included by anyone who needs it.
> Hmm, maybe best to leave it be, although I suspect anyone using it could
> just as well use an earlier version of Numpy.
>

Does noprefix.h even give you a deprecation warning now? Doesn't look like
it to me, which means it should be left alone for quite a while.

Ralf

>
> Chuck
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com Sun Jun 21 09:23:10 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 21 Jun 2015 15:23:10 +0200
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jun 19, 2015 at 10:08 PM, Charles R Harris <
charlesr.harris at gmail.com> wrote:

> Hi All,
>
> I'm looking to change some numpy deprecations into errors as well as
> remove some deprecated functions. The problem I see is that
> SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old.
>

The lowest supported numpy version in Scipy master and 0.16.x is 1.6.2.
This can be seen in the main setup.py, scipy/__init__.py and the 0.16.0
release notes.

> So the question is, does "support" mean compiles with earlier versions
> of Numpy ?
>

Indeed.

> If that is the case there is very little that can be done about
> deprecation.
>

They can be fixed in Scipy, see for example
https://github.com/scipy/scipy/pull/4378

> OTOH, if it means Scipy can be compiled with more recent numpy versions
> but used with earlier Numpy versions (which is a good feat), I'd like to
> know.
>

That's never a good idea, and in most cases raises errors on import if you
try.

> I'd also like to know what the interface requirements are, as I'd like to
> remove old_defines.h
>

This can be fixed in Scipy (see PR above); there's still a lot to do there
though. More importantly, I think Cython still relies on this API and
therefore also needs to be updated. This description of changes made in
Theano might be helpful:
http://mail.scipy.org/pipermail/numpy-discussion/2013-November/068209.html

Ralf

>
> Chuck
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Sun Jun 21 11:13:11 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 21 Jun 2015 09:13:11 -0600
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
In-Reply-To: 
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers
wrote:

>
>
> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>>
>>
>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden
>> wrote:
>>
>>> Charles R Harris wrote:
>>>
>>> > I'm looking to change some numpy deprecations into errors as well as
>>> remove
>>> > some deprecated functions. The problem I see is that
>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really,
>>> old.
>>> > So the question is, does "support" mean compiles with earlier versions
>>> > of Numpy ?
>>>
>>> It means there is a Travis CI build with NumPy 1.6.2. So any change to
>>> the
>>> SciPy source code must compile with NumPy 1.6 and any later version of
>>> NumPy.
>>> >>> There is no Travis CI build with NumPy 1.5. I don't think we know for >>> sure >>> if it is really compatible with the current SciPy. >>> >> There is still a reference to 1.5 in Scipy, I forget where. > >> I guess this also raises the question of what versions of Scipy Numpy >> needs to support. >> > > I'd treat Scipy like any other popular package that depends on Numpy. If a > change in Numpy would break a Scipy version released in say the last 1.5 > years, then that's a problem. If it's a quite old Scipy, then it may be OK. > So that would be Scipy 0.13, looks like. > > I'm thinking of removing the noprefix.h, but it doesn't cost to leave it >> in as it must be explicitly included by anyone who needs it. Hmm, maybe >> best to leave it be, although I suspect anyone using it could just as well >> use an earlier version of Numpy. >> > > Does noprefix.h even give you a deprecation warning now? Doesn't look like > it to me, which means it should be left alone for quite a while. > Yeah, it's probably best to just leave it be. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 21 11:31:24 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 21 Jun 2015 17:31:24 +0200 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris wrote: > > > > On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers > wrote: > >> >> >> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden >>> wrote: >>> >>>> Charles R Harris wrote: >>>> >>>> > I'm looking to change some numpy deprecations into errors as well as >>>> remove >>>> > some deprecated functions. The problem I see is that >>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, >>>> old. >>>> > So the question is, does "support" mean compiles with earlier versions >>>> > of Numpy ? >>>> >>>> It means there is a Travis CI build with NumPy 1.6.2. So any change to >>>> the >>>> SciPy source code must compile with NumPy 1.6 and any later version of >>>> NumPy. >>>> >>>> There is no Travis CI build with NumPy 1.5. I don't think we know for >>>> sure >>>> if it is really compatible with the current SciPy. >>>> >>> > There is still a reference to 1.5 in Scipy, I forget where. > In INSTALL.rst.txt, will fix that now. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Jun 21 11:45:25 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Jun 2015 09:45:25 -0600 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. 
In-Reply-To: 
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On Sun, Jun 21, 2015 at 9:31 AM, Ralf Gommers wrote:

>
>
> On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>>
>>
>> On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers
>> wrote:
>>
>>>
>>>
>>> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris <
>>> charlesr.harris at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden
>>>> wrote:
>>>>
>>>>> Charles R Harris wrote:
>>>>>
>>>>> > I'm looking to change some numpy deprecations into errors as well as
>>>>> remove
>>>>> > some deprecated functions. The problem I see is that
>>>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really,
>>>>> really, old.
>>>>> > So the question is, does "support" mean compiles with earlier
>>>>> versions
>>>>> > of Numpy ?
>>>>>
>>>>> It means there is a Travis CI build with NumPy 1.6.2. So any change to
>>>>> the
>>>>> SciPy source code must compile with NumPy 1.6 and any later version of
>>>>> NumPy.
>>>>>
>>>>> There is no Travis CI build with NumPy 1.5. I don't think we know for
>>>>> sure
>>>>> if it is really compatible with the current SciPy.
>>>>>
>>>>
>> There is still a reference to 1.5 in Scipy, I forget where.
>>
>
> In INSTALL.rst.txt, will fix that now.
>

Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here

Error compiling Cython file:
------------------------------------------------------------
...
# and object. In this file, only NULL is passed to these parameters.
cdef extern from *:
    cnp.ndarray PyArray_CheckFromAny(object, void*, int, int, int, void*)
    cnp.ndarray PyArray_FromArray(cnp.ndarray, void*, int)

from . cimport cython_blas as blas_pointers
^
------------------------------------------------------------

_decomp_update.pyx:60:0: 'cython_blas.pxd' not found

Although it is hard to tell, the traceback doesn't give much useful
information. I suspect this is due to a cython version mismatch, as it
seems to be looking for cython_blas.pxd but only cython_blas.c is
available. Have you seen this before?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com Sun Jun 21 11:57:04 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 21 Jun 2015 17:57:04 +0200
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
In-Reply-To: 
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On Sun, Jun 21, 2015 at 5:45 PM, Charles R Harris wrote:

>
>
> On Sun, Jun 21, 2015 at 9:31 AM, Ralf Gommers
> wrote:
>
>>
>>
>> On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris <
>> charlesr.harris at gmail.com> wrote:
>>
>>>
>>>
>>> On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris <
>>>> charlesr.harris at gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden <
>>>>> sturla.molden at gmail.com> wrote:
>>>>>
>>>>>> Charles R Harris wrote:
>>>>>>
>>>>>> > I'm looking to change some numpy deprecations into errors as well
>>>>>> as remove
>>>>>> > some deprecated functions. The problem I see is that
>>>>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really,
>>>>>> really, old.
>>>>>> > So the question is, does "support" mean compiles with earlier
>>>>>> versions
>>>>>> > of Numpy ?
>>>>>>
>>>>>> It means there is a Travis CI build with NumPy 1.6.2. So any change
>>>>>> to the
>>>>>> SciPy source code must compile with NumPy 1.6 and any later version of
>>>>>> NumPy.
>>>>>>
>>>>>> There is no Travis CI build with NumPy 1.5. I don't think we know for
>>>>>> sure
>>>>>> if it is really compatible with the current SciPy.
>>>>>>
>>>>>
>>> There is still a reference to 1.5 in Scipy, I forget where.
>>>
>>
>> In INSTALL.rst.txt, will fix that now.
>>
>>
> Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here
>
> Error compiling Cython file:
> ------------------------------------------------------------
> ...
> # and object. In this file, only NULL is passed to these parameters.
> cdef extern from *:
>     cnp.ndarray PyArray_CheckFromAny(object, void*, int, int, int, void*)
>     cnp.ndarray PyArray_FromArray(cnp.ndarray, void*, int)
>
> from . cimport cython_blas as blas_pointers
> ^
> ------------------------------------------------------------
>
> _decomp_update.pyx:60:0: 'cython_blas.pxd' not found
>
>
> Although it is hard to tell, the traceback doesn't give much useful
> information. I suspect this is due to a cython version mismatch, as it
> seems to be looking for cython_blas.pxd but only cython_blas.c is
> available. Have you seen this before?
>
>

That's code that was only introduced for 0.16.x; a ``git clean -xdf``
should fix this for you.

Next obstacle: I think it'll fail with Cython 0.22, you'll need a lower
Cython version (probably around 0.19.x).

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sturla.molden at gmail.com Sun Jun 21 13:42:25 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 21 Jun 2015 17:42:25 +0000 (UTC)
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: <1624542212456601116.887156sturla.molden-gmail.com@news.gmane.org>

Charles R Harris wrote:

> Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here
> _decomp_update.pyx:60:0: 'cython_blas.pxd' not found

Do you have a clean SciPy 0.13.3 source tree? cython_blas.pxd was
introduced in 0.16, and should not be in 0.13 at all.

Sturla

From charlesr.harris at gmail.com Sun Jun 21 13:49:47 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 21 Jun 2015 11:49:47 -0600
Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.
In-Reply-To: 
References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On Sun, Jun 21, 2015 at 9:57 AM, Ralf Gommers wrote:

>
>
> On Sun, Jun 21, 2015 at 5:45 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>
>>
>>
>> On Sun, Jun 21, 2015 at 9:31 AM, Ralf Gommers
>> wrote:
>>
>>>
>>>
>>> On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris <
>>> charlesr.harris at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris <
>>>>> charlesr.harris at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden <
>>>>>> sturla.molden at gmail.com> wrote:
>>>>>>
>>>>>>> Charles R Harris wrote:
>>>>>>>
>>>>>>> > I'm looking to change some numpy deprecations into errors as well
>>>>>>> as remove
>>>>>>> > some deprecated functions. The problem I see is that
>>>>>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really,
>>>>>>> really, old.
>>>>>>> > So the question is, does "support" mean compiles with earlier
>>>>>>> versions
>>>>>>> > of Numpy ?
>>>>>>>
>>>>>>> It means there is a Travis CI build with NumPy 1.6.2.
So any change >>>>>> to the >>>>>> SciPy source code must compile with NumPy 1.6 and any later version of >>>>>> NumPy. >>>>>> >>>>>> There is no Travis CI build with NumPy 1.5. I don't think we know for >>>>>> sure >>>>>> if it is really compatible with the current SciPy. >>>>>> >>>>> >>> There is still a reference to 1.5 in Scipy, I forget where. >>> >> >> In INSTALL.rst.txt, will fix that now. >> >> > Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here > > Error compiling Cython file: > ------------------------------------------------------------ > ... > # and object. In this file, only NULL is passed to these parameters. > cdef extern from *: > cnp.ndarray PyArray_CheckFromAny(object, void*, int, int, int, void*) > cnp.ndarray PyArray_FromArray(cnp.ndarray, void*, int) > > from . cimport cython_blas as blas_pointers > ^ > ------------------------------------------------------------ > > _decomp_update.pyx:60:0: 'cython_blas.pxd' not found > > > Although it is hard to tell, the traceback doesn't give much useful > information. I suspect this is due to a cython version mismatch, as it > seems to be looking for cython_blas.pxd but only cython_blas.c is > available. Have you seen this before? > > That's code that was only introduced for 0.16.x; a ``git clean -xdf`` should fix this for you. Next obstacle: I think it'll fail with Cython 0.22, you'll need a lower Cython version (probably around 0.19.x). Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sun Jun 21 13:42:25 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Sun, 21 Jun 2015 17:42:25 +0000 (UTC) Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: <1624542212456601116.887156sturla.molden-gmail.com@news.gmane.org> Charles R Harris wrote: > Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail her > _decomp_update.pyx:60:0: 'cython_blas.pxd' not found Do you have a clean SciPy 0.13.3 source tree? cython_blas.pxd is introduced in 0.16, and should be in 0.13 at all. Sturla From charlesr.harris at gmail.com Sun Jun 21 13:49:47 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Jun 2015 11:49:47 -0600 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sun, Jun 21, 2015 at 9:57 AM, Ralf Gommers wrote: > > > On Sun, Jun 21, 2015 at 5:45 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Jun 21, 2015 at 9:31 AM, Ralf Gommers >> wrote: >> >>> >>> >>> On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> >>>> On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers >>>> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris < >>>>> charlesr.harris at gmail.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden < >>>>>> sturla.molden at gmail.com> wrote: >>>>>> >>>>>>> Charles R Harris wrote: >>>>>>> >>>>>>> > I'm looking to change some numpy deprecations into errors as well >>>>>>> as remove >>>>>>> > some deprecated functions. The problem I see is that >>>>>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, >>>>>>> really, old. 
>>>>>>> > So the question is, does "support" mean compiles with earlier >>>>>>> versions >>>>>>> > of Numpy ? >>>>>>> >>>>>>> It means there is a Travis CI build with NumPy 1.6.2. So any change >>>>>>> to the >>>>>>> SciPy source code must compile with NumPy 1.6 and any later version >>>>>>> of >>>>>>> NumPy. >>>>>>> >>>>>>> There is no Travis CI build with NumPy 1.5. I don't think we know >>>>>>> for sure >>>>>>> if it is really compatible with the current SciPy. >>>>>>> >>>>>> >>>> There is still a reference to 1.5 in Scipy, I forget where. >>>> >>> >>> In INSTALL.rst.txt, will fix that now. >>> >>> >> Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here >> >> Error compiling Cython file: >> ------------------------------------------------------------ >> ... >> # and object. In this file, only NULL is passed to these parameters. >> cdef extern from *: >> cnp.ndarray PyArray_CheckFromAny(object, void*, int, int, int, void*) >> cnp.ndarray PyArray_FromArray(cnp.ndarray, void*, int) >> >> from . cimport cython_blas as blas_pointers >> ^ >> ------------------------------------------------------------ >> >> _decomp_update.pyx:60:0: 'cython_blas.pxd' not found >> >> >> Although it is hard to tell, the traceback doesn't give much useful >> information. I suspect this is due to a cython version mismatch, as it >> seems to be looking for cython_blas.pxd but only cython_blas.c is >> available. Have you seen this before? >> >> > That's code that was only introduced for 0.16.x; a ``git clean -xdf`` > should fix this for you. > > Next obstacle: I think it'll fail with Cython 0.22, you'll need a lower > Cython version (probably around 0.19.x). > > Looks like Scipy 0.13.3 is OK against master apart from a bunch of runtime errors due to deprecation warnings, precision changes, TypeErrors due to default casting rule changes, and new runtime warnings about empty slices. I wouldn't recommend it for use with Numpy 1.10, but it is probably not fatal to do so. Nothing changes with the deprecation removals added. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From mansourmoufid at gmail.com Sun Jun 21 13:53:02 2015 From: mansourmoufid at gmail.com (Mansour Moufid) Date: Sun, 21 Jun 2015 13:53:02 -0400 Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy? In-Reply-To: <1986779260456320291.558031sturla.molden-gmail.com@news.gmane.org> References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu> <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu> <1986779260456320291.558031sturla.molden-gmail.com@news.gmane.org> Message-ID: I just realized that NumPy uses the time domain algorithm for correlation. So it would be much easier to modify the correlation functions in SciPy than in NumPy. From charlesr.harris at gmail.com Sun Jun 21 14:13:50 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 21 Jun 2015 12:13:50 -0600 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. 
In-Reply-To: References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sun, Jun 21, 2015 at 11:49 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Jun 21, 2015 at 9:57 AM, Ralf Gommers > wrote: > >> >> >> On Sun, Jun 21, 2015 at 5:45 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Jun 21, 2015 at 9:31 AM, Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Sun, Jun 21, 2015 at 5:13 PM, Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> >>>>> On Sun, Jun 21, 2015 at 7:14 AM, Ralf Gommers >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Fri, Jun 19, 2015 at 11:52 PM, Charles R Harris < >>>>>> charlesr.harris at gmail.com> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, Jun 19, 2015 at 3:05 PM, Sturla Molden < >>>>>>> sturla.molden at gmail.com> wrote: >>>>>>> >>>>>>>> Charles R Harris wrote: >>>>>>>> >>>>>>>> > I'm looking to change some numpy deprecations into errors as well >>>>>>>> as remove >>>>>>>> > some deprecated functions. The problem I see is that >>>>>>>> > SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, >>>>>>>> really, old. >>>>>>>> > So the question is, does "support" mean compiles with earlier >>>>>>>> versions >>>>>>>> > of Numpy ? >>>>>>>> >>>>>>>> It means there is a Travis CI build with NumPy 1.6.2. So any change >>>>>>>> to the >>>>>>>> SciPy source code must compile with NumPy 1.6 and any later version >>>>>>>> of >>>>>>>> NumPy. >>>>>>>> >>>>>>>> There is no Travis CI build with NumPy 1.5. I don't think we know >>>>>>>> for sure >>>>>>>> if it is really compatible with the current SciPy. >>>>>>>> >>>>>>> >>>>> There is still a reference to 1.5 in Scipy, I forget where. >>>>> >>>> >>>> In INSTALL.rst.txt, will fix that now. >>>> >>>> >>> Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here >>> >>> Error compiling Cython file: >>> ------------------------------------------------------------ >>> ... >>> # and object. In this file, only NULL is passed to these parameters. >>> cdef extern from *: >>> cnp.ndarray PyArray_CheckFromAny(object, void*, int, int, int, void*) >>> cnp.ndarray PyArray_FromArray(cnp.ndarray, void*, int) >>> >>> from . cimport cython_blas as blas_pointers >>> ^ >>> ------------------------------------------------------------ >>> >>> _decomp_update.pyx:60:0: 'cython_blas.pxd' not found >>> >>> >>> Although it is hard to tell, the traceback doesn't give much useful >>> information. I suspect this is due to a cython version mismatch, as it >>> seems to be looking for cython_blas.pxd but only cython_blas.c is >>> available. Have you seen this before? >>> >>> >> That's code that was only introduced for 0.16.x; a ``git clean -xdf`` >> should fix this for you. >> >> Next obstacle: I think it'll fail with Cython 0.22, you'll need a lower >> Cython version (probably around 0.19.x). >> >> > Looks like Scipy 0.13.3 is OK against master apart from a bunch of runtime > errors due to deprecation warnings, precision changes, TypeErrors due to > default casting rule changes, and new runtime warnings about empty slices. > I wouldn't recommend it for use with Numpy 1.10, but it is probably not > fatal to do so. Nothing changes with the deprecation removals added. > > Scipy 0.14.1 is clean except for InvalidValue warnings and is probably the earliest I'd recommend as "safe". It was released 6 months ago. 
Scipy 0.14.0 actually has fewer errors, those resulting from the changes to default casting rules, so is probably usable also, it was released about a year ago. Ralf, thoughts? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 21 15:37:22 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 21 Jun 2015 21:37:22 +0200 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sun, Jun 21, 2015 at 7:49 PM, Charles R Harris wrote: > > > On Sun, Jun 21, 2015 at 9:57 AM, Ralf Gommers > wrote: > >> >> That's code that was only introduced for 0.16.x; a ``git clean -xdf`` >> should fix this for you. >> >> Next obstacle: I think it'll fail with Cython 0.22, you'll need a lower >> Cython version (probably around 0.19.x). >> >> > Looks like Scipy 0.13.3 is OK against master apart from a bunch of runtime > errors due to deprecation warnings, precision changes, TypeErrors due to > default casting rule changes, and new runtime warnings about empty slices. > I wouldn't recommend it for use with Numpy 1.10, but it is probably not > fatal to do so. > Thanks for checking. > Nothing changes with the deprecation removals added. > You mean the deprecations in your open PR, not old_defines.h right? Without old_defines.h Scipy 0.13.3 doesn't build. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jun 21 15:47:18 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 21 Jun 2015 21:47:18 +0200 Subject: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements. In-Reply-To: References: <312514302456440083.589282sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sun, Jun 21, 2015 at 8:13 PM, Charles R Harris wrote: > > > On Sun, Jun 21, 2015 at 11:49 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> Looks like Scipy 0.13.3 is OK against master apart from a bunch of >> runtime errors due to deprecation warnings, >> > Note that you only get RuntimeWarnings with numpy master, not with a released version (due to switching tests to 'release' mode). > precision changes, >> > That can always happen, those are usually harmless. > TypeErrors due to default casting rule changes, >> > That's the casting='same_kind' I assume? We did that on purpose and thought about it quite hard, so that's OK. If there are other, unintended casting rule changes then I'm not sure. > and new runtime warnings about empty slices. >> > Also not an issue, because they were added on purpose. I think those warnings are a bit too intrusive at the moment, but that's unrelated to Scipy 0.13.3 > I wouldn't recommend it for use with Numpy 1.10, but it is probably not >> fatal to do so. Nothing changes with the deprecation removals added. >> >> > Scipy 0.14.1 is clean except for InvalidValue warnings and is probably the > earliest I'd recommend as "safe". It was released 6 months ago. Scipy > 0.14.0 actually has fewer errors, those resulting from the changes to > default casting rules, so is probably usable also, it was released about a > year ago. > > Ralf, thoughts? > Sounds like we managed to not break anything seriously in numpy master recently, so branching 1.10.x seems OK from this point of view. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From honi at brandeis.edu Sun Jun 21 18:46:08 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Sun, 21 Jun 2015 18:46:08 -0400
Subject: [Numpy-discussion] How to limit cross correlation window width in Numpy?
In-Reply-To: 
References: <2C882037-0653-41DC-B2AF-F87B51C6E11B@brandeis.edu> <07A9FB09-CD74-4723-AA3E-85AFCF042B41@brandeis.edu> <1986779260456320291.558031sturla.molden-gmail.com@news.gmane.org>
Message-ID: <70E84681-F94C-4464-83B2-12A83EBB4EE4@brandeis.edu>

Did you check out my implementation? I was able to modify the Numpy
correlate function just fine.
https://github.com/numpy/numpy/compare/master...bringingheavendown:maxlag

> On Jun 21, 2015, at 1:53 PM, Mansour Moufid wrote:
>
> I just realized that NumPy uses the time domain algorithm for correlation.
> So it would be much easier to modify the correlation functions in SciPy
> than in NumPy.
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From honi at brandeis.edu Sun Jun 21 18:48:07 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Sun, 21 Jun 2015 18:48:07 -0400
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: 
References: 
Message-ID: <0ABE7B8B-3242-409A-A982-F8F320D28C07@brandeis.edu>

OK. So I am in the midst of a pull request adding a 'maxlag' mode to
correlate
(https://github.com/numpy/numpy/compare/master...bringingheavendown:maxlag).
Am I to understand that I no longer need to preserve the old_behavior
functionality? Is it possible that we could address my pull request and
then remove the old_behavior functionality because now my pull request is
unmergeable.

Honi

> On Jun 20, 2015, at 5:32 PM, Nathaniel Smith wrote:
>
> On Jun 20, 2015 1:43 PM, "Charles R Harris"
> wrote:
> >
> > Hi All,
> >
> > There are three long ago deprecations that I am not sure how to handle.
> >
> > keywords skiprows and missing in genfromtxt, deprecated in 1.5.
> > keyword old_behavior (default False) in correlate. added in 1.5 at
> least, but default value changed later.
> >
> > The documentation says they will be removed in numpy 2.0, but we might
> want to try earlier. The case of the correlation function is trickier, as
> we probably need to provide a function with the old behavior before
> removing the keyword.
>
> Wouldn't this function just be
> correlate(a, conj(b))
> ? Surely just writing that is easier and clearer than any function call we
> could provide.
>
> > I've left these cases as is, but the more old stuff hanging about the
> greater our technical debt.
>
> I guess we could try dropping them from the first release candidate and at
> least get some data on whether anyone notices.
>
> 1.5 was a lonnnng time ago.
>
> -n
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
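Nathaniel's correlate(a, conj(b)) replacement quoted above is easy to
check numerically. A small sketch, with arbitrary example arrays:

import numpy as np

a = np.array([1 + 1j, 2 - 1j, 0.5j])
b = np.array([1 - 2j, 0 + 1j])

# The current np.correlate conjugates its second argument; wrapping b in
# np.conj undoes that, which reproduces the removed no-conjugate
# "old behavior" as a one-liner.
print(np.correlate(a, b, mode='full'))
print(np.correlate(a, np.conj(b), mode='full'))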
From ralf.gommers at gmail.com Mon Jun 22 01:46:34 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 22 Jun 2015 07:46:34 +0200
Subject: [Numpy-discussion] Removal of Deprecated Keywords/functionality
In-Reply-To: <0ABE7B8B-3242-409A-A982-F8F320D28C07@brandeis.edu>
References: <0ABE7B8B-3242-409A-A982-F8F320D28C07@brandeis.edu>
Message-ID: 

On Mon, Jun 22, 2015 at 12:48 AM, Honi Sanders wrote:

> OK. So I am in the midst of a pull request adding a 'maxlag' mode to
> correlate (
> https://github.com/numpy/numpy/compare/master...bringingheavendown:maxlag).
> Am I to understand that I no longer need to preserve the old_behavior
> functionality?
>

Indeed. It was scheduled for ripping out for a long time, and Chuck
finally got that done.

> Is it possible that we could address my pull request and then remove the
> old_behavior functionality because now my pull request is unmergeable.
>

We can't undo the merge that made your PR need a rebase, but I'm happy to
help you with the rebase and getting that into your branch if needed.

Ralf

> Honi
>
> On Jun 20, 2015, at 5:32 PM, Nathaniel Smith wrote:
>
> On Jun 20, 2015 1:43 PM, "Charles R Harris"
> wrote:
> >
> > Hi All,
> >
> > There are three long ago deprecations that I am not sure how to handle.
> >
> > keywords skiprows and missing in genfromtxt, deprecated in 1.5.
> > keyword old_behavior (default False) in correlate. added in 1.5 at
> least, but default value changed later.
> >
> > The documentation says they will be removed in numpy 2.0, but we might
> want to try earlier. The case of the correlation function is trickier, as
> we probably need to provide a function with the old behavior before
> removing the keyword.
>
> Wouldn't this function just be
> correlate(a, conj(b))
> ? Surely just writing that is easier and clearer than any function call we
> could provide.
>
> > I've left these cases as is, but the more old stuff hanging about the
> greater our technical debt.
>
> I guess we could try dropping them from the first release candidate and at
> least get some data on whether anyone notices.
>
> 1.5 was a lonnnng time ago.
>
> -n
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From insertinterestingnamehere at gmail.com Tue Jun 23 00:43:13 2015
From: insertinterestingnamehere at gmail.com (Ian Henriksen)
Date: Tue, 23 Jun 2015 04:43:13 +0000
Subject: [Numpy-discussion] PR added: frozen dimensions in gufunc signatures
In-Reply-To: 
References: 
Message-ID: 

On Fri, Aug 29, 2014 at 2:55 AM Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Thu, Aug 28, 2014 at 5:40 PM, Nathaniel Smith wrote:
>
>> Some thoughts:
>>
>> But, for your computed dimension idea I'm wondering if what we should
>> do instead is just let a gufunc provide a C callback that looks at the
>> input array dimensions and explicitly says somehow which dimensions it
>> wants to treat as the core dimensions and what its output shapes will
>> be. There's no rule that we have to extend the signature mini-language
>> to be Turing complete, we can just use C :-).
>>
>> It would be good to have a better motivation for computed gufunc
>> dimensions, though. Your "all pairwise cross products" example would
>> be *much* better handled by implementing the .outer method for binary
>> gufuncs: pairwise_cross(a) == cross.outer(a, a). This would make
>> gufuncs more consistent with ufuncs, plus let you do
>> all-pairwise-cross-products between two different sets of cross
>> products, plus give us all-pairwise-matrix-products for free, etc.
>>
>
> The outer for binary gufuncs sounds like a good idea.
A reduce for binary
> gufuncs that allow it (like square matrix multiplication) would also be
> nice. But going back to the original question, the pairwise whatevers were
> just an example: one could come up with several others, e.g.:
>
> (m),(n)->($p),($q) with $p = m - n and $q = n - 1, could be (I think)
> the signature of a polynomial division gufunc
> (m),(n)->($p), with $p = m - n + 1, could be the signature of a
> convolution or correlation gufunc
> (m)->($n), with $n = m / 2, could be some form of downsampling gufunc
>
>
>> While you're messing around with the gufunc dimension matching logic,
>> any chance we can tempt you to implement the "optional dimensions"
>> needed to handle '@', solve, etc. elegantly? The rule would be that
>> you can write something like
>> (n?,k),(k,m?)->(n?,m?)
>> and the ? dimensions are allowed to take on an additional value
>> "nothing at all". If there's no dimension available in the input, then
>> we act like it was reshaped to add a dimension with shape 1, and then
>> in the output we squeeze this dimension out again. I guess the rules
>> would be that (1) in the input, you can have ? dimensions at the
>> beginning or the end of your shape, but not both at the same time, (2)
>> any dimension that has a ? in one place must have it in all places,
>> (3) when checking argument conformity, "nothing at all" only matches
>> against "nothing at all", not against 1; this is because if we allowed
>> (n?,m),(n?,m)->(n?,m) to be applied to two arrays with shapes (5,) and
>> (1, 5), then it would be ambiguous whether the output should have
>> shape (5,) or (1, 5).
>>
>
> I definitely do not mind taking a look into it. I need to think a little
> more about the rules to convince myself that there is a consistent set of
> them that we can use. I also thought there may be a performance concern,
> that you may want to have different implementations when dimensions are
> missing, not automatically add a 1 and then remove it. It doesn't seem to
> be the case with either `np.dot` or `np.solve`, so maybe I am being
> overly cautious.
>
> Thanks for your comments and ideas. I have a feeling there are some nice
> features hidden in here, but I can't seem to figure out what they should
> be on my own.
>
> Jaime
>
> --
> (\__/)
> ( O.o)
> ( > <) This is Bunny. Copy Bunny into your signature and help him with
> his plans for world domination.
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

I'm not sure where this is at, given the current amount of work that is
coming from the 1.10 release, but this sounds like a really great idea.
Computed/fixed dimensions would allow gufuncs for things like:

- polynomial multiplication, division, differentiation, and integration
- convolutions
- views of different types (see the corresponding discussion at
http://permalink.gmane.org/gmane.comp.python.numeric.general/59847).

Some of these examples would work better with gufuncs that can construct
views and have an axes keyword, but this is exactly the kind of
functionality that would be really great to have.

Thanks for the great work!

-Ian Henriksen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
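For reference, a small sketch of the dimension arithmetic such a signature
would encode, using plain NumPy calls rather than any gufunc machinery:

import numpy as np

m, n = 10, 3
a = np.random.rand(m)
v = np.random.rand(n)

# 'valid'-mode convolution/correlation already produce outputs of length
# p = m - n + 1, exactly the computed dimension that the proposed
# (m),(n)->($p) signature would express.
assert np.convolve(a, v, mode='valid').shape == (m - n + 1,)
assert np.correlate(a, v, mode='valid').shape == (m - n + 1,)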
From oscar.villellas at continuum.io Tue Jun 23 05:33:56 2015
From: oscar.villellas at continuum.io (Oscar Villellas)
Date: Tue, 23 Jun 2015 11:33:56 +0200
Subject: [Numpy-discussion] PR added: frozen dimensions in gufunc signatures
In-Reply-To: 
References: 
Message-ID: 

On Fri, Aug 29, 2014 at 10:55 AM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Thu, Aug 28, 2014 at 5:40 PM, Nathaniel Smith wrote:
>
>> Some thoughts:
>>
>> But, for your computed dimension idea I'm wondering if what we should
>> do instead is just let a gufunc provide a C callback that looks at the
>> input array dimensions and explicitly says somehow which dimensions it
>> wants to treat as the core dimensions and what its output shapes will
>> be. There's no rule that we have to extend the signature mini-language
>> to be Turing complete, we can just use C :-).
>>
>> It would be good to have a better motivation for computed gufunc
>> dimensions, though. Your "all pairwise cross products" example would
>> be *much* better handled by implementing the .outer method for binary
>> gufuncs: pairwise_cross(a) == cross.outer(a, a). This would make
>> gufuncs more consistent with ufuncs, plus let you do
>> all-pairwise-cross-products between two different sets of cross
>> products, plus give us all-pairwise-matrix-products for free, etc.
>>
>
> The outer for binary gufuncs sounds like a good idea. A reduce for binary
> gufuncs that allow it (like square matrix multiplication) would also be
> nice. But going back to the original question, the pairwise whatevers were
> just an example: one could come up with several others, e.g.:
>
> (m),(n)->($p),($q) with $p = m - n and $q = n - 1, could be (I think)
> the signature of a polynomial division gufunc
> (m),(n)->($p), with $p = m - n + 1, could be the signature of a
> convolution or correlation gufunc
> (m)->($n), with $n = m / 2, could be some form of downsampling gufunc
>
>
An example where a computed output dimension would be useful is with
linalg.svd, as some resulting dimensions for a matrix (m, n) are based on
min(m, n). This, coupled with the required keyword support makes it
necessary to have 6 gufuncs to support the functionality.

I do think that the C callback solution would be enough, and just allow
the signature to have unbound variables that can be resolved by that
callback... no need to change the syntax:

(m),(n)->(p),(q)

When registering such a gufunc, a callback function that resolves the
missing dimensions would be required.

Extra niceties that could be built on top of that:

- pass keyword arguments to that function so that stuff like
full_matrices could be resolved inside the gufunc. Maybe even allowing to
modify the number of results (harder) that would be needed to support
stuff like "compute_uv" in svd as well.
- allow context to be created in that resolution that gets passed into
the ufunc kernel itself (note that this might be *necessary*). If context
is created another function would be needed to dispose that context.

In my experience when implementing the linalg gufunc, a very common
pattern was needing some buffers for the actual LAPACK calls (as those
functions are inplace, a tmp buffer was always needed). Some setup and
buffer allocation was performed before looping. Every iteration in the
inner loop will reuse that data and at the end of the loop the buffers
will be released. That means the initialization/allocation/release is done
once per inner loop call.
If the hooks to allocate/dispose the context existed, that
initialization/allocation/release could be done once per ufunc call.
AFAIK, a ufunc call can involve several inner loop calls depending on
outer dimensions and layout of the operands.

>> While you're messing around with the gufunc dimension matching logic,
>> any chance we can tempt you to implement the "optional dimensions"
>> needed to handle '@', solve, etc. elegantly? The rule would be that
>> you can write something like
>> (n?,k),(k,m?)->(n?,m?)
>> and the ? dimensions are allowed to take on an additional value
>> "nothing at all". If there's no dimension available in the input, then
>> we act like it was reshaped to add a dimension with shape 1, and then
>> in the output we squeeze this dimension out again. I guess the rules
>> would be that (1) in the input, you can have ? dimensions at the
>> beginning or the end of your shape, but not both at the same time, (2)
>> any dimension that has a ? in one place must have it in all places,
>> (3) when checking argument conformity, "nothing at all" only matches
>> against "nothing at all", not against 1; this is because if we allowed
>> (n?,m),(n?,m)->(n?,m) to be applied to two arrays with shapes (5,) and
>> (1, 5), then it would be ambiguous whether the output should have
>> shape (5,) or (1, 5).
>>
>
> I definitely do not mind taking a look into it. I need to think a little
> more about the rules to convince myself that there is a consistent set of
> them that we can use. I also thought there may be a performance concern,
> that you may want to have different implementations when dimensions are
> missing, not automatically add a 1 and then remove it. It doesn't seem to
> be the case with either `np.dot` or `np.solve`, so maybe I am being
> overly cautious.
>
> Thanks for your comments and ideas. I have a feeling there are some nice
> features hidden in here, but I can't seem to figure out what they should
> be on my own.
>
> Jaime
>
> --
> (\__/)
> ( O.o)
> ( > <) This is Bunny. Copy Bunny into your signature and help him with
> his plans for world domination.
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Tue Jun 23 12:33:11 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 23 Jun 2015 12:33:11 -0400
Subject: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)
In-Reply-To: 
References: <55825CDB.80002@fysik.dtu.dk>
Message-ID: 

On Fri, Jun 19, 2015 at 4:15 PM, Chris Barker wrote:

> On Wed, Jun 17, 2015 at 11:13 PM, Nathaniel Smith wrote:
>
>> there's some
>> argument that in Python, doing explicit type checks like this is
>> usually a sign that one is doing something awkward,
>
> I tend to agree with that.
>
> On the other hand, numpy itself is kind-of sort-of statically typed. But
> in that case, if you need to know the type of an array -- check the
> array's dtype.
>
> Also:
>
> >>> a = np.zeros(7, int)
> >>> n = a[3]
> >>> type(n)
>
> I Never liked declaring numpy arrays with the python types like "int" or
> "float" -- in numpy you usually care more about the type, so should simply
> use "int64" if you want a 64 bit int. And "float64" if you want a 64 bit
> float. Granted, python floats have always been float64 (on all
> platforms??), and python ints used to be a reasonable int type, but now
> that python ints are bigInts in py3, it really makes sense to be clear.
>
> And now that I think about it, in py2, int is 32 bit on win64 and 64 bit
> on *nix64 -- so you're really better off being explicit with your numpy
> arrays.
>
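The check from the subject line itself, for reference. A minimal sketch
assuming Python 3, where np.integer is the common base class of all NumPy
integer scalar types:

import numpy as np

x = np.int64(42)
print(isinstance(x, int))                # False on Python 3
print(isinstance(x, np.integer))         # True for any NumPy integer scalar
print(isinstance(x, (int, np.integer)))  # True: accepts both kinds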
> Granted, Python floats have always been float64 (on all platforms??),
> and Python ints used to be a reasonable int type, but now that Python
> ints are bigints in py3, it really makes sense to be clear.
>
> And now that I think about it, in py2, int is 32 bit on win64 and 64 bit
> on *nix64 -- so you're really better off being explicit with your numpy
> arrays.

Being late, checking some examples:

>>> a = np.zeros(7, int)
>>> a.dtype
dtype('int32')
>>> np.__version__
'1.9.2rc1'
>>> type(a[3])
<class 'numpy.int32'>
>>> a = np.zeros(7, int)
>>> a = np.array([888888888888888888])
>>> a
array([888888888888888888], dtype=int64)
>>> a = np.array([888888888888888888888888888888888])
>>> a
array([888888888888888888888888888888888], dtype=object)
>>> a = np.array([888888888888888888888888888888888], dtype=int)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    a = np.array([888888888888888888888888888888888], dtype=int)
OverflowError: Python int too large to convert to C long

Looks like we need to be a bit more careful now.

Josef

Python 3.4.3

> -CHB
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R (206) 526-6959 voice
> 7600 Sand Point Way NE (206) 526-6329 fax
> Seattle, WA 98115 (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov

From njs at pobox.com Fri Jun 26 05:32:28 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 26 Jun 2015 02:32:28 -0700
Subject: [Numpy-discussion] Video meeting this week
Message-ID:

Hi all,

In a week and a half, this is happening:

https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting

It's somewhat short notice (my bad :-/), but I think it would be good to have a short video meeting sometime this week as a kind of "pre-meeting" -- to at least briefly go over the main issues we see facing the project to prime the pump, get a better idea about what we want to accomplish at the meeting itself, and gather some early feedback from anyone who won't be able to make it to SciPy (we'll miss you).

The obligatory doodle:
http://doodle.com/6b4s6thqt9xt4vnh

Depending on the interest level, I'm thinking we'll either use Google Hangouts or Bluejeans (https://bluejeans.com/ -- same as what Ralf used for the similar SciPy meeting a few months ago; needs a plugin installed but is available for Windows / OS X / 64-bit Linux / Android / iOS, or regular telephone, or h323 softphone).

-n

--
Nathaniel J. Smith -- http://vorpus.org

From njs at pobox.com Mon Jun 29 12:04:45 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 29 Jun 2015 09:04:45 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID:

On Jun 26, 2015 2:32 AM, "Nathaniel Smith" wrote:
>
> Hi all,
>
> In a week and a half, this is happening:
>
> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>
> It's somewhat short notice (my bad :-/), but I think it would be good
> to have a short video meeting sometime this week as a kind of
> "pre-meeting" -- to at least briefly go over the main issues we see
> facing the project to prime the pump, get a better idea about what we
> want to accomplish at the meeting itself, and gather some early
> feedback from anyone who won't be able to make it to SciPy (we'll miss
> you).
>
> The obligatory doodle:
> http://doodle.com/6b4s6thqt9xt4vnh

It's looking like sometime Thursday between 12:00 and 15:00 California time (1900-2200 UTC) is most likely. Please fill this out now if you have an interest, and I'll declare a winner this afternoon...

-n

From njs at pobox.com Mon Jun 29 17:14:32 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 29 Jun 2015 14:14:32 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID:

On Mon, Jun 29, 2015 at 9:04 AM, Nathaniel Smith wrote:
> On Jun 26, 2015 2:32 AM, "Nathaniel Smith" wrote:
>>
>> Hi all,
>>
>> In a week and a half, this is happening:
>>
>> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>>
>> It's somewhat short notice (my bad :-/), but I think it would be good
>> to have a short video meeting sometime this week as a kind of
>> "pre-meeting" -- to at least briefly go over the main issues we see
>> facing the project to prime the pump, get a better idea about what we
>> want to accomplish at the meeting itself, and gather some early
>> feedback from anyone who won't be able to make it to SciPy (we'll miss
>> you).
>>
>> The obligatory doodle:
>> http://doodle.com/6b4s6thqt9xt4vnh
>
> It's looking like sometime Thursday between 12:00 and 15:00 California
> time (1900-2200 UTC) is most likely. Please fill this out now if you have
> an interest, and I'll declare a winner this afternoon...

Actually Wednesday 11:00-15:00 California time = 18:00-22:00 UTC is also viable -- Chuck/Sebastian/Julian/anyone else, any preference?

-n

--
Nathaniel J. Smith -- http://vorpus.org

From charlesr.harris at gmail.com Mon Jun 29 23:32:00 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 29 Jun 2015 21:32:00 -0600
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID:

Any of those times would work for me.

On Mon, Jun 29, 2015 at 3:14 PM, Nathaniel Smith wrote:

> On Mon, Jun 29, 2015 at 9:04 AM, Nathaniel Smith wrote:
> > On Jun 26, 2015 2:32 AM, "Nathaniel Smith" wrote:
> >>
> >> Hi all,
> >>
> >> In a week and a half, this is happening:
> >>
> >> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
> >>
> >> It's somewhat short notice (my bad :-/), but I think it would be good
> >> to have a short video meeting sometime this week as a kind of
> >> "pre-meeting" -- to at least briefly go over the main issues we see
> >> facing the project to prime the pump, get a better idea about what we
> >> want to accomplish at the meeting itself, and gather some early
> >> feedback from anyone who won't be able to make it to SciPy (we'll miss
> >> you).
> >>
> >> The obligatory doodle:
> >> http://doodle.com/6b4s6thqt9xt4vnh
> >
> > It's looking like sometime Thursday between 12:00 and 15:00 California
> > time (1900-2200 UTC) is most likely. Please fill this out now if you
> > have an interest, and I'll declare a winner this afternoon...
>
> Actually Wednesday 11:00-15:00 California time = 18:00-22:00 UTC is
> also viable -- Chuck/Sebastian/Julian/anyone else, any preference?
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
From njs at pobox.com Tue Jun 30 00:58:52 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 29 Jun 2015 21:58:52 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID:

On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel Smith wrote:
> Hi all,
>
> In a week and a half, this is happening:
>
> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>
> It's somewhat short notice (my bad :-/), but I think it would be good
> to have a short video meeting sometime this week as a kind of
> "pre-meeting" -- to at least briefly go over the main issues we see
> facing the project to prime the pump, get a better idea about what we
> want to accomplish at the meeting itself, and gather some early
> feedback from anyone who won't be able to make it to SciPy (we'll miss
> you).
>
> The obligatory doodle:
> http://doodle.com/6b4s6thqt9xt4vnh

Okay, let's aim for:

Thursday July 2 at 20:00 UTC.

I believe that's 1pm California / 4pm New York / 9pm London / 10pm western Europe.

And so far it looks like we'll be under the 10 person Google Hangouts limit, which I'm assuming is simpler for everybody, so let's assume we're doing that unless otherwise specified. (This does mean that I'd appreciate a quick email if you're planning on dialling in but haven't otherwise responded to the poll, though!)

-n

--
Nathaniel J. Smith -- http://vorpus.org

From jeffreback at gmail.com Tue Jun 30 06:16:00 2015
From: jeffreback at gmail.com (Jeff Reback)
Date: Tue, 30 Jun 2015 06:16:00 -0400
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID: <79C38017-4B5D-468D-ACB1-4AC69433083B@gmail.com>

You guys have an agenda? I can be reached on my cell 917-971-6387.

> On Jun 30, 2015, at 12:58 AM, Nathaniel Smith wrote:
>
>> On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel Smith wrote:
>> Hi all,
>>
>> In a week and a half, this is happening:
>>
>> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>>
>> It's somewhat short notice (my bad :-/), but I think it would be good
>> to have a short video meeting sometime this week as a kind of
>> "pre-meeting" -- to at least briefly go over the main issues we see
>> facing the project to prime the pump, get a better idea about what we
>> want to accomplish at the meeting itself, and gather some early
>> feedback from anyone who won't be able to make it to SciPy (we'll miss
>> you).
>>
>> The obligatory doodle:
>> http://doodle.com/6b4s6thqt9xt4vnh
>
> Okay, let's aim for:
>
> Thursday July 2 at 20:00 UTC.
>
> I believe that's 1pm California / 4pm New York / 9pm London / 10pm
> western Europe
>
> And so far it looks like we'll be under the 10 person Google Hangouts
> limit, which I'm assuming is simpler for everybody, so let's assume
> we're doing that unless otherwise specified. (This does mean that I'd
> appreciate a quick email if you're planning on dialling in but haven't
> otherwise responded to the poll, though!)
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
From honi at brandeis.edu Tue Jun 30 10:49:41 2015
From: honi at brandeis.edu (Honi Sanders)
Date: Tue, 30 Jun 2015 10:49:41 -0400
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID: <81EADCDE-F696-46A1-9784-00F5A3FFBC22@brandeis.edu>

I'm pretty new to Numpy so I won't be able to contribute much, but I would appreciate being able to sit in on this if possible.

Honi

> On Jun 30, 2015, at 12:58 AM, Nathaniel Smith wrote:
>
> On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel Smith wrote:
>> Hi all,
>>
>> In a week and a half, this is happening:
>>
>> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>>
>> It's somewhat short notice (my bad :-/), but I think it would be good
>> to have a short video meeting sometime this week as a kind of
>> "pre-meeting" -- to at least briefly go over the main issues we see
>> facing the project to prime the pump, get a better idea about what we
>> want to accomplish at the meeting itself, and gather some early
>> feedback from anyone who won't be able to make it to SciPy (we'll miss
>> you).
>>
>> The obligatory doodle:
>> http://doodle.com/6b4s6thqt9xt4vnh
>
> Okay, let's aim for:
>
> Thursday July 2 at 20:00 UTC.
>
> I believe that's 1pm California / 4pm New York / 9pm London / 10pm
> western Europe
>
> And so far it looks like we'll be under the 10 person Google Hangouts
> limit, which I'm assuming is simpler for everybody, so let's assume
> we're doing that unless otherwise specified. (This does mean that I'd
> appreciate a quick email if you're planning on dialling in but haven't
> otherwise responded to the poll, though!)
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org

From ralf.gommers at gmail.com Tue Jun 30 16:51:36 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 30 Jun 2015 22:51:36 +0200
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: <79C38017-4B5D-468D-ACB1-4AC69433083B@gmail.com> References: <79C38017-4B5D-468D-ACB1-4AC69433083B@gmail.com> Message-ID:

On Tue, Jun 30, 2015 at 12:16 PM, Jeff Reback wrote:
> you guys have an agenda?

I'm guessing a subset of what's listed on
https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting

Would indeed be good to make a proper agenda, so we can prepare properly for the meeting at SciPy. There are some topics (like commit rights) that can easily fill the whole call and are better done in person instead.
@Nathaniel: as the organizer, do you want to make a proposal?
Ralf

From njs at pobox.com Tue Jun 30 19:44:11 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 30 Jun 2015 16:44:11 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: <81EADCDE-F696-46A1-9784-00F5A3FFBC22@brandeis.edu> References: <81EADCDE-F696-46A1-9784-00F5A3FFBC22@brandeis.edu> Message-ID:

On Tue, Jun 30, 2015 at 7:49 AM, Honi Sanders wrote:
> I'm pretty new to Numpy so I won't be able to contribute much, but I
> would appreciate being able to sit in on this if possible.

You certainly would be welcome.

-n

--
Nathaniel J. Smith -- http://vorpus.org

From njs at pobox.com Tue Jun 30 20:12:44 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 30 Jun 2015 17:12:44 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: <79C38017-4B5D-468D-ACB1-4AC69433083B@gmail.com> Message-ID:

On Tue, Jun 30, 2015 at 1:51 PM, Ralf Gommers wrote:
>
> On Tue, Jun 30, 2015 at 12:16 PM, Jeff Reback wrote:
>>
>> you guys have an agenda?
>
> I'm guessing a subset of what's listed on
> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>
> Would indeed be good to make a proper agenda, so we can prepare properly
> for the meeting at SciPy. There are some topics (like commit rights) that
> can easily fill the whole call and are better done in person instead.
> @Nathaniel: as the organizer, do you want to make a proposal?

Yep.
Just reorganized the wiki page a bit to hopefully highlight the big picture stuff (which I'm thinking is the stuff that's the highest priority for our limited face-to-face time?), and made a world-writeable google doc for us to use for the call here:

https://docs.google.com/document/d/11KC2p3cCsbDVjLcQSCehUiWGyWDNCyOunKfrO7Q7m3E/edit?usp=sharing

Copy/pasting the proposed agenda for the call from that google doc:

* Logistics for next week:
** What time do we want to start?
** What are our main goals for the meeting?
** How do we want to organize our time? (Maybe just sitting in a meeting
room grinding through an agenda for 10 hours straight is not the best
approach.)
** Are there any topics that e.g. we should plan to discuss at a
particular time so Ralf can plan to join remotely?
** Stuff like that.
* Do a *brief* pass through the list of topics for the meeting listed on
the wiki page (linked above). The goal here is not to actually have a
detailed discussion, but just:
** Get a general sense of where we stand so people can come prepared next
week
** Get feedback on these topics from those who won't be present for the
meeting next week

-n

--
Nathaniel J. Smith -- http://vorpus.org

From chris.barker at noaa.gov Tue Jun 30 20:38:09 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 30 Jun 2015 17:38:09 -0700
Subject: [Numpy-discussion] Video meeting this week
In-Reply-To: References: Message-ID: <-6313295440405203876@unknownmsgid>

I _may_ be able to join -- but don't go setting up an alternative conferencing system just for me. But I'm planning on being in Austin Tues in any case.

-Chris

Sent from my iPhone

> On Jun 29, 2015, at 9:59 PM, Nathaniel Smith wrote:
>
>> On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel Smith wrote:
>> Hi all,
>>
>> In a week and a half, this is happening:
>>
>> https://github.com/numpy/numpy/wiki/SciPy-2015-developer-meeting
>>
>> It's somewhat short notice (my bad :-/), but I think it would be good
>> to have a short video meeting sometime this week as a kind of
>> "pre-meeting" -- to at least briefly go over the main issues we see
>> facing the project to prime the pump, get a better idea about what we
>> want to accomplish at the meeting itself, and gather some early
>> feedback from anyone who won't be able to make it to SciPy (we'll miss
>> you).
>>
>> The obligatory doodle:
>> http://doodle.com/6b4s6thqt9xt4vnh
>
> Okay, let's aim for:
>
> Thursday July 2 at 20:00 UTC.
>
> I believe that's 1pm California / 4pm New York / 9pm London / 10pm
> western Europe
>
> And so far it looks like we'll be under the 10 person Google Hangouts
> limit, which I'm assuming is simpler for everybody, so let's assume
> we're doing that unless otherwise specified. (This does mean that I'd
> appreciate a quick email if you're planning on dialling in but haven't
> otherwise responded to the poll, though!)
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
From josef.pktd at gmail.com Tue Jun 30 23:58:12 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 30 Jun 2015 23:58:12 -0400
Subject: [Numpy-discussion] annoying Deprecation warnings about non-integers
Message-ID:

I'm trying to fix some code in statsmodels that creates DeprecationWarnings from numpy. Most of them are quite easy to fix, mainly cases where we use floats to avoid integer division. I have two problems.

First, I get Deprecation warnings in the test run that don't specify where they happen. I try to find them with file searches, but I don't see a `np.ones` that might cause a problem (needle in a haystack: close to 4000 unittests and more than 100,000 lines of numpython). Also, I'm not sure the warnings are only from statsmodels; they could be in numpy, scipy or pandas, couldn't they?

Second, what's wrong with non-integers in `np.r_[[np.nan] * head, x, [np.nan] * tail]` (see below)?

I tried to set the warnings filter to `error`, but then Python itself errored right away.

https://travis-ci.org/statsmodels/statsmodels/jobs/68748936
https://github.com/statsmodels/statsmodels/issues/2480

Thanks for any clues

Josef

>nosetests -s --pdb-failures --pdb "M:\j\statsmodels\statsmodels_py34\statsmodels\tsa\tests"
..................C:\WinPython-64bit-3.4.3.1\python-3.4.3.amd64\lib\site-packages\numpy\core\numeric.py:183: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  a = empty(shape, dtype, order)
.................M:\j\statsmodels\statsmodels_py34\statsmodels\tsa\filters\filtertools.py:28: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  return np.r_[[np.nan] * head, x, [np.nan] * tail]
.............................................C:\WinPython-64bit-3.4.3.1\python-3.4.3.amd64\lib\site-packages\numpy\lib\twodim_base.py:231: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  m = zeros((N, M), dtype=dtype)
C:\WinPython-64bit-3.4.3.1\python-3.4.3.amd64\lib\site-packages\numpy\lib\twodim_base.py:238: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  m[:M-k].flat[i::M+1] = 1
...........
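A sketch of two things that may help here. For the first problem, a targeted filter turns only this particular message into an error, so the traceback shows exactly where each warning originates, without a blanket `simplefilter('error')` tripping over every unrelated warning in Python and other libraries:

import warnings

# Raise only numpy's non-integer deprecation as an error; the traceback
# then points at the offending line. The `message` argument is a regex
# matched against the start of the warning text.
warnings.filterwarnings('error', category=DeprecationWarning,
                        message='using a non-integer')

For the filtertools line itself, the warning presumably comes from `head` and `tail` being numpy floats used where an integer count is required; if so (this is an assumption about the upstream computation, and `pad_nan` is just an illustrative name for the function containing that line), an explicit cast should fix it:

import numpy as np

def pad_nan(x, head, tail):
    # Assumption: head/tail arrive as (numpy) floats from upstream code.
    head, tail = int(head), int(tail)
    return np.r_[[np.nan] * head, x, [np.nan] * tail]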