From stefanv at berkeley.edu Tue Sep 5 14:36:58 2017 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 05 Sep 2017 11:36:58 -0700 Subject: [Numpy-discussion] NumPy default citation Message-ID: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> Hi, everyone I see that the NumPy homepage does not have a "Citation" section. Furthermore, on scipy.org, the default NumPy citation points to a short summary paper that I wrote with Gael V & Stephen C. While it's a reasonable introduction to three core concepts behind NumPy, attribution should certainly go to Travis & the community. Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both pages? Best regards Stéfan From charlesr.harris at gmail.com Tue Sep 5 16:25:20 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Sep 2017 14:25:20 -0600 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> Message-ID: On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt wrote: > Hi, everyone > > I see that the NumPy homepage does not have a "Citation" section. > Furthermore, on scipy.org, the default NumPy citation points to a short > summary paper that I wrote with Gael V & Stephen C. While it's a > reasonable introduction to three core concepts behind NumPy, attribution > should certainly go to Travis & the community. > > Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both > What is the citation for? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefanv at berkeley.edu Tue Sep 5 16:29:22 2017 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 05 Sep 2017 13:29:22 -0700 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> Message-ID: <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> On Tue, Sep 5, 2017, at 13:25, Charles R Harris wrote: > > On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt > wrote:>> Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" >> on both> > What is the citation for? It's the suggested reference to add to your paper, if you use the NumPy package in your work. Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmhobson at gmail.com Tue Sep 5 17:15:11 2017 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 5 Sep 2017 14:15:11 -0700 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> Message-ID: Just a thought that popped into my head: It'd be cool if the sci/py/data stack had a convention of .citation so I could look it up w/o leaving my jupyter notebook :) -paul On Tue, Sep 5, 2017 at 1:29 PM, Stefan van der Walt wrote: > On Tue, Sep 5, 2017, at 13:25, Charles R Harris wrote: > > > On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt > wrote: > > Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both > > > What is the citation for? > > > It's the suggested reference to add to your paper, if you use the NumPy > package in your work. 
> > Stéfan > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainwoodman at gmail.com Tue Sep 5 17:21:24 2017 From: rainwoodman at gmail.com (Feng Yu) Date: Tue, 5 Sep 2017 14:21:24 -0700 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> Message-ID: str(numpy.version.citation) and numpy.version.citation.to_bibtex()? On Tue, Sep 5, 2017 at 2:15 PM, Paul Hobson wrote: > Just a thought that popped into my head: > It'd be cool if the sci/py/data stack had a convention of > .citation so I could look it up w/o leaving my jupyter notebook :) > > -paul > > On Tue, Sep 5, 2017 at 1:29 PM, Stefan van der Walt > wrote: >> >> On Tue, Sep 5, 2017, at 13:25, Charles R Harris wrote: >> >> >> On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt >> wrote: >> >> Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both >> >> >> What is the citation for? >> >> >> It's the suggested reference to add to your paper, if you use the NumPy >> package in your work. 
>> Stéfan >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > From ben.v.root at gmail.com Tue Sep 5 17:35:51 2017 From: ben.v.root at gmail.com (Benjamin Root) Date: Tue, 5 Sep 2017 17:35:51 -0400 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> Message-ID: There was discussion a while back of adopting a `__citation__` attribute. Anyone remember what happened with that idea? On Tue, Sep 5, 2017 at 5:21 PM, Feng Yu wrote: > str(numpy.version.citation) and numpy.version.citation.to_bibtex()? > > On Tue, Sep 5, 2017 at 2:15 PM, Paul Hobson wrote: > > Just a thought that popped into my head: > > It'd be cool if the sci/py/data stack had a convention of > > .citation so I could look it up w/o leaving my jupyter notebook > :) > > > > -paul > > > > On Tue, Sep 5, 2017 at 1:29 PM, Stefan van der Walt < > stefanv at berkeley.edu> > > wrote: > >> > >> On Tue, Sep 5, 2017, at 13:25, Charles R Harris wrote: > >> > >> > >> On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt > >> wrote: > >> > >> Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both > >> > >> > >> What is the citation for? > >> > >> > >> It's the suggested reference to add to your paper, if you use the NumPy > >> package in your work. 
> >> Stéfan > >> > >> > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at python.org > >> https://mail.python.org/mailman/listinfo/numpy-discussion > >> > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholdav at gmail.com Tue Sep 5 18:10:06 2017 From: nicholdav at gmail.com (David Nicholson) Date: Tue, 5 Sep 2017 18:10:06 -0400 Subject: [Numpy-discussion] NumPy-Discussion Digest, Vol 132, Issue 1 In-Reply-To: References: Message-ID: @paul: not quite the same as building a .citation method into every module but there is https://github.com/duecredit/duecredit 
-- David Nicholson, Ph.D. Sober Lab , Emory Neuroscience Program. www.nicholdav.info; https://github.com/NickleDave -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Sep 5 21:12:23 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Sep 2017 19:12:23 -0600 Subject: [Numpy-discussion] New distribution Message-ID: Hi All, This is a heads up that there is a pull request adding a univariate complex_normal distribution. Anyone interested in this should take a look at the PR. I'd also be interested if there was a desire for the multivariate version. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjol at tjol.eu Wed Sep 6 06:16:10 2017 From: tjol at tjol.eu (Thomas Jollans) Date: Wed, 6 Sep 2017 12:16:10 +0200 Subject: [Numpy-discussion] np.array, copy=False and memmap In-Reply-To: References: Message-ID: <43410b78-f433-7066-d821-8766db9d6f2a@tjol.eu> On 2017-08-07 23:01, Nisoli Isaia wrote: > Dear all, > I have a question about the behaviour of
>
> y = np.array(x, copy=False, dtype='float32')
>
> when x is a memmap. 
> If we check the memmap attribute of y
>
> print "mmap attribute", y._mmap
>
> numpy tells us that y is not a memmap.

Regardless of any bugs exposed by the snippet of code below, everything is fine here. You created y as an array, so it's an array, not a memmap. Maybe it should be a memmap. It doesn't matter: it's still backed by a memmap!

Python 2.7.5 (default, Aug 2 2017, 11:05:32)
Type "copyright", "credits" or "license" for more information.

IPython 5.4.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import numpy as np

In [2]: np.__version__
Out[2]: '1.13.0'

In [3]: with open('test_memmap', 'w+b') as fp:
   ...:     fp.write(b'\0' * 2048)
   ...:

In [4]: x = np.memmap('test_memmap', dtype='int16')

In [5]: x
Out[5]: memmap([0, 0, 0, ..., 0, 0, 0], dtype=int16)

In [6]: id(x)
Out[6]: 47365848

In [7]: y = np.array(x, copy=False)

In [8]: y
Out[8]: array([0, 0, 0, ..., 0, 0, 0], dtype=int16)

In [9]: del x

In [10]: y.base
Out[10]: memmap([0, 0, 0, ..., 0, 0, 0], dtype=int16)

In [11]: id(y.base) == Out[6]
Out[11]: True

In [12]: y[:] = 0x0102

In [13]: y
Out[13]: array([258, 258, 258, ..., 258, 258, 258], dtype=int16)

In [14]: del y

In [15]: with open('test_memmap', 'rb') as fp:
    ...:     print [ord(c) for c in fp.read(10)]
    ...:
[2, 1, 2, 1, 2, 1, 2, 1, 2, 1]

In [16]:

> But the following code snippet crashes the python interpreter
>
> # opens the memmap
> with open(filename, 'r+b') as f:
>     mm = mmap.mmap(f.fileno(), 0)
>     x = np.frombuffer(mm, dtype='float32')
>
> # builds an array from the memmap, with the option copy=False
> y = np.array(x, copy=False, dtype='float32')
> print "before", y
> # closes the file
> mm.close()
> print "after", y
>
> In my code I use memmaps to share read-only objects when doing parallel processing, and the behaviour of np.array, even if not consistent, is desirable. I share scipy sparse matrices over many processes, and if np.array would make a copy when dealing with memmaps this would force me to rewrite part of the sparse matrices code. Would it be possible in future releases of numpy to have np.array check, if copy is false, whether y is a memmap and in that case return a full memmap object instead of slicing it?
>
> Best wishes
> Isaia
>
> P.S. A longer account of the issue may be found on my university blog http://www.im.ufrj.br/nisoli/blog/?p=131
>
> --
> Isaia Nisoli

-- Thomas Jollans

From kevin.sheppard at gmail.com Fri Sep 8 06:56:45 2017 From: kevin.sheppard at gmail.com (Kevin Sheppard) Date: Fri, 08 Sep 2017 10:56:45 +0000 Subject: [Numpy-discussion] ENH: Add complex random normal generator (PR #9561) Message-ID: I would like to add a complex random normal generator to mtrand/RandomState. A scalar complex normal is a (double) bivariate normal. The main motivation is to simplify the construction of complex normals, which are generally parameterized in terms of three values: location, covariance and relation. location is the same as in a standard normal. The covariance and the relation jointly determine the variance of the real and imaginary parts as well as the covariance between the two.

#1 The initial implementation in the PR has followed the standard template for scalar RV generators with three paths, scalar->scalar, scalar->array and array->array. It is bulky since the existing array fillers that handle the scalar->array and array->array for double rvs cannot be used. 
It supports broadcasting and has a similar API to other scalar RV generators (e.g. normal).

#2 The PR discussion has moved towards exploiting the relationship with the multivariate normal. Currently the MV normal doesn't broadcast, and so following this path would only allow the scalar->scalar and the scalar->array paths. This could theoretically be extended to allow broadcasting if multivariate_normal was extended to allow broadcasting.

#3 If broadcasting is off the table, then it might make more sense to skip a scalar complex normal and just move directly to a multivariate_complex_normal, since this is also just a higher-dimension (double) multivariate normal. This function could just wrap multivariate_normal and would be relatively straightforward. The only downside of this path is that it would not easily support a scalar->scalar path, although this could be added.

Performance probably isn't much of a concern for #2 or #3. I checked how normal and multivariate normal perform for large draws:

%timeit np.random.normal(2.0,4.0,size=1000000)
30.8 ms ± 125 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit np.random.multivariate_normal([2.0],[[4.0]],size=1000000)
32.2 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

For smaller draws the performance difference is larger:

%timeit np.random.normal(2.0,4.0,size=10)
2.95 µs ± 16.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%timeit np.random.multivariate_normal([2.0],[[4.0]],size=10)
49.4 µs ± 249 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

And for scalars the scalar path is about 30x faster than the multivariate path. It is also worth noting that multivariate_normal will only return a vector even if the inputs only generate a single scalar.

%timeit np.random.normal(2.0,4.0)
1.42 µs ± 3.05 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit np.random.multivariate_normal([2.0],[[4.0]])
47.9 µs ± 167 ns per loop (mean ± std. 
dev. of 7 runs, 10000 loops each)

It would be helpful to determine which path is the preferred one:

#1 Clone standard scalar generator with 3 paths including broadcasting
#2 Scalar generator using multivariate normal, excluding broadcasting
#3 Multivariate generator using multivariate normal, excluding broadcasting

Kevin

-------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 8 22:34:56 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 9 Sep 2017 14:34:56 +1200 Subject: [Numpy-discussion] NumPy default citation In-Reply-To: <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> References: <1504636618.478273.1096130488.4D658E7C@webmail.messagingengine.com> <1504643362.501728.1096266816.78CD187C@webmail.messagingengine.com> Message-ID: On Wed, Sep 6, 2017 at 8:29 AM, Stefan van der Walt wrote: > On Tue, Sep 5, 2017, at 13:25, Charles R Harris wrote: > > > On Tue, Sep 5, 2017 at 12:36 PM, Stefan van der Walt > wrote: > > Shall we add a citation to Travis's "Guide to NumPy (2nd ed.)" on both > > > What is the citation for? > > > It's the suggested reference to add to your paper, if you use the NumPy > package in your work. > +1 for changing the recommended citation to Guide to NumPy now. I do think that we're kind of wasting those citations though. I'm not an academic, but for those contributors who are, citations of a paper that is indexed (counts towards h-index etc.) can be very important. So probably we should find the time to write a paper, with Travis still as first author but with all core devs & major contributors on it. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
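Kevin's parameterization above can be made concrete. The sketch below is not NumPy API — `complex_normal` is a hypothetical helper along the lines of path #2 — and it assumes the standard mapping from (location, covariance gamma, relation C) to the real/imaginary covariance: Var(Re z) = (gamma + Re C)/2, Var(Im z) = (gamma - Re C)/2, Cov(Re z, Im z) = Im C / 2.

```python
import numpy as np

def complex_normal(loc=0j, gamma=1.0, relation=0j, size=None, rng=None):
    # Hypothetical helper: draw from a scalar complex normal by wrapping
    # multivariate_normal (path #2 above).  Requires gamma >= |relation|
    # so the 2x2 covariance below is positive semi-definite.
    if rng is None:
        rng = np.random.RandomState()
    cov = 0.5 * np.array([[gamma + relation.real, relation.imag],
                          [relation.imag, gamma - relation.real]])
    mean = [loc.real, loc.imag]
    xy = rng.multivariate_normal(mean, cov, size=size)
    # recombine the real/imaginary draws into complex values
    return xy[..., 0] + 1j * xy[..., 1]

z = complex_normal(loc=1 + 2j, gamma=4.0, relation=1 + 1j, size=1000)
```

Here np.var(z) estimates gamma (E|z - mu|^2) and np.mean((z - z.mean())**2) estimates the relation (E(z - mu)^2), which is a quick way to sanity-check the mapping.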
URL: From theodore.goetz at gmail.com Sat Sep 9 12:34:28 2017 From: theodore.goetz at gmail.com (Johann Goetz) Date: Sat, 09 Sep 2017 16:34:28 +0000 Subject: [Numpy-discussion] ENH: faster histograms (PR #9627) Message-ID: I have received and addressed many great suggestions and critiques from @juliantaylor and @eric-wieser for pull request #9627 which moves the np.histogram() and np.histogramdd() methods into C. Speedups of 2x to 20x were realized for large sample data, depending on the percentage of sample points that lay outside the histogramming range. For more details, see my report here. I'd like to know now how to proceed with this pull request, i.e., how can I move the process along?

Additionally, I'd like to propose a new feature which I'm sure requires some discussion: The inspiration for this effort came from the fast-histogram python package, which is still faster because it ignores ULP-level correctness. Towards the bottom of my report, I suggest adding a new option to the histogramming methods to ignore ULP corrections, which would make the numpy implementation on par with fast-histogram's. Something like:

np.histogram(sample, bins=10, range=(0, 10), fast=True)

which would raise an exception or perhaps ignore the "fast" parameter if bins were given as a list of edges:

np.histogram(sample, bins=[0,1,2,3], fast=True)  # not fast

I think I'd shy away from testing the bin-uniformity since it is very hard to do without a specified tolerance. This can be done by the user with something like this:

np.all(np.abs(np.diff(np.diff(edges))) <= 2**6 * np.finfo(edges.dtype).eps)

Or by comparison with the output of np.linspace(). -- Johann. -------------- next part -------------- An HTML attachment was scrubbed... 
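For readers wondering what "ignoring ULP-level correctness" buys: with uniform bins the bin index can be computed with a single multiply instead of a search over the edge array, which is where fast-histogram gets its speed, and also where the last-ulp disagreements with np.histogram near bin edges come from. A minimal sketch of the idea (illustrative only — this is not the PR's code):

```python
import numpy as np

def fast_fixed_histogram(sample, bins, range):
    # Sketch of the "ignore ULP corrections" approach: derive each bin
    # index arithmetically from the fixed bin width.  Values sitting
    # exactly on an interior edge may land one bin away from where
    # np.histogram (which searches the edge array) puts them.
    lo, hi = range
    sample = np.asarray(sample, dtype=float)
    keep = (sample >= lo) & (sample <= hi)          # drop out-of-range points
    idx = ((sample[keep] - lo) * (bins / (hi - lo))).astype(np.intp)
    np.clip(idx, 0, bins - 1, out=idx)              # fold hi into the last bin
    return np.bincount(idx, minlength=bins)
```

For samples away from the edges this agrees with np.histogram(sample, bins=bins, range=range)[0]; the proposed fast=True flag would make that trade-off explicit.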
URL: From bostock83 at gmail.com Thu Sep 14 04:36:14 2017 From: bostock83 at gmail.com (Michael Bostock) Date: Thu, 14 Sep 2017 09:36:14 +0100 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index Message-ID: Hi, I am trying to do a sliding window on a cube (3D array) to get the average over a block of vertical 1D arrays. I have achieved this using stride_tricks.as_strided, but this will load the whole cube into memory at once and is not suitable for large cubes. I have also achieved it using an nditer, but this is a lot slower:

def trace_block(vol, x, y, half_window):
    return vol[x - half_window:x + half_window + 1,
               y - half_window:y + half_window + 1]

vol = np.linspace(1, 125, 125, dtype=np.int32).reshape(5, 5, 5)
window_size = 3
x, y, z = vol.shape
half_window = (window_size - 1) // 2
xs = np.arange(half_window, x - half_window, dtype=np.int16)
ys = np.arange(half_window, y - half_window, dtype=np.int16)
averaged = np.zeros((5, 5, 5))
for x, y in np.nditer(np.ix_(xs, ys)):
    averaged[x, y] = np.mean(trace_block(vol, x, y, half_window), (0, 1))

My attempt at using numpy vectorisation to avoid the for loop throws the error in the subject:

vol = np.linspace(1, 125, 125, dtype=np.int32).reshape(5, 5, 5)
window_size = 3
x, y, z = vol.shape
half_window = (window_size - 1) // 2
xs = np.arange(half_window, x - half_window, dtype=np.int16)
ys = np.arange(half_window, y - half_window, dtype=np.int16)
averaged = np.zeros((5, 5, 5))
xi, yi = np.ix_(xs, ys)
averaged[xi, yi] = np.mean(trace_block(vol, xi, yi, half_window), (0, 1))

Is there any way to do slicing as shown in the trace_block function to support the xi and yi grid arrays? Any help you can provide will be greatly appreciated. -------------- next part -------------- An HTML attachment was scrubbed... 
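The nditer loop in the question can be vectorised with a strided view — the as_strided approach the question alludes to, and the same trick that skimage.util.view_as_windows wraps. A sketch (assumes the cube fits in memory, which is exactly the limitation being discussed; `windowed_mean` is an illustrative name):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def windowed_mean(vol, window_size):
    # Build a (x-w+1, y-w+1, w, w, z) read-only-style view over the cube
    # and average over the two window axes.  The view itself costs no
    # memory; only mean() materialises the (small) result.
    w = window_size
    x, y, z = vol.shape
    sx, sy, sz = vol.strides
    windows = as_strided(vol,
                         shape=(x - w + 1, y - w + 1, w, w, z),
                         strides=(sx, sy, sx, sy, sz))
    return windows.mean(axis=(2, 3))
```

windowed_mean(vol, 3)[i, j] equals the loop version's averaged[i + 1, j + 1], i.e. the mean trace over the 3x3 block centred at (i + 1, j + 1).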
URL: From NissimD at elspec-ltd.com Thu Sep 14 13:11:20 2017 From: NissimD at elspec-ltd.com (Nissim Derdiger) Date: Thu, 14 Sep 2017 17:11:20 +0000 Subject: [Numpy-discussion] converting list of int16 values to bitmask and back to list of int32\float values Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407B8992@EX10.Elspec.local> Hi all! I'm writing a Modbus TCP client using the pymodbus3 library. When asking for some parameters, the response is always a list of int16. In order to make the values usable, I need to assemble them into 32-bit values, then put them in the correct order (big/little endian wise), and then cast them back to the desired format (usually int32 or float). I've solved it with a pretty naïve code, but I'm guessing there must be a more elegant and fast way to solve it with NumPy. Your help would be very much appreciated! Nissim.

My code:

def Read(StartAddress, NumOfRegisters, FunctionCode, ParameterType, BitOrder):
    # select the Parameters format
    PrmFormat = 'f'  # default is float
    if ParameterType == 'int':
        PrmFormat = 'i'
    # select the endian state - maybe move to the connect function?
    endian = '

From Permafacture at gmail.com Thu Sep 14 20:16:01 2017 From: Permafacture at gmail.com (Elliot Hallmark) Date: Thu, 14 Sep 2017 19:16:01 -0500 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index Message-ID: Won't any solution not using hdf5 or some other chunked on disk storage method load the whole cube into memory? -------------- next part -------------- An HTML attachment was scrubbed... 
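Nissim's register reshuffling is a good fit for NumPy dtype views rather than per-value struct packing. A sketch (`registers_to_float32` is a hypothetical helper; it assumes byte order and word order agree — the mixed-endian devices Modbus code often meets would need the register pairs swapped first):

```python
import numpy as np

def registers_to_float32(registers, big_endian=True):
    # Pack pairs of 16-bit Modbus registers into 32-bit floats.
    # Each consecutive pair of uint16 words becomes one float32; the
    # big_endian flag here covers both byte and word order at once,
    # which is an assumption about the device.
    word_dtype = '>u2' if big_endian else '<u2'
    float_dtype = '>f4' if big_endian else '<f4'
    words = np.array(registers, dtype=word_dtype)
    return words.view(float_dtype)   # reinterpret the same bytes, no copy
```

The same view trick with '>i4' / '<i4' yields int32, so the whole naïve loop collapses into one array conversion plus one view.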
URL: From bostock83 at gmail.com Fri Sep 15 03:41:38 2017 From: bostock83 at gmail.com (Michael Bostock) Date: Fri, 15 Sep 2017 08:41:38 +0100 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: References: Message-ID: I was hoping that numpy doing this in a vectorised way would only load the surrounding traces into memory for each X and Y as it needs to rather than the whole cube. I'm using hdf5 for the storage. My example was just a short example without using hdf5. On 15 Sep 2017 1:16 am, "Elliot Hallmark" wrote: Won't any solution not using hdf5 or some other chunked on disk storage method load the whole cube into memory? _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From solarjoe at posteo.org Fri Sep 15 08:02:01 2017 From: solarjoe at posteo.org (Joe) Date: Fri, 15 Sep 2017 14:02:01 +0200 Subject: [Numpy-discussion] Questions on np.piecewise Message-ID: Hello, I have two questions and hope that you can help me. 1.) Is np.piecewise only defined for two conditions or should something like [0 < x <= 90, 90 < x <= 180, 180 < x <= 270] also work? 2.) Why does np.piecewise(np.array([50]), [0 < x <= 90, 90 < x <= 180], [1.1, 2.1]) return [2] and not [2.1] ? Kind regards, Joe From efiring at hawaii.edu Fri Sep 15 10:30:24 2017 From: efiring at hawaii.edu (Eric Firing) Date: Fri, 15 Sep 2017 04:30:24 -1000 Subject: [Numpy-discussion] Questions on np.piecewise In-Reply-To: References: Message-ID: On 2017/09/15 2:02 AM, Joe wrote: > Hello, > > I have two questions and hope that you can help me. > > 1.) > Is np.piecewise only defined for two conditions or should something like > > [0 < x <= 90, 90 < x <= 180, 180 < x <= 270] > > also work? > > 2.) 
> Why does > > np.piecewise(np.array([50]), [0 < x <= 90, 90 < x <= 180], [1.1, 2.1]) > > return [2] and not [2.1] ? > > Kind regards, > Joe Your example doesn't run, but here is one that does:

In [8]: x = np.array([50], dtype=float)

In [9]: np.piecewise(x, [0 < x <= 90, 90 < x <= 180], [1.1, 2.1])
array([ 1.1])

The answer to your second question is that it is returning an array with the same dtype as its first argument. The answer to your first question is "yes", and evidently if more than one condition matches, it is the last that prevails:

In [10]: np.piecewise(x, [0 < x <= 90, 90 < x <= 180, 30 < x <= 60], [1.1, 2.1, 3.3])
array([ 3.3])

From solarjoe at posteo.org (Joe) Subject: Re: [Numpy-discussion] Questions on np.piecewise In-Reply-To: References: Message-ID: <707a12a52bf9ddf02533c0e6746ad4cd@posteo.de> > Your example doesn't run, but here is one that does: > > In [8]: x = np.array([50], dtype=float) > > In [9]: np.piecewise(x, [0 < x <= 90, 90 < x <= 180], [1.1, 2.1]) > array([ 1.1]) > > The answer to your second question is that it is returning an array > with the same dtype as its first argument. > > The answer to your first question is "yes", and evidently if more than > one condition matches, it is the last that prevails: > > In [10]: np.piecewise(x, [0 < x <= 90, 90 < x <= 180, 30 < x <= 60], [1.1, > 2.1, 3.3]) > array([ 3.3]) Thank you very much for the good answer! From Permafacture at gmail.com Fri Sep 15 17:37:19 2017 From: Permafacture at gmail.com (Elliot Hallmark) Date: Fri, 15 Sep 2017 16:37:19 -0500 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: References: Message-ID: Nope. Numpy only works on in-memory arrays. You can determine your own chunking strategy using hdf5, or something like dask can figure that strategy out for you. With numpy you might worry about not accidentally making duplicates or intermediate arrays, but that's the extent of memory optimization you can do in numpy itself. Elliot -------------- next part -------------- An HTML attachment was scrubbed... 
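To make the chunking suggestion concrete for the sliding-cube thread: a reduction over an on-disk cube can be run slab by slab so that only a bounded number of rows is resident at once. The sketch below uses np.memmap with made-up file names and shapes purely for illustration; a true sliding window would additionally need a half_window halo of extra rows on each slab.

```python
import os
import tempfile
import numpy as np

def chunked_mean(path, shape, dtype, chunk=64):
    # Reduce a big on-disk cube one slab of `chunk` leading rows at a
    # time via np.memmap; here the reduction is a mean over the vertical
    # (z) axis of each trace.
    vol = np.memmap(path, mode='r', dtype=dtype, shape=shape)
    out = np.empty(shape[:2], dtype=np.float64)
    for start in range(0, shape[0], chunk):
        stop = start + chunk
        out[start:stop] = vol[start:stop].mean(axis=2)  # one slab in memory
    return out

# demo on a small synthetic cube written to disk first
path = os.path.join(tempfile.mkdtemp(), 'cube.dat')
disk = np.memmap(path, mode='w+', dtype=np.float32, shape=(10, 4, 4))
disk[:] = np.arange(160, dtype=np.float32).reshape(10, 4, 4)
disk.flush()
out = chunked_mean(path, (10, 4, 4), np.float32, chunk=3)
```

The same loop structure is what hdf5 chunking or dask would manage for you automatically.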
URL: From robbmcleod at gmail.com Fri Sep 15 17:46:50 2017 From: robbmcleod at gmail.com (Robert McLeod) Date: Fri, 15 Sep 2017 14:46:50 -0700 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: References: Message-ID: On Fri, Sep 15, 2017 at 2:37 PM, Elliot Hallmark wrote: > Nope. Numpy only works on in memory arrays. You can determine your own > chunking strategy using hdf5, or something like dask can figure that > strategy out for you. With numpy you might worry about not accidentally > making duplicates or intermediate arrays, but that's the extent of memory > optimization you can do in numpy itself. > NumPy does have its own memory map variant on ndarray: https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html -- Robert McLeod, Ph.D. robbmcleod at gmail.com robbmcleod at protonmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Sep 15 18:16:27 2017 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 15 Sep 2017 18:16:27 -0400 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: References: Message-ID: <8679378682858018650@unknownmsgid> No thoughts on optimizing memory, but that indexing error probably comes from np.mean producing float results. An astype call should make that work. -CHB Sent from my iPhone On Sep 15, 2017, at 5:51 PM, Robert McLeod wrote: On Fri, Sep 15, 2017 at 2:37 PM, Elliot Hallmark wrote: > Nope. Numpy only works on in memory arrays. You can determine your own > chunking strategy using hdf5, or something like dask can figure that > strategy out for you. With numpy you might worry about not accidentally > making duplicates or intermediate arrays, but that's the extent of memory > optimization you can do in numpy itself. 
> NumPy does have it's own memory map variant on ndarray: https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html -- Robert McLeod, Ph.D. robbmcleod at gmail.com robbmcleod at protonmail.com _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Sep 15 22:23:12 2017 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 16 Sep 2017 12:23:12 +1000 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: <8679378682858018650@unknownmsgid> References: <8679378682858018650@unknownmsgid> Message-ID: <97cb1925-56f3-4f86-bc0c-53dadd42dfc1@Spark> +1 on the astype(int) call. +1 also on using dask. scikit-image has a couple of functions that might be useful: - skimage.util.apply_parallel: applies a function to an input array in chunks, with user-selectable chunk size and margins. This is powered by dask. - skimage.util.view_as_windows: uses stride tricks to produce a sliding window view over an n-dimensional array. On 16 Sep 2017, 8:16 AM +1000, Chris Barker - NOAA Federal , wrote: > No thoughts on optimizing memory, but that indexing error probably comes from np.mean producing float results. An astype call shoulder that work. > > -CHB > > Sent from my iPhone > > On Sep 15, 2017, at 5:51 PM, Robert McLeod wrote: > > > > > > On Fri, Sep 15, 2017 at 2:37 PM, Elliot Hallmark wrote: > > > > Nope. Numpy only works on in memory arrays. You can determine your own chunking strategy using hdf5, or something like dask can figure that strategy out for you. With numpy you might worry about not accidentally making duplicates or intermediate arrays, but that's the extent of memory optimization you can do in numpy itself. 
> > > > NumPy does have it's own memory map variant on ndarray: > > > > https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html > > > > > > > > -- > > Robert McLeod, Ph.D. > > robbmcleod at gmail.com > > robbmcleod at protonmail.com > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Sep 15 22:33:40 2017 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 16 Sep 2017 11:33:40 +0900 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: <8679378682858018650@unknownmsgid> References: <8679378682858018650@unknownmsgid> Message-ID: On Sat, Sep 16, 2017 at 7:16 AM, Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > > No thoughts on optimizing memory, but that indexing error probably comes from np.mean producing float results. An astype call shoulder that work. Why? It's not being used as an index. It's being assigned into a float array. Rather, it's the slicing inside of `trace_block()` when it's being given arrays as inputs for `x` and `y`. numpy simply doesn't support that because in general the result wouldn't have a uniform shape. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Sep 15 23:39:10 2017 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 16 Sep 2017 13:39:10 +1000 Subject: [Numpy-discussion] Only integer scalar arrays can be converted to a scalar index In-Reply-To: References: <8679378682858018650@unknownmsgid> Message-ID: <79609a24-332c-4598-a963-513e2ccc39c8@Spark> @Robert, good point, always good to try out code before speculating on a thread. 
;)

Here's working code to do the averaging, though it's not block-wise;
you'll have to add that on top with dask/util.apply_parallel. Note also
that because of the C-order of numpy arrays, it's much more efficient to
think of axis 0 as the "vertical" axis, rather than axis 2. See
http://scikit-image.org/docs/dev/user_guide/numpy_images.html#notes-on-array-order
for more info.

import numpy as np
from skimage import util

vol = np.linspace(1, 125, 125, dtype=np.int32).reshape(5, 5, 5)
window_shape = (1, 3, 3)
windows = util.view_as_windows(vol, window_shape)
print(windows.shape)  # (5, 3, 3, 1, 3, 3)
averaged = np.mean(windows, axis=(3, 4, 5))

HTH!

Juan.

On 16 Sep 2017, 12:34 PM +1000, Robert Kern , wrote:
> On Sat, Sep 16, 2017 at 7:16 AM, Chris Barker - NOAA Federal wrote:
> >
> > No thoughts on optimizing memory, but that indexing error probably comes
> > from np.mean producing float results. An astype call should make that work.
>
> Why? It's not being used as an index. It's being assigned into a float array.
>
> Rather, it's the slicing inside of `trace_block()` when it's being given
> arrays as inputs for `x` and `y`. numpy simply doesn't support that because
> in general the result wouldn't have a uniform shape.
>
> --
> Robert Kern
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
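Juan's example can also be written with plain NumPy: `numpy.lib.stride_tricks.sliding_window_view` (added in NumPy 1.20, so not yet available at the time of this thread) builds the same kind of zero-copy window view as `skimage.util.view_as_windows` — a sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # NumPy >= 1.20

vol = np.linspace(1, 125, 125, dtype=np.int32).reshape(5, 5, 5)

# Same windows as skimage.util.view_as_windows(vol, (1, 3, 3)); this is
# a strided view, so no data is copied here.
windows = sliding_window_view(vol, (1, 3, 3))
print(windows.shape)  # (5, 3, 3, 1, 3, 3)

# Average over the three window axes.
averaged = windows.mean(axis=(3, 4, 5))
print(averaged.shape)  # (5, 3, 3)
```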
URL: 
From ralf.gommers at gmail.com  Sat Sep 16 00:40:02 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 16 Sep 2017 16:40:02 +1200
Subject: [Numpy-discussion] Dropping support for Accelerate
In-Reply-To: References: Message-ID: 

On Sat, Jul 22, 2017 at 10:50 PM, Ilhan Polat wrote:

> A few months ago, I had the innocent intention to wrap LDLt decomposition
> routines of LAPACK into SciPy, but then I was made aware that the minimum
> required version of LAPACK/BLAS was constrained by the Accelerate
> framework. Since then I've been following the core SciPy team and others'
> discussion on this issue.
>
> We have been exchanging opinions for quite a while now within various
> SciPy issues and PRs about the ever-increasing Accelerate-related issues,
> and I've compiled a brief summary about the ongoing discussions to reduce
> the clutter.
>
> First, I would like to kindly invite everyone to contribute and sharpen
> the cases presented here
>
> https://github.com/scipy/scipy/wiki/Dropping-support-for-Accelerate
>
> The reason I specifically wanted to post this also in the NumPy mailing
> list is to probe for the situation from the NumPy-Accelerate perspective.
> Is there any NumPy-specific problem that would indirectly affect SciPy
> should support for Accelerate be dropped?
>

An update on this: discussion on https://github.com/scipy/scipy/pull/6051
has mostly converged, and we're about to decide to start requiring a higher
LAPACK version (after 1.0, no changes for the next release). Looks like
that'll be LAPACK 3.4.0 for now.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ralf.gommers at gmail.com  Sun Sep 17 06:48:35 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 17 Sep 2017 22:48:35 +1200
Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release
Message-ID: 

Hi all,

I'm excited to be able to announce the availability of the first beta
release of Scipy 1.0.
This is a big release, and a version number that has been 16 years in the
making. It contains a few more deprecations and backwards incompatible
changes than an average release. Therefore please do test it on your own
code, and report any issues on the Github issue tracker or on the
scipy-dev mailing list.

Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1
Binary wheels: will follow tomorrow, I'll announce those when ready
(TravisCI is under maintenance right now)

Thanks to everyone who contributed to this release!

Ralf


Release notes (full notes including authors, closed issues and merged PRs
at the Github Releases link above):

==========================
SciPy 1.0.0 Release Notes
==========================

.. note:: Scipy 1.0.0 is not released yet!

.. contents::

SciPy 1.0.0 is the culmination of 8 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Moreover, our development attention will now shift to
bug-fix releases on the 1.0.x branch, and on adding new features on the
master branch.

Some of the highlights of this release are:

- Major build improvements. Windows wheels are available on PyPI for the
  first time, and continuous integration has been set up on Windows and
  OS X in addition to Linux.
- A set of new ODE solvers and a unified interface to them
  (`scipy.integrate.solve_ivp`).
- Two new trust region optimizers and a new linear programming method,
  with improved performance compared to what `scipy.optimize` offered
  previously.
- Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are
  now complete.

This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.

This is also the last release to support LAPACK 3.1.x - 3.3.x.
Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly. `scipy.linalg` improvements ---------------------------- The BLAS wrappers in `scipy.linalg.blas` have been completed. Added functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, ``*sbmv``, ``*spr2``, Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added. The function `scipy.linalg.subspace_angles` has been added to compute the subspace angles between two matrices. The function `scipy.linalg.clarkson_woodruff_transform` has been added. It finds low-rank matrix approximation via the Clarkson-Woodruff Transform. 
The functions `scipy.linalg.eigh_tridiagonal` and `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and eigenvectors of tridiagonal hermitian/symmetric matrices, were added. `scipy.ndimage` improvements ---------------------------- Support for homogeneous coordinate transforms has been added to `scipy.ndimage.affine_transform`. The ``ndimage`` C code underwent a significant refactoring, and is now a lot easier to understand and maintain. `scipy.optimize` improvements ----------------------------- The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the function `scipy.optimize.minimize`. These new trust-region methods solve the subproblem with higher accuracy at the cost of more Hessian factorizations (compared to dogleg) or more matrix vector products (compared to ncg) but usually require less nonlinear iterations and are able to deal with indefinite Hessians. They seem very competitive against the other Newton methods implemented in scipy. `scipy.optimize.linprog` gained an interior point method. Its performance is superior (both in accuracy and speed) to the older simplex method. `scipy.signal` improvements --------------------------- An argument ``fs`` (sampling frequency) was added to the following functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions consistent with many other functions in `scipy.signal` in which the sampling frequency can be specified. `scipy.signal.freqz` has been sped up significantly for FIR filters. `scipy.sparse` improvements --------------------------- Iterating over and slicing of CSC and CSR matrices is now faster by up to ~35%. The ``tocsr`` method of COO matrices is now several times faster. The ``diagonal`` method of sparse matrices now takes a parameter, indicating which diagonal to return. 
`scipy.sparse.linalg` improvements ---------------------------------- A new iterative solver for large-scale nonsymmetric sparse linear systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a flexible variant of ``GCROT``. `scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially faster convergence. SuperLU was updated to version 5.2.1. `scipy.spatial` improvements ---------------------------- Many distance metrics in `scipy.spatial.distance` gained support for weights. The signatures of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order to support a wider range of metrics (e.g. string-based metrics that need extra keywords). Also, an optional ``out`` parameter was added to ``pdist`` and ``cdist`` allowing the user to specify where the resulting distance matrix is to be stored `scipy.stats` improvements -------------------------- The methods ``cdf`` and ``logcdf`` were added to `scipy.stats.multivariate_normal`, providing the cumulative distribution function of the multivariate normal distribution. New statistical distance functions were added, namely `scipy.stats.wasserstein_distance` for the first Wasserstein distance and `scipy.stats.energy_distance` for the energy distance. Deprecated features =================== The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives. ``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never finished set of wrapper functions which is not relevant anymore. 
The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future and checked that it is a scalar or an array with a single element. Backwards incompatible changes ============================== The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``. The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``. The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`. The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`. The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``. The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed. The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`. The deprecated ``ppform`` class was removed from `scipy.interpolate`. The deprecated keyword ``iprint`` was removed from `scipy.optimize.fmin_cobyla`. The default value for the ``zero_phase`` keyword of `scipy.signal.decimate` has been changed to True. The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the method used for random initialization, so using a fixed random seed will not necessarily produce the same results as in previous versions. `scipy.special.gammaln` does not accept complex arguments anymore. 
The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``,
``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use
the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and
``spherical_kn``. Be aware that the new functions have different
signatures.

The cross-class properties of `scipy.signal.lti` systems have been
removed. The following properties/setters have been removed:

Name - (accessing/setting has been removed) - (setting has been removed)

* StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* TransferFunction - (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ZerosPolesGain - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``.
This was a corner case for which it was unclear that the behavior was
well-defined.

The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather
than an ndarray when the length of alpha is 1.

Other changes
=============

SciPy now has a formal governance structure. It consists of a BDFL (Pauli
Virtanen) and a Steering Committee. See `the governance document
<https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_
for details.

It is now possible to build SciPy on Windows with MSVC + gfortran!
Continuous integration has been set up for this build configuration on
Appveyor, building against OpenBLAS.

Continuous integration for OS X has been set up on TravisCI.

The SciPy test suite has been migrated from ``nose`` to ``pytest``.

``scipy/_distributor_init.py`` was added to allow redistributors of SciPy
to add custom code that needs to run when importing SciPy (e.g. checks for
hardware, DLL search paths, etc.).

Support for PEP 518 (specifying build system requirements) was added -
see ``pyproject.toml`` in the root of the SciPy repository.
In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Sun Sep 17 10:47:57 2017 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Sun, 17 Sep 2017 16:47:57 +0200 Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: Well also thank you Ralf, for going through all those issues one by one from all kinds of topics. Must be really painstakingly tedious. On Sun, Sep 17, 2017 at 12:48 PM, Ralf Gommers wrote: > Hi all, > > I'm excited to be able to announce the availability of the first beta > release of Scipy 1.0. This is a big release, and a version number that > has been 16 years in the making. It contains a few more deprecations and > backwards incompatible changes than an average release. Therefore please do > test it on your own code, and report any issues on the Github issue tracker > or on the scipy-dev mailing list. > > Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 > Binary wheels: will follow tomorrow, I'll announce those when ready > (TravisCI is under maintenance right now) > > Thanks to everyone who contributed to this release! > > Ralf > > > > > Release notes (full notes including authors, closed issued and merged PRs > at the Github Releases link above): > > [snip] > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
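The unified ODE interface highlighted in the release notes above, `scipy.integrate.solve_ivp`, can be exercised in a few lines (a minimal sketch assuming SciPy >= 1.0 is available; the decay rate and tolerances are arbitrary choices, not from the release notes):

```python
import numpy as np
from scipy.integrate import solve_ivp  # new in SciPy 1.0

# Solve dy/dt = -0.5 * y with y(0) = 2 on t in [0, 10]; the exact
# solution is y(t) = 2 * exp(-0.5 * t).
sol = solve_ivp(lambda t, y: -0.5 * y, (0, 10), [2.0], rtol=1e-8, atol=1e-8)

# The endpoint should agree with 2 * exp(-5) to within the tolerances.
print(sol.success)
print(abs(sol.y[0, -1] - 2 * np.exp(-5)) < 1e-5)
```

The same call accepts `method="RK23"`, `"Radau"`, `"BDF"`, or `"LSODA"` to switch solvers without changing any other code, which is the point of the unified interface.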
URL: From tcaswell at gmail.com Sun Sep 17 11:12:35 2017 From: tcaswell at gmail.com (Thomas Caswell) Date: Sun, 17 Sep 2017 15:12:35 +0000 Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: It seems major versions are in the air! For matplotlib 2.0 we put together http://matplotlib.org/users/dflt_style_changes.html for the style changes which shows the new behavior, the old behavior, and how to get the old behavior back. Tom On Sun, Sep 17, 2017 at 10:48 AM Ilhan Polat wrote: > Well also thank you Ralf, for going through all those issues one by one > from all kinds of topics. Must be really painstakingly tedious. > > > On Sun, Sep 17, 2017 at 12:48 PM, Ralf Gommers > wrote: > >> Hi all, >> >> I'm excited to be able to announce the availability of the first beta >> release of Scipy 1.0. This is a big release, and a version number that >> has been 16 years in the making. It contains a few more deprecations and >> backwards incompatible changes than an average release. Therefore please do >> test it on your own code, and report any issues on the Github issue tracker >> or on the scipy-dev mailing list. >> >> Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 >> Binary wheels: will follow tomorrow, I'll announce those when ready >> (TravisCI is under maintenance right now) >> >> Thanks to everyone who contributed to this release! >> >> Ralf >> >> >> >> >> Release notes (full notes including authors, closed issued and merged PRs >> at the Github Releases link above): >> >> [snip] >> > >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sun Sep 17 11:32:15 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 17 Sep 2017 09:32:15 -0600 Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Sun, Sep 17, 2017 at 4:48 AM, Ralf Gommers wrote: > Hi all, > > I'm excited to be able to announce the availability of the first beta > release of Scipy 1.0. This is a big release, and a version number that > has been 16 years in the making. It contains a few more deprecations and > backwards incompatible changes than an average release. Therefore please do > test it on your own code, and report any issues on the Github issue tracker > or on the scipy-dev mailing list. > > Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 > Binary wheels: will follow tomorrow, I'll announce those when ready > (TravisCI is under maintenance right now) > > Thanks to everyone who contributed to this release! > Congratulations to all, and an extra congratulations to Matthew and everyone else involved in getting the scipy wheels building on all the supported platforms. For those unfamiliar with the history, Ralf became release manager for NumPy 1.4.1 back in early 2010 and switched to full time SciPy release manager in 2011. It has been a long, productive, seven years. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Sep 18 04:55:03 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Sep 2017 20:55:03 +1200 Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Mon, Sep 18, 2017 at 3:12 AM, Thomas Caswell wrote: > It seems major versions are in the air! > > For matplotlib 2.0 we put together http://matplotlib. > org/users/dflt_style_changes.html for the style changes which shows the > new behavior, the old behavior, and how to get the old behavior back. 
> We certainly didn't make that many backwards incompatible changes (very few in fact, mostly removing long deprecated code), but yes - we'll do something more than the regular announcement email for the final 1.0 release. Ralf > > Tom > > On Sun, Sep 17, 2017 at 10:48 AM Ilhan Polat wrote: > >> Well also thank you Ralf, for going through all those issues one by one >> from all kinds of topics. Must be really painstakingly tedious. >> >> >> On Sun, Sep 17, 2017 at 12:48 PM, Ralf Gommers >> wrote: >> >>> Hi all, >>> >>> I'm excited to be able to announce the availability of the first beta >>> release of Scipy 1.0. This is a big release, and a version number that >>> has been 16 years in the making. It contains a few more deprecations and >>> backwards incompatible changes than an average release. Therefore please do >>> test it on your own code, and report any issues on the Github issue tracker >>> or on the scipy-dev mailing list. >>> >>> Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 >>> Binary wheels: will follow tomorrow, I'll announce those when ready >>> (TravisCI is under maintenance right now) >>> >>> Thanks to everyone who contributed to this release! >>> >>> Ralf >>> >>> >>> >>> >>> Release notes (full notes including authors, closed issued and merged >>> PRs at the Github Releases link above): >>> >>> [snip] >>> >> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >>> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Mon Sep 18 05:59:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Sep 2017 21:59:15 +1200 Subject: [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Sun, Sep 17, 2017 at 10:48 PM, Ralf Gommers wrote: > Hi all, > > I'm excited to be able to announce the availability of the first beta > release of Scipy 1.0. This is a big release, and a version number that > has been 16 years in the making. It contains a few more deprecations and > backwards incompatible changes than an average release. Therefore please do > test it on your own code, and report any issues on the Github issue tracker > or on the scipy-dev mailing list. > > Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 > Binary wheels: will follow tomorrow, I'll announce those when ready > (TravisCI is under maintenance right now) > Binary wheels for Windows, Linux and OS X (for all supported Python versions, 32-bit and 64-bit) can be found at http://wheels.scipy.org. To install directly with pip: pip install scipy=='1.0.0b1' -f http://wheels.scipy.org --trusted-host wheels.scipy.org (add --user and/or --upgrade as required to that command). Alternatively, just download the wheel you need and do `pip install scipy-1.0.0b1-.whl`. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Tue Sep 19 05:10:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 19 Sep 2017 21:10:15 +1200 Subject: [Numpy-discussion] [SciPy-User] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Mon, Sep 18, 2017 at 10:36 PM, Matthew Brett wrote: > Hi, > > On Mon, Sep 18, 2017 at 11:14 AM, Ralf Gommers > wrote: > > > > > > On Mon, Sep 18, 2017 at 10:11 PM, Matthew Brett > > > wrote: > >> > >> Hi, > >> > >> On Mon, Sep 18, 2017 at 11:07 AM, Thomas Kluyver > wrote: > >> > On 18 September 2017 at 10:59, Ralf Gommers > >> > wrote: > >> >> > >> >> Binary wheels for Windows, Linux and OS X (for all supported Python > >> >> versions, 32-bit and 64-bit) can be found at http://wheels.scipy.org > . > >> >> To > >> >> install directly with pip: > >> >> > >> >> pip install scipy=='1.0.0b1' -f http://wheels.scipy.org > >> >> --trusted-host > >> >> wheels.scipy.org > >> > > >> > > >> > I don't want to criticise the hard work that has gone into making this > >> > available, but I'm disappointed that we're telling people to install > >> > software over an insecure HTTP connection. > >> > >> I personally prefer the following recipe: > >> > >> pip install -f > >> https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a4 > 3.ssl.cf2.rackcdn.com > >> scipy=='1.0.0b1' > >> > >> > Can the wheels not be uploaded to PyPI? > >> > >> Sounds like a good idea. I can do that - any objections? > > > > > > That would be helpful Matthew, I'm about to sign off for today. > > Done - new instructions for testing: > > pip install --pre --upgrade scipy > Thanks Matthew! Replying to all lists with the better install instructions. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cimrman3 at ntc.zcu.cz Tue Sep 19 09:23:23 2017 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 19 Sep 2017 15:23:23 +0200 Subject: [Numpy-discussion] ANN: SfePy 2017.3 Message-ID: <300388fd-de7f-ad26-248a-e81de1f18240@ntc.zcu.cz> I am pleased to announce release 2017.3 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method or by the isogeometric analysis (limited support). It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: https://mail.python.org/mm3/mailman3/lists/sfepy.python.org/ Git (source) repository, issue tracker: https://github.com/sfepy/sfepy Highlights of this release -------------------------- - support preconditioning in SciPy and PyAMG based linear solvers - user-defined preconditioners for PETSc linear solvers - parallel multiscale (macro-micro) homogenization-based computations - improved tutorial and installation instructions For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical). Cheers, Robert Cimrman --- Contributors to this release in alphabetical order: Robert Cimrman Lubos Kejzlar Vladimir Lukes Matyas Novak From chris.barker at noaa.gov Tue Sep 19 20:18:42 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 19 Sep 2017 17:18:42 -0700 Subject: [Numpy-discussion] converting list of int16 values to bitmask and back to list of int32\float values In-Reply-To: <9EFE3345170EF24DB67C61C1B05EEEDB407B8992@EX10.Elspec.local> References: <9EFE3345170EF24DB67C61C1B05EEEDB407B8992@EX10.Elspec.local> Message-ID: not sure what you are getting from: Modbus.read_input_registers() but if it is a binary stream then you can put it all in one numpy array (probably type uint8 (byte)). 
Then you can manipulate the type with arr.astype() and arr.byteswap().
astype will convert the data to a different type, and arr.view() will
reinterpret the same block of data as a different type.

You also may be able to create the array with np.fromstring() or
np.frombuffer() in the first place.

-CHB

On Thu, Sep 14, 2017 at 10:11 AM, Nissim Derdiger wrote:

> Hi all!
>
> I'm writing a Modbus TCP client using the *pymodbus3* library.
>
> When asking for some parameters, the response is always a list of int16.
>
> In order to make the values usable, I need to combine them into 32-bit
> values, then put them in the correct order (big\little endian wise), and
> then cast them back to the desired format (usually int32 or float)
>
> I've solved it with some pretty naive code, but I'm guessing there must
> be a more elegant and fast way to solve it with NumPy.
>
> Your help would be very much appreciated!
>
> Nissim.
>
>
> My code:
>
> def Read(StartAddress, NumOfRegisters, FunctionCode, ParameterType, BitOrder):
>     # select the Parameters format
>     PrmFormat = 'f'  # default is float
>     if ParameterType == 'int':
>         PrmFormat = 'i'
>     # select the endian state - maybe move to the connect function?
> > endian = '<I' > > if BitOrder == 'little': > > endian = '>I' > > # start asking for the payload > > payload = None > > while payload == None: > > payload = Modbus.read_input_registers(StartAddress, > NumOfRegisters) > > #### parse the answer > > ResultRegisters = [] > > # convert the returned registers from list of int16 to > list of 32 bits bitmasks > > for reg in range(int(NumOfRegisters / 2)): > > ResultRegisters[reg] = > struct.pack(endian, payload.registers[2 * reg]) + > struct.pack(endian,payload.registers[2 * reg + 1]) > > # convert this list to a list with the real parameter > format > > for reg in range(len(ResultRegisters)): > > ResultRegisters[reg]= > struct.unpack(PrmFormat,ResultRegisters[reg]) > > # return results > > return ResultRegisters > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Sep 21 08:53:00 2017 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 Sep 2017 08:53:00 -0400 Subject: [Numpy-discussion] I love integers Message-ID: After many hours of debugging deep inside a hessian that sometimes produced weird results. In [48]: (-y**2 * (y - 1)) / (y**2. * (y - 1)) Out[48]: array([[ -1.00000000e+00], [ -1.77041643e-03], [ 5.80863636e-04], [ -5.37729923e-03], [ -1.74809893e-03], [ 2.25499819e-02], [ 8.65999453e-03]]) y is int32 Josef https://github.com/statsmodels/statsmodels/issues/3919 -------------- next part -------------- An HTML attachment was scrubbed...
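[Editor's note] Josef's surprise is silent integer overflow: with ``y`` stored as int32, the integer numerator ``-y**2 * (y - 1)`` wraps around for large ``y``, while the float denominator ``y**2. * (y - 1)`` does not, so the ratio is no longer -1 everywhere. A minimal sketch with made-up values (not the statsmodels data):

```python
import numpy as np

# y as int32: the integer numerator wraps around for large y, while the
# float denominator does not, so the ratio is not -1 everywhere.
y = np.array([3, 50000], dtype=np.int32)
ratio = (-y**2 * (y - 1)) / (y**2. * (y - 1))
print(ratio)  # first entry is -1.0; second is nowhere near -1

# Promoting to float64 first avoids the wrap-around entirely.
yf = y.astype(np.float64)
print((-yf**2 * (yf - 1)) / (yf**2 * (yf - 1)))  # both entries are -1.0
```

NumPy gives no warning here: integer array arithmetic wraps silently, which is why the bug was so hard to spot.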
URL: From renato.fabbri at gmail.com Mon Sep 25 04:59:51 2017 From: renato.fabbri at gmail.com (Renato Fabbri) Date: Mon, 25 Sep 2017 05:59:51 -0300 Subject: [Numpy-discussion] floor with dtype Message-ID: """ In [3]: n.floor(n.linspace(0,5,7), dtype=n.int) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 n.floor(n.linspace(0,5,7), dtype=n.int) TypeError: No loop matching the specified signature and casting was found for ufunc floor In [4]: n.__version__ Out[4]: '1.11.0' """ Is this the expected behavior? I am doing: >>> myints = n.array(n.floor(myarray), dtype=n.int) to get the integers. tx. R. -- Renato Fabbri GNU/Linux User #479299 labmacambira.sourceforge.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjol at tjol.eu Mon Sep 25 06:23:25 2017 From: tjol at tjol.eu (Thomas Jollans) Date: Mon, 25 Sep 2017 12:23:25 +0200 Subject: [Numpy-discussion] floor with dtype In-Reply-To: References: Message-ID: <20cd5521-35cf-9cbb-5970-b72623883f98@tjol.eu> On 2017-09-25 10:59, Renato Fabbri wrote: > """ > In [3]: n.floor(n.linspace(0,5,7), dtype=n.int) > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > in () > ----> 1 n.floor(n.linspace(0,5,7), dtype=n.int) > > TypeError: No loop matching the specified signature and casting > was found for ufunc floor > > In [4]: n.__version__ > Out[4]: '1.11.0' > """ > > Is this the expected behavior? Yes. There is no floor function for integers. The dtype argument specifies not only the return type, but the type the calculation is done in as well. floor() only exists, and only makes sense, for floats. (You can use floor(a, dtype='f4') and so on to insist on floats of a different width) If you have some floats, and you want to get their floor as integers, you'll have to cast.
In that case, in actual fact, there is little reason to use floor at all: In [2]: np.arange(1.9, 11.) Out[2]: array([ 1.9, 2.9, 3.9, 4.9, 5.9, 6.9, 7.9, 8.9, 9.9, 10.9]) In [3]: np.arange(1.9, 11.).astype('i8') Out[3]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) > > I am doing: >>>> myints = n.array(n.floor(myarray), dtype=n.int) > to get the integers. > > tx. > R. > > > -- > Renato Fabbri > GNU/Linux User #479299 > labmacambira.sourceforge.net > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -- Thomas Jollans From renato.fabbri at gmail.com Mon Sep 25 07:23:19 2017 From: renato.fabbri at gmail.com (Renato Fabbri) Date: Mon, 25 Sep 2017 08:23:19 -0300 Subject: [Numpy-discussion] floor with dtype In-Reply-To: <20cd5521-35cf-9cbb-5970-b72623883f98@tjol.eu> References: <20cd5521-35cf-9cbb-5970-b72623883f98@tjol.eu> Message-ID: I have an array of floats and want their floor values as integers. (e.g. to use them as indexes for a table lookup) It seems reasonable to assume this is a frequent use of floor. Anyway, you gave me a better way to do it: >>> myints = n.floor(myarray).astype(n.int) On Mon, Sep 25, 2017 at 7:23 AM, Thomas Jollans wrote: > On 2017-09-25 10:59, Renato Fabbri wrote: > > """ > > In [3]: n.floor(n.linspace(0,5,7), dtype=n.int) > > ------------------------------------------------------------ > --------------- > > TypeError Traceback (most recent call > last) > > in () > > ----> 1 n.floor(n.linspace(0,5,7), dtype=n.int) > > > > TypeError: No loop matching the specified signature and casting > > was found for ufunc floor > > > > In [4]: n.__version__ > > Out[4]: '1.11.0' > > """ > > > > Is this the expected behavior? > > Yes. There is no floor function for integers. > > The dtype argument specified not only the return type, but the type the > calculation is done in as well.
floor() only exists, and only makes > sense, for floats. (You can use floor(a, dtype='f4') and so on to insist > on floats of a different width) > > If you have some floats, and you want to get their floor as integers, > you'll have to cast. In that case, in actual fact, there is little > reason to use floor at all: > > In [2]: np.arange(1.9, 11.) > Out[2]: array([ 1.9, 2.9, 3.9, 4.9, 5.9, 6.9, 7.9, 8.9, > 9.9, 10.9]) > > In [3]: np.arange(1.9, 11.).astype('i8') > Out[3]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) > > > > > > I am doing: > >>>> myints = n.array(n.floor(myarray), dtype=n.int ) > > to get the integers. > > > > tx. > > R. > > > > > > -- > > Renato Fabbri > > GNU/Linux User #479299 > > labmacambira.sourceforge.net > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > > -- > Thomas Jollans > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -- Renato Fabbri GNU/Linux User #479299 labmacambira.sourceforge.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Sep 25 08:53:14 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 25 Sep 2017 13:53:14 +0100 Subject: [Numpy-discussion] Proposal - change to OpenBLAS for Windows wheels Message-ID: Hi, I suggest we switch from ATLAS to OpenBLAS for our Windows wheels: * OpenBLAS is much faster, at least when Tony Kelman tested it last year [1]; * We now have an automated Appveyor build for OpenBLAS [2, 3]; * Tests are passing with 32-bit and 64-bit wheels [4]; * The next Scipy release will have OpenBLAS wheels; Any objections / questions / alternatives? 
Cheers, Matthew [1] https://github.com/numpy/numpy/issues/5479#issuecomment-185033668 [2] https://github.com/matthew-brett/build-openblas [3] https://ci.appveyor.com/project/matthew-brett/build-openblas [4] https://ci.appveyor.com/project/matthew-brett/numpy-wheels/build/1.0.50 From olivier.grisel at ensta.org Mon Sep 25 11:05:42 2017 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 25 Sep 2017 17:05:42 +0200 Subject: [Numpy-discussion] Proposal - change to OpenBLAS for Windows wheels In-Reply-To: References: Message-ID: +1 for the change. -- Olivier ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Mon Sep 25 13:36:58 2017 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 25 Sep 2017 20:36:58 +0300 Subject: [Numpy-discussion] nditer and updateifcopy semantics - advice needed Message-ID: <03170f82-c2d8-9d1e-abaf-8a73ee468c63@gmail.com> I filed issue 9714 trying to get some feedback on what to do with updateifcopy semantics and user-exposed nditer. For those who are unfamiliar with the issue see below for a short summary, issue 7054 for a lengthy discussion, or pull request 9639 (which is still not merged). As I mention in the issue, I am willing to put in the work to make the magical update done in the last line of this snippet more explicit: a = arange(24, dtype=' References: Message-ID: Makes sense to me. On Sep 25, 2017 05:54, "Matthew Brett" wrote: > Hi, > > I suggest we switch from ATLAS to OpenBLAS for our Windows wheels: > > * OpenBLAS is much faster, at least when Tony Kelman tested it last year > [1]; > * We now have an automated Appveyor build for OpenBLAS [2, 3]; > * Tests are passing with 32-bit and 64-bit wheels [4]; > * The next Scipy release will have OpenBLAS wheels; > > Any objections / questions / alternatives? 
> > Cheers, > > Matthew > > [1] https://github.com/numpy/numpy/issues/5479#issuecomment-185033668 > [2] https://github.com/matthew-brett/build-openblas > [3] https://ci.appveyor.com/project/matthew-brett/build-openblas > [4] https://ci.appveyor.com/project/matthew-brett/numpy- > wheels/build/1.0.50 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Sep 25 15:42:33 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Sep 2017 08:42:33 +1300 Subject: [Numpy-discussion] Proposal - change to OpenBLAS for Windows wheels In-Reply-To: References: Message-ID: On Tue, Sep 26, 2017 at 6:48 AM, Nathaniel Smith wrote: > Makes sense to me. > > On Sep 25, 2017 05:54, "Matthew Brett" wrote: > >> Hi, >> >> I suggest we switch from ATLAS to OpenBLAS for our Windows wheels: >> >> * OpenBLAS is much faster, at least when Tony Kelman tested it last year >> [1]; >> * We now have an automated Appveyor build for OpenBLAS [2, 3]; >> * Tests are passing with 32-bit and 64-bit wheels [4]; >> * The next Scipy release will have OpenBLAS wheels; >> >> Any objections / questions / alternatives? 
>> > +1 Ralf >> Cheers, >> >> Matthew >> >> [1] https://github.com/numpy/numpy/issues/5479#issuecomment-185033668 >> [2] https://github.com/matthew-brett/build-openblas >> [3] https://ci.appveyor.com/project/matthew-brett/build-openblas >> [4] https://ci.appveyor.com/project/matthew-brett/numpy-wheels/ >> build/1.0.50 >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markbak at gmail.com Mon Sep 25 16:36:09 2017 From: markbak at gmail.com (Mark Bakker) Date: Mon, 25 Sep 2017 22:36:09 +0200 Subject: [Numpy-discussion] floor with dtype Message-ID: > On 2017-09-25 10:59, Renato Fabbri wrote: > > > """ > > > In [3]: n.floor(n.linspace(0,5,7), dtype=n.int) > > > ------------------------------------------------------------ > > --------------- > > > TypeError Traceback (most recent call > > last) > > > in () > > > ----> 1 n.floor(n.linspace(0,5,7), dtype=n.int) > > > > > > TypeError: No loop matching the specified signature and casting > > > was found for ufunc floor > > > > > > In [4]: n.__version__ > > > Out[4]: '1.11.0' > > > """ > > > > > > Is this the expected behavior? > > > > Yes. There is no floor function for integers. > > > > The dtype argument specified not only the return type, but the type the > > calculation is done in as well. floor() only exists, and only makes > > sense, for floats. (You can use floor(a, dtype='f4') and so on to insist > > on floats of a different width) > +1 for specifying a dtype in np.floor and np.ceil. Now it is pretty odd that np.floor and np.ceil result in an integer, except that they don't.
They return a float with all zeros after the decimal point. It would be very useful to be able to specify the dtype as 'int'. I frequently use floor or ceil to determine the indices of an array, but now need to convert to integers in addition to floor and ceil. -------------- next part -------------- An HTML attachment was scrubbed... URL: From renato.fabbri at gmail.com Wed Sep 27 04:36:45 2017 From: renato.fabbri at gmail.com (Renato Fabbri) Date: Wed, 27 Sep 2017 05:36:45 -0300 Subject: [Numpy-discussion] floor with dtype In-Reply-To: References: Message-ID: >>> myarray.astype(n.int) returns the same values as >>> n.floor(myarray).astype(n.int) for positive values?? And the same as >>> n.trunc(myarray) for any value? On Mon, Sep 25, 2017 at 5:36 PM, Mark Bakker wrote: > > On 2017-09-25 10:59, Renato Fabbri wrote: > >> > > """ >> > > In [3]: n.floor(n.linspace(0,5,7), dtype=n.int) >> > > ------------------------------------------------------------ >> > --------------- >> > > TypeError Traceback (most recent call >> > last) >> > > in () >> > > ----> 1 n.floor(n.linspace(0,5,7), dtype=n.int) >> > > >> > > TypeError: No loop matching the specified signature and casting >> > > was found for ufunc floor >> > > >> > > In [4]: n.__version__ >> > > Out[4]: '1.11.0' >> > > """ >> > > >> > > Is this the expected behavior? >> > >> > Yes. There is no floor function for integers. >> > >> > The dtype argument specified not only the return type, but the type the >> > calculation is done in as well. floor() only exists, and only makes >> > sense, for floats. (You can use floor(a, dtype='f4') and so on to insist >> > on floats of a different width) >> > > +1 for specifying a dtype in np.floor and np.ceil. > > Now it is pretty odd that np.floor and np.ceil result in an integer, > except that they don't. They return a float with all zeros after the > decimal point. It would be very useful to be able to specify the dtype as 'int'.
> I frequently use floor or ceil to determine the indices of an array, but > now need to convert to integers in addition to floor and ceil. > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -- Renato Fabbri GNU/Linux User #479299 labmacambira.sourceforge.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Sep 27 17:41:35 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 28 Sep 2017 10:41:35 +1300 Subject: [Numpy-discussion] ANN: first SciPy 1.0.0 release candidate Message-ID: Hi all, I'm excited to be able to announce the availability of the first release candidate of Scipy 1.0. This is a big release, and a version number that has been 16 years in the making. It contains a few more deprecations and backwards incompatible changes than an average release. Therefore please do test it on your own code, and report any issues on the Github issue tracker or on the scipy-dev mailing list. Sources and binary wheels can be found at https://pypi.python.org/pypi/scipy and https://github.com/scipy/scipy/releases/tag/v1.0.0rc1. To install with pip: pip install --pre --upgrade scipy Thanks to everyone who contributed to this release! Ralf Pull requests merged after v1.0.0b1: - `#7876 `__: GEN: Add comments to the tests for clarification - `#7891 `__: ENH: backport #7879 to 1.0.x - `#7902 `__: MAINT: signal: Make freqz handling of multidim. arrays match... - `#7905 `__: REV: restore wminkowski - `#7908 `__: FIX: Avoid bad ``__del__`` (close) behavior - `#7918 `__: TST: mark two optimize.linprog tests as xfail. See gh-7877. 
- `#7929 `__: MAINT: changed defaults to lower in sytf2, sytrf and hetrf - `#7938 `__: MAINT: backports from 1.0.x - `#7939 `__: Fix umfpack solver construction for win-amd64 ========================== SciPy 1.0.0 Release Notes ========================== .. note:: Scipy 1.0.0 is not released yet! .. contents:: SciPy 1.0.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 1.0.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Major build improvements. Windows wheels are available on PyPI for the first time, and continuous integration has been set up on Windows and OS X in addition to Linux. - A set of new ODE solvers and a unified interface to them (`scipy.integrate.solve_ivp`). - Two new trust region optimizers and a new linear programming method, with improved performance compared to what `scipy.optimize` offered previously. - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now complete. This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. 
New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly. `scipy.linalg` improvements ---------------------------- The BLAS wrappers in `scipy.linalg.blas` have been completed. Added functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, ``*sbmv`` and ``*spr2``. Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added. The function `scipy.linalg.subspace_angles` has been added to compute the subspace angles between two matrices. The function `scipy.linalg.clarkson_woodruff_transform` has been added. It finds low-rank matrix approximation via the Clarkson-Woodruff Transform. The functions `scipy.linalg.eigh_tridiagonal` and `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and eigenvectors of tridiagonal hermitian/symmetric matrices, were added. `scipy.ndimage` improvements ---------------------------- Support for homogeneous coordinate transforms has been added to `scipy.ndimage.affine_transform`. The ``ndimage`` C code underwent a significant refactoring, and is now a lot easier to understand and maintain.
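[Editor's note] To make the `scipy.integrate` entry above concrete, here is a minimal sketch of the unified interface; the ODE, interval, and initial value are made up for illustration and use only the documented `solve_ivp` call:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Exponential decay y' = -0.5 * y on t in [0, 10], starting from y(0) = 2.
sol = solve_ivp(lambda t, y: -0.5 * y, (0, 10), [2.0], method='RK45')

print(sol.success)   # True if the solver reached the end of t_span
print(sol.y[0, -1])  # y(10), close to 2 * exp(-5)
```

Swapping ``method='RK45'`` for ``'Radau'``, ``'BDF'`` or ``'LSODA'`` is the whole point of the unified interface: the rest of the call stays the same.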
`scipy.optimize` improvements ----------------------------- The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the function `scipy.optimize.minimize`. These new trust-region methods solve the subproblem with higher accuracy at the cost of more Hessian factorizations (compared to dogleg) or more matrix vector products (compared to ncg) but usually require less nonlinear iterations and are able to deal with indefinite Hessians. They seem very competitive against the other Newton methods implemented in scipy. `scipy.optimize.linprog` gained an interior point method. Its performance is superior (both in accuracy and speed) to the older simplex method. `scipy.signal` improvements --------------------------- An argument ``fs`` (sampling frequency) was added to the following functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions consistent with many other functions in `scipy.signal` in which the sampling frequency can be specified. `scipy.signal.freqz` has been sped up significantly for FIR filters. `scipy.sparse` improvements --------------------------- Iterating over and slicing of CSC and CSR matrices is now faster by up to ~35%. The ``tocsr`` method of COO matrices is now several times faster. The ``diagonal`` method of sparse matrices now takes a parameter, indicating which diagonal to return. `scipy.sparse.linalg` improvements ---------------------------------- A new iterative solver for large-scale nonsymmetric sparse linear systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a flexible variant of ``GCROT``. `scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially faster convergence. SuperLU was updated to version 5.2.1. `scipy.spatial` improvements ---------------------------- Many distance metrics in `scipy.spatial.distance` gained support for weights. 
The signatures of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order to support a wider range of metrics (e.g. string-based metrics that need extra keywords). Also, an optional ``out`` parameter was added to ``pdist`` and ``cdist`` allowing the user to specify where the resulting distance matrix is to be stored. `scipy.stats` improvements -------------------------- The methods ``cdf`` and ``logcdf`` were added to `scipy.stats.multivariate_normal`, providing the cumulative distribution function of the multivariate normal distribution. New statistical distance functions were added, namely `scipy.stats.wasserstein_distance` for the first Wasserstein distance and `scipy.stats.energy_distance` for the energy distance. Deprecated features =================== The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives. ``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never finished set of wrapper functions which is not relevant anymore. The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future and checked that it is a scalar or an array with a single element. ``scipy.spatial.distance.matching`` is deprecated. It is an alias of `scipy.spatial.distance.hamming`, which should be used instead. Implementation of `scipy.spatial.distance.wminkowski` was based on a wrong interpretation of the metric definition. In scipy 1.0 it has been deprecated in the documentation to keep backwards compatibility, but it is recommended to use the new version of `scipy.spatial.distance.minkowski`, which implements the correct behaviour.
Positional arguments of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` should be replaced with their keyword version. Backwards incompatible changes ============================== The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``. The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``. The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`. The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`. The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``. The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed. The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`. The deprecated ``ppform`` class was removed from `scipy.interpolate`. The deprecated keyword ``iprint`` was removed from `scipy.optimize.fmin_cobyla`. The default value for the ``zero_phase`` keyword of `scipy.signal.decimate` has been changed to True. The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the method used for random initialization, so using a fixed random seed will not necessarily produce the same results as in previous versions. `scipy.special.gammaln` does not accept complex arguments anymore. 
The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``, ``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and ``spherical_kn``. Be aware that the new functions have different signatures. The cross-class properties of `scipy.signal.lti` systems have been removed. The following properties/setters have been removed:

Name - (accessing/setting has been removed) - (setting has been removed)

* StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* TransferFunction - (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ZerosPolesGain - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. This was a corner case for which it was unclear that the behavior was well-defined. The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather than an ndarray when the length of alpha is 1. Other changes ============= SciPy now has a formal governance structure. It consists of a BDFL (Pauli Virtanen) and a Steering Committee. See `the governance document <https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_ for details. It is now possible to build SciPy on Windows with MSVC + gfortran! Continuous integration has been set up for this build configuration on Appveyor, building against OpenBLAS. Continuous integration for OS X has been set up on TravisCI. The SciPy test suite has been migrated from ``nose`` to ``pytest``. ``scipy/_distributor_init.py`` was added to allow redistributors of SciPy to add custom code that needs to run when importing SciPy (e.g. checks for hardware, DLL search paths, etc.). Support for PEP 518 (specifying build system requirements) was added - see ``pyproject.toml`` in the root of the SciPy repository.
In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Sep 28 00:10:57 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Sep 2017 22:10:57 -0600 Subject: [Numpy-discussion] NumPy 1.13.2 released. Message-ID: Hi All, On behalf of the NumPy team, I am pleased to announce the release of Numpy 1.13.2. This is a bugfix release for some problems found since 1.13.1. The most important fixes are for CVE-2017-12852 and temporary elision. Users of earlier versions of 1.13 should upgrade. The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The Windows wheels are now built with OpenBLAS instead of ATLAS, which should improve the performance of the linear algebra functions. Contributors ============ A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time. * Allan Haldane * Brandon Carter * Charles Harris * Eric Wieser * Iryna Shcherbina + * James Bourbeau + * Jonathan Helmus * Julian Taylor * Matti Picus * Michael Lamparski + * Michael Seifert * Ralf Gommers Pull requests merged ==================== A total of 20 pull requests were merged for this release. * #9390 BUG: Return the poly1d coefficients array directly * #9555 BUG: Fix regression in 1.13.x in distutils.mingw32ccompiler. * #9556 BUG: Fix true_divide when dtype=np.float64 specified. * #9557 DOC: Fix some rst markup in numpy/doc/basics.py. * #9558 BLD: Remove -xhost flag from IntelFCompiler. * #9559 DOC: Removes broken docstring example (source code, png, pdf)... * #9580 BUG: Add hypot and cabs functions to WIN32 blacklist.
* #9732 BUG: Make scalar function elision check if temp is writeable. * #9736 BUG: Various fixes to np.gradient * #9742 BUG: Fix np.pad for CVE-2017-12852 * #9744 BUG: Check for exception in sort functions, add tests * #9745 DOC: Add whitespace after "versionadded::" directive so it actually... * #9746 BUG: Memory leak in np.dot of size 0 * #9747 BUG: Adjust gfortran version search regex * #9757 BUG: Cython 0.27 breaks NumPy on Python 3. * #9764 BUG: Ensure `_npy_scaled_cexp{,f,l}` is defined when needed. * #9765 BUG: PyArray_CountNonzero does not check for exceptions * #9766 BUG: Fixes histogram monotonicity check for unsigned bin values * #9767 BUG: Ensure consistent result dtype of count_nonzero * #9771 BUG, MAINT: Fix mtrand for Cython 0.27. Enjoy Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean.christophe.houde at gmail.com Thu Sep 28 09:26:07 2017 From: jean.christophe.houde at gmail.com (Jean-Christophe Houde) Date: Thu, 28 Sep 2017 09:26:07 -0400 Subject: [Numpy-discussion] Numpy wheels, openBLAS and threading Message-ID: Hi all, not sure if this is the best place to ask for this. If not, please advise on the correct place. Since the numpy wheels internally use openBLAS, operations can be implicitly multithreaded directly by openBLAS. This, of course, can clash with multithreading or parallel processing. The recommended practice in this case is to set export OPENBLAS_NUM_THREADS=1 in the environment. However, I would like to be able to adjust this directly in my python code. Is there a way to control this directly through Python, whether through numpy or not? Thanks for your time! -- Jean-Christophe Houde -------------- next part -------------- An HTML attachment was scrubbed... 
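[Editor's note] A minimal sketch of the environment-variable approach asked about here: set the variable from Python before the first ``import numpy``, since OpenBLAS is generally reported to read it when the library initialises (whether changes made later take effect is discussed in the replies that follow):

```python
import os

# Must run before numpy (and therefore OpenBLAS) is first imported;
# once the BLAS library is initialised, changing the variable may
# have no effect.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np

a = np.ones((100, 100))
b = a.dot(a)  # BLAS-backed matrix product; each entry is 100.0
print(b[0, 0])
```

This only influences numpy builds that actually link against OpenBLAS; MKL builds use ``MKL_NUM_THREADS`` instead.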
URL: From max_linke at gmx.de Thu Sep 28 09:37:07 2017 From: max_linke at gmx.de (Max Linke) Date: Thu, 28 Sep 2017 15:37:07 +0200 Subject: [Numpy-discussion] Numpy wheels, openBLAS and threading In-Reply-To: References: Message-ID: <87tvznrlks.fsf@gmx.de> os.environ can be used to change environment variables from within python. https://docs.python.org/2/library/os.html#os.environ I do not know when openBLAS is reading the environment variables though. Changing a value while your python process is running might be too late. Jean-Christophe Houde writes: > Hi all, > > not sure if this is the best place to ask for this. If not, please advise > on the correct place. > > Since the numpy wheels internally use openBLAS, operations can be > implicitly multithreaded directly by openBLAS. > > This, of course, can clash with multithreading or parallel processing. The > recommended practice in this case is to set > > export OPENBLAS_NUM_THREADS=1 > > in the environment. However, I would like to be able to adjust this > directly in my python code. > > Is there a way to control this directly through Python, whether through > numpy or not? > > Thanks for your time! From p.j.a.cock at googlemail.com Thu Sep 28 09:54:54 2017 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Thu, 28 Sep 2017 14:54:54 +0100 Subject: [Numpy-discussion] Numpy wheels, openBLAS and threading In-Reply-To: References: Message-ID: This came up for Biopython recently (someone using our library on a cluster ran into thread limits triggered by the importing of NumPy), and suggested something like this:

import os
try:
    os.environ["OMP_NUM_THREADS"] = "1"
    import numpy
finally:
    del os.environ["OMP_NUM_THREADS"]

Or MKL_NUM_THREADS, or apparently also it might be OPENBLAS_NUM_THREADS as well: https://github.com/biopython/biopython/pull/1401 Peter On Thu, Sep 28, 2017 at 2:26 PM, Jean-Christophe Houde wrote: > Hi all, > > not sure if this is the best place to ask for this.
> If not, please advise on
> the correct place.
>
> Since the numpy wheels internally use openBLAS, operations can be implicitly
> multithreaded directly by openBLAS.
>
> This, of course, can clash with multithreading or parallel processing. The
> recommended practice in this case is to set
>
> export OPENBLAS_NUM_THREADS=1
>
> in the environment. However, I would like to be able to adjust this directly
> in my python code.
>
> Is there a way to control this directly through Python, whether through
> numpy or not?
>
> Thanks for your time!
>
> --
> Jean-Christophe Houde
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From derek at astro.physik.uni-goettingen.de  Thu Sep 28 09:50:13 2017
From: derek at astro.physik.uni-goettingen.de (Derek Homeier)
Date: Thu, 28 Sep 2017 15:50:13 +0200
Subject: [Numpy-discussion] Numpy wheels, openBLAS and threading
In-Reply-To: <87tvznrlks.fsf@gmx.de>
References: <87tvznrlks.fsf@gmx.de>
Message-ID: <34057B1C-FA11-4104-9BA9-0B067C3B7546@astro.physik.uni-goettingen.de>

On 28 Sep 2017, at 3:37 pm, Max Linke wrote:
>
> os.environ can be used to change environment variables from within
> python.
>
> https://docs.python.org/2/library/os.html#os.environ
>
> I do not know when openBLAS reads the environment variables, though.
> Changing a value while your python process is running might be too late.

It should use the value that is set at the time a BLAS routine is called.
At least I can confirm that this works analogously within Fortran programs
setting the *NUM_THREADS variables at runtime.

HTH

Derek

From charlesr.harris at gmail.com  Fri Sep 29 19:52:17 2017
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 29 Sep 2017 17:52:17 -0600
Subject: [Numpy-discussion] NumPy 1.13.3 released.
Message-ID:

Hi All,

On behalf of the NumPy team, I am pleased to announce the release of
NumPy 1.13.3.
This is a re-release of 1.13.2, which suffered from compatibility problems;
see issue 9786. It is a bugfix release for some problems found since 1.13.1.
The most important fixes are for CVE-2017-12852 and the new temporary
elision. Users of earlier versions of 1.13 should upgrade.

The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels
available from PyPI are built with Python 3.6.2 and should be compatible
with all previous versions of Python 3.6. The release was cythonized with
Cython 0.26.1, which should be free of the bugs found in 0.27 while also
being compatible with Python 3.7-dev. The Windows wheels were built with
OpenBLAS instead of ATLAS, which should improve the performance of the
linear algebra functions.

Wheels and zip archives are available from PyPI, and both zip and tar
archives are available from Github.

Contributors
============

A total of 12 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.

* Allan Haldane
* Brandon Carter
* Charles Harris
* Eric Wieser
* Iryna Shcherbina +
* James Bourbeau +
* Jonathan Helmus
* Julian Taylor
* Matti Picus
* Michael Lamparski +
* Michael Seifert
* Ralf Gommers

Pull requests merged
====================

A total of 20 pull requests were merged for this release.

* #9390 BUG: Return the poly1d coefficients array directly
* #9555 BUG: Fix regression in 1.13.x in distutils.mingw32ccompiler.
* #9556 BUG: Fix true_divide when dtype=np.float64 specified.
* #9557 DOC: Fix some rst markup in numpy/doc/basics.py.
* #9558 BLD: Remove -xhost flag from IntelFCompiler.
* #9559 DOC: Removes broken docstring example (source code, png, pdf)...
* #9580 BUG: Add hypot and cabs functions to WIN32 blacklist.
* #9732 BUG: Make scalar function elision check if temp is writeable.
* #9736 BUG: Various fixes to np.gradient
* #9742 BUG: Fix np.pad for CVE-2017-12852
* #9744 BUG: Check for exception in sort functions, add tests
* #9745 DOC: Add whitespace after "versionadded::" directive so it actually...
* #9746 BUG: Memory leak in np.dot of size 0
* #9747 BUG: Adjust gfortran version search regex
* #9757 BUG: Cython 0.27 breaks NumPy on Python 3.
* #9764 BUG: Ensure `_npy_scaled_cexp{,f,l}` is defined when needed.
* #9765 BUG: PyArray_CountNonzero does not check for exceptions
* #9766 BUG: Fixes histogram monotonicity check for unsigned bin values
* #9767 BUG: Ensure consistent result dtype of count_nonzero
* #9771 BUG, MAINT: Fix mtrand for Cython 0.27.

Enjoy

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
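[Archive note] For readers landing on the threading discussion above later, the suggestions in that thread can be combined into a small helper. This is only a sketch: the environment-variable route (Peter's try/finally snippet, with Max's caveat that it must run before NumPy first loads the BLAS) is paired with a best-effort runtime route through ctypes. The `openblas_set_num_threads` symbol is exported by typical OpenBLAS builds, but the library name, whether `find_library` can locate it, and whether a given wheel bundles it that way are assumptions here, so the runtime helper returns False rather than failing when they do not hold.

```python
import ctypes
import ctypes.util
import os
from contextlib import contextmanager


@contextmanager
def single_threaded_blas_env():
    """Temporarily force single-threaded BLAS via environment variables.

    Only effective if entered *before* the first `import numpy`, since
    OpenBLAS reads these variables when the library is initialized.
    Restores any pre-existing values on exit, unlike a bare `del`.
    """
    names = ("OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS", "MKL_NUM_THREADS")
    saved = {n: os.environ.get(n) for n in names}
    try:
        for n in names:
            os.environ[n] = "1"
        yield
    finally:
        for n, old in saved.items():
            if old is None:
                os.environ.pop(n, None)
            else:
                os.environ[n] = old


def set_openblas_threads(n):
    """Best-effort runtime control: call openblas_set_num_threads() in a
    located OpenBLAS shared library.  Returns True on success, False if
    the library or symbol cannot be found (an assumption of this sketch
    is that the wheel's OpenBLAS is discoverable and exports the symbol).
    """
    libname = ctypes.util.find_library("openblas")
    if libname is None:
        return False
    try:
        lib = ctypes.CDLL(libname)
        lib.openblas_set_num_threads(int(n))
        return True
    except (OSError, AttributeError):
        return False
```

Typical use would be `with single_threaded_blas_env(): import numpy` at program start, or `set_openblas_threads(1)` before a parallel section if the runtime route works for your particular build.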