From tyler.je.reddy at gmail.com  Wed May  1 14:12:59 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Wed, 1 May 2019 11:12:59 -0700
Subject: [Numpy-discussion] ANN: SciPy 1.3.0rc1 -- please test
In-Reply-To:
References:
Message-ID:

I'd very much appreciate it if someone with access to Skylake architecture
could confirm the linear algebra-related failures / issues for SciPy 1.2.1
and SciPy 1.3.0rc1; for a simple example:
https://github.com/numpy/numpy/issues/13401#issuecomment-486690487

There's some credible concern that both of those releases have effectively
broken linear algebra behavior for wheels with the OpenBLAS versions they
are distributed with, but obviously I want to be sure before investing time
in updating two release branches!

On Fri, 26 Apr 2019 at 18:25, Tyler Reddy wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi all,
>
> On behalf of the SciPy development team I'm pleased to announce
> the release candidate SciPy 1.3.0rc1. Please help us test this pre-release.
>
> Sources and binary wheels can be found at: https://pypi.org/project/scipy/
> and at: https://github.com/scipy/scipy/releases/tag/v1.3.0rc1
> One of a few ways to install the release candidate with pip:
>
> pip install scipy==1.3.0rc1
>
> ==========================
> SciPy 1.3.0 Release Notes
> ==========================
>
> Note: SciPy 1.3.0 is not released yet!
>
> SciPy 1.3.0 is the culmination of 5 months of hard work. It contains
> many new features, numerous bug-fixes, improved test coverage and better
> documentation. There have been some API changes
> in this release, which are documented below. All users are encouraged to
> upgrade to this release, as there are a large number of bug-fixes and
> optimizations. Before upgrading, we recommend that users check that
> their own code does not use deprecated SciPy functionality (to do so,
> run your code with ``python -Wd`` and check for ``DeprecationWarning`` s).
> Our development attention will now shift to bug-fix releases on the
> 1.3.x branch, and on adding new features on the master branch.
>
> This release requires Python 3.5+ and NumPy 1.13.3 or greater.
>
> For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.
>
> Highlights of this release
> --------------------------
>
> - Three new ``stats`` functions, a rewrite of ``pearsonr``, and an exact
>   computation of the Kolmogorov-Smirnov two-sample test
> - A new Cython API for bounded scalar-function root-finders in `scipy.optimize`
> - Substantial ``CSR`` and ``CSC`` sparse matrix indexing performance
>   improvements
> - Added support for interpolation of rotations with continuous angular
>   rate and acceleration in ``RotationSpline``
>
>
> New features
> ============
>
> `scipy.interpolate` improvements
> --------------------------------
>
> A new class ``CubicHermiteSpline`` is introduced (see the short sketch
> below). It is a piecewise-cubic interpolator which matches observed values
> and first derivatives. Existing cubic interpolators ``CubicSpline``,
> ``PchipInterpolator`` and ``Akima1DInterpolator`` were made subclasses of
> ``CubicHermiteSpline``.
>
> `scipy.io` improvements
> -----------------------
>
> For the Attribute-Relation File Format (ARFF), `scipy.io.arff.loadarff`
> now supports relational attributes.
>
> `scipy.io.mmread` can now parse Matrix Market format files with empty lines.
>
> `scipy.linalg` improvements
> ---------------------------
>
> Added wrappers for ``?syconv`` routines, which convert a symmetric matrix
> given by a triangular matrix factorization into two matrices and vice versa.
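> Returning to the ``CubicHermiteSpline`` introduced above, a minimal usage
> sketch on made-up data (values and first derivatives of a sine; the data
> are illustrative, not from the release notes)::
>
>     import numpy as np
>     from scipy.interpolate import CubicHermiteSpline
>
>     x = np.linspace(0.0, 2.0 * np.pi, 10)
>     y = np.sin(x)
>     dydx = np.cos(x)  # first derivatives at the sample points
>
>     # the resulting spline matches both y and dydx at every x
>     spline = CubicHermiteSpline(x, y, dydx)
>     print(spline(1.0), np.sin(1.0))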
>
> `scipy.linalg.clarkson_woodruff_transform` now uses an algorithm that leverages
> sparsity. This may provide a 60-90 percent speedup for dense input matrices.
> Truly sparse input matrices should also benefit from the improved sketch
> algorithm, which now correctly runs in ``O(nnz(A))`` time.
>
> Added new functions to calculate symmetric Fiedler matrices and
> Fiedler companion matrices, named `scipy.linalg.fiedler` and
> `scipy.linalg.fiedler_companion`, respectively. These may be used
> for root finding.
>
> `scipy.ndimage` improvements
> ----------------------------
>
> Gaussian filter performance may improve by an order of magnitude in
> some cases, thanks to removal of a dependence on ``np.polynomial``. This
> may impact `scipy.ndimage.gaussian_filter`, for example.
>
> `scipy.optimize` improvements
> -----------------------------
>
> The `scipy.optimize.brute` minimizer gained a new keyword ``workers``, which
> can be used to parallelize computation.
>
> A Cython API for bounded scalar-function root-finders in `scipy.optimize`
> is available in a new module `scipy.optimize.cython_optimize` via ``cimport``.
> This API may be used with ``nogil`` and ``prange`` to loop
> over an array of function arguments to solve for an array of roots more
> quickly than with pure Python.
>
> ``'interior-point'`` is now the default method for ``linprog``, and
> ``'interior-point'`` now uses SuiteSparse for sparse problems when the
> required scikits (scikit-umfpack and scikit-sparse) are available.
> On benchmark problems (gh-10026), execution time reductions by factors of 2-3
> were typical. Also, a new ``method='revised simplex'`` has been added.
> It is not as fast or robust as ``method='interior-point'``, but it is a faster,
> more robust, and equally accurate substitute for the legacy
> ``method='simplex'``.
>
> ``differential_evolution`` can now use a ``Bounds`` class to specify the
> bounds for the optimizing argument of a function (see the short sketch
> below).
>
> `scipy.optimize.dual_annealing` performance has improved thanks to the
> vectorisation of some internal code.
>
> `scipy.signal` improvements
> ---------------------------
>
> Two additional methods of discretization are now supported by
> `scipy.signal.cont2discrete`: ``impulse`` and ``foh``.
>
> `scipy.signal.firls` now uses faster solvers.
>
> `scipy.signal.detrend` now has a lower physical memory footprint in some
> cases, which may be leveraged using the new ``overwrite_data`` keyword
> argument.
>
> The `scipy.signal.firwin` ``pass_zero`` argument now accepts new string
> values that allow specification of the desired filter type: ``'bandpass'``,
> ``'lowpass'``, ``'highpass'``, and ``'bandstop'`` (see the short sketch
> below).
>
> `scipy.signal.sosfilt` may have improved performance due to lower retention
> of the global interpreter lock (GIL) in the algorithm.
>
> `scipy.sparse` improvements
> ---------------------------
>
> A new keyword was added to ``csgraph.dijkstra`` that
> allows users to query the shortest path to ANY of the passed-in indices,
> as opposed to the shortest path to EVERY passed index.
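> As a minimal sketch of that multi-source query (a hypothetical 4-node
> graph; the keyword name ``min_only`` is taken from the merged pull request
> and may differ in this release candidate)::
>
>     import numpy as np
>     from scipy.sparse import csr_matrix
>     from scipy.sparse.csgraph import dijkstra
>
>     graph = csr_matrix(np.array([[0, 1, 0, 0],
>                                  [0, 0, 2, 0],
>                                  [0, 0, 0, 1],
>                                  [0, 0, 0, 0]]))
>     # distance to each node from the NEAREST of the two source indices,
>     # rather than one row of distances per source
>     dist = dijkstra(graph, directed=True, indices=[0, 1], min_only=True)
>     print(dist)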
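> A minimal sketch of the new ``Bounds`` usage in ``differential_evolution``
> mentioned above (a made-up convex objective, not from the release notes)::
>
>     from scipy.optimize import Bounds, differential_evolution
>
>     def objective(x):
>         # simple quadratic with its minimum at (1.0, 0.5)
>         return (x[0] - 1.0) ** 2 + (x[1] - 0.5) ** 2
>
>     # lower and upper bounds for the two optimization parameters
>     bounds = Bounds([0.0, 0.0], [2.0, 2.0])
>     result = differential_evolution(objective, bounds, seed=1)
>     print(result.x)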
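> And a minimal sketch of the new string values for ``pass_zero`` in
> `scipy.signal.firwin` (illustrative filter parameters only)::
>
>     from scipy.signal import firwin
>
>     # 101-tap highpass FIR filter; a highpass design needs an odd number
>     # of taps, and the cutoff is in normalized frequency (Nyquist == 1)
>     taps = firwin(101, 0.3, pass_zero='highpass')
>     print(taps.shape)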
>
> `scipy.sparse.linalg.lsmr` performance has been improved by roughly 10 percent
> on large problems.
>
> Improved performance and reduced physical memory footprint of the algorithm
> used by `scipy.sparse.linalg.lobpcg`.
>
> ``CSR`` and ``CSC`` sparse matrix fancy indexing performance has been
> improved substantially.
>
> `scipy.spatial` improvements
> ----------------------------
>
> `scipy.spatial.ConvexHull` now has a ``good`` attribute that can be used
> alongside the ``QGn`` Qhull options to determine which external facets of a
> convex hull are visible from an external query point.
>
> `scipy.spatial.cKDTree.query_ball_point` has been modernized to use some newer
> Cython features, including GIL handling and exception translation. An issue
> with ``return_sorted=True`` and scalar queries was fixed, and a new mode named
> ``return_length`` was added. ``return_length`` only computes the length of the
> returned indices list instead of allocating the array every time.
>
> `scipy.spatial.transform.RotationSpline` has been added to enable interpolation
> of rotations with continuous angular rates and acceleration.
>
> `scipy.stats` improvements
> --------------------------
>
> Added a new function to compute the Epps-Singleton test statistic,
> `scipy.stats.epps_singleton_2samp`, which can be applied to continuous and
> discrete distributions.
>
> New functions `scipy.stats.median_absolute_deviation` and `scipy.stats.gstd`
> (geometric standard deviation) were added. The `scipy.stats.combine_pvalues`
> method now supports ``pearson``, ``tippett`` and ``mudholkar_george`` p-value
> combination methods.
>
> The `scipy.stats.ortho_group` and `scipy.stats.special_ortho_group`
> ``rvs(dim)`` functions' algorithms were updated from a ``O(dim^4)``
> implementation to a ``O(dim^3)`` one, which gives large speed improvements
> for ``dim > 100``.
>
> `scipy.stats.pearsonr` was rewritten to use a more robust algorithm,
> provide meaningful exceptions and warnings on potentially pathological input,
> and fix at least five separate reported issues in the original implementation.
>
> Improved the precision of ``hypergeom.logcdf`` and ``hypergeom.logsf``.
>
> Added exact computation for the Kolmogorov-Smirnov (KS) two-sample test,
> replacing the previously approximate computation for the two-sided test
> `stats.ks_2samp`. Also added a one-sided, two-sample KS test, and a keyword
> ``alternative`` to `stats.ks_2samp` (see the short sketch below).
>
> Backwards incompatible changes
> ==============================
>
> `scipy.interpolate` changes
> ---------------------------
>
> Functions from ``scipy.interpolate`` (``spleval``, ``spline``, ``splmake``,
> and ``spltopp``) and functions from ``scipy.misc`` (``bytescale``,
> ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``,
> ``imsave``, ``imshow``, ``toimage``) have been removed. The former set has
> been deprecated since v0.19.0 and the latter has been deprecated since v1.0.0.
> Similarly, aliases from ``scipy.misc`` (``comb``, ``factorial``,
> ``factorial2``, ``factorialk``, ``logsumexp``, ``pade``, ``info``, ``source``,
> ``who``) which have been deprecated since v1.0.0 are removed.
> `SciPy documentation for
> v1.1.0 `__
> can be used to track the new import locations for the relocated functions.
>
> `scipy.linalg` changes
> ----------------------
>
> For ``pinv``, ``pinv2``, and ``pinvh``, the default cutoff values are changed
> for consistency (see the docs for the actual values).
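> A minimal sketch of the updated two-sample KS call described in the
> `scipy.stats` changes just below (made-up samples; only the ``alternative``
> keyword is exercised here, since the final spelling of the other new
> keyword should be checked against the released docs)::
>
>     import numpy as np
>     from scipy import stats
>
>     rng = np.random.RandomState(0)
>     a = rng.normal(size=30)
>     b = rng.normal(loc=0.5, size=35)
>
>     # "two-sided" is still the default; one-sided alternatives are new
>     stat, p = stats.ks_2samp(a, b, alternative='two-sided')
>     print(stat, p)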
>
> `scipy.stats` changes
> ---------------------
>
> Previously, ``ks_2samp(data1, data2)`` would run a two-sided test and return
> the approximated p-value. The new signature, ``ks_2samp(data1, data2,
> alternative="two-sided", method="auto")``, still runs the two-sided test by
> default but returns the exact p-value for small samples and the approximated
> value for large samples. ``method="asymp"`` is equivalent to the
> old behavior, but ``auto`` is the better choice.
>
> Other changes
> =============
>
> Our tutorial has been expanded with a new section on global optimizers.
>
> There has been a rework of the ``stats.distributions`` tutorials.
>
> `scipy.optimize` now correctly sets the convergence flag of the result to
> ``CONVERR``, a convergence error, for bounded scalar-function root-finders
> if the maximum number of iterations has been exceeded, ``disp`` is false, and
> ``full_output`` is true.
>
> `scipy.optimize.curve_fit` no longer fails if ``xdata`` and ``ydata`` dtypes
> differ; they are both now automatically cast to ``float64``.
>
> `scipy.ndimage` functions including ``binary_erosion``, ``binary_closing``, and
> ``binary_dilation`` now require an integer value for the number of iterations,
> which alleviates a number of reported issues.
>
> Fixed the normal approximation in case ``zero_method == "pratt"`` in
> `scipy.stats.wilcoxon`.
>
> Fixes for incorrect probabilities, broadcasting issues and thread-safety
> related to stats distributions setting member variables inside ``_argcheck()``.
>
> `scipy.optimize.newton` now correctly raises a ``RuntimeError`` when default
> arguments are used and a derivative of value zero is obtained, which is a
> special case of failing to converge.
>
> A draft toolchain roadmap is now available, laying out a compatibility plan
> including Python versions, C standards, and NumPy versions.
>
>
> Authors
> =======
>
> * ananyashreyjain +
> * ApamNapat +
> * Scott Calabrese Barton +
> * Christoph Baumgarten
> * Peter Bell +
> * Jacob Blomgren +
> * Doctor Bob +
> * Mana Borwornpadungkitti +
> * Matthew Brett
> * Evgeni Burovski
> * CJ Carey
> * Vega Theil Carstensen +
> * Robert Cimrman
> * Forrest Collman +
> * Pietro Cottone +
> * David +
> * Idan David +
> * Christoph Deil
> * Dieter Werthmüller
> * Conner DiPaolo +
> * Dowon
> * Michael Dunphy +
> * Peter Andreas Entschev +
> * Gökçen Eraslan +
> * Johann Faouzi +
> * Yu Feng
> * Piotr Figiel +
> * Matthew H Flamm
> * Franz Forstmayr +
> * Christoph Gohlke
> * Richard Janis Goldschmidt +
> * Ralf Gommers
> * Lars Grueter
> * Sylvain Gubian
> * Matt Haberland
> * Yaroslav Halchenko
> * Charles Harris
> * Lindsey Hiltner
> * JakobStruye +
> * He Jia +
> * Jwink3101 +
> * Greg Kiar +
> * Julius Bier Kirkegaard
> * John Kirkham +
> * Thomas Kluyver
> * Vladimir Korolev +
> * Joseph Kuo +
> * Michael Lamparski +
> * Eric Larson
> * Denis Laxalde
> * Katrin Leinweber
> * Jesse Livezey
> * ludcila +
> * Dhruv Madeka +
> * Magnus +
> * Nikolay Mayorov
> * Mark Mikofski
> * Jarrod Millman
> * Markus Mohrhard +
> * Eric Moore
> * Andrew Nelson
> * Aki Nishimura +
> * OGordon100 +
> * Petar Mlinarić +
> * Stefan Peterson
> * Matti Picus +
> * Ilhan Polat
> * Aaron Pries +
> * Matteo Ravasi +
> * Tyler Reddy
> * Ashton Reimer +
> * Joscha Reimer
> * rfezzani +
> * Riadh +
> * Lucas Roberts
> * Heshy Roskes +
> * Mirko Scholz +
> * Taylor D.
Scott + > * Srikrishna Sekhar + > * Kevin Sheppard + > * Sourav Singh > * skjerns + > * Kai Striega > * SyedSaifAliAlvi + > * Gopi Manohar T + > * Albert Thomas + > * Timon + > * Paul van Mulbregt > * Jacob Vanderplas > * Daniel Vargas + > * Pauli Virtanen > * VNMabus + > * Stefan van der Walt > * Warren Weckesser > * Josh Wilson > * Nate Yoder + > * Roman Yurchak > > A total of 97 people contributed to this release. > People with a "+" by their names contributed a patch for the first time. > This list of names is automatically generated, and may not be fully complete. > > Issues closed for 1.3.0 > ----------------------- > > * `#1320 `__: scipy.stats.distribution: problem with self.a, self.b if they... > * `#2002 `__: members set in scipy.stats.distributions.##._argcheck (Trac #1477) > * `#2823 `__: distribution methods add tmp > * `#3220 `__: Scipy.opimize.fmin_powell direc argument syntax unclear > * `#3728 `__: scipy.stats.pearsonr: possible bug with zero variance input > * `#6805 `__: error-in-scipy-wilcoxon-signed-rank-test-for-equal-series > * `#6873 `__: 'stats.boxcox' return all same values > * `#7117 `__: Warn users when using float32 input data to curve_fit and friends > * `#7632 `__: it's not possible to tell the \`optimize.least_squares\` solver... > * `#7730 `__: stats.pearsonr: Potential division by zero for dataset of length... > * `#7933 `__: stats.truncnorm fails when providing values outside truncation... > * `#8033 `__: Add standard filter types to firwin to set pass_zero intuitively... > * `#8600 `__: lfilter.c.src zfill has erroneous header > * `#8692 `__: Non-negative values of \`stats.hypergeom.logcdf\` > * `#8734 `__: Enable pip build isolation > * `#8861 `__: scipy.linalg.pinv gives wrong result while scipy.linalg.pinv2... > * `#8915 `__: need to fix macOS build against older numpy versions > * `#8980 `__: scipy.stats.pearsonr overflows with high values of x and y > * `#9226 `__: BUG: signal: SystemError: ... > * `#9254 `__: BUG: root finders brentq, etc, flag says "converged" even if... > * `#9308 `__: Test failure - test_initial_constraints_as_canonical > * `#9353 `__: scipy.stats.pearsonr returns r=1 if r_num/r_den = inf > * `#9359 `__: Planck distribution is a geometric distribution > * `#9381 `__: linregress should warn user in 2x2 array case > * `#9406 `__: BUG: stats: In pearsonr, when r is nan, the p-value must also... > * `#9437 `__: Cannot create sparse matrix from size_t indexes > * `#9518 `__: Relational attributes in loadarff > * `#9551 `__: BUG: scipy.optimize.newton says the root of x^2+1 is zero. > * `#9564 `__: rv_sample accepts invalid input in scipy.stats > * `#9565 `__: improper handling of multidimensional input in stats.rv_sample > * `#9581 `__: Least-squares minimization fails silently when x and y data are... > * `#9587 `__: Outdated value for scipy.constants.au > * `#9611 `__: Overflow error with new way of p-value calculation in kendall... > * `#9645 `__: \`scipy.stats.mode\` crashes with variable length arrays (\`dtype=object\`) > * `#9734 `__: PendingDeprecationWarning for np.matrix with pytest > * `#9786 `__: stats.ks_2samp() misleading for small data sets. > * `#9790 `__: Excessive memory usage on detrend > * `#9801 `__: dual_annealing does not set the success attribute in OptimizeResult > * `#9833 `__: IntegrationWarning from mielke.stats() during build of html doc. > * `#9835 `__: scipy.signal.firls seems to be inefficient versus MATLAB firls > * `#9864 `__: Curve_fit does not check for empty input data if called with... 
> * `#9869 `__: scipy.ndimage.label: Minor documentation issue > * `#9882 `__: format at the wrong paranthesis in scipy.spatial.transform > * `#9889 `__: scipy.signal.find_peaks minor documentation issue > * `#9890 `__: Minkowski p-norm Issues in cKDTree For Values Other Than 2 Or... > * `#9896 `__: scipy.stats._argcheck sets (not just checks) values > * `#9905 `__: Memory error in ndimage.binary_erosion > * `#9909 `__: binary_dilation/erosion/closing crashes when iterations is float > * `#9919 `__: BUG: \`coo_matrix\` does not validate the \`shape\` argument. > * `#9982 `__: lsq_linear hangs/infinite loop with 'trf' method > * `#10003 `__: exponnorm.pdf returns NAN for small K > * `#10011 `__: Incorrect check for invalid rotation plane in scipy.ndimage.rotate > * `#10024 `__: Fails to build from git > * `#10048 `__: DOC: scipy.optimize.root_scalar > * `#10068 `__: DOC: scipy.interpolate.splev > * `#10074 `__: BUG: \`expm\` calculates the wrong coefficients in the backward... > > > Pull requests for 1.3.0 > ----------------------- > > * `#7827 `__: ENH: sparse: overhaul of sparse matrix indexing > * `#8431 `__: ENH: Cython optimize zeros api > * `#8743 `__: DOC: Updated linalg.pinv, .pinv2, .pinvh docstrings > * `#8744 `__: DOC: added examples to remez docstring > * `#9227 `__: DOC: update description of "direc" parameter of "fmin_powell" > * `#9263 `__: ENH: optimize: added "revised simplex" for scipy.optimize.linprog > * `#9325 `__: DEP: Remove deprecated functions for 1.3.0 > * `#9330 `__: Add note on push and pull affine transformations > * `#9423 `__: DOC: Clearly state how 2x2 input arrays are handled in stats.linregress > * `#9428 `__: ENH: parallelised brute > * `#9438 `__: BUG: Initialize coo matrix with size_t indexes > * `#9455 `__: MAINT: Speed up get_(lapack,blas)_func > * `#9465 `__: MAINT: Clean up optimize.zeros C solvers interfaces/code. > * `#9477 `__: DOC: linalg: fix lstsq docstring on residues shape > * `#9478 `__: DOC: Add docstring examples for rosen functions > * `#9479 `__: DOC: Add docstring example for ai_zeros and bi_zeros > * `#9480 `__: MAINT: linalg: lstsq clean up > * `#9489 `__: DOC: roadmap update for changes over the last year. > * `#9492 `__: MAINT: stats: Improve implementation of chi2 ppf method. > * `#9497 `__: DOC: Improve docstrings sparse.linalg.isolve > * `#9499 `__: DOC: Replace "Scipy" with "SciPy" in the .rst doc files for consistency. > * `#9500 `__: DOC: Document the toolchain and its roadmap. > * `#9505 `__: DOC: specify which definition of skewness is used > * `#9511 `__: DEP: interpolate: remove deprecated interpolate_wrapper > * `#9517 `__: BUG: improve error handling in stats.iqr > * `#9522 `__: ENH: Add Fiedler and fiedler companion to special matrices > * `#9526 `__: TST: relax precision requirements in signal.correlate tests > * `#9529 `__: DOC: fix missing random seed in optimize.newton example > * `#9533 `__: MAINT: Use list comprehension when possible > * `#9537 `__: DOC: add a "big picture" roadmap > * `#9538 `__: DOC: Replace "Numpy" with "NumPy" in .py, .rst and .txt doc files... > * `#9539 `__: ENH: add two-sample test (Epps-Singleton) to scipy.stats > * `#9559 `__: DOC: add section on global optimizers to tutorial > * `#9561 `__: ENH: remove noprefix.h, change code appropriately > * `#9562 `__: MAINT: stats: Rewrite pearsonr. > * `#9563 `__: BUG: Minor bug fix Callback in linprog(method='simplex') > * `#9568 `__: MAINT: raise runtime error for newton with zeroder if disp true,... 
> * `#9570 `__: Correct docstring in show_options in optimize. Fixes #9407 > * `#9573 `__: BUG fixes range of pk variable pre-check > * `#9577 `__: TST: fix minor issue in a signal.stft test. > * `#9580 `__: Included blank line before list - Fixes #8658 > * `#9582 `__: MAINT: drop Python 2.7 and 3.4 > * `#9588 `__: MAINT: update \`constants.astronomical_unit\` to new 2012 value. > * `#9592 `__: TST: Add 32-bit testing to CI > * `#9593 `__: DOC: Replace cumulative density with cumulative distribution > * `#9596 `__: TST: remove VC 9.0 from Azure CI > * `#9599 `__: Hyperlink DOI to preferred resolver > * `#9601 `__: DEV: try to limit GC memory use on PyPy > * `#9603 `__: MAINT: improve logcdf and logsf of hypergeometric distribution > * `#9605 `__: Reference to pylops in LinearOperator notes and ARPACK example > * `#9617 `__: TST: reduce max memory usage for sparse.linalg.lgmres test > * `#9619 `__: FIX: Sparse matrix addition/subtraction eliminates explicit zeros > * `#9621 `__: bugfix in rv_sample in scipy.stats > * `#9622 `__: MAINT: Raise error in directed_hausdorff distance > * `#9623 `__: DOC: Build docs with warnings as errors > * `#9625 `__: Return the number of calls to 'hessp' (not just 'hess') in trust... > * `#9627 `__: BUG: ignore empty lines in mmio > * `#9637 `__: Function to calculate the MAD of an array > * `#9646 `__: BUG: stats: mode for objects w/ndim > 1 > * `#9648 `__: Add \`stats.contingency\` to refguide-check > * `#9650 `__: ENH: many lobpcg() algorithm improvements > * `#9652 `__: Move misc.doccer to _lib.doccer > * `#9660 `__: ENH: add pearson, tippett, and mudholkar-george to combine_pvalues > * `#9661 `__: BUG: Fix ksone right-hand endpoint, documentation and tests. > * `#9664 `__: ENH: adding multi-target dijsktra performance enhancement > * `#9670 `__: MAINT: link planck and geometric distribution in scipy.stats > * `#9676 `__: ENH: optimize: change default linprog method to interior-point > * `#9685 `__: Added reference to ndimage.filters.median_filter > * `#9705 `__: Fix coefficients in expm helper function > * `#9711 `__: Release the GIL during sosfilt processing for simple types > * `#9721 `__: ENH: Convexhull visiblefacets > * `#9723 `__: BLD: Modify rv_generic._construct_doc to print out failing distribution... > * `#9726 `__: BUG: Fix small issues with \`signal.lfilter' > * `#9729 `__: BUG: Typecheck iterations for binary image operations > * `#9730 `__: ENH: reduce sizeof(NI_WatershedElement) by 20% > * `#9731 `__: ENH: remove suspicious sequence of type castings > * `#9739 `__: BUG: qr_updates fails if u is exactly in span Q > * `#9749 `__: BUG: MapWrapper.__exit__ should terminate > * `#9753 `__: ENH: Added exact computation for Kolmogorov-Smirnov two-sample... 
> * `#9755 `__: DOC: Added example for signal.impulse, copied from impulse2 > * `#9756 `__: DOC: Added docstring example for iirdesign > * `#9757 `__: DOC: Added examples for step functions > * `#9759 `__: ENH: Allow pass_zero to act like btype > * `#9760 `__: DOC: Added docstring for lp2bs > * `#9761 `__: DOC: Added docstring and example for lp2bp > * `#9764 `__: BUG: Catch internal warnings for matrix > * `#9766 `__: ENH: Speed up _gaussian_kernel1d by removing dependence on np.polynomial > * `#9769 `__: BUG: Fix Cubic Spline Read Only issues > * `#9773 `__: DOC: Several docstrings > * `#9774 `__: TST: bump Azure CI OpenBLAS version to match wheels > * `#9775 `__: DOC: Improve clarity of cov_x documentation for scipy.optimize.leastsq > * `#9779 `__: ENH: dual_annealing vectorise visit_fn > * `#9788 `__: TST, BUG: f2py-related issues with NumPy < 1.14.0 > * `#9791 `__: BUG: fix amax constraint not enforced in scalar_search_wolfe2 > * `#9792 `__: ENH: Allow inplace copying in place in "detrend" function > * `#9795 `__: DOC: Fix/update docstring for dstn and dst > * `#9796 `__: MAINT: Allow None tolerances in least_squares > * `#9798 `__: BUG: fixes abort trap 6 error in scipy issue 9785 in unit tests > * `#9807 `__: MAINT: improve doc and add alternative keyword to wilcoxon in... > * `#9808 `__: Fix PPoly integrate and test for CubicSpline > * `#9810 `__: ENH: Add the geometric standard deviation function > * `#9811 `__: MAINT: remove invalid derphi default None value in scalar_search_wolfe2 > * `#9813 `__: Adapt hamming distance in C to support weights > * `#9817 `__: DOC: Copy solver description to solver modules > * `#9829 `__: ENH: Add FOH and equivalent impulse response discretizations... > * `#9831 `__: ENH: Implement RotationSpline > * `#9834 `__: DOC: Change mielke distribution default parameters to ensure... > * `#9838 `__: ENH: Use faster solvers for firls > * `#9854 `__: ENH: loadarff now supports relational attributes. > * `#9856 `__: integrate.bvp - improve handling of nonlinear boundary conditions > * `#9862 `__: TST: reduce Appveyor CI load > * `#9874 `__: DOC: Update requirements in release notes > * `#9883 `__: BUG: fixed parenthesis in spatial.rotation > * `#9884 `__: ENH: Use Sparsity in Clarkson-Woodruff Sketch > * `#9888 `__: MAINT: Replace NumPy aliased functions > * `#9892 `__: BUG: Fix 9890 query_ball_point returns wrong result when p is... > * `#9893 `__: BUG: curve_fit doesn't check for empty input if called with bounds > * `#9894 `__: scipy.signal.find_peaks documentation error > * `#9898 `__: BUG: Set success attribute in OptimizeResult. See #9801 > * `#9900 `__: BUG: Restrict rv_generic._argcheck() and its overrides from setting... > * `#9906 `__: fixed a bug in kde logpdf > * `#9911 `__: DOC: replace example for "np.select" with the one from numpy... > * `#9912 `__: BF(DOC): point to numpy.select instead of plain (python) .select > * `#9914 `__: DOC: change ValueError message in _validate_pad of signaltools. > * `#9915 `__: cKDTree query_ball_point improvements > * `#9918 `__: Update ckdtree.pyx with boxsize argument in docstring > * `#9920 `__: BUG: sparse: Validate explicit shape if given with dense argument... > * `#9924 `__: BLD: add back pyproject.toml > * `#9931 `__: Fix empty constraint > * `#9935 `__: DOC: fix references for stats.f_oneway > * `#9936 `__: Revert gh-9619: "FIX: Sparse matrix addition/subtraction eliminates... 
> * `#9937 `__: MAINT: fix PEP8 issues and update to pycodestyle 2.5.0 > * `#9939 `__: DOC: correct \`structure\` description in \`ndimage.label\` docstring > * `#9940 `__: MAINT: remove extraneous distutils copies > * `#9945 `__: ENH: differential_evolution can use Bounds object > * `#9949 `__: Added 'std' to add doctstrings since it is a \`known_stats\`... > * `#9953 `__: DOC: Documentation cleanup for stats tutorials. > * `#9962 `__: __repr__ for Bounds > * `#9971 `__: ENH: Improve performance of lsmr > * `#9987 `__: CI: pin Sphinx version to 1.8.5 > * `#9990 `__: ENH: constraint violation > * `#9991 `__: BUG: Avoid inplace modification of input array in newton > * `#9995 `__: MAINT: sparse.csgraph: Add cdef to stop build warning. > * `#9996 `__: BUG: Make minimize_quadratic_1d work with infinite bounds correctly > * `#10004 `__: BUG: Fix unbound local error in linprog - simplex. > * `#10007 `__: BLD: fix Python 3.7 build with build isolation > * `#10009 `__: BUG: Make sure that _binary_erosion only accepts an integer number... > * `#10016 `__: Update link to airspeed-velocity > * `#10017 `__: DOC: Update \`interpolate.LSQSphereBivariateSpline\` to include... > * `#10018 `__: MAINT: special: Fix a few warnings that occur when compiling... > * `#10019 `__: TST: Azure summarizes test failures > * `#10021 `__: ENH: Introduce CubicHermiteSpline > * `#10022 `__: BENCH: Increase cython version in asv to fix benchmark builds > * `#10023 `__: BUG: Avoid exponnorm producing nan for small K values. > * `#10025 `__: BUG: optimize: tweaked linprog status 4 error message > * `#10026 `__: ENH: optimize: use SuiteSparse in linprog interior-point when... > * `#10027 `__: MAINT: cluster: clean up the use of malloc() in the function... > * `#10028 `__: Fix rotate invalid plane check > * `#10040 `__: MAINT: fix pratt method of wilcox test in scipy.stats > * `#10041 `__: MAINT: special: Fix a warning generated when building the AMOS... > * `#10044 `__: DOC: fix up spatial.transform.Rotation docstrings > * `#10047 `__: MAINT: interpolate: Fix a few build warnings. > * `#10051 `__: Add project_urls to setup > * `#10052 `__: don't set flag to "converged" if max iter exceeded > * `#10054 `__: MAINT: signal: Fix a few build warnings and modernize some C... > * `#10056 `__: BUG: Ensure factorial is not too large in kendaltau > * `#10058 `__: Small speedup in samping from ortho and special_ortho groups > * `#10059 `__: BUG: optimize: fix #10038 by increasing tol > * `#10061 `__: BLD: DOC: make building docs easier by parsing python version. > * `#10064 `__: ENH: Significant speedup for ortho and special ortho group > * `#10065 `__: DOC: Reword parameter descriptions in \`optimize.root_scalar\` > * `#10066 `__: BUG: signal: Fix error raised by savgol_coeffs when deriv > polyorder. > * `#10067 `__: MAINT: Fix the cutoff value inconsistency for pinv2 and pinvh > * `#10072 `__: BUG: stats: Fix boxcox_llf to avoid loss of precision. > * `#10075 `__: ENH: Add wrappers for ?syconv routines > * `#10076 `__: BUG: optimize: fix curve_fit for mixed float32/float64 input > * `#10077 `__: DOC: Replace undefined \`k\` in \`interpolate.splev\` docstring > * `#10079 `__: DOC: Fixed typo, rearranged some doc of stats.morestats.wilcoxon. 
> * `#10080 `__: TST: install scikit-sparse for full TravisCI tests
> * `#10083 `__: Clean \`\`_clean_inputs\`\` in optimize.linprog
> * `#10088 `__: ENH: optimize: linprog test CHOLMOD/UMFPACK solvers when available
> * `#10090 `__: MAINT: Fix CubicSplinerInterpolator for pandas
> * `#10091 `__: MAINT: improve logcdf and logsf of hypergeometric distribution
> * `#10095 `__: MAINT: Clean \`\`_clean_inputs\`\` in linprog
>
> Checksums
> =========
>
> MD5
> ~~~
>
> 5a71a217fa4ff372097f501daf816f3b scipy-1.3.0rc1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> c154ca8eee9ebafe04575b316e41ed85 scipy-1.3.0rc1-cp35-cp35m-manylinux1_i686.whl
> 36a91fa4ae6eeceeb79bf97b9bd013eb scipy-1.3.0rc1-cp35-cp35m-manylinux1_x86_64.whl
> f1f4259b373332d6edc6bef123b0dc7c scipy-1.3.0rc1-cp35-cp35m-win32.whl
> c81d78bed8e2176cf0168785b7e1b692 scipy-1.3.0rc1-cp35-cp35m-win_amd64.whl
> c43dd24f349c9d37a6c996e7c0674141 scipy-1.3.0rc1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 8188210e0fd710f4544f314306b76313 scipy-1.3.0rc1-cp36-cp36m-manylinux1_i686.whl
> 0cf317ee185a8f5736b479d1c8b5f415 scipy-1.3.0rc1-cp36-cp36m-manylinux1_x86_64.whl
> e46e5b38288d79321d8d6ffa15a8f54e scipy-1.3.0rc1-cp36-cp36m-win32.whl
> 85a79e9be408de72056c6efc1eef7d46 scipy-1.3.0rc1-cp36-cp36m-win_amd64.whl
> 2436169658f74e03b4037142e51a8f86 scipy-1.3.0rc1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 1b9e2caa5994ee227b4ad8e46b45ad7e scipy-1.3.0rc1-cp37-cp37m-manylinux1_i686.whl
> 05a51b40471abdf4e9020183ad449bf2 scipy-1.3.0rc1-cp37-cp37m-manylinux1_x86_64.whl
> debecbc0e54fe4e737971b0a6d9f24f5 scipy-1.3.0rc1-cp37-cp37m-win32.whl
> 79c725144fa59566d8ebd3bf556533aa scipy-1.3.0rc1-cp37-cp37m-win_amd64.whl
> 0b9fa3583bcf2c8190b277cddb287132 scipy-1.3.0rc1.tar.gz
> d22a40e138ecd6bb26990e22d4a1ac1b scipy-1.3.0rc1.tar.xz
> 9cc12f26980587900befabafaac2078b scipy-1.3.0rc1.zip
>
> SHA256
> ~~~~~~
>
> 3491e5453acec48ff8bc1e96980a9ca225bf653eb8e2fad2efe44ca54fd61230 scipy-1.3.0rc1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> b04fed432d0d2b7aa52fb83c87390f22e34407baa404385e6c804c6d2f9fe3dc scipy-1.3.0rc1-cp35-cp35m-manylinux1_i686.whl
> 71c236d8b036caa84a018b056c6ced101bcb3efb160fab18957daf5a41c7319c scipy-1.3.0rc1-cp35-cp35m-manylinux1_x86_64.whl
> 6fa6a341ab6920f9233ce5da16572e3e403540f90c17269e27a7a7451e05d40e scipy-1.3.0rc1-cp35-cp35m-win32.whl
> 7ec09797276d26c74234c056415234a7168e536011767af984b1700410806699 scipy-1.3.0rc1-cp35-cp35m-win_amd64.whl
> 55fcdd1ea9bb3d5461477391d924e24c56c8fa3cb3aba98c2ee2c47e3ccd6ce2 scipy-1.3.0rc1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> b149d330d0a8219b68d2839cc49a37df163b0ec0b56194f8f0aa6878c1e7c2a4 scipy-1.3.0rc1-cp36-cp36m-manylinux1_i686.whl
> 9fed021210077c2e183f621d84bef428762b0b226f8f6da2b03a7d93628e3089 scipy-1.3.0rc1-cp36-cp36m-manylinux1_x86_64.whl
> 5c9c9a47e914fbf8edc9a1da1e10a9f5204b3dfc89c93b721b658290884bfe45 scipy-1.3.0rc1-cp36-cp36m-win32.whl
> 09b2c3e099b6274b142e7e05132e79efbbe4daa9dd593a226a3bc9820adf966a scipy-1.3.0rc1-cp36-cp36m-win_amd64.whl
> 4e93edc6d4c1296ac39ae4be2e8d9336a37a3e5c6e104801a288db0f18d5dbd1 scipy-1.3.0rc1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 1edafef721c859848b8575e7313fba062903f3c256661304f488b96fff4f759d scipy-1.3.0rc1-cp37-cp37m-manylinux1_i686.whl
> 2fe186fff442d3f54f8e6950e809c571ea29db8333ed30608c4074a843d5cdf1 scipy-1.3.0rc1-cp37-cp37m-manylinux1_x86_64.whl
> 7c59ec7d5148538978da6c66059c9e3240ae9cf17b803b15354fffc8d3320961 scipy-1.3.0rc1-cp37-cp37m-win32.whl
> 2ab6d6f940b6b09cbee6d7cb2de5914a50f459cbc612b4328b033f67452fd1d6 scipy-1.3.0rc1-cp37-cp37m-win_amd64.whl
> d09e6fae7434aa9e1422d95bbee28f0b66ba97ab770fec24f31c75a873823cd6 scipy-1.3.0rc1.tar.gz
> ba49645a693f6e70e690cf6e2175865b7bf0182cf59bdce872968f47546f4269 scipy-1.3.0rc1.tar.xz
> f1cdb4651f3d150f5c145dc930a627f09aa2afc5275c6c5da97a9a5df274c531 scipy-1.3.0rc1.zip
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1
>
> iQIcBAEBAgAGBQJcw54gAAoJELD/41ZX0J71zbEQAKaapoUb0gbLvljeLL+FZmaZ
> Ha8OecYf7Fsdgj9kiGk2kKgrvz5nkoRyJDpxi4BOzPua+HBUhECoH3DBL5kA6sbv
> VsMAhTb5HFPiKmC78LHEUNSY1fPJrdK7s99pIgedjQFR5diODHqqtB54awbRsTOs
> Vdq45I7JoCd+DUMqQWIA3TZyjrZzw2V1KBFS8mHFdcop71Q1RqRNf3rVw7Rpydob
> uegbae42cJ2Hej4+viU8hsCM+JIgkCuZaQEN2wp4W9pmHsDCzJcoyQjulTjZQAeG
> W3L/F5O1p9A9nHIPk+wvS3D2ageKOhYmVSgB6dznXnFRsjwIKH4O4k0TCzYAamsd
> HvcKnncGAzc+95o2k+v46575O3pBPRYCmOKz6LlFfFGNr/PWkxPYuG49nGIwQZU+
> /w0RYvu2NIPyd05gQQfwyEAATmwbYQfCelbQtHtPehDrtwMINZF4ZCqVg8D7d2ns
> c2mUUC72Iq62R17CdQOjp/zkvU4Xo6KGm0TPQY7xuJwXOaiM5cJ49iY+/Z6eBfBM
> JS0MoIZ8PRzhzQ6gKTPt4exil75ybNpCeL/Ny/LY5dgkfaOdTuplmxwxgngxaECG
> W4SaGw4P0SiATwyz7hMprv/Xkq59iK6IsF6Ki7uxPuMps+nbYhvVIUXqh8Y5iVxx
> piFfF2ct8rgxkVm3qBmN
> =KK2K
> -----END PGP SIGNATURE-----

From chris.barker at noaa.gov  Thu May  2 21:31:26 2019
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Thu, 2 May 2019 20:31:26 -0500
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To:
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu>
Message-ID:

Sounds like this is a NASA specific thing, in which case, I guess someone
at NASA would need to step up.

I'm afraid I know no pythonistas at NASA.

But I'll poke around NOAA to see if there's anything similar.

-CHB

On Apr 25, 2019, at 1:04 PM, Ralf Gommers wrote:

On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers wrote:

>
>
> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington
> wrote:
>
>
>> 3. There's such a thing as a share-in-savings contract at NASA, in which
>> you calculate a savings, such as from avoided costs of licensing IDL or
>> Matlab, and say you'll develop a replacement for that product that costs
>> less, in exchange for a portion of the savings. These are rare and few
>> people know about them, but one presenter to the committee did discuss
>> them and thought they'd be appropriate. I've always felt that we could
>> get a chunk of change this way, and was surprised to find that the
>> approach exists and has a name. About 3 of 4 people I talk to at NASA
>> have no idea this even exists, though, and I haven't pursued it to its
>> logical end to see if it's viable.
>>
>
> I've heard of these. Definitely worth looking into.
>

It seems to be hard to find any information about these share-in-savings
contracts. The closest thing I found is this:
https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and

It is called "Shared Savings" there, and was replaced last year by
something called "Value Engineering Change Proposal".
If anyone can comment on whether that's the same thing as Joe meant and
whether this is worth following up on, that would be very helpful.

Cheers,
Ralf

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion

From waterbug at pangalactic.us  Thu May  2 21:48:51 2019
From: waterbug at pangalactic.us (Stephen Waterbury)
Date: Thu, 2 May 2019 21:48:51 -0400
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To: <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us>
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us>
Message-ID: <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>

P.S. If anyone wants to continue this discussion at SciPy 2019,
I will be there (on my own nickel! ;) ...

Steve

On 5/2/19 9:45 PM, Stephen Waterbury wrote:
> I am a NASA pythonista (for 20+ years ;), but you can now say you know
> yet another person at NASA who has no idea this even exists ... :)
> Not only do I not know of that, but I know of NASA policies that make
> it very difficult for NASA civil servants to contribute to open source
> projects -- quite hypocritical, given the amount of open source
> code that NASA (like all other large organizations) depends critically
> on, but it's a fact.
>
> Cheers,
> Steve Waterbury
>
> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR
> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email
> address. :)
>
> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote:
>
>> Sounds like this is a NASA specific thing, in which case, I guess
>> someone at NASA would need to step up.
>>
>> I'm afraid I know no pythonistas at NASA.
>>
>> But I'll poke around NOAA to see if there's anything similar.
>>
>> -CHB
>>
>> On Apr 25, 2019, at 1:04 PM, Ralf Gommers wrote:
>>
>>> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers wrote:
>>>
>>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington wrote:
>>>
>>> 3. There's such a thing as a share-in-savings contract at NASA, in which
>>> you calculate a savings, such as from avoided costs of licensing IDL or
>>> Matlab, and say you'll develop a replacement for that product that costs
>>> less, in exchange for a portion of the savings. These are rare and few
>>> people know about them, but one presenter to the committee did discuss
>>> them and thought they'd be appropriate. I've always felt that we could
>>> get a chunk of change this way, and was surprised to find that the
>>> approach exists and has a name. About 3 of 4 people I talk to at NASA
>>> have no idea this even exists, though, and I haven't pursued it to its
>>> logical end to see if it's viable.
>>>
>>>
>>> I've heard of these. Definitely worth looking into.
>>>
>>>
>>> It seems to be hard to find any information about these
>>> share-in-savings contracts. The closest thing I found is this:
>>> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and
>>>
>>> It is called "Shared Savings" there, and was replaced last year by
>>> something called "Value Engineering Change Proposal".
>>> If anyone can comment on whether that's the same thing as Joe meant and
>>> whether this is worth following up on, that would be very helpful.
>>>
>>> Cheers,
>>> Ralf
>>>
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion at python.org
>>> https://mail.python.org/mailman/listinfo/numpy-discussion
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at python.org
>> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From waterbug at pangalactic.us  Thu May  2 21:45:00 2019
From: waterbug at pangalactic.us (Stephen Waterbury)
Date: Thu, 2 May 2019 21:45:00 -0400
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To:
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu>
Message-ID: <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us>

I am a NASA pythonista (for 20+ years ;), but you can now say you know
yet another person at NASA who has no idea this even exists ... :)
Not only do I not know of that, but I know of NASA policies that make
it very difficult for NASA civil servants to contribute to open source
projects -- quite hypocritical, given the amount of open source
code that NASA (like all other large organizations) depends critically
on, but it's a fact.

Cheers,
Steve Waterbury

(CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR
THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email
address. :)

On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote:
> Sounds like this is a NASA specific thing, in which case, I guess
> someone at NASA would need to step up.
>
> I'm afraid I know no pythonistas at NASA.
>
> But I'll poke around NOAA to see if there's anything similar.
>
> -CHB
>
> On Apr 25, 2019, at 1:04 PM, Ralf Gommers wrote:
>
>> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers wrote:
>>
>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington wrote:
>>
>> 3. There's such a thing as a share-in-savings contract at NASA, in which
>> you calculate a savings, such as from avoided costs of licensing IDL or
>> Matlab, and say you'll develop a replacement for that product that costs
>> less, in exchange for a portion of the savings. These are rare and few
>> people know about them, but one presenter to the committee did discuss
>> them and thought they'd be appropriate. I've always felt that we could
>> get a chunk of change this way, and was surprised to find that the
>> approach exists and has a name. About 3 of 4 people I talk to at NASA
>> have no idea this even exists, though, and I haven't pursued it to its
>> logical end to see if it's viable.
>>
>>
>> I've heard of these. Definitely worth looking into.
>>
>>
>> It seems to be hard to find any information about these share-in-savings
>> contracts. The closest thing I found is this:
>> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and
>>
>> It is called "Shared Savings" there, and was replaced last year by
>> something called "Value Engineering Change Proposal". If anyone can
>> comment on whether that's the same thing as Joe meant and whether
>> this is worth following up on, that would be very helpful.
>>
>> Cheers,
>> Ralf
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at python.org
>> https://mail.python.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From ralf.gommers at gmail.com  Fri May  3 02:50:34 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 3 May 2019 08:50:34 +0200
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To: <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>
Message-ID:

On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury wrote:

> P.S. If anyone wants to continue this discussion at SciPy 2019,
> I will be there (on my own nickel! ;) ...
>

Thanks for the input Stephen, and looking forward to seeing you at SciPy'19!

Ralf


Steve
>
> On 5/2/19 9:45 PM, Stephen Waterbury wrote:
>
> I am a NASA pythonista (for 20+ years ;), but you can now say you know
> yet another person at NASA who has no idea this even exists ... :)
> Not only do I not know of that, but I know of NASA policies that make
> it very difficult for NASA civil servants to contribute to open source
> projects -- quite hypocritical, given the amount of open source
> code that NASA (like all other large organizations) depends critically
> on, but it's a fact.
>
> Cheers,
> Steve Waterbury
>
> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR
> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email
> address. :)
>
> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote:
>
> Sounds like this is a NASA specific thing, in which case, I guess someone
> at NASA would need to step up.
>
> I'm afraid I know no pythonistas at NASA.
>
> But I'll poke around NOAA to see if there's anything similar.
>
> -CHB
>
> On Apr 25, 2019, at 1:04 PM, Ralf Gommers wrote:
>
> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers wrote:
>
>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington wrote:
>>
>>> 3. There's such a thing as a share-in-savings contract at NASA, in which
>>> you calculate a savings, such as from avoided costs of licensing IDL or
>>> Matlab, and say you'll develop a replacement for that product that costs
>>> less, in exchange for a portion of the savings. These are rare and few
>>> people know about them, but one presenter to the committee did discuss
>>> them and thought they'd be appropriate. I've always felt that we could
>>> get a chunk of change this way, and was surprised to find that the
>>> approach exists and has a name. About 3 of 4 people I talk to at NASA
>>> have no idea this even exists, though, and I haven't pursued it to its
>>> logical end to see if it's viable.
>>>
>>
>> I've heard of these. Definitely worth looking into.
>>
>
> It seems to be hard to find any information about these share-in-savings
> contracts. The closest thing I found is this:
> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and
>
> It is called "Shared Savings" there, and was replaced last year by
> something called "Value Engineering Change Proposal".
> If anyone can comment on whether that's the same thing as Joe meant and
> whether this is worth following up on, that would be very helpful.
>
> Cheers,
> Ralf
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From pierre.debuyl at kuleuven.be  Fri May  3 10:12:18 2019
From: pierre.debuyl at kuleuven.be (Pierre de Buyl)
Date: Fri, 3 May 2019 16:12:18 +0200
Subject: [Numpy-discussion] EuroSciPy 2019 - extension of deadline
Message-ID: <20190503141218.GA25510@pi-x230>

Hi all,

The call for proposals for EuroSciPy 2019 is extended to 12 May 2019.
https://pretalx.com/euroscipy-2019/cfp

EuroSciPy 2019 takes place 2-6 September 2019 in Bilbao, Spain.
There will be two days of tutorials, two days of conference, and one day
of sprints!

Find us on the web https://www.euroscipy.org/2019/ and on twitter
https://twitter.com/EuroSciPy

Regards,

The EuroSciPy 2019 team

From mikofski at berkeley.edu  Fri May  3 11:40:09 2019
From: mikofski at berkeley.edu (Mark Mikofski)
Date: Fri, 3 May 2019 08:40:09 -0700
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To:
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>
Message-ID:

Hi Ralf, and others,

Sorry for the late notice, but there are several funding opportunities
in solar, including one for $350,000 to develop open source software to
lower soft costs of solar.
https://eere-exchange.energy.gov/#FoaId45eda43a-e826-4481-ae7a-cc6e8ed4fdae

See topic 3.4 specifically in the attached PDF - also note that to view the
recording the password is "Setofoa2019" - it's about 30 minutes long.

I know that this is extremely niche, but as a few others have said, [the
DOE] grants tend to be very specific, but perhaps we can creatively think
of ways to channel funds to NumPy and SciPy.

Also there is a cost share that is typically 20%, which would be a
non-starter for volunteer projects.

But here's an idea: perhaps partnering with a company, like mine (DNV GL),
who is applying for the grant, and who uses NumPy, and could pay the cost
share, and then we collaborate on something that is required to complete
the project, which is contributed to NumPy (or SciPy) - but we would have
to figure out what we could align on.

Seems like NumFOCUS, Quansight, or some other company in the OSS space
could figure out ways to help connect companies, OSS projects, and funding
opportunities like these, where there's a possibility of alignment and
mutual benefit?

The full list of funding opportunities is here:
https://eere-exchange.energy.gov/

Best Regards,
Mark


On Thu, May 2, 2019 at 11:52 PM Ralf Gommers wrote:

>
>
> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury wrote:
>
>> P.S.
> > Ralf > > > Steve >> >> On 5/2/19 9:45 PM, Stephen Waterbury wrote: >> >> I am a NASA pythonista (for 20+ years ;), but you can now say you know >> yet another person at NASA who has no idea this even exists ... :) >> Not only do I not know of that, but I know of NASA policies that make >> it very difficult for NASA civil servants to contribute to open source >> projects -- quite hypocritical, given the amount of open source >> code that NASA (like all other large organizations) depends critically >> on, but it's a fact. >> >> Cheers, >> Steve Waterbury >> >> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR >> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email >> address. :) >> >> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote: >> >> Sounds like this is a NASA specific thing, in which case, I guess someone >> at NASA would need to step up. >> >> I?m afraid I know no pythonistas at NASA. >> >> But I?ll poke around NOAA to see if there?s anything similar. >> >> -CHB >> >> On Apr 25, 2019, at 1:04 PM, Ralf Gommers wrote: >> >> >> >> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers >> wrote: >> >>> >>> >>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington >>> wrote: >>> >>> >>>> 3. There's such a thing as a share-in-savings contract at NASA, in >>>> which >>>> you calculate a savings, such as from avoided costs of licensing IDL or >>>> Matlab, and say you'll develop a replacement for that product that >>>> costs >>>> less, in exchange for a portion of the savings. These are rare and few >>>> people know about them, but one presenter to the committee did discuss >>>> them and thought they'd be appropriate. I've always felt that we could >>>> get a chunk of change this way, and was surprised to find that the >>>> approach exists and has a name. About 3 of 4 people I talk to at NASA >>>> have no idea this even exists, though, and I haven't pursued it to its >>>> logical end to see if it's viable. >>>> >>> >>> I've heard of these. Definitely worth looking into. >>> >> >> It seems to be hard to find any information about these share-in-savings >> contracts. The closest thing I found is this: >> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and >> >> It is called "Shared Savings" there, and was replaced last year by >> something called "Value Engineering Change Proposal". If anyone can comment >> on whether that's the same thing as Joe meant and whether this is worth >> following up on, that would be very helpful. >> >> Cheers, >> Ralf >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> >> _______________________________________________ >> NumPy-Discussion mailing listNumPy-Discussion at python.orghttps://mail.python.org/mailman/listinfo/numpy-discussion >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -- Mark Mikofski, PhD (2005) *Fiat Lux* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: SETO_FY19_FOA_Applicant_Webinar_Topic_Area_3_-_Balance_of_Systems_Soft_Costs_Reduction.pdf
Type: application/pdf
Size: 949355 bytes
Desc: not available
URL:

From chris.barker at noaa.gov  Fri May  3 12:23:59 2019
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 3 May 2019 09:23:59 -0700
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To:
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>
Message-ID:

On Thu, May 2, 2019 at 11:51 PM Ralf Gommers wrote:

> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury wrote:
>
>> P.S. If anyone wants to continue this discussion at SciPy 2019,
>> I will be there (on my own nickel! ;) ...
>>

So will I (on NOAA's nickel, which I am grateful for)

Maybe we should hold a BoF, or even something more formal, on
Government support for SciPy Stack development?

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From mikofski at berkeley.edu  Fri May  3 12:47:50 2019
From: mikofski at berkeley.edu (Mark Mikofski)
Date: Fri, 3 May 2019 09:47:50 -0700
Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected)
In-Reply-To:
References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us>
Message-ID:

Sorry, that last attachment was just a slide show of the topic 3 recording;
here is the full funding opportunity announcement - letters with a 200-word
abstract are due May 7th.

On Fri, May 3, 2019 at 8:40 AM Mark Mikofski wrote:

> Hi Ralf, and others,
>
> Sorry for the late notice, but there are several funding opportunities
> in solar, including one for $350,000 to develop open source software to
> lower soft costs of solar.
> https://eere-exchange.energy.gov/#FoaId45eda43a-e826-4481-ae7a-cc6e8ed4fdae
>
> See topic 3.4 specifically in the attached PDF - also note that to view
> the recording the password is "Setofoa2019" - it's about 30 minutes long.
>
> I know that this is extremely niche, but as a few others have said, [the
> DOE] grants tend to be very specific, but perhaps we can creatively think
> of ways to channel funds to NumPy and SciPy.
>
> Also there is a cost share that is typically 20%, which would be a
> non-starter for volunteer projects.
>
> But here's an idea: perhaps partnering with a company, like mine (DNV GL),
> who is applying for the grant, and who uses NumPy, and could pay the cost
> share, and then we collaborate on something that is required to complete
> the project, which is contributed to NumPy (or SciPy) - but we would have
> to figure out what we could align on.
>
> Seems like NumFOCUS, Quansight, or some other company in the OSS space
> could figure out ways to help connect companies, OSS projects, and funding
> opportunities like these, where there's a possibility of alignment and
> mutual benefit?
>
> The full list of funding opportunities is here:
> https://eere-exchange.energy.gov/
>
> Best Regards,
> Mark
>
>
> On Thu, May 2, 2019 at 11:52 PM Ralf Gommers wrote:
>
>>
>>
>> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury wrote:
>>
>>> P.S.
If anyone wants to continue this discussion at SciPy 2019, >>> I will be there (on my own nickel! ;) ... >>> >> >> Thanks for the input Stephen, and looking forward to seeing you at SciPy'19! >> >> Ralf >> >> >> Steve >>> >>> On 5/2/19 9:45 PM, Stephen Waterbury wrote: >>> >>> I am a NASA pythonista (for 20+ years ;), but you can now say you know >>> yet another person at NASA who has no idea this even exists ... :) >>> Not only do I not know of that, but I know of NASA policies that make >>> it very difficult for NASA civil servants to contribute to open source >>> projects -- quite hypocritical, given the amount of open source >>> code that NASA (like all other large organizations) depends critically >>> on, but it's a fact. >>> >>> Cheers, >>> Steve Waterbury >>> >>> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR >>> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email >>> address. :) >>> >>> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote: >>> >>> Sounds like this is a NASA specific thing, in which case, I guess >>> someone at NASA would need to step up. >>> >>> I'm afraid I know no pythonistas at NASA. >>> >>> But I'll poke around NOAA to see if there's anything similar. >>> >>> -CHB >>> >>> On Apr 25, 2019, at 1:04 PM, Ralf Gommers >>> wrote: >>> >>> >>> >>> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington >>>> wrote: >>>> >>>> >>>>> 3. There's such a thing as a share-in-savings contract at NASA, in >>>>> which >>>>> you calculate a savings, such as from avoided costs of licensing IDL >>>>> or >>>>> Matlab, and say you'll develop a replacement for that product that >>>>> costs >>>>> less, in exchange for a portion of the savings. These are rare and >>>>> few >>>>> people know about them, but one presenter to the committee did discuss >>>>> them and thought they'd be appropriate. I've always felt that we >>>>> could >>>>> get a chunk of change this way, and was surprised to find that the >>>>> approach exists and has a name. About 3 of 4 people I talk to at NASA >>>>> have no idea this even exists, though, and I haven't pursued it to its >>>>> logical end to see if it's viable. >>>>> >>>> >>>> I've heard of these. Definitely worth looking into. >>>> >>> >>> It seems to be hard to find any information about these share-in-savings >>> contracts. The closest thing I found is this: >>> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and >>> >>> It is called "Shared Savings" there, and was replaced last year by >>> something called "Value Engineering Change Proposal". If anyone can comment >>> on whether that's the same thing as Joe meant and whether this is worth >>> following up on, that would be very helpful.
>>> >>> Cheers, >>> Ralf >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion >>> >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > > > -- > Mark Mikofski, PhD (2005) > *Fiat Lux* > -- Mark Mikofski, PhD (2005) *Fiat Lux* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DE-FOA-0002064_FY19_SETO_FOA.pdf Type: application/pdf Size: 3136915 bytes Desc: not available URL: From waterbug at pangalactic.us Fri May 3 12:55:04 2019 From: waterbug at pangalactic.us (Stephen Waterbury) Date: Fri, 3 May 2019 12:55:04 -0400 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> Message-ID: <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> Sure, I would be interested to discuss, let's try to meet up there. Steve On 5/3/19 12:23 PM, Chris Barker wrote: > On Thu, May 2, 2019 at 11:51 PM Ralf Gommers > wrote: > > On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury > > wrote: > > P.S. If anyone wants to continue this discussion at SciPy 2019, > I will be there (on my own nickel! ;) ... > > > So will I (on NOAA's nickel, which I am grateful for) > > Maybe we should hold a BoF, or even something more formal, on > Government support for SciPy Stack development? > > -CHB > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 3 13:50:41 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 3 May 2019 10:50:41 -0700 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> Message-ID: On Fri, May 3, 2019 at 9:56 AM Stephen Waterbury wrote: > Sure, I would be interested to discuss, let's try to meet up there. > > OK, that's two of us :-) NumFOCUS folk: Should we take this off the list and talk about a BoF or something at SciPy?
-CHB > Steve > > On 5/3/19 12:23 PM, Chris Barker wrote: > On Thu, May 2, 2019 at 11:51 PM Ralf Gommers > wrote: > >> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury >> wrote: >> >>> P.S. If anyone wants to continue this discussion at SciPy 2019, >>> I will be there (on my own nickel! ;) ... >>> >> > So will I (on NOAA's nickel, which I am grateful for) > > Maybe we should hold a BoF, or even something more formal, on Government > support for SciPy Stack development? > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From jh at physics.ucf.edu Fri May 3 18:13:59 2019 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 3 May 2019 18:13:59 -0400 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: References: Message-ID: <6e516c45-bc7a-9a95-f911-299243834328@physics.ucf.edu> Just to keep people in the loop, Ralf and I are in discussion with people at NASA HQ about a funding stream for core development. Ralf has put together a short description of the development and funding model (5 core projects, 10-20 core developers each, nearly all volunteer now, how NumFOCUS fits in, what we hope to establish from NASA vs. from other agencies, industry, other countries' science entities, etc.). That will circulate within the agency, to see what can be scraped together. Program managers in NASA's Science Mission Directorate (SMD) gave quite-positive feedback on how vital the Python ecosystem is to NASA's mission. We're emphasizing the need for both new functionality and maintenance (e.g., docs, web site, bug fixing). If this is ultimately successful, it can be a model for approaching other agencies in the US and elsewhere. To Steve's point, regarding how hard it is for Civil Servants to contribute to OSS (due to NASA's lengthy internal review process for releasing software), this problem was clearly called out in the Academies report. We proposed some solutions to streamline things. What's needed now is for NASA Civil Servants to take that report and the relevant white papers (cited in the report and posted online) to their center's senior management, and to NASA HQ, and similarly for others in government agencies. You may wish to start from NASA's (or your agency's) mission, which includes sharing technology openly to boost the economy, and how you are encountering unreasonable barriers to that goal. This is mandated by the National Aeronautics and Space Act of 1958. For example, there is little reason to conduct an export-control review with lawyers looking at code emerging from a group that has nothing to do with anything near an export-controlled topic.
Universities and contractors are subject to the same export-control laws as NASA, and they have not routinely conducted similar reviews of every line of code released. This has not led to a pattern of export violations. (Whether there is any benefit at all to the export control laws as applied to software is debatable, since it's usually easy for coders elsewhere to write the same codes, but the law is the law.) --jh-- On 5/3/19 12:48 PM, numpy-discussion-request at python.org wrote: > Subject: > Re: [Numpy-discussion] grant proposal for core scientific Python > projects (rejected) > From: > Mark Mikofski > Date: > 5/3/19, 12:47 PM > > To: > Discussion of Numerical Python > > > Sorry, that last attachment was just a slide show of the topic 3 > recording; here is the full funding opportunity announcement - a letter > with a 200-word abstract is due May 7th > > On Fri, May 3, 2019 at 8:40 AM Mark Mikofski > wrote: > > Hi Ralf, and others, > > Sorry for the late notice, but there are several funding > opportunities in solar, including one for $350,000 to develop open > source software to lower soft costs of solar. > https://eere-exchange.energy.gov/#FoaId45eda43a-e826-4481-ae7a-cc6e8ed4fdae > > see topic 3.4 specifically in the attached PDF - also note that to view the > recording the password is "*Setofoa2019"* it's about 30 minutes long. > > I know that this is an extremely niche opportunity, but as a few others have > said, [the DOE] grants tend to be very specific, but perhaps we > can creatively think of ways to channel funds to NumPy and SciPy. > > Also there is a cost share that is typically 20%, which would be a > non-starter for volunteer projects. > > But here's an idea: perhaps partnering with a company, like mine > (DNV GL), who is applying for the grant, and who uses NumPy, and > could pay the cost share, and then we collaborate on something > that is required to complete the project, which is contributed to > NumPy (or SciPy) - but we would have > to figure out what we could align on. > > Seems like NumFOCUS, Quansight, or some other company in the OSS > space could figure out ways to help connect companies, OSS > projects, and funding opportunities like these, where there's a > possibility of alignment and > mutual benefit? > > The full list of funding opportunities is here: > https://eere-exchange.energy.gov/ > > Best Regards, > Mark > > > On Thu, May 2, 2019 at 11:52 PM Ralf Gommers > > wrote: > > > > On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury > > wrote: > > P.S. If anyone wants to continue this discussion at SciPy > 2019, > I will be there (on my own nickel! ;) ... > > > Thanks for the input Stephen, and looking forward to seeing you > at SciPy'19! > > Ralf > > > Steve > > On 5/2/19 9:45 PM, Stephen Waterbury wrote: > >> I am a NASA pythonista (for 20+ years ;), but you can now >> say you know >> yet another person at NASA who has no idea this even >> exists ... :) >> Not only do I not know of that, but I know of NASA >> policies that make >> it very difficult for NASA civil servants to contribute >> to open source >> projects -- quite hypocritical, given the amount of open >> source >> code that NASA (like all other large organizations) >> depends critically >> on, but it's a fact. >> >> Cheers, >> Steve Waterbury >> >> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR >> NASA OR >> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email >> address.
:) >> >> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote: >> >>> Sounds like this is a NASA specific thing, in which >>> case, I guess someone at NASA would need to step up. >>> >>> I'm afraid I know no pythonistas at NASA. >>> >>> But I'll poke around NOAA to see if there's anything >>> similar. >>> >>> -CHB >>> >>> On Apr 25, 2019, at 1:04 PM, Ralf Gommers >>> > >>> wrote: >>> >>>> >>>> >>>> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers >>>> >>> > wrote: >>>> >>>> >>>> >>>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington >>>> > wrote: >>>> >>>> >>>> 3. There's such a thing as a share-in-savings >>>> contract at NASA, in which >>>> you calculate a savings, such as from avoided >>>> costs of licensing IDL or >>>> Matlab, and say you'll develop a replacement >>>> for that product that costs >>>> less, in exchange for a portion of the savings. >>>> These are rare and few >>>> people know about them, but one presenter to >>>> the committee did discuss >>>> them and thought they'd be appropriate. I've >>>> always felt that we could >>>> get a chunk of change this way, and was >>>> surprised to find that the >>>> approach exists and has a name. About 3 of 4 >>>> people I talk to at NASA >>>> have no idea this even exists, though, and I >>>> haven't pursued it to its >>>> logical end to see if it's viable. >>>> >>>> >>>> I've heard of these. Definitely worth looking into. >>>> >>>> >>>> It seems to be hard to find any information about these >>>> share-in-savings contracts. The closest thing I found >>>> is this: >>>> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and >>>> >>>> It is called "Shared Savings" there, and was replaced >>>> last year by something called "Value Engineering Change >>>> Proposal". If anyone can comment on whether that's the >>>> same thing as Joe meant and whether this is worth >>>> following up on, that would be very helpful. >>>> >>>> Cheers, >>>> Ralf >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat May 4 06:24:45 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 4 May 2019 12:24:45 +0200 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> Message-ID: On Fri, May 3, 2019 at 6:55 PM Stephen Waterbury wrote: > Sure, I would be interested to discuss, let's try to meet up there. > > Steve > > On 5/3/19 12:23 PM, Chris Barker wrote: > > On Thu, May 2, 2019 at 11:51 PM Ralf Gommers > wrote: > >> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury >> wrote: >> >>> P.S. If anyone wants to continue this discussion at SciPy 2019, >>> I will be there (on my own nickel! ;) ...
>>> >> > So will I (on NOAA's nickel, which I am grateful for) > > Maybe we should hold a BoF, or even something more formal, on Government > support for SciPy Stack development? > > That would be very useful. Would you be interested to co-organize this, Chris? Cheers, Ralf > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat May 4 06:30:56 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 4 May 2019 12:30:56 +0200 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> Message-ID: On Fri, May 3, 2019 at 6:49 PM Mark Mikofski wrote: > Sorry, that last attachment was just a slide show of the topic 3 > recording; here is the full funding opportunity announcement - a letter with > a 200-word abstract is due May 7th > > On Fri, May 3, 2019 at 8:40 AM Mark Mikofski > wrote: > >> Hi Ralf, and others, >> >> Sorry for the late notice, but there are several funding opportunities >> in solar, including one for $350,000 to develop open source software to >> lower soft costs of solar. >> >> https://eere-exchange.energy.gov/#FoaId45eda43a-e826-4481-ae7a-cc6e8ed4fdae >> >> see topic 3.4 specifically in the attached PDF - also note that to view the >> recording the password is "*Setofoa2019"* it's about 30 minutes long. >> > Thanks for bringing up this opportunity, Mark. >> I know that this is an extremely niche opportunity, but as a few others have said, >> [the DOE] grants tend to be very specific, but perhaps we can creatively >> think of ways to channel funds to NumPy and SciPy. >> > I think I prefer to pass on this one. Not only because abstracts are due in 3 days, but mainly because it's not the best fit. Perhaps we'll be forced to partner with others on application-specific grants and goals at some point. However it would be much better (as I've said before) to obtain funding for what we really want and need rather than channeling some proportion of a grant meant for something different into development of our projects. My main goal at this point is getting clearer (also in written form) exactly what we need, then asking for exactly that. Format TBD - Chris' proposal of a BoF at SciPy may be a good forum to discuss. Cheers, Ralf >> Also there is a cost share that is typically 20%, which would be a >> non-starter for volunteer projects. >> >> But here's an idea: perhaps partnering with a company, like mine (DNV GL), >> who is applying for the grant, and who uses NumPy, and could pay the cost >> share, and then we collaborate on something that is required to complete >> the project, which is contributed to NumPy (or SciPy) - but we would have >> to figure out what we could align on.
>> >> Seems like NumFOCUS, Quansight, or some other company in the OSS space >> could figure out ways to help connect companies, OSS projects, and funding >> opportunities like these, where there's a possibility of alignment and >> mutual benefit? >> >> The full list of funding opportunities is here: >> https://eere-exchange.energy.gov/ >> >> Best Regards, >> Mark >> >> >> On Thu, May 2, 2019 at 11:52 PM Ralf Gommers >> wrote: >> >>> >>> >>> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury < >>> waterbug at pangalactic.us> wrote: >>> >>>> P.S. If anyone wants to continue this discussion at SciPy 2019, >>>> I will be there (on my own nickel! ;) ... >>>> >>> >>> Thanks for the input Stephen, and looking forward to seeing you at SciPy'19! >>> >>> Ralf >>> >>> >>> Steve >>>> >>>> On 5/2/19 9:45 PM, Stephen Waterbury wrote: >>>> >>>> I am a NASA pythonista (for 20+ years ;), but you can now say you know >>>> yet another person at NASA who has no idea this even exists ... :) >>>> Not only do I not know of that, but I know of NASA policies that make >>>> it very difficult for NASA civil servants to contribute to open source >>>> projects -- quite hypocritical, given the amount of open source >>>> code that NASA (like all other large organizations) depends critically >>>> on, but it's a fact. >>>> >>>> Cheers, >>>> Steve Waterbury >>>> >>>> (CLEARLY **NOT** SPEAKING IN ANY OFFICIAL CAPACITY FOR NASA OR >>>> THE U.S. GOVERNMENT AS A WHOLE! Hence the personal email >>>> address. :) >>>> >>>> On 5/2/19 9:31 PM, Chris Barker - NOAA Federal wrote: >>>> >>>> Sounds like this is a NASA specific thing, in which case, I guess >>>> someone at NASA would need to step up. >>>> >>>> I'm afraid I know no pythonistas at NASA. >>>> >>>> But I'll poke around NOAA to see if there's anything similar. >>>> >>>> -CHB >>>> >>>> On Apr 25, 2019, at 1:04 PM, Ralf Gommers >>>> wrote: >>>> >>>> >>>> >>>> On Sat, Apr 20, 2019 at 12:41 PM Ralf Gommers >>>> wrote: >>>> >>>>> >>>>> >>>>> On Thu, Apr 18, 2019 at 10:03 PM Joe Harrington >>>>> wrote: >>>>> >>>>> >>>>>> 3. There's such a thing as a share-in-savings contract at NASA, in >>>>>> which >>>>>> you calculate a savings, such as from avoided costs of licensing IDL >>>>>> or >>>>>> Matlab, and say you'll develop a replacement for that product that >>>>>> costs >>>>>> less, in exchange for a portion of the savings. These are rare and >>>>>> few >>>>>> people know about them, but one presenter to the committee did >>>>>> discuss >>>>>> them and thought they'd be appropriate. I've always felt that we >>>>>> could >>>>>> get a chunk of change this way, and was surprised to find that the >>>>>> approach exists and has a name. About 3 of 4 people I talk to at >>>>>> NASA >>>>>> have no idea this even exists, though, and I haven't pursued it to >>>>>> its >>>>>> logical end to see if it's viable. >>>>>> >>>>> >>>>> I've heard of these. Definitely worth looking into. >>>>> >>>> >>>> It seems to be hard to find any information about these >>>> share-in-savings contracts. The closest thing I found is this: >>>> https://www.federalregister.gov/documents/2018/06/22/2018-13463/nasa-federal-acquisition-regulation-supplement-removal-of-reference-to-the-shared-savings-policy-and >>>> >>>> It is called "Shared Savings" there, and was replaced last year by >>>> something called "Value Engineering Change Proposal". If anyone can comment >>>> on whether that's the same thing as Joe meant and whether this is worth >>>> following up on, that would be very helpful.
>>>> >>>> Cheers, >>>> Ralf >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> >> >> -- >> Mark Mikofski, PhD (2005) >> *Fiat Lux* >> > > > -- > Mark Mikofski, PhD (2005) > *Fiat Lux* > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Sat May 4 12:00:48 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 4 May 2019 18:00:48 +0200 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> Message-ID: On Sat, May 4, 2019 at 12:24 PM Ralf Gommers wrote: > > > On Fri, May 3, 2019 at 6:55 PM Stephen Waterbury > wrote: > >> Sure, I would be interested to discuss, let's try to meet up there. >> >> Steve >> >> On 5/3/19 12:23 PM, Chris Barker wrote: >> >> On Thu, May 2, 2019 at 11:51 PM Ralf Gommers >> wrote: >> >>> On Fri, May 3, 2019 at 3:49 AM Stephen Waterbury < >>> waterbug at pangalactic.us> wrote: >>> >>>> P.S. If anyone wants to continue this discussion at SciPy 2019, >>>> I will be there (on my own nickel! ;) ... >>>> >>> >> So will I (on NOAA's nickel, which I am grateful for) >> >> Maybe we should hold a BoF, or even something more formal, on Government >> support for SciPy Stack development? >> >> That would be very useful. Would you be interested to co-organize this, > Chris? > Okay never mind, this is apparently happening already: https://hackmd.io/YbxTpC1ZT_aEapTqydmHCA. Please jump in there instead :) Ralf > Cheers, > Ralf > >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> _______________________________________________ >> NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat May 4 15:29:06 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 4 May 2019 21:29:06 +0200 Subject: [Numpy-discussion] Adding to the non-dispatched implementation of NumPy methods In-Reply-To: References: <16a24b876f0.27ae.acf34a9c767d7bb498a799333be0433e@fastmail.com> <66ea23ed-e434-42d5-9546-1a3a7528ce9e@Canary> Message-ID: We seem to have run out of steam a bit here. On Tue, Apr 30, 2019 at 7:24 AM Stephan Hoyer wrote: > On Mon, Apr 29, 2019 at 5:49 AM Marten van Kerkwijk < > m.h.vankerkwijk at gmail.com> wrote: > >> The uses that I've seen so far (in CuPy and JAX) involve a handful of >>> functions that are directly re-exported from NumPy, e.g., >>> jax.numpy.array_repr is the exact same object as numpy.array_repr: >>> >>> https://github.com/cupy/cupy/blob/c3f1be602bf6951b007beaae644a5662f910048b/cupy/__init__.py#L341-L366 >>> >>> https://github.com/google/jax/blob/5edb23679f2605654949156da84e330205840695/jax/numpy/lax_numpy.py#L89-L132 >>> >>> >>> I suspect this will be less common in the future if __array_function__ >>> takes off, but for now it's convenient because users don't need to know >>> exactly which functions have been reimplemented. They can just use "import >>> jax.numpy as np" and everything works. >>> >>> These libraries are indeed passing CuPy or JAX arrays into NumPy >>> functions, which currently happen to have the desired behavior, thanks to >>> accidental details about how NumPy currently supports duck-typing and/or >>> coercions. >>> >>> To this end, it would be really nice to have an alias that *is* >>> guaranteed to work exactly as if __array_function__ didn't exist, and not >>> only for numpy.ndarray arrays. >>> >> >> Just to be clear: for this purpose, being able to call the implementation >> is still mostly a convenient crutch, correct? For classes that define >> __array_function__, would you expect more than the guarantee I wrote above, >> that the wrapped version will continue to work as advertised for ndarray >> input only? >> > > I'm not sure I agree -- what would be the more principled alternative here? > > Modules that emulate NumPy's public API for a new array type are both > pretty common (cupy, jax.numpy, autograd, dask.array, pydata/sparse, etc) > and also the best early candidates for adopting NEP-18, because they don't > need to do much extra work to write an __array_function__ method. I want to > make it as easy as possible for these early adopters, because their success > will make or break the entire __array_function__ protocol. > > In the long term, I agree that the importance of these numpy-like > namespaces will diminish, because it will be possible to use the original > NumPy namespace instead. Possibly new projects will decide that they don't > need to bother with them at all. But there are still lots of plausible > reasons for keeping them around even for a project that implements > __array_function__, e.g., > (a) to avoid the overhead of NumPy's dispatching > (b) to access functions like np.ones that return a different array type > (c) to make use of optional duck-array specific arguments, e.g., the > split_every argument to dask.array.sum() > (d) if they care about supporting versions of NumPy older than 1.17 > > In practice, I suspect we'll see these modules continue to exist for a > long time.
And they really do rely upon the exact behavior of NumPy today, > whatever that happens to be (e.g., the undocumented fact that > np.result_type supports duck-typing with the .dtype attribute rather than > coercing arguments to NumPy arrays). > > In particular, suppose we change an implementation to use different other >> numpy functions inside (which are of course overridden using >> __array_function__). I could imagine situations where that would work fine >> for everything that does not define __array_ufunc__, but where it would not >> for classes that do define it. Is that then a problem for numpy or for the >> project that has a class that defines __array_function__? >> > > If we change an existing NumPy function to start calling ufuncs directly > on input arguments, rather than calling np.asarray() on its inputs, > This wasn't really the question I believe. More like, if numpy function A now calls B under the hood, and we replace it with C (in a way that's fully backwards compatible for users of A), then will that be a problem in the future? I think that in practice this doesn't happen a lot, and is quite unlikely to be a problem. that will already (potentially) be a breaking change. We lost the ability > to do these sorts of refactors without breaking backwards compatibility when > we added __array_ufunc__. So I think it's already our problem, unless we're > willing to risk breaking __array_ufunc__ users. > > That said, I doubt this would actually be a major issue in practice. The > projects for which __array_function__ makes the most sense are "full duck > arrays," and all these projects are going to implement __array_ufunc__, > too, in a mostly compatible way. > > I'm a little puzzled by why you are concerned about retaining this > flexibility to reuse the attribute I'm asking for here for a function that > works differently. What I want is a special attribute that is guaranteed to > work like the public version of a NumPy function, but without checking for > an __array_function__ attribute. > > If we later decide we want to expose an attribute that provides a > non-coercing function that calls ufuncs directly instead of np.asarray, > what do we lose by giving it a new name so users don't need to worry about > changed behavior? There is plenty of room for special attributes on NumPy > functions. We can have both np.something.__skip_array_overrides__ and > np.something.__array_implementation__. > That's a good argument I think. Ralf > So we might as well pick a name that works for both, e.g., >>> __skip_array_overrides__ rather than __skip_array_function__. This would >>> let us save our users a bit of pain by not requiring them to make changes >>> like np.where.__skip_array_function__ -> np.where.__skip_array_ufunc__. >>> >> >> Note that for ufuncs it is not currently possible to skip the override. I >> don't think it is super-hard to do it, but I'm not sure I see the need to >> add a crutch where none has been needed so far. More generally, it is not >> obvious there is any C code where skipping the override is useful, since >> the C code relies much more directly on inputs being ndarray. >> > > To be entirely clear: I was thinking of > ufunc.method.__skip_array_overrides__() as "equivalent to ufunc.method() > except not checking for __array_ufunc__ attributes".
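To make the protocol under discussion concrete for readers following along, here is a minimal sketch of a duck array implementing __array_function__. The MyArray class, the HANDLED_FUNCTIONS registry, and the implements helper are invented for illustration; the example runs on NumPy 1.16 with the NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1 environment variable set (under NEP 18 the protocol is slated to be on by default in 1.17):

    import numpy as np

    HANDLED_FUNCTIONS = {}

    class MyArray:
        """Toy duck array that overrides a handful of NumPy functions."""
        def __init__(self, data):
            self.data = np.asarray(data)

        def __array_function__(self, func, types, args, kwargs):
            # Decline anything we have not explicitly overridden, so
            # NumPy can try other arguments or raise TypeError.
            if func not in HANDLED_FUNCTIONS:
                return NotImplemented
            if not all(issubclass(t, (MyArray, np.ndarray)) for t in types):
                return NotImplemented
            return HANDLED_FUNCTIONS[func](*args, **kwargs)

    def implements(numpy_function):
        # Register a MyArray-specific implementation of a NumPy function.
        def decorator(func):
            HANDLED_FUNCTIONS[numpy_function] = func
            return func
        return decorator

    @implements(np.sum)
    def sum_impl(arr, **kwargs):
        return MyArray(np.sum(arr.data, **kwargs))

    print(np.sum(MyArray([1, 2, 3])).data)  # dispatched to sum_impl -> 6

Calling np.sum on a MyArray routes through __array_function__ to sum_impl; the proposed __skip_array_function__ / __skip_array_overrides__ attribute would call NumPy's own implementation directly and skip exactly this dispatch step.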
> > I think the use-cases would be for Python code that calls ufuncs, in much the > same way that there are use-cases for Python code that call other NumPy > functions, e.g., > - np.sin.__skip_array_overrides__() could be slightly faster than > np.sin(), because it avoids checking for __array_ufunc__ attributes. > - np.add.__skip_array_overrides__(x, y) is definitely going to be faster > than np.add(np.asarray(x), np.asarray(y)), because it avoids the overhead > of two Python function calls. > > The use cases here are certainly not as compelling as those for > __array_function__, because __array_ufunc__'s arguments are in a > standardized form, but I think they're still meaningful. Not to mention, we > can refactor np.ndarray.__array_ufunc__ to work exactly like > np.ndarray.__array_function__, eliminating the special case in NEP-13's > dispatch rules. > > I agree that it wouldn't make sense to call the "generic duck-array > implementation" of a ufunc (these don't exist), but that wasn't what I was > proposing here. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sat May 4 19:26:16 2019 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sat, 4 May 2019 16:26:16 -0700 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: References: <5aa81bea-c7c1-250c-5b9f-f75aaecc4f44@physics.ucf.edu> <2ea57ca2-cfb3-fc7d-14b6-577afeb07eb0@pangalactic.us> <1274b1c4-7c72-41c4-7cdb-d222cae167b7@pangalactic.us> <7d472056-8b66-b9f1-4f79-89a9b76c8156@pangalactic.us> Message-ID: On May 4, 2019, at 9:00 AM, Ralf Gommers wrote: Okay never mind, this is apparently happening already: https://hackmd.io/YbxTpC1ZT_aEapTqydmHCA. Please jump in there instead :) Slightly different focus than I had in mind, but yes, it makes sense to join that effort. -CHB Ralf > Cheers, > Ralf > >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> _______________________________________________ >> NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sun May 5 00:10:56 2019 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sun, 5 May 2019 00:10:56 -0400 Subject: [Numpy-discussion] grant proposal for core scientific Python projects (rejected) In-Reply-To: <214582303.42434.1557011114234@itbp5.prod.google.com> References: <214582303.42434.1557011114234@itbp5.prod.google.com> Message-ID: Thanks for the update - this is great stuff!
-CHB On May 3, 2019, at 3:13 PM, Joe Harrington wrote: Just to keep people in the loop, Ralf and I are in discussion with people at NASA HQ about a funding stream for core development. Ralf has put together a short description of the development and funding model (5 core projects, 10-20 core developers each, nearly all volunteer now, how NumFOCUS fits in, what we hope to establish from NASA vs. from other agencies, industry, other countries' science entities, etc.). That will circulate within the agency, to see what can be scraped together. Program managers in NASA's Science Mission Directorate (SMD) gave quite-positive feedback on how vital the Python ecosystem is to NASA's mission. We're emphasizing the need for both new functionality and maintenance (e.g., docs, web site, bug fixing). If this is ultimately successful, it can be a model for approaching other agencies in the US and elsewhere. To Steve's point, regarding how hard it is for Civil Servants to contribute to OSS (due to NASA's lengthy internal review process for releasing software), this problem was clearly called out in the Academies report. We proposed some solutions to streamline things. What's needed now is for NASA Civil Servants to take that report and the relevant white papers (cited in the report and posted online) to their center's senior management, and to NASA HQ, and similarly for others in government agencies. You may wish to start from NASA's (or your agency's) mission, which includes sharing technology openly to boost the economy, and how you are encountering unreasonable barriers to that goal. This is mandated by the National Aeronautics and Space Act of 1958. For example, there is little reason to conduct an export-control review with lawyers looking at code emerging from a group that has nothing to do with anything near an export-controlled topic. Universities and contractors are subject to the same export-control laws as NASA, and they have not routinely conducted similar reviews of every line of code released. This has not led to a pattern of export violations. (Whether there is any benefit at all to the export control laws as applied to software is debatable, since it's usually easy for coders elsewhere to write the same codes, but the law is the law.) --jh--
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From joycetirnyuy at gmail.com Sun May 5 04:53:13 2019 From: joycetirnyuy at gmail.com (Joyce Tirnyuy) Date: Sun, 5 May 2019 09:53:13 +0100 Subject: [Numpy-discussion] My Introduction and Getting Started with Numpy. Message-ID: Hi All, I am Ngoran Clare-Joyce, an Electrical Engineer from Cameroon. I use Python and JavaScript for software development. Over the past year, I have gained insight into machine learning and data science algorithms. I have used the NumPy, SciPy, Pandas, PyTorch, and Scikit-Learn libraries. I have realized that to take my career to the next level, I need to contribute to open source as a way to gain skills, experience, and a proper understanding of how these libraries work. I would appreciate help on how to get started: setting up my development environment, some important documentation, and beginner issues so I can start contributing to NumPy. Thanks, Ngoran Clare-Joyce. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at alexsamuel.net Sun May 5 10:58:13 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Sun, 5 May 2019 10:58:13 -0400 Subject: [Numpy-discussion] type and kind for custom dtypes Message-ID: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> Hi, I'm working on building a number of related custom dtypes, and I'm not sure how to set the type and kind fields in PyArray_Descr. I tried using type='V' and choosing a single unused kind for all my dtypes; this mostly worked, except I found that coercions would sometimes treat values of two different dtypes as if they were the same. But not always... sometimes my registered cast functions would be called. Through trial and error, I've found that if I choose an unused type code for each dtype, coercion seems to work as I expect it to (no coercion unless I've provided a cast). The kind doesn't seem to matter. I couldn't find any guidance in the docs for how to choose these values. Apologies if I've overlooked something. Could someone please advise me? More widely, is there some global registry of these codes? Is the number of NumPy dtypes limited to the number of (UTF-8-encodable) chars? It seems like common practice to use dtype.kind in user code. If I use one or more for my custom dtypes, is there any mechanism to ensure they do not collide with others'? Are there any other semantics for either field I should take into account?
Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at alexsamuel.net Sun May 5 11:03:23 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Sun, 5 May 2019 11:03:23 -0400 Subject: [Numpy-discussion] type and kind for custom dtypes In-Reply-To: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> References: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> Message-ID: <3B9D65F9-E80A-4C7C-AF56-20B28CD3626E@alexsamuel.net> > On May 5, 2019, at 10:58, Alex Samuel wrote: > > Through trial and error, I've found that if I choose an unused type code for each dtype, coercion seems to work as I expect it to (no coercion unless I've provided a cast). The kind doesn't seem to matter. Apologies, a correction: I mixed up kind and type above. I meant that I've found I need to choose distinct kinds for the coercion rules to treat my dtypes as distinct, rather than the type. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Sun May 5 12:54:16 2019 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sun, 05 May 2019 09:54:16 -0700 Subject: [Numpy-discussion] type and kind for custom dtypes In-Reply-To: <3B9D65F9-E80A-4C7C-AF56-20B28CD3626E@alexsamuel.net> References: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> <3B9D65F9-E80A-4C7C-AF56-20B28CD3626E@alexsamuel.net> Message-ID: Hi Alex, On Sun, 2019-05-05 at 11:03 -0400, Alex Samuel wrote: > > On May 5, 2019, at 10:58, Alex Samuel wrote: > > > > Through trial and error, I've found that if I choose an unused type > > code for each dtype, coercion seems to work as I expect it to (no > > coercion unless I've provided a cast). The kind doesn't seem to > > matter. > > Apologies, a correction: I mixed up kind and type above. I meant > that I've found I need to choose distinct kinds for the coercion > rules to treat my dtypes as distinct, rather than the type. > It is cool to hear about interest in custom dtypes. NumPy has the concept of "same-kind" casting, which may be what bites you here? So you have unsafe casting, but because you pick the same "kind" NumPy thinks it is OK to do it in ufuncs? There may also be issues surrounding 0-D arrays casting differently. I honestly do not think there is any way to ensure you do not collide with other kinds right now, but I will check more closely tomorrow. I am currently not even quite sure how the type code really interacts when we have usertypes, and I am a bit surprised about what you describe. We are now starting the process of trying to improve the situation with creating custom dtypes. There will actually be discussions about this at the end of next week (in Berkeley). But in any case I would be very interested in your specific use-case and needs, and hopefully we can help you also on your end with the current situation. We can discuss on the list, or get in contact privately. Best Regards, Sebastian > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From sebastian at sipsolutions.net Sun May 5 21:01:53 2019 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sun, 05 May 2019 18:01:53 -0700 Subject: [Numpy-discussion] type and kind for custom dtypes In-Reply-To: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> References: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> Message-ID: OK, I looked into the code, so here is a small followup. On Sun, 2019-05-05 at 10:58 -0400, Alex Samuel wrote: > Hi, > > I'm working on building a number of related custom dtypes, and I'm > not sure how to set the type and kind fields in PyArray_Descr. I > tried using type='V' and choosing a single unused kind for all my > dtypes; this mostly worked, except I found that coercions would > sometimes treat values of two different dtypes as if they were the > same. But not always... sometimes my registered cast functions would > be called. The reason is that when the "kind" and "itemsize" and "byte order" are identical, the numpy code decides that data types can be cast (because they are equivalent). So basically, the "kind" must not be equal unless the "type"/dtype only differs in precision or similar. (The relevant code is in multiarraymodule.c in PyArray_EquivTypes) > > Through trial and error, I've found that if I choose an unused type > code for each dtype, coercion seems to work as I expect it to (no > coercion unless I've provided a cast). The kind doesn't seem to > matter. > > I couldn't find any guidance in the docs for how to choose these > values. Apologies if I've overlooked something. Could someone > please advise me? > Frankly, I do not think there is any, because nobody ever created many types (there is only quaternions and rationals publicly available). > More widely, is there some global registry of these codes? Is the > number of NumPy dtypes limited to the number of (UTF-8-encodable) > chars? It seems like common practice to use dtype.kind in user code. > If I use one or more for my custom dtypes, is there any mechanism to > ensure they do not collide with others'? Are there any other > semantics for either field I should take into account? I have checked the code, and no, there appears to be no such thing currently. I suppose (on the C-side) you could find all types, by using their type number and then asking them. dtype.kind is indeed used a lot, mostly to decide that a type is e.g. an integer. My best guess right now is that the rule you saw above is the only thing you have to take into account. Best, Sebastian > > Thanks, > Alex > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From alex at alexsamuel.net Mon May 6 09:46:50 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Mon, 6 May 2019 09:46:50 -0400 Subject: [Numpy-discussion] type and kind for custom dtypes In-Reply-To: References: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> Message-ID: Thanks very much for looking into this! > The reason is that when the "kind" and "itemsize" and "byte order" are > identical, the numpy code decides that data types can be cast (because > they are equivalent). 
So basically, the "kind" must not be equal unless > the "type"/dtype only differs in precision or similar. > > (The relevant code is in multiarraymodule.c in PyArray_EquivTypes) That makes sense, and explains why the cast-less coercion takes place for some type pairs and not for others. > Frankly, I do not think there is any, because nobody ever created many > types (there is only quaternions and rationals publicly available). OK. I'm a bit surprised to hear this, as the API for adding dtypes is actually rather straightforward! For now, then, I will stick with my current scheme of assigning successive kind values to my dtypes, and hope for the best when running with other extension dtypes (which, it seems, may be unlikely). From ralf.gommers at gmail.com Mon May 6 15:57:53 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 6 May 2019 21:57:53 +0200 Subject: [Numpy-discussion] My Introduction and Getting Started with Numpy. In-Reply-To: References: Message-ID: Hi Ngoran, welcome! On Sun, May 5, 2019 at 10:53 AM Joyce Tirnyuy wrote: > Hi All, > > I am Ngoran Clare-Joyce, an Electrical Engineer from Cameroon. I use > Python and JavaScript for software development. Over the past year, I have > gained insight into machine learning and data science algorithms. I have > used the NumPy, SciPy, Pandas, PyTorch, and Scikit-Learn libraries. > > I have realized that to take my career to the next level, I need to > contribute to open source as a way to gain skills, experience, and a proper > understanding of how these libraries work. > Excellent, we can always use help :) > I would appreciate help on how to get started: setting up my > development environment, some important documentation, and beginner issues > so I can start contributing to NumPy. > This is the most recent version of our documentation: https://www.numpy.org/devdocs/index.html. It has a link "NumPy Developer Guide" which walks you through setting up your development environment. There's also "Building and Extending the Documentation" which will help if you want to work on improving the documentation. For some beginner issues, please have a look at the ones labelled "easy" on GitHub: https://github.com/numpy/numpy/issues?q=is%3Aopen+is%3Aissue+label%3A%22difficulty%3A+Easy%22 Cheers, Ralf > > Thanks, > Ngoran Clare-Joyce. > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at alexsamuel.net Mon May 6 16:16:40 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Mon, 6 May 2019 16:16:40 -0400 Subject: [Numpy-discussion] type and kind for custom dtypes In-Reply-To: References: <5BC9B1D8-DEB1-48E4-B8BC-3E9038B293D8@alexsamuel.net> <3B9D65F9-E80A-4C7C-AF56-20B28CD3626E@alexsamuel.net> Message-ID: > We are now starting the process of trying to improve the situation > with creating custom dtypes. > There will actually be discussions about this at the end of next week (in > Berkeley). But in any case I would be very interested in your specific > use-case and needs, and hopefully we can help you also on your end with > the current situation. We can discuss on the list, or get in contact > privately. Unfortunately, I'm in NYC, but I'd be happy to participate however I can, whether it is to describe my use case, or help writing docs, or just chat.
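To make Sebastian's equivalence rule concrete for readers following this thread, builtin dtypes can stand in for user-defined ones. A minimal sketch (output assumes a 64-bit platform; the 'equiv' casting mode is, roughly, the check that PyArray_EquivTypes backs):

    import numpy as np

    # np.int64 and np.intp are distinct type objects, but on a 64-bit
    # platform both have kind 'i', itemsize 8, and native byte order.
    a = np.dtype(np.int64)
    b = np.dtype(np.intp)
    print(a.kind, a.itemsize, b.kind, b.itemsize)  # -> i 8 i 8

    # Same kind + itemsize + byte order => treated as equivalent, so no
    # explicit cast function is ever consulted between these two.
    print(np.can_cast(a, b, casting='equiv'))      # -> True

    # A differing kind (here 'f') breaks the equivalence.
    print(np.can_cast(a, np.dtype(np.float64), casting='equiv'))  # -> False

This is why two custom dtypes that share a kind, itemsize, and byte order can be coerced into one another without their registered cast functions being called.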
Here's some info about my project: Ora (https://github.com/alexhsamuel/ora/) is a new date/time implementation. The intention is to provide types with a ticks-since-epoch representation (rather than YMD, HMS) with full functionality for both standalone scalar (i.e. no NumPy) and ndarray use cases. Essentially, the convenience of datetime, with the performance of datetime64, and much of dateutil rolled in. I've also experimented with a number of other ideas, including variable width/precision/range types. As a result I provide various time, date, and time-of-day types, for instance 32-, 64-, and 128-bit time types, and each has a corresponding dtype and complete NumPy support. It's possible to adjust this set of types if you are willing to recompile (the code is C++). That's why I'm interested in how dtypes are managed globally. Ora has a lot of functionality that works well, and performance is good, though it's so far a solo project and there are still lots of rough edges / missing features / bugs. I'd love to get feedback from people who work with dates and times a lot, either scalar or vectorized. My wish list for NumPy's dtype support is: better docs on writing dtypes (though they are not bad); the ability to use a scalar type that doesn't derive from a NumPy base type, so that the scalar type can be used without importing NumPy; and clear management for dtypes. Please let me know how best I could participate or help. Regards, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From oishikapradhan at gmail.com Mon May 6 16:51:04 2019 From: oishikapradhan at gmail.com (Oishika Pradhan) Date: Tue, 7 May 2019 02:21:04 +0530 Subject: [Numpy-discussion] GSoD'19 project discussion Message-ID: Hi, I am Oishika Pradhan, a research student at IIIT Hyderabad, India. I am interested in machine learning and neural networks and hence have used NumPy very regularly in my projects and assignments. This is why I'm interested in becoming a contributor to this organization. I have some prior experience in technical documentation. I am interested in working on the following project: improving the structure and content of https://numpy.org/ Please provide pointers on how to get started with this project. Thanks, Oishika Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Mon May 6 17:56:28 2019 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 6 May 2019 17:56:28 -0400 Subject: [Numpy-discussion] GSoD'19 project discussion In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From iec2016027 at iiita.ac.in Mon May 6 22:22:48 2019 From: iec2016027 at iiita.ac.in (Bhanu Bhandari) Date: Tue, 7 May 2019 07:52:48 +0530 Subject: [Numpy-discussion] Contributing: NumPy Documentation Message-ID: Hello, I'm Bhanu, a third-year student of Electronics and Computers Engineering. I'd like to get started with contributing to the NumPy documentation. I'm a writer who is actively doing machine learning research as well, and I have used NumPy extensively for my coding assignments and research work. Thanks, Bhanu Bhandari -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matti.picus at gmail.com Mon May 6 22:30:14 2019 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 6 May 2019 22:30:14 -0400 Subject: [Numpy-discussion] Contributing: NumPy Documentation In-Reply-To: References: Message-ID: <52caf170-a4e1-5ef8-1965-ee5a33e4582c@gmail.com> On 6/5/19 10:22 pm, Bhanu Bhandari wrote: > Hello, > I'm Bhanu, a third-year student of Electronics and Computers > Engineering. I'd like to get started with contributing to the NumPy > documentation. I'm a writer who is actively doing machine learning research > as well, and I have used NumPy extensively for my coding assignments and > research work. > > Thanks, > Bhanu Bhandari > Hi and welcome. We use GitHub to manage our code, which is found at https://github.com/numpy/numpy. You already found our documentation, which includes the guide to building and extending the documentation: http://www.numpy.org/devdocs/docs/index.html. For a more general description of getting started with git and the developer workflow, see http://www.numpy.org/devdocs/dev/index.html. Specifically for the GSoD '19 project, we are looking to engage with experienced technical communicators on ways to improve the documentation website: layout, content design, usability, and more. Feel free to reach out to me directly to continue the conversation on what that may entail. Matti From tyler.je.reddy at gmail.com Tue May 7 15:00:51 2019 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 7 May 2019 12:00:51 -0700 Subject: [Numpy-discussion] NumPy Community Meeting May 8/ 2019 Message-ID: Hi, There will be a NumPy community meeting at 12 pm Pacific Time on May 8, 2019. Anyone is welcome to join and edit the work-in-progress meeting document: https://hackmd.io/M-ef_Fu5QOOitACnyoO0kQ?view Best wishes, Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 7 17:51:24 2019 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 7 May 2019 15:51:24 -0600 Subject: [Numpy-discussion] Preparation for 1.17.x branch Message-ID: Hi All, It's time to look forward to branching 1.17.x and the first rc. Please take note of any issues that you think need fixing, or PRs that should be merged, before the first release. It would also help if some folks would review the release notes for completeness. I figure the branch should take place in a week or so. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From shoyer at gmail.com Tue May 7 18:00:16 2019 From: shoyer at gmail.com (Stephan Hoyer) Date: Tue, 7 May 2019 15:00:16 -0700 Subject: [Numpy-discussion] Preparation for 1.17.x branch In-Reply-To: References: Message-ID: I'd love to get the revisions of NEP-18 finalized. I guess this means we'll need to reach a decision on https://github.com/numpy/numpy/pull/13305 Charles -- Will you be at the developer meeting in Berkeley this week? This could be a good time to discuss things in person. On Tue, May 7, 2019 at 2:51 PM Charles R Harris wrote: > Hi All, > > It's time to look forward to branching 1.17.x and the first rc. Please > take note of any issues that you think need fixing, or PRs that should be merged, > before the first release. It would also help if some folks would review the > release notes for completeness. I figure the branch should take place in a > week or so.
> > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 7 19:02:48 2019 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 7 May 2019 17:02:48 -0600 Subject: [Numpy-discussion] Preparation for 1.17.x branch In-Reply-To: References: Message-ID: On Tue, May 7, 2019 at 4:00 PM Stephan Hoyer wrote: > I'd love to get the revisions of NEP-18 finalized. I guess this means > we'll need to reach a decision on > https://github.com/numpy/numpy/pull/13305 > > Charles -- Will you be at the developer meeting in Berkeley this week? > This could be a good time to discuss things in person. > > Yes, I'll be there. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed May 8 05:26:28 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 8 May 2019 11:26:28 +0200 Subject: [Numpy-discussion] Preparation for 1.17.x branch In-Reply-To: References: Message-ID: On Wed, May 8, 2019 at 1:03 AM Charles R Harris wrote: > > > On Tue, May 7, 2019 at 4:00 PM Stephan Hoyer wrote: > >> I'd love to get the revisions of NEP-18 finalized. I guess this means >> we'll need to reach a decision on >> https://github.com/numpy/numpy/pull/13305 >> > Another thing that would be nice to do is have an early beta or release candidate with __array_function__ enabled that stays up for a while (say 3 weeks), and then encourage people to test their code a bit more than we usually do (if it's a pip install --pre away, that's easier to do). My feeling is that at the moment __array_function__ has gotten almost zero testing, because it's tricky to enable. Ralf >> >> Charles -- Will you be at the developer meeting in Berkeley this week? >> This could be a good time to discuss things in person. >> >> > Yes, I'll be there. > > > > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed May 8 15:10:41 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 8 May 2019 12:10:41 -0700 Subject: [Numpy-discussion] Style guide for numpy code? Message-ID: Hey all, Do any of you know of a style guide for computational / numpy code? I don't mean code that will go into numpy itself, but rather, user code that uses numpy (and scipy, and...) I know about (am a proponent -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 9 11:25:30 2019 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 9 May 2019 11:25:30 -0400 Subject: [Numpy-discussion] Style guide for numpy code? In-Reply-To: References: Message-ID: Oops! Somehow that got sent before I was done. (Like my use of the passive voice there?) Here is a complete message: Do any of you know of a style guide for computational / numpy code?
I don't mean code that will go into numpy itself, but rather, user code that uses numpy (and scipy, and...) I know about (am a proponent of) PEP8, but it doesn't address the unique needs of scientific programming. This is mostly about variable names. In scientific code, we often want: - variable names that match the math notation -- so single-character names, maybe upper or lower case to mean different things (in ocean wave mechanics, often 'h' is the water depth, and 'H' is the wave height) - to distinguish between scalar, vector, and matrix values -- often UpperCase means an array or matrix, for instance. But despite (or because of) these unique needs, a style guide would be really helpful. Anyone have one? Or even any notes on what you do yourself? Thanks, -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From jh at physics.ucf.edu Thu May 9 14:12:28 2019 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 9 May 2019 14:12:28 -0400 Subject: [Numpy-discussion] Style guide for numpy code? In-Reply-To: References: Message-ID: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu> I have a handout for my PHZ 3150 Introduction to Numerical Computing course that includes some rules:

(a) All integer-valued floating-point numbers should have decimal points after them. For example, if you have a time of 10 sec, do not use

y = np.e**10 # sec

use

y = np.e**10. # sec

instead. For example, an item count is always an integer, but a distance is always a float. A decimal in the range (-1, 1) must always have a zero before the decimal point, for readability:

x = 0.23 # Right!
x = .23 # WRONG

The purpose of this one is simply to build the decimal-point habit. In Python it's less of an issue now, but sometimes code is translated, and integer division is still out there. For that reason, in other languages, it may be desirable to use a decimal point even for counts, unless integer division is wanted. Make a comment whenever you intend integer division and the language uses the same symbol (/) for both kinds of division.

(b) Use spaces around binary operations and relations (=<>+-*/). Put a space after ','. Do not put space around '=' in keyword arguments, or around ' ** '.

(c) Do not put plt.show() in your homework file! You may put it in a comment if you like, but it is not necessary. Just save the plot. If you say

plt.ion()

plots will automatically show while you are working.

(d) Use:

import matplotlib.pyplot as plt

NOT:

import matplotlib.pylab as plt

(e) Keep lines to 80 characters, max, except in rare cases that are well justified, such as very long strings. If you make comments on the same line as code, keep them short or break them over more than one line:

code = code2 # set code equal to code2

# Longer comment requiring much more space because
# I'm explaining something complicated.
code = code2

code = code2 # Another way to do a very long comment,
             # like this one, which runs over more than
             # one line.

(f) Keep blocks of similar lines internally lined up on decimals, comments, and = signs. This makes them easier to read and verify. There will be some cases when this is impractical. Use your judgment (you're not a computer, you control the computer!):

x = 1.                       # this is a comment
y = 378.2345                 # here's another
fred = chuck                 # note how the decimals, = signs, and
                             # comments line up nicely...
alacazamshmazooboloid = 2721 # but not always!

(g) Put the units and sources of all values in comments:

t_planet = 523.   # K, Smith and Jones (2016, ApJ 234, 22)

(h) I don't mean to start a religious war, but I emphasize the alignment of similar adjacent code lines to make differences pop out and reduce the likelihood of bugs. For example, it is much easier to verify the correctness of:

a     = 3 * x + 3 * 8. * short        - 5. * np.exp(np.pi * omega * t)
a_alt = 3 * x + 3 * 8. * anotshortvar - 5. * np.exp(np.pi * omega * t)

than:

a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t)
a_altvarname = 3 * x + 3*9*anotshortvar - 5. * np.exp(np.pi * omega * i)

(i) Assign values to meaningful variables, and use them in formulae and functions:

ny = 512
nx = 512
image = np.zeros((ny, nx))
expr1 = ny * 3
expr2 = nx * 4

Otherwise, later on when you upgrade to 2560x1440 arrays, you won't know which of the 512s are in the x direction and which are in the y direction. Or, the student you (now a senior researcher) assign to code the upgrade won't! Also, it reduces bugs arising from the order of arguments to functions if the args have meaningful names. This is not to say that you should assign all numbers to variables. This is fine:

circ = 2 * np.pi * r

(j) All functions assigned for grading must have full docstrings in numpy's format, as well as internal comments. Utility functions not requested in the assignment and that the user will never see can have reduced docstrings if the functions are simple and obvious, but at least give the one-line summary.

(k) If you modify an existing function, you must either make a Git entry or, if it is not under revision control, include a Revision History section in your docstring and record your name, the date, the version number, your email, and the nature of the change you made.

(l) Choose variable names that are meaningful and consistent in style. Document your style either at the head of a module or in a separate text file for the project. For example, if you use CamelCaps with initial capital, say that. If you reserve initial capitals for classes, say that. If you use underscores for variable subscripts and camelCaps for the base variables, say that. If you accept some other style and build on that, say that. There are too many good reasons to have such styles for only one to be the community standard. If certain kinds of values should get the same variable or base variable, such as fundamental constants or things like amplitudes, say that.

(m) It's best if variables that will appear in formulae are short, so more terms can fit in one 80-character line.

Overall, having and following a style makes code easier to read. And, as an added bonus, if you take care to be consistent, you will write more slowly, view your code more times, and catch more bugs as you write them. Thus, for code of any significant size, writing pedantically commented and aligned code is almost always faster than blast coding, if you include debugging time.

Did you catch both bugs in item (h)?

--jh--

On 5/9/19 11:25 AM, Chris Barker - NOAA Federal wrote: > Do any of you know of a style guide for computational / numpy code? > > I don't mean code that will go into numpy itself, but rather, user > code that uses numpy (and scipy, and...) > > I know about (am a proponent of) PEP8, but it doesn't address the > unique needs of scientific programming.
> > This is mostly about variable names. In scientific code, we often want: > > - variable names that match the math notation -- so single-character > names, maybe upper or lower case to mean different things (in ocean > wave mechanics, often 'h' is the water depth, and 'H' is the wave height) > > - to distinguish between scalar, vector, and matrix values -- often > UpperCase means an array or matrix, for instance. > > But despite (or because of) these unique needs, a style guide would be > really helpful. > > Anyone have one? Or even any notes on what you do yourself? > > Thanks, > -CHB > > > > >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Thu May 9 14:40:41 2019 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Thu, 9 May 2019 11:40:41 -0700 Subject: [Numpy-discussion] ANN: SciPy 1.3.0rc2 -- please test Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi all, On behalf of the SciPy development team I'm pleased to announce the release candidate SciPy 1.3.0rc2. Please help us test this pre-release. The primary motivation for the second release candidate is to update the wheels to use a more recent OpenBLAS with fixes for the SkylakeX AVX kernel problems. Sources and binary wheels can be found at: https://pypi.org/project/scipy/ and at: https://github.com/scipy/scipy/releases/tag/v1.3.0rc2 One of a few ways to install the release candidate with pip: pip install scipy==1.3.0rc2 ========================== SciPy 1.3.0 Release Notes ========================== Note: SciPy 1.3.0 is not released yet! SciPy 1.3.0 is the culmination of 5 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been some API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with ``python -Wd`` and check for ``DeprecationWarning`` s). Our development attention will now shift to bug-fix releases on the 1.3.x branch, and on adding new features on the master branch. This release requires Python 3.5+ and NumPy 1.13.3 or greater. For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required. Highlights of this release - -------------------------- - - Three new ``stats`` functions, a rewrite of ``pearsonr``, and an exact computation of the Kolmogorov-Smirnov two-sample test - - A new Cython API for bounded scalar-function root-finders in `scipy.optimize` - - Substantial ``CSR`` and ``CSC`` sparse matrix indexing performance improvements - - Added support for interpolation of rotations with continuous angular rate and acceleration in ``RotationSpline`` New features ============ `scipy.interpolate` improvements - -------------------------------- A new class ``CubicHermiteSpline`` is introduced. It is a piecewise-cubic interpolator which matches observed values and first derivatives. Existing cubic interpolators ``CubicSpline``, ``PchipInterpolator`` and ``Akima1DInterpolator`` were made subclasses of ``CubicHermiteSpline``.
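A minimal usage sketch of the new class (the data here are made up for illustration):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

x = np.linspace(0.0, 2.0 * np.pi, 10)
y = np.sin(x)
dydx = np.cos(x)  # first derivatives at the same points

# The spline matches both the observed values y and the derivatives dydx.
spline = CubicHermiteSpline(x, y, dydx)
print(spline(1.0), np.sin(1.0))  # the interpolant tracks sin closely
```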
`scipy.io` improvements - ----------------------- For the Attribute-Relation File Format (ARFF), `scipy.io.arff.loadarff` now supports relational attributes. `scipy.io.mmread` can now parse Matrix Market format files with empty lines. `scipy.linalg` improvements - --------------------------- Added wrappers for ``?syconv`` routines, which convert a symmetric matrix given by a triangular matrix factorization into two matrices and vice versa. `scipy.linalg.clarkson_woodruff_transform` now uses an algorithm that leverages sparsity. This may provide a 60-90 percent speedup for dense input matrices. Truly sparse input matrices should also benefit from the improved sketch algorithm, which now correctly runs in ``O(nnz(A))`` time. Added new functions to calculate symmetric Fiedler matrices and Fiedler companion matrices, named `scipy.linalg.fiedler` and `scipy.linalg.fiedler_companion`, respectively. These may be used for root finding. `scipy.ndimage` improvements - ---------------------------- Gaussian filter performance may improve by an order of magnitude in some cases, thanks to removal of a dependence on ``np.polynomial``. This may impact `scipy.ndimage.gaussian_filter`, for example. `scipy.optimize` improvements - ----------------------------- The `scipy.optimize.brute` minimizer gained a new keyword, ``workers``, which can be used to parallelize computation. A Cython API for bounded scalar-function root-finders in `scipy.optimize` is available in a new module, `scipy.optimize.cython_optimize`, via ``cimport``. This API may be used with ``nogil`` and ``prange`` to loop over an array of function arguments to solve for an array of roots more quickly than with pure Python. ``'interior-point'`` is now the default method for ``linprog``, and ``'interior-point'`` now uses SuiteSparse for sparse problems when the required scikits (scikit-umfpack and scikit-sparse) are available. On benchmark problems (gh-10026), execution time reductions by factors of 2-3 were typical. Also, a new ``method='revised simplex'`` has been added. It is not as fast or robust as ``method='interior-point'``, but it is a faster, more robust, and equally accurate substitute for the legacy ``method='simplex'``. ``differential_evolution`` can now use a ``Bounds`` class to specify the bounds for the optimizing argument of a function. `scipy.optimize.dual_annealing` gained performance improvements related to vectorisation of some internal code. `scipy.signal` improvements - --------------------------- Two additional methods of discretization are now supported by `scipy.signal.cont2discrete`: ``impulse`` and ``foh``. `scipy.signal.firls` now uses faster solvers. `scipy.signal.detrend` now has a lower physical memory footprint in some cases, which may be leveraged using the new ``overwrite_data`` keyword argument. The `scipy.signal.firwin` ``pass_zero`` argument now accepts new string arguments that allow specification of the desired filter type: ``'bandpass'``, ``'lowpass'``, ``'highpass'``, and ``'bandstop'``. `scipy.signal.sosfilt` may have improved performance due to lower retention of the global interpreter lock (GIL) in its algorithm. `scipy.sparse` improvements - --------------------------- A new keyword was added to ``csgraph.dijkstra`` that allows users to query the shortest path to ANY of the passed-in indices, as opposed to the shortest path to EVERY passed index.
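A minimal sketch of the new ``dijkstra`` behaviour described above (the note does not name the keyword; it is assumed here to be ``min_only``, as in the 1.3 documentation):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

graph = csr_matrix(np.array([[0, 1, 2, 0],
                             [0, 0, 0, 1],
                             [0, 0, 0, 3],
                             [0, 0, 0, 0]]))

# One row of distances per index (the existing behaviour):
d_every = dijkstra(graph, indices=[0, 1])

# A single row: the distance to each node from the nearest of the
# given indices (the new behaviour):
d_any = dijkstra(graph, indices=[0, 1], min_only=True)
```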
`scipy.sparse.linalg.lsmr` performance has been improved by roughly 10 percent on large problems. Improved performance and reduced physical memory footprint of the algorithm used by `scipy.sparse.linalg.lobpcg`. ``CSR`` and ``CSC`` sparse matrix fancy indexing performance has been improved substantially. `scipy.spatial` improvements - ---------------------------- `scipy.spatial.ConvexHull` now has a ``good`` attribute that can be used alongside the ``QGn`` Qhull options to determine which external facets of a convex hull are visible from an external query point. `scipy.spatial.cKDTree.query_ball_point` has been modernized to use some newer Cython features, including GIL handling and exception translation. An issue with ``return_sorted=True`` and scalar queries was fixed, and a new mode named ``return_length`` was added. ``return_length`` only computes the length of the returned indices list instead of allocating the array every time. `scipy.spatial.transform.RotationSpline` has been added to enable interpolation of rotations with continuous angular rates and acceleration. `scipy.stats` improvements - -------------------------- Added a new function to compute the Epps-Singleton test statistic, `scipy.stats.epps_singleton_2samp`, which can be applied to continuous and discrete distributions. New functions `scipy.stats.median_absolute_deviation` and `scipy.stats.gstd` (geometric standard deviation) were added. The `scipy.stats.combine_pvalues` method now supports the ``pearson``, ``tippett`` and ``mudholkar_george`` p-value combination methods. The `scipy.stats.ortho_group` and `scipy.stats.special_ortho_group` ``rvs(dim)`` functions' algorithms were updated from an ``O(dim^4)`` implementation to an ``O(dim^3)`` one, which gives large speed improvements for ``dim>100``. `scipy.stats.pearsonr` was rewritten to use a more robust algorithm, provide meaningful exceptions and warnings on potentially pathological input, and fix at least five separate reported issues in the original implementation. Improved the precision of ``hypergeom.logcdf`` and ``hypergeom.logsf``. Added an exact computation for the Kolmogorov-Smirnov (KS) two-sample test, replacing the previously approximate computation for the two-sided test `stats.ks_2samp`. Also added a one-sided, two-sample KS test, and a keyword ``alternative`` to `stats.ks_2samp`. Backwards incompatible changes ============================== `scipy.interpolate` changes - --------------------------- Functions from ``scipy.interpolate`` (``spleval``, ``spline``, ``splmake``, and ``spltopp``) and functions from ``scipy.misc`` (``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow``, ``toimage``) have been removed. The former set has been deprecated since v0.19.0 and the latter has been deprecated since v1.0.0. Similarly, aliases from ``scipy.misc`` (``comb``, ``factorial``, ``factorial2``, ``factorialk``, ``logsumexp``, ``pade``, ``info``, ``source``, ``who``) which have been deprecated since v1.0.0 are removed. `SciPy documentation for v1.1.0 `__ can be used to track the new import locations for the relocated functions. `scipy.linalg` changes - ---------------------- For ``pinv``, ``pinv2``, and ``pinvh``, the default cutoff values are changed for consistency (see the docs for the actual values). `scipy.stats` changes - --------------------- Previously, ``ks_2samp(data1, data2)`` would run a two-sided test and return the approximated p-value.
The new signature, ``ks_2samp(data1, data2, alternative="two-sided", method="auto")``, still runs the two-sided test by default but returns the exact p-value for small samples and the approximated value for large samples. ``method="asymp"`` is equivalent to the old version, but ``auto`` is the better choice (a short usage sketch follows the author list below). Other changes ============= Our tutorial has been expanded with a new section on global optimizers. There has been a rework of the ``stats.distributions`` tutorials. `scipy.optimize` now correctly sets the convergence flag of the result to ``CONVERR``, a convergence error, for bounded scalar-function root-finders if the maximum number of iterations has been exceeded, ``disp`` is false, and ``full_output`` is true. `scipy.optimize.curve_fit` no longer fails if ``xdata`` and ``ydata`` dtypes differ; they are both now automatically cast to ``float64``. `scipy.ndimage` functions including ``binary_erosion``, ``binary_closing``, and ``binary_dilation`` now require an integer value for the number of iterations, which alleviates a number of reported issues. Fixed the normal approximation in the case ``zero_method == "pratt"`` in `scipy.stats.wilcoxon`. Fixes for incorrect probabilities, broadcasting issues and thread-safety related to stats distributions setting member variables inside ``_argcheck()``. `scipy.optimize.newton` now correctly raises a ``RuntimeError``, when default arguments are used, in the case that a derivative of value zero is obtained, which is a special case of failing to converge. A draft toolchain roadmap is now available, laying out a compatibility plan including Python versions, C standards, and NumPy versions. Authors ======= * ananyashreyjain + * ApamNapat + * Scott Calabrese Barton + * Christoph Baumgarten * Peter Bell + * Jacob Blomgren + * Doctor Bob + * Mana Borwornpadungkitti + * Matthew Brett * Evgeni Burovski * CJ Carey * Vega Theil Carstensen + * Robert Cimrman * Forrest Collman + * Pietro Cottone + * David + * Idan David + * Christoph Deil * Dieter Werthmüller * Conner DiPaolo + * Dowon * Michael Dunphy + * Peter Andreas Entschev + * Gökçen Eraslan + * Johann Faouzi + * Yu Feng * Piotr Figiel + * Matthew H Flamm * Franz Forstmayr + * Christoph Gohlke * Richard Janis Goldschmidt + * Ralf Gommers * Lars Grueter * Sylvain Gubian * Matt Haberland * Yaroslav Halchenko * Charles Harris * Lindsey Hiltner * JakobStruye + * He Jia + * Jwink3101 + * Greg Kiar + * Julius Bier Kirkegaard * John Kirkham + * Thomas Kluyver * Vladimir Korolev + * Joseph Kuo + * Michael Lamparski + * Eric Larson * Denis Laxalde * Katrin Leinweber * Jesse Livezey * ludcila + * Dhruv Madeka + * Magnus + * Nikolay Mayorov * Mark Mikofski * Jarrod Millman * Markus Mohrhard + * Eric Moore * Andrew Nelson * Aki Nishimura + * OGordon100 + * Petar Mlinarić + * Stefan Peterson * Matti Picus + * Ilhan Polat * Aaron Pries + * Matteo Ravasi + * Tyler Reddy * Ashton Reimer + * Joscha Reimer * rfezzani + * Riadh + * Lucas Roberts * Heshy Roskes + * Mirko Scholz + * Taylor D. Scott + * Srikrishna Sekhar + * Kevin Sheppard + * Sourav Singh * skjerns + * Kai Striega * SyedSaifAliAlvi + * Gopi Manohar T + * Albert Thomas + * Timon + * Paul van Mulbregt * Jacob Vanderplas * Daniel Vargas + * Pauli Virtanen * VNMabus + * Stefan van der Walt * Warren Weckesser * Josh Wilson * Nate Yoder + * Roman Yurchak A total of 97 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete.
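For reference, a minimal sketch of the updated ``ks_2samp`` call described under the ``scipy.stats`` changes above (synthetic data; the ``alternative`` keyword is the one added in this release):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = rng.normal(size=30)
b = rng.normal(loc=0.5, size=35)

# Default: two-sided test; for small samples like these the p-value
# is now computed exactly rather than approximated.
stat, p = stats.ks_2samp(a, b)

# New one-sided variant via the `alternative` keyword:
stat_less, p_less = stats.ks_2samp(a, b, alternative='less')
```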
Issues closed for 1.3.0 - ----------------------- * `#1320 `__: scipy.stats.distribution: problem with self.a, self.b if they... * `#2002 `__: members set in scipy.stats.distributions.##._argcheck (Trac #1477) * `#2823 `__: distribution methods add tmp * `#3220 `__: Scipy.opimize.fmin_powell direc argument syntax unclear * `#3728 `__: scipy.stats.pearsonr: possible bug with zero variance input * `#6805 `__: error-in-scipy-wilcoxon-signed-rank-test-for-equal-series * `#6873 `__: 'stats.boxcox' return all same values * `#7117 `__: Warn users when using float32 input data to curve_fit and friends * `#7632 `__: it's not possible to tell the \`optimize.least_squares\` solver... * `#7730 `__: stats.pearsonr: Potential division by zero for dataset of length... * `#7933 `__: stats.truncnorm fails when providing values outside truncation... * `#8033 `__: Add standard filter types to firwin to set pass_zero intuitively... * `#8600 `__: lfilter.c.src zfill has erroneous header * `#8692 `__: Non-negative values of \`stats.hypergeom.logcdf\` * `#8734 `__: Enable pip build isolation * `#8861 `__: scipy.linalg.pinv gives wrong result while scipy.linalg.pinv2... * `#8915 `__: need to fix macOS build against older numpy versions * `#8980 `__: scipy.stats.pearsonr overflows with high values of x and y * `#9226 `__: BUG: signal: SystemError: ... * `#9254 `__: BUG: root finders brentq, etc, flag says "converged" even if... * `#9308 `__: Test failure - test_initial_constraints_as_canonical * `#9353 `__: scipy.stats.pearsonr returns r=1 if r_num/r_den = inf * `#9359 `__: Planck distribution is a geometric distribution * `#9381 `__: linregress should warn user in 2x2 array case * `#9406 `__: BUG: stats: In pearsonr, when r is nan, the p-value must also... * `#9437 `__: Cannot create sparse matrix from size_t indexes * `#9518 `__: Relational attributes in loadarff * `#9551 `__: BUG: scipy.optimize.newton says the root of x^2+1 is zero. * `#9564 `__: rv_sample accepts invalid input in scipy.stats * `#9565 `__: improper handling of multidimensional input in stats.rv_sample * `#9581 `__: Least-squares minimization fails silently when x and y data are... * `#9587 `__: Outdated value for scipy.constants.au * `#9611 `__: Overflow error with new way of p-value calculation in kendall... * `#9645 `__: \`scipy.stats.mode\` crashes with variable length arrays (\`dtype=object\`) * `#9734 `__: PendingDeprecationWarning for np.matrix with pytest * `#9786 `__: stats.ks_2samp() misleading for small data sets. * `#9790 `__: Excessive memory usage on detrend * `#9801 `__: dual_annealing does not set the success attribute in OptimizeResult * `#9833 `__: IntegrationWarning from mielke.stats() during build of html doc. * `#9835 `__: scipy.signal.firls seems to be inefficient versus MATLAB firls * `#9864 `__: Curve_fit does not check for empty input data if called with... * `#9869 `__: scipy.ndimage.label: Minor documentation issue * `#9882 `__: format at the wrong paranthesis in scipy.spatial.transform * `#9889 `__: scipy.signal.find_peaks minor documentation issue * `#9890 `__: Minkowski p-norm Issues in cKDTree For Values Other Than 2 Or... * `#9896 `__: scipy.stats._argcheck sets (not just checks) values * `#9905 `__: Memory error in ndimage.binary_erosion * `#9909 `__: binary_dilation/erosion/closing crashes when iterations is float * `#9919 `__: BUG: \`coo_matrix\` does not validate the \`shape\` argument. 
* `#9982 `__: lsq_linear hangs/infinite loop with 'trf' method * `#10003 `__: exponnorm.pdf returns NAN for small K * `#10011 `__: Incorrect check for invalid rotation plane in scipy.ndimage.rotate * `#10024 `__: Fails to build from git * `#10048 `__: DOC: scipy.optimize.root_scalar * `#10068 `__: DOC: scipy.interpolate.splev * `#10074 `__: BUG: \`expm\` calculates the wrong coefficients in the backward... Pull requests for 1.3.0 - ----------------------- * `#7827 `__: ENH: sparse: overhaul of sparse matrix indexing * `#8431 `__: ENH: Cython optimize zeros api * `#8743 `__: DOC: Updated linalg.pinv, .pinv2, .pinvh docstrings * `#8744 `__: DOC: added examples to remez docstring * `#9227 `__: DOC: update description of "direc" parameter of "fmin_powell" * `#9263 `__: ENH: optimize: added "revised simplex" for scipy.optimize.linprog * `#9325 `__: DEP: Remove deprecated functions for 1.3.0 * `#9330 `__: Add note on push and pull affine transformations * `#9423 `__: DOC: Clearly state how 2x2 input arrays are handled in stats.linregress * `#9428 `__: ENH: parallelised brute * `#9438 `__: BUG: Initialize coo matrix with size_t indexes * `#9455 `__: MAINT: Speed up get_(lapack,blas)_func * `#9465 `__: MAINT: Clean up optimize.zeros C solvers interfaces/code. * `#9477 `__: DOC: linalg: fix lstsq docstring on residues shape * `#9478 `__: DOC: Add docstring examples for rosen functions * `#9479 `__: DOC: Add docstring example for ai_zeros and bi_zeros * `#9480 `__: MAINT: linalg: lstsq clean up * `#9489 `__: DOC: roadmap update for changes over the last year. * `#9492 `__: MAINT: stats: Improve implementation of chi2 ppf method. * `#9497 `__: DOC: Improve docstrings sparse.linalg.isolve * `#9499 `__: DOC: Replace "Scipy" with "SciPy" in the .rst doc files for consistency. * `#9500 `__: DOC: Document the toolchain and its roadmap. * `#9505 `__: DOC: specify which definition of skewness is used * `#9511 `__: DEP: interpolate: remove deprecated interpolate_wrapper * `#9517 `__: BUG: improve error handling in stats.iqr * `#9522 `__: ENH: Add Fiedler and fiedler companion to special matrices * `#9526 `__: TST: relax precision requirements in signal.correlate tests * `#9529 `__: DOC: fix missing random seed in optimize.newton example * `#9533 `__: MAINT: Use list comprehension when possible * `#9537 `__: DOC: add a "big picture" roadmap * `#9538 `__: DOC: Replace "Numpy" with "NumPy" in .py, .rst and .txt doc files... * `#9539 `__: ENH: add two-sample test (Epps-Singleton) to scipy.stats * `#9559 `__: DOC: add section on global optimizers to tutorial * `#9561 `__: ENH: remove noprefix.h, change code appropriately * `#9562 `__: MAINT: stats: Rewrite pearsonr. * `#9563 `__: BUG: Minor bug fix Callback in linprog(method='simplex') * `#9568 `__: MAINT: raise runtime error for newton with zeroder if disp true,... * `#9570 `__: Correct docstring in show_options in optimize. Fixes #9407 * `#9573 `__: BUG fixes range of pk variable pre-check * `#9577 `__: TST: fix minor issue in a signal.stft test. * `#9580 `__: Included blank line before list - Fixes #8658 * `#9582 `__: MAINT: drop Python 2.7 and 3.4 * `#9588 `__: MAINT: update \`constants.astronomical_unit\` to new 2012 value. 
* `#9592 `__: TST: Add 32-bit testing to CI * `#9593 `__: DOC: Replace cumulative density with cumulative distribution * `#9596 `__: TST: remove VC 9.0 from Azure CI * `#9599 `__: Hyperlink DOI to preferred resolver * `#9601 `__: DEV: try to limit GC memory use on PyPy * `#9603 `__: MAINT: improve logcdf and logsf of hypergeometric distribution * `#9605 `__: Reference to pylops in LinearOperator notes and ARPACK example * `#9617 `__: TST: reduce max memory usage for sparse.linalg.lgmres test * `#9619 `__: FIX: Sparse matrix addition/subtraction eliminates explicit zeros * `#9621 `__: bugfix in rv_sample in scipy.stats * `#9622 `__: MAINT: Raise error in directed_hausdorff distance * `#9623 `__: DOC: Build docs with warnings as errors * `#9625 `__: Return the number of calls to 'hessp' (not just 'hess') in trust... * `#9627 `__: BUG: ignore empty lines in mmio * `#9637 `__: Function to calculate the MAD of an array * `#9646 `__: BUG: stats: mode for objects w/ndim > 1 * `#9648 `__: Add \`stats.contingency\` to refguide-check * `#9650 `__: ENH: many lobpcg() algorithm improvements * `#9652 `__: Move misc.doccer to _lib.doccer * `#9660 `__: ENH: add pearson, tippett, and mudholkar-george to combine_pvalues * `#9661 `__: BUG: Fix ksone right-hand endpoint, documentation and tests. * `#9664 `__: ENH: adding multi-target dijsktra performance enhancement * `#9670 `__: MAINT: link planck and geometric distribution in scipy.stats * `#9676 `__: ENH: optimize: change default linprog method to interior-point * `#9685 `__: Added reference to ndimage.filters.median_filter * `#9705 `__: Fix coefficients in expm helper function * `#9711 `__: Release the GIL during sosfilt processing for simple types * `#9721 `__: ENH: Convexhull visiblefacets * `#9723 `__: BLD: Modify rv_generic._construct_doc to print out failing distribution... * `#9726 `__: BUG: Fix small issues with \`signal.lfilter' * `#9729 `__: BUG: Typecheck iterations for binary image operations * `#9730 `__: ENH: reduce sizeof(NI_WatershedElement) by 20% * `#9731 `__: ENH: remove suspicious sequence of type castings * `#9739 `__: BUG: qr_updates fails if u is exactly in span Q * `#9749 `__: BUG: MapWrapper.__exit__ should terminate * `#9753 `__: ENH: Added exact computation for Kolmogorov-Smirnov two-sample... 
* `#9755 `__: DOC: Added example for signal.impulse, copied from impulse2 * `#9756 `__: DOC: Added docstring example for iirdesign * `#9757 `__: DOC: Added examples for step functions * `#9759 `__: ENH: Allow pass_zero to act like btype * `#9760 `__: DOC: Added docstring for lp2bs * `#9761 `__: DOC: Added docstring and example for lp2bp * `#9764 `__: BUG: Catch internal warnings for matrix * `#9766 `__: ENH: Speed up _gaussian_kernel1d by removing dependence on np.polynomial * `#9769 `__: BUG: Fix Cubic Spline Read Only issues * `#9773 `__: DOC: Several docstrings * `#9774 `__: TST: bump Azure CI OpenBLAS version to match wheels * `#9775 `__: DOC: Improve clarity of cov_x documentation for scipy.optimize.leastsq * `#9779 `__: ENH: dual_annealing vectorise visit_fn * `#9788 `__: TST, BUG: f2py-related issues with NumPy < 1.14.0 * `#9791 `__: BUG: fix amax constraint not enforced in scalar_search_wolfe2 * `#9792 `__: ENH: Allow inplace copying in place in "detrend" function * `#9795 `__: DOC: Fix/update docstring for dstn and dst * `#9796 `__: MAINT: Allow None tolerances in least_squares * `#9798 `__: BUG: fixes abort trap 6 error in scipy issue 9785 in unit tests * `#9807 `__: MAINT: improve doc and add alternative keyword to wilcoxon in... * `#9808 `__: Fix PPoly integrate and test for CubicSpline * `#9810 `__: ENH: Add the geometric standard deviation function * `#9811 `__: MAINT: remove invalid derphi default None value in scalar_search_wolfe2 * `#9813 `__: Adapt hamming distance in C to support weights * `#9817 `__: DOC: Copy solver description to solver modules * `#9829 `__: ENH: Add FOH and equivalent impulse response discretizations... * `#9831 `__: ENH: Implement RotationSpline * `#9834 `__: DOC: Change mielke distribution default parameters to ensure... * `#9838 `__: ENH: Use faster solvers for firls * `#9854 `__: ENH: loadarff now supports relational attributes. * `#9856 `__: integrate.bvp - improve handling of nonlinear boundary conditions * `#9862 `__: TST: reduce Appveyor CI load * `#9874 `__: DOC: Update requirements in release notes * `#9883 `__: BUG: fixed parenthesis in spatial.rotation * `#9884 `__: ENH: Use Sparsity in Clarkson-Woodruff Sketch * `#9888 `__: MAINT: Replace NumPy aliased functions * `#9892 `__: BUG: Fix 9890 query_ball_point returns wrong result when p is... * `#9893 `__: BUG: curve_fit doesn't check for empty input if called with bounds * `#9894 `__: scipy.signal.find_peaks documentation error * `#9898 `__: BUG: Set success attribute in OptimizeResult. See #9801 * `#9900 `__: BUG: Restrict rv_generic._argcheck() and its overrides from setting... * `#9906 `__: fixed a bug in kde logpdf * `#9911 `__: DOC: replace example for "np.select" with the one from numpy... * `#9912 `__: BF(DOC): point to numpy.select instead of plain (python) .select * `#9914 `__: DOC: change ValueError message in _validate_pad of signaltools. * `#9915 `__: cKDTree query_ball_point improvements * `#9918 `__: Update ckdtree.pyx with boxsize argument in docstring * `#9920 `__: BUG: sparse: Validate explicit shape if given with dense argument... * `#9924 `__: BLD: add back pyproject.toml * `#9931 `__: Fix empty constraint * `#9935 `__: DOC: fix references for stats.f_oneway * `#9936 `__: Revert gh-9619: "FIX: Sparse matrix addition/subtraction eliminates... 
* `#9937 `__: MAINT: fix PEP8 issues and update to pycodestyle 2.5.0 * `#9939 `__: DOC: correct \`structure\` description in \`ndimage.label\` docstring * `#9940 `__: MAINT: remove extraneous distutils copies * `#9945 `__: ENH: differential_evolution can use Bounds object * `#9949 `__: Added 'std' to add doctstrings since it is a \`known_stats\`... * `#9953 `__: DOC: Documentation cleanup for stats tutorials. * `#9962 `__: __repr__ for Bounds * `#9971 `__: ENH: Improve performance of lsmr * `#9987 `__: CI: pin Sphinx version to 1.8.5 * `#9990 `__: ENH: constraint violation * `#9991 `__: BUG: Avoid inplace modification of input array in newton * `#9995 `__: MAINT: sparse.csgraph: Add cdef to stop build warning. * `#9996 `__: BUG: Make minimize_quadratic_1d work with infinite bounds correctly * `#10004 `__: BUG: Fix unbound local error in linprog - simplex. * `#10007 `__: BLD: fix Python 3.7 build with build isolation * `#10009 `__: BUG: Make sure that _binary_erosion only accepts an integer number... * `#10016 `__: Update link to airspeed-velocity * `#10017 `__: DOC: Update \`interpolate.LSQSphereBivariateSpline\` to include... * `#10018 `__: MAINT: special: Fix a few warnings that occur when compiling... * `#10019 `__: TST: Azure summarizes test failures * `#10021 `__: ENH: Introduce CubicHermiteSpline * `#10022 `__: BENCH: Increase cython version in asv to fix benchmark builds * `#10023 `__: BUG: Avoid exponnorm producing nan for small K values. * `#10025 `__: BUG: optimize: tweaked linprog status 4 error message * `#10026 `__: ENH: optimize: use SuiteSparse in linprog interior-point when... * `#10027 `__: MAINT: cluster: clean up the use of malloc() in the function... * `#10028 `__: Fix rotate invalid plane check * `#10040 `__: MAINT: fix pratt method of wilcox test in scipy.stats * `#10041 `__: MAINT: special: Fix a warning generated when building the AMOS... * `#10044 `__: DOC: fix up spatial.transform.Rotation docstrings * `#10047 `__: MAINT: interpolate: Fix a few build warnings. * `#10051 `__: Add project_urls to setup * `#10052 `__: don't set flag to "converged" if max iter exceeded * `#10054 `__: MAINT: signal: Fix a few build warnings and modernize some C... * `#10056 `__: BUG: Ensure factorial is not too large in kendaltau * `#10058 `__: Small speedup in samping from ortho and special_ortho groups * `#10059 `__: BUG: optimize: fix #10038 by increasing tol * `#10061 `__: BLD: DOC: make building docs easier by parsing python version. * `#10064 `__: ENH: Significant speedup for ortho and special ortho group * `#10065 `__: DOC: Reword parameter descriptions in \`optimize.root_scalar\` * `#10066 `__: BUG: signal: Fix error raised by savgol_coeffs when deriv > polyorder. * `#10067 `__: MAINT: Fix the cutoff value inconsistency for pinv2 and pinvh * `#10072 `__: BUG: stats: Fix boxcox_llf to avoid loss of precision. * `#10075 `__: ENH: Add wrappers for ?syconv routines * `#10076 `__: BUG: optimize: fix curve_fit for mixed float32/float64 input * `#10077 `__: DOC: Replace undefined \`k\` in \`interpolate.splev\` docstring * `#10079 `__: DOC: Fixed typo, rearranged some doc of stats.morestats.wilcoxon. 
* `#10080 `__: TST: install scikit-sparse for full TravisCI tests * `#10083 `__: Clean \`\`_clean_inputs\`\` in optimize.linprog * `#10088 `__: ENH: optimize: linprog test CHOLMOD/UMFPACK solvers when available * `#10090 `__: MAINT: Fix CubicSplinerInterpolator for pandas * `#10091 `__: MAINT: improve logcdf and logsf of hypergeometric distribution * `#10095 `__: MAINT: Clean \`\`_clean_inputs\`\` in linprog * `#10116 `__: MAINT: update scipy-sphinx-theme * `#10135 `__: BUG: fix linprog revised simplex docstring problem failure Checksums ========= MD5 ~~~ 11305bc9940ca76568a1c7e46ea0ff05 scipy-1.3.0rc2-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 7cef904e09aff64d5207064bb61e1a6b scipy-1.3.0rc2-cp35-cp35m-manylinux1_i686.whl 63ba156855e33b971dc94206ecbdd7f3 scipy-1.3.0rc2-cp35-cp35m-manylinux1_x86_64.whl 00e48cb29dbb88e56730a59285c463ec scipy-1.3.0rc2-cp35-cp35m-win32.whl 232eb3104f19d1559e30e749a92707cd scipy-1.3.0rc2-cp35-cp35m-win_amd64.whl 559b152a8b438825ba0352a3303e2651 scipy-1.3.0rc2-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 3b663db47752b092a53179d10f3716ea scipy-1.3.0rc2-cp36-cp36m-manylinux1_i686.whl 87615391c5020fe1533cc2bac524ea04 scipy-1.3.0rc2-cp36-cp36m-manylinux1_x86_64.whl 900d5b9893bf105d9df219cbcd3f3dab scipy-1.3.0rc2-cp36-cp36m-win32.whl 5a4bae8f02e87602d710977923490b8b scipy-1.3.0rc2-cp36-cp36m-win_amd64.whl 3a14e70a42d5fd813b43c587314aae7c scipy-1.3.0rc2-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 56b643dc91bb72d926bd80c844643274 scipy-1.3.0rc2-cp37-cp37m-manylinux1_i686.whl 992832ed9a5431c1ba45aa543f2e7e99 scipy-1.3.0rc2-cp37-cp37m-manylinux1_x86_64.whl bb78702fc1aaaf425bafc424303a15cc scipy-1.3.0rc2-cp37-cp37m-win32.whl e14f12154170624262607c0c30390bc2 scipy-1.3.0rc2-cp37-cp37m-win_amd64.whl 62de7f94825d5b6b99ed44955c3e1965 scipy-1.3.0rc2.tar.gz 461dc0660a84538954a41d3dd91dde71 scipy-1.3.0rc2.tar.xz 6fda886c9976a16fc01a9b639ffbfd8c scipy-1.3.0rc2.zip SHA256 ~~~~~~ 8b860d29b4d0d8d3781bc1d51eaa4fc47a903633d545eba074f5e2bc2cbafaa1 scipy-1.3.0rc2-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 0ba6d287862065eb65c7f3f1ed6285a8abf6dec78e3b7c62f203961ba1eae12e scipy-1.3.0rc2-cp35-cp35m-manylinux1_i686.whl 0c29f55c00eae36199733a8c214a8a8493bc47dbb822ffe3847cf58150a06f3d scipy-1.3.0rc2-cp35-cp35m-manylinux1_x86_64.whl b8744f4505a6674eb3e21f779e90ed8efc0d9ee68e6ea8f0f993433dde2e3aee scipy-1.3.0rc2-cp35-cp35m-win32.whl 573c1088ed3daf56c52710e28f11baed8b4cdcabc4411f10d1fb9f61221c539b scipy-1.3.0rc2-cp35-cp35m-win_amd64.whl 134d6969b8025006f7231dfaba611a2314e978cdefa1e0c1b73f7e3010ef807a scipy-1.3.0rc2-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 13d309d9ddf56601afbe8f66bfb44ec374df912203e98d8ddc4e7433925db488 scipy-1.3.0rc2-cp36-cp36m-manylinux1_i686.whl c100badbbacf175e2a0122f59575905d092c8080cb2fa40cade0b9c0a0ae1f8c scipy-1.3.0rc2-cp36-cp36m-manylinux1_x86_64.whl 87b5f5990b4d580fb6b7eaaeb01e756f5aa9ebedc514dece67a2f79e33c7528e scipy-1.3.0rc2-cp36-cp36m-win32.whl 51a7d7b15cd44af1221d5d6c54bf8da470f7de6117bc525f8e8e75ef65a23ccc scipy-1.3.0rc2-cp36-cp36m-win_amd64.whl e4d90d5cc4e3896816d1c31952017bbe4e26a0222702eef046c4a5d2b7f4c9b6 scipy-1.3.0rc2-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 
2799e2c07a9600a21e73756578cde0a8426009b525572ae32f3acb6246642d37 scipy-1.3.0rc2-cp37-cp37m-manylinux1_i686.whl ecacc8e4aa91ee90c5499b97c517a61d447da25774c65d3938f2c5916fd2981d scipy-1.3.0rc2-cp37-cp37m-manylinux1_x86_64.whl e8890f4407db3395873973a416a92ddd496e369b01f4273dd85d187bd9ab2b65 scipy-1.3.0rc2-cp37-cp37m-win32.whl 3c913244a1b34b83ec2a2ba3fc168210db13529a15706c0242499975caf59e61 scipy-1.3.0rc2-cp37-cp37m-win_amd64.whl c7299e4eb2420cfdedd8b090fce549d29fb06c09f5046251f7aa48f7d55fe1c2 scipy-1.3.0rc2.tar.gz 950b6a4cffba2cc25e655489f9a883d6ae8ea97708e5f0a2af9b624c57a27e59 scipy-1.3.0rc2.tar.xz d69f331044d9214dc6a117d4e090d6a1240ccc0c4faaaa6dc49402ecf087dea4 scipy-1.3.0rc2.zip -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJc1FGzAAoJELD/41ZX0J715p0P+wfhJw/hSdZPnh+kIt40rGlO UsiAycApbmWSmHCjlyb3EdLJPW7WC9MoF1JUXfX2X6YFnJ9LF4TUhiFlmf3mgpJC 3nklRqZYaF5GPkm+akThOvTYEoxLwobuPEa9vEeYjDMCxrOdrf9bAsHXoeotspFL NWCvEAbTfghHWjNT1Ug6/oZBBlKrjzlcPYAgzhnL9eQudGdOkvY9NW8DCLoXcuA9 FMQqMnE8SbqSYIvM6udXWbZUHbySwITLFC5h/dIXjDWNbTznb5TyCvhzfKl3GHjp sdrkyZrNhkYPp5W3mr1PV42gIci/Nd7NGhfTMlEWCON3/P+jgR1COE+aMUO5VkTH +620ukPBd/SI0gFG03JctJT2X+6OgqnwXfVkOKbnCI7XWM3I+P0xfLAfHEIaS6+M OrExq9tfDiX9p+CcyGFqZvrkDifq0fglFxQ5cExCQ4+lNVjGci3pC6OJLktR1h2W JJPONQH3YqJIfCUNhfSt2X873dcLNwQiI7rTm76cvxhAqAYK5ZwMoNG2nqpQdB+Z DnOAMQGLwntwm55gi4JQW7Yrx72avsJNrH5s4q6Qt1sKJ/vzZabB5gW61e6r5rYH lqro/09hvnn/7t1J+Snpb1ZlUkp0F/oS4Uh/PSPXONDzE5LprimJ3pCFCObMNdu2 rQKhjhgPhJMzI2+U0gHN =7jl+ -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From joycetirnyuy at gmail.com Fri May 10 02:24:17 2019 From: joycetirnyuy at gmail.com (Ngoran Clare-Joyce F.) Date: Fri, 10 May 2019 07:24:17 +0100 Subject: [Numpy-discussion] My Introduction and Getting Started with Numpy. In-Reply-To: References: Message-ID: Hello Ralf, Thank you for the resources, they were very helpful. I am done setting up the environment and I'm looking forward to making contributions. Thank you, Joyce. On Mon, May 6, 2019 at 8:57 PM Ralf Gommers wrote: > Hi Ngoran, welcome! > > > On Sun, May 5, 2019 at 10:53 AM Joyce Tirnyuy > wrote: > >> Hi All, >> >> I am Ngoran Clare-Joyce, an Electrical Engineer from Cameroon. I use >> Python and Javascript for Software Development. Over the past year, I have >> gained insight into Machine Learning and Data Science Algorithms. I have >> used Numpy, Scipy, Pandas, Pytorch, Scikit-Learn libraries. >> >> I have realized that to take my career to the next level, I need to >> contribute to open source as a way to gain skills, experience and proper >> understanding of how these libraries work. >> > > Excellent, we can always use help:) > > >> Please, I will appreciate help on how to get started, set up my >> development environment, some important documentation, and beginners issues >> so I can start contributing to Numpy. >> > > This is the most recent version of our documentation: > https://www.numpy.org/devdocs/index.html. It has a link "NumPy Developer > Guide" which walks you through setting up your development environment. > There's also "Building and Extending the Documentation" which will help if > you want to work on improving the documentation. For some beginner issues, > please have a look at the ones labelled "easy" on GitHub: > https://github.com/numpy/numpy/issues?q=is%3Aopen+is%3Aissue+label%3A%22difficulty%3A+Easy%22 > > Cheers, > Ralf > > >> >> Thanks, >> Ngoran Clare-Joyce. 
>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wieser.eric+numpy at gmail.com Fri May 10 02:55:42 2019 From: wieser.eric+numpy at gmail.com (Eric Wieser) Date: Thu, 9 May 2019 23:55:42 -0700 Subject: [Numpy-discussion] Style guide for numpy code? In-Reply-To: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu> References: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu> Message-ID: Joe, While most of your style suggestions are reasonable, I would actually recommend the opposite of the first point you make in (a)., especially if you're trying to write generic reusable code. > For example, an item count is always an integer, but a distance is always a float. This is close, but `int` and `float` are implementation details. I think a more precise way to state this is _"an item count is a `numbers.Integral`, a distance is a `numbers.Real`. Where this distinction matters is if you start using `decimal.Decimal` or `fractions.Fraction` for your distances. Those are subclasses of `numbers.Real`, but if you mix them with floats, you either lose precision or crash due to refusing to: ```python In [11]: Fraction(1, 3) + 1.0 Out[11]: 1.3333333333333333 In [12]: Fraction(1, 3) + 1 Out[12]: Fraction(4, 3) In [15]: Decimal('0.1') + 0 Out[15]: Decimal('0.1') In [16]: Decimal('0.1') + 0. TypeError: unsupported operand type(s) for +: 'decimal.Decimal' and 'float' ``` For an example of this coming up in real-world functions, look at https://github.com/numpy/numpy/pull/13390 Eric On Thu, 9 May 2019 at 11:19, Joe Harrington wrote: > > I have a handout for my PHZ 3150 Introduction to Numerical Computing course that includes some rules: > > (a) All integer-valued floating-point numbers should have decimal points after them. For > example, if you have a time of 10 sec, do not use > > y = np.e**10 # sec > > use > > y = np.e**10. # sec > > instead. For example, an item count is always an integer, but a distance is always a float. A decimal in the range (-1,1) must always have a zero before the decimal point, for readability: > > x = 0.23 # Right! > > x = .23 # WRONG > > The purpose of this one is simply to build the decimal-point habit. In Python it's less of an issue now, but sometimes code is translated, and integer division is still out there. For that reason, in other languages, it may be desirable to use a decimal point even for counts, unless integer division is wanted. Make a comment whenever you intend integer division and the language uses the same symbol (/) for both kinds of division. > > (b) Use spaces around binary operations and relations (=<>+-*/). Put a space after ?,?. > Do not put space around ?=? in keyword arguments, or around ? ** ?. > > (c) Do not put plt.show() in your homework file! You may put it in a comment if you > like, but it is not necessary. Just save the plot. If you say > > plt.ion() > > plots will automatically show while you are working. > > (d) Use: > > import matplotlib.pyplot as plt > > NOT: > > import matplotlib.pylab as plt > > (e) Keep lines to 80 characters, max, except in rare cases that are well justified, such as > very long strings. 
If you make comments on the same line as code, keep them short or > break them over more than a line: > > code = code2 # set code equal to code2 > > # Longer comment requiring much more space because > # I'm explaining something complicated. > code = code2 > > code = code2 # Another way to do a very long comment, > # like this one, which runs over more than > # one line. > > (f) Keep blocks of similar lines internally lined up on decimals, comments, and = signs. This makes them easier to read and verify. There will be some cases when this is impractical. Use your judgment (you're not a computer, you control the computer!): > > x = 1. # this is a comment > y = 378.2345 # here's another > fred = chuck # note how the decimals, = signs, and > # comments line up nicely... > alacazamshmazooboloid = 2721 # but not always! > > (g) Put the units and sources of all values in comments: > > t_planet = 523. # K, Smith and Jones (2016, ApJ 234, 22) > > (h) I don't mean to start a religious war, but I emphasize the alignment of similar adjacent code lines to make differences pop out and reduce the likelihood of bugs. For example, it is much easier to verify the correctness of: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_alt = 3 * x + 3 * 8. * anotshortvar - 5. * np.exp(np.pi * omega * t) > > than: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_altvarname = 3 * x + 3*9*anotshortvar - 5. * np.exp(np.pi * omega * i) > > (i) Assign values to meaningful variables, and use them in formulae and functions: > > ny = 512 > nx = 512 > image = np.zeros((ny, nx)) > expr1 = ny * 3 > expr2 = nx * 4 > > Otherwise, later on when you upgrade to 2560x1440 arrays, you won't know which of the 512s are in the x direction and which are in the y direction. Or, the student you (now a senior researcher) assign to code the upgrade won't! Also, it reduces bugs arising from the order of arguments to functions if the args have meaningful names. This is not to say that you should assign all numbers to functions. This is fine: > > circ = 2 * np.pi * r > > (j) All functions assigned for grading must have full docstrings in numpy's format, as well as internal comments. Utility functions not requested in the assignment and that the user will never see can have reduced docstrings if the functions are simple and obvious, but at least give the one-line summary. > > (k) If you modify an existing function, you must either make a Git entry or, if it is not under revision control, include a Revision History section in your docstring and record your name, the date, the version number, your email, and the nature of the change you made. > > (l) Choose variable names that are meaningful and consistent in style. Document your style either at the head of a module or in a separate text file for the project. For example, if you use CamelCaps with initial capital, say that. If you reserve initial capitals for classes, say that. If you use underscores for variable subscripts and camelCaps for the base variables, say that. If you accept some other style and build on that, say that. There are too many good reasons to have such styles for only one to be the community standard. If certain kinds of values should get the same variable or base variable, such as fundamental constants or things like amplitudes, say that. > > (j) It's best if variables that will appear in formulae are short, so more terms can fit in one 80 character line. > > Overall, having and following a style makes code easier to read. 
And, as an added bonus, if you take care to be consistent, you will write
slower, view your code more times, and catch more bugs as you write them.
Thus, for codes of any significant size, writing pedantically commented
and aligned code is almost always faster than blast coding, if you include
debugging time.
>
> Did you catch both bugs in item h?
>
> --jh--
>
> On 5/9/19 11:25 AM, Chris Barker - NOAA Federal wrote:
>
> Do any of you know of a style guide for computational / numpy code?
>
> I don't mean code that will go into numpy itself, but rather, users' code
> that uses numpy (and scipy, and...)
>
> I know about (am a proponent of) PEP8, but it doesn't address the unique
> needs of scientific programming.
>
> This is mostly about variable names. In scientific code, we often want:
>
> - variable names that match the math notation -- so single character
> names, maybe upper or lower case to mean different things (in ocean wave
> mechanics, often 'h' is the water depth, and 'H' is the wave height)
>
> - to distinguish between scalar, vector, and matrix values -- often
> UpperCase means an array or matrix, for instance.
>
> But despite (or because of) these unique needs, a style guide would be
> really helpful.
>
> Anyone have one? Or even any notes on what you do yourself?
>
> Thanks,
> -CHB
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R (206) 526-6959 voice
> 7600 Sand Point Way NE (206) 526-6329 fax
> Seattle, WA 98115 (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From evgeny.burovskiy at gmail.com Fri May 10 03:30:46 2019
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Fri, 10 May 2019 10:30:46 +0300
Subject: [Numpy-discussion] Style guide for numpy code?
In-Reply-To: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu>
References: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu>
Message-ID:

Hi Joe,

Thanks for sharing!
I'm going to use your handout as a base for my numerical computing
classes, (with an appropriate citation, of course :-)).

On Thu, 9 May 2019 at 21:19, Joe Harrington wrote:

> I have a handout for my PHZ 3150 Introduction to Numerical Computing
> course that includes some rules:
>
> (a) All integer-valued floating-point numbers should have decimal points
> after them. For example, if you have a time of 10 sec, do not use
>
> y = np.e**10 # sec
>
> use
>
> y = np.e**10. # sec
>
> instead. For example, an item count is always an integer, but a distance
> is always a float. A decimal in the range (-1,1) must always have a zero
> before the decimal point, for readability:
>
> x = 0.23 # Right!
>
> x = .23 # WRONG
>
> The purpose of this one is simply to build the decimal-point habit. In
> Python it's less of an issue now, but sometimes code is translated, and
> integer division is still out there. For that reason, in other languages,
> it may be desirable to use a decimal point even for counts, unless integer
> division is wanted. Make a comment whenever you intend integer division
> and the language uses the same symbol (/) for both kinds of division.
>
> (b) Use spaces around binary operations and relations (=<>+-*/). Put a
> space after ','.
> Do not put space around '=' in keyword arguments, or around ' ** '.
>
> (c) Do not put plt.show() in your homework file!
You may put it in a > comment if you > like, but it is not necessary. Just save the plot. If you say > > plt.ion() > > plots will automatically show while you are working. > > (d) Use: > > import matplotlib.pyplot as plt > > NOT: > > import matplotlib.pylab as plt > > (e) Keep lines to 80 characters, max, except in rare cases that are well > justified, such as > very long strings. If you make comments on the same line as code, keep > them short or > break them over more than a line: > > code = code2 # set code equal to code2 > > # Longer comment requiring much more space because > # I'm explaining something complicated. > code = code2 > > code = code2 # Another way to do a very long comment, > # like this one, which runs over more than > # one line. > > (f) Keep blocks of similar lines internally lined up on decimals, > comments, and = signs. This makes them easier to read and verify. There > will be some cases when this is impractical. Use your judgment (you're not > a computer, you control the computer!): > > x = 1. # this is a comment > y = 378.2345 # here's another > fred = chuck # note how the decimals, = signs, and > # comments line up nicely... > alacazamshmazooboloid = 2721 # but not always! > > (g) Put the units and sources of all values in comments: > > t_planet = 523. # K, Smith and Jones (2016, ApJ 234, 22) > > (h) I don't mean to start a religious war, but I emphasize the alignment > of similar adjacent code lines to make differences pop out and reduce the > likelihood of bugs. For example, it is much easier to verify the > correctness of: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_alt = 3 * x + 3 * 8. * anotshortvar - 5. * np.exp(np.pi * omega * t) > > than: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_altvarname = 3 * x + 3*9*anotshortvar - 5. * np.exp(np.pi * omega * i) > > (i) Assign values to meaningful variables, and use them in formulae and > functions: > > ny = 512 > nx = 512 > image = np.zeros((ny, nx)) > expr1 = ny * 3 > expr2 = nx * 4 > > Otherwise, later on when you upgrade to 2560x1440 arrays, you won't know > which of the 512s are in the x direction and which are in the y direction. > Or, the student you (now a senior researcher) assign to code the upgrade > won't! Also, it reduces bugs arising from the order of arguments to > functions if the args have meaningful names. This is not to say that you > should assign all numbers to functions. This is fine: > > circ = 2 * np.pi * r > > (j) All functions assigned for grading must have full docstrings in > numpy's format, as well as internal comments. Utility functions not > requested in the assignment and that the user will never see can have > reduced docstrings if the functions are simple and obvious, but at least > give the one-line summary. > > (k) If you modify an existing function, you must either make a Git entry > or, if it is not under revision control, include a Revision History section > in your docstring and record your name, the date, the version number, your > email, and the nature of the change you made. > > (l) Choose variable names that are meaningful and consistent in style. > Document your style either at the head of a module or in a separate text > file for the project. For example, if you use CamelCaps with initial > capital, say that. If you reserve initial capitals for classes, say that. > If you use underscores for variable subscripts and camelCaps for the base > variables, say that. If you accept some other style and build on that, say > that. 
There are too many good reasons to have such styles for only one to > be the community standard. If certain kinds of values should get the same > variable or base variable, such as fundamental constants or things like > amplitudes, say that. > > (j) It's best if variables that will appear in formulae are short, so more > terms can fit in one 80 character line. > > Overall, having and following a style makes code easier to read. And, as > an added bonus, if you take care to be consistent, you will write slower, > view your code more times, and catch more bugs as you write them. Thus, > for codes of any significant size, writing pedantically commented and > aligned code is almost always faster than blast coding, if you include > debugging time. > > Did you catch both bugs in item h? > > --jh-- > > On 5/9/19 11:25 AM, Chris Barker - NOAA Federal > wrote: > > Do any of you know of a style guide for computational / numpy code? > > I don't mean code that will go into numpy itself, but rather, users code > that uses numpy (and scipy, and...) > > I know about (am a proponent of) PEP8, but it doesn?t address the unique > needs of scientific programming. > > This is mostly about variable names. In scientific code, we often want: > > - variable names that match the math notation- so single character names, > maybe upper or lower case to mean different things ( in ocean wave > mechanics, often ?h? is the water depth, and ?H? is the wave height) > > -to distinguish between scalar, vector, and matrix values ? often > UpperCase means an array or matrix, for instance. > > But despite (or because of) these unique needs, a style guide would be > really helpful. > > Anyone have one? Or even any notes on what you do yourself? > > Thanks, > -CHB > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri May 10 07:29:45 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 10 May 2019 13:29:45 +0200 Subject: [Numpy-discussion] My Introduction and Getting Started with Numpy. In-Reply-To: References: Message-ID: On Fri, May 10, 2019 at 8:24 AM Ngoran Clare-Joyce F. < joycetirnyuy at gmail.com> wrote: > Hello Ralf, > > Thank you for the resources, they were very helpful. > I am done setting up the environment and I'm looking forward to making > contributions. > Hi Joyce, that's great. Let us know if you need any pointers, please ask either on GitHub or here. Cheers, Ralf > Thank you, > Joyce. > > On Mon, May 6, 2019 at 8:57 PM Ralf Gommers > wrote: > >> Hi Ngoran, welcome! >> >> >> On Sun, May 5, 2019 at 10:53 AM Joyce Tirnyuy >> wrote: >> >>> Hi All, >>> >>> I am Ngoran Clare-Joyce, an Electrical Engineer from Cameroon. I use >>> Python and Javascript for Software Development. Over the past year, I have >>> gained insight into Machine Learning and Data Science Algorithms. I have >>> used Numpy, Scipy, Pandas, Pytorch, Scikit-Learn libraries. 
>>>
>>> I have realized that to take my career to the next level, I need to
>>> contribute to open source as a way to gain skills, experience and proper
>>> understanding of how these libraries work.
>>>
>>
>> Excellent, we can always use help:)
>>
>>
>>> Please, I will appreciate help on how to get started, set up my
>>> development environment, some important documentation, and beginners issues
>>> so I can start contributing to Numpy.
>>>
>>
>> This is the most recent version of our documentation:
>> https://www.numpy.org/devdocs/index.html. It has a link "NumPy Developer
>> Guide" which walks you through setting up your development environment.
>> There's also "Building and Extending the Documentation" which will help if
>> you want to work on improving the documentation. For some beginner issues,
>> please have a look at the ones labelled "easy" on GitHub:
>> https://github.com/numpy/numpy/issues?q=is%3Aopen+is%3Aissue+label%3A%22difficulty%3A+Easy%22
>>
>> Cheers,
>> Ralf
>>
>>
>>>
>>> Thanks,
>>> Ngoran Clare-Joyce.
>>>
>>>
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion at python.org
>>> https://mail.python.org/mailman/listinfo/numpy-discussion
>>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at python.org
>> https://mail.python.org/mailman/listinfo/numpy-discussion
>>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From chris.barker at noaa.gov Fri May 10 14:33:04 2019
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 10 May 2019 11:33:04 -0700
Subject: [Numpy-discussion] numpy finding local tests on import?!?!
Message-ID:

TL;DR: This issue appears to have been fixed in numpy 1.15 (at least, I
didn't test 1.14). However, I also had some issues in my environment that
I also fixed, so it may be that numpy's behavior hasn't changed -- I don't
have the energy to test now. And it doesn't hurt to have this in the
archives in case someone else runs into the problem.

Read on if you care about weird behaviour with the testing package in
numpy 1.13....

Numpy appears to be both running tests on import (or at least the test
runner), and finding local tests that are not numpy's.

I found this issue (closed without a resolution):

https://github.com/numpy/numpy/issues/11457

which is related -- but it's about the import time of numpy.testing, and
not about errors/issues from that import. But maybe the import process has
been changed in newer numpys.

What I did, and what I got:

I am trying to debug what looks like a numpy-related issue in a project.
So one thing I did was try to import numpy and check __version__:

python -c "import numpy; print(numpy.__version__)"

very weird barf:

File "/Users/chris.barker/miniconda2/envs/gridded/lib/python2.7/unittest/runner.py", line 4, in
import time
File "time.py", line 7, in
import netCDF4 as nc4
File "/Users/chris.barker/miniconda2/envs/gridded/lib/python2.7/site-packages/netCDF4/__init__.py", line 3, in
from ._netCDF4 import *
File "include/netCDF4.pxi", line 728, in init netCDF4._netCDF4 (netCDF4/_netCDF4.c:83784)
AttributeError: 'module' object has no attribute 'ndarray'

I get the same thing if I fire up the interpreter and then import numpy.
As the error seemed to come from unittest/runner.py, I had a hunch.
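(Editorial note: the `File "time.py", line 7` frame in the traceback above
is the telltale sign -- a local file named time.py is shadowing the
standard-library time module that unittest/runner.py imports. A quick
diagnostic sketch, not from the original message:)

```python
import sys
import time

# The real stdlib 'time' is a built-in module and has no __file__;
# a shadowing local time.py will report its path here instead.
print(getattr(time, "__file__", "<built-in: the real stdlib time>"))

# sys.path[0] is '' (the current directory) for `python -c` and the
# REPL, which is why local files and packages -- like a stray time.py
# or a tests/ directory -- can shadow installed modules.
print(sys.path[0])
```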
I was, in fact, running with my current working directory in the package
dir of my project, and there is a test package in that dir.

I cd out of that, and presto! numpy imports fine:

$ python -c "import numpy; print(numpy.__version__)"
1.13.1

OK, that's a kinda old numpy -- but it's the minimum required by my
project. (though I can probably update that -- I'll do that soon)

So it appears that the test runner is looking in the current working dir
(or, I suppose, sys.path) for packages called tests -- this seems like a
broken system; unless you are running the tests explicitly from the
command line, it shouldn't look in the cwd, and it probably shouldn't ever
look in all of sys.path.

But my bigger confusion here is -- why the heck is the test runner being
run at ALL on a simple import?!?!?

If this has been fixed / changed in newer numpy's then OK -- I'll update
my dependencies.

-CHB

-
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From shoyer at gmail.com Fri May 10 22:04:18 2019
From: shoyer at gmail.com (Stephan Hoyer)
Date: Fri, 10 May 2019 19:04:18 -0700
Subject: [Numpy-discussion] Adding to the non-dispatched implementation of
 NumPy methods
In-Reply-To: References: <16a24b876f0.27ae.acf34a9c767d7bb498a799333be0433e@fastmail.com>
 <66ea23ed-e434-42d5-9546-1a3a7528ce9e@Canary>
Message-ID:

On Sat, May 4, 2019 at 12:29 PM Ralf Gommers wrote:

> We seem to have run out of steam a bit here.

We discussed this today in person at the NumPy sprint.

The consensus was to go for a name like __skip_array_function__. Ufuncs
don't have very good use-cases for a function that skips dispatch:
1. The overhead of the ufunc dispatch machinery is much smaller,
especially in the case where all arguments are NumPy arrays, because there
is no need for a wrapper function in Python.
2. Inside __array_ufunc__ it's possible to cast arguments into NumPy
arrays explicitly and then call the ufunc again. There's no need to
explicitly skip overrides.

We also don't really care about supporting the use-case where a function
gets changed into a ufunc. We already warn users not to call
__skip_array_function__ directly (without using getattr) outside
__array_function__.

Given all this, it seems best to stick with a name that mirrors
__array_function__ as closely as possible. I picked "skip" instead of
"skipping" just because it's slightly shorter, but otherwise don't have a
strong preference.

I've edited the NEP [1] and implementation [2] pull requests to use this
new name, and clarify the use-cases. If there are no serious objections,
I'd love to merge these soon, in time for the NumPy 1.17 release
candidate.

[1] https://github.com/numpy/numpy/pull/13305
[2] https://github.com/numpy/numpy/pull/13389
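(Editorial note: a minimal sketch of the pattern Stephan describes --
calling the attribute via getattr from inside __array_function__ so the
code still works if the attribute is absent. MyArray is a hypothetical toy
wrapper, and __skip_array_function__ was only the name under discussion in
the linked PRs, so it is not guaranteed to exist in any released NumPy;
the getattr fallback makes the sketch run either way on NumPy >= 1.17:)

```python
import numpy as np

class MyArray:
    """Toy duck array, purely for illustration."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Unwrap top-level MyArray arguments down to plain ndarrays.
        unwrapped = tuple(a.data if isinstance(a, MyArray) else a
                          for a in args)
        # getattr guard, as advised above: fall back to func itself
        # if __skip_array_function__ does not exist.
        impl = getattr(func, "__skip_array_function__", func)
        return impl(*unwrapped, **kwargs)

arr = MyArray([1.0, 2.0, 3.0])
print(np.mean(arr))  # dispatches through MyArray.__array_function__
```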
From charlesr.harris at gmail.com Sun May 12 08:58:57 2019
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 12 May 2019 05:58:57 -0700
Subject: [Numpy-discussion] Release vs development testing.
Message-ID:

Hi All,

NumPy currently distinguishes between release and development versions
when running tests. Is there a good reason to continue this practice? I
ask, because with the last pytest release it would be convenient to always
include `pytest.ini` so that we can register markers. The presence of
`pytest.ini` is how we distinguish between development and release for
testing purposes.

Chuck

From jtaylor.debian at googlemail.com Sun May 12 09:33:08 2019
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Sun, 12 May 2019 15:33:08 +0200
Subject: [Numpy-discussion] Release vs development testing.
In-Reply-To: References: Message-ID:

On 12.05.19 14:58, Charles R Harris wrote:
> Hi All,
>
> NumPy currently distinguishes between release and development versions
> when running tests. Is there a good reason to continue this practice? I
> ask, because with the last pytest release it would be convenient to
> always include `pytest.ini` so that we can register markers. The
> presence of `pytest.ini` is how we distinguish between development and
> release for testing purposes.
>

One difference between development and release builds was that in
development releases numpy.testing throws errors on floating point
exceptions while the release version did not.
If that is still the case removing the distinction could require a lot
of changes in upstream test suites that are not regularly run against
development builds.

The motivation is not quite clear to me, can you please elaborate on
what you want to do.

From charlesr.harris at gmail.com Sun May 12 09:55:48 2019
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 12 May 2019 06:55:48 -0700
Subject: [Numpy-discussion] Release vs development testing.
In-Reply-To: References: Message-ID:

On Sun, May 12, 2019 at 6:33 AM Julian Taylor wrote:

> On 12.05.19 14:58, Charles R Harris wrote:
> > Hi All,
> >
> > NumPy currently distinguishes between release and development versions
> > when running tests. Is there a good reason to continue this practice? I
> > ask, because with the last pytest release it would be convenient to
> > always include `pytest.ini` so that we can register markers. The
> > presence of `pytest.ini` is how we distinguish between development and
> > release for testing purposes.
> >
>
> One difference between development and release builds was that in
> development releases numpy.testing throws errors on floating point
> exceptions while the release version did not.
> If that is still the case removing the distinction could require a lot
> of changes in upstream test suites that are not regularly run against
> development builds.
>
> The motivation is not quite clear to me, can you please elaborate on
> what you want to do.
>

NumPy pytest testing is NumPy specific and not used downstream like our
nose testing framework was, so I don't see why that should affect other
projects. What motivates this question is that the new version of pytest
released yesterday raises warnings for non-registered markers,
`pytest.mark.slow` in particular, and that was causing CI failures. The
easiest way to register a mark is using `pytest.ini`, but we currently
don't include that in released wheels, only in source releases.

Chuck
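(Editorial note: a rough illustration of the release/development
difference Julian describes -- the same floating-point exception either
warns or raises depending on the error state the test setup establishes.
This is a sketch of the effect only; the exact mechanism inside
numpy.testing differs:)

```python
import numpy as np

a = np.array([1.0])
b = np.array([0.0])

# Release-like behavior: a RuntimeWarning, and execution continues.
with np.errstate(divide="warn"):
    a / b

# Development-build-like behavior: the same operation raises.
try:
    with np.errstate(divide="raise"):
        a / b
except FloatingPointError as exc:
    print("raised:", exc)
```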
From ralf.gommers at gmail.com Sun May 12 10:26:25 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 12 May 2019 16:26:25 +0200
Subject: [Numpy-discussion] Release vs development testing.
In-Reply-To: References: Message-ID:

On Sun, May 12, 2019 at 3:56 PM Charles R Harris wrote:

>
>
> On Sun, May 12, 2019 at 6:33 AM Julian Taylor <
> jtaylor.debian at googlemail.com> wrote:
>
>> On 12.05.19 14:58, Charles R Harris wrote:
>> > Hi All,
>> >
>> > NumPy currently distinguishes between release and development versions
>> > when running tests. Is there a good reason to continue this practice? I
>> > ask, because with the last pytest release it would be convenient to
>> > always include `pytest.ini` so that we can register markers. The
>> > presence of `pytest.ini` is how we distinguish between development and
>> > release for testing purposes.
>> >
>>
>> One difference between development and release builds was that in
>> development releases numpy.testing throws errors on floating point
>> exceptions while the release version did not.
>>
>

I'd prefer to keep this behavior. It's not clear to me if the proposal is
to change the behavior or not.
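(Editorial note: for readers following along, registering a marker does
not strictly require shipping a pytest.ini -- pytest also accepts
programmatic registration from a conftest.py, which is how this thread is
eventually resolved further below. A sketch using standard pytest hooks;
the marker text is illustrative:)

```python
# conftest.py
def pytest_configure(config):
    # Equivalent to a "markers =" entry in pytest.ini.
    config.addinivalue_line(
        "markers", "slow: mark a test as slow to run")
    # The warnings-to-errors behavior discussed above corresponds to a
    # "filterwarnings = error" ini line, which could also be added here:
    # config.addinivalue_line("filterwarnings", "error")
```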
> > If that is still the case removing the distinction could require a lot >>> of changes in upstream test suites that are not regularly run against >>> development builds. >>> >>> The motivation is not quite clear to me, can you please elaborate on >>> what you want to do. >>> >> >> NumPy pytest testing is NumPy specific and not used downstream like our >> nose testing framework was, so I don't see why that should affect other >> projects. What motivates this question is that the new version of pytest >> released yesterday raises warnings for non-registered markers, >> `pytest.mark.slow` in particular, and that was causing CI failures. The >> easiest way to register a mark is using `pytest.ini`, but we currently >> don't include that in released wheels, only in source releases. >> > > Adding a pytest.ini file in wheels should be perfectly fine I think, > > It is the absence of pytest.ini that makes it a release, for that is the file that turns warnings into errors. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Sun May 12 10:48:16 2019 From: alan.isaac at gmail.com (Alan Isaac) Date: Sun, 12 May 2019 10:48:16 -0400 Subject: [Numpy-discussion] folds and scans Message-ID: <81b10e16-e728-9c04-5edc-51585abb312c@gmail.com> What is the recommended (i.e., fast) way to do folds and scans across one axis of a NumPy array? (The equivalent of Mma's Fold and FoldList.) Assume an arbitrary Python function that produces one array from two arrays, where these three arrays share the same dimensions. (So the use of Python's `reduce` is certainly conceivable for the fold.) Thank you, Alan Isaac From alex at alexsamuel.net Mon May 13 00:35:58 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Mon, 13 May 2019 00:35:58 -0400 Subject: [Numpy-discussion] casting from datetime64 Message-ID: <3f392953-4383-4dc4-9715-3a0d493e5446@www.fastmail.com> Hi, When registering a custom cast function from datetime64 to another dtype, how can I get the units? I am calling PyArray_RegisterCastFunc from NPY_DATETIME. Ideally, I'd like to register a separate cast function for each datetime64 units (or none at all... I don't want all units to be castable). Next best thing would be to obtain the units in the cast function and dispatch accordingly. Is this possible? I glanced through the code, and it looks like there's a lot of hard-coded logic around datetime64, but I didn't go through it carefully. Thought I'd ask before drilling further down. Thanks in advance, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Mon May 13 01:29:59 2019 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sun, 12 May 2019 22:29:59 -0700 Subject: [Numpy-discussion] casting from datetime64 In-Reply-To: <3f392953-4383-4dc4-9715-3a0d493e5446@www.fastmail.com> References: <3f392953-4383-4dc4-9715-3a0d493e5446@www.fastmail.com> Message-ID: Hi Alex, On Mon, 2019-05-13 at 00:35 -0400, Alex Samuel wrote: > Hi, > > When registering a custom cast function from datetime64 to another > dtype, how can I get the units? > > I am calling PyArray_RegisterCastFunc from NPY_DATETIME. Ideally, > I'd like to register a separate cast function for each datetime64 > units (or none at all... I don't want all units to be castable). > Next best thing would be to obtain the units in the cast function and > dispatch accordingly. > > Is this possible? 
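(Editorial aside on Alan Isaac's folds-and-scans question above: for
binary ufuncs, NumPy exposes the fold as ufunc.reduce and the scan as
ufunc.accumulate; for an arbitrary Python function, the standard library's
reduce and accumulate over the chosen axis are a workable, if slower,
fallback. A sketch, with f standing in for the arbitrary combining
function:)

```python
import numpy as np
from functools import reduce
from itertools import accumulate

a = np.arange(24.0).reshape(4, 3, 2)

# Fast fold and scan with a ufunc along one axis:
fold = np.add.reduce(a, axis=0)      # fold, same as a.sum(axis=0)
scan = np.add.accumulate(a, axis=0)  # scan, same as np.cumsum(a, axis=0)

def f(x, y):
    # Arbitrary array -> array combining function (illustrative).
    return x * y + 1.0

# Iterating an ndarray yields its subarrays along axis 0, so moveaxis
# lets the same trick fold or scan over any axis:
axis = 1
fold_f = reduce(f, np.moveaxis(a, axis, 0))
scan_f = np.stack(list(accumulate(np.moveaxis(a, axis, 0), f)), axis=axis)
```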
I glanced through the code, and it looks like > there's a lot of hard-coded logic around datetime64, but I didn't go > through it carefully. Thought I'd ask before drilling further down. > No, I do not think that is possible. But you do get the array pointers during the cast. I honestly would prefer not to promise that all of this will survive if we change this in numpy, since there are almost no users. But I think if we change it I could promise to help with cleaning it up in ora. I think this is public API (but I do not really like anyone using it ;)), so you can use: ``` /* * This function returns a pointer to the DateTimeMetaData * contained within the provided datetime dtype. */ static PyArray_DatetimeMetaData * get_datetime_metadata_from_dtype(PyArray_Descr *dtype) { /* original error check for DATETIME unnecessary for you */ return &(((PyArray_DatetimeDTypeMetaData *)dtype->c_metadata)->meta); } ``` And in the castfunc (fromarr is passed in as a void * as far as): ``` NPY_DATETIMEUNIT base = get_datetime_metadata_from_dtype( PyArray_DESCR(fromarr))->base; ``` Where NPY_DATETIMEUNIT is the enum defined in ndarraytypes.h. The logic will have to happen inside the cast func. I would hope there was a better way, but I cannot think of any, and I am scared that supporting this will add yet another ugly hack when we want to improve dtypes... Best, Sebastian > Thanks in advance, > Alex > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ralf.gommers at gmail.com Mon May 13 05:01:27 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 13 May 2019 11:01:27 +0200 Subject: [Numpy-discussion] Release vs development testing. In-Reply-To: References: Message-ID: On Sun, May 12, 2019 at 4:41 PM Charles R Harris wrote: > > > On Sun, May 12, 2019 at 7:27 AM Ralf Gommers > wrote: > >> >> >> On Sun, May 12, 2019 at 3:56 PM Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, May 12, 2019 at 6:33 AM Julian Taylor < >>> jtaylor.debian at googlemail.com> wrote: >>> >>>> On 12.05.19 14:58, Charles R Harris wrote: >>>> > Hi All, >>>> > >>>> > NumPy currently distinguishes between release and development versions >>>> > when running tests. Is there a good reason to continue this practice? >>>> I >>>> > ask, because with the last pytest release it would be convenient to >>>> > always include `pytest.ini ` so that we can register markers. The >>>> > presence of `pytest.ini` is how we distinguish betweendevelopment from >>>> > release for testing purposes. >>>> > >>>> >>>> One difference between development and release builds was that in >>>> development releases numpy.testing throws errors on floating point >>>> exceptions while the release version it did not. >>>> >>> >> I'd prefer to keep this behavior. It's not clear to me if the proposal is >> to change the behavior or not. >> >> If that is still the case removing the distinction could require a lot >>>> of changes in upstream test suites that are not regularly run against >>>> development builds. >>>> >>>> The motivation is not quite clear to me, can you please elaborate on >>>> what you want to do. 
>>>> >>> >>> NumPy pytest testing is NumPy specific and not used downstream like our >>> nose testing framework was, so I don't see why that should affect other >>> projects. What motivates this question is that the new version of pytest >>> released yesterday raises warnings for non-registered markers, >>> `pytest.mark.slow` in particular, and that was causing CI failures. The >>> easiest way to register a mark is using `pytest.ini`, but we currently >>> don't include that in released wheels, only in source releases. >>> >> >> Adding a pytest.ini file in wheels should be perfectly fine I think, >> >> > It is the absence of pytest.ini that makes it a release, for that is the > file that turns warnings into errors. > Why don't we just always keep pytest.ini, and move the settings for warnings-to-errors to runtests.py or tools/travis_test.sh? It's not important how we check, all we need is some mechanism to prevent new warnings creeping in. Same as we set -Wall for building in CI. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon May 13 06:51:34 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 13 May 2019 12:51:34 +0200 Subject: [Numpy-discussion] Adding to the non-dispatched implementation of NumPy methods In-Reply-To: References: <16a24b876f0.27ae.acf34a9c767d7bb498a799333be0433e@fastmail.com> <66ea23ed-e434-42d5-9546-1a3a7528ce9e@Canary> Message-ID: On Sat, May 11, 2019 at 4:04 AM Stephan Hoyer wrote: > On Sat, May 4, 2019 at 12:29 PM Ralf Gommers > wrote: > >> We seem to have run out of steam a bit here. >> > > We discussed this today in person at the NumPy sprint. > > The consensus was to go for a name like __skip_array_function__. Ufuncs > don't have very good use-cases for a function that skips dispatch: > 1. The overhead of the ufunc dispatch machinery is much smaller, > especially in the case where all arguments are NumPy arrays, because there > is no need for a wrapper function in Python. > 2. Inside __array_ufunc__ it's possible to cast arguments into NumPy > arrays explicitly and then call the ufunc again. There's no need to > explicitly skip overrides. > > We also don't really care about supporting the use-case where a function > gets changed into a ufunc. We already warn users not to call > __skip_array_function__ directly (without using getattr) outside > __array_function__. > > Given all this, it seems best to stick with a name that mirrors > __array_function__ as closely as possible. I picked "skip" instead of > "skpping" just because it's slightly shorter, but otherwise don't have a > strong preference. > > I've edited the NEP [1] and implementation [2] pull requests to use this > new name, and clarify the use-cases. If there no serious objections, I'd > love to merge these soon, in time for the NumPy 1.17 release candidate. > > [1] https://github.com/numpy/numpy/pull/13305 > [2] https://github.com/numpy/numpy/pull/13389 > Thanks for the update Stephan, that all sounds good to me. Looks like it was a productive sprint! Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at alexsamuel.net Mon May 13 09:31:47 2019 From: alex at alexsamuel.net (Alex Samuel) Date: Mon, 13 May 2019 09:31:47 -0400 Subject: [Numpy-discussion] casting from datetime64 In-Reply-To: References: <3f392953-4383-4dc4-9715-3a0d493e5446@www.fastmail.com> Message-ID: <4c9dde72-7c8e-4a4c-badf-331c3e191af1@www.fastmail.com> Thanks! 
I understand your hesitation; certainly if I am using undocumented APIs (or even if they are otherwise unused) I understand that it will be on my shoulders to preserve compatibility if you change things. But I would definitely like to get basic functionality working with current (and older, to be honest) NumPy versions. Ah, I missed previously that the array objects are passed as the void* args to the PyArray_VectorUnaryFunc. But it's documented here. https://docs.scipy.org/doc/numpy-1.13.0/reference/c-api.types-and-structures.html#c.PyArray_ArrFuncs.cast (Why aren't these two args PyArrayObject* then?) As far as I can tell, get_datetime_metadata_from_dtype() isn't a public API? It certainly isn't spelled like one. It's declared in numpy/core/src/multiarray/_datetime.h, which doesn't appear to be #included in any other header. I suppose I could duplicate the struct layout in my own code to fish out the field I need. Obviously, compatibility is my own problem then. Regards, Alex On Mon, May 13, 2019, at 01:31, Sebastian Berg wrote: > Hi Alex, > > On Mon, 2019-05-13 at 00:35 -0400, Alex Samuel wrote: > > Hi, > > > > When registering a custom cast function from datetime64 to another > > dtype, how can I get the units? > > > > I am calling PyArray_RegisterCastFunc from NPY_DATETIME. Ideally, > > I'd like to register a separate cast function for each datetime64 > > units (or none at all... I don't want all units to be castable). > > Next best thing would be to obtain the units in the cast function and > > dispatch accordingly. > > > > Is this possible? I glanced through the code, and it looks like > > there's a lot of hard-coded logic around datetime64, but I didn't go > > through it carefully. Thought I'd ask before drilling further down. > > get_datetime_metadata_from_dtype > > No, I do not think that is possible. But you do get the array pointers > during the cast. I honestly would prefer not to promise that all of > this will survive if we change this in numpy, since there are almost no > users. But I think if we change it I could promise to help with > cleaning it up in ora. > > I think this is public API (but I do not really like anyone using it > ;)), so you can use: > > ``` > /* > * This function returns a pointer to the DateTimeMetaData > * contained within the provided datetime dtype. > */ > static PyArray_DatetimeMetaData * > get_datetime_metadata_from_dtype(PyArray_Descr *dtype) > { > /* original error check for DATETIME unnecessary for you */ > return &(((PyArray_DatetimeDTypeMetaData *)dtype->c_metadata)->meta); > } > ``` > > And in the castfunc (fromarr is passed in as a void * as far as): > ``` > NPY_DATETIMEUNIT base = get_datetime_metadata_from_dtype( > PyArray_DESCR(fromarr))->base; > ``` > > Where NPY_DATETIMEUNIT is the enum defined in ndarraytypes.h. The logic > will have to happen inside the cast func. > > I would hope there was a better way, but I cannot think of any, and I > am scared that supporting this will add yet another ugly hack when we > want to improve dtypes... 
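(Editorial note: at the Python level, the metadata being dug out of
PyArray_DatetimeDTypeMetaData in the quoted C code is exposed through
public API -- np.datetime_data returns the unit and count stored in a
datetime64 dtype. That does not solve the C-level cast-registration
problem discussed here, but it shows the same information:)

```python
import numpy as np

dt = np.dtype("datetime64[ns]")

# (unit, count) as stored in the dtype's metadata, e.g. ('ns', 1).
unit, count = np.datetime_data(dt)
print(unit, count)
```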
> > Best, > > Sebastian > > > > > Thanks in advance, > > Alex > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > > *Attachments:* > * signature.asc -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon May 13 13:25:57 2019 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 May 2019 11:25:57 -0600 Subject: [Numpy-discussion] Release vs development testing. In-Reply-To: References: Message-ID: On Mon, May 13, 2019 at 3:02 AM Ralf Gommers wrote: > > > On Sun, May 12, 2019 at 4:41 PM Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, May 12, 2019 at 7:27 AM Ralf Gommers >> wrote: >> >>> >>> >>> On Sun, May 12, 2019 at 3:56 PM Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Sun, May 12, 2019 at 6:33 AM Julian Taylor < >>>> jtaylor.debian at googlemail.com> wrote: >>>> >>>>> On 12.05.19 14:58, Charles R Harris wrote: >>>>> > Hi All, >>>>> > >>>>> > NumPy currently distinguishes between release and development >>>>> versions >>>>> > when running tests. Is there a good reason to continue this >>>>> practice? I >>>>> > ask, because with the last pytest release it would be convenient to >>>>> > always include `pytest.ini ` so that we can register markers. The >>>>> > presence of `pytest.ini` is how we distinguish betweendevelopment >>>>> from >>>>> > release for testing purposes. >>>>> > >>>>> >>>>> One difference between development and release builds was that in >>>>> development releases numpy.testing throws errors on floating point >>>>> exceptions while the release version it did not. >>>>> >>>> >>> I'd prefer to keep this behavior. It's not clear to me if the proposal >>> is to change the behavior or not. >>> >>> If that is still the case removing the distinction could require a lot >>>>> of changes in upstream test suites that are not regularly run against >>>>> development builds. >>>>> >>>>> The motivation is not quite clear to me, can you please elaborate on >>>>> what you want to do. >>>>> >>>> >>>> NumPy pytest testing is NumPy specific and not used downstream like our >>>> nose testing framework was, so I don't see why that should affect other >>>> projects. What motivates this question is that the new version of pytest >>>> released yesterday raises warnings for non-registered markers, >>>> `pytest.mark.slow` in particular, and that was causing CI failures. The >>>> easiest way to register a mark is using `pytest.ini`, but we currently >>>> don't include that in released wheels, only in source releases. >>>> >>> >>> Adding a pytest.ini file in wheels should be perfectly fine I think, >>> >>> >> It is the absence of pytest.ini that makes it a release, for that is the >> file that turns warnings into errors. >> > > Why don't we just always keep pytest.ini, and move the settings for > warnings-to-errors to runtests.py or tools/travis_test.sh? > > It's not important how we check, all we need is some mechanism to prevent > new warnings creeping in. Same as we set -Wall for building in CI. > > I just checked that current wheels show the warnings when `numpy.test()` is run with latest pytest. 
However, moving the `pytest.ini` file into the `numpy` directory is tricky, as we need to tell pytest where to find the installed file (-c option). The simplest short time solution is to ignore the warning, but long term I'm worried that the warning will become an error as pytest is doing this because they want to clean up the their implementation. Ideally there would be a better way to register the marks. Note that we currently deal with the missing `pytest.ini` in wheels by duplicating the warnings filters in `_pytesttester.py` using the pytest command line. Adding the `error` filter in the command line worries me, as I'm not sure how the priorities work out, the command line is now appended to the contents of `pytest.ini`. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon May 13 13:42:22 2019 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 May 2019 11:42:22 -0600 Subject: [Numpy-discussion] Release vs development testing. In-Reply-To: References: Message-ID: On Mon, May 13, 2019 at 11:25 AM Charles R Harris wrote: > > > On Mon, May 13, 2019 at 3:02 AM Ralf Gommers > wrote: > >> >> >> On Sun, May 12, 2019 at 4:41 PM Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, May 12, 2019 at 7:27 AM Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Sun, May 12, 2019 at 3:56 PM Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> On Sun, May 12, 2019 at 6:33 AM Julian Taylor < >>>>> jtaylor.debian at googlemail.com> wrote: >>>>> >>>>>> On 12.05.19 14:58, Charles R Harris wrote: >>>>>> > Hi All, >>>>>> > >>>>>> > NumPy currently distinguishes between release and development >>>>>> versions >>>>>> > when running tests. Is there a good reason to continue this >>>>>> practice? I >>>>>> > ask, because with the last pytest release it would be convenient to >>>>>> > always include `pytest.ini ` so that we can register markers. The >>>>>> > presence of `pytest.ini` is how we distinguish betweendevelopment >>>>>> from >>>>>> > release for testing purposes. >>>>>> > >>>>>> >>>>>> One difference between development and release builds was that in >>>>>> development releases numpy.testing throws errors on floating point >>>>>> exceptions while the release version it did not. >>>>>> >>>>> >>>> I'd prefer to keep this behavior. It's not clear to me if the proposal >>>> is to change the behavior or not. >>>> >>>> If that is still the case removing the distinction could require a lot >>>>>> of changes in upstream test suites that are not regularly run against >>>>>> development builds. >>>>>> >>>>>> The motivation is not quite clear to me, can you please elaborate on >>>>>> what you want to do. >>>>>> >>>>> >>>>> NumPy pytest testing is NumPy specific and not used downstream like >>>>> our nose testing framework was, so I don't see why that should affect other >>>>> projects. What motivates this question is that the new version of pytest >>>>> released yesterday raises warnings for non-registered markers, >>>>> `pytest.mark.slow` in particular, and that was causing CI failures. The >>>>> easiest way to register a mark is using `pytest.ini`, but we currently >>>>> don't include that in released wheels, only in source releases. >>>>> >>>> >>>> Adding a pytest.ini file in wheels should be perfectly fine I think, >>>> >>>> >>> It is the absence of pytest.ini that makes it a release, for that is the >>> file that turns warnings into errors. 
>>> >> >> Why don't we just always keep pytest.ini, and move the settings for >> warnings-to-errors to runtests.py or tools/travis_test.sh? >> >> It's not important how we check, all we need is some mechanism to prevent >> new warnings creeping in. Same as we set -Wall for building in CI. >> >> > I just checked that current wheels show the warnings when `numpy.test()` > is run with latest pytest. However, moving the `pytest.ini` file into the > `numpy` directory is tricky, as we need to tell pytest where to find the > installed file (-c option). The simplest short time solution is to ignore > the warning, but long term I'm worried that the warning will become an > error as pytest is doing this because they want to clean up the their > implementation. Ideally there would be a better way to register the marks. > Note that we currently deal with the missing `pytest.ini` in wheels by > duplicating the warnings filters in `_pytesttester.py` using the pytest > command line. > > Adding the `error` filter in the command line worries me, as I'm not sure > how the priorities work out, the command line is now appended to the > contents of `pytest.ini`. > > Note that this is also a problem when numpy is installed with `setup.py install`. Things work now for runtests because `pytest.ini` is in the directory from which the tests are run. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon May 13 13:49:31 2019 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 13 May 2019 10:49:31 -0700 Subject: [Numpy-discussion] Release vs development testing. In-Reply-To: References: Message-ID: On Mon, May 13, 2019, 10:26 Charles R Harris wrote: > I just checked that current wheels show the warnings when `numpy.test()` > is run with latest pytest. However, moving the `pytest.ini` file into the > `numpy` directory is tricky, as we need to tell pytest where to find the > installed file (-c option). The simplest short time solution is to ignore > the warning, but long term I'm worried that the warning will become an > error as pytest is doing this because they want to clean up the their > implementation. Ideally there would be a better way to register the marks. > I bet if you open an issue on pytest explaining that numpy needs to either use unregistered marks or else have some programmatic non-pytest.ini-based way to register marks, then they'll figure something out. I think they added a warning because they're hoping to flush out these kinds of problems so they can fix them before they do the cleanup. -n > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon May 13 15:37:07 2019 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 May 2019 13:37:07 -0600 Subject: [Numpy-discussion] Release vs development testing. In-Reply-To: References: Message-ID: On Mon, May 13, 2019 at 11:50 AM Nathaniel Smith wrote: > On Mon, May 13, 2019, 10:26 Charles R Harris > wrote: > >> I just checked that current wheels show the warnings when `numpy.test()` >> is run with latest pytest. However, moving the `pytest.ini` file into the >> `numpy` directory is tricky, as we need to tell pytest where to find the >> installed file (-c option). The simplest short time solution is to ignore >> the warning, but long term I'm worried that the warning will become an >> error as pytest is doing this because they want to clean up the their >> implementation. 
Ideally there would be a better way to register the marks.
>>
>
> I bet if you open an issue on pytest explaining that numpy needs to either
> use unregistered marks or else have some programmatic non-pytest.ini-based
> way to register marks, then they'll figure something out. I think they
> added a warning because they're hoping to flush out these kinds of problems
> so they can fix them before they do the cleanup.
>

Turns out that markers can be registered in conftest.py, so that fixes the
problem for NumPy. SciPy will need this fix also.

Chuck

From tyler.je.reddy at gmail.com Tue May 14 15:44:17 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Tue, 14 May 2019 12:44:17 -0700
Subject: [Numpy-discussion] NumPy Community Meeting -- May 15/ 2019
Message-ID:

Hi,

There will be a community meeting at 12 pm Pacific Time on May 15/ 2019.
Anyone is free to join and edit the work in progress meeting notes:
https://hackmd.io/M-ef_Fu5QOOitACnyoO0kQ?view

Best wishes,
Tyler

From chris.barker at noaa.gov Tue May 14 19:36:32 2019
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 14 May 2019 16:36:32 -0700
Subject: [Numpy-discussion] Style guide for numpy code?
In-Reply-To: References: <9f50cd94-ef12-efc4-62fa-b1e1af9e9c1d@physics.ucf.edu>
Message-ID:

Thanks Joe,

Looks like a good list, though I personally would not recommend that
students pick their own style. I tell my students (general purpose Python,
not Numerical work per se):

If your organization has a style guide, use that. If it doesn't, use PEP8.

In your case, you ARE the organization -- you might consider defining a
style. But I'll read over this -- you have some add-ons and deviations
from PEP8 that make sense for computational work.

-Chris

On May 10, 2019, at 12:30 AM, Evgeni Burovski wrote:

Hi Joe,

Thanks for sharing!
I'm going to use your handout as a base for my numerical computing
classes, (with an appropriate citation, of course :-)).

On Thu, 9 May 2019 at 21:19, Joe Harrington wrote:

> I have a handout for my PHZ 3150 Introduction to Numerical Computing
> course that includes some rules:
>
> (a) All integer-valued floating-point numbers should have decimal points
> after them. For example, if you have a time of 10 sec, do not use
>
> y = np.e**10 # sec
>
> use
>
> y = np.e**10. # sec
>
> instead. For example, an item count is always an integer, but a distance
> is always a float. A decimal in the range (-1,1) must always have a zero
> before the decimal point, for readability:
>
> x = 0.23 # Right!
>
> x = .23 # WRONG
>
> The purpose of this one is simply to build the decimal-point habit. In
> Python it's less of an issue now, but sometimes code is translated, and
> integer division is still out there. For that reason, in other languages,
> it may be desirable to use a decimal point even for counts, unless integer
> division is wanted. Make a comment whenever you intend integer division
> and the language uses the same symbol (/) for both kinds of division.
>
> (b) Use spaces around binary operations and relations (=<>+-*/). Put a
> space after ','.
> Do not put space around '=' in keyword arguments, or around ' ** '.
>
> (c) Do not put plt.show() in your homework file! You may put it in a
> comment if you like, but it is not necessary. Just save the plot. If you
> say
>
> plt.ion()
>
> plots will automatically show while you are working.
> > (d) Use: > > import matplotlib.pyplot as plt > > NOT: > > import matplotlib.pylab as plt > > (e) Keep lines to 80 characters, max, except in rare cases that are well > justified, such as > very long strings. If you make comments on the same line as code, keep > them short or > break them over more than a line: > > code = code2 # set code equal to code2 > > # Longer comment requiring much more space because > # I'm explaining something complicated. > code = code2 > > code = code2 # Another way to do a very long comment, > # like this one, which runs over more than > # one line. > > (f) Keep blocks of similar lines internally lined up on decimals, > comments, and = signs. This makes them easier to read and verify. There > will be some cases when this is impractical. Use your judgment (you're not > a computer, you control the computer!): > > x = 1. # this is a comment > y = 378.2345 # here's another > fred = chuck # note how the decimals, = signs, and > # comments line up nicely... > alacazamshmazooboloid = 2721 # but not always! > > (g) Put the units and sources of all values in comments: > > t_planet = 523. # K, Smith and Jones (2016, ApJ 234, 22) > > (h) I don't mean to start a religious war, but I emphasize the alignment > of similar adjacent code lines to make differences pop out and reduce the > likelihood of bugs. For example, it is much easier to verify the > correctness of: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_alt = 3 * x + 3 * 8. * anotshortvar - 5. * np.exp(np.pi * omega * t) > > than: > > a = 3 * x + 3 * 8. * short - 5. * np.exp(np.pi * omega * t) > a_altvarname = 3 * x + 3*9*anotshortvar - 5. * np.exp(np.pi * omega * i) > > (i) Assign values to meaningful variables, and use them in formulae and > functions: > > ny = 512 > nx = 512 > image = np.zeros((ny, nx)) > expr1 = ny * 3 > expr2 = nx * 4 > > Otherwise, later on when you upgrade to 2560x1440 arrays, you won't know > which of the 512s are in the x direction and which are in the y direction. > Or, the student you (now a senior researcher) assign to code the upgrade > won't! Also, it reduces bugs arising from the order of arguments to > functions if the args have meaningful names. This is not to say that you > should assign all numbers to functions. This is fine: > > circ = 2 * np.pi * r > > (j) All functions assigned for grading must have full docstrings in > numpy's format, as well as internal comments. Utility functions not > requested in the assignment and that the user will never see can have > reduced docstrings if the functions are simple and obvious, but at least > give the one-line summary. > > (k) If you modify an existing function, you must either make a Git entry > or, if it is not under revision control, include a Revision History section > in your docstring and record your name, the date, the version number, your > email, and the nature of the change you made. > > (l) Choose variable names that are meaningful and consistent in style. > Document your style either at the head of a module or in a separate text > file for the project. For example, if you use CamelCaps with initial > capital, say that. If you reserve initial capitals for classes, say that. > If you use underscores for variable subscripts and camelCaps for the base > variables, say that. If you accept some other style and build on that, say > that. There are too many good reasons to have such styles for only one to > be the community standard. 
> If certain kinds of values should get the same variable or base variable,
> such as fundamental constants or things like amplitudes, say that.
>
> (m) It's best if variables that will appear in formulae are short, so more
> terms can fit in one 80-character line.
>
> Overall, having and following a style makes code easier to read. And, as
> an added bonus, if you take care to be consistent, you will write slower,
> view your code more times, and catch more bugs as you write them. Thus,
> for codes of any significant size, writing pedantically commented and
> aligned code is almost always faster than blast coding, if you include
> debugging time.
>
> Did you catch both bugs in item h?
>
> --jh--
>
> On 5/9/19 11:25 AM, Chris Barker - NOAA Federal wrote:
>
> Do any of you know of a style guide for computational / numpy code?
>
> I don't mean code that will go into numpy itself, but rather, users' code
> that uses numpy (and scipy, and...)
>
> I know about (am a proponent of) PEP8, but it doesn't address the unique
> needs of scientific programming.
>
> This is mostly about variable names. In scientific code, we often want:
>
> - variable names that match the math notation -- so single-character
> names, maybe upper or lower case to mean different things (in ocean wave
> mechanics, often 'h' is the water depth, and 'H' is the wave height)
>
> - to distinguish between scalar, vector, and matrix values -- often
> UpperCase means an array or matrix, for instance.
>
> But despite (or because of) these unique needs, a style guide would be
> really helpful.
>
> Anyone have one? Or even any notes on what you do yourself?
>
> Thanks,
> -CHB
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R (206) 526-6959 voice
> 7600 Sand Point Way NE (206) 526-6329 fax
> Seattle, WA 98115 (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
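Here is the short example promised in item (j) above: a minimal sketch of a
NumPy-format docstring. The function, its name, and its parameters are
invented purely for illustration:

    import numpy as np

    def scale(image, factor=1.):
        """
        Multiply an image by a constant factor.

        Parameters
        ----------
        image : ndarray
            Input image.
        factor : float, optional
            Multiplicative scale factor.  Default is 1.

        Returns
        -------
        scaled : ndarray
            The rescaled image, with the same shape as ``image``.
        """
        # Coerce to an array so list input also works.
        return np.asarray(image) * factor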
From einstein.edison at gmail.com  Wed May 15 12:46:37 2019
From: einstein.edison at gmail.com (Hameer Abbasi)
Date: Wed, 15 May 2019 18:46:37 +0200
Subject: [Numpy-discussion] New Proposed Time for the NumPy Community Meeting
Message-ID: <8ea76d5b-3468-4fe9-86b6-8873a22e9471@Canary>

Hello everyone!

I'd like to propose that we shift the time of the NumPy community meeting
to one hour earlier starting next week.

The reason is twofold: One, we have an Indian student who wishes to join
the meetings, and so it'd be nice to have a more reasonable time for her
timezone.

Second, my fast for the month of Ramadan breaks at 9:07 PM, and will only
get later. I'd hate to eat during the meeting.

Best Regards,
Hameer Abbasi

From ralf.gommers at gmail.com  Wed May 15 18:02:38 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 16 May 2019 00:02:38 +0200
Subject: [Numpy-discussion] New Proposed Time for the NumPy Community Meeting
In-Reply-To: <8ea76d5b-3468-4fe9-86b6-8873a22e9471@Canary>
References: <8ea76d5b-3468-4fe9-86b6-8873a22e9471@Canary>
Message-ID: 

On Wed, May 15, 2019 at 6:47 PM Hameer Abbasi wrote:

> Hello everyone!
>
> I'd like to propose that we shift the time of the NumPy community meeting
> to one hour earlier starting next week.
>
> The reason is twofold: One, we have an Indian student who wishes to join
> the meetings, and so it'd be nice to have a more reasonable time for her
> timezone.
>
> Second, my fast for the month of Ramadan breaks at 9:07 PM, and will only
> get later. I'd hate to eat during the meeting.
>

We discussed this in the call, everyone was in favor. It helps two people
who want to attend, and doesn't seem to shift things to too early in the
morning for anyone else. Thanks for bringing it up, Hameer.

Cheers,
Ralf

From mikofski at berkeley.edu  Fri May 17 02:01:17 2019
From: mikofski at berkeley.edu (Mark Mikofski)
Date: Thu, 16 May 2019 23:01:17 -0700
Subject: [Numpy-discussion] [ANN] pvlib-python v0.6.3: predicting power for solar energy
Message-ID: 

pvlib-0.6.3 has been released

What's New:
https://pvlib-python.readthedocs.io/en/stable/whatsnew.html#v0-6-3-may-15-2019

PyPI: https://pypi.org/project/pvlib/

Read the Docs: https://pvlib-python.readthedocs.io/en/latest/

GitHub: https://github.com/pvlib/pvlib-python

--
Mark Mikofski, PhD (2005)
*Fiat Lux*

From mikofski at berkeley.edu  Fri May 17 02:36:09 2019
From: mikofski at berkeley.edu (Mark Mikofski)
Date: Thu, 16 May 2019 23:36:09 -0700
Subject: [Numpy-discussion] [ANN] PVMismatch v4.1: Python tools for photovoltaic IV curve modeling
Message-ID: 

PVMismatch-4.1 has been released

Release Notes: https://github.com/SunPower/PVMismatch/releases/tag/v4.1

PyPI: https://pypi.org/project/pvmismatch/

Docs: https://sunpower.github.io/PVMismatch/

GitHub: https://github.com/SunPower/PVMismatch

--
Mark Mikofski, PhD (2005)
*Fiat Lux*

From dashohoxha at gmail.com  Fri May 17 12:57:36 2019
From: dashohoxha at gmail.com (Dashamir Hoxha)
Date: Fri, 17 May 2019 18:57:36 +0200
Subject: [Numpy-discussion] GSoD - Technical Writter
In-Reply-To: 
References: 
Message-ID: 

On Fri, May 17, 2019 at 2:54 PM Ralf Gommers wrote:

> Hi Dashamir,
>
> Thank you for your email and interest in NumPy and SciPy. I'm excited
> about this program and opportunity to work with a technical writer. Please
> rest assured that we do not assume or expect you to be familiar with the
> project already. The scipy.stats idea we included in case someone would be
> familiar with it and wanted to work at the individual module level.
> Personally I think the most important and impactful thing though is to
> shape the structure of our documentation content. For that it's not
> necessarily an advantage to know numpy or scipy well - fresh eyes can be
> helpful. And in general, I'd say we have lots of people that can provide
> pieces of content; the ability to create the right framework/structure to
> effectively place and solicit that content is what we have been missing.
>
> If you look at the NumPy and SciPy documentation, you will see that the
> reference guides (which are aimed at experienced users) are very large.
> The user guides (for beginning users) and the overall structuring could
> really benefit from a good technical writer.
>

Thanks for your encouraging message, Ralf.
Something I notice immediately is that the interface of the docs looks a
bit outdated, and it might benefit from an update (or from being replaced
with another template) to make it a bit more responsive. It is true that
when you program you usually work on a big screen, so a responsive web page
may not be an absolute requirement, but it would still be nice to be able
to read the docs from a tablet or smartphone. Unfortunately I am not yet
familiar with Sphinx, but I hope that it can be integrated with Jekyll or
Hugo, and then one of their templates can be used.

About the content of the User Guide etc., I don't see any obvious
improvement that is needed (maybe because I have not read it yet). One
thing that may help is making the code examples interactive, so that
readers can play with them and see how the results change. For example,
this may be useful:
https://github.com/RunestoneInteractive/RunestoneComponents

The two changes that I have suggested above seem more like engineering work
(improving the documentation infrastructure) than documentation work.

To make content that beginners can easily grasp, I think it should be
presented as a series of problems and their solutions. In other words,
don't show the users the features and their details; ask them to solve a
simple problem, and then show them how to solve it with NumPy/SciPy and its
features. This would make it more attractive, because people usually don't
like to read manuals from beginning to end. This is a job best done by
teachers for their students, keeping in mind the level of their students
and what they actually want them to learn. I have noticed that there are
already some lectures, books, and tutorials like this. This is creative
work, with a specific target audience in mind, so I can't pretend that I
can do something useful about it in a short time (2-3 months). But of
course the links to the existing resources can be made more visible and
reachable from the main page of the website.

Best regards,
Dashamir

From tyler.je.reddy at gmail.com  Fri May 17 18:36:14 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Fri, 17 May 2019 15:36:14 -0700
Subject: [Numpy-discussion] ANN: SciPy 1.3.0
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi all,

On behalf of the SciPy development team I'm pleased to announce
the release of SciPy 1.3.0.

Sources and binary wheels can be found at:
https://pypi.org/project/scipy/
and at:
https://github.com/scipy/scipy/releases/tag/v1.3.0

One of a few ways to install this release with pip:

pip install scipy==1.3.0

==========================
SciPy 1.3.0 Release Notes
==========================

SciPy 1.3.0 is the culmination of 5 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been some API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so,
run your code with ``python -Wd`` and check for ``DeprecationWarning`` s).
Our development attention will now shift to bug-fix releases on the
1.3.x branch, and on adding new features on the master branch.

This release requires Python 3.5+ and NumPy 1.13.3 or greater.
For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.

Highlights of this release
---------------------------------

- Three new ``stats`` functions, a rewrite of ``pearsonr``, and an exact
  computation of the Kolmogorov-Smirnov two-sample test
- A new Cython API for bounded scalar-function root-finders in
  `scipy.optimize`
- Substantial ``CSR`` and ``CSC`` sparse matrix indexing performance
  improvements
- Added support for interpolation of rotations with continuous angular
  rate and acceleration in ``RotationSpline``


New features
============

`scipy.interpolate` improvements
-------------------------------------------

A new class ``CubicHermiteSpline`` is introduced. It is a piecewise-cubic
interpolator which matches observed values and first derivatives. Existing
cubic interpolators ``CubicSpline``, ``PchipInterpolator`` and
``Akima1DInterpolator`` were made subclasses of ``CubicHermiteSpline``.

`scipy.io` improvements
--------------------------------

For the Attribute-Relation File Format (ARFF) `scipy.io.arff.loadarff`
now supports relational attributes.

`scipy.io.mmread` can now parse Matrix Market format files with empty lines.

`scipy.linalg` improvements
------------------------------------

Added wrappers for ``?syconv`` routines, which convert a symmetric matrix
given by a triangular matrix factorization into two matrices and vice versa.

`scipy.linalg.clarkson_woodruff_transform` now uses an algorithm that
leverages sparsity. This may provide a 60-90 percent speedup for dense input
matrices. Truly sparse input matrices should also benefit from the improved
sketch algorithm, which now correctly runs in ``O(nnz(A))`` time.

Added new functions to calculate symmetric Fiedler matrices and
Fiedler companion matrices, named `scipy.linalg.fiedler` and
`scipy.linalg.fiedler_companion`, respectively. These may be used
for root finding.

`scipy.ndimage` improvements
----------------------------------------

Gaussian filter performance may improve by an order of magnitude in
some cases, thanks to removal of a dependence on ``np.polynomial``. This
may impact `scipy.ndimage.gaussian_filter` for example.

`scipy.optimize` improvements
----------------------------------------

The `scipy.optimize.brute` minimizer gained a new keyword ``workers``, which
can be used to parallelize computation.

A Cython API for bounded scalar-function root-finders in `scipy.optimize`
is available in a new module `scipy.optimize.cython_optimize` via
``cimport``. This API may be used with ``nogil`` and ``prange`` to loop
over an array of function arguments to solve for an array of roots more
quickly than with pure Python.

``'interior-point'`` is now the default method for ``linprog``, and
``'interior-point'`` now uses SuiteSparse for sparse problems when the
required scikits (scikit-umfpack and scikit-sparse) are available.
On benchmark problems (gh-10026), execution time reductions by factors of
2-3 were typical. Also, a new ``method='revised simplex'`` has been added.
It is not as fast or robust as ``method='interior-point'``, but it is a
faster, more robust, and equally accurate substitute for the legacy
``method='simplex'``.

``differential_evolution`` can now use a ``Bounds`` class to specify the
bounds for the optimizing argument of a function (a short usage sketch
follows below).

`scipy.optimize.dual_annealing` performance improvements related to
vectorisation of some internal code.
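A short usage sketch of the ``Bounds`` support mentioned above. The objective
function and the bound values are invented for illustration; only the
``Bounds``-object form of the ``bounds`` argument is new in this release:

    import numpy as np
    from scipy.optimize import Bounds, differential_evolution

    def sphere(x):
        # Simple convex test function with its minimum at the origin.
        return np.sum(x ** 2)

    # Equivalent to the older list-of-pairs form bounds=[(-5, 5), (-5, 5)].
    bounds = Bounds(lb=[-5.0, -5.0], ub=[5.0, 5.0])

    result = differential_evolution(sphere, bounds, seed=1)
    print(result.x, result.fun)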
`scipy.signal` improvements
-------------------------------------

Two additional methods of discretization are now supported by
`scipy.signal.cont2discrete`: ``impulse`` and ``foh``.

`scipy.signal.firls` now uses faster solvers

`scipy.signal.detrend` now has a lower physical memory footprint in some
cases, which may be leveraged using the new ``overwrite_data`` keyword argument

`scipy.signal.firwin` ``pass_zero`` argument now accepts new string arguments
that allow specification of the desired filter type: ``'bandpass'``,
``'lowpass'``, ``'highpass'``, and ``'bandstop'`` (a short usage sketch
appears at the end of this section)

`scipy.signal.sosfilt` may have improved performance due to lower retention
of the global interpreter lock (GIL) in its algorithm

`scipy.sparse` improvements
--------------------------------------

A new keyword was added to ``csgraph.dijkstra`` that
allows users to query the shortest path to ANY of the passed in indices,
as opposed to the shortest path to EVERY passed index.

`scipy.sparse.linalg.lsmr` performance has been improved by roughly 10 percent
on large problems

Improved performance and reduced physical memory footprint of the algorithm
used by `scipy.sparse.linalg.lobpcg`

``CSR`` and ``CSC`` sparse matrix fancy indexing performance has been
improved substantially

`scipy.spatial` improvements
--------------------------------------

`scipy.spatial.ConvexHull` now has a ``good`` attribute that can be used
alongside the ``QGn`` Qhull options to determine which external facets of a
convex hull are visible from an external query point.

`scipy.spatial.cKDTree.query_ball_point` has been modernized to use some newer
Cython features, including GIL handling and exception translation. An issue
with ``return_sorted=True`` and scalar queries was fixed, and a new mode named
``return_length`` was added. ``return_length`` only computes the length of the
returned indices list instead of allocating the array every time.

`scipy.spatial.transform.RotationSpline` has been added to enable interpolation
of rotations with continuous angular rates and acceleration

`scipy.stats` improvements
------------------------------------

Added a new function to compute the Epps-Singleton test statistic,
`scipy.stats.epps_singleton_2samp`, which can be applied to continuous and
discrete distributions.

New functions `scipy.stats.median_absolute_deviation` and `scipy.stats.gstd`
(geometric standard deviation) were added. The `scipy.stats.combine_pvalues`
method now supports ``pearson``, ``tippett`` and ``mudholkar_george`` pvalue
combination methods.

The `scipy.stats.ortho_group` and `scipy.stats.special_ortho_group`
``rvs(dim)`` functions' algorithms were updated from a ``O(dim^4)``
implementation to a ``O(dim^3)`` one, which gives large speed improvements
for ``dim>100``.

A rewrite of `scipy.stats.pearsonr` to use a more robust algorithm,
provide meaningful exceptions and warnings on potentially pathological input,
and fix at least five separate reported issues in the original implementation.

Improved the precision of ``hypergeom.logcdf`` and ``hypergeom.logsf``.

Added exact computation for Kolmogorov-Smirnov (KS) two-sample test, replacing
the previously approximate computation for the two-sided test `stats.ks_2samp`.
Also added a one-sided, two-sample KS test, and a keyword ``alternative`` to
`stats.ks_2samp`.
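The ``pass_zero`` usage sketch promised above; the tap count, band edges,
and sample rate are arbitrary illustrations, not values from the release
notes:

    from scipy.signal import firwin

    # 101-tap band-pass FIR filter; band edges are in Hz because the
    # sampling rate is supplied via ``fs``.
    taps = firwin(101, [8000.0, 16000.0], pass_zero='bandpass', fs=48000.0)

    # Equivalent older spelling: pass_zero=False with a two-element cutoff.
    taps_old = firwin(101, [8000.0, 16000.0], pass_zero=False, fs=48000.0)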
Backwards incompatible changes
==============================

`scipy.interpolate` changes
------------------------------------

Functions from ``scipy.interpolate`` (``spleval``, ``spline``, ``splmake``,
and ``spltopp``) and functions from ``scipy.misc`` (``bytescale``,
``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``,
``imsave``, ``imshow``, ``toimage``) have been removed. The former set has
been deprecated since v0.19.0 and the latter has been deprecated since v1.0.0.
Similarly, aliases from ``scipy.misc`` (``comb``, ``factorial``,
``factorial2``, ``factorialk``, ``logsumexp``, ``pade``, ``info``, ``source``,
``who``) which have been deprecated since v1.0.0 are removed.
`SciPy documentation for v1.1.0 `__
can be used to track the new import locations for the relocated functions.

`scipy.linalg` changes
-----------------------------

For ``pinv``, ``pinv2``, and ``pinvh``, the default cutoff values are changed
for consistency (see the docs for the actual values).

`scipy.optimize` changes
---------------------------------

The default method for ``linprog`` is now ``'interior-point'``. The method's
robustness and speed come at a cost: solutions may not be accurate to
machine precision or correspond with a vertex of the polytope defined
by the constraints. To revert to the original simplex method,
include the argument ``method='simplex'``.

`scipy.stats` changes
----------------------------

Previously, ``ks_2samp(data1, data2)`` would run a two-sided test and return
the approximated p-value. The new signature, ``ks_2samp(data1, data2,
alternative="two-sided", method="auto")``, still runs the two-sided test by
default but returns the exact p-value for small samples and the approximated
value for large samples. ``method="asymp"`` would be equivalent to the
old version but ``auto`` is the better choice (a short usage sketch follows
the "Other changes" section below).

Other changes
=============

Our tutorial has been expanded with a new section on global optimizers.

There has been a rework of the ``stats.distributions`` tutorials.

`scipy.optimize` now correctly sets the convergence flag of the result to
``CONVERR``, a convergence error, for bounded scalar-function root-finders
if the maximum number of iterations has been exceeded, ``disp`` is false,
and ``full_output`` is true.

`scipy.optimize.curve_fit` no longer fails if ``xdata`` and ``ydata`` dtypes
differ; they are both now automatically cast to ``float64``.

`scipy.ndimage` functions including ``binary_erosion``, ``binary_closing``,
and ``binary_dilation`` now require an integer value for the number of
iterations, which alleviates a number of reported issues.

Fixed normal approximation in case ``zero_method == "pratt"`` in
`scipy.stats.wilcoxon`.

Fixes for incorrect probabilities, broadcasting issues and thread-safety
related to stats distributions setting member variables inside
``_argcheck()``.

`scipy.optimize.newton` now correctly raises a ``RuntimeError``, when default
arguments are used, in the case that a derivative of value zero is obtained,
which is a special case of failing to converge.

A draft toolchain roadmap is now available, laying out a compatibility plan
including Python versions, C standards, and NumPy versions.
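The ``ks_2samp`` usage sketch promised above. The sample data is invented for
illustration, and only the default mode plus the new ``alternative`` keyword
are exercised here:

    import numpy as np
    from scipy import stats

    rng = np.random.RandomState(12345)
    x = rng.normal(loc=0.0, size=100)
    y = rng.normal(loc=0.5, size=100)

    # Default two-sided test; small samples now receive an exact p-value.
    stat, pvalue = stats.ks_2samp(x, y)

    # Newly added one-sided variant.
    stat_less, pvalue_less = stats.ks_2samp(x, y, alternative='less')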
Authors
=======

* ananyashreyjain +
* ApamNapat +
* Scott Calabrese Barton +
* Christoph Baumgarten
* Peter Bell +
* Jacob Blomgren +
* Doctor Bob +
* Mana Borwornpadungkitti +
* Matthew Brett
* Evgeni Burovski
* CJ Carey
* Vega Theil Carstensen +
* Robert Cimrman
* Forrest Collman +
* Pietro Cottone +
* David +
* Idan David +
* Christoph Deil
* Dieter Werthmüller
* Conner DiPaolo +
* Dowon
* Michael Dunphy +
* Peter Andreas Entschev +
* Gökçen Eraslan +
* Johann Faouzi +
* Yu Feng
* Piotr Figiel +
* Matthew H Flamm
* Franz Forstmayr +
* Christoph Gohlke
* Richard Janis Goldschmidt +
* Ralf Gommers
* Lars Grueter
* Sylvain Gubian
* Matt Haberland
* Yaroslav Halchenko
* Charles Harris
* Lindsey Hiltner
* JakobStruye +
* He Jia +
* Jwink3101 +
* Greg Kiar +
* Julius Bier Kirkegaard
* John Kirkham +
* Thomas Kluyver
* Vladimir Korolev +
* Joseph Kuo +
* Michael Lamparski +
* Eric Larson
* Denis Laxalde
* Katrin Leinweber
* Jesse Livezey
* ludcila +
* Dhruv Madeka +
* Magnus +
* Nikolay Mayorov
* Mark Mikofski
* Jarrod Millman
* Markus Mohrhard +
* Eric Moore
* Andrew Nelson
* Aki Nishimura +
* OGordon100 +
* Petar Mlinarić +
* Stefan Peterson
* Matti Picus +
* Ilhan Polat
* Aaron Pries +
* Matteo Ravasi +
* Tyler Reddy
* Ashton Reimer +
* Joscha Reimer
* rfezzani +
* Riadh +
* Lucas Roberts
* Heshy Roskes +
* Mirko Scholz +
* Taylor D. Scott +
* Srikrishna Sekhar +
* Kevin Sheppard +
* Sourav Singh
* skjerns +
* Kai Striega
* SyedSaifAliAlvi +
* Gopi Manohar T +
* Albert Thomas +
* Timon +
* Paul van Mulbregt
* Jacob Vanderplas
* Daniel Vargas +
* Pauli Virtanen
* VNMabus +
* Stefan van der Walt
* Warren Weckesser
* Josh Wilson
* Nate Yoder +
* Roman Yurchak

A total of 97 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.

Issues closed for 1.3.0
-------------------------------

* `#1320 `__: scipy.stats.distribution: problem with self.a, self.b if they...
* `#2002 `__: members set in scipy.stats.distributions.##._argcheck (Trac #1477)
* `#2823 `__: distribution methods add tmp
* `#3220 `__: Scipy.opimize.fmin_powell direc argument syntax unclear
* `#3728 `__: scipy.stats.pearsonr: possible bug with zero variance input
* `#6805 `__: error-in-scipy-wilcoxon-signed-rank-test-for-equal-series
* `#6873 `__: 'stats.boxcox' return all same values
* `#7117 `__: Warn users when using float32 input data to curve_fit and friends
* `#7632 `__: it's not possible to tell the \`optimize.least_squares\` solver...
* `#7730 `__: stats.pearsonr: Potential division by zero for dataset of length...
* `#7933 `__: stats.truncnorm fails when providing values outside truncation...
* `#8033 `__: Add standard filter types to firwin to set pass_zero intuitively...
* `#8600 `__: lfilter.c.src zfill has erroneous header
* `#8692 `__: Non-negative values of \`stats.hypergeom.logcdf\`
* `#8734 `__: Enable pip build isolation
* `#8861 `__: scipy.linalg.pinv gives wrong result while scipy.linalg.pinv2...
* `#8915 `__: need to fix macOS build against older numpy versions
* `#8980 `__: scipy.stats.pearsonr overflows with high values of x and y
* `#9226 `__: BUG: signal: SystemError: ...
* `#9254 `__: BUG: root finders brentq, etc, flag says "converged" even if...
* `#9308 `__: Test failure - test_initial_constraints_as_canonical * `#9353 `__: scipy.stats.pearsonr returns r=1 if r_num/r_den = inf * `#9359 `__: Planck distribution is a geometric distribution * `#9381 `__: linregress should warn user in 2x2 array case * `#9406 `__: BUG: stats: In pearsonr, when r is nan, the p-value must also... * `#9437 `__: Cannot create sparse matrix from size_t indexes * `#9518 `__: Relational attributes in loadarff * `#9551 `__: BUG: scipy.optimize.newton says the root of x^2+1 is zero. * `#9564 `__: rv_sample accepts invalid input in scipy.stats * `#9565 `__: improper handling of multidimensional input in stats.rv_sample * `#9581 `__: Least-squares minimization fails silently when x and y data are... * `#9587 `__: Outdated value for scipy.constants.au * `#9611 `__: Overflow error with new way of p-value calculation in kendall... * `#9645 `__: \`scipy.stats.mode\` crashes with variable length arrays (\`dtype=object\`) * `#9734 `__: PendingDeprecationWarning for np.matrix with pytest * `#9786 `__: stats.ks_2samp() misleading for small data sets. * `#9790 `__: Excessive memory usage on detrend * `#9801 `__: dual_annealing does not set the success attribute in OptimizeResult * `#9833 `__: IntegrationWarning from mielke.stats() during build of html doc. * `#9835 `__: scipy.signal.firls seems to be inefficient versus MATLAB firls * `#9864 `__: Curve_fit does not check for empty input data if called with... * `#9869 `__: scipy.ndimage.label: Minor documentation issue * `#9882 `__: format at the wrong paranthesis in scipy.spatial.transform * `#9889 `__: scipy.signal.find_peaks minor documentation issue * `#9890 `__: Minkowski p-norm Issues in cKDTree For Values Other Than 2 Or... * `#9896 `__: scipy.stats._argcheck sets (not just checks) values * `#9905 `__: Memory error in ndimage.binary_erosion * `#9909 `__: binary_dilation/erosion/closing crashes when iterations is float * `#9919 `__: BUG: \`coo_matrix\` does not validate the \`shape\` argument. * `#9982 `__: lsq_linear hangs/infinite loop with 'trf' method * `#10003 `__: exponnorm.pdf returns NAN for small K * `#10011 `__: Incorrect check for invalid rotation plane in scipy.ndimage.rotate * `#10024 `__: Fails to build from git * `#10048 `__: DOC: scipy.optimize.root_scalar * `#10068 `__: DOC: scipy.interpolate.splev * `#10074 `__: BUG: \`expm\` calculates the wrong coefficients in the backward... Pull requests for 1.3.0 ------------------------------ * `#7827 `__: ENH: sparse: overhaul of sparse matrix indexing * `#8431 `__: ENH: Cython optimize zeros api * `#8743 `__: DOC: Updated linalg.pinv, .pinv2, .pinvh docstrings * `#8744 `__: DOC: added examples to remez docstring * `#9227 `__: DOC: update description of "direc" parameter of "fmin_powell" * `#9263 `__: ENH: optimize: added "revised simplex" for scipy.optimize.linprog * `#9325 `__: DEP: Remove deprecated functions for 1.3.0 * `#9330 `__: Add note on push and pull affine transformations * `#9423 `__: DOC: Clearly state how 2x2 input arrays are handled in stats.linregress * `#9428 `__: ENH: parallelised brute * `#9438 `__: BUG: Initialize coo matrix with size_t indexes * `#9455 `__: MAINT: Speed up get_(lapack,blas)_func * `#9465 `__: MAINT: Clean up optimize.zeros C solvers interfaces/code. 
* `#9477 `__: DOC: linalg: fix lstsq docstring on residues shape * `#9478 `__: DOC: Add docstring examples for rosen functions * `#9479 `__: DOC: Add docstring example for ai_zeros and bi_zeros * `#9480 `__: MAINT: linalg: lstsq clean up * `#9489 `__: DOC: roadmap update for changes over the last year. * `#9492 `__: MAINT: stats: Improve implementation of chi2 ppf method. * `#9497 `__: DOC: Improve docstrings sparse.linalg.isolve * `#9499 `__: DOC: Replace "Scipy" with "SciPy" in the .rst doc files for consistency. * `#9500 `__: DOC: Document the toolchain and its roadmap. * `#9505 `__: DOC: specify which definition of skewness is used * `#9511 `__: DEP: interpolate: remove deprecated interpolate_wrapper * `#9517 `__: BUG: improve error handling in stats.iqr * `#9522 `__: ENH: Add Fiedler and fiedler companion to special matrices * `#9526 `__: TST: relax precision requirements in signal.correlate tests * `#9529 `__: DOC: fix missing random seed in optimize.newton example * `#9533 `__: MAINT: Use list comprehension when possible * `#9537 `__: DOC: add a "big picture" roadmap * `#9538 `__: DOC: Replace "Numpy" with "NumPy" in .py, .rst and .txt doc files... * `#9539 `__: ENH: add two-sample test (Epps-Singleton) to scipy.stats * `#9559 `__: DOC: add section on global optimizers to tutorial * `#9561 `__: ENH: remove noprefix.h, change code appropriately * `#9562 `__: MAINT: stats: Rewrite pearsonr. * `#9563 `__: BUG: Minor bug fix Callback in linprog(method='simplex') * `#9568 `__: MAINT: raise runtime error for newton with zeroder if disp true,... * `#9570 `__: Correct docstring in show_options in optimize. Fixes #9407 * `#9573 `__: BUG fixes range of pk variable pre-check * `#9577 `__: TST: fix minor issue in a signal.stft test. * `#9580 `__: Included blank line before list - Fixes #8658 * `#9582 `__: MAINT: drop Python 2.7 and 3.4 * `#9588 `__: MAINT: update \`constants.astronomical_unit\` to new 2012 value. * `#9592 `__: TST: Add 32-bit testing to CI * `#9593 `__: DOC: Replace cumulative density with cumulative distribution * `#9596 `__: TST: remove VC 9.0 from Azure CI * `#9599 `__: Hyperlink DOI to preferred resolver * `#9601 `__: DEV: try to limit GC memory use on PyPy * `#9603 `__: MAINT: improve logcdf and logsf of hypergeometric distribution * `#9605 `__: Reference to pylops in LinearOperator notes and ARPACK example * `#9617 `__: TST: reduce max memory usage for sparse.linalg.lgmres test * `#9619 `__: FIX: Sparse matrix addition/subtraction eliminates explicit zeros * `#9621 `__: bugfix in rv_sample in scipy.stats * `#9622 `__: MAINT: Raise error in directed_hausdorff distance * `#9623 `__: DOC: Build docs with warnings as errors * `#9625 `__: Return the number of calls to 'hessp' (not just 'hess') in trust... * `#9627 `__: BUG: ignore empty lines in mmio * `#9637 `__: Function to calculate the MAD of an array * `#9646 `__: BUG: stats: mode for objects w/ndim > 1 * `#9648 `__: Add \`stats.contingency\` to refguide-check * `#9650 `__: ENH: many lobpcg() algorithm improvements * `#9652 `__: Move misc.doccer to _lib.doccer * `#9660 `__: ENH: add pearson, tippett, and mudholkar-george to combine_pvalues * `#9661 `__: BUG: Fix ksone right-hand endpoint, documentation and tests. 
* `#9664 `__: ENH: adding multi-target dijsktra performance enhancement * `#9670 `__: MAINT: link planck and geometric distribution in scipy.stats * `#9676 `__: ENH: optimize: change default linprog method to interior-point * `#9685 `__: Added reference to ndimage.filters.median_filter * `#9705 `__: Fix coefficients in expm helper function * `#9711 `__: Release the GIL during sosfilt processing for simple types * `#9721 `__: ENH: Convexhull visiblefacets * `#9723 `__: BLD: Modify rv_generic._construct_doc to print out failing distribution... * `#9726 `__: BUG: Fix small issues with \`signal.lfilter' * `#9729 `__: BUG: Typecheck iterations for binary image operations * `#9730 `__: ENH: reduce sizeof(NI_WatershedElement) by 20% * `#9731 `__: ENH: remove suspicious sequence of type castings * `#9739 `__: BUG: qr_updates fails if u is exactly in span Q * `#9749 `__: BUG: MapWrapper.__exit__ should terminate * `#9753 `__: ENH: Added exact computation for Kolmogorov-Smirnov two-sample... * `#9755 `__: DOC: Added example for signal.impulse, copied from impulse2 * `#9756 `__: DOC: Added docstring example for iirdesign * `#9757 `__: DOC: Added examples for step functions * `#9759 `__: ENH: Allow pass_zero to act like btype * `#9760 `__: DOC: Added docstring for lp2bs * `#9761 `__: DOC: Added docstring and example for lp2bp * `#9764 `__: BUG: Catch internal warnings for matrix * `#9766 `__: ENH: Speed up _gaussian_kernel1d by removing dependence on np.polynomial * `#9769 `__: BUG: Fix Cubic Spline Read Only issues * `#9773 `__: DOC: Several docstrings * `#9774 `__: TST: bump Azure CI OpenBLAS version to match wheels * `#9775 `__: DOC: Improve clarity of cov_x documentation for scipy.optimize.leastsq * `#9779 `__: ENH: dual_annealing vectorise visit_fn * `#9788 `__: TST, BUG: f2py-related issues with NumPy < 1.14.0 * `#9791 `__: BUG: fix amax constraint not enforced in scalar_search_wolfe2 * `#9792 `__: ENH: Allow inplace copying in place in "detrend" function * `#9795 `__: DOC: Fix/update docstring for dstn and dst * `#9796 `__: MAINT: Allow None tolerances in least_squares * `#9798 `__: BUG: fixes abort trap 6 error in scipy issue 9785 in unit tests * `#9807 `__: MAINT: improve doc and add alternative keyword to wilcoxon in... * `#9808 `__: Fix PPoly integrate and test for CubicSpline * `#9810 `__: ENH: Add the geometric standard deviation function * `#9811 `__: MAINT: remove invalid derphi default None value in scalar_search_wolfe2 * `#9813 `__: Adapt hamming distance in C to support weights * `#9817 `__: DOC: Copy solver description to solver modules * `#9829 `__: ENH: Add FOH and equivalent impulse response discretizations... * `#9831 `__: ENH: Implement RotationSpline * `#9834 `__: DOC: Change mielke distribution default parameters to ensure... * `#9838 `__: ENH: Use faster solvers for firls * `#9854 `__: ENH: loadarff now supports relational attributes. * `#9856 `__: integrate.bvp - improve handling of nonlinear boundary conditions * `#9862 `__: TST: reduce Appveyor CI load * `#9874 `__: DOC: Update requirements in release notes * `#9883 `__: BUG: fixed parenthesis in spatial.rotation * `#9884 `__: ENH: Use Sparsity in Clarkson-Woodruff Sketch * `#9888 `__: MAINT: Replace NumPy aliased functions * `#9892 `__: BUG: Fix 9890 query_ball_point returns wrong result when p is... * `#9893 `__: BUG: curve_fit doesn't check for empty input if called with bounds * `#9894 `__: scipy.signal.find_peaks documentation error * `#9898 `__: BUG: Set success attribute in OptimizeResult. 
See #9801 * `#9900 `__: BUG: Restrict rv_generic._argcheck() and its overrides from setting... * `#9906 `__: fixed a bug in kde logpdf * `#9911 `__: DOC: replace example for "np.select" with the one from numpy... * `#9912 `__: BF(DOC): point to numpy.select instead of plain (python) .select * `#9914 `__: DOC: change ValueError message in _validate_pad of signaltools. * `#9915 `__: cKDTree query_ball_point improvements * `#9918 `__: Update ckdtree.pyx with boxsize argument in docstring * `#9920 `__: BUG: sparse: Validate explicit shape if given with dense argument... * `#9924 `__: BLD: add back pyproject.toml * `#9931 `__: Fix empty constraint * `#9935 `__: DOC: fix references for stats.f_oneway * `#9936 `__: Revert gh-9619: "FIX: Sparse matrix addition/subtraction eliminates... * `#9937 `__: MAINT: fix PEP8 issues and update to pycodestyle 2.5.0 * `#9939 `__: DOC: correct \`structure\` description in \`ndimage.label\` docstring * `#9940 `__: MAINT: remove extraneous distutils copies * `#9945 `__: ENH: differential_evolution can use Bounds object * `#9949 `__: Added 'std' to add doctstrings since it is a \`known_stats\`... * `#9953 `__: DOC: Documentation cleanup for stats tutorials. * `#9962 `__: __repr__ for Bounds * `#9971 `__: ENH: Improve performance of lsmr * `#9987 `__: CI: pin Sphinx version to 1.8.5 * `#9990 `__: ENH: constraint violation * `#9991 `__: BUG: Avoid inplace modification of input array in newton * `#9995 `__: MAINT: sparse.csgraph: Add cdef to stop build warning. * `#9996 `__: BUG: Make minimize_quadratic_1d work with infinite bounds correctly * `#10004 `__: BUG: Fix unbound local error in linprog - simplex. * `#10007 `__: BLD: fix Python 3.7 build with build isolation * `#10009 `__: BUG: Make sure that _binary_erosion only accepts an integer number... * `#10016 `__: Update link to airspeed-velocity * `#10017 `__: DOC: Update \`interpolate.LSQSphereBivariateSpline\` to include... * `#10018 `__: MAINT: special: Fix a few warnings that occur when compiling... * `#10019 `__: TST: Azure summarizes test failures * `#10021 `__: ENH: Introduce CubicHermiteSpline * `#10022 `__: BENCH: Increase cython version in asv to fix benchmark builds * `#10023 `__: BUG: Avoid exponnorm producing nan for small K values. * `#10025 `__: BUG: optimize: tweaked linprog status 4 error message * `#10026 `__: ENH: optimize: use SuiteSparse in linprog interior-point when... * `#10027 `__: MAINT: cluster: clean up the use of malloc() in the function... * `#10028 `__: Fix rotate invalid plane check * `#10040 `__: MAINT: fix pratt method of wilcox test in scipy.stats * `#10041 `__: MAINT: special: Fix a warning generated when building the AMOS... * `#10044 `__: DOC: fix up spatial.transform.Rotation docstrings * `#10047 `__: MAINT: interpolate: Fix a few build warnings. * `#10051 `__: Add project_urls to setup * `#10052 `__: don't set flag to "converged" if max iter exceeded * `#10054 `__: MAINT: signal: Fix a few build warnings and modernize some C... * `#10056 `__: BUG: Ensure factorial is not too large in kendaltau * `#10058 `__: Small speedup in samping from ortho and special_ortho groups * `#10059 `__: BUG: optimize: fix #10038 by increasing tol * `#10061 `__: BLD: DOC: make building docs easier by parsing python version. * `#10064 `__: ENH: Significant speedup for ortho and special ortho group * `#10065 `__: DOC: Reword parameter descriptions in \`optimize.root_scalar\` * `#10066 `__: BUG: signal: Fix error raised by savgol_coeffs when deriv > polyorder. 
* `#10067 `__: MAINT: Fix the cutoff value inconsistency for pinv2 and pinvh * `#10072 `__: BUG: stats: Fix boxcox_llf to avoid loss of precision. * `#10075 `__: ENH: Add wrappers for ?syconv routines * `#10076 `__: BUG: optimize: fix curve_fit for mixed float32/float64 input * `#10077 `__: DOC: Replace undefined \`k\` in \`interpolate.splev\` docstring * `#10079 `__: DOC: Fixed typo, rearranged some doc of stats.morestats.wilcoxon. * `#10080 `__: TST: install scikit-sparse for full TravisCI tests * `#10083 `__: Clean \`\`_clean_inputs\`\` in optimize.linprog * `#10088 `__: ENH: optimize: linprog test CHOLMOD/UMFPACK solvers when available * `#10090 `__: MAINT: Fix CubicSplinerInterpolator for pandas * `#10091 `__: MAINT: improve logcdf and logsf of hypergeometric distribution * `#10095 `__: MAINT: Clean \`\`_clean_inputs\`\` in linprog * `#10116 `__: MAINT: update scipy-sphinx-theme * `#10135 `__: BUG: fix linprog revised simplex docstring problem failure Checksums ========= MD5 ~~~ 209c50a628a624fc82535299f5913d65 scipy-1.3.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 54e6fdb6aacbcaeff6dc86fc736cf39a scipy-1.3.0-cp35-cp35m-manylinux1_i686.whl 752f9cae504e7ea06cd818fc74b829c0 scipy-1.3.0-cp35-cp35m-manylinux1_x86_64.whl c7a0ff2b530570feefa8102813fc6dd1 scipy-1.3.0-cp35-cp35m-win32.whl 1c53ccff157fe23b165e53fba87c37e0 scipy-1.3.0-cp35-cp35m-win_amd64.whl 6762dc85ef6fe357e5710c32451b29a2 scipy-1.3.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 03d9c756b5bc836194cd5d13cd73e3fe scipy-1.3.0-cp36-cp36m-manylinux1_i686.whl 1e5af3fade676e5a588d40d785e7ee4d scipy-1.3.0-cp36-cp36m-manylinux1_x86_64.whl fe130e4cb77078c6a886795bcf1fa66d scipy-1.3.0-cp36-cp36m-win32.whl f62f60ea0397b7aa9a90fb610fc54d33 scipy-1.3.0-cp36-cp36m-win_amd64.whl a1ec52b1b162bb7ae0d0ea76438e35ce scipy-1.3.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 260d182114edaed177c64d60776ceee6 scipy-1.3.0-cp37-cp37m-manylinux1_i686.whl 38c5a038504e03b503f7674b30218068 scipy-1.3.0-cp37-cp37m-manylinux1_x86_64.whl 2390fdb5a4330c54c2a5308afe959bb9 scipy-1.3.0-cp37-cp37m-win32.whl 452157882a9f180914906df9bbf9d7bf scipy-1.3.0-cp37-cp37m-win_amd64.whl c6876673adf7e9e6c0307beaca784ad2 scipy-1.3.0.tar.gz e7153c2eb276bc303699b75858db6276 scipy-1.3.0.tar.xz 16b9e6a0ea8bdcf2ea72fda5975a252c scipy-1.3.0.zip SHA256 ~~~~~~ 4907040f62b91c2e170359c3d36c000af783f0fa1516a83d6c1517cde0af5340 scipy-1.3.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 1db9f964ed9c52dc5bd6127f0dd90ac89791daa690a5665cc01eae185912e1ba scipy-1.3.0-cp35-cp35m-manylinux1_i686.whl adadeeae5500de0da2b9e8dd478520d0a9945b577b2198f2462555e68f58e7ef scipy-1.3.0-cp35-cp35m-manylinux1_x86_64.whl 03b1e0775edbe6a4c64effb05fff2ce1429b76d29d754aa5ee2d848b60033351 scipy-1.3.0-cp35-cp35m-win32.whl a7695a378c2ce402405ea37b12c7a338a8755e081869bd6b95858893ceb617ae scipy-1.3.0-cp35-cp35m-win_amd64.whl 826b9f5fbb7f908a13aa1efd4b7321e36992f5868d5d8311c7b40cf9b11ca0e7 scipy-1.3.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl b283a76a83fe463c9587a2c88003f800e08c3929dfbeba833b78260f9c209785 scipy-1.3.0-cp36-cp36m-manylinux1_i686.whl db61a640ca20f237317d27bc658c1fc54c7581ff7f6502d112922dc285bdabee scipy-1.3.0-cp36-cp36m-manylinux1_x86_64.whl 
409846be9d6bdcbd78b9e5afe2f64b2da5a923dd7c1cd0615ce589489533fdbb scipy-1.3.0-cp36-cp36m-win32.whl
c19a7389ab3cd712058a8c3c9ffd8d27a57f3d84b9c91a931f542682bb3d269d scipy-1.3.0-cp36-cp36m-win_amd64.whl
09d008237baabf52a5d4f5a6fcf9b3c03408f3f61a69c404472a16861a73917e scipy-1.3.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
a84c31e8409b420c3ca57fd30c7589378d6fdc8d155d866a7f8e6e80dec6fd06 scipy-1.3.0-cp37-cp37m-manylinux1_i686.whl
c5ea60ece0c0c1c849025bfc541b60a6751b491b6f11dd9ef37ab5b8c9041921 scipy-1.3.0-cp37-cp37m-manylinux1_x86_64.whl
6c0543f2fdd38dee631fb023c0f31c284a532d205590b393d72009c14847f5b1 scipy-1.3.0-cp37-cp37m-win32.whl
10325f0ffac2400b1ec09537b7e403419dcd25d9fee602a44e8a32119af9079e scipy-1.3.0-cp37-cp37m-win_amd64.whl
c3bb4bd2aca82fb498247deeac12265921fe231502a6bc6edea3ee7fe6c40a7a scipy-1.3.0.tar.gz
ae105c28c1fdb480bf22fd1b1392eeb8679f3c0c8917c87fca8aabf323918455 scipy-1.3.0.tar.xz
b711ec1567439a1abfac59321b73a40de70a93ace4a33be26001eb4b12206356 scipy-1.3.0.zip

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJc3vVHAAoJELD/41ZX0J71Dz4P+QFv7E3OIWcrTm6qjt9X4Mgi
6heershUpVhZATGj7Kpl+lPEBGthsnkT8MeS0/YW3ZKiDWnQoSWSIxBRnqoXup2D
CwjeXn4eKw7Z4G2A6MpKcdJe1xV1lY2Wi3MyfmkYnkO0be+NLMStrTS/1+JdM/Xg
fGt5KqE+QXqB3sEGGf3SXP8tnSS2ULbKNPAxSTL1twS0bEprOVspCtCQJd3Xm1Oi
g2+vWcIwH80KpphvZLl7F22FOI+birxn19CwNupMaN8IyW0RADUKOvYlWMamUA3K
iW6KUyHXolyjixAh4RDZUKg0hNUIbDpMBKslqY+Faz92RCCbxCx+TZGT6y+0chrp
ujE+jRfuXcSk5eykBIzYx3aLPkMH1aQ4ERCi1hxODkTlV22btlSam5diNAOmQeZz
MhQEmbtx5C9xEEHIrGpsuMHVGZfMm0/QaN23Wn/oRq4e7BsICHfZIoNMjW7/ohSv
cT0jKFHjzvS3gigT1c8EkzwtFweLp5gYGGUD4IiLOI898pvmns+DcW9coGkwrQ0E
0OqTjpIIEPCDpHNrgRqWK8RhvqvkiDTs3HbCaxsMOMkWzlPOnrM37frqUO53SVQH
SKwe1ic13dKh332CMPcqB5EslBKEx2juexwmqPhG3Wnsy2+dL40yi7TGJYh9Vytn
ByYnyWurPkhp4WJTN1OD
=xiYy
-----END PGP SIGNATURE-----

From warren.weckesser at gmail.com  Sat May 18 00:53:45 2019
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sat, 18 May 2019 00:53:45 -0400
Subject: [Numpy-discussion] ANN: SciPy 1.3.0
In-Reply-To: 
References: 
Message-ID: 

On 5/17/19, Tyler Reddy wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hi all,
>
> On behalf of the SciPy development team I'm pleased to announce
> the release of SciPy 1.3.0.

A big "thank you" to everyone who contributed to 1.3.0, and especially
to Tyler for managing the release so well. Congratulations!

Warren

> [... the rest of the quoted release announcement, identical to the
> notes above, snipped ...]
> * `#9592 `__: TST: Add 32-bit > testing to CI > * `#9593 `__: DOC: Replace > cumulative density with cumulative distribution > * `#9596 `__: TST: remove VC 9.0 > from Azure CI > * `#9599 `__: Hyperlink DOI to > preferred resolver > * `#9601 `__: DEV: try to limit > GC memory use on PyPy > * `#9603 `__: MAINT: improve > logcdf and logsf of hypergeometric distribution > * `#9605 `__: Reference to pylops > in LinearOperator notes and ARPACK example > * `#9617 `__: TST: reduce max > memory usage for sparse.linalg.lgmres test > * `#9619 `__: FIX: Sparse matrix > addition/subtraction eliminates explicit zeros > * `#9621 `__: bugfix in rv_sample > in scipy.stats > * `#9622 `__: MAINT: Raise error > in directed_hausdorff distance > * `#9623 `__: DOC: Build docs > with warnings as errors > * `#9625 `__: Return the number > of calls to 'hessp' (not just 'hess') in trust... > * `#9627 `__: BUG: ignore empty > lines in mmio > * `#9637 `__: Function to > calculate the MAD of an array > * `#9646 `__: BUG: stats: mode > for objects w/ndim > 1 > * `#9648 `__: Add > \`stats.contingency\` to refguide-check > * `#9650 `__: ENH: many lobpcg() > algorithm improvements > * `#9652 `__: Move misc.doccer to > _lib.doccer > * `#9660 `__: ENH: add pearson, > tippett, and mudholkar-george to combine_pvalues > * `#9661 `__: BUG: Fix ksone > right-hand endpoint, documentation and tests. > * `#9664 `__: ENH: adding > multi-target dijsktra performance enhancement > * `#9670 `__: MAINT: link planck > and geometric distribution in scipy.stats > * `#9676 `__: ENH: optimize: > change default linprog method to interior-point > * `#9685 `__: Added reference to > ndimage.filters.median_filter > * `#9705 `__: Fix coefficients in > expm helper function > * `#9711 `__: Release the GIL > during sosfilt processing for simple types > * `#9721 `__: ENH: Convexhull > visiblefacets > * `#9723 `__: BLD: Modify > rv_generic._construct_doc to print out failing distribution... > * `#9726 `__: BUG: Fix small > issues with \`signal.lfilter' > * `#9729 `__: BUG: Typecheck > iterations for binary image operations > * `#9730 `__: ENH: reduce > sizeof(NI_WatershedElement) by 20% > * `#9731 `__: ENH: remove > suspicious sequence of type castings > * `#9739 `__: BUG: qr_updates > fails if u is exactly in span Q > * `#9749 `__: BUG: > MapWrapper.__exit__ should terminate > * `#9753 `__: ENH: Added exact > computation for Kolmogorov-Smirnov two-sample... 
> * `#9755 `__: DOC: Added example > for signal.impulse, copied from impulse2 > * `#9756 `__: DOC: Added > docstring example for iirdesign > * `#9757 `__: DOC: Added examples > for step functions > * `#9759 `__: ENH: Allow > pass_zero to act like btype > * `#9760 `__: DOC: Added > docstring for lp2bs > * `#9761 `__: DOC: Added > docstring and example for lp2bp > * `#9764 `__: BUG: Catch internal > warnings for matrix > * `#9766 `__: ENH: Speed up > _gaussian_kernel1d by removing dependence on np.polynomial > * `#9769 `__: BUG: Fix Cubic > Spline Read Only issues > * `#9773 `__: DOC: Several > docstrings > * `#9774 `__: TST: bump Azure CI > OpenBLAS version to match wheels > * `#9775 `__: DOC: Improve > clarity of cov_x documentation for scipy.optimize.leastsq > * `#9779 `__: ENH: dual_annealing > vectorise visit_fn > * `#9788 `__: TST, BUG: > f2py-related issues with NumPy < 1.14.0 > * `#9791 `__: BUG: fix amax > constraint not enforced in scalar_search_wolfe2 > * `#9792 `__: ENH: Allow inplace > copying in place in "detrend" function > * `#9795 `__: DOC: Fix/update > docstring for dstn and dst > * `#9796 `__: MAINT: Allow None > tolerances in least_squares > * `#9798 `__: BUG: fixes abort > trap 6 error in scipy issue 9785 in unit tests > * `#9807 `__: MAINT: improve doc > and add alternative keyword to wilcoxon in... > * `#9808 `__: Fix PPoly integrate > and test for CubicSpline > * `#9810 `__: ENH: Add the > geometric standard deviation function > * `#9811 `__: MAINT: remove > invalid derphi default None value in scalar_search_wolfe2 > * `#9813 `__: Adapt hamming > distance in C to support weights > * `#9817 `__: DOC: Copy solver > description to solver modules > * `#9829 `__: ENH: Add FOH and > equivalent impulse response discretizations... > * `#9831 `__: ENH: Implement > RotationSpline > * `#9834 `__: DOC: Change mielke > distribution default parameters to ensure... > * `#9838 `__: ENH: Use faster > solvers for firls > * `#9854 `__: ENH: loadarff now > supports relational attributes. > * `#9856 `__: integrate.bvp - > improve handling of nonlinear boundary conditions > * `#9862 `__: TST: reduce > Appveyor CI load > * `#9874 `__: DOC: Update > requirements in release notes > * `#9883 `__: BUG: fixed > parenthesis in spatial.rotation > * `#9884 `__: ENH: Use Sparsity > in Clarkson-Woodruff Sketch > * `#9888 `__: MAINT: Replace > NumPy aliased functions > * `#9892 `__: BUG: Fix 9890 > query_ball_point returns wrong result when p is... > * `#9893 `__: BUG: curve_fit > doesn't check for empty input if called with bounds > * `#9894 `__: > scipy.signal.find_peaks documentation error > * `#9898 `__: BUG: Set success > attribute in OptimizeResult. See #9801 > * `#9900 `__: BUG: Restrict > rv_generic._argcheck() and its overrides from setting... > * `#9906 `__: fixed a bug in kde > logpdf > * `#9911 `__: DOC: replace > example for "np.select" with the one from numpy... > * `#9912 `__: BF(DOC): point to > numpy.select instead of plain (python) .select > * `#9914 `__: DOC: change > ValueError message in _validate_pad of signaltools. > * `#9915 `__: cKDTree > query_ball_point improvements > * `#9918 `__: Update ckdtree.pyx > with boxsize argument in docstring > * `#9920 `__: BUG: sparse: > Validate explicit shape if given with dense argument... > * `#9924 `__: BLD: add back > pyproject.toml > * `#9931 `__: Fix empty > constraint > * `#9935 `__: DOC: fix references > for stats.f_oneway > * `#9936 `__: Revert gh-9619: > "FIX: Sparse matrix addition/subtraction eliminates... 
> * `#9937 `__: MAINT: fix PEP8 > issues and update to pycodestyle 2.5.0 > * `#9939 `__: DOC: correct > \`structure\` description in \`ndimage.label\` docstring > * `#9940 `__: MAINT: remove > extraneous distutils copies > * `#9945 `__: ENH: > differential_evolution can use Bounds object > * `#9949 `__: Added 'std' to add > doctstrings since it is a \`known_stats\`... > * `#9953 `__: DOC: Documentation > cleanup for stats tutorials. > * `#9962 `__: __repr__ for Bounds > * `#9971 `__: ENH: Improve > performance of lsmr > * `#9987 `__: CI: pin Sphinx > version to 1.8.5 > * `#9990 `__: ENH: constraint > violation > * `#9991 `__: BUG: Avoid inplace > modification of input array in newton > * `#9995 `__: MAINT: > sparse.csgraph: Add cdef to stop build warning. > * `#9996 `__: BUG: Make > minimize_quadratic_1d work with infinite bounds correctly > * `#10004 `__: BUG: Fix unbound > local error in linprog - simplex. > * `#10007 `__: BLD: fix Python > 3.7 build with build isolation > * `#10009 `__: BUG: Make sure > that _binary_erosion only accepts an integer number... > * `#10016 `__: Update link to > airspeed-velocity > * `#10017 `__: DOC: Update > \`interpolate.LSQSphereBivariateSpline\` to include... > * `#10018 `__: MAINT: special: > Fix a few warnings that occur when compiling... > * `#10019 `__: TST: Azure > summarizes test failures > * `#10021 `__: ENH: Introduce > CubicHermiteSpline > * `#10022 `__: BENCH: Increase > cython version in asv to fix benchmark builds > * `#10023 `__: BUG: Avoid > exponnorm producing nan for small K values. > * `#10025 `__: BUG: optimize: > tweaked linprog status 4 error message > * `#10026 `__: ENH: optimize: > use SuiteSparse in linprog interior-point when... > * `#10027 `__: MAINT: cluster: > clean up the use of malloc() in the function... > * `#10028 `__: Fix rotate > invalid plane check > * `#10040 `__: MAINT: fix pratt > method of wilcox test in scipy.stats > * `#10041 `__: MAINT: special: > Fix a warning generated when building the AMOS... > * `#10044 `__: DOC: fix up > spatial.transform.Rotation docstrings > * `#10047 `__: MAINT: > interpolate: Fix a few build warnings. > * `#10051 `__: Add project_urls > to setup > * `#10052 `__: don't set flag to > "converged" if max iter exceeded > * `#10054 `__: MAINT: signal: > Fix a few build warnings and modernize some C... > * `#10056 `__: BUG: Ensure > factorial is not too large in kendaltau > * `#10058 `__: Small speedup in > samping from ortho and special_ortho groups > * `#10059 `__: BUG: optimize: > fix #10038 by increasing tol > * `#10061 `__: BLD: DOC: make > building docs easier by parsing python version. > * `#10064 `__: ENH: Significant > speedup for ortho and special ortho group > * `#10065 `__: DOC: Reword > parameter descriptions in \`optimize.root_scalar\` > * `#10066 `__: BUG: signal: Fix > error raised by savgol_coeffs when deriv > polyorder. > * `#10067 `__: MAINT: Fix the > cutoff value inconsistency for pinv2 and pinvh > * `#10072 `__: BUG: stats: Fix > boxcox_llf to avoid loss of precision. > * `#10075 `__: ENH: Add wrappers > for ?syconv routines > * `#10076 `__: BUG: optimize: > fix curve_fit for mixed float32/float64 input > * `#10077 `__: DOC: Replace > undefined \`k\` in \`interpolate.splev\` docstring > * `#10079 `__: DOC: Fixed typo, > rearranged some doc of stats.morestats.wilcoxon. 
> * `#10080 `__: TST: install > scikit-sparse for full TravisCI tests > * `#10083 `__: Clean > \`\`_clean_inputs\`\` in optimize.linprog > * `#10088 `__: ENH: optimize: > linprog test CHOLMOD/UMFPACK solvers when available > * `#10090 `__: MAINT: Fix > CubicSplinerInterpolator for pandas > * `#10091 `__: MAINT: improve > logcdf and logsf of hypergeometric distribution > * `#10095 `__: MAINT: Clean > \`\`_clean_inputs\`\` in linprog > * `#10116 `__: MAINT: update > scipy-sphinx-theme > * `#10135 `__: BUG: fix linprog > revised simplex docstring problem failure > > Checksums > ========= > > MD5 > ~~~ > > 209c50a628a624fc82535299f5913d65 > scipy-1.3.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 54e6fdb6aacbcaeff6dc86fc736cf39a > scipy-1.3.0-cp35-cp35m-manylinux1_i686.whl > 752f9cae504e7ea06cd818fc74b829c0 > scipy-1.3.0-cp35-cp35m-manylinux1_x86_64.whl > c7a0ff2b530570feefa8102813fc6dd1 scipy-1.3.0-cp35-cp35m-win32.whl > 1c53ccff157fe23b165e53fba87c37e0 scipy-1.3.0-cp35-cp35m-win_amd64.whl > 6762dc85ef6fe357e5710c32451b29a2 > scipy-1.3.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 03d9c756b5bc836194cd5d13cd73e3fe > scipy-1.3.0-cp36-cp36m-manylinux1_i686.whl > 1e5af3fade676e5a588d40d785e7ee4d > scipy-1.3.0-cp36-cp36m-manylinux1_x86_64.whl > fe130e4cb77078c6a886795bcf1fa66d scipy-1.3.0-cp36-cp36m-win32.whl > f62f60ea0397b7aa9a90fb610fc54d33 scipy-1.3.0-cp36-cp36m-win_amd64.whl > a1ec52b1b162bb7ae0d0ea76438e35ce > scipy-1.3.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 260d182114edaed177c64d60776ceee6 > scipy-1.3.0-cp37-cp37m-manylinux1_i686.whl > 38c5a038504e03b503f7674b30218068 > scipy-1.3.0-cp37-cp37m-manylinux1_x86_64.whl > 2390fdb5a4330c54c2a5308afe959bb9 scipy-1.3.0-cp37-cp37m-win32.whl > 452157882a9f180914906df9bbf9d7bf scipy-1.3.0-cp37-cp37m-win_amd64.whl > c6876673adf7e9e6c0307beaca784ad2 scipy-1.3.0.tar.gz > e7153c2eb276bc303699b75858db6276 scipy-1.3.0.tar.xz > 16b9e6a0ea8bdcf2ea72fda5975a252c scipy-1.3.0.zip > > SHA256 > ~~~~~~ > > 4907040f62b91c2e170359c3d36c000af783f0fa1516a83d6c1517cde0af5340 > scipy-1.3.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 1db9f964ed9c52dc5bd6127f0dd90ac89791daa690a5665cc01eae185912e1ba > scipy-1.3.0-cp35-cp35m-manylinux1_i686.whl > adadeeae5500de0da2b9e8dd478520d0a9945b577b2198f2462555e68f58e7ef > scipy-1.3.0-cp35-cp35m-manylinux1_x86_64.whl > 03b1e0775edbe6a4c64effb05fff2ce1429b76d29d754aa5ee2d848b60033351 > scipy-1.3.0-cp35-cp35m-win32.whl > a7695a378c2ce402405ea37b12c7a338a8755e081869bd6b95858893ceb617ae > scipy-1.3.0-cp35-cp35m-win_amd64.whl > 826b9f5fbb7f908a13aa1efd4b7321e36992f5868d5d8311c7b40cf9b11ca0e7 > scipy-1.3.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > b283a76a83fe463c9587a2c88003f800e08c3929dfbeba833b78260f9c209785 > scipy-1.3.0-cp36-cp36m-manylinux1_i686.whl > db61a640ca20f237317d27bc658c1fc54c7581ff7f6502d112922dc285bdabee > scipy-1.3.0-cp36-cp36m-manylinux1_x86_64.whl > 409846be9d6bdcbd78b9e5afe2f64b2da5a923dd7c1cd0615ce589489533fdbb > scipy-1.3.0-cp36-cp36m-win32.whl > c19a7389ab3cd712058a8c3c9ffd8d27a57f3d84b9c91a931f542682bb3d269d > scipy-1.3.0-cp36-cp36m-win_amd64.whl > 09d008237baabf52a5d4f5a6fcf9b3c03408f3f61a69c404472a16861a73917e > 
scipy-1.3.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> a84c31e8409b420c3ca57fd30c7589378d6fdc8d155d866a7f8e6e80dec6fd06
> scipy-1.3.0-cp37-cp37m-manylinux1_i686.whl
> c5ea60ece0c0c1c849025bfc541b60a6751b491b6f11dd9ef37ab5b8c9041921
> scipy-1.3.0-cp37-cp37m-manylinux1_x86_64.whl
> 6c0543f2fdd38dee631fb023c0f31c284a532d205590b393d72009c14847f5b1
> scipy-1.3.0-cp37-cp37m-win32.whl
> 10325f0ffac2400b1ec09537b7e403419dcd25d9fee602a44e8a32119af9079e
> scipy-1.3.0-cp37-cp37m-win_amd64.whl
> c3bb4bd2aca82fb498247deeac12265921fe231502a6bc6edea3ee7fe6c40a7a
> scipy-1.3.0.tar.gz
> ae105c28c1fdb480bf22fd1b1392eeb8679f3c0c8917c87fca8aabf323918455
> scipy-1.3.0.tar.xz
> b711ec1567439a1abfac59321b73a40de70a93ace4a33be26001eb4b12206356
> scipy-1.3.0.zip
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2
>
> iQIcBAEBCAAGBQJc3vVHAAoJELD/41ZX0J71Dz4P+QFv7E3OIWcrTm6qjt9X4Mgi
> 6heershUpVhZATGj7Kpl+lPEBGthsnkT8MeS0/YW3ZKiDWnQoSWSIxBRnqoXup2D
> CwjeXn4eKw7Z4G2A6MpKcdJe1xV1lY2Wi3MyfmkYnkO0be+NLMStrTS/1+JdM/Xg
> fGt5KqE+QXqB3sEGGf3SXP8tnSS2ULbKNPAxSTL1twS0bEprOVspCtCQJd3Xm1Oi
> g2+vWcIwH80KpphvZLl7F22FOI+birxn19CwNupMaN8IyW0RADUKOvYlWMamUA3K
> iW6KUyHXolyjixAh4RDZUKg0hNUIbDpMBKslqY+Faz92RCCbxCx+TZGT6y+0chrp
> ujE+jRfuXcSk5eykBIzYx3aLPkMH1aQ4ERCi1hxODkTlV22btlSam5diNAOmQeZz
> MhQEmbtx5C9xEEHIrGpsuMHVGZfMm0/QaN23Wn/oRq4e7BsICHfZIoNMjW7/ohSv
> cT0jKFHjzvS3gigT1c8EkzwtFweLp5gYGGUD4IiLOI898pvmns+DcW9coGkwrQ0E
> 0OqTjpIIEPCDpHNrgRqWK8RhvqvkiDTs3HbCaxsMOMkWzlPOnrM37frqUO53SVQH
> SKwe1ic13dKh332CMPcqB5EslBKEx2juexwmqPhG3Wnsy2+dL40yi7TGJYh9Vytn
> ByYnyWurPkhp4WJTN1OD
> =xiYy
> -----END PGP SIGNATURE-----

From mcfarljm at gmail.com Sat May 18 11:00:08 2019
From: mcfarljm at gmail.com (John McFarland)
Date: Sat, 18 May 2019 10:00:08 -0500
Subject: [Numpy-discussion] Swig typemaps for inplace fortran arrays
Message-ID: 

Hi,

In numpy.i, I'm looking at the typemaps for inplace Fortran arrays, for
example:

Typemap suite for (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2)

The typemap code (e.g. line 1579) uses the require_contiguous function for
error checking, but I think maybe this is supposed to be
require_c_or_f_contiguous? Otherwise, if the provided numpy array is in
Fortran order, it will fail require_contiguous and generate a TypeError.

(For what it's worth, I pulled down the latest numpy.i from github and was
doing some testing with numpy 1.13.3 under Python 2.7.16.)

Thanks,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
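A quick illustration of the distinction John is pointing at. This only
demonstrates the NumPy flags involved, not numpy.i itself:

    import numpy as np

    # A Fortran-ordered array is F-contiguous but not C-contiguous, so a
    # check that requires C-contiguity alone will reject it, even though
    # its memory is perfectly usable by an in-place Fortran-array typemap.
    a = np.zeros((3, 4), order='F')
    print(a.flags['C_CONTIGUOUS'])  # False
    print(a.flags['F_CONTIGUOUS'])  # True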
From ralf.gommers at gmail.com Sat May 18 12:36:50 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 18 May 2019 18:36:50 +0200
Subject: [Numpy-discussion] acknowledging sponsorship and institutional partners
Message-ID: 

Hi all,

In [1] I am adding institutional partners and sponsor logos to the
numpy.org website. Sebastian made some good comments there, and I think it
would be very helpful if we had some guidelines on how we acknowledge
sponsorship.

Our governance doc has clear text on what an Institutional Partner is, see
[2]. We don't have anything written down about sponsorship though. In my
open PR I followed the example of Jupyter (see [3]), which lists
Institutional Partners first, followed by Sponsors.

For sponsors I think we will want to define some minimum level of
sponsorship for which we will put a logo somewhere (main page, or about
page). Jupyter seems to just put everything together. Scikit-learn and
NumFOCUS do the same on their front pages. NumFOCUS has tiered levels as
well with different benefits, and displays the tiers at [4]. Page 17 of the
NumFOCUS sponsorship brochure [5] spells out the sponsorship levels very
clearly: from platinum at $100k to bronze at $10k, and a special level for
"emerging leader" (startups) below that.

I think that following the NumFOCUS model would be the most straightforward
thing to do, because (a) we're part of NumFOCUS, and (b) it's very well
documented. And also fairest in a way - it gives some recognition
proportionally to the contribution. My PR right now lists Moore, Sloan and
Tidelift as the 3 sponsors. The first two contributed on the order of $500k
each (spread out over 2-3 years), while Tidelift currently contributes
$1000/month.

So I propose:
- acknowledging all active sponsorship (within the last 12 months) by logo
  placement on numpy.org
- acknowledging past sponsorship as well on numpy.org, but on a separate
  page and perhaps just as a listing rather than prominent logo placement
- adopting the NumFOCUS tiered sponsorship model
- listing institutional partners and sponsors in the same place, with
  partners first (following what Jupyter does).

Thoughts?

Cheers,
Ralf

[1] https://github.com/numpy/numpy.org/pull/21
[2] https://www.numpy.org/devdocs/dev/governance/governance.html#institutional-partners-and-funding
[3] https://jupyter.org/about
[4] https://numfocus.org/sponsors
[5] https://numfocus.org/wp-content/uploads/2018/07/NumFOCUS-Corporate-Sponsorship-Brochure.pdf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov Sun May 19 16:35:35 2019
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Sun, 19 May 2019 13:35:35 -0700
Subject: [Numpy-discussion] GSoD - Technical Writter
In-Reply-To: 
References: 
Message-ID: 

> a responsive web page may not be an absolute requirement, but still it
> may be nice to be able to read the docs from a tablet or smartphone.
> Unfortunately I am not familiar yet with Sphinx, but I hope that it can
> be integrated with Jekyll or Hugo, and then one of their templates can
> be used.

Sphinx is powerful, featureful, and the standard doc system for Python.
Let's just stick with that.

But there are a LOT of themes available for Sphinx -- I'm sure there are
responsive ones out there that could be used or adapted.

http://www.sphinx-doc.org/en/stable/theming.html

You might check out the bootstrap theme:

https://github.com/ryan-roemer/sphinx-bootstrap-theme

-CHB

About the content of the User Guide etc. I don't see any obvious
improvement that is needed (maybe because I have not read them yet). One
thing that may help is making the code examples interactive, so that the
readers can play with them and see how the results change. For example
this may be useful:
https://github.com/RunestoneInteractive/RunestoneComponents

The two changes that I have suggested above seem more like engineering work
(for improving the documentation infrastructure), than documentation work.

For making a content that can be easily grasped by the beginners, I think
that it should be presented as a series of problems and their solutions. In
other words don't show the users the features and their details, but ask
them to solve a simple problem, and then show them how to solve it with
NumPy/SciPy and its features. This would make it more attractive because
people usually don't like to read manuals from beginning to the end.
This is a job that can be done by the teachers for their students, having
in mind the level of their students and what they actually want them to
learn. I have noticed that there are already some lectures, or books, or
tutorials like this. This is a creative work, with a specific target
audience in mind, so I can't pretend that I can possibly do something
useful about this in a short time (2-3 months). But of course the links to
the existing resources can be made more visible and reachable from the main
page of the website.

Best regards,
Dashamir
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
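For readers who want to try Chris's bootstrap-theme suggestion above, a
minimal sketch of what the switch looks like in a Sphinx ``conf.py``
(assuming the package from the linked repository has been pip-installed):

    # conf.py (sketch) -- swap the HTML theme for the responsive
    # bootstrap one from sphinx-bootstrap-theme.
    import sphinx_bootstrap_theme

    html_theme = "bootstrap"
    html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()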
From ralf.gommers at gmail.com Sun May 19 17:01:36 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 19 May 2019 23:01:36 +0200
Subject: [Numpy-discussion] GSoD - Technical Writter
In-Reply-To: 
References: 
Message-ID: 

On Sun, May 19, 2019 at 10:35 PM Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

>
> > a responsive web page may not be an absolute requirement, but still it
> may be nice to be able to read the docs from a tablet or smartphone.
>
> Unfortunately I am not familiar yet with Sphinx, but I hope that it can be
> integrated with Jekyll or Hugo, and then one of their templates can be used.
>
>
> Sphinx is powerful, featureful, and the standard doc system for Python.
> Let's just stick with that.
>
> But there are a LOT of themes available for Sphinx -- I'm sure there are
> responsive ones out there that could be used or adapted.
>
> http://www.sphinx-doc.org/en/stable/theming.html
>
> You might check out the bootstrap theme:
>
> https://github.com/ryan-roemer/sphinx-bootstrap-theme
>

Hi Chris, this discussion already continued on scipy-user. Let's keep it
there to avoid the double posting.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anntzer.lee at gmail.com Mon May 20 10:51:54 2019
From: anntzer.lee at gmail.com (Antony Lee)
Date: Mon, 20 May 2019 16:51:54 +0200
Subject: [Numpy-discussion] Proposed change in construction of structured dtypes with a shape-(1, ) field
Message-ID: 

In #13112/#13326, I proposed to change the semantics of constructing
structured dtypes with a shape-(1,) field (with a deprecation period).
Currently, a construct like `np.empty(1, ("a", int, 1))` is treated as a
shape-() field, i.e. the same as `np.empty(1, ("a", int))`; the PR proposes
to (ultimately) change it to mean using a shape-(1,) field, i.e.
`np.empty(1, ("a", int, (1,)))`. This is consistent e.g. with `np.empty(1,
("a", int, 2))` being equivalent to `np.empty(1, ("a", int, (2,)))` and
more generally with numpy accepting a scalar integer n to mean shape-(n,)
in many places (e.g. `np.zeros(3)` and `np.zeros((3,))`).

Thoughts?

Antony
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
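A short sketch of the difference in question. The field shapes below are
what the two explicit spellings produce today, illustrating what the
shorthand would come to mean under the proposal:

    import numpy as np

    # Today, ("a", int, 1) collapses to a shape-() field, the same as ("a", int):
    cur = np.empty(1, np.dtype([("a", int)]))
    print(cur["a"].shape)   # (1,) -- one scalar field per array element

    # The proposal would make it mean an explicit shape-(1,) subarray field:
    new = np.empty(1, np.dtype([("a", int, (1,))]))
    print(new["a"].shape)   # (1, 1) -- one length-1 subarray per element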
From nelle.varoquaux at gmail.com Mon May 20 14:01:15 2019
From: nelle.varoquaux at gmail.com (Nelle Varoquaux)
Date: Mon, 20 May 2019 11:01:15 -0700
Subject: [Numpy-discussion] acknowledging sponsorship and institutional partners
In-Reply-To: 
References: 
Message-ID: 

Hi,

I'm pretty sure not all funding is acknowledged on scikit-learn's
frontpage. I think the minimum amount to be acknowledged with a logo is
funding for a full time developer for at least a year, i.e. at least 100k€.

Cheers,
N

On Sat, 18 May 2019 at 09:37, Ralf Gommers  wrote:

> Hi all,
>
> In [1] I am adding institutional partners and sponsor logos to the
> numpy.org website. Sebastian made some good comments there, and I think
> it would be very helpful if we had some guidelines on how we acknowledge
> sponsorship.
>
> Our governance doc has clear text on what an Institutional Partner is, see
> [2]. We don't have anything written down about sponsorship though. In my
> open PR I followed the example of Jupyter (see [3]), which lists
> Institutional Partners first, followed by Sponsors.
>
> For sponsors I think we will want to define some minimum level of
> sponsorship for which we will put a logo somewhere (main page, or about
> page). Jupyter seems to just put everything together. Scikit-learn and
> NumFOCUS do the same on their front pages. NumFOCUS has tiered levels as
> well with different benefits, and displays the tiers at [4]. Page 17 of the
> NumFOCUS sponsorship brochure [5] spells out the sponsorship levels very
> clearly: from platinum at $100k to bronze at $10k, and a special level for
> "emerging leader" (startups) below that.
>
> I think that following the NumFOCUS model would be the most
> straightforward thing to do, because (a) we're part of NumFOCUS, and (b)
> it's very well documented. And also fairest in a way - it gives some
> recognition proportionally to the contribution. My PR right now lists
> Moore, Sloan and Tidelift as the 3 sponsors. The first two contributed on
> the order of $500k each (spread out over 2-3 years), while Tidelift
> currently contributes $1000/month.
>
> So I propose:
> - acknowledging all active sponsorship (within the last 12 months) by logo
> placement on numpy.org
> - acknowledging past sponsorship as well on numpy.org, but on a separate
> page and perhaps just as a listing rather than prominent logo placement
> - adopting the NumFOCUS tiered sponsorship model
> - listing institutional partners and sponsors in the same place, with
> partners first (following what Jupyter does).
>
> Thoughts?
>
> Cheers,
> Ralf
>
>
> [1] https://github.com/numpy/numpy.org/pull/21
> [2]
> https://www.numpy.org/devdocs/dev/governance/governance.html#institutional-partners-and-funding
> [3] https://jupyter.org/about
> [4] https://numfocus.org/sponsors
> [5]
> https://numfocus.org/wp-content/uploads/2018/07/NumFOCUS-Corporate-Sponsorship-Brochure.pdf
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefanv at berkeley.edu Mon May 20 14:10:30 2019
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 20 May 2019 11:10:30 -0700
Subject: [Numpy-discussion] Proposed change in construction of structured dtypes with a shape-(1, ) field
In-Reply-To: 
References: 
Message-ID: <20190520181030.ocxctof5k6qyhp42@carbo>

Hi Antony,

On Mon, 20 May 2019 16:51:54 +0200, Antony Lee wrote:
> In #13112/#13326, I proposed to change the semantics of constructing
> structured dtypes with a shape-(1,) field (with a deprecation period).
> Currently, a construct like `np.empty(1, ("a", int, 1))` is treated as a
> shape-() field, i.e. the same as `np.empty(1, ("a", int))`; the PR proposes
> to (ultimately) change it to mean using a shape-(1,) field, i.e.
> `np.empty(1, ("a", int, (1,)))`. This is consistent e.g.
> with `np.empty(1, ("a", int, 2))` being equivalent to
> `np.empty(1, ("a", int, (2,)))` and more generally with numpy accepting a
> scalar integer n to mean shape-(n,) in many places (e.g. `np.zeros(3)` and
> `np.zeros((3,))`).
>
> Thoughts?

Your change doesn't seem to complicate the function, and improves
consistency. So, +1.

I also think this falls into the bin of "corner cases with marginal gain
that we shouldn't spend developer/review time on", but since you already
have the review completed, that point is moot.

Best regards,
Stéfan

From gael.varoquaux at normalesup.org Mon May 20 14:44:53 2019
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 20 May 2019 20:44:53 +0200
Subject: [Numpy-discussion] acknowledging sponsorship and institutional partners
In-Reply-To: 
References: 
Message-ID: <20190520184453.a36ufdheddue7fuv@phare.normalesup.org>

On Mon, May 20, 2019 at 11:01:15AM -0700, Nelle Varoquaux wrote:
> I'm pretty sure not all funding is acknowledged on scikit-learn's frontpage. I
> think the minimum amount to be acknowledged with a logo is funding for a full
> time developer for at least a year, i.e. at least 100k€.

These days, it's actually more: it should be several years.

G

From ralf.gommers at gmail.com Mon May 20 16:40:20 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 20 May 2019 22:40:20 +0200
Subject: [Numpy-discussion] acknowledging sponsorship and institutional partners
In-Reply-To: <20190520184453.a36ufdheddue7fuv@phare.normalesup.org>
References: <20190520184453.a36ufdheddue7fuv@phare.normalesup.org>
Message-ID: 

On Mon, May 20, 2019 at 8:45 PM Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Mon, May 20, 2019 at 11:01:15AM -0700, Nelle Varoquaux wrote:
> > I'm pretty sure not all funding is acknowledged on scikit-learn's
> frontpage. I
> > think the minimum amount to be acknowledged with a logo is funding for a
> full
> > time developer for at least a year, i.e. at least 100k€.
>
> These days, it's actually more: it should be several years.
>

Thanks Nelle and Gael, that's useful as a reference. I see the rest of the
funding is at https://scikit-learn.org/stable/about.html#funding

That seems to be a good model - the front page is prime real estate.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tyler.je.reddy at gmail.com Tue May 21 13:06:30 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Tue, 21 May 2019 10:06:30 -0700
Subject: [Numpy-discussion] Community Call -- May 22 (New time / platform)
Message-ID: 

Hi,

Starting from this week, the community meetings will be at a new time (11
am Pacific Time) and on a new meeting platform (see the linked doc).

Anyone is free to join and edit the work-in-progress meeting notes:
https://hackmd.io/bQoK2wuaQV2hJSVuoBUUtg?view

Best wishes,
Tyler
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com Tue May 21 21:30:37 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Tue, 21 May 2019 21:30:37 -0400
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
Message-ID: 

Hi All,

For 1.17, there has been a big effort, especially by Stephan, to make
__array_function__ sufficiently usable that it can be exposed.
I think this is great, and still like the idea very much, but its impact on
the numpy code base has gotten so big in the most recent PR (gh-13585) that
I wonder if we shouldn't reconsider the approach, and at least for 1.17
stick with the status quo. Since that seems to be a bigger question than
can be usefully addressed in the PR, I thought I would raise it here.

Specifically, now not only does every numpy function have its dispatcher
function, but also internally all numpy function calls are being done via
the new `__skip_array_function__` attribute, to avoid further overrides. I
think both changes make the code significantly less readable, thus, e.g.,
making it even harder than it is already to attract new contributors.

I think with this it is probably time to step back and check whether the
implementation is in fact the right one. For instance, among the
alternatives we originally considered was one that had the overridable
versions of functions in the regular `numpy` namespace, and the ones that
would not themselves check in a different one. Alternatively, for some of
the benefits provided by `__skip_array_function__`, there was a different
suggestion to have a special return value, of `NotImplementedButCoercible`.
Might these be better after all?

More generally, I think we're suffering from the fact that several of us
seem to have rather different final goals in mind. In particular, I'd like
to move to a state where as much of the code as possible makes use of the
simplest possible implementation, with only a few true base functions, so
that all but those simplest functions will generally work on any type of
array. Others, however, worry much more about making implementations (even
more) part of the API.

All the best,

Marten
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jni.soma at gmail.com Wed May 22 02:44:24 2019
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Wed, 22 May 2019 16:44:24 +1000
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: 
References: 
Message-ID: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>

I just want to express my general support for Marten's concerns. As an
"interested observer", I've been meaning to give `__array_function__` a try
but haven't had the chance yet. So from my anecdotal experience I expect
that more people need to play with this before setting the API in stone.

At scikit-image we place a very strong emphasis on code simplicity and
readability, so I also share Marten's concerns about code getting too
complex. My impression reading the NEP was "whoa, this is hard, I'm glad
smarter people than me are working on this, I'm sure it'll get simpler in
time". But I haven't seen the simplicity materialise...

On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote:

> Hi All,
>
> For 1.17, there has been a big effort, especially by Stephan, to make
> __array_function__ sufficiently usable that it can be exposed.
> I think this is great, and still like the idea very much, but its impact
> on the numpy code base has gotten so big in the most recent PR (gh-13585)
> that I wonder if we shouldn't reconsider the approach, and at least for
> 1.17 stick with the status quo. Since that seems to be a bigger question
> than can be usefully addressed in the PR, I thought I would raise it here.
>
> Specifically, now not only does every numpy function have its dispatcher
> function, but also internally all numpy function calls are being done via
> the new `__skip_array_function__` attribute, to avoid further overrides. I
> think both changes make the code significantly less readable, thus, e.g.,
> making it even harder than it is already to attract new contributors.
>
> I think with this it is probably time to step back and check whether the
> implementation is in fact the right one. For instance, among the
> alternatives we originally considered was one that had the overridable
> versions of functions in the regular `numpy` namespace, and the ones that
> would not themselves check in a different one. Alternatively, for some of
> the benefits provided by `__skip_array_function__`, there was a different
> suggestion to have a special return value, of `NotImplementedButCoercible`.
> Might these be better after all?
>
> More generally, I think we're suffering from the fact that several of us
> seem to have rather different final goals in mind. In particular, I'd like
> to move to a state where as much of the code as possible makes use of the
> simplest possible implementation, with only a few true base functions, so
> that all but those simplest functions will generally work on any type of
> array. Others, however, worry much more about making implementations (even
> more) part of the API.
>
> All the best,
>
> Marten
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shoyer at gmail.com Wed May 22 11:52:43 2019
From: shoyer at gmail.com (Stephan Hoyer)
Date: Wed, 22 May 2019 08:52:43 -0700
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
Message-ID: 

Thanks for raising these concerns.

The full implications of my recent __skip_array_function__ proposal are
only now becoming evident to me, looking at its use in GH-13585.
Guaranteeing that it does not expand NumPy's API surface seems hard to
achieve without pervasive use of __skip_array_function__ internally.

Taking a step back, the sort of minor hacks [1] that motivated
__skip_array_function__ for me are annoying, but really not too bad -- they
are a small amount of additional code duplication in a proposal that
already requires a large amount of code duplication.

So let's roll back the recent NEP change adding __skip_array_function__ to
the public interface [2]. Inside the few NumPy functions where
__array_function__ causes a measurable performance impact due to repeated
calls (most notably np.block, for which some benchmarks are 25% slower), we
can make use of the private __wrapped__ attribute.

I would still like to turn on __array_function__ in NumPy 1.17. At least,
let's try that for the release candidate and see how it goes. The "all in"
nature of __array_function__ without __skip_array_function__ will both
limit its use to cases where it is strongly motivated, and also limits the
API implications for NumPy. There is still plenty of room for expanding the
protocol, but it's really hard to see what is necessary (and prudent!)
without actual use.
[1] e.g., see https://github.com/google/jax/blob/62473351643cecb6c248a50601af163646ba7be6/jax/numpy/lax_numpy.py#L2440-L2459 [2] https://github.com/numpy/numpy/pull/13305 On Tue, May 21, 2019 at 11:44 PM Juan Nunez-Iglesias wrote: > I just want to express my general support for Marten's concerns. As an > "interested observer", I've been meaning to give `__array_function__` a try > but haven't had the chance yet. So from my anecdotal experience I expect > that more people need to play with this before setting the API in stone. > > At scikit-image we place a very strong emphasis on code simplicity and > readability, so I also share Marten's concerns about code getting too > complex. My impression reading the NEP was "whoa, this is hard, I'm glad > smarter people than me are working on this, I'm sure it'll get simpler in > time". But I haven't seen the simplicity materialise... > > On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote: > > Hi All, > > For 1.17, there has been a big effort, especially by Stephan, to make > __array_function__ sufficiently usable that it can be exposed. I think this > is great, and still like the idea very much, but its impact on the numpy > code base has gotten so big in the most recent PR (gh-13585) that I wonder > if we shouldn't reconsider the approach, and at least for 1.17 stick with > the status quo. Since that seems to be a bigger question than can be > usefully addressed in the PR, I thought I would raise it here. > > Specifically, now not only does every numpy function have its dispatcher > function, but also internally all numpy function calls are being done via > the new `__skip_array_function__` attribute, to avoid further overrides. I > think both changes make the code significantly less readable, thus, e.g., > making it even harder than it is already to attract new contributors. > > I think with this it is probably time to step back and check whether the > implementation is in fact the right one. For instance, among the > alternatives we originally considered was one that had the overridable > versions of functions in the regular `numpy` namespace, and the once that > would not themselves check in a different one. Alternatively, for some of > the benefits provided by `__skip_array_function__`, there was a different > suggestion to have a special return value, of `NotImplementedButCoercible`. > Might these be better after all? > > More generally, I think we're suffering from the fact that several of us > seem to have rather different final goals in mind In particular, I'd like > to move to a state where as much of the code as possible makes use of the > simplest possible implementation, with only a few true base functions, so > that all but those simplest functions will generally work on any type of > array. Others, however, worry much more about making implementations (even > more) part of the API. > > All the best, > > Marten > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From sebastian at sipsolutions.net Wed May 22 13:03:46 2019
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Wed, 22 May 2019 10:03:46 -0700
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
Message-ID: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>

Hi all,

there is an open PR (https://github.com/numpy/numpy/pull/12924) to
convert `np.sinc` into a ufunc. Since it should improve general
precision in `np.sinc`, I thought we could try to move that forward a
bit. We can check whether this is worth it in the end.

However, it would also change behaviour slightly since `np.sinc(x=arr)`
will not work, as ufuncs are positional arguments only (we could wrap
`sinc`, but that hides all the nice features). Otherwise, there should
be no change except additional features of ufuncs and the move to a C
implementation.

This is mostly to see if anyone is worried about possible slight API
change here.

All the Best,

Sebastian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 
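For reference, the pure-Python definition being replaced is essentially the
following sketch of the normalized sinc that np.sinc documents; the tiny
offset standing in for x == 0 mirrors the trick used in the Python
implementation:

    import numpy as np

    def sinc_ref(x):
        """Normalized sinc: sin(pi*x)/(pi*x), with the removable
        singularity at x == 0 filled in as 1."""
        x = np.asanyarray(x)
        y = np.pi * np.where(x == 0, 1.0e-20, x)  # dodge division by zero
        return np.sin(y) / y

    print(sinc_ref([0.0, 0.5, 1.0]))  # [1.0, 0.63661977, ~3.9e-17]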
From nathan12343 at gmail.com Wed May 22 13:34:23 2019
From: nathan12343 at gmail.com (Nathan Goldbaum)
Date: Wed, 22 May 2019 13:34:23 -0400
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

It might be worth using BigQuery to search the github repository public
dataset for usages of np.sinc with keyword arguments.

On Wed, May 22, 2019 at 1:05 PM Sebastian Berg 
wrote:

> Hi all,
>
> there is an open PR (https://github.com/numpy/numpy/pull/12924) to
> convert `np.sinc` into a ufunc. Since it should improve general
> precision in `np.sinc`, I thought we could try to move that forward a
> bit. We can check whether this is worth it in the end.
>
> However, it would also change behaviour slightly since `np.sinc(x=arr)`
> will not work, as ufuncs are positional arguments only (we could wrap
> `sinc`, but that hides all the nice features). Otherwise, there should
> be no change except additional features of ufuncs and the move to a C
> implementation.
>
> This is mostly to see if anyone is worried about possible slight API
> change here.
>
> All the Best,
>
> Sebastian
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefanv at berkeley.edu Wed May 22 14:12:23 2019
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 22 May 2019 11:12:23 -0700
Subject: [Numpy-discussion] Community Call -- May 22 (New time / platform)
In-Reply-To: 
References: 
Message-ID: <20190522181223.6ednsik7hcj6psuk@carbo>

Hi all,

On Tue, 21 May 2019 10:06:30 -0700, Tyler Reddy wrote:
> Hi,
>
> Starting from this week, the community meetings will be at a new time (11
> am Pacific Time) and on a new meeting platform (see the linked doc).
>
> Anyone is free to join and edit the work-in-progress meeting notes:
> https://hackmd.io/bQoK2wuaQV2hJSVuoBUUtg?view

Google Meet didn't work well, so we've switched to
https://berkeley.zoom.us/j/2742716467

Best regards,
Stéfan

From m.h.vankerkwijk at gmail.com Wed May 22 15:46:08 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Wed, 22 May 2019 15:46:08 -0400
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: 
References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
Message-ID: 

Hi Stephan,

I'm quite happy with the idea of turning on __array_function__ but
postponing any formal solution to getting into the wrapped routines (i.e.,
one can use __wrapped__, but it is an implementation detail that is not
documented and comes with absolutely no guarantees).

That way, 1.17 will be a release where we can think of how to address two
different things:
1. Reduce the overhead costs for pure ndarray cases (i.e., mostly within
numpy itself);
2. Simplify implementation in outside packages.

On the performance front, I'm not quite sure what the state of the
environment variable check is, but is it possible to just flip the default,
i.e., for 1.17 one gets __array_function__ support turned on by default,
but can turn it off if wanted?

All the best,

Marten

On Wed, May 22, 2019 at 11:53 AM Stephan Hoyer  wrote:

> Thanks for raising these concerns.
>
> The full implications of my recent __skip_array_function__ proposal are
> only now becoming evident to me, looking at its use in GH-13585.
> Guaranteeing that it does not expand NumPy's API surface seems hard to
> achieve without pervasive use of __skip_array_function__ internally.
>
> Taking a step back, the sort of minor hacks [1] that motivated
> __skip_array_function__ for me are annoying, but really not too bad -- they
> are a small amount of additional code duplication in a proposal that
> already requires a large amount of code duplication.
>
> So let's roll back the recent NEP change adding __skip_array_function__ to
> the public interface [2]. Inside the few NumPy functions where
> __array_function__ causes a measurable performance impact due to repeated
> calls (most notably np.block, for which some benchmarks are 25% slower), we
> can make use of the private __wrapped__ attribute.
>
> I would still like to turn on __array_function__ in NumPy 1.17. At least,
> let's try that for the release candidate and see how it goes. The "all in"
> nature of __array_function__ without __skip_array_function__ will both
> limit its use to cases where it is strongly motivated, and also limits the
> API implications for NumPy. There is still plenty of room for expanding the
> protocol, but it's really hard to see what is necessary (and prudent!)
> without actual use.
>
> [1] e.g., see
> https://github.com/google/jax/blob/62473351643cecb6c248a50601af163646ba7be6/jax/numpy/lax_numpy.py#L2440-L2459
> [2] https://github.com/numpy/numpy/pull/13305
>
> On Tue, May 21, 2019 at 11:44 PM Juan Nunez-Iglesias 
> wrote:
>
>> I just want to express my general support for Marten's concerns. As an
>> "interested observer", I've been meaning to give `__array_function__` a try
>> but haven't had the chance yet. So from my anecdotal experience I expect
>> that more people need to play with this before setting the API in stone.
>>
>> At scikit-image we place a very strong emphasis on code simplicity and
>> readability, so I also share Marten's concerns about code getting too
>> complex.
My impression reading the NEP was "whoa, this is hard, I'm glad >> smarter people than me are working on this, I'm sure it'll get simpler in >> time". But I haven't seen the simplicity materialise... >> >> On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote: >> >> Hi All, >> >> For 1.17, there has been a big effort, especially by Stephan, to make >> __array_function__ sufficiently usable that it can be exposed. I think this >> is great, and still like the idea very much, but its impact on the numpy >> code base has gotten so big in the most recent PR (gh-13585) that I wonder >> if we shouldn't reconsider the approach, and at least for 1.17 stick with >> the status quo. Since that seems to be a bigger question than can be >> usefully addressed in the PR, I thought I would raise it here. >> >> Specifically, now not only does every numpy function have its dispatcher >> function, but also internally all numpy function calls are being done via >> the new `__skip_array_function__` attribute, to avoid further overrides. I >> think both changes make the code significantly less readable, thus, e.g., >> making it even harder than it is already to attract new contributors. >> >> I think with this it is probably time to step back and check whether the >> implementation is in fact the right one. For instance, among the >> alternatives we originally considered was one that had the overridable >> versions of functions in the regular `numpy` namespace, and the once that >> would not themselves check in a different one. Alternatively, for some of >> the benefits provided by `__skip_array_function__`, there was a different >> suggestion to have a special return value, of `NotImplementedButCoercible`. >> Might these be better after all? >> >> More generally, I think we're suffering from the fact that several of us >> seem to have rather different final goals in mind In particular, I'd like >> to move to a state where as much of the code as possible makes use of the >> simplest possible implementation, with only a few true base functions, so >> that all but those simplest functions will generally work on any type of >> array. Others, however, worry much more about making implementations (even >> more) part of the API. >> >> All the best, >> >> Marten >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed May 22 16:58:48 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 22 May 2019 22:58:48 +0200 Subject: [Numpy-discussion] Converting np.sinc into a ufunc In-Reply-To: References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net> Message-ID: On Wed, May 22, 2019 at 7:34 PM Nathan Goldbaum wrote: > It might be worth using BigQuery to search the github repository public > dataset for usages of np.sinc with keyword arguments. > We spent some effort at Quansight to try different approaches to this. BigQuery turns out to be suboptimal, parsing code with ast.parse is more robust. 
Chris Ostrouchov just released some code for this (blog post with details
to follow) and the results of running that code:
https://github.com/Quansight-Labs/python-api-inspect/blob/master/data/numpy-summary.csv

np.sinc has 35 usages. To put that in perspective, np.array has ~31,000,
np.dot ~2200, np.floor ~220, trace/inner/spacing/copyto are all similar to
sinc.

> On Wed, May 22, 2019 at 1:05 PM Sebastian Berg 
> wrote:
>
>> Hi all,
>>
>> there is an open PR (https://github.com/numpy/numpy/pull/12924) to
>> convert `np.sinc` into a ufunc. Since it should improve general
>> precision in `np.sinc`, I thought we could try to move that forward a
>> bit. We can check whether this is worth it in the end.
>>

Can you quantify the precision improvement (approximately)?

>> However, it would also change behaviour slightly since `np.sinc(x=arr)`
>> will not work, as ufuncs are positional arguments only (we could wrap
>> `sinc`, but that hides all the nice features).
>

This would give an exception, so at least it's easy to fix. As backwards
compat breaks go, this is a pretty minor one I think.

>> Otherwise, there should
>> be no change except additional features of ufuncs and the move to a C
>> implementation.
>>

I see this is one of the functions that uses asanyarray, so what about
impact on subclass behavior?

Cheers,
Ralf

>> This is mostly to see if anyone is worried about possible slight API
>> change here.
>>
>> All the Best,
>>
>> Sebastian
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at python.org
>> https://mail.python.org/mailman/listinfo/numpy-discussion
>>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
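The ast-based counting approach Ralf describes above can be sketched in a
few lines (a toy version only, not the actual python-api-inspect code; the
function name here is invented for illustration):

    import ast

    def count_attr_calls(source, attr="sinc", with_keywords_only=False):
        """Count calls like np.sinc(...) in a source string by walking
        its AST; robust to formatting, unlike regex or substring search."""
        count = 0
        for node in ast.walk(ast.parse(source)):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and node.func.attr == attr
                    and (node.keywords or not with_keywords_only)):
                count += 1
        return count

    src = "import numpy as np\ny = np.sinc(x=3)"
    print(count_attr_calls(src, with_keywords_only=True))  # 1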
>> Guaranteeing that it does not expand NumPy's API surface seems hard to >> achieve without pervasive use of __skip_array_function__ internally. >> >> Taking a step back, the sort of minor hacks [1] that motivated >> __skip_array_function__ for me are annoying, but really not too bad -- they >> are a small amount of additional code duplication in a proposal that >> already requires a large amount of code duplication. >> >> So let's roll back the recent NEP change adding __skip_array_function__ >> to the public interface [2]. Inside the few NumPy functions where >> __array_function__ causes a measurable performance impact due to repeated >> calls (most notably np.block, for which some benchmarks are 25% slower), we >> can make use of the private __wrapped__ attribute. >> > Thanks Stephan, this sounds good. >> I would still like to turn on __array_function__ in NumPy 1.17. At least, >> let's try that for the release candidate and see how it goes. >> > I agree. I'd actually suggest flipping the switch asap and see if it causes any issues for projects that test against numpy master in their CI, and the people that like to live on the bleeding edge by installing master into their environment. Cheers, Ralf The "all in" nature of __array_function__ without __skip_array_function__ >> will both limit its use to cases where it is strongly motivated, and also >> limits the API implications for NumPy. There is still plenty of room for >> expanding the protocol, but it's really hard to see what is necessary (and >> prudent!) without actual use. >> >> [1] e.g., see >> https://github.com/google/jax/blob/62473351643cecb6c248a50601af163646ba7be6/jax/numpy/lax_numpy.py#L2440-L2459 >> [2] https://github.com/numpy/numpy/pull/13305 >> >> >> >> >> On Tue, May 21, 2019 at 11:44 PM Juan Nunez-Iglesias >> wrote: >> >>> I just want to express my general support for Marten's concerns. As an >>> "interested observer", I've been meaning to give `__array_function__` a try >>> but haven't had the chance yet. So from my anecdotal experience I expect >>> that more people need to play with this before setting the API in stone. >>> >>> At scikit-image we place a very strong emphasis on code simplicity and >>> readability, so I also share Marten's concerns about code getting too >>> complex. My impression reading the NEP was "whoa, this is hard, I'm glad >>> smarter people than me are working on this, I'm sure it'll get simpler in >>> time". But I haven't seen the simplicity materialise... >>> >>> On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote: >>> >>> Hi All, >>> >>> For 1.17, there has been a big effort, especially by Stephan, to make >>> __array_function__ sufficiently usable that it can be exposed. I think this >>> is great, and still like the idea very much, but its impact on the numpy >>> code base has gotten so big in the most recent PR (gh-13585) that I wonder >>> if we shouldn't reconsider the approach, and at least for 1.17 stick with >>> the status quo. Since that seems to be a bigger question than can be >>> usefully addressed in the PR, I thought I would raise it here. >>> >>> Specifically, now not only does every numpy function have its dispatcher >>> function, but also internally all numpy function calls are being done via >>> the new `__skip_array_function__` attribute, to avoid further overrides. I >>> think both changes make the code significantly less readable, thus, e.g., >>> making it even harder than it is already to attract new contributors. 
>>> >>> I think with this it is probably time to step back and check whether the >>> implementation is in fact the right one. For instance, among the >>> alternatives we originally considered was one that had the overridable >>> versions of functions in the regular `numpy` namespace, and the once that >>> would not themselves check in a different one. Alternatively, for some of >>> the benefits provided by `__skip_array_function__`, there was a different >>> suggestion to have a special return value, of `NotImplementedButCoercible`. >>> Might these be better after all? >>> >>> More generally, I think we're suffering from the fact that several of us >>> seem to have rather different final goals in mind In particular, I'd like >>> to move to a state where as much of the code as possible makes use of the >>> simplest possible implementation, with only a few true base functions, so >>> that all but those simplest functions will generally work on any type of >>> array. Others, however, worry much more about making implementations (even >>> more) part of the API. >>> >>> All the best, >>> >>> Marten >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shoyer at gmail.com Wed May 22 17:45:33 2019 From: shoyer at gmail.com (Stephan Hoyer) Date: Wed, 22 May 2019 14:45:33 -0700 Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17? In-Reply-To: References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com> Message-ID: On Wed, May 22, 2019 at 2:36 PM Ralf Gommers wrote: > I would still like to turn on __array_function__ in NumPy 1.17. At least, >>> let's try that for the release candidate and see how it goes. >>> >> > I agree. I'd actually suggest flipping the switch asap and see if it > causes any issues for projects that test against numpy master in their CI, > and the people that like to live on the bleeding edge by installing master > into their environment. > The switch actually has already been done on master for several months now, until for a period in the 1.16 release cycle before we added the off switch. Doing so did turn up a few bugs, e.g., https://github.com/numpy/numpy/issues/12263 We will actually need to re-add in the code that does the environment variable to allow for turning it off, but this isn't a big deal. My main concern is that this adds some complexity for third-party projects in detecting whether __array_function__ is enabled or not. They can't just use the NumPy version and will need to check the environment variable as well, or actually try using it on an example object. If we want to keep an "off" switch we might want to add some sort of API for exposing whether NumPy is using __array_function__ or not. 
Maybe numpy.__experimental_array_function_enabled__ = True, so you can just test `hasattr(numpy, '__experimental_array_function_enabled__')`? This is assuming that we are OK with adding an underscore attribute to NumPy's namespace semi-indefinitely. > > Cheers, > Ralf > > > The "all in" nature of __array_function__ without __skip_array_function__ >>> will both limit its use to cases where it is strongly motivated, and also >>> limits the API implications for NumPy. There is still plenty of room for >>> expanding the protocol, but it's really hard to see what is necessary (and >>> prudent!) without actual use. >>> >>> [1] e.g., see >>> https://github.com/google/jax/blob/62473351643cecb6c248a50601af163646ba7be6/jax/numpy/lax_numpy.py#L2440-L2459 >>> [2] https://github.com/numpy/numpy/pull/13305 >>> >>> >>> >>> >>> On Tue, May 21, 2019 at 11:44 PM Juan Nunez-Iglesias >>> wrote: >>> >>>> I just want to express my general support for Marten's concerns. As an >>>> "interested observer", I've been meaning to give `__array_function__` a try >>>> but haven't had the chance yet. So from my anecdotal experience I expect >>>> that more people need to play with this before setting the API in stone. >>>> >>>> At scikit-image we place a very strong emphasis on code simplicity and >>>> readability, so I also share Marten's concerns about code getting too >>>> complex. My impression reading the NEP was "whoa, this is hard, I'm glad >>>> smarter people than me are working on this, I'm sure it'll get simpler in >>>> time". But I haven't seen the simplicity materialise... >>>> >>>> On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote: >>>> >>>> Hi All, >>>> >>>> For 1.17, there has been a big effort, especially by Stephan, to make >>>> __array_function__ sufficiently usable that it can be exposed. I think this >>>> is great, and still like the idea very much, but its impact on the numpy >>>> code base has gotten so big in the most recent PR (gh-13585) that I wonder >>>> if we shouldn't reconsider the approach, and at least for 1.17 stick with >>>> the status quo. Since that seems to be a bigger question than can be >>>> usefully addressed in the PR, I thought I would raise it here. >>>> >>>> Specifically, now not only does every numpy function have its >>>> dispatcher function, but also internally all numpy function calls are being >>>> done via the new `__skip_array_function__` attribute, to avoid further >>>> overrides. I think both changes make the code significantly less readable, >>>> thus, e.g., making it even harder than it is already to attract new >>>> contributors. >>>> >>>> I think with this it is probably time to step back and check whether >>>> the implementation is in fact the right one. For instance, among the >>>> alternatives we originally considered was one that had the overridable >>>> versions of functions in the regular `numpy` namespace, and the once that >>>> would not themselves check in a different one. Alternatively, for some of >>>> the benefits provided by `__skip_array_function__`, there was a different >>>> suggestion to have a special return value, of `NotImplementedButCoercible`. >>>> Might these be better after all? 
>>>> >>>> More generally, I think we're suffering from the fact that several of >>>> us seem to have rather different final goals in mind In particular, I'd >>>> like to move to a state where as much of the code as possible makes use of >>>> the simplest possible implementation, with only a few true base functions, >>>> so that all but those simplest functions will generally work on any type of >>>> array. Others, however, worry much more about making implementations (even >>>> more) part of the API. >>>> >>>> All the best, >>>> >>>> Marten >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shoyer at gmail.com Wed May 22 17:56:31 2019 From: shoyer at gmail.com (Stephan Hoyer) Date: Wed, 22 May 2019 14:56:31 -0700 Subject: [Numpy-discussion] Converting np.sinc into a ufunc In-Reply-To: References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net> Message-ID: On Wed, May 22, 2019 at 2:00 PM Ralf Gommers wrote: > > > On Wed, May 22, 2019 at 7:34 PM Nathan Goldbaum > wrote: > >> It might be worth using BigQuery to search the github repository public >> dataset for usages of np.sinc with keyword arguments. >> > > We spent some effort at Quansight to try different approaches to this. > BigQuery turns out to be suboptimal, parsing code with ast.parse is more > robust. Chris Ostrouchov just released some code for this (blog post with > details to follow) and the results of running that code: > https://github.com/Quansight-Labs/python-api-inspect/blob/master/data/numpy-summary.csv > > np.sinc has 35 usages. to put that in perspective, np.array has ~31,000, > np.dot ~2200, np.floor ~220, trace/inner/spacing/copyto are all similar to > sinc. > Searching Google's internal code base (including open source dependencies), I found many uses of np.sinc, but no uses of the keyword argument "x". I think it's pretty safe to go ahead here. -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.h.vankerkwijk at gmail.com Wed May 22 21:00:35 2019 From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk) Date: Wed, 22 May 2019 21:00:35 -0400 Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17? In-Reply-To: References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com> Message-ID: > If we want to keep an "off" switch we might want to add some sort of API > for exposing whether NumPy is using __array_function__ or not. Maybe > numpy.__experimental_array_function_enabled__ = True, so you can just test > `hasattr(numpy, '__experimental_array_function_enabled__')`? 
This is
> assuming that we are OK with adding an underscore attribute to NumPy's
> namespace semi-indefinitely.
>

Might this be overthinking it? I might use this myself on supercomputer
runs where I know that I'm using arrays only. Though one should not
extrapolate from oneself!

That said, it is not difficult as is. For instance, we could explain in
the docs that one can tell from:
```
enabled = hasattr(np.core, 'overrides') and np.core.overrides.ENABLE_ARRAY_FUNCTION
```
One could even allow for eventual removal by explaining it should be,
```
enabled = hasattr(np.core, 'overrides') and getattr(np.core.overrides, 'ENABLE_ARRAY_FUNCTION', True)
```
(If I understand correctly, one cannot tell from the presence of
`ndarray.__array_function__`, correct?)

-- Marten
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com Wed May 22 21:10:25 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Wed, 22 May 2019 21:10:25 -0400
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

> Otherwise, there should
>>> be no change except additional features of ufuncs and the move to a C
>>> implementation.
>>>
>>
> I see this is one of the functions that uses asanyarray, so what about
> impact on subclass behavior?
>

So, subclasses are passed on, as they are in ufuncs. In general, that
should mean it is OK for subclasses. For astropy's Quantity, it would be an
improvement, as `where` was never properly supported, and with a ufunc, we
can handle it easily.

-- Marten
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com Wed May 22 21:13:34 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Wed, 22 May 2019 21:13:34 -0400
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

On a more general note, if we change to a ufunc, it will get us stuck with
sinc being the normalized version, where the units of the input have to be
in the half-cycles preferred by signal-processing people rather than the
radians preferred by mathematicians.

In this respect, note that there is an outstanding issue about whether to
allow one to choose between the two:
https://github.com/numpy/numpy/issues/13457 (which itself was raised
following an inconclusive PR that tried to add a keyword argument for it).

Adding a keyword argument is much easier for a general function than for a
ufunc.

-- Marten
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Wed May 22 21:23:43 2019
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 22 May 2019 19:23:43 -0600
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

On Wed, May 22, 2019 at 7:14 PM Marten van Kerkwijk <
m.h.vankerkwijk at gmail.com> wrote:

> On a more general note, if we change to a ufunc, it will get us stuck with
> sinc being the normalized version, where the units of the input have to be
> in the half-cycles preferred by signal-processing people rather than the
> radians preferred by mathematicians.
>
> In this respect, note that there is an outstanding issue about whether to
> allow one to choose between the two:
> https://github.com/numpy/numpy/issues/13457 (which itself was raised
> following an inconclusive PR that tried to add a keyword argument for it).
>
> Adding a keyword argument is much easier for a general function than for a
> ufunc.
>

I'd be tempted to have two sinc functions with the different
normalizations. Of course, one could say the same about trig functions in
both radians and degrees. If I had to pick one, I'd choose sinc in radians,
but I think that ship has sailed.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josh.craig.wilson at gmail.com Wed May 22 22:07:18 2019
From: josh.craig.wilson at gmail.com (Joshua Wilson)
Date: Wed, 22 May 2019 19:07:18 -0700
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

Re Ralf's question:

> Can you quantify the precision improvement (approximately)?

On one level you'll get a large decrease in relative error around the
zeros of the sinc function because argument reduction is being done by a
number which is exactly representable in double precision (i.e. the number
2) versus an irrational number. For example, consider:

>>> import numpy as np
>>> import mpmath
>>> x = 1 + 1e-8
>>> (np.sinc(x) - mpmath.sincpi(x))/mpmath.sincpi(x)

On master that will give you mpf('-5.1753363184721223e-9') versus
mpf('1.6543612517040003e-16') on the new branch.

*But* there are some caveats to that answer: since we are close to the
zeros of the sinc function the condition number is large, so real world
data that has already been rounded before even calling sinc will incur the
same mathematically unavoidable loss of precision.

On Wed, May 22, 2019 at 6:24 PM Charles R Harris wrote:
>
>
> On Wed, May 22, 2019 at 7:14 PM Marten van Kerkwijk wrote:
>>
>> On a more general note, if we change to a ufunc, it will get us stuck
>> with sinc being the normalized version, where the units of the input have
>> to be in the half-cycles preferred by signal-processing people rather than
>> the radians preferred by mathematicians.
>>
>> In this respect, note that there is an outstanding issue about whether to
>> allow one to choose between the two:
>> https://github.com/numpy/numpy/issues/13457 (which itself was raised
>> following an inconclusive PR that tried to add a keyword argument for it).
>>
>> Adding a keyword argument is much easier for a general function than for
>> a ufunc.
>>
>
> I'd be tempted to have two sinc functions with the different
> normalizations. Of course, one could say the same about trig functions in
> both radians and degrees. If I had to pick one, I'd choose sinc in radians,
> but I think that ship has sailed.
>
> Chuck
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From mikofski at berkeley.edu Thu May 23 03:19:49 2019
From: mikofski at berkeley.edu (Mark Mikofski)
Date: Thu, 23 May 2019 00:19:49 -0700
Subject: [Numpy-discussion] [ANN] SolarUtils-0.3 released - wrappers for NREL SOLPOS and SPECTRL2 algorithms
Message-ID: 

This update adds two convenience functions:

1. get_solposAM(location, datetimes, weather) - returns solar positions
and airmass for an arbitrary sequence of datetime vectors [year, month,
day, hour, minute, second].

2. get_solpos8760(location, year, weather) - returns 8760 annual hourly
solar positions and airmass for a given year.

For example:

>>> location = [35.56836, -119.2022, -8.0]
>>> datetimes = [
...     (datetime.datetime(2013, 1, 1, 0, 0, 0)
...      + datetime.timedelta(hours=h)).timetuple()[:6]
...     for h in range(1000)]
>>> weather = [1015.62055, 40.0]
>>> angles, airmass = get_solposAM(location, datetimes, weather)

For more info, please see:
- docs: https://sunpower.github.io/SolarUtils/
- repo: https://github.com/SunPower/SolarUtils
- pypi: https://pypi.org/project/SolarUtils/

Thanks!

-- Mark Mikofski, PhD (2005)
*Fiat Lux*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com Thu May 23 05:41:39 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 23 May 2019 11:41:39 +0200
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: 
References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
Message-ID: 

On Thu, May 23, 2019 at 3:02 AM Marten van Kerkwijk <
m.h.vankerkwijk at gmail.com> wrote:

>
> If we want to keep an "off" switch we might want to add some sort of API
>> for exposing whether NumPy is using __array_function__ or not. Maybe
>> numpy.__experimental_array_function_enabled__ = True, so you can just test
>> `hasattr(numpy, '__experimental_array_function_enabled__')`? This is
>> assuming that we are OK with adding an underscore attribute to NumPy's
>> namespace semi-indefinitely.
>>
>

I don't think we want to add or document anything publicly. That only adds
to the configuration problem, and indeed makes it harder to rely on the
issue. All I was suggesting was keeping some (private) safety switch in the
code base for a while in case of real issues as a workaround.

> Might this be overthinking it? I might use this myself on supercomputer
> runs where I know that I'm using arrays only. Though one should not
> extrapolate from oneself!
>
> That said, it is not difficult as is. For instance, we could explain in
> the docs that one can tell from:
> ```
> enabled = hasattr(np.core, 'overrides') and
> np.core.overrides.ENABLE_ARRAY_FUNCTION
> ```
> One could even allow for eventual removal by explaining it should be,
> ```
> enabled = hasattr(np.core, 'overrides') and getattr(np.core.overrides,
> 'ENABLE_ARRAY_FUNCTION', True)
> ```
> (If I understand correctly, one cannot tell from the presence of
> `ndarray.__array_function__`, correct?)
>

I think a hasattr check for __array_function__ is right.

Ralf

> -- Marten
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com Thu May 23 05:59:37 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 23 May 2019 11:59:37 +0200
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

On Thu, May 23, 2019 at 3:24 AM Charles R Harris wrote:

>
>
> On Wed, May 22, 2019 at 7:14 PM Marten van Kerkwijk <
> m.h.vankerkwijk at gmail.com> wrote:
>
>> On a more general note, if we change to a ufunc, it will get us stuck
>> with sinc being the normalized version, where the units of the input have
>> to be in the half-cycles preferred by signal-processing people rather than
>> the radians preferred by mathematicians.
>>
>> In this respect, note that there is an outstanding issue about whether to
>> allow one to choose between the two:
>> https://github.com/numpy/numpy/issues/13457 (which itself was raised
>> following an inconclusive PR that tried to add a keyword argument for it).
>>
>> Adding a keyword argument is much easier for a general function than for
>> a ufunc.
>>
>
> I'd be tempted to have two sinc functions with the different
> normalizations. Of course, one could say the same about trig functions in
> both radians and degrees. If I had to pick one, I'd choose sinc in radians,
> but I think that ship has sailed.
>

Please let's not add more functions. This shouldn't be in numpy in the
first place if we had that choice today (but the ship has sailed). I'd
refer to scipy.special.sinc rather than expand the coverage here in ways
that are specific to signal processing or some other domain.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com Thu May 23 10:17:21 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Thu, 23 May 2019 10:17:21 -0400
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

I agree that we should not have two functions. I also am rather unsure
whether a ufunc is a good idea. Earlier, while discussing other possible
additions, like `erf`, the conclusion seemed to be that in numpy we should
just cover whatever is in the C standard. This suggests `sinc` should not
be a ufunc.

-- Marten

p.s. `scipy.special.sinc` *is* `np.sinc`

p.s.2 The accuracy argument is not in itself an argument for a ufunc, as
it could be done in python too.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Thu May 23 10:26:56 2019
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 23 May 2019 07:26:56 -0700
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: 

On Thu, May 23, 2019 at 7:20 AM Marten van Kerkwijk <
m.h.vankerkwijk at gmail.com> wrote:

> I agree that we should not have two functions
>
> I also am rather unsure whether a ufunc is a good idea. Earlier, while
> discussing other possible additions, like `erf`, the conclusion seemed to
> be that in numpy we should just cover whatever is in the C standard. This
> suggests `sinc` should not be a ufunc.
>

That standard is for "what special functions we include in numpy,
regardless of implementation", not "which special functions we already
have in numpy should be implemented as ufuncs or regular functions".

-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shoyer at gmail.com Thu May 23 13:19:25 2019
From: shoyer at gmail.com (Stephan Hoyer)
Date: Thu, 23 May 2019 10:19:25 -0700
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: 
References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
Message-ID: 

On Thu, May 23, 2019 at 2:43 AM Ralf Gommers wrote:

>
>
> On Thu, May 23, 2019 at 3:02 AM Marten van Kerkwijk <
> m.h.vankerkwijk at gmail.com> wrote:
>
>>
>> If we want to keep an "off" switch we might want to add some sort of API
>>> for exposing whether NumPy is using __array_function__ or not.
Maybe
>>> numpy.__experimental_array_function_enabled__ = True, so you can just test
>>> `hasattr(numpy, '__experimental_array_function_enabled__')`? This is
>>> assuming that we are OK with adding an underscore attribute to NumPy's
>>> namespace semi-indefinitely.
>>>
>
> I don't think we want to add or document anything publicly. That only adds
> to the configuration problem, and indeed makes it harder to rely on the
> issue. All I was suggesting was keeping some (private) safety switch in the
> code base for a while in case of real issues as a workaround.
>

I was concerned that libraries like dask might have different behavior
internally depending upon whether or not __array_function__ is enabled, but
looking more carefully dask only does this detection for tests. So maybe
this is not needed.

Still, I'm concerned about the potential broader implications of making it
possible to turn this off. In general, I don't think NumPy should have
configurable global state -- it opens up the possibility of a whole class
of issues. Stefan van der Walt raised this point when this "off switch" was
suggested a few months ago:
https://mail.python.org/pipermail/numpy-discussion/2019-March/079207.html

That said, I'd be OK with keeping around an environment variable as an
emergency opt-out for now, especially to support benchmarking the impact of
__array_function__ checks. But I would definitely be opposed to keeping
this switch around long term, for more than a major version or two. If
there will be an outcry when we remove checks for
NUMPY_EXPERIMENTAL_ARRAY_FUNCTION, then we should reconsider the entire
__array_function__ approach.

Might this be overthinking it? I might use this myself on supercomputer
>> runs where I know that I'm using arrays only. Though one should not
>> extrapolate from oneself!
>>
>> That said, it is not difficult as is. For instance, we could explain in
>> the docs that one can tell from:
>> ```
>> enabled = hasattr(np.core, 'overrides') and
>> np.core.overrides.ENABLE_ARRAY_FUNCTION
>> ```
>> One could even allow for eventual removal by explaining it should be,
>> ```
>> enabled = hasattr(np.core, 'overrides') and getattr(np.core.overrides,
>> 'ENABLE_ARRAY_FUNCTION', True)
>> ```
>> (If I understand correctly, one cannot tell from the presence of
>> `ndarray.__array_function__`, correct?)
>>
>
> I think a hasattr check for __array_function__ is right.
>

We define ndarray.__array_function__ (even on NumPy 1.16) regardless of
whether __array_function__ is enabled or not.

In principle we could have checked the environment variable from C before
defining the method, but it's too late for that now.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sebastian at sipsolutions.net Thu May 23 16:22:48 2019
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Thu, 23 May 2019 13:22:48 -0700
Subject: [Numpy-discussion] Converting np.sinc into a ufunc
In-Reply-To: 
References: <5fc51159dc571266913d9b6631f6aceee2e070fd.camel@sipsolutions.net>
Message-ID: <4f518937edd8352185f3769edef1821842601099.camel@sipsolutions.net>

On Thu, 2019-05-23 at 10:17 -0400, Marten van Kerkwijk wrote:
> I agree that we should not have two functions
>
> I also am rather unsure whether a ufunc is a good idea. Earlier,
> while discussing other possible additions, like `erf`, the conclusion
> seemed to be that in numpy we should just cover whatever is in the C
> standard. This suggests `sinc` should not be a ufunc.
>

We are not adding a function though, as Robert already noted. So this is
about how much we prefer ufuncs for functions that are a perfect fit.

The accuracy can certainly be improved, but it involves some branching, so
I think we would see at least a 50% speed penalty compared to the ufunc
version in the end (right now the speed improvement is about 20% for
larger arrays, much more for smaller ones of course).

I do not have a perfect feeling about what the precision improvements mean
exactly here, but I posted some relative errors below as additional stats
[0]. Overall I think I would be pretty neutral if there was no gain at all
(as there is some loss of maintainability). Here it seems that we have
some decent enhancement as well though, so I am slightly in favor.

Best,

Sebastian

[0] And here maybe an additional point for better relative precision:
```
xarr = np.linspace(0, 2.1, 200000)
res = []
for x in xarr:
    res.append((np.sinc(x) - mpmath.sincpi(x))/mpmath.sincpi(x))

res = np.asarray(res, dtype=object)
print(abs(res).mean(), abs(res).max())
```
master: 1.70112344295248e-15 6.61977521930425e-11
New branch: 5.64884361112036e-17 4.01208410359583e-16

we probably should have tests if we add this.

> -- Marten
>
> p.s.`scipy.special.sinc` *is* `np.sinc`
>
> p.s.2 The accuracy argument is not in itself an argument for a ufunc,
> as it could be done in python too.
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From sebastian at sipsolutions.net Thu May 23 17:33:17 2019
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Thu, 23 May 2019 14:33:17 -0700
Subject: [Numpy-discussion] __skip_array_function__ discussion summary
Message-ID: <15c6e87e049cb3eee6246cd7364eba352534c478.camel@sipsolutions.net>

Hi all,

This is an attempt from me to wrap up the discussion a bit so that others
can chime in if they want to.

NumPy 1.17 will ship with `__array_function__`, a way for array-like
projects (dask, cupy) to override almost all numpy functions [0]. This
addition is uncontroversial.

NumPy 1.17 will _not_ ship with `__skip_array_function__`, following a
longer discussion. For those interested, I tried to give a very short
overview over the topic below.

The discussion here is around the addition of `__skip_array_function__`
which would allow code to use:

    np.ones_like.__skip_array_function__(*args)

to reuse the current implementation in numpy (i.e. directly call the
current code). This can simplify things drastically for some array-likes,
since they do not have to provide an alternative implementation.

However, PR-13585 [1] sparked a more detailed discussion, since it was
going to add the use of `__skip_array_function__` internally in numpy [2].

The issue is exposure of implementation details. If we do not use it
internally, a user may implement their own `np.empty_like` and rely on
`np.ones_like` to use `np.empty_like` [3] internally. Thus,
`np.ones_like(my_array_like)` can work without `my_array_like` having any
special code for `np.ones_like`. The PR exposes the issue that if
`np.ones_like` is changed to call `np.empty_like.__skip_array_function__`
internally, this will break the user's `my_array_like` (it will not call
their own `np.empty_like` implementation).

We could expect users to fix up such breaking changes, but it exposes how
fragile the interaction of user types using `__skip_array_function__` and
changes in the specific implementation used by numpy can be in some cases.

The second option would be to make sure we use `__skip_array_function__`
internally, so that users cannot expect `np.ones_like` to work because
they made `np.empty_like` work in the above example (this does not
increase the "API surface" of NumPy). On the other hand, it increases the
issue that the numpy code itself is less readable if we use
`__skip_array_function__` internally in many/all places.

Those two options further have very different goals in mind for the final
usage of the protocol. So right now the solution is to step back, not
include the addition, and rather gain experience with the NumPy 1.17
release that includes `__array_function__` but not
`__skip_array_function__`.

I hope this may help those interested who did not follow the full
discussion; I can't say I feel I am very good at summarizing. For details
I encourage you to have a look at the PR discussion and the recent mails
to the list.

Best,

Sebastian

[0] http://www.numpy.org/neps/nep-0018-array-function-protocol.html#implementations-in-terms-of-a-limited-core-api
[1] https://github.com/numpy/numpy/pull/13585
[2] Mostly for slight optimization.
[3] It also uses `np.copyto` which can be overridden as well.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From einstein.edison at gmail.com Thu May 23 18:48:54 2019
From: einstein.edison at gmail.com (Hameer Abbasi)
Date: Fri, 24 May 2019 00:48:54 +0200
Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17?
In-Reply-To: 
References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com>
Message-ID: <454a2ecdbd86a647c698411186e4eeb7908731bc.camel@gmail.com>

On Thu, 2019-05-23 at 10:19 -0700, Stephan Hoyer wrote:
> On Thu, May 23, 2019 at 2:43 AM Ralf Gommers
> wrote:
> > On Thu, May 23, 2019 at 3:02 AM Marten van Kerkwijk <
> > m.h.vankerkwijk at gmail.com> wrote:
> > > > If we want to keep an "off" switch we might want to add some
> > > > sort of API for exposing whether NumPy is using
> > > > __array_function__ or not. Maybe
> > > > numpy.__experimental_array_function_enabled__ = True, so you
> > > > can just test `hasattr(numpy,
> > > > '__experimental_array_function_enabled__')`? This is assuming
> > > > that we are OK with adding an underscore attribute to NumPy's
> > > > namespace semi-indefinitely.
> >
> > I don't think we want to add or document anything publicly. That
> > only adds to the configuration problem, and indeed makes it harder
> > to rely on the issue. All I was suggesting was keeping some
> > (private) safety switch in the code base for a while in case of
> > real issues as a workaround.
>
> I was concerned that libraries like dask might have different
> behavior internally depending upon whether or not __array_function__
> is enabled, but looking more carefully dask only does this detection
> for tests. So maybe this is not needed.
>
> Still, I'm concerned about the potential broader implications of
> making it possible to turn this off. In general, I don't think NumPy
> should have configurable global state -- it opens up the possibility
> of a whole class of issues.
Stefan van der Walt raised this point > when this "off switch" was suggested a few months ago: > https://mail.python.org/pipermail/numpy-discussion/2019-March/079207.html I agree -- Global mutable state is bad in general, but keeping around the environment variable is okay. > That said, I'd be OK with keeping around an environment variable as > an emergency opt-out for now, especially to support benchmarking the > impact of __array_function__ checks. +1 for keeping the env var for now. > But I would definitely be opposed to keeping around this switch > around long term, for more than a major version or two. If there will > be an outcry when we remove checks for > NUMPY_EXPERIMENTAL_ARRAY_FUNCTION, then we should reconsider the > entire __array_function__ approach. > > > > Might this be overthinking it? I might use this myself on > > > supercomputer runs were I know that I'm using arrays only. Though > > > one should not extrapolate from oneself! > > > > > > That said, it is not difficult as is. For instance, we could > > > explain in the docs that one can tell from: > > > ``` > > > enabled = hasattr(np.core, 'overrides') and > > > np.core.overrides.ENABLE_ARRAY_FUNCTION > > > ``` > > > One could even allow for eventual removal by explaining it should > > > be, > > > ``` > > > enabled = hasattr(np.core, 'overrides') and > > > getattr(np.core.overrides, 'ENABLE_ARRAY_FUNCTION', True) > > > ``` > > > (If I understand correctly, one cannot tell from the presence of > > > `ndarray.__array_function__`, correct?) > > > > I think a hasattr check for __array_function__ is right. > > We define ndarray.__array_function__ (even on NumPy 1.16) regardless > of whether __array_function__ is enabled or not. > > In principle we could have checked the environment variable from C > before defining the method, but it's too late for that now. I disagree here: In principle the only people relying on this would be the same ones relying on the functionality of this protocol, so this would be an easy change to undo, if at all needed. I do not know of any libraries that actually use/call the __array_function__ attribute other than NumPy, when it isn't enabled. > _______________________________________________NumPy-Discussion > mailing listNumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From einstein.edison at gmail.com Thu May 23 18:56:19 2019 From: einstein.edison at gmail.com (Hameer Abbasi) Date: Fri, 24 May 2019 00:56:19 +0200 Subject: [Numpy-discussion] Keep __array_function__ unexposed by default for 1.17? In-Reply-To: References: <55175cd2-7654-4b6a-a5ac-8faf965085c6@www.fastmail.com> Message-ID: <1b071a0f006ee8963d772dfb599c78eabcdd5cb4.camel@gmail.com> On Wed, 2019-05-22 at 08:52 -0700, Stephan Hoyer wrote: > Thanks for raising these concerns. > The full implications of my recent __skip_array_function__ proposal > are only now becoming evident to me now, looking at it's use in GH- > 13585. Guaranteeing that it does not expand NumPy's API surface seems > hard to achieve without pervasive use of __skip_array_function__ > internally. > > Taking a step back, the sort of minor hacks [1] that motivated > __skip_array_function__ for me are annoying, but really not too bad > -- they are a small amount of additional code duplication in a > proposal that already requires a large amount of code duplication. 
> > So let's roll back the recent NEP change adding > __skip_array_function__ to the public interface [2]. Inside the few > NumPy functions where __array_function__ causes a measurable > performance impact due to repeated calls (most notably np.block, for > which some benchmarks are 25% slower), we can make use of the private > __wrapped__ attribute. > > I would still like to turn on __array_function__ in NumPy 1.17. At > least, let's try that for the release candidate and see how it goes. > The "all in" nature of __array_function__ without > __skip_array_function__ will both limit its use to cases where it is > strongly motivated, and also limits the API implications for NumPy. > There is still plenty of room for expanding the protocol, but it's > really hard to see what is necessary (and prudent!) without actual > use. Agreed that we should turn it on for 1.17 RC, and see if there are any complaints. > [1] e.g., see > https://github.com/google/jax/blob/62473351643cecb6c248a50601af163646ba7be6/jax/numpy/lax_numpy.py#L2440-L2459 > [2] https://github.com/numpy/numpy/pull/13305 > > > > > > On Tue, May 21, 2019 at 11:44 PM Juan Nunez-Iglesias < > jni.soma at gmail.com> wrote: > > I just want to express my general support for Marten's concerns. As > > an "interested observer", I've been meaning to give > > `__array_function__` a try but haven't had the chance yet. So from > > my anecdotal experience I expect that more people need to play with > > this before setting the API in stone. > > > > At scikit-image we place a very strong emphasis on code simplicity > > and readability, so I also share Marten's concerns about code > > getting too complex. My impression reading the NEP was "whoa, this > > is hard, I'm glad smarter people than me are working on this, I'm > > sure it'll get simpler in time". But I haven't seen the simplicity > > materialise... > > > > On Wed, 22 May 2019, at 11:31 AM, Marten van Kerkwijk wrote: > > > Hi All, > > > > > > For 1.17, there has been a big effort, especially by Stephan, to > > > make __array_function__ sufficiently usable that it can be > > > exposed. I think this is great, and still like the idea very > > > much, but its impact on the numpy code base has gotten so big in > > > the most recent PR (gh-13585) that I wonder if we shouldn't > > > reconsider the approach, and at least for 1.17 stick with the > > > status quo. Since that seems to be a bigger question than can be > > > usefully addressed in the PR, I thought I would raise it here. > > > > > > Specifically, now not only does every numpy function have its > > > dispatcher function, but also internally all numpy function calls > > > are being done via the new `__skip_array_function__` attribute, > > > to avoid further overrides. I think both changes make the code > > > significantly less readable, thus, e.g., making it even harder > > > than it is already to attract new contributors. > > > > > > I think with this it is probably time to step back and check > > > whether the implementation is in fact the right one. For > > > instance, among the alternatives we originally considered was one > > > that had the overridable versions of functions in the regular > > > `numpy` namespace, and the once that would not themselves check > > > in a different one. Alternatively, for some of the benefits > > > provided by `__skip_array_function__`, there was a different > > > suggestion to have a special return value, of > > > `NotImplementedButCoercible`. Might these be better after all? 
> > >
> > > I think with this it is probably time to step back and check
> > > whether the implementation is in fact the right one. For
> > > instance, among the alternatives we originally considered was one
> > > that had the overridable versions of functions in the regular
> > > `numpy` namespace, and the ones that would not themselves check
> > > in a different one. Alternatively, for some of the benefits
> > > provided by `__skip_array_function__`, there was a different
> > > suggestion to have a special return value, of
> > > `NotImplementedButCoercible`. Might these be better after all?
> > >
> > > More generally, I think we're suffering from the fact that
> > > several of us seem to have rather different final goals in mind.
> > > In particular, I'd like to move to a state where as much of the
> > > code as possible makes use of the simplest possible
> > > implementation, with only a few true base functions, so that all
> > > but those simplest functions will generally work on any type of
> > > array. Others, however, worry much more about making
> > > implementations (even more) part of the API.
> > >
> > > All the best,
> > >
> > > Marten
> > > _______________________________________________
> > > NumPy-Discussion mailing list
> > > NumPy-Discussion at python.org
> > > https://mail.python.org/mailman/listinfo/numpy-discussion
> > >
> > > > _______________________________________________
> > > > NumPy-Discussion mailing list
> > > > NumPy-Discussion at python.org
> > > > https://mail.python.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefanv at berkeley.edu Thu May 23 19:20:36 2019
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Thu, 23 May 2019 16:20:36 -0700
Subject: [Numpy-discussion] __skip_array_function__ discussion summary
In-Reply-To: <15c6e87e049cb3eee6246cd7364eba352534c478.camel@sipsolutions.net>
References: <15c6e87e049cb3eee6246cd7364eba352534c478.camel@sipsolutions.net>
Message-ID: <20190523232036.yrf6afvf4hukek2g@carbo>

On Thu, 23 May 2019 14:33:17 -0700, Sebastian Berg wrote:
> Those two options further have very different goals in mind for the
> final usage of the protocol. So that right now the solution is to step
> back, not include the addition and rather gain experience with the
> NumPy 1.17 release that includes `__array_function__` but not
> `__skip_array_function`.

To emphasize how this solves the API exposure problem: If
`__skip_array_function__` is being made available, the user can implement
`ones_like` for their custom class as:

    class MyArray:
        def __array_function__(func, types, *args, **kwargs):
            if func == np.ones_like:
                return np.ones_like.__skip_array_function__(x)

Without it, they are forced to reimplement `ones_like` from scratch. This
ensures that they never rely on any internal behavior of `np.ones_like`,
which may change at any time to break for their custom array class.

Here's a concrete example: The user wants to override `ones_like` and
`zeros_like` for their custom array. They implement it as follows:

    class MyArray:
        def __array_function__(func, types, *args, **kwargs):
            if func == np.ones_like:
                return np.ones_like.__skip_array_function__(*args, **kwargs)
            elif func == np.zeros_like:
                return MyArray(...)

Would this work? Well, it depends on how NumPy implements `ones_like`
internally. If NumPy used `__skip_array_function__` consistently
throughout, it would not work:

    def np.ones_like(x):
        y = np.zeros_like.__skip_array_function__(x)
        y.fill(1)
        return y

If, instead, the implementation was

    def np.ones_like(x):
        y = np.zeros_like(x)
        y.fill(1)
        return y

it would work. *BUT*, it would be brittle, because our internal
implementation may easily change to:

    def np.ones_like(x):
        y = np.empty_like(x)
        y.fill(1)
        return y

And if `empty_like` isn't implemented by MyArray, this would break.

The workaround that Stephan Hoyer mentioned (and that will have to be used
in 1.17) is that you can still use the NumPy machinery to operate on pure
arrays:

    class MyArray:
        def __array_function__(func, types, *args, **kwargs):
            if func == np.ones_like:
                x_arr = np.asarray(x)
                ones = np.ones_like(x_arr)
                return MyArray.from_array(ones)

Stéfan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 529 bytes
Desc: not available
URL: 

From m.h.vankerkwijk at gmail.com Thu May 23 20:25:26 2019
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Thu, 23 May 2019 20:25:26 -0400
Subject: [Numpy-discussion] __skip_array_function__ discussion summary
In-Reply-To: <20190523232036.yrf6afvf4hukek2g@carbo>
References: <15c6e87e049cb3eee6246cd7364eba352534c478.camel@sipsolutions.net> <20190523232036.yrf6afvf4hukek2g@carbo>
Message-ID: 

Hi Sebastian, Stéfan,

Thanks for the very good summaries!

An additional item worth mentioning is that by using
`__skip_array_function__` everywhere inside, one minimizes the performance
penalty of checking for `__array_function__`. It would obviously be worth
trying to do that, but ideally in a way that is much less intrusive.

Furthermore, it became clear that there were different pictures of the
final goal, with quite a bit of discussion about the relative benefits of
trying to limit exposure of the internal API and of, conversely, trying to
(incrementally) move to implementations that are maximally re-usable
(using duck-typing), which are themselves based around a smaller core
(more in line with Nathaniel's NEP-22).

In the latter respect, Stéfan's example is instructive. The real
implementation of `ones_like` is:
```
def ones_like(a, dtype=None, order='K', subok=True, shape=None):
    res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape)
    multiarray.copyto(res, 1, casting='unsafe')
    return res
```
The first step here seems obvious: an "empty_like" function would seem to
belong in the core. The second step less so: Stéfan's `res.fill(1)` seems
more logical, as surely a class's method is the optimal way to do
something. Though I do feel `.fill` itself breaks "There should be one--
and preferably only one --obvious way to do it." So, I'd want to replace
it with `res[...] = 1`, so that one relies on the more obvious
`__setitem__`. (Note that all are equally fast even now.)

Of course, in this idealized future, there would be little reason to even
allow `ones_like` to be overridden with __array_function__...

All the best,

Marten
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tmrsg11 at gmail.com Fri May 24 22:26:43 2019
From: tmrsg11 at gmail.com (C W)
Date: Fri, 24 May 2019 22:26:43 -0400
Subject: [Numpy-discussion] Was the range() function ever created?
Message-ID: 

Hello all,

I want to calculate the range of a vector. I saw that someone asked for
range() in 2011, but was it ever created?
https://github.com/pandas-dev/pandas/issues/288

Response at the time was to use df.describe(). But df.describe() gives all
the 5-number summary statistics, but I DON'T WANT all the extra stuff I
didn't ask for. I was expecting a numerical value. I can use that to feed
into another function.

It exists in Matlab and R, why not in Python? I'm quite frustrated every
time I need to calculate the range.

Thanks in advance.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ben.v.root at gmail.com Fri May 24 22:32:45 2019 From: ben.v.root at gmail.com (Benjamin Root) Date: Fri, 24 May 2019 22:32:45 -0400 Subject: [Numpy-discussion] Was the range() function ever created? In-Reply-To: References: Message-ID: This is the numpy discussion list, not the pandas discussion list. Now, for numpy's part, I have had hankerings for a `np.minmax()` ufunc, but never enough to get over just calling min and max on my data separately. On Fri, May 24, 2019 at 10:27 PM C W wrote: > Hello all, > > I am want to calculate the range of a vector. I saw that someone asked for > range() in 2011, but was it ever created? > https://github.com/pandas-dev/pandas/issues/288 > > Response at the time was to use df.describe(). But df.describe() gives all > the 5-number summary statistics, but I DON'T WANT wall the extra stuff I > didn't ask for. I was expected a numerical number. I can use that to feed > into another function. > > It exists in Matlab and R, why not in Python? I'm quite frustrated every > time I need to calculate the range. > > Thanks in advance. > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmrsg11 at gmail.com Fri May 24 22:43:42 2019 From: tmrsg11 at gmail.com (C W) Date: Fri, 24 May 2019 22:43:42 -0400 Subject: [Numpy-discussion] Was the range() function ever created? In-Reply-To: References: Message-ID: When I looked up pandas mailing list. Numpy showed up. Maybe is because Pandas is built on Numpy? My apologies. Yes, please do. For people with statistical background, but not CS. It seems strange the *real* range() function is used to generate natural numbers. Thanks, Ben! On Fri, May 24, 2019 at 10:34 PM Benjamin Root wrote: > This is the numpy discussion list, not the pandas discussion list. Now, > for numpy's part, I have had hankerings for a `np.minmax()` ufunc, but > never enough to get over just calling min and max on my data separately. > > On Fri, May 24, 2019 at 10:27 PM C W wrote: > >> Hello all, >> >> I am want to calculate the range of a vector. I saw that someone asked >> for range() in 2011, but was it ever created? >> https://github.com/pandas-dev/pandas/issues/288 >> >> Response at the time was to use df.describe(). But df.describe() gives >> all the 5-number summary statistics, but I DON'T WANT wall the extra stuff >> I didn't ask for. I was expected a numerical number. I can use that to feed >> into another function. >> >> It exists in Matlab and R, why not in Python? I'm quite frustrated every >> time I need to calculate the range. >> >> Thanks in advance. >> >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.v.root at gmail.com Fri May 24 23:08:54 2019 From: ben.v.root at gmail.com (Benjamin Root) Date: Fri, 24 May 2019 23:08:54 -0400 Subject: [Numpy-discussion] Was the range() function ever created? 
In-Reply-To: References: Message-ID: pandas is not built on numpy (at least, not anymore), but borrows a lot of inspirations from numpy, and interacts with numpy fairly well. As part of the scipy ecosystem, we all work together to improve interoperability and features. python's built-in range() function has been there long before numpy came on the scene, so it just made sense to adopt that name since it was the way to generate numbers in python. Ben On Fri, May 24, 2019 at 10:44 PM C W wrote: > When I looked up pandas mailing list. Numpy showed up. Maybe is because > Pandas is built on Numpy? My apologies. > > Yes, please do. For people with statistical background, but not CS. It > seems strange the *real* range() function is used to generate natural > numbers. > > Thanks, Ben! > > > > On Fri, May 24, 2019 at 10:34 PM Benjamin Root > wrote: > >> This is the numpy discussion list, not the pandas discussion list. Now, >> for numpy's part, I have had hankerings for a `np.minmax()` ufunc, but >> never enough to get over just calling min and max on my data separately. >> >> On Fri, May 24, 2019 at 10:27 PM C W wrote: >> >>> Hello all, >>> >>> I am want to calculate the range of a vector. I saw that someone asked >>> for range() in 2011, but was it ever created? >>> https://github.com/pandas-dev/pandas/issues/288 >>> >>> Response at the time was to use df.describe(). But df.describe() gives >>> all the 5-number summary statistics, but I DON'T WANT wall the extra stuff >>> I didn't ask for. I was expected a numerical number. I can use that to feed >>> into another function. >>> >>> It exists in Matlab and R, why not in Python? I'm quite frustrated every >>> time I need to calculate the range. >>> >>> Thanks in advance. >>> >>> >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmrsg11 at gmail.com Fri May 24 23:48:53 2019 From: tmrsg11 at gmail.com (C W) Date: Fri, 24 May 2019 23:48:53 -0400 Subject: [Numpy-discussion] Was the range() function ever created? In-Reply-To: References: Message-ID: I can't be the first person who asked about range() that calculates the *actual* range of two numbers. I have not used numpy or pandas long enough to know, but how has it been dealt with before? On Fri, May 24, 2019 at 11:10 PM Benjamin Root wrote: > pandas is not built on numpy (at least, not anymore), but borrows a lot of > inspirations from numpy, and interacts with numpy fairly well. As part of > the scipy ecosystem, we all work together to improve interoperability and > features. > > python's built-in range() function has been there long before numpy came > on the scene, so it just made sense to adopt that name since it was the way > to generate numbers in python. > > Ben > > On Fri, May 24, 2019 at 10:44 PM C W wrote: > >> When I looked up pandas mailing list. Numpy showed up. Maybe is because >> Pandas is built on Numpy? My apologies. >> >> Yes, please do. For people with statistical background, but not CS. 
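For example (`ptp` stands for "peak to peak", i.e. max minus min, and takes
the usual `axis` keyword):

```
import numpy as np

x = np.array([[4, 9, 2, 10],
              [6, 9, 7, 12]])

np.ptp(x)          # 10 -- range of the flattened array
np.ptp(x, axis=0)  # array([2, 0, 5, 2]) -- range down each column
np.ptp(x, axis=1)  # array([8, 6]) -- range across each row
```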
From jni.soma at gmail.com Sat May 25 00:03:08 2019
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Sat, 25 May 2019 14:03:08 +1000
Subject: [Numpy-discussion] Was the range() function ever created?
In-Reply-To: References: Message-ID:

On Sat, 25 May 2019, at 1:49 PM, C W wrote:
> I can't be the first person to ask about a range() that calculates the
> *actual* range of two numbers.

Based on your mention of matlab's `range`, I think you're looking for numpy.ptp.

https://docs.scipy.org/doc/numpy/reference/generated/numpy.ptp.html
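A minimal sketch of the `np.ptp` usage Juan points to; the data values here are illustrative:

```
import numpy as np

x = np.array([3.1, 9.5, 0.2, 4.7])

# ptp ("peak to peak") is max - min, i.e. the statistical range
np.ptp(x)          # 9.3
x.max() - x.min()  # same value, computed as two separate reductions
```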
From robert.kern at gmail.com Sat May 25 00:04:58 2019
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 24 May 2019 21:04:58 -0700
Subject: [Numpy-discussion] Was the range() function ever created?
In-Reply-To: References: Message-ID:

On Fri, May 24, 2019 at 8:50 PM C W wrote:
> I can't be the first person to ask about a range() that calculates the
> *actual* range of two numbers.
>
> I have not used numpy or pandas long enough to know, but how has this been
> dealt with before?

First, through `describe()`, then they added `value_range()`, then they deprecated `value_range()` in favor of `describe()` again.

https://github.com/pandas-dev/pandas/commit/e66d25e9f082c93bb4bab3caf2a4fdc8fe904d55
http://pandas.pydata.org/pandas-docs/version/0.16.0/whatsnew.html#removal-of-prior-version-deprecations-changes

You can ask on the pandas-dev mailing list why:
https://mail.python.org/mailman/listinfo/pandas-dev

As for numpy, trying to come up with the right semantics for the shape of the output is usually where such discussions die. Functions like a statistical range calculation are expected to be like `min()` and `max()` and allow us to apply them axis-wise (e.g. just down columns or just across rows, or along any other axis of an N-D array). Odds are, the way that we'd pack the two results into a single output will not be what you want in half of the cases, so you'd just have to unpack anyway, and at that point it's just not *that* much more convenient than calling `min()` and `max()` separately. So every time we write `xmin, xmax = x.min(), x.max()`, we grumble a little bit, but it's just a grumble, not a significant pain.

pandas has other considerations, but you'll have to ask them.

--
Robert Kern

From tmrsg11 at gmail.com Sat May 25 00:31:55 2019
From: tmrsg11 at gmail.com (C W)
Date: Sat, 25 May 2019 00:31:55 -0400
Subject: [Numpy-discussion] Was the range() function ever created?
In-Reply-To: References: Message-ID:

Thank you, Robert. I will take it up with the pandas-dev mailing list.

I'm not sure I follow you on the "right semantics for the shape of the output." Range is just a summary statistic, which is a single number.

I'm not an expert, but wouldn't something like this do?

def range(vec):
    return np.max(vec) - np.min(vec)
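To make Robert's point about output shape concrete, a hedged sketch (the `minmax` helper below is hypothetical, not a NumPy function): any packed variant must commit to one output layout, which callers often undo immediately, while `np.ptp` already covers the spread itself:

```
import numpy as np

x = np.arange(12.0).reshape(3, 4)

# axis-wise reductions, as users of min()/max() expect:
xmin, xmax = x.min(axis=0), x.max(axis=0)   # two arrays of shape (4,)

def minmax(a, axis=None):
    # hypothetical packed variant; stacking is only one of several layouts
    return np.stack([np.min(a, axis=axis), np.max(a, axis=axis)])

minmax(x, axis=0)   # shape (2, 4) -- typically unpacked right away anyway

# the spread itself is already a one-liner:
np.ptp(x, axis=0)   # array([8., 8., 8., 8.])
```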
From robert.kern at gmail.com Sat May 25 00:56:01 2019
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 24 May 2019 21:56:01 -0700
Subject: [Numpy-discussion] Was the range() function ever created?
In-Reply-To: References: Message-ID:

On Fri, May 24, 2019 at 9:33 PM C W wrote:
> I'm not an expert, but wouldn't something like this do?
>
> def range(vec):
>     return np.max(vec) - np.min(vec)

Oh. You referenced the R range() function, which returns the minimum and the maximum as separate numbers, not their difference.

https://www.rdocumentation.org/packages/base/versions/3.6.0/topics/range

And the pandas issue that you referenced was asking for the same.

In fact, numpy does have the function you are looking for, as Juan noted. It's called `ptp()` (early numpy developers tended to come more from a signal processing background than a statistics background).

--
Robert Kern

From tmrsg11 at gmail.com Sat May 25 01:05:30 2019
From: tmrsg11 at gmail.com (C W)
Date: Sat, 25 May 2019 01:05:30 -0400
Subject: [Numpy-discussion] Was the range() function ever created?
In-Reply-To: References: Message-ID:

I somehow missed Juan's reply. Yes, I think Juan solved the problem.

Thanks, Juan!

From shoyer at gmail.com Sat May 25 15:14:41 2019
From: shoyer at gmail.com (Stephan Hoyer)
Date: Sat, 25 May 2019 12:14:41 -0700
Subject: [Numpy-discussion] __skip_array_function__ discussion summary
In-Reply-To: References: <15c6e87e049cb3eee6246cd7364eba352534c478.camel@sipsolutions.net> <20190523232036.yrf6afvf4hukek2g@carbo> Message-ID:

Sebastian, Stefan and Marten -- thanks for the excellent summaries of the discussion.
In line with this consensus, I have drafted a revision of the NEP without __skip_array_function__:
https://github.com/numpy/numpy/pull/13624

On Thu, May 23, 2019 at 5:28 PM Marten van Kerkwijk <m.h.vankerkwijk at gmail.com> wrote:

> Hi Sebastian, Stéfan,
>
> Thanks for the very good summaries!
>
> An additional item worth mentioning is that by using
> `__skip_array_function__` everywhere inside, one minimizes the performance
> penalty of checking for `__array_function__`. It would obviously be worth
> trying to do that, but ideally in a way that is much less intrusive.
>
> Furthermore, it became clear that there were different pictures of the
> final goal, with quite a bit of discussion about the relative benefits of
> trying to limit exposure of the internal API and of, conversely, trying to
> (incrementally) move to implementations that are maximally re-usable (using
> duck-typing), which are themselves based around a smaller core (more in
> line with Nathaniel's NEP-22).
>
> In the latter respect, Stéfan's example is instructive. The real
> implementation of `ones_like` is:
> ```
> def ones_like(a, dtype=None, order='K', subok=True, shape=None):
>     res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape)
>     multiarray.copyto(res, 1, casting='unsafe')
>     return res
> ```
>
> The first step here seems obvious: an "empty_like" function would seem
> to belong in the core.
> The second step less so: Stéfan's `res.fill(1)` seems more logical, as
> surely a class's method is the optimal way to do something. Though I do
> feel `.fill` itself breaks "There should be one-- and preferably only one
> --obvious way to do it." So, I'd want to replace it with `res[...] = 1`, so
> that one relies on the more obvious `__setitem__`. (Note that all are
> equally fast even now.)
>
> Of course, in this idealized future, there would be little reason to even
> allow `ones_like` to be overridden with __array_function__...
>
> All the best,
>
> Marten
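A hedged sketch of the duck-typed variants Marten contrasts, assuming only that an `empty_like` function remains available; the function names are illustrative:

```
import numpy as np

def ones_like_fill(a):
    # variant using the ndarray method Stéfan suggested
    res = np.empty_like(a)
    res.fill(1)
    return res

def ones_like_setitem(a):
    # variant Marten prefers: rely on the more obvious __setitem__
    res = np.empty_like(a)
    res[...] = 1
    return res

x = np.arange(6.0).reshape(2, 3)
assert (ones_like_fill(x) == np.ones_like(x)).all()
assert (ones_like_setitem(x) == np.ones_like(x)).all()
```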
URL:

From ralf.gommers at gmail.com Sat May 25 18:08:48 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 26 May 2019 00:08:48 +0200
Subject: [Numpy-discussion] scientific Python featured in GitHub keynote
Message-ID:

Hi all,

On Thursday I had the pleasure to be at GitHub Satellite, together with quite a few other maintainers from projects throughout our ecosystem, and to see NumPy, Matplotlib, AstroPy and other projects highlighted prominently in Nat Friedman's keynote. It included the story of the black hole image, and the open source software that enabled that image. It's the first 21 minutes of https://www.youtube.com/watch?v=xAbJkn4uRL4.

Also, we now have "used by" for each repo and the dependency graph (https://github.com/numpy/numpy/network/dependents): right now there are 205,240 repos and 13,877 packages on GitHub that depend on NumPy. Those numbers were not easy to get before, so it's very useful to have them in the UI now.

Cheers,
Ralf

From charlesr.harris at gmail.com Sat May 25 20:19:16 2019
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 25 May 2019 18:19:16 -0600
Subject: [Numpy-discussion] scientific Python featured in GitHub keynote
In-Reply-To: References: Message-ID:

On Sat, May 25, 2019 at 4:09 PM Ralf Gommers wrote:

> On Thursday I had the pleasure to be at GitHub Satellite, together with
> quite a few other maintainers from projects throughout our ecosystem, and
> to see NumPy, Matplotlib, AstroPy and other projects highlighted
> prominently in Nat Friedman's keynote.

Thanks for the link. That was a lot of material to digest; do you have thoughts about which things we should be interested in?

Chuck

From ralf.gommers at gmail.com Sun May 26 05:58:39 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 26 May 2019 11:58:39 +0200
Subject: [Numpy-discussion] scientific Python featured in GitHub keynote
In-Reply-To: References: Message-ID:

On Sun, May 26, 2019 at 2:19 AM Charles R Harris wrote:

> Thanks for the link. That was a lot of material to digest; do you have
> thoughts about which things we should be interested in?

The triage role will be very useful (not yet available except as beta, being rolled out over the next couple of weeks). It nicely fills the gap between "nothing" and "full write access".

The "used by" and dependency graph features will be very useful when, e.g., writing proposals. The graph is not 100% complete (no OpenBLAS link for us, for example), but it's better than anything we had before.

I'm still wrapping my head around "sponsors". It's aimed at individuals and in general not the best fit for NumPy and similar-size projects, I think, but there's a lot to like as well, and there may be more coming in that direction. For those who are interested in funding/sponsoring, this is a nice reflection on the sponsors feature: https://nadiaeghbal.com/github-sponsors

Finally, I think the Event Horizon Telescope "story" as presented in that keynote is interesting and very useful when explaining the impact of our projects. We can use that on the website and in other places.
Getting similar stories from outside physics and astronomy - the more diverse the better - will be valuable too. If there are people out there who can explain or help write up examples with large impact (say a major discovery in biology, a Nobel prize in economics, etc. using scientific Python), let's talk!

Cheers,
Ralf

From ralf.gommers at gmail.com Mon May 27 08:19:00 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 27 May 2019 14:19:00 +0200
Subject: [Numpy-discussion] acknowledging sponsorship and institutional partners
In-Reply-To: References: Message-ID:

On Sat, May 18, 2019 at 6:36 PM Ralf Gommers wrote:

> Hi all,
>
> In [1] I am adding institutional partners and sponsor logos to the
> numpy.org website. Sebastian made some good comments there, and I think it
> would be very helpful if we had some guidelines on how we acknowledge
> sponsorship.
>
> Our governance doc has clear text on what an Institutional Partner is, see
> [2]. We don't have anything written down about sponsorship though. In my
> open PR I followed the example of Jupyter (see [3]), which lists
> Institutional Partners first, followed by Sponsors.
>
> For sponsors I think we will want to define some minimum level of
> sponsorship for which we will put a logo somewhere (main page, or about
> page). Jupyter seems to just put everything together. Scikit-learn and
> NumFOCUS do the same on their front pages. NumFOCUS has tiered levels as
> well, with different benefits, and displays the tiers at [4]. Page 17 of
> the NumFOCUS sponsorship brochure [5] spells out the sponsorship levels
> very clearly: from platinum at $100k to bronze at $10k, and a special level
> for "emerging leader" (startups) below that.
>
> I think that following the NumFOCUS model would be the most
> straightforward thing to do, because (a) we're part of NumFOCUS, and (b)
> it's very well documented. It is also fairest in a way - it gives
> recognition proportional to the contribution. My PR right now lists Moore,
> Sloan and Tidelift as the 3 sponsors. The first two contributed on the
> order of $500k each (spread out over 2-3 years), while Tidelift currently
> contributes $1000/month.
>
> So I propose:
> - acknowledging all active sponsorship (within the last 12 months) by logo
> placement on numpy.org
> - acknowledging past sponsorship as well on numpy.org, but on a separate
> page and perhaps just as a listing rather than prominent logo placement
> - adopting the NumFOCUS tiered sponsorship model
> - listing institutional partners and sponsors in the same place, with
> partners first (following what Jupyter does).
>
> Thoughts?

There seem to be no further comments on this. With an edit to take the feedback of Gael and Nelle into account (let's say $100k minimum for a place on the front page for now - more seems overly ambitious right now), I will take this as "roughly agreed" and make future website edits accordingly. And I will find some time to write this up as a formal policy and add it to our governance/dev docs somewhere.
Cheers,
Ralf

> Cheers,
> Ralf
>
> [1] https://github.com/numpy/numpy.org/pull/21
> [2] https://www.numpy.org/devdocs/dev/governance/governance.html#institutional-partners-and-funding
> [3] https://jupyter.org/about
> [4] https://numfocus.org/sponsors
> [5] https://numfocus.org/wp-content/uploads/2018/07/NumFOCUS-Corporate-Sponsorship-Brochure.pdf

From matti.picus at gmail.com Mon May 27 16:05:26 2019
From: matti.picus at gmail.com (Matti Picus)
Date: Mon, 27 May 2019 23:05:26 +0300
Subject: [Numpy-discussion] numpy.org/neps not refreshing
Message-ID: <585ef500-6d5b-be5a-d779-c6f81b2687ca@picus.org.il>

When we merge a PR, a CI job on circleCI updates both https://github.com/numpy/neps and https://github.com/numpy/devdocs. These are meant to be served as github pages at http://www.numpy.org/neps/ and http://www.numpy.org/devdocs respectively.

For some reason the devdocs site is updating but the neps site is not; http://www.numpy.org/neps is still showing content from May 22 (a week ago). Could someone who is an admin of the numpy organization try to work out what is going on? It would be nice to be able to point to the updated roadmap and neps.

Matti

From stefanv at berkeley.edu Mon May 27 17:51:06 2019
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 27 May 2019 14:51:06 -0700
Subject: [Numpy-discussion] numpy.org/neps not refreshing
In-Reply-To: <585ef500-6d5b-be5a-d779-c6f81b2687ca@picus.org.il>
References: <585ef500-6d5b-be5a-d779-c6f81b2687ca@picus.org.il>
Message-ID: <20190527215106.bjrv7kaodspwj6vd@carbo>

Hi Matti,

On Mon, 27 May 2019 23:05:26 +0300, Matti Picus wrote:
> Could someone who is an admin of the numpy organization try to work out
> what is going on? It would be nice to be able to point to the updated
> roadmap and neps.

You should have push rights now; but let's not update the pages right now - I just sent GitHub a support request (I've double-checked all of our settings and see nothing wrong).

Best regards,
Stéfan

From tyler.je.reddy at gmail.com Tue May 28 15:02:13 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Tue, 28 May 2019 12:02:13 -0700
Subject: [Numpy-discussion] Community Call -- May 29/ 2019
Message-ID:

Hi,

There will be a NumPy Community Call on May 29, 2019 at 11 am Pacific Time. Anyone may join and edit the work-in-progress meeting notes: https://hackmd.io/Au2YB5QpQjyFUcCfdT1efw?view

I think we still need to sort out a link / medium for the call -- hopefully that will get added to the document in time.

Best wishes,
Tyler

From charlesr.harris at gmail.com Tue May 28 15:39:48 2019
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 28 May 2019 13:39:48 -0600
Subject: [Numpy-discussion] NumPy 1.16.4 released.
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce the release of NumPy 1.16.4, which contains several fixes for newly reported bugs. The Python versions supported in this release are 2.7 and 3.5-3.7. Downstream developers building this release should use Cython >= 0.29.2 and, if using OpenBLAS, OpenBLAS > v0.3.7. Wheels for this release can be downloaded from PyPI, and source archives and release notes are available from Github.
If you are installing using pip, you may encounter a problem with older installed versions of NumPy that pip did not delete becoming mixed with the current version, resulting in an ``ImportError``. That problem is particularly common on Debian-derived distributions due to a modified pip. The fix is to make sure all previous NumPy versions installed by pip have been removed. See #12736 for discussion of the issue.

*Contributors*

A total of 10 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- Charles Harris
- Eric Wieser
- Dennis Zollo +
- Hunter Damron +
- Jingbei Li +
- Kevin Sheppard
- Matti Picus
- Nicola Soranzo +
- Sebastian Berg
- Tyler Reddy

*Pull requests merged*

A total of 16 pull requests were merged for this release.

- gh-13392: BUG: Some PyPy versions lack PyStructSequence_InitType2.
- gh-13394: MAINT, DEP: Fix deprecated ``assertEquals()``
- gh-13396: BUG: Fix structured_to_unstructured on single-field types (backport)
- gh-13549: BLD: Make CI pass again with pytest 4.5
- gh-13552: TST: Register markers in conftest.py.
- gh-13559: BUG: Removes ValueError for empty kwargs in arraymultiter_new
- gh-13560: BUG: Add TypeError to accepted exceptions in crackfortran.
- gh-13561: BUG: Handle subarrays in descr_to_dtype
- gh-13562: BUG: Protect generators from log(0.0)
- gh-13563: BUG: Always return views from structured_to_unstructured when...
- gh-13564: BUG: Catch stderr when checking compiler version
- gh-13565: BUG: longdouble(int) does not work
- gh-13587: BUG: distutils/system_info.py fix missing subprocess import (#13523)
- gh-13620: BUG,DEP: Fix writeable flag setting for arrays without base
- gh-13641: MAINT: Prepare for the 1.16.4 release.
- gh-13644: BUG: special case object arrays when printing rel-, abs-error

Cheers,

Charles Harris

From ralf.gommers at gmail.com Tue May 28 15:45:42 2019
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 28 May 2019 21:45:42 +0200
Subject: [Numpy-discussion] [SciPy-Dev] NumPy 1.16.4 released.
In-Reply-To: References: Message-ID:

On Tue, May 28, 2019 at 9:40 PM Charles R Harris wrote:

> On behalf of the NumPy team I am pleased to announce the release of NumPy
> 1.16.4, which contains several fixes for newly reported bugs. The Python
> versions supported in this release are 2.7 and 3.5-3.7.

Thanks so much for managing yet another release Chuck!

Ralf

From tyler.je.reddy at gmail.com Wed May 29 13:18:17 2019
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Wed, 29 May 2019 10:18:17 -0700
Subject: [Numpy-discussion] Community Call -- May 29/ 2019
In-Reply-To: References: Message-ID:

I added a zoom link to the meeting notes -- we'll be capped at 40 minutes with my free account though.

On Tue, 28 May 2019 at 12:02, Tyler Reddy wrote:

> There will be a NumPy Community Call on May 29, 2019 at 11 am Pacific
> Time. Anyone may join and edit the work-in-progress meeting notes:
> https://hackmd.io/Au2YB5QpQjyFUcCfdT1efw?view
URL:

From sebastian at sipsolutions.net Fri May 31 16:45:46 2019
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 31 May 2019 13:45:46 -0700
Subject: [Numpy-discussion] Histogram density estimation (density kwarg) with out of range values
Message-ID:

Hi all,

unfortunately it was noticed in issue 13604 [0] that when histogram is used with a specified range and the `density=True` keyword argument, out of bound values are simply discarded [1]. Discarding out of bound values makes sense when the density/normed option is not used, since in that case event counts are reported. However, when out of bound values exist, the probability density should arguably not sum up to 1 anymore.

We seem to have three possible ways to continue here:

1. Call it an outright bug and fix it.
2. Add a FutureWarning, and change it later (unfortunately noisy).
3. Add a new kwarg to control what happens, and a FutureWarning which can be silenced using the new kwarg. (No change will ever happen if `range` or manual bin edges were not specified.)

If all agree that there is no reasonable use case for the current implementation, it would be tempting to simply change it, or use a FutureWarning (unfortunately forcing users to manually calculate the density). If there is any half-decent use case, the kwarg may be the nicer option.

Personally, if no one has a use case, I am slightly tending towards the "bug fix" option right now.

All the Best,

Sebastian

[0] https://github.com/numpy/numpy/issues/13604
[1] As mentioned in the documentation.
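A small sketch of the behavior Sebastian describes (the values are illustrative): with `range` given, `density=True` normalizes over only the in-range samples, so the reported density still integrates to 1 even though a sample was discarded:

```
import numpy as np

x = np.array([0.5, 1.5, 2.5, 99.0])  # 99.0 lies outside the range below
counts, edges = np.histogram(x, bins=3, range=(0, 3))
dens, _ = np.histogram(x, bins=3, range=(0, 3), density=True)

widths = np.diff(edges)
print(counts)                     # [1 1 1] -- the out-of-range value is dropped
print((dens * widths).sum())      # 1.0 -- integrates to 1 over [0, 3]

# a density accounting for all four samples (one possible "fix") would leave
# the mass of the discarded value missing:
dens_all = counts / (x.size * widths)
print((dens_all * widths).sum())  # 0.75
```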