For teaching it is certainly nice to have numpy.polynomial.polynomial.polyfit
providing the modern (vs. traditional) coefficient order, but
- it is rather buried
- np.polyfit uses the traditional order and has the same name
I recall there was some controversy (?) over all of this,
but might it not be appropriate to add a keyword argument to
both, specifying whether the coefficient order is to be modern
or traditional (in both polyfits and polyvals)?
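For anyone who hasn't hit this, a quick illustration of the two conventions
(data chosen arbitrarily; floating-point output may differ in the last digits):

>>> import numpy as np
>>> x = np.array([0., 1., 2., 3.])
>>> y = 1. + 2.*x + 3.*x**2
>>> np.polyfit(x, y, 2)            # traditional: highest degree first
array([ 3.,  2.,  1.])
>>> np.polynomial.polynomial.polyfit(x, y, 2)   # modern: lowest degree first
array([ 1.,  2.,  3.])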
To repeat rows or columns of an array,
I can use np.repeat as suggested by pv. However, looking at the flags of
the resulting array, the data seems to be copied and actually repeated in
memory. This is not practical if I want a 1000x repetition.
What are the other options for such a repeat?
In the SciPy lecture notes, there is a suggestion to use as_strided:
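A sketch of that approach (assuming a 1-D input repeated along a new first
axis; note that as_strided does no bounds checking, so the shape and strides
must be kept consistent):

import numpy as np
from numpy.lib.stride_tricks import as_strided

data = np.arange(3)
# Give the repeated axis a stride of 0 bytes: every "row" aliases the
# same memory, so nothing is copied even for a 1000x repetition.
repeated = as_strided(data, shape=(1000, data.size),
                      strides=(0, data.strides[0]))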
Otherwise, I see broadcast_arrays:
>>> N = 3
>>> data = np.arange(N)
>>> np.broadcast_arrays(data[:, None], np.zeros((1, 2)))
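Checking the strides of the broadcast result confirms it is a view (values
shown for the default int64 dtype on a 64-bit platform):

>>> xb, _ = np.broadcast_arrays(data[:, None], np.zeros((1, 2)))
>>> xb.strides
(8, 0)

The stride of 0 on the repeated axis means no copy was made.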
This works, but it feels like invoking a magic formula. Did I miss a
simpler function?
I'm glad to inform you about the new OpenOpt Suite release 0.52 (2013-Dec-15):
- Minor interalg speedup
- MATLAB solvers fmincon and fsolve have been connected
- Several MATLAB ODE solvers have been connected
- New ODE solvers, parameters abstol and reltol
- New GLP solver: direct
- Some minor bugfixes and improvements
I'm happy to announce the availability of the SciPy 0.13.2 release. This is
a bugfix-only release; it contains fixes for ndimage and optimize, and most
importantly was compiled with Cython 0.19.2 to fix memory leaks in code
using Cython fused types.
Source tarballs, binaries and release notes can be found at
SciPy 0.13.2 Release Notes
SciPy 0.13.2 is a bug-fix release with no new features compared to 0.13.1.
- 3096: require Cython 0.19, earlier versions have memory leaks in fused types
- 3079: ``ndimage.label`` fix swapped 64-bitness test
- 3108: ``optimize.fmin_slsqp`` constraint violation
With the new np.partition functionality, there is a more efficient, but
also less obvious, way of extracting the n largest (or smallest) elements
from an array, i.e.:
import numpy as np

def smallest_n(a, n):
    # Partition so the n smallest values occupy a[:n], then sort only those.
    return np.sort(np.partition(a, n)[:n])

def argsmallest_n(a, n):
    # Indices of the n smallest values, ordered by ascending value.
    ret = np.argpartition(a, n)[:n]
    b = np.take(a, ret)
    return np.take(ret, np.argsort(b))
instead of the usual full sort, e.g. ``np.sort(a)[:n]``.
Are those 4 functions (smallest, argsmallest, largest, arglargest), with
adequate axis support, worthy of including in numpy, or is the namespace
already too cluttered?
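For reference, a quick interactive check of the two helpers above (values
arbitrary):

>>> a = np.array([9, 1, 7, 3, 5])
>>> smallest_n(a, 3)
array([1, 3, 5])
>>> argsmallest_n(a, 3)
array([1, 3, 4])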
Announcing HDF5 for Python (h5py) 2.2.1
The h5py team is happy to announce the availability of h5py 2.2.1.
This release fixes a critical bug reported by Jim Parker on December 7th, which
affects code using HDF5 compound types.
We recommend that all users of h5py 2.2.0 upgrade to avoid crashes or possible
data corruption.
About h5py, downloads, documentation: http://www.h5py.org
Scope of bug
The issue affects a feature introduced in h5py 2.2.0, in which HDF5 compound
datasets may be updated in-place, by specifying a field name or names when
writing to the dataset:
>>> dataset['field_name'] = value
Under certain conditions, h5py can supply uninitialized memory to the HDF5
conversion machinery, leading (in the case reported) to a segmentation fault.
It is also possible for other fields of the type to be corrupted.
This issue affects only code which updates a subset of the fields in the
compound type. Programs reading from a compound type, writing all fields, or
using other datatypes, are not affected; nor are versions of h5py
prior to 2.2.0.
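For concreteness, a minimal sketch of the affected write pattern (the file,
dataset, and field names here are made up for illustration):

import numpy as np
import h5py

dt = np.dtype([('temp', 'f8'), ('pressure', 'f8')])
with h5py.File('example.h5', 'w') as f:
    ds = f.create_dataset('readings', (100,), dtype=dt)
    # Updating a single field of the compound type in place --
    # the pattern affected by the bug in 2.2.0.
    ds['temp'] = np.linspace(0.0, 1.0, 100)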
Github issue: https://github.com/h5py/h5py/issues/372
Original thread: https://groups.google.com/forum/#!topic/h5py/AbUOZ1MXf3U
Thanks also to Christoph Gohlke for making Windows installers available on very
short notice, after a glitch in the h5py build system.
I've uploaded numpy/scipy binaries as wheel builds for testing at
https://code.google.com/p/mingw-w64-static/. The binaries have been built
with the help of a customized mingw-w64 toolchain and a recent (git) OpenBLAS.
There are a few ongoing discussions about packaging for the scientific
Python stack on the NumFOCUS and distutils lists.
One of the things that we should start doing for numpy is distributing
releases as wheels. On OS X at least this is quite simple, so I propose to
just experiment with it. I can create some to try out and put them in a
separate folder on SourceForge. If that works, they can be put on PyPI.
For Windows things are less simple, because the wheel format doesn't handle
the multiple builds (no SSE, SSE2, SSE3) that are in the superpack
installers. A problem is that we don't really know how many users still
have old CPUs that don't support SSE3. The impact on those users is high:
numpy will install but crash (see https://github.com/scipy/scipy/issues/1697).
1. Does anyone have a good idea for obtaining statistics?
2. In the absence of statistics, can we do an experiment by putting one
wheel containing SSE3 instructions up on PyPI (for Python 3.3, I propose)
and seeing for how many users (if any) this goes wrong?
P.S. A related question: did anyone check whether the recently merged
NPY_HAVE_SSE2_INTRINSIC puts SSE2 instructions into the no-SSE binary?