I'm building NumPy 1.6.2 for Python 2.5 on CentOS 5.8.
I ran into a problem where bdist_egg was not working. It seems there is
a minor bug in numpy/distutils/core.py: under Python 2.5 the check for
setuptools does not work, so the bdist target for eggs is not available.
I've attached a patch that works around the issue for me. It is my
understanding that python 2.5 should still be a valid target for
building this release. If not, ignore this.
--
J.

The syntax "numpy.complex(A)" seems to be the most natural and obvious
thing a user would want for casting an array A to complex values.
Expressions like "A.astype(complex)", "array(A, dtype=complex)",
"numpy.complex128(A)" are less obvious, especially the last two, which
look a bit far-fetched.
Of course, these tricks can be learned. But Python is a language where
natural and obvious things most often work as expected. Here, that is
not the case.
It also breaks the Principle of Least Astonishment, by comparison with
"numpy.real(A)".
> numpy.complex is just a reference to the built in complex, so only works
> on scalars:
>
> In [5]: numpy.complex is complex
> Out[5]: True
Thank you for pointing this out.
What is the use of storing the "complex()" built-in function in the
numpy namespace, when it is already accessible from everywhere?
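For anyone following along, a quick demonstration of the difference, using the built-in complex directly (which is all numpy.complex refers to):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])

# complex() is a scalar constructor, so it rejects arrays of size > 1:
try:
    complex(A)
except TypeError as e:
    print("complex(A) fails:", e)

# The explicit casts all yield a complex-valued array:
print(A.astype(complex))                 # [1.+0.j 2.+0.j 3.+0.j]
print(np.array(A, dtype=complex).dtype)  # complex128
```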
Best regards,
--
O.C.

The attached program leaks about 24 bytes per loop. The comments give a
bit more detail as to when the leak does and does not occur. How can I
track down where this leak is actually coming from?
Here is a sample run on my machine:
$ python simple.py
Python Version: 2.7.3 (default, Apr 20 2012, 22:39:59)
[GCC 4.6.3]
numpy version: 1.6.1
/etc/lsb-release:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS"
110567424.0 24.465408
135168000.0 24.600576
159768576.0 24.600576
184369152.0 24.600576
208969728.0 24.600576
233570304.0 24.600576
258035712.0 24.465408
282636288.0 24.600576
307236864.0 24.600576
331837440.0 24.600576
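The attachment isn't inlined here, but one generic way to localize this kind of growth (on Python 3.4+, which has tracemalloc in the stdlib; on 2.7 a third-party tool such as heapy would be needed) is to diff allocation snapshots around the loop. A sketch with a stand-in loop body:

```python
import tracemalloc
import numpy as np

def loop_body():
    # Stand-in for the leaking loop in simple.py (the real attachment is
    # not shown here); replace with the suspect code.
    a = np.arange(1000, dtype=np.float64)
    return a.sum()

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(1000):
    loop_body()
after = tracemalloc.take_snapshot()

# The allocation sites responsible for the most net growth:
for stat in after.compare_to(before, "lineno")[:5]:
    print(stat)
```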

Hi,
I am a seasoned numpy/pandas user mainly interested in financial
applications. These and other applications would greatly benefit from a
decimal data type with flexible rounding rules, precision etc.
Yes, there is cdecimal, the traditional decimal module from the Python
stdlib rewritten in C,
- http://www.bytereef.org/mpdecimal/index.html -
which has become part of the stdlib as of Python 3.3.
However, it appears that cdecimal cannot be meaningfully used with numpy
(see the benchmark below). Squaring an n=10000 ndarray is about 1500 times
faster with float64 than with a dtype=object ndarray based on
cdecimal.Decimal, and some simple operations fail outright.
I am not deeply enough into ufuncs etc. to judge if some of these
problems can be avoided with a few lines of Python code. However, my
impression is that ultimately we would all benefit from cdecimal.Decimal
becoming a native numpy type. Put bluntly, cdecimal is a great tool. But
it is not yet where we most need it.
The author of cdecimal, Stefan Krah, would probably have a great deal of
the skillset needed to successfully take such a project forward. He
happens to have also written the new memoryview implementation of Python
3.3. And from recent correspondence I understand he might be willing to
get involved in an effort to marry numpy and cdecimal.
The main question is if such project would fit into what core developers
see as the future of numpy.
Regards
Leo
And here is the benchmark:
In [1]: from numpy import *
In [2]: from cdecimal import Decimal
In [3]: r=random.rand(10000)
In [4]: d=ndarray(10000, dtype=Decimal)
In [5]: d.dtype
Out[5]: dtype('object')
In [6]: r.dtype
Out[6]: dtype('float64')
In [7]: for i in range(10000): d[i] = Decimal(r[i])
In [8]: %timeit r**2
100000 loops, best of 3: 14.7 us per loop
In [9]: %timeit d**2
10 loops, best of 3: 21.2 ms per loop
In [10]: r.var()
Out[10]: 0.082478142261349557
In [11]: d.var()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
C:\<ipython-input-11-bf09d28e33ab> in <module>()
----> 1 d.var()
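A minimal sketch of why dtype=object is so slow, using the stdlib decimal module (which is the same C implementation as cdecimal from Python 3.3 on): every element is a boxed Python object, so each operation dispatches to a Python-level Decimal method instead of a vectorized native loop.

```python
import numpy as np
from decimal import Decimal  # same C implementation as cdecimal from 3.3 on

r = np.random.rand(10)
d = np.array([Decimal(str(x)) for x in r], dtype=object)

print(d.dtype)       # object: numpy stores references, not native decimals
sq = d ** 2          # works, but goes through Decimal.__pow__ per element
print(type(sq[0]))
```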

Hi,
I have a problem using histogram2d:
from numpy import linspace, histogram2d
bins_x = linspace(-180., 180., 360)
bins_y = linspace(-90., 90., 180)
data_x = linspace(-179.96875, 179.96875, 5760)
data_y = linspace(-89.96875, 89.96875, 2880)
histogram2d(data_x, data_y, (bins_x, bins_y))
AttributeError: The dimension of bins must be equal to the dimension of
the sample x.
I would expect histogram2d to return a 2d array of shape (360,180), which
is full of 256s. What am I missing here?
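My guess at what is going wrong (a sketch, not a claim about 1.6.x internals): data_x and data_y have different lengths (5760 vs 2880), but histogram2d expects one (x, y) pair per sample, so the two coordinate axes need to be expanded to a grid first. Also note that linspace(-180., 180., 360) produces 360 bin *edges*, i.e. 359 bins, so a (360, 180) result is not possible with those inputs. A scaled-down version (1/10 resolution, so it runs quickly) where every bin does come out to 256:

```python
import numpy as np

bins_x = np.linspace(-180., 180., 37)    # 37 edges -> 36 bins
bins_y = np.linspace(-90., 90., 19)      # 19 edges -> 18 bins
data_x = np.linspace(-179.6875, 179.6875, 576)
data_y = np.linspace(-89.6875, 89.6875, 288)

# histogram2d wants two same-length 1-D sample arrays (one (x, y) pair
# per observation), so expand the coordinate axes to a full grid:
xx, yy = np.meshgrid(data_x, data_y)
H, xedges, yedges = np.histogram2d(xx.ravel(), yy.ravel(),
                                   bins=(bins_x, bins_y))
print(H.shape)      # (36, 18): N edges define N-1 bins
print(np.unique(H)) # [256.]: 16 x-points times 16 y-points per 2-D bin
```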
Cheers,
Andreas.

==========================
Announcing PyTables 2.4.0
==========================
We are happy to announce PyTables 2.4.0.
This is an incremental release which includes many changes to prepare
for future Python 3 support.
What's new
==========
This release includes support for the float16 data type and read-only
support for variable length string attributes.
The handling of HDF5 errors has been improved. The user will no longer
see HDF5 error stacks dumped to the console. All HDF5 error messages
are trapped and attached to a proper Python exception.
Now PyTables only supports HDF5 v1.8.4+. All the code has been updated
to the new HDF5 API. Supporting only HDF5 1.8 series is beneficial for
future development.
Documentation has been improved.
As always, a large number of bugs have been addressed and squashed as well.
In case you want to know more in detail what has changed in this
version, please refer to:
http://pytables.github.com/release_notes.html
You can download a source package with generated PDF and HTML docs, as
well as binaries for Windows, from:
http://sourceforge.net/projects/pytables/files/pytables/2.4.0
For an online version of the manual, visit:
http://pytables.github.com/usersguide/index.html
What is it?
===========
PyTables is a library for managing hierarchical datasets, designed to
efficiently cope with extremely large amounts of data, with support for
full 64-bit file addressing. PyTables runs on top of the HDF5 library
and the NumPy package to achieve maximum throughput and convenient use.
PyTables includes OPSI, a new indexing technology that allows performing
data lookups in tables exceeding 10 gigarows (10**10 rows) in less than
a tenth of a second.
Resources
=========
About PyTables: http://www.pytables.org
About the HDF5 library: http://hdfgroup.org/HDF5/
About NumPy: http://numpy.scipy.org/
Acknowledgments
===============
Thanks to the many users who provided feature improvements, patches, bug
reports, support and suggestions. See the ``THANKS`` file in the
distribution package for an (incomplete) list of contributors. Most
especially, a lot of kudos go to the HDF5 and NumPy (and numarray!)
makers. Without them, PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.
----
**Enjoy data!**
-- The PyTables Team

Hi,
I am pleased to announce the availability of the first release candidate of
SciPy 0.11.0. For this release many new features have been added, and over
120 tickets and pull requests have been closed. Also noteworthy is that the
number of contributors for this release has risen to over 50. Some of the
highlights are:
- A new module, sparse.csgraph, has been added which provides a number of
common sparse graph algorithms.
- New unified interfaces to the existing optimization and root finding
functions have been added.
Sources and binaries can be found at
https://sourceforge.net/projects/scipy/files/scipy/0.11.0rc1/, release
notes are copied below.
Please try this release candidate and report any problems on the scipy
mailing lists.
Cheers,
Ralf
==========================
SciPy 0.11.0 Release Notes
==========================
.. note:: Scipy 0.11.0 is not released yet!
.. contents::
SciPy 0.11.0 is the culmination of 8 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. Highlights of this release are:
- A new module has been added which provides a number of common sparse
  graph algorithms.
- New unified interfaces to the existing optimization and root finding
functions have been added.
All users are encouraged to upgrade to this release, as there are a large
number of bug-fixes and optimizations. Our development attention will now
shift to bug-fix releases on the 0.11.x branch, and on adding new features
on the master branch.
This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater.
New features
============
Sparse Graph Submodule
----------------------
The new submodule :mod:`scipy.sparse.csgraph` implements a number of
efficient graph algorithms for graphs stored as sparse adjacency matrices.
Available routines are:
- :func:`connected_components` - determine connected components of a graph
- :func:`laplacian` - compute the laplacian of a graph
- :func:`shortest_path` - compute the shortest path between points on a
  positive graph
- :func:`dijkstra` - use Dijkstra's algorithm for shortest path
- :func:`floyd_warshall` - use the Floyd-Warshall algorithm for shortest path
- :func:`breadth_first_order` - compute a breadth-first order of nodes
- :func:`depth_first_order` - compute a depth-first order of nodes
- :func:`breadth_first_tree` - construct the breadth-first tree from a
  given node
- :func:`depth_first_tree` - construct a depth-first tree from a given node
- :func:`minimum_spanning_tree` - construct the minimum spanning tree of
  a graph
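A small sketch of the new submodule in use (assuming a scipy build that includes csgraph, i.e. 0.11 or later):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components, shortest_path

# A small undirected graph as a sparse adjacency matrix:
# 0 -- 1    2 -- 3   (two separate components)
adj = csr_matrix(np.array([[0, 1, 0, 0],
                           [1, 0, 0, 0],
                           [0, 0, 0, 1],
                           [0, 0, 1, 0]]))

n_components, labels = connected_components(adj, directed=False)
print(n_components)            # 2
dist = shortest_path(adj, directed=False)
print(dist[0, 1], dist[0, 2])  # 1.0 inf (no path between components)
```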
``scipy.optimize`` improvements
-------------------------------
The optimize module has received a lot of attention this release. In
addition to added tests, documentation improvements, bug fixes and code
clean-up, the following improvements were made:
- A unified interface to minimizers of univariate and multivariate
functions has been added.
- A unified interface to root finding algorithms for multivariate functions
has been added.
- The L-BFGS-B algorithm has been updated to version 3.0.
Unified interfaces to minimizers
````````````````````````````````
Two new functions ``scipy.optimize.minimize`` and
``scipy.optimize.minimize_scalar`` were added to provide a common interface
to minimizers of multivariate and univariate functions respectively.
For multivariate functions, ``scipy.optimize.minimize`` provides an
interface to methods for unconstrained optimization (`fmin`, `fmin_powell`,
`fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained
optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`).
For univariate functions, ``scipy.optimize.minimize_scalar`` provides an
interface to methods for unconstrained and bounded optimization (`brent`,
`golden`, `fminbound`).
This makes it easier to compare and switch between solvers.
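As a sketch of how the unified interface reads in practice (method names as in the 0.11 docs):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Multivariate: one entry point, the algorithm selected via `method`.
def rosen(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

res = minimize(rosen, x0=[0.0, 0.0], method="BFGS")
print(res.x)        # close to the minimum at [1, 1]

# Univariate: same idea via minimize_scalar.
res1d = minimize_scalar(lambda x: (x - 2.0)**2, method="brent")
print(res1d.x)      # close to 2.0
```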
Unified interface to root finding algorithms
````````````````````````````````````````````
The new function ``scipy.optimize.root`` provides a common interface to
root finding algorithms for multivariate functions, embedding the `fsolve`,
`leastsq` and `nonlin` solvers.
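A sketch of the new interface (the "hybr" method wraps MINPACK's hybrid algorithm, as used by `fsolve`):

```python
import numpy as np
from scipy.optimize import root

def f(x):
    # A simple system F(x) = 0 with solution x = (1, 1):
    return [x[0] + x[1] - 2.0, x[0] - x[1]]

sol = root(f, x0=[0.0, 0.0], method="hybr")
print(sol.success, sol.x)   # converges to [1, 1]
```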
``scipy.linalg`` improvements
-----------------------------
New matrix equation solvers
```````````````````````````
Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``),
discrete and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``,
``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous
algebraic Riccati equations (``scipy.linalg.solve_continuous_are``,
``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``.
These solvers are often used in the field of linear control theory.
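For instance (a quick sketch that checks the residual rather than a known closed form):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Solve A X + X B = Q for X, then verify the residual.
rng = np.random.RandomState(0)
A = rng.rand(3, 3)
B = rng.rand(3, 3)
Q = rng.rand(3, 3)

X = solve_sylvester(A, B, Q)
print(np.allclose(A.dot(X) + X.dot(B), Q))   # True
```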
QZ and QR Decomposition
````````````````````````
It is now possible to calculate the QZ, or Generalized Schur, decomposition
using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges,
dgges, cgges, and zgges.
The function ``scipy.linalg.qr_multiply``, which allows efficient
computation of the matrix product of Q (from a QR decomposition) and a
vector, has been added.
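A sketch of the QZ factorization, verified by reassembling the inputs:

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.RandomState(1)
A = rng.rand(4, 4)
B = rng.rand(4, 4)

# Generalized Schur decomposition: A = Q @ AA @ Z.T, B = Q @ BB @ Z.T
AA, BB, Q, Z = qz(A, B)
print(np.allclose(Q.dot(AA).dot(Z.T), A))   # True
print(np.allclose(Q.dot(BB).dot(Z.T), B))   # True
```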
Pascal matrices
```````````````
A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added.
Sparse matrix construction and operations
-----------------------------------------
Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``,
were added to easily construct diagonal and block-diagonal sparse matrices
respectively.
``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations
``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``,
``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``,
``rad2deg``, ``floor``, ``ceil`` and ``trunc``. Previously, these
operations had to be performed by operating on the matrices' ``data``
attribute.
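A short sketch of the constructors, plus one of the new elementwise methods:

```python
import numpy as np
from scipy.sparse import diags, block_diag

# Tridiagonal matrix built from its three diagonals:
T = diags([[-1.0] * 3, [2.0] * 4, [-1.0] * 3], offsets=[-1, 0, 1])
print(T.toarray())

# Block-diagonal matrix assembled from two dense blocks:
B = block_diag([np.eye(2), 3.0 * np.eye(1)])
print(B.toarray())

# Elementwise methods now work directly on CSR/CSC matrices:
print(T.tocsr().sign().toarray())
```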
LSMR iterative solver
---------------------
LSMR, an iterative method for solving (sparse) linear and linear
least-squares systems, was added as ``scipy.sparse.linalg.lsmr``.
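A minimal sketch on an overdetermined but consistent system:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsmr

# Overdetermined system whose least-squares solution is x = (1, 1):
A = csr_matrix(np.array([[1.0, 0.0],
                         [1.0, 1.0],
                         [0.0, 2.0]]))
b = np.array([1.0, 2.0, 2.0])

x = lsmr(A, b)[0]
print(x)   # approximately [1. 1.]
```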
Discrete Sine Transform
-----------------------
Bindings for the discrete sine transform functions have been added to
``scipy.fftpack``.
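For example (using the orthonormalized variant so that the round trip is exact):

```python
import numpy as np
from scipy.fftpack import dst, idst

x = np.array([1.0, 2.0, 3.0, 4.0])
y = dst(x, type=2, norm='ortho')
# With norm='ortho', idst of the matching type inverts the transform:
print(np.allclose(idst(y, type=2, norm='ortho'), x))   # True
```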
``scipy.interpolate`` improvements
----------------------------------
For interpolation in spherical coordinates, the three classes
``scipy.interpolate.SmoothSphereBivariateSpline``,
``scipy.interpolate.LSQSphereBivariateSpline``, and
``scipy.interpolate.RectSphereBivariateSpline`` have been added.
Binned statistics (``scipy.stats``)
-----------------------------------
The stats module has gained functions to do binned statistics, which are a
generalization of histograms, in 1-D, 2-D and multiple dimensions:
``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and
``scipy.stats.binned_statistic_dd``.
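A sketch of the 1-D variant, computing a per-bin mean rather than a plain count:

```python
import numpy as np
from scipy.stats import binned_statistic

x = np.array([0.5, 1.5, 1.6, 2.5])
values = np.array([1.0, 2.0, 4.0, 8.0])

# Mean of `values` within each of 3 equal-width bins over [0, 3):
stat, edges, binnumber = binned_statistic(x, values, statistic='mean',
                                          bins=3, range=(0, 3))
print(stat)    # [1. 3. 8.]
```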
Deprecated features
===================
``scipy.sparse.cs_graph_components`` has been made a part of the sparse
graph submodule, and renamed to
``scipy.sparse.csgraph.connected_components``. Calling the former routine
will result in a deprecation warning.
``scipy.misc.radon`` has been deprecated. A more full-featured radon
transform can be found in scikits-image.
``scipy.io.save_as_module`` has been deprecated. A better way to save
multiple Numpy arrays is the ``numpy.savez`` function.
The `xa` and `xb` parameters for all distributions in
``scipy.stats.distributions`` were already unused; they have now been
deprecated.
Backwards incompatible changes
==============================
Removal of ``scipy.maxentropy``
-------------------------------
The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release,
has been removed. Logistic regression in scikits.learn is a good and modern
alternative for this functionality.
Minor change in behavior of ``splev``
-------------------------------------
The spline evaluation function now behaves similarly to ``interp1d``
for size-1 arrays. Previous behavior::
>>> from scipy.interpolate import splev, splrep, interp1d
>>> x = [1,2,3,4,5]
>>> y = [4,5,6,7,8]
>>> tck = splrep(x, y)
>>> splev([1], tck)
4.
>>> splev(1, tck)
4.
Corrected behavior::
>>> splev([1], tck)
array([ 4.])
>>> splev(1, tck)
array(4.)
This also affects the ``UnivariateSpline`` classes.
Behavior of ``scipy.integrate.complex_ode``
-------------------------------------------
The behavior of the ``y`` attribute of ``complex_ode`` is changed.
Previously, it expressed the complex-valued solution in the form::
z = ode.y[::2] + 1j * ode.y[1::2]
Now, it is directly the complex-valued solution::
z = ode.y
Minor change in behavior of T-tests
-----------------------------------
The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and
``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN
instead of 1.
Other changes
=============
The SuperLU sources in ``scipy.sparse.linalg`` have been updated to
version 4.3 from upstream.
The function ``scipy.signal.bode``, which calculates magnitude and phase
data for a continuous-time system, has been added.
The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare
samples with unequal variances, i.e. Welch's T-test.
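A sketch of the new keyword; the sample data here is made up for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.RandomState(42)
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.0, 5.0, size=50)

t_pooled, p_pooled = ttest_ind(a, b)                 # assumes equal variances
t_welch, p_welch = ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(p_pooled, p_welch)
```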
``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument.
Authors
=======
This release contains work by the following people (contributed at least
one patch to this release, names in alphabetical order):
* Jeff Armstrong
* Chad Baker
* Brandon Beacher +
* behrisch +
* borishim +
* Matthew Brett
* Lars Buitinck
* Luis Pedro Coelho +
* Johann Cohen-Tanugi
* David Cournapeau
* dougal +
* Ali Ebrahim +
* endolith +
* Bjørn Forsman +
* Robert Gantner +
* Sebastian Gassner +
* Christoph Gohlke
* Ralf Gommers
* Yaroslav Halchenko
* Charles Harris
* Jonathan Helmus +
* Andreas Hilboll +
* Marc Honnorat +
* Jonathan Hunt +
* Maxim Ivanov +
* Thouis (Ray) Jones
* Christopher Kuster +
* Josh Lawrence +
* Denis Laxalde +
* Travis Oliphant
* Joonas Paalasmaa +
* Fabian Pedregosa
* Josef Perktold
* Gavin Price +
* Jim Radford +
* Andrew Schein +
* Skipper Seabold
* Jacob Silterra +
* Scott Sinclair
* Alexis Tabary +
* Martin Teichmann
* Matt Terry +
* Nicky van Foreest +
* Jacob Vanderplas
* Patrick Varilly +
* Pauli Virtanen
* Nils Wagner +
* Darryl Wally +
* Stefan van der Walt
* Liming Wang +
* David Warde-Farley +
* Warren Weckesser
* Sebastian Werk +
* Mike Wimmer +
* Tony S Yu +
A total of 55 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.

Hi All,
Travis and I agree that it would be appropriate to remove the current 1.7.x
branch and branch again after a code freeze. That way we can avoid the pain
and potential errors of backports. It is considered bad form to mess with
public repositories that way, so another option would be to rename the
branch, although I'm not sure how well that would work. Suggestions?
I've forward ported the 1.7 release notes, which probably should have been
in master to start with. Are there any other commits that should be forward
ported?
Chuck

Hi, folks! Having a problem with the Windows installer; first, the
"back-story": I have both Python 2.7 and 3.2 installed. When I run the
installer and click next on the first dialog, I get the message that I need
Python 2.7, which was not found in my registry. I ran regedit and searched
for Python and get multiple hits on both Python 2.7 and 3.2. So, precisely
which registry key has to have the value Python 2.7 for the installer to
find it? Thanks!
OlyDLG