Hi,
I am pleased to announce the availability of the first beta of NumPy
1.6.0. Due to the extensive changes in the NumPy core for this
release, the beta testing phase will last at least one month. Please
test this beta and report any problems on the NumPy mailing list.
Sources and binaries can be found at:
http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
For (preliminary) release notes see below.
Enjoy,
Ralf
=========================
NumPy 1.6.0 Release Notes
=========================
This release includes several new features as well as numerous bug fixes and
improved documentation. It is backward compatible with the 1.5.0 release, and
supports Python 2.4 - 2.7 and 3.1 - 3.2.
Highlights
==========
* Re-introduction of datetime dtype support to deal with dates in arrays.
* A new 16-bit floating point type.
* A new iterator, which improves performance of many functions.
New features
============
New 16-bit floating point type
------------------------------
This release adds support for the IEEE 754-2008 binary16 format, available as
the data type ``numpy.half``. Within Python, the type behaves similarly to
`float` or `double`, and C extensions can add support for it with the exposed
half-float API.
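As a quick illustration of the new type (``numpy.float16`` is an alias for ``numpy.half``):

```python
import numpy as np

# the IEEE 754-2008 binary16 type, exposed as np.half (alias np.float16)
a = np.array([1.0, 2.5, 65504.0], dtype=np.half)  # 65504 is the largest finite half
print(a.dtype)                  # float16
print(np.finfo(np.half).bits)   # 16
```

Within Python it behaves like the other float types; storage is two bytes per element.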
New iterator
------------
A new iterator has been added, replacing the functionality of the
existing iterator and multi-iterator with a single object and API.
This iterator works well with general memory layouts beyond C- or
Fortran-contiguous, and handles both standard NumPy and customized
broadcasting. The buffering, automatic data type conversion, and
optional output parameters offered by ufuncs, which were previously
difficult to replicate elsewhere, are now exposed by this iterator.
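A minimal sketch of the iterator as exposed in Python (``np.nditer``), doubling an array while letting the iterator choose the traversal order:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
out = np.empty_like(a)

# iterate over both operands together; nditer broadcasts them against
# each other and visits elements in an efficient memory order
it = np.nditer([a, out], op_flags=[['readonly'], ['writeonly']])
for x, y in it:
    y[...] = x * 2

print(out)
```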
Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial``
-------------------------------------------------------------------------
This release extends the set of polynomials available in the polynomial
package. In addition, a new ``window`` attribute has been added to the
classes in order to specify the range that the ``domain`` maps to. This is
mostly useful for the Laguerre, Hermite, and HermiteE polynomials, whose
natural domains are infinite, and provides a more intuitive way to get the
correct mapping of values without playing unnatural tricks with the domain.
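A sketch of the ``domain``/``window`` pair on one of the new classes (``Laguerre``; the fit degree and sample data here are illustrative):

```python
import numpy as np
from numpy.polynomial import Laguerre

# fit data sampled on [0, 10]; `domain` records the data interval and
# `window` the interval it is mapped onto before evaluating the series
x = np.linspace(0.0, 10.0, 50)
y = np.exp(-x)
p = Laguerre.fit(x, y, deg=5)
print(p.domain)   # the data range, [0., 10.]
print(p.window)   # the class's natural window
```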
Fortran assumed shape array and size function support in ``numpy.f2py``
-----------------------------------------------------------------------
F2py now supports wrapping Fortran 90 routines that use assumed shape
arrays. Previously, such routines could be called from Python, but the
corresponding Fortran routines received assumed shape arrays as zero
length arrays, which caused unpredictable results. Thanks to Lorenz
Hüdepohl for pointing out the correct way to interface routines with
assumed shape arrays.
In addition, f2py now interprets the Fortran expression ``size(array, dim)``
as ``shape(array, dim-1)``, which makes it possible to automatically
wrap Fortran routines that use the two-argument ``size`` function in
dimension specifications. Previously, users were forced to apply this
mapping manually.
Other new functions
-------------------
``numpy.ravel_multi_index`` : Converts a multi-index tuple into
an array of flat indices, applying boundary modes to the indices.
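For example (a small sketch):

```python
import numpy as np

# flat positions of (row, col) pairs in a C-ordered 3x4 array
rows = np.array([0, 1, 2])
cols = np.array([1, 3, 0])
print(np.ravel_multi_index((rows, cols), dims=(3, 4)))   # [1 7 8]

# out-of-range indices can be clipped (or wrapped) instead of raising
print(np.ravel_multi_index((rows, np.array([9, 9, 9])), dims=(3, 4),
                           mode='clip'))                 # [ 3  7 11]
```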
``numpy.einsum`` : Evaluates the Einstein summation convention. Using the
Einstein summation convention, many common multi-dimensional array operations
can be represented in a simple fashion. This function provides a way to
compute such summations.
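Two small examples of the notation (a sketch):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(3)

# matrix-vector product: the repeated index j is summed over
print(np.einsum('ij,j->i', a, b))    # [ 5 14]

# a repeated index with no output subscripts sums the diagonal (trace)
print(np.einsum('ii', np.eye(3)))    # 3.0
```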
``numpy.count_nonzero`` : Counts the number of non-zero elements in an array.
``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose
the underlying type promotion used by the ufuncs and other operations to
determine the types of outputs. They improve upon ``numpy.common_type``
and ``numpy.mintypecode``, which provide similar functionality but do
not match the ufunc implementation.
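For instance:

```python
import numpy as np

# promotion between dtypes, matching what ufuncs actually do
print(np.result_type(np.int8, np.float32))   # float32

# smallest dtype that can hold a given scalar value
print(np.min_scalar_type(255))    # uint8
print(np.min_scalar_type(-1))     # int8
print(np.min_scalar_type(3.1))    # float16
```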
Changes
=======
Changes and improvements in the NumPy core
------------------------------------------
``numpy.distutils``
-------------------
Several new compilers are supported for building NumPy: the Portland Group
Fortran compiler on OS X, the PathScale compiler suite, and the 64-bit Intel C
compiler on Linux.
``numpy.testing``
-----------------
The testing framework gained ``numpy.testing.assert_allclose``, which provides
a more convenient way to compare floating point arrays than
`assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`.
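A small usage sketch:

```python
import numpy as np
from numpy.testing import assert_allclose

expected = np.array([1.0, 2.0, 3.0])
computed = expected + 1e-9

# passes: the default relative tolerance is 1e-7
assert_allclose(computed, expected)

# the same comparison fails under a much tighter tolerance
try:
    assert_allclose(computed, expected, rtol=1e-12)
except AssertionError:
    print("mismatch reported")
```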
``C API``
---------
In addition to the APIs for the new iterator and half data type, a number
of other additions have been made to the C API. The type promotion
mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``,
``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration
``NPY_CASTING`` has been added which controls what types of casts are
permitted. This is used by the new functions ``PyArray_CanCastArrayTo``
and ``PyArray_CanCastTypeTo``. A more flexible way to handle
conversion of arbitrary python objects into arrays is exposed by
``PyArray_GetArrayParamsFromObject``.
Removed features
================
``numpy.fft``
-------------
The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`,
which were aliases for the same functions without the 'e' in the name, were
removed.
``numpy.memmap``
----------------
The `sync()` and `close()` methods of memmap were removed. Use `flush()` and
"del memmap" instead.
``numpy.lib``
-------------
The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``,
``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed.
``numpy.ma``
------------
Several deprecated items were removed from the ``numpy.ma`` module:
* ``numpy.ma.MaskedArray`` "raw_data" method
* ``numpy.ma.MaskedArray`` constructor "flag" keyword
* ``numpy.ma.make_mask`` "flag" keyword
* ``numpy.ma.allclose`` "fill_value" keyword
``numpy.distutils``
-------------------
The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include``
instead.
Checksums
=========
89f52ae0f0ea84cfcb457298190bee14
release/installers/numpy-1.6.0b1-py2.7-python.org.dmg
8dee06b362540b2c8c033399951e5386
release/installers/numpy-1.6.0b1-win32-superpack-python2.5.exe
c1b11bf48037ac8fe025bfd297969840
release/installers/numpy-1.6.0b1-win32-superpack-python2.6.exe
2b005cb359d8123bd5ee3063d9eae3ac
release/installers/numpy-1.6.0b1-win32-superpack-python2.7.exe
45627e8f63fe34011817df66722c39a5
release/installers/numpy-1.6.0b1-win32-superpack-python3.1.exe
1d8b214752b19b51ee747a6436ca1d38
release/installers/numpy-1.6.0b1-win32-superpack-python3.2.exe
aeab5881974aac595b87a848c0c6344a release/installers/numpy-1.6.0b1.tar.gz
3ffc6e308f9e0614531fa3babcb75544 release/installers/numpy-1.6.0b1.zip

---------- Forwarded message ----------
From: Ralf Gommers <ralf.gommers(a)googlemail.com>
Date: Thu, Mar 31, 2011 at 7:31 PM
Subject: Re: [Numpy-discussion] np.histogramdd of empty data
To: Nils Becker <n.becker(a)amolf.nl>
On Thu, Mar 31, 2011 at 12:33 PM, Nils Becker <n.becker(a)amolf.nl> wrote:
> Hi Ralf,
>
> I cloned numpy/master and played around a little.
>
> when giving the bins explicitely, now histogram2d and histogramdd work
> as expected in all tests i tried.
>
>
> However, some of the cases with missing bin specification appear
> somewhat inconsistent.
>
> The first question is if creating arbitrary bins for empty data and
> empty bin specification is better than raising an Exception:
>
> Specifically:
Bins of size 0 should give a meaningful error; I was just fixing that
as part of #1788 in
https://github.com/rgommers/numpy/tree/ticket-1788-histogramdd
> numpy.histogram2d([],[],bins=[0,0])
>> (array([ 0., 0.]), array([ 0.]), array([ 0.]))
Now gives:
ValueError: Element at index 0 in `bins` should be a positive integer.
> numpy.histogram([],bins=0)
>> ValueError: zero-size array to minimum.reduce without identity
Now gives:
ValueError: `bins` should be a positive integer.
> so 1-d and 2-d behave not quite the same.
>
> also, these work (although with arbitrary bin edges):
>
> numpy.histogram2d([],[],bins=[1,1])
>> (array([ 0., 0.]), array([ 0., 1.]), array([ 0., 1.]))
>
> numpy.histogram2d([],[],bins=[0,1])
>> (array([ 0., 0.]), array([ 0.]), array([ 0., 1.]))
>
> while this raises an error:
>
> numpy.histogram([],bins=1)
>> ValueError: zero-size array to minimum.reduce without identity
Now gives:
(array([0]), array([ 0., 1.]))
> another thing with non-empty data:
>
> numpy.histogram([1],bins=1)
>> (array([1]), array([ 0.5, 1.5]))
That is the correct answer.
> numpy.histogram([1],bins=0)
>> (array([], dtype=int64), array([ 0.5]))
Now gives:
ValueError: `bins` should be a positive integer.
> while
>
> numpy.histogram2d([1],[1],bins=A)
>> ValueError: zero-size array to minimum.reduce without identity
>
> (here A==[0,0] or A==[0,1] but not A==[1,1] which gives a result)
Same sensible errors now, telling you that elements of `bins` shouldn't be 0.
Cheers,
Ralf

Hi,
I would like to seriously start contributing to NumPy and/or SciPy, as much
as I possibly can.
Because NumPy & SciPy are new to me, I'm not so sure whether I could be of
any help. But if my help is welcome, and if it makes sense to ask "What
could I best contribute to?", then perhaps some background information about
myself would be useful ...
*ACADEMICS*
I completed an undergraduate degree in Computer Science, along with some
mathematics and physics courses. For the mathematics, I did some Calculus,
Linear Algebra, Differential Equations, Fourier series, Fourier & Laplace
transform. In Physics, I did up to and including a first course in Quantum
Mechanics.
I also did some undergraduate research work that involved computational
high energy physics (proton-proton collisions). The work was done with some
Mathematica packages (FeynArts <http://www.feynarts.de/>).
*INDUSTRY*
I currently work in the software industry and for the past few months, have
been performing both software development and system administration tasks.
In terms of object-oriented programming, my main experience is with Java.
I'm a beginner with Python. I used Python at work to automate the
reverse-engineering process of some Java programs.
Career-wise, one of my goals is to go into scientific computing, using
Python as the main programming language, and to perhaps pursue graduate
studies in Computational Physics; hence my current interest in contributing
to NumPy & SciPy.
If you wish to know more about me I have a LinkedIn profile at
http://www.linkedin.com/in/sylvainbellemare.
*DOCUMENTATION*
Documentation-wise, I can use LaTeX. For instance, my resume, which is
attached to this email, was written in LaTeX. It's worth mentioning
that I like to work on documentation, as long as it balances with coding
work.
If that makes any sense to you and you have ideas on projects that I could
contribute to, please let me know, and I'll be very happy to do my very best
to contribute.
Best regards,
-Sylvain

Hi,
This is a follow-up on the tickets that I had previously flagged, so I want
to thank Mark, Ralf, and everyone else for going over those!
For the ones that I followed, I generally agreed with the outcome.
Ticket 301: 'Make power and divide return floats from int inputs (like
true_divide)'
http://projects.scipy.org/numpy/ticket/301
Invalid, because the output dtype is the same as the input dtype unless
you override it using the dtype argument:
>>> np.power(3, 1, dtype=np.float128).dtype
dtype('float128')
Alternatively return a float and indicate in the docstring that the
output dtype can be changed.
Ticket 354: 'Possible inconsistency in 0-dim and scalar empty array types'
http://projects.scipy.org/numpy/ticket/354
Invalid because an empty array is not the same as an empty string.
Ticket 1071: 'loadtxt fails if the last column contains empty value'
http://projects.scipy.org/numpy/ticket/1071
Invalid mainly because loadtxt states that 'Each row in the text file
must have the same number of values.' So of course loadtxt must fail when
there are missing values.
Ticket 1374: 'Ticket 628 not fixed for Solaris (polyfit uses 100% CPU
and does not stop)'
http://projects.scipy.org/numpy/ticket/1374
Unless this can be verified, it should be set to needs_info.
Bruce

Hi all,
I'm Francesco and I am writing on behalf of "Python Italia APS", a non-profit
association promoting the EuroPython conference (www.europython.eu).
The EuroPython Call for Presentations ends on April 6th. I'd like to ask you
to forward this mail to anyone you feel may be interested.
We're looking for proposals on every aspect of Python: programming from
novice to advanced levels, applications and frameworks, or how you have been
involved in introducing Python into your organisation.
**First-time speakers are especially welcome**; EuroPython is a community
conference and we are eager to hear about your experience. If you have
friends or colleagues who have something valuable to contribute, twist their
arms to tell us about it!
Presenting at EuroPython
------------------------
We will accept a broad range of presentations, from reports on academic and
commercial projects to tutorials and case studies. As long as the
presentation is interesting and potentially useful to the Python community,
it will be considered for inclusion in the programme.
Can you show the conference-goers something new and useful? Can you show
attendees how to use a module, explore a Python language feature, or package
an application? If so, consider submitting a talk.
Talks and hands-on trainings
----------------------------
There are two different kinds of presentations that you can give as a speaker
at EuroPython:
* **Regular talk**. These are standard "talk with slides", allocated in
slots of 45, 60 or 90 minutes, depending on your preference and scheduling
constraints. A Q&A session is held at the end of the talk.
* **Hands-on training**. These are advanced training sessions for a smaller
audience (10-20 people), diving into the subject in full detail. These
sessions are 4 hours long, and the audience will be strongly encouraged to
bring a laptop to experiment. They should be prepared with fewer slides and
more source code. If possible, trainers will also give a short 30-minute
"teaser talk" the day before the training, to entice delegates into attending
the training.
In the talk submission form, we assume that you intend to give a regular
talk on the subject, but you will be asked whether you are also available
to give a hands-on training on the very same subject.
Speakers who give a hands-on training are rewarded with **free
entrance** to EuroPython to compensate for the longer preparation required,
and might also be eligible for a speaking fee (which we cannot confirm at
the moment).
Topics and goals
----------------
Specific topics for EuroPython presentations include, but are not limited
to:
- Core Python
- Other implementations: Jython, IronPython, PyPy, and Stackless
- Python libraries and extensions
- Python 3.x migration
- Databases
- Documentation
- GUI Programming
- Game Programming
- Network Programming
- Open Source Python projects
- Packaging Issues
- Programming Tools
- Project Best Practices
- Embedding and Extending
- Science and Math
- Web-based Systems
Presentation goals usually are some of the following:
- Introduce audience to a new topic they are unaware of
- Introduce audience to new developments on a well-known topic
- Show audience real-world usage scenarios for a specific topic (case study)
- Dig into advanced and relatively-unknown details on a topic
- Compare different options in the market on a topic
Community-based talk voting
---------------------------
This year, for the first time in EuroPython history, the talk voting process
is fully public. Every participant gains the right to vote on talks
submitted during the Call for Papers, as soon as they commit to their
presence at the conference by buying a ticket. See all the details on the
talk voting[1] page.
Contacts
--------
For any further questions, feel free to contact the organizers at
info(a)pycon.it. Thank you!
[1]: http://ep2011.europython.eu/talk-voting
--
->PALLA

Hi Ralf,
I cloned numpy/master and played around a little.
When giving the bins explicitly, histogram2d and histogramdd now work
as expected in all the tests I tried.
However, some of the cases with a missing bin specification appear
somewhat inconsistent.
The first question is whether creating arbitrary bins for empty data and
an empty bin specification is better than raising an exception:
Specifically:
numpy.histogram2d([],[],bins=[0,0])
> (array([ 0., 0.]), array([ 0.]), array([ 0.]))
numpy.histogram([],bins=0)
> ValueError: zero-size array to minimum.reduce without identity
So 1-d and 2-d do not behave quite the same.
Also, these work (although with arbitrary bin edges):
numpy.histogram2d([],[],bins=[1,1])
> (array([ 0., 0.]), array([ 0., 1.]), array([ 0., 1.]))
numpy.histogram2d([],[],bins=[0,1])
> (array([ 0., 0.]), array([ 0.]), array([ 0., 1.]))
while this raises an error:
numpy.histogram([],bins=1)
> ValueError: zero-size array to minimum.reduce without identity
Another thing, with non-empty data:
numpy.histogram([1],bins=1)
> (array([1]), array([ 0.5, 1.5]))
numpy.histogram([1],bins=0)
> (array([], dtype=int64), array([ 0.5]))
while
numpy.histogram2d([1],[1],bins=A)
> ValueError: zero-size array to minimum.reduce without identity
(here A==[0,0] or A==[0,1] but not A==[1,1] which gives a result)
Nils

Hi all.
Sorry if this question has already been asked. I've searched the archive, but
could not find anything related, so here is my question.
I'm using np.histogram on a 4000x4000 array, with 200 bins each. I do that
along both dimensions, meaning I compute 8000 histograms. It takes around 5
seconds (which is of course quite fast).
I was wondering why np.histogram does not accept an axis parameter, so that
it could work directly on the array without me having to write a loop.
Or maybe I missed some parameter of np.histogram.
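(For what it's worth, with fixed bin edges the loop is a one-liner; a sketch with a smaller array, not a replacement for a real axis argument:)

```python
import numpy as np

data = np.random.rand(100, 1000)
edges = np.linspace(0.0, 1.0, 201)    # 200 shared bins

# np.histogram has no axis argument, so: one histogram per row
hists = np.array([np.histogram(row, bins=edges)[0] for row in data])
print(hists.shape)    # (100, 200)
```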
Thanks.
Éric.
Un clavier azerty en vaut deux
----------------------------------------------------------
Éric Depagne eric(a)depagne.org

>>> from numpy import inf, array
>>> inf*0
nan
(ok)
>>> array(inf) * 0.0
StdErr: Warning: invalid value encountered in multiply
nan
My looped calculations produce this warning thousands of times, slowing
computations and making the text output completely unreadable.
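As a workaround (standard NumPy error-state control, not a fix for the zero-dimensional inconsistency itself), the warning can be silenced locally:

```python
import numpy as np

# inf * 0 legitimately produces nan; suppress only the 'invalid' warning,
# and only inside this block
with np.errstate(invalid='ignore'):
    r = np.array(np.inf) * 0.0

print(r)   # nan
```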
>>> from numpy import __version__
>>> __version__
'2.0.0.dev-1fe8136'
D.

numpy/lib/test_io.py only uses StringIO in the tests, no actual csv file.
If I give the filename, then I get a TypeError: Can't convert 'bytes'
object to str implicitly
from the statsmodels mailing list example
>>>> data = recfromtxt(open('./star98.csv', "U"), delimiter=",", skip_header=1, dtype=float)
> Traceback (most recent call last):
> File "<pyshell#30>", line 1, in <module>
> data = recfromtxt(open('./star98.csv', "U"), delimiter=",",
> skip_header=1, dtype=float)
> File "C:\Programs\Python32\lib\site-packages\numpy\lib\npyio.py",
> line 1633, in recfromtxt
> output = genfromtxt(fname, **kwargs)
> File "C:\Programs\Python32\lib\site-packages\numpy\lib\npyio.py",
> line 1181, in genfromtxt
> first_values = split_line(first_line)
> File "C:\Programs\Python32\lib\site-packages\numpy\lib\_iotools.py",
> line 206, in _delimited_splitter
> line = line.split(self.comments)[0].strip(asbytes(" \r\n"))
> TypeError: Can't convert 'bytes' object to str implicitly
>
> line 1184 in npyio (py32 sourcefile)
>
> if isinstance(fname, str):
> fhd = np.lib._datasource.open(fname, 'U')
>
> seems to be the culprit for my case
Changing the open mode to binary solved this problem for me:
fhd = np.lib._datasource.open(fname, 'Ub')
(I still have other errors but don't know yet where they are coming from.)
Almost all problems with porting statsmodels to Python 3.2 so far are
input related, mainly reading csv files, which are heavily used in the
tests. All the "real" code seems to work fine with numpy and scipy
(and matplotlib so far) on Python 3.2.
Josef