Hi All,

I'm delighted to announce the release of NumPy 1.11.0rc1. Hopefully the issues discovered in 1.11.0b3 have been dealt with and this release can go on to become the official release. Source files and documentation can be found on SourceForge (https://sourceforge.net/projects/numpy/files/NumPy/1.11.0rc1/), while source files and OS X wheels for Python 2.7, 3.3, 3.4, and 3.5 can be installed from PyPI. Please test thoroughly.

Chuck
On Mon, Feb 22, 2016 at 6:47 PM, Charles R Harris wrote:
<snip>

Issues reported by Christoph at https://github.com/numpy/numpy/issues/7316.

Chuck
On 23.02.2016, 03:47, Charles R Harris wrote:
<snip>
Christoph reports the following problem that I am unable to reproduce on appveyor or find reported elsewhere. On all 32-bit platforms:

============================================================
ERROR: test_zeros_big (test_multiarray.TestCreation)
------------------------------------------------------------
Traceback (most recent call last):
  File "X:\Python27\lib\site-packages\numpy\core\tests\test_multiarray.py", line 594, in test_zeros_big
    d = np.zeros((30 * 1024**2,), dtype=dt)
MemoryError

I would be much obliged if someone else could demonstrate it.

<snip>

Chuck
That test needs about 500 MB of memory on Windows, because Windows doesn't have sparse allocations like most *nixes do. It used to fail for me during release testing when I only gave the Windows VM 1 GB of RAM. If it's a problem for CI we could disable it on Windows, or at least skip the complex double case.

On 23.02.2016 21:40, Charles R Harris wrote:
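For scale, a back-of-the-envelope sketch (mine, not from the thread) of what that allocation costs per dtype; the complex double case is the largest single contiguous request:

```python
import numpy as np

# Size of np.zeros((30 * 1024**2,), dtype=dt) for a few of the tested
# dtypes. Without lazy (sparse) allocation, every byte is committed.
n = 30 * 1024**2
for dt in (np.int8, np.float64, np.complex128):
    mib = n * np.dtype(dt).itemsize / 1024**2
    print(f"{np.dtype(dt).name}: {mib:.0f} MiB")
# int8: 30 MiB, float64: 240 MiB, complex128: 480 MiB
```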
<snip>
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org https://mail.scipy.org/mailman/listinfo/numpy-discussion
On Tue, Feb 23, 2016 at 1:58 PM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
<snip>
It's not a problem on CI, just for Christoph. I asked him what memory resources the test had available but haven't heard back. AFAICT, nothing associated with the test has changed for this release. The options are probably to 1) ignore the failure, or 2) disable the test on 32 bits.

<snip>

Chuck
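A hypothetical sketch of option 2, using unittest's skip machinery (illustrative only; numpy's test suite of that era used its own nose-style decorators), guarding on the pointer size:

```python
import sys
import unittest
import numpy as np

class TestCreation(unittest.TestCase):
    # Hypothetical sketch: skip the big-allocation test on 32-bit builds,
    # where a ~480 MiB contiguous request may not fit the address space.
    @unittest.skipIf(sys.maxsize <= 2**32, "needs a 64-bit address space")
    def test_zeros_big(self):
        for dt in (np.int8, np.float64, np.complex128):
            d = np.zeros((30 * 1024**2,), dtype=dt)
            self.assertFalse(d.any())
```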
On 23.02.2016, 22:40, Charles R Harris wrote: [clip]
<snip>
Memory fragmentation in the 2 GB address space available? If dt == float64, that requires 250 MB contiguous.

--
Pauli Virtanen
On 2/23/2016 1:05 PM, Pauli Virtanen wrote:
<snip>
Memory fragmentation in the 2GB address space available? If dt==float64, that requires 250MB contiguous.
Before creating the dtype('D') test array, the largest contiguous block available to the 32-bit Python process on my system is ~830 MB, and the creation of this array succeeds. However, creating the next dtype('G') test array fails, because the previous array is still in memory and the largest contiguous block available is only ~318 MB. Deleting the test arrays after use via del(d) fixes this problem: https://github.com/numpy/numpy/pull/7323. Another fix could be to change the order of the data types tested.

Christoph
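A minimal sketch of that first fix (the parameterized size is my addition so the pattern can be exercised without the multi-hundred-MiB allocations; the real test in the PR covers more dtypes):

```python
import numpy as np

def zeros_big(n=30 * 1024**2):
    # Release each array before the next, larger allocation, so peak
    # address-space use is one array's size rather than their sum.
    for dt in (np.int8, np.float64, np.complex64, np.complex128):
        d = np.zeros((n,), dtype=dt)
        assert not d.any()
        del d

zeros_big(n=1024)  # small n exercises the pattern cheaply
```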
Christoph, any chance you can test https://github.com/numpy/numpy/pull/7324 before it gets merged (or not)?

<snip>

Chuck
On Tue, Feb 23, 2016 at 8:44 AM, Pauli Virtanen wrote:
<snip>
Thanks for that. Most of the new errors look to be the result of the change in divmod: before, divmod(float64(1.0), 0.0) was (inf, nan), and it is now (nan, nan). There are also two errors in matplotlib that look to be the result of the slight change in the numerical values of remainder due to improved precision. I would class those as more of a test problem, resulting from the inherent imprecision of floating point, than a numpy regression.

Chuck
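An illustrative sketch of the changed case, using the explicit ufuncs (the quotient's value differs across NumPy releases as described above, so the sketch just prints it rather than claiming one answer):

```python
import numpy as np

# Quotient and remainder of float64 division by zero. Under 1.11's rules
# this pair is (nan, nan); releases before 1.11 gave (inf, nan). The
# remainder is nan under IEEE semantics in either case.
with np.errstate(divide='ignore', invalid='ignore'):
    q = np.floor_divide(np.float64(1.0), 0.0)
    r = np.remainder(np.float64(1.0), 0.0)
print(q, r)
```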
participants (4)
- Charles R Harris
- Christoph Gohlke
- Julian Taylor
- Pauli Virtanen