Hi,
Our (nipy's) test suite just failed with the upgrade to numpy 1.13,
and the cause boiled down to this:
```
import numpy as np
poly = np.poly1d([1])
poly.c[0] *= 2
print(poly.c)
```
Numpy 1.12 gives (to me) expected output:
[2]
Numpy 1.13 gives (to me) unexpected output:
[1]
The problem is caused by the fact that the coefficients are now a
*copy* of the actual coefficient array - I think in an attempt to stop
us modifying the coefficients directly.
I can't see any deprecation warnings with `-W always`.
The pain point here is that code that used to give the right answer
has now (I believe silently) switched to giving the wrong answer.
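For anyone else who trips over this, one possible workaround (a sketch, assuming the copy behaviour is here to stay) is to rebuild the polynomial from a modified coefficient array rather than mutating `poly.c` in place:

```python
import numpy as np

poly = np.poly1d([1])
# poly.c returns a *copy* in 1.13, so in-place edits are silently lost;
# instead, modify the copy and construct a new poly1d from it
new_c = poly.c.copy()
new_c[0] *= 2
poly = np.poly1d(new_c)
print(poly.c)  # [2]
```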
Cheers,
Matthew

Dear all,
I'm sorry if my question is too basic (it is not fully about NumPy itself,
though it is about building matrices to work with NumPy afterward), but I'm
spending a lot of time and effort trying to find a way to read data from an
ASCII file and assign it into a matrix/array ... without success!
The only way I have found is to use the _'append()'_ method, which involves
dynamic memory allocation. :-(
From my current experience under Scilab (a Matlab-like scientific
solver), the well-known approach is:
* Step 1: initialize the matrix, e.g. _'np.zeros((n,n))'_
* Step 2: read the data
* Step 3: write it into the matrix
I'm obviously influenced by my previous experience, but I'm interested in
moving to Python and its packages.
For huge ASCII files (involving dozens of millions of lines), my strategy
is to work by 'blocks':
* Find the line indices of the beginning and the end of one block (this
implies that the file is read once)
* Read the block
* (repeat the process on the other blocks)
I tried different codes such as the one below, but each time Python tells
me that I cannot mix iteration and read methods.
#############################################
import itertools
import numpy as np

position = []; j = 0
with open(PATH + file_name, "r") as rough_data:
    for line in rough_data:
        if my_criteria in line:
            position.append(j)  ## huge blocks, but limited in number
        j = j + 1

i = 0
blockdata = np.zeros((size_block,), dtype=float)
with open(PATH + file_name, "r") as f:
    for line in itertools.islice(f, 1, size_block):
        blockdata[i] = float(line)  ## use `line` here -- calling
                                    ## f.readline() mixes iteration and reads
        i = i + 1
#########################################
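For comparison, the block-reading step can also be written without the manual index and without mixing iteration and read calls (a sketch only; `path`, `start`, and `size` stand in for your own variables):

```python
import itertools
import numpy as np

def read_block(path, start, size):
    """Read `size` float values (one per line) starting at line `start`,
    without loading the whole file into memory."""
    with open(path) as f:
        lines = itertools.islice(f, start, start + size)
        return np.fromiter((float(line) for line in lines),
                           dtype=float, count=size)
```

Passing `count` to `np.fromiter` lets it preallocate the output array, which avoids the dynamic reallocation that `append()` implies.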
Should I work on lists using f.readlines()? (But this implies loading the
whole file into memory.)
Additional question: can I use assignment with vectorization, e.g. with
'i = np.arange(0, 65406)', if I stay with the previous example?
Thanks for your time and understanding.
(I'm obviously interested in doc references covering these specific tasks.)
Paul
PS: for Chuck: I'll have a look at the pandas package, but at a code
optimization step :-) (nearly 2000 doc pages)

Hi All,
On behalf of the NumPy team, I am pleased to announce the release of NumPy
1.13.1. This is a bugfix release for problems found in 1.13.0. The major
changes are:
- fixes for the new memory overlap detection,
- fixes for the new temporary elision capability,
- reversion of the removal of the boolean binary ``-`` operator.
It is recommended that users of 1.13.0 upgrade to 1.13.1. Wheels can be
found on PyPI <https://pypi.python.org/pypi/numpy>. Source tarballs,
zipfiles, release notes, and the changelog are available on github
<https://github.com/numpy/numpy/releases/tag/v1.13.1>.
Note that the wheels for Python 3.6 are built against 3.6.1, hence will not
work when used with 3.6.0 due to Python bug #29943
<https://bugs.python.org/issue29943>. The plan is to release NumPy 1.13.2
shortly after Python 3.6.2, which contains a fix for that problem, is out.
If you are using 3.6.0, the workaround is to upgrade to 3.6.1 or use an
earlier Python version.
*Pull requests merged*
A total of 19 pull requests were merged for this release.
* #9240 DOC: BLD: fix lots of Sphinx warnings/errors.
* #9255 Revert "DEP: Raise TypeError for subtract(bool_, bool_)."
* #9261 BUG: don't elide into readonly and updateifcopy temporaries for...
* #9262 BUG: fix missing keyword rename for common block in numpy.f2py
* #9263 BUG: handle resize of 0d array
* #9267 DOC: update f2py front page and some doc build metadata.
* #9299 BUG: Fix Intel compilation on Unix.
* #9317 BUG: fix wrong ndim used in empty where check
* #9319 BUG: Make extensions compilable with MinGW on Py2.7
* #9339 BUG: Prevent crash if ufunc doc string is null
* #9340 BUG: umath: un-break ufunc where= when no out= is given
* #9371 DOC: Add isnat/positive ufunc to documentation
* #9372 BUG: Fix error in fromstring function from numpy.core.records...
* #9373 BUG: ')' is printed at the end pointer of the buffer in numpy.f2py.
* #9374 DOC: Create NumPy 1.13.1 release notes.
* #9376 BUG: Prevent hang traversing ufunc userloop linked list
* #9377 DOC: Use x1 and x2 in the heaviside docstring.
* #9378 DOC: Add $PARAMS to the isnat docstring
* #9379 DOC: Update the 1.13.1 release notes
*Contributors*
A total of 12 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.
* Andras Deak +
* Bob Eldering +
* Charles Harris
* Daniel Hrisca +
* Eric Wieser
* Joshua Leahy +
* Julian Taylor
* Michael Seifert
* Pauli Virtanen
* Ralf Gommers
* Roland Kaufmann
* Warren Weckesser

Hi All,
I've delayed the NumPy 1.13.2 release hoping for Python 3.6.2 to show up
fixing #29943 <https://bugs.python.org/issue29943> so we can close #9272
<https://github.com/numpy/numpy/issues/9272>, but the Python release has
been delayed to July 11 (expected). The Python problem means that NumPy
compiled with Python 3.6.1 will not run in Python 3.6.0. However, I've also
been asked to have a bugfixed version of 1.13 available for Scipy 2017 next
week. At this point it looks like the best thing to do is release 1.13.1
compiled with Python 3.6.1 and ask folks to upgrade Python if they have a
problem, and then release 1.13.2 as soon as 3.6.2 is released.
Thoughts?
Chuck

Dear All
I've been a Matlab-like user (more specifically a Scilab one) for years,
and because I have to deal with huge ASCII files (with dozens of millions
of lines), I decided to have a look at Python and NumPy, including
vectorization topics.
Obviously I've been influenced by my previous experience.
I have a basic question concerning the code below: why is it necessary
to transpose the column vector (which is still in the right format, in my
mind)? Does it make sense?
Thanks
Paul
####################################
import numpy as np  ## np = shortcut
## works with a row vector
vect0 = np.random.rand(5); print(vect0); print("\n")
mat = np.zeros((5,4), dtype=float)
mat[:,0] = np.transpose(vect0); print(mat)
## works while the vector is still a column, i.e. in the right format,
## isn't it?
vect0 = np.random.rand(5,1); print(vect0); print("\n")
mat = np.zeros((5,4), dtype=float)
mat[:,0] = np.transpose(vect0); print(mat)
## does not work
vect0 = np.random.rand(5,1); print(vect0); print("\n")
mat = np.zeros((5,4), dtype=float)
mat[:,0] = vect0; print(mat)
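The failing case boils down to a shape mismatch: `mat[:, 0]` has shape (5,), while the column vector has shape (5, 1). A minimal sketch of one possible fix, flattening the column with `ravel`:

```python
import numpy as np

mat = np.zeros((5, 4))
col = np.random.rand(5, 1)   # column vector, shape (5, 1)

# mat[:, 0] has shape (5,); a (5, 1) source cannot broadcast into it,
# so direct assignment raises ValueError
try:
    mat[:, 0] = col
except ValueError:
    pass

# flattening the column down to shape (5,) makes the assignment work
mat[:, 0] = col.ravel()
```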

When an nditer uses certain op_flags[0], like updateifcopy or readwrite
or copy, the operands (which are ndarray views into the original data)
must use the UPDATEIFCOPY flag. The meaning of this flag is to allocate
temporary memory to hold the modified data and make the original data
readonly. When the caller is finished with the nditer, the temporary
memory is written back to the original data and the original data's
readwrite status is restored. The trigger to resolve the temporary data
is currently via the deallocate function of the nditer, thus the call
becomes something like
i = nditer(a, op_flags=<flags>)
# do something with i
i = None  # trigger writing data from i back to a
This IMO violates the "explicit is better" philosophy, and has the
additional disadvantage of relying on refcounting semantics to trigger
the data resolution, which does not work on PyPy.
I have a pending pull request[1] to add a private numpy API function
PyArray_ResolveUpdateIfCopy, which allows triggering the data resolution
in a more explicit way. The new API function is called at the end of
functions like take or put with non-contiguous arguments; explicit tests
have also been added. The only user-facing effect is to allow using an
nditer as a context manager, so the lines above would become
with nditer(a, op_flags=<flags>) as i:
    # do something with i
# data is written back when exiting
The pull request passes all tests on CPython. The last commit makes the
use of an nditer context manager mandatory on PyPy if the UPDATEIFCOPY
semantics are triggered, while allowing existing code to function
without a warning on CPython.
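Concretely, under the proposed API a read-write loop would look something like this (a sketch, assuming the context-manager support from the pull request):

```python
import numpy as np

a = np.arange(6.).reshape(2, 3)
# exiting the `with` block resolves any UPDATEIFCOPY temporaries,
# writing the modified data back into `a` explicitly
with np.nditer(a, op_flags=['readwrite']) as it:
    for x in it:
        x[...] = 2 * x
print(a)
```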
Note that np.nested_iters[2] is problematic, in that it currently
returns a tuple of nditers, which AFAICT cannot be made into a context
manager, so __enter__ and __exit__ must be called manually for each nditer.
At some future point we could decide to deprecate the non-context
managed use on CPython as well, following a cycle of first issuing a
deprecation warning for a few release versions.
Any thoughts? Does making nditer a context manager make sense? How
widespread is the use of nested_iters in live code (it does not appear in
the official NumPy documentation)?
Thanks,
Matti
[0] https://docs.scipy.org/doc/numpy/reference/generated/numpy.nditer.html
[1] https://github.com/numpy/numpy/pull/9269
[2]
https://github.com/numpy/numpy/blob/master/numpy/core/tests/test_nditer.py#…

Hi All,
The '@' operator works well with stacks of matrices, but not with stacks
of vectors. Given the recent addition of '__array_ufunc__', and the intent
to make `__matmul__` use a ufunc, I've been wondering if it would make
sense to add ndarray subclasses 'rvec' and 'cvec' that would override that
operator so as to behave like stacks of row/column vectors. Any other ideas
for handling stacked vectors are welcome.
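To illustrate the pain point: today a stack of matrices times a stack of vectors needs an explicit axis insertion (or einsum), since '@' interprets a 2-D operand as a single matrix. A sketch of the current workarounds:

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(10, 3, 3)   # a stack of ten 3x3 matrices
v = rng.rand(10, 3)      # a stack of ten 3-vectors

# 'A @ v' raises ValueError: matmul treats v as one 10x3 matrix,
# not as ten separate 3-vectors.
# Workaround: promote each vector to a 3x1 column, then drop the axis.
result = (A @ v[..., np.newaxis])[..., 0]   # shape (10, 3)

# the same contraction spelled out with einsum
result2 = np.einsum('nij,nj->ni', A, v)
```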
Thoughts?
Chuck

Hello all,
There are various updates to array printing in preparation for numpy
1.14. See https://github.com/numpy/numpy/pull/9139/
Some are quite likely to break other projects' doc-tests which expect a
particular str or repr of arrays, so I'd like to warn the list in case
anyone has opinions.
The current proposed changes, from most to least painful by my
reckoning, are:
1.
For float arrays, an extra space previously used for the sign position
will now be omitted in many cases. Eg, `repr(arange(4.))` will now
return 'array([0., 1., 2., 3.])' instead of 'array([ 0., 1., 2., 3.])'.
2.
The printing of 0d arrays is overhauled. This is a bit finicky to
describe, please see the release note in the PR. As an example of the
effect of this, `repr(np.array(0.))` now prints as 'array(0.)'
instead of 'array(0.0)'. Also the repr of 0d datetime arrays is now like
"array('2005-04-04', dtype='datetime64[D]')" instead of
"array(datetime.date(2005, 4, 4), dtype='datetime64[D]')".
3.
User-defined dtypes which did not properly implement their `repr` (and
`str`) should do so now. Otherwise it now falls back to
`object.__repr__`, which will return something ugly like
`<mytype object at 0x7f37f1b4e918>`. (Previously you could depend on
implementing only the `item` method, and the repr of that would be
printed; but no longer, because this risks infinite recursion.)
4.
Bool arrays of size 1 with a 'True' value will now omit a space, so that
`repr(array([True]))` is now 'array([True])' instead of
'array([ True])'.
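For those updating doctests, items 1 and 2 can be checked directly once the PR lands (a quick sketch; the comments describe the proposed behavior):

```python
import numpy as np

print(repr(np.arange(4.)))    # sign-position space dropped
print(repr(np.array(0.)))     # 0d float repr shortened
print(repr(np.array('2005-04-04', dtype='datetime64[D]')))  # 0d datetime
```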
Allan