Is there a function that operates like 'take' but does assignment?
Specifically that takes indices and an axis? As far as I can tell no
such function exists. Is there any particular reason?
One can fake such a thing by doing (code untested):
s = len(a.shape)*[np.s_[:]]
s[axis] = np.s_[1::2]
a[tuple(s)] = b.take(np.arange(1, b.shape[axis], 2), axis)
Or by using np.rollaxis:
a = np.rollaxis(a, axis, len(a.shape))
a[..., 1::2] = b[..., 1::2]
a = np.rollaxis(a, len(a.shape)-1, axis)
I don't think either of these is particularly clear, though I'd
probably prefer the rollaxis solution.
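For reference, the tuple-of-slices trick can be wrapped in a small helper; `assign_along_axis` is a hypothetical name, and this is just a sketch of the idea:

```python
import numpy as np

def assign_along_axis(a, values, indices, axis):
    # Full slice on every axis, then narrow just the requested one;
    # indexing with a tuple makes the intent unambiguous.
    s = [slice(None)] * a.ndim
    s[axis] = indices
    a[tuple(s)] = values

a = np.zeros((3, 4))
assign_along_axis(a, np.ones((3, 2)), [1, 3], axis=1)
# columns 1 and 3 of `a` are now ones, the rest unchanged
```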
Also, while I'm here, what about having take also be able to use a slice
object in lieu of a collection of indices?
-Eric

For library compatibility testing I'm trying to use numpy 1.4.1 with Python
2.7.3 on a 64-bit CentOS-5 platform. I installed a clean Python from
source (basically "./configure --prefix=$prefix ; make install") and then
installed numpy 1.4.1 with "python setup.py install".
The crash message begins with [1]:
*** glibc detected *** /home/aldcroft/vpy/py27_np141/bin/python: free():
invalid next size (fast): 0x000000001a9fcf30 ***
======= Backtrace: =========
/lib64/libc.so.6[0x31a90711df]
/lib64/libc.so.6(cfree+0x4b)[0x31a907163b]
/home/aldcroft/vpy/py27_np141/lib/python2.7/site-packages/numpy/core/multiarray.so[0x2aaab025ebbe]
A problem which seems related is that fancy indexing is failing:
>>> idx = np.array([1])
>>> idx.dtype
dtype('int64')
>>> np.arange(5)[idx]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index 210453397505 out of bounds 0<=index<5
Does anyone have suggestions for compilation flags or workarounds when
building numpy or Python that might fix this?
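As a data point for anyone debugging this: the failing fancy-indexing case reduces to a two-line check. The huge bogus index (210453397505) looks consistent with an int64 index being misread, e.g. a 32-/64-bit mismatch somewhere in the build. On a healthy 64-bit build this prints 8 and indexes correctly:

```python
import numpy as np

# np.intp is the dtype numpy uses internally for indices; on a correct
# 64-bit build it is 8 bytes and an int64 index array works as expected.
print(np.dtype(np.intp).itemsize)
idx = np.array([1], dtype=np.int64)
print(np.arange(5)[idx])
```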
Thanks,
Tom
[1] I don't know the exact code that triggered this crash; it's buried in
some unit tests. If it would be useful I could dig it out.

2012/11/16 Olivier Delalleau <olivier.delalleau(a)gmail.com>
> 2012/11/16 Charles R Harris <charlesr.harris(a)gmail.com>
>
>>
>>
>> On Thu, Nov 15, 2012 at 11:37 PM, Charles R Harris <
>> charlesr.harris(a)gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 15, 2012 at 8:24 PM, Gökhan Sever <gokhansever(a)gmail.com>wrote:
>>>
>>>> Hello,
>>>>
>>>> Could someone briefly explain why these two operations are casting
>>>> my float32 arrays to float64?
>>>>
>>>> I1 (np.arange(5, dtype='float32')).dtype
>>>> O1 dtype('float32')
>>>>
>>>> I2 (100000*np.arange(5, dtype='float32')).dtype
>>>> O2 dtype('float64')
>>>>
>>>
>>> This one depends on the size of the multiplier and first appears in
>>> 1.6.0. I suspect it is a side effect of making the type conversion
>>> code sensitive to magnitude.
>>>
>>>
>>>>
>>>>
>>>>
>>>> I3 (np.arange(5, dtype='float32')[0]).dtype
>>>> O3 dtype('float32')
>>>>
>>>> I4 (1*np.arange(5, dtype='float32')[0]).dtype
>>>> O4 dtype('float64')
>>>>
>>>
>>> This one probably depends on the fact that the element is a scalar,
>>> but it doesn't look right. Scalars are promoted differently. It also
>>> holds in numpy 1.5.0, so it is of old provenance.
>>>
>>>
>> This one has always bothered me:
>>
>> In [3]: (-1*arange(5, dtype=uint64)).dtype
>> Out[3]: dtype('float64')
>>
>
> My interpretation here is that since the result of multiplying an int64
> by a uint64 can be signed, and can go beyond the range of int64, numpy
> prefers to cast everything to float64, which can represent (even if
> approximately) a larger range of signed values.
>
Actually, thinking about it a bit more, I suspect the logic is not related
to the result of the operation, but to the fact that numpy needs to cast
both arguments to a common dtype before doing the operation, and it has no
integer dtype available that can hold both int64 and uint64 values, so it
uses float64 instead.
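That interpretation matches what np.promote_types reports directly (a quick check, run on a recent numpy):

```python
import numpy as np

# There is no integer dtype that holds both int64 and uint64, so the
# common dtype numpy picks for the pair is float64.
print(np.promote_types(np.int64, np.uint64))   # float64
a = np.arange(5, dtype=np.uint64)
print((np.int64(-1) * a).dtype)                # float64
```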
-=- Olivier

Hello,
Now I am having the same problem, and although I have tried Pauli's
fix (see below), the warnings persist when using the numpydoc extension.
Does anyone have more information or suggestions about it?
Thanks,
Jose
On Sun, Jul 17, 2011 at 7:15 PM, Tony Yu <tsyu80(a)gmail.com
<http://mail.scipy.org/mailman/listinfo/numpy-discussion>> wrote:
> I'm building documentation using Sphinx, and it seems that numpydoc is
> raising a lot of warnings. Specifically, the warnings look like "failed
> to import <method_name>", "toctree references unknown document
> u'<method_name>'", "toctree contains reference to nonexisting document
> '<method_name>'", for each method defined. The example below reproduces
> the issue on my system (Sphinx 1.0.7, numpy HEAD). These warnings appear
> in my build of the numpy docs, as well.
>
> Removing numpydoc from the list of Sphinx extensions gets rid of these
> warnings (but, of course, adds new warnings if headings for 'Parameters',
> 'Returns', etc. are present).
>
> Am I doing something wrong here?

You're not, it's a Sphinx bug that Pauli already has a fix for. See
http://projects.scipy.org/numpy/ticket/1772
Ralf

On my machine these are rather confusingly different functions, with the
latter corresponding to numpy.random.power. I appreciate that pylab
imports everything from both the numpy and numpy.random modules but
wouldn't it make sense if pylab.power were the frequently used power
function rather than a means for sampling from the power distribution?
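To illustrate the clash Will describes, the two functions do entirely different things; per his report, `from pylab import *` ends up binding the name `power` to the random-sampling version:

```python
import numpy as np

# Elementwise exponentiation: this is the "frequently used" power.
print(np.power(2.0, 3.0))              # 8.0

# Sampling from the power distribution: what pylab.power resolves to.
draws = np.random.power(3.0, size=5)   # five draws in [0, 1]
print(draws)
```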
Regards,
Will Furnass

Hi,
When running the test suite, there are problems of this kind:
https://github.com/numpy/numpy/issues/394
which then causes, for example, the Debian buildbot tests to fail
(https://github.com/numpy/numpy/issues/406).
The problem is really simple:
>>> from numpy import array, abs, nan
>>> a = array([1, nan, 3])
>>> a
array([ 1., nan, 3.])
>>> abs(a)
__main__:1: RuntimeWarning: invalid value encountered in absolute
array([ 1., nan, 3.])
See issue #394 for a detailed explanation of why "nan" is being passed
to abs(). Now the question is: what should the right fix be?
1) Should the runtime warning be disabled?
2) Should the tests be reworked, so that "nan" is not tested in allclose()?
3) Should abs() be fixed to not emit the warning?
4) Should the test suite be somehow fixed not to fail if there are
runtime warnings?
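One local variant of option (1)/(3), as a sketch: rather than disabling the warning globally, the offending call can be wrapped in np.errstate so only that class of warning is suppressed, and only there:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# Suppress only "invalid value" warnings, only inside this block.
with np.errstate(invalid='ignore'):
    result = np.abs(a)

print(result)  # values pass through: 1., nan, 3.
```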
Let me know which direction we should go.
Thanks,
Ondrej

Hey all,
Ondrej has been tied up finishing his PhD for the past several weeks. He is defending his work shortly and should be available to continue to help with the 1.7.0 release around the first of December. He and I have been in contact during this process, and I've been helping where I can. Fortunately, other NumPy developers have been active closing tickets and reviewing pull requests which has helped the process substantially.
The release has taken us longer than we expected, but I'm really glad we've received the bug reports and issues that we have, because they will help make 1.7.0 a more stable series. Also, the merging of the Trac issues with Git has exposed overlooked problems and will hopefully encourage more Git-focused participation by users.
We are targeting getting the final release of 1.7.0 out by mid December (based on Ondrej's availability). But, I would like to find out which issues are seen as blockers by people on this list. I think most of the issues that I had as blockers have been resolved. If there are no more remaining blockers, then we may be able to accelerate the final release of 1.7.0 to just after Thanksgiving.
Best regards,
-Travis

I'm trying to understand how numpy decides when to release memory and
whether it's possible to exert any control over that. The situation is that
I'm profiling memory usage on a system in which a great deal of the overall
memory is tied up in ndarrays. Since numpy manages ndarray memory on its
own (i.e. without the Python gc, or so it seems), I'm finding that I can't
do much to convince numpy to release memory when things get tight. For
Python objects, by contrast, I can at least explicitly run gc.collect().
So, in an effort to at least understand the system better, can anyone tell
me how/when numpy decides to release memory? And is there any way via
either the Python or C-API to explicitly request release? Thanks.
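As far as I understand the implementation (take this as a sketch, not gospel): an ndarray's data buffer is managed by plain reference counting, with no pooling or deferred collection by default, so the buffer is freed the moment the last array referring to it is deallocated. That makes `del` (or letting names go out of scope) the main lever; what matters is tracking which arrays keep a buffer alive:

```python
import sys
import numpy as np

a = np.zeros(10**6)
b = a[::2]            # a view: it keeps a's buffer alive via b.base
print(b.base is a)    # True

del a                 # buffer survives, b still needs it
del b                 # last reference gone; numpy frees the buffer now
```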
Austin