[Numpy-discussion] Any easy way to do this?

Angus McMorland amcmorl at gmail.com
Wed Mar 9 10:48:18 EST 2011

On 9 March 2011 10:37, Neal Becker <ndbecker2 at gmail.com> wrote:
> Angus McMorland wrote:
>> On 9 March 2011 09:45, Neal Becker <ndbecker2 at gmail.com> wrote:
>>> given: w[i,j,k], y[l, k]
>>> find:
>>> d[l,i,j] = norm(w[i,j] - y[l])
>>> for each triple (l,i,j), w[i,j]-y[l] is a vector, of which I want to find the
>>> norm, and store into d[l,i,j]
>> Is something like this what you want to do?
>> w = np.random.randint(100, size=(4,5,6))
>> y = np.random.randint(100, size=(7,6))
>> norm = lambda x, axis: np.sqrt(np.sum(x**2, axis=axis))
>> d = norm(w[:,:,None] - y[None,None], -1)
>> Angus.
> Thanks!  Now if I could understand why
> w[:,:,None] - y[None,None] is what I needed...

Inserting those length-1 axes makes the corresponding dimensions of the
two arrays line up in the same positions, so that broadcasting can
occur correctly.
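To make the shapes concrete, here is a runnable sketch using the same example arrays as above, printing the shape at each stage:

```python
import numpy as np

w = np.random.randint(100, size=(4, 5, 6))
y = np.random.randint(100, size=(7, 6))

# w[:,:,None] has shape (4, 5, 1, 6); y[None, None] has shape (1, 1, 7, 6).
# Broadcasting stretches each length-1 axis, so both behave as (4, 5, 7, 6).
diff = w[:, :, None] - y[None, None]
print(w[:, :, None].shape)  # (4, 5, 1, 6)
print(y[None, None].shape)  # (1, 1, 7, 6)
print(diff.shape)           # (4, 5, 7, 6)

# Norm over the last (length-6) axis: one value per (i, j, l) triple.
d = np.sqrt(np.sum(diff ** 2, axis=-1))
print(d.shape)              # (4, 5, 7)
```

Note that this particular construction yields an array indexed as d[i, j, l]; to get the d[l, i, j] ordering of the original question you would transpose the result, or instead write `y[:, None, None] - w[None]`.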

":" means include the existing axis, in the order they appear in the
original array
"None" is the same as "np.newaxis" (but takes fewer characters to
type), and means insert a dimension of length 1.

At the end of each index list, enough ":" slices are implied to cover
any unmentioned trailing dimensions,
i.e. y[None, None] is the same as y[None, None, :, :]
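Both of those equivalences are easy to check directly (using the same y as in the example above):

```python
import numpy as np

y = np.random.randint(100, size=(7, 6))

# None is simply an alias for np.newaxis.
assert y[None, None].shape == y[np.newaxis, np.newaxis].shape == (1, 1, 7, 6)

# Trailing ":" slices are implied for unmentioned dimensions,
# so these two indexings produce identical arrays.
assert (y[None, None] == y[None, None, :, :]).all()
```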

Clear as mud?

AJC McMorland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh
