[Numpy-discussion] Rethinking multiple dimensional indexing with sequences?

Sebastian Berg sebastian at sipsolutions.net
Tue Feb 18 11:09:33 EST 2014

Hey all,

currently in numpy this is possible:

a = np.zeros((5, 5))
a[[0, slice(None, None)]]
# this behaviour has its quirks, since the "correct" way is:
a[(0, slice(None, None))] # or identically a[0, :]

The problem with accepting an arbitrary sequence is that an arbitrary
sequence is also typically an "array like", so NumPy has to guess whether
it is meant as an index tuple or as a fancy (array) index:

a[[0, slice(None, None)]]  == a[(0, slice(None, None))]
# but:
a[[0, 1]] == a[np.array([0, 1])]
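To make the two unambiguous interpretations concrete, here is a small sketch (the list-with-a-slice form from the snippet above has since been deprecated in NumPy, so only the tuple and integer-list forms are shown):

```python
import numpy as np

a = np.zeros((5, 5))

# The unambiguous "correct" forms: an explicit tuple index.
row_tuple = a[(0, slice(None, None))]
row_colon = a[0, :]  # identical: Python packs `0, :` into a tuple for us
assert np.array_equal(row_tuple, row_colon)

# A list of plain integers, by contrast, is fancy (array) indexing:
b = np.arange(10)
assert np.array_equal(b[[0, 1]], b[np.array([0, 1])])
```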

Now, NumPy code also commonly uses lists here to build up indexing tuples
(since lists are mutable); however, would it really be so bad if we had to
write `arr[tuple(slice_list)]` at the end to resolve this issue? So the
proposal would be to deprecate anything but (base class) tuples, or maybe
to at least only allow this guessing logic for lists and not for all
sequences. I do not believe we can find a rule for deciding what to do
that is not broken in some way...
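The proposed idiom costs only one extra call: build the index in a mutable list as before, then convert it to a tuple at the point of indexing. A minimal sketch:

```python
import numpy as np

a = np.arange(25).reshape(5, 5)

# Build the index incrementally in a mutable list...
idx = [slice(None, None)] * a.ndim
idx[0] = 2  # pick row 2, keep all columns

# ...then convert to a tuple before indexing, as the proposal suggests:
row = a[tuple(idx)]
assert np.array_equal(row, a[2, :])
```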

PS: The code implementing the "advanced index or nd-index" logic is

- Sebastian

Another confusing example:

In [9]: a = np.arange(10)

In [10]: a[[(0, 1), (2, 3)] * 17]  # a[np.array([(0, 1), (2, 3)] * 17)]
array([[0, 1],
       [2, 3],
       ...])  # the (0, 1)/(2, 3) pattern repeated, 34 rows in total

In [11]: a[[(0, 1), (2, 3)]] # a[np.array([0, 1]), np.array([2, 3])]
IndexError                                Traceback (most recent call last)
<ipython-input-11-57b93f64dfa6> in <module>()
----> 1 a[[(0, 1), (2, 3)]]

IndexError: too many indices for array
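One way to avoid the surprise above is to skip the guessing entirely and construct the index array explicitly; a sketch:

```python
import numpy as np

a = np.arange(10)

# Passing the array explicitly removes all ambiguity:
idx = np.array([(0, 1), (2, 3)])  # shape (2, 2) integer array
assert idx.shape == (2, 2)
assert np.array_equal(a[idx], np.array([[0, 1], [2, 3]]))

# The tuple form instead means "index two separate dimensions",
# which fails on a 1-d array:
try:
    a[(np.array([0, 1]), np.array([2, 3]))]
except IndexError:
    pass  # too many indices for a 1-d array
```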
