I have a need to index my array(s) starting with a 1 instead of a 0. The reason for this is to be consistent with the documentation of a format I'm accessing. I know I can just '-1' from what the documentation says, but that can get cumbersome. Is there a magic flag I can pass to a numpy array (maybe when it is created) so I can index starting at 1 instead of the Python default?

Thanks,
Jeremy
Hi Jeremy

On Thu, Jul 28, 2011 at 3:19 PM, Jeremy Conlin <jlconlin@gmail.com> wrote:

> I have a need to index my array(s) starting with a 1 instead of a 0. The reason for this is to be consistent with the documentation of a format I'm accessing. I know I can just '-1' from what the documentation says, but that can get cumbersome.
>
> Is there a magic flag I can pass to a numpy array (maybe when it is created) so I can index starting at 1 instead of the Python default?
You may want to have a look at some of the labeled array packages out there, such as larry, datarray, pandas, etc. I'm sure at least one of them allows integer re-labelling, although I'm not certain whether it can be done in a programmatic fashion.

An alternative may be to create an indexing function that remaps the input space, e.g.:

    def ix(n):
        return n - 1

    x[ix(3), ix(5)]

But that looks pretty nasty, and you'll have to expand ix quite a bit to handle slices, etc. :/

Stéfan
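Extending Stéfan's remapping helper to handle tuples and slices might look something like this (a sketch; the name ix1, and the choice to keep Python's half-open slice convention merely shifted by one, are my own assumptions):

```python
import numpy as np

def ix1(key):
    # Map a 1-based integer index, tuple, or slice to its 0-based equivalent.
    # Negative (from-the-end) indices are deliberately not handled in this sketch.
    if isinstance(key, tuple):
        return tuple(ix1(k) for k in key)
    if isinstance(key, slice):
        start = None if key.start is None else key.start - 1
        stop = None if key.stop is None else key.stop - 1
        return slice(start, stop, key.step)
    return key - 1

x = np.arange(12).reshape(3, 4)

first = x[ix1((1, 1))]          # the format's element (1, 1) -> x[0, 0]
rows = x[ix1(slice(1, 3)), :]   # 1-based slice 1:3 -> 0-based 0:2
```

As Stéfan notes, this still gets unwieldy quickly once you want fancy indexing as well.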
Don't forget the everything-looks-like-a-nail approach: make all your arrays one bigger than you need and ignore element zero.

Anne

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
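Anne's pad-by-one suggestion is probably the simplest to sketch (the variable names and fill values here are my own illustration):

```python
import numpy as np

# The documentation uses 1-based indices 1..n; allocate n + 1 slots
# and simply never touch index 0.
n = 5
x = np.zeros(n + 1)

for i in range(1, n + 1):
    x[i] = i * 10.0   # index exactly as the documentation does

# x[1] corresponds to the format's "element 1"; x[0] is unused padding.
```

The cost is one wasted element per axis, in exchange for formulas that read exactly like the format's documentation.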
On Thu, Jul 28, 2011 at 4:10 PM, Anne Archibald <aarchiba@physics.mcgill.ca> wrote:

> Don't forget the everything-looks-like-a-nail approach: make all your arrays one bigger than you need and ignore element zero.

Hehe, why didn't I think of that :)

I guess the kind of problem I struggle with more frequently is books written with summations over -m to +n. In those cases, it's often convenient to use the mapping function, so that I can enter the formulas as they occur.

Stéfan
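For the sum-from--m-to-+n case Stéfan mentions, the mapping-function approach might look like this (a toy sketch; the coefficient formula and the names c, k, m, n are invented for illustration):

```python
import numpy as np

m, n = 3, 2  # the book indexes coefficients c_i for i = -m .. n

def k(i):
    # map the book's index i in [-m, n] to a 0-based storage position
    return i + m

c = np.zeros(m + n + 1)
for i in range(-m, n + 1):
    c[k(i)] = 1.0 / (abs(i) + 1)   # enter the formula as it appears in the book

# evaluate sum_{i=-m}^{n} c_i * x**i without translating indices by hand
x = 2.0
total = sum(c[k(i)] * x**i for i in range(-m, n + 1))
```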
On 29.07.2011, at 1:19AM, Stéfan van der Walt wrote:

> I guess the kind of problem I struggle with more frequently is books written with summations over -m to +n. In those cases, it's often convenient to use the mapping function, so that I can enter the formulas as they occur.

I don't want to open any cans of worms at this point, but given that Fortran90 supports such indexing (arbitrary limits, including negative ones), there definitely are use cases for it (or rather, instances where it is very convenient at least, like in Stéfan's books). So I am wondering how much it would take to implement such an enhancement for the standard ndarray...

Cheers,
Derek
On Thu, Jul 28, 2011 at 4:26 PM, Derek Homeier <derek@astro.physik.uni-goettingen.de> wrote:

> I don't want to open any cans of worms at this point, but given that Fortran90 supports such indexing (arbitrary limits, including negative ones), there definitely are use cases for it (or rather, instances where it is very convenient at least, like in Stéfan's books). So I am wondering how much it would take to implement such an enhancement for the standard ndarray...

Thinking about it, expanding on Anne's solution and the fact that Python allows negative indexing, you can simply reshuffle the array a bit after operations and have everything work out:

    import numpy as np

    n = -5
    m = 7

    x = np.zeros(m - n, dtype=float)

    # Do some operation with n..m based indexing
    for i in range(-5, 7):
        x[i] = i

    # Construct the output
    x = np.hstack((x[n:], x[:m]))

    print(x)

Regards
Stéfan
The can is open and the worms are everywhere, so:

The big problem with one-based indexing for numpy is interpretation. In Python indexing, -1 is the last element of the array, and ranges have a specific meaning. In a hypothetical one-based indexing scheme, would the last element be element 0? If not, what does looking up zero do? What about ranges - do ranges still include the first endpoint and not the second? I suppose one could choose the most pythonic of the 1-based conventions, but do any of them provide from-the-end indexing without special syntax?

Once one had decided what to do, implementation would be pretty easy - just make a subclass of ndarray that replaces the indexing function.

Anne
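A rough sketch of the subclass Anne describes, under one possible set of conventions (positive indices 1-based, negative indices keeping Python's from-the-end meaning, index 0 rejected as ambiguous; the class name and the decision to leave slices untouched are my own assumptions):

```python
import numpy as np

class OneBasedArray(np.ndarray):
    """Sketch: positive indices are 1-based, negative indices keep
    Python's from-the-end meaning, and 0 raises an error.
    Slices are passed through unchanged, so this is incomplete."""

    def _shift(self, key):
        if isinstance(key, tuple):
            return tuple(self._shift(k) for k in key)
        if isinstance(key, (int, np.integer)):
            if key == 0:
                raise IndexError("index 0 is undefined in 1-based indexing")
            return key - 1 if key > 0 else key
        return key

    def __getitem__(self, key):
        return np.ndarray.__getitem__(self, self._shift(key))

    def __setitem__(self, key, value):
        np.ndarray.__setitem__(self, self._shift(key), value)

x = np.arange(10, 60, 10).view(OneBasedArray)
# x[1] is the first element (10), x[-1] the last (50).
```

Note that anything inside NumPy that indexes the array with 0-based integers (printing, for instance) would also go through the replaced indexing function, which hints at why the cans of worms multiply.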
On 29.07.2011, at 1:38AM, Anne Archibald wrote:

> I suppose one could choose the most pythonic of the 1-based conventions, but do any of them provide from-the-end indexing without special syntax?

I forgot, this definitely needs to be preserved for ndarray!

> Once one had decided what to do, implementation would be pretty easy - just make a subclass of ndarray that replaces the indexing function.

In fact, Stéfan's reshuffling trick does nearly everything I would expect for using negative indices; maybe the only functionality needed to implement is:

1. Define an attribute like x.start that could tell appropriate functions (e.g. for print(x) or plot(x)) the "zero-point", so x would be evaluated e.g. at x[-5], wrapping around at x[-1], x[0] to x[-6]... This should have the advantage that anything that's not yet aware of this attribute could simply ignore it.

2. Allow this starting point to be set automatically when creating something like "x = np.zeros(-5:7)" or setting a shape to (-5:7) - but maybe the latter is leading into very dangerous territory already...

Cheers,
Derek
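Derek's x.start idea could be prototyped roughly like so (a speculative sketch, not an existing NumPy feature; the class name, constructor signature, and the choice to handle only integer indices are assumptions built on Stéfan's example):

```python
import numpy as np

class OffsetArray(np.ndarray):
    """Sketch: an array indexed from `start` to `start + len - 1`,
    in the spirit of Fortran 90's dimension(n:m). Slices and fancy
    indexing are left untouched in this sketch."""

    def __new__(cls, start, stop, dtype=float):
        obj = np.zeros(stop - start, dtype=dtype).view(cls)
        obj.start = start
        return obj

    def __array_finalize__(self, obj):
        # Functions unaware of the attribute see a default of 0,
        # as Derek suggests.
        self.start = getattr(obj, "start", 0)

    def _shift(self, key):
        if isinstance(key, (int, np.integer)):
            return key - self.start
        return key

    def __getitem__(self, key):
        return np.ndarray.__getitem__(self, self._shift(key))

    def __setitem__(self, key, value):
        np.ndarray.__setitem__(self, self._shift(key), value)

# indices run from -5 to 6, as in Stéfan's example
x = OffsetArray(-5, 7)
for i in range(-5, 7):
    x[i] = i
```

The np.zeros(-5:7) spelling from point 2 would need new syntax, which is presumably where the dangerous territory begins.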
participants (4)

- Anne Archibald
- Derek Homeier
- Jeremy Conlin
- Stéfan van der Walt