On 3/1/13, Henry Gomersall <heng@cantab.net> wrote:
On Fri, 2013-03-01 at 13:34 +0000, Nathaniel Smith wrote:
My usual hack to deal with the numerical bounds issue is to add/subtract half the step.
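(Concretely, that hack looks something like the sketch below; illustration only, assuming numpy is imported as np and that the caller knows whether the endpoint is meant to be in or out.)

    import numpy as np

    start, stop, step = 0.1, 2.2, 0.3

    # Endpoint meant to be excluded: pull `stop` in by half a step, so
    # rounding error can't sneak an extra point in at (or just past) stop.
    exclusive = np.arange(start, stop - step / 2, step)

    # Endpoint meant to be included: push `stop` out by half a step, so
    # rounding error can't drop the point that should land on stop.
    inclusive = np.arange(start, stop + step / 2, step)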
Right. Which is exactly the sort of annoying, content-free code that a library is supposed to handle for you, so you can save mental energy for more important things :-).
I agree with the sentiment (I sometimes wish a library could read my mind ;) but putting this sort of logic into the library seems dangerous to me.
The point is that the coder _should_ understand the subtleties of floating point numbers. IMO arange _should_ be well specified and actually operate on the half-open interval; continuing to add a step until the value is >= the limit is clear and always unambiguous.
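(A minimal sketch of the semantics being described: accumulate steps while the running value stays strictly below the limit. Illustrative only, not a claim about how numpy's arange is actually implemented.)

    def arange_half_open(start, stop, step):
        """Illustrative half-open range: yield start, start + step, ...
        while the running value stays strictly below stop."""
        out = []
        value = start
        while value < stop:
            out.append(value)
            value += step
        return out

    # e.g. arange_half_open(0.1, 2.2, 0.3) never returns a value >= 2.2:
    # the loop exits as soon as the running value reaches or passes the limit.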
Unfortunately, the docs tell me that this isn't the case: "For floating point arguments, the length of the result is ``ceil((stop - start)/step)``. Because of floating point overflow, this rule may result in the last element of `out` being greater than `stop`."
In my jet-lag-addled state, I can't see when this out[-1] > stop case will occur, but I can take it as true. It does seem problematic, though.
Here you go:

In [32]: end = 2.2

In [33]: x = arange(0.1, end, 0.3)

In [34]: x[-1]
Out[34]: 2.2000000000000006

In [35]: x[-1] > end
Out[35]: True

Warren
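(For what it's worth, the overshoot in this example follows from the documented length rule; a quick check, assuming ordinary IEEE double arithmetic.)

    import numpy as np

    start, stop, step = 0.1, 2.2, 0.3

    # The documented rule: length = ceil((stop - start) / step).
    # In exact arithmetic (2.2 - 0.1) / 0.3 == 7, but in floating point
    # it lands just above 7, so ceil gives 8 points instead of 7.
    print((stop - start) / step)                # 7.000000000000001
    print(int(np.ceil((stop - start) / step)))  # 8

    # The 8th point then ends up a hair above stop, as in the session above.
    x = np.arange(start, stop, step)
    print(len(x), x[-1], x[-1] > stop)          # 8 2.2000000000000006 True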
As soon as you allow free-form setting of the stop value, problems are going to be encountered. Who's to say whether stop - delta is actually _meant_ to be below the limit, or is meant to be the limit itself? Certainly not the library!
It just seems to me that this will lead to lots of bad code in which the writer has glossed over an ambiguous case.
Henry
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion