>> I would also point out that requiring open vs closed intervals (in
>> doubles) is already an extremely specialised use case. In terms of
>> *sampling the reals*, there is no difference between the intervals
>> (a,b) and [a,b], because the endpoints have measure 0, and even with
>> double-precision arithmetic, you are going to have to make several
>> petabytes of random data before you hit an endpoint...
>>
> Petabytes ain't what they used to be ;) I remember testing some hardware
> which, due to grounding/timing issues, would occasionally goof up a readable
> register. The hardware designers never saw it because they didn't test for
> hours and days at high data rates. But it was there, and it would show up
> in the data. Measure zero is about as real as real numbers...
>
> Chuck
Actually, your point is well taken, and I am quite mistaken. If you
pick values like uniform(low, low * (1 + 2**-52)), then you can hit
the endpoints pretty easily. I am out of practice at making
pathological tests for double-precision arithmetic.
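
To make that concrete, here is a quick sketch (assuming the documented
behaviour that uniform(low, high) returns low + (high - low) *
random_sample()). With high a single ULP above low, roughly half of
the draws round up to the supposedly excluded right endpoint:

    import numpy as np

    low = 1.0
    high = low * (1 + 2**-52)   # next representable double above 1.0

    # low + (high - low) * u with u in [0, 1): when high - low is a
    # single ULP, any u > 0.5 rounds the sum up to high itself.
    samples = np.random.uniform(low, high, size=100000)
    print((samples == high).mean())   # ~0.5, despite the advertised [low, high)
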
I guess my suggestion would be to add the deprecation warning and
change the docstring to warn that the interval is not guaranteed to be
right-open.
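
In the meantime, if someone truly needs a right-open interval, a
rejection-style workaround along these lines should work
(uniform_right_open is just an illustrative name, not an existing
numpy function):

    import numpy as np

    def uniform_right_open(low, high, size):
        # Resample any draws that rounded up to high; this is rare
        # outside pathological low/high pairs like the one above.
        out = np.random.uniform(low, high, size)
        bad = out == high
        while bad.any():
            out[bad] = np.random.uniform(low, high, bad.sum())
            bad = out == high
        return out
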
Peter