Re: [Numpy-discussion] Behavior of np.random.uniform

+1 for the deprecation warning for low > high. I think the cases where that is called are more likely to be unintentional than someone trying to use uniform(closed_end, open_end), and you might help users find bugs; i.e. the principles of ‘explicit is better than implicit’ and ‘fail early and fail loudly’ apply.

I would also point out that requiring open vs closed intervals (in doubles) is already an extremely specialised use case. In terms of *sampling the reals*, there is no difference between the intervals (a,b) and [a,b], because the endpoints have measure 0, and even with double-precision arithmetic you would have to generate several petabytes of random data before you hit an endpoint...

Peter
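To make the suggestion concrete, here is a minimal sketch of the kind of guard being discussed; the uniform_checked wrapper below is purely illustrative and not part of NumPy's API:

import warnings
import numpy as np

def uniform_checked(low=0.0, high=1.0, size=None):
    # Hypothetical wrapper: warn when low > high instead of silently
    # sampling from the reversed interval, as np.random.uniform does today.
    if np.any(np.asarray(low) > np.asarray(high)):
        warnings.warn("uniform called with low > high; this is probably "
                      "unintentional", DeprecationWarning)
    return np.random.uniform(low, high, size)

# Current behaviour: low > high is accepted silently and the samples come
# from the reversed interval, e.g. uniform(5, 2) returns values in (2, 5].
print(np.random.uniform(5, 2, size=3))
print(uniform_checked(5, 2, size=3))   # same samples, but with a warning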

On Wed, Jan 20, 2016 at 11:57 AM, Peter Creasey < p.e.creasey.00@googlemail.com> wrote:
> +1 for the deprecation warning for low > high. I think the cases where that is called are more likely to be unintentional than someone trying to use uniform(closed_end, open_end), and you might help users find bugs; i.e. the principles of ‘explicit is better than implicit’ and ‘fail early and fail loudly’ apply.
>
> I would also point out that requiring open vs closed intervals (in doubles) is already an extremely specialised use case. In terms of *sampling the reals*, there is no difference between the intervals (a,b) and [a,b], because the endpoints have measure 0, and even with double-precision arithmetic you would have to generate several petabytes of random data before you hit an endpoint...
Petabytes ain't what they used to be ;) I remember testing some hardware which, due to grounding/timing issues, would occasionally goof up a readable register. The hardware designers never saw it because they didn't test for hours and days at high data rates. But it was there, and it would show up in the data. Measure zero is about as real as real numbers...

Chuck
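For a rough sense of scale on the "petabytes" point, a back-of-the-envelope sketch, assuming the legacy generator draws doubles on a 2**-53 grid (so a single uniform(0, 1) draw lands exactly on the endpoint 0.0 with probability 2**-53):

# Expected amount of output before uniform(0, 1) returns the endpoint 0.0,
# assuming each draw is a multiple of 2**-53.
p_endpoint = 2.0 ** -53
expected_draws = 1.0 / p_endpoint            # ~9.0e15 draws
data_volume_pb = expected_draws * 8 / 1e15   # 8 bytes per double -> ~72 PB
print(f"{expected_draws:.1e} draws, ~{data_volume_pb:.0f} PB of doubles")

So, on average, tens of petabytes of doubles would be needed before the difference between (a,b) and [a,b] is even observable.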
participants (2)
- Charles R Harris
- Peter Creasey