[Numpy-discussion] Augment unique method
shoyer at gmail.com
Thu Jul 16 15:06:21 EDT 2020
On Thu, Jul 16, 2020 at 11:41 AM Roman Yurchak <rth.yurchak at gmail.com>
> One issue with adding a tolerance to np.unique for floats is say you have
> [0, 0.1, 0.2, 0.3, 0.4, 0.5] with atol=0.15
> Should this return a single element or multiple ones? On one side, each
> consecutive float is closer than the tolerance to the next one, but the
> first one and the last one are clearly not within atol of each other.
> Generally this is similar to what DBSCAN clustering algorithm does (e.g.
> in scikit-learn) and that would probably be out of scope for np.unique.
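The chaining ambiguity described above can be demonstrated with a short
sketch (the array and atol are the hypothetical values from the example):

```python
import numpy as np

# Each consecutive pair is within atol = 0.15, but the endpoints are not,
# so a tolerance-based "unique" has no obvious single answer here.
a = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
atol = 0.15

print(np.all(np.diff(a) < atol))  # neighbours are within tolerance
print(a[-1] - a[0] < atol)        # but the endpoints are not
```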
I agree, I don't think there's an easy answer for selecting "approximately
unique" floats in the case of overlap.
np.unique() does actually have well-defined behavior for floats: it compares
them for exact equality. This isn't always directly useful, but it is
unambiguous.
My suggestion for this use-case would be to round the floats to the desired
precision before passing them into np.unique().
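For example, the suggested workaround might look like this (the input array
and the choice of one decimal place are illustrative assumptions):

```python
import numpy as np

# Round to the desired precision first, so values that agree to
# 1 decimal place collapse to the same key before np.unique().
a = np.array([0.0, 0.04, 0.1, 0.11, 0.5])
u = np.unique(np.round(a, decimals=1))
print(u)  # [0.  0.1 0.5]
```

Note that rounding draws fixed bin boundaries, so two values just either side
of a boundary (e.g. 0.049 and 0.051) still round apart; it sidesteps the
chaining ambiguity rather than solving it.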
> On 16/07/2020 20:27, Amin Sadeghi wrote:
> > It would be handy to add "atol" and "rtol" optional arguments to the
> > "unique" method. I'm proposing this since uniqueness is a bit vague for
> > floats. This change would be clearly backwards-compatible.
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at python.org
> > https://mail.python.org/mailman/listinfo/numpy-discussion