[Numpy-discussion] New functions.

Charles R Harris charlesr.harris at gmail.com
Tue May 31 22:26:50 EDT 2011


On Tue, May 31, 2011 at 8:00 PM, Skipper Seabold <jsseabold at gmail.com> wrote:

> On Tue, May 31, 2011 at 9:53 PM, Warren Weckesser
> <warren.weckesser at enthought.com> wrote:
> >
> >
> > On Tue, May 31, 2011 at 8:36 PM, Skipper Seabold <jsseabold at gmail.com>
> > wrote:
> >> I don't know if it's one pass off the top of my head, but I've used
> >> percentile for interpercentile ranges.
> >>
> >> In [1]: X = np.random.random(1000)
> >>
> >> In [2]: np.percentile(X,[0,100])
> >> Out[2]: [0.00016535235312509222, 0.99961513543316571]
> >>
> >> In [3]: X.min(),X.max()
> >> Out[3]: (0.00016535235312509222, 0.99961513543316571)
> >>
> >
> >
> > percentile() isn't one pass; using percentile like that is much slower:
> >
> > In [25]: %timeit np.percentile(X,[0,100])
> > 10000 loops, best of 3: 103 us per loop
> >
> > In [26]: %timeit X.min(),X.max()
> > 100000 loops, best of 3: 11.8 us per loop
> >
>
> Probably should've checked that before opening my mouth. Never
> actually used it for a minmax, but it is faster than two calls to
> scipy.stats.scoreatpercentile. Guess I'm +1 to fast order statistics.
>
>
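
The timing gap above is the case for a fused minmax: a single pass over the
data that tracks both extremes, rather than two separate reductions. A
minimal pure-Python sketch of that single-pass idea (the name minmax is
hypothetical here, and a real implementation would be written in C for
speed):

    import numpy as np

    def minmax(a):
        # Single pass: update the running min and max together, so the
        # data is read once instead of twice.
        a = np.asarray(a).ravel()
        lo = hi = a[0]
        for x in a[1:]:
            if x < lo:
                lo = x
            elif x > hi:
                hi = x
        return lo, hi

    X = np.random.random(1000)
    assert minmax(X) == (X.min(), X.max())
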
So far the biggest interest seems to be in order statistics of various
sorts, so to speak.

*Order Statistics*

minmax
median
k'th element
largest/smallest k elements

*Other Statistics*

mean/std

*Nan functions*

nanadd
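
The k'th element and largest/smallest-k items above are selection problems
that don't require a full sort. NumPy later grew machinery for this in
np.partition (added in 1.8); a short sketch using it purely to illustrate
the technique:

    import numpy as np

    X = np.random.random(1000)
    k = 10

    # k'th smallest value: np.partition places the k'th order statistic
    # at index k in O(n) expected time (introselect), with no full sort.
    kth = np.partition(X, k)[k]

    # Largest k elements, unordered; a negative kth counts from the end.
    top_k = np.partition(X, -k)[-k:]

    # Smallest k elements, unordered.
    bottom_k = np.partition(X, k)[:k]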

Chuck