[SciPy-user] Finding Neighbors

Lou Pecora lou_boog2000 at yahoo.com
Wed Aug 1 11:08:59 EDT 2007


No, these are not the same.  

For multivariable (multidimensional) data points,
the slice search presorts the data along each
dimension (hence the need for static data) and then
takes slices around the query point along each
dimension, narrowing down the candidate neighbors
rather rapidly.  There are no tree structures in
slice searching, and setting it up is very simple
compared to k-d trees.  However, the presorting
makes it inefficient for non-static data.
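
Roughly, the idea looks like the sketch below (a
minimal NumPy version; the class name SliceSearcher
and the slice half-width eps are just placeholders,
and the candidate bookkeeping here uses plain set
intersection rather than the exact scheme in the
paper):

import numpy as np

class SliceSearcher:
    """Static-data nearest neighbor search by presorted slices
    (in the spirit of Nene & Nayar, 1997)."""

    def __init__(self, points):
        # points: (n_points, n_dims) array; presort once per dimension.
        self.points = np.asarray(points, dtype=float)
        self.order = np.argsort(self.points, axis=0)     # sort indices, per dimension
        self.sorted_vals = np.sort(self.points, axis=0)  # sorted coordinates, per dimension

    def query(self, q, eps):
        """Return (index, distance) of the nearest stored point whose every
        coordinate lies within eps of q, or (None, None) if the slice
        intersection is empty (retry with a larger eps in that case)."""
        q = np.asarray(q, dtype=float)
        n_points, n_dims = self.points.shape
        candidates = None
        for dim in range(n_dims):
            # Binary-search the presorted coordinates for the slice
            # q[dim] - eps <= x <= q[dim] + eps along this dimension.
            lo = np.searchsorted(self.sorted_vals[:, dim], q[dim] - eps, side='left')
            hi = np.searchsorted(self.sorted_vals[:, dim], q[dim] + eps, side='right')
            in_slice = set(self.order[lo:hi, dim].tolist())
            candidates = in_slice if candidates is None else (candidates & in_slice)
            if not candidates:
                return None, None
        idx = np.fromiter(candidates, dtype=int)
        dists = np.sqrt(((self.points[idx] - q) ** 2).sum(axis=1))
        best = dists.argmin()
        return idx[best], dists[best]

# Quick check against brute force on random static data:
data = np.random.rand(1000, 3)
searcher = SliceSearcher(data)
q = np.random.rand(3)
idx, dist = searcher.query(q, eps=0.1)
brute = np.sqrt(((data - q) ** 2).sum(axis=1)).argmin()
print("slice search:", idx, "  brute force:", brute)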

Please see the original article for more information
and comparisons to k-d tree searching, including
timing comparisons.
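
For comparison, a k-d tree lookup with SciPy's
scipy.spatial.cKDTree would look something like the
snippet below (this assumes a SciPy release that
actually ships scipy.spatial; note that SciPy's tree
is likewise built once from static data):

import numpy as np
from scipy.spatial import cKDTree

data = np.random.rand(1000, 3)   # 1000 static points in 3 dimensions
tree = cKDTree(data)             # build the tree once

q = np.random.rand(3)
dist, idx = tree.query(q, k=1)   # distance to and index of the nearest point
print(idx, dist)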

-------------------------------------------
Matthieu wrote:

2007/8/1, Lou Pecora <lou_boog2000 at yahoo.com>:

    If you are using a data set that is static, i.e.
    you are not adding or deleting data points, then
    you should check out nearest neighbor searches
    using slices along various directions.  It's
    *much* easier to code than k-d trees and is about
    as efficient.  The reference is,

    Nene and Nayar, IEEE TRANSACTIONS ON PATTERN
    ANALYSIS AND MACHINE INTELLIGENCE, VOL. 19, NO. 9,
    SEPTEMBER 1997, p. 989

    If your data is changing a lot as you are doing
    searches, then, as I understand it, k-d trees are
    better to use since you can delete and insert
    nodes easily.


Isn't that just what k-d trees do?  If you take
several slices around a point, you must know which
points belong to each slice, thus making a tree
anyway, no?

Matthieu




-- Lou Pecora,   my views are my own.
---------------
Great spirits have always encountered violent opposition from mediocre minds. 
-Albert Einstein


       


