Re: [Numpy-discussion] (K-Mean) Clustering
Using argmin it should be relatively easy to assign each vector to the cluster with the closest representative (using sum((x-y)**2) as the distance measure), but how do I calculate the new representatives effectively? (The representative of a cluster, e.g., 10, should be the average of all vectors currently assigned to that cluster.) I could always use a loop and then compress() the data based on cluster number, but I'm looking for a way of calculating all the averages "simultaneously", to avoid using a Python loop... I'm sure there's a simple solution -- I just haven't been able to think of it yet. Any ideas?
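[A sketch of the loop-free update being asked about, written in modern NumPy rather than the thread's Numeric (the function name and shapes are this sketch's own choices): assignment via argmin over a distance matrix, and all cluster means computed "simultaneously" with bincount and a scatter-add instead of a Python loop.]

```python
import numpy as np

def assign_and_update(data, centers):
    """One k-means step: assign each vector to its nearest center,
    then recompute every center as the mean of its assigned vectors,
    with no Python loop over clusters."""
    # Squared Euclidean distance from every vector to every center: (N, k).
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1)              # nearest center per vector
    k = centers.shape[0]
    # Per-cluster sums via unbuffered scatter-add, sizes via bincount.
    counts = np.bincount(labels, minlength=k)
    sums = np.zeros_like(centers)
    np.add.at(sums, labels, data)
    # Keep the old center wherever a cluster ended up empty.
    nonempty = counts > 0
    new_centers = centers.copy()
    new_centers[nonempty] = sums[nonempty] / counts[nonempty, None]
    return labels, new_centers
```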
Maybe this helps (old code, may contain some suboptimal or otherwise weird things):
from Numeric import *
from RandomArray import randint
import sys

def squared_distances(X, Y):
    # |x - y|^2 expanded as |x|^2 + |y|^2 - 2 x.y, for all pairs at once.
    return add.outer(sum(X*X, -1), sum(Y*Y, -1)) - 2*dot(X, transpose(Y))

def kmeans(data, M,
           wegstein=0.2,
           r_convergence=0.001,
           epsilon=0.001, debug=0, minit=20):
    """Computes k-means for DATA with M centers, iterating until the
    relative change of the quantization error is less than the optional
    R_CONVERGENCE. WEGSTEIN, by default .2 but always between 0 and 1,
    stabilizes the convergence process. EPSILON is used to guarantee
    that the centers are initially all different. DEBUG causes some
    intermediate output to appear on stderr.
    Returns the centers and the average (squared) quantization error.
    """
    N, D = data.shape
    # Selecting the initial centers has to be done carefully.
    # We have to ensure all of them are different, otherwise the
    # algorithm below will produce empty classes.
    centers = []
    if debug:
        sys.stderr.write("kmeans: Picking centers.\n")
    while len(centers)
Janne Sinkkonen
[snip]
Maybe this helps (old code, may contain some suboptimal or otherwise weird things):
Thanks :)

--
Magnus Lie Hetland     The Anygui Project
http://hetland.org     http://anygui.org
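[Since the archive cuts Janne's function off mid-way, here is a compact editorial sketch of the same procedure in modern NumPy. The parameter names `max_iter` and `seed`, the convergence test, and the plain random initialization are this sketch's own choices; the original's Wegstein damping and EPSILON-based duplicate check are omitted.]

```python
import numpy as np

def kmeans(data, M, r_convergence=0.001, max_iter=100, seed=0):
    """Plain k-means: alternate assignment and mean-update until the
    relative change of the average squared quantization error falls
    below r_convergence. Returns the centers and that error."""
    rng = np.random.default_rng(seed)
    # Initialize from M distinct data points (distinct indices; assumes
    # the points themselves differ, so no class starts empty).
    centers = data[rng.choice(len(data), size=M, replace=False)].astype(float)
    prev_err = np.inf
    for _ in range(max_iter):
        # Squared distances from every vector to every center: (N, M).
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        err = d2.min(axis=1).mean()          # avg squared quantization error
        # Recompute centers as per-cluster means, without a Python loop.
        counts = np.bincount(labels, minlength=M)
        sums = np.zeros_like(centers)
        np.add.at(sums, labels, data)
        nonempty = counts > 0
        centers[nonempty] = sums[nonempty] / counts[nonempty, None]
        if prev_err - err <= r_convergence * err:
            break
        prev_err = err
    return centers, err
```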