> Using argmin it should be relatively easy to assign each vector to the
> cluster with the closest representative (using sum((x-y)**2) as the
> distance measure), but how do I calculate the new representatives
> effectively? (The representative of a cluster, e.g., 10, should be the
> average of all vectors currently assigned to that cluster.) I could
> always use a loop and then compress() the data based on cluster
> number, but I'm looking for a way of calculating all the averages
> "simultaneously", to avoid using a Python loop... I'm sure there's a
> simple solution -- I just haven't been able to think of it yet. Any
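One loop-free way to get all the cluster averages "simultaneously" is a scatter-add: accumulate each vector into the row of a sums array indexed by its cluster number, then divide by the per-cluster counts. A minimal sketch in modern NumPy (the names `data`, `labels`, and `cluster_means` are mine, not from the original post):

```python
import numpy as np

def cluster_means(data, labels, k):
    """Average of the data vectors assigned to each of k clusters,
    without a Python loop over the data items."""
    counts = np.bincount(labels, minlength=k)   # items per cluster
    sums = np.zeros((k, data.shape[1]))
    np.add.at(sums, labels, data)               # scatter-add each row into its cluster
    return sums / counts[:, None]               # note: divides by 0 for empty clusters

data = np.array([[0., 0.], [2., 2.], [4., 4.]])
labels = np.array([0, 0, 1])
print(cluster_means(data, labels, 2))           # [[1. 1.] [4. 4.]]
```

Empty clusters produce a zero count and hence a NaN/inf row, so in practice you either guard against them or re-seed the empty center.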
Maybe this helps (old code, may contain some suboptimal or otherwise questionable constructs):
import sys
from Numeric import *
from RandomArray import randint

def sqdist(X, Y):
    # Squared Euclidean distances between all pairs of rows of X and Y
    return add.outer(sum(X*X,-1),sum(Y*Y,-1)) - 2*dot(X,transpose(Y))

def kmeans(data, M, wegstein=0.2, rconv=0.001, epsilon=0.001,
           debug=0, minit=20):
    """Computes kmeans for DATA with M centers until convergence in the
    sense that the relative change of the quantization error is less than
    RCONV.  WEGSTEIN, by default .2 but always between 0 and 1,
    stabilizes the convergence process.  EPSILON is used to guarantee
    the centers are initially all different.  DEBUG causes some
    intermediate output to appear on stderr.  Returns the centers and
    the average (squared) quantization error."""
    N = data.shape[0]
    # Selecting the initial centers has to be done carefully.
    # We have to ensure all of them are different, otherwise the
    # algorithm below will produce empty classes.
    # Not like this, you get doubles: centers=take(data,randint(0,N,(M,)))
    if debug: sys.stderr.write("kmeans: Picking centers.\n")
    centers = [data[randint(0, N)]]
    while len(centers) < M:
        # Pick one data item randomly; keep it only if it is farther
        # than epsilon from every center chosen so far
        candidate = data[randint(0, N)]
        d = minimum.reduce(sqdist(candidate[NewAxis,:], array(centers))[0])
        if d > epsilon: centers.append(candidate)
    centers = array(centers)
    old_qerror = None
    counter = 0
    while 1:
        counter = counter + 1
        # Squared distances from data to centers (all pairs)
        dist = sqdist(data, centers)
        # Matrix telling which data item is closest to which center
        x = equal.outer(argmin(dist), arange(M)).astype(Float)
        # Compute new centers; Wegstein smoothing blends in the old ones
        centers = ( (1-wegstein)*(dot(transpose(x),data)/sum(x)[...,NewAxis])
                    + wegstein*centers )
        # Quantization error
        qerror = sum(minimum.reduce(dist, 1))/N
        if debug and old_qerror is not None:
            sys.stderr.write("%f %f %i\n" % (qerror, old_qerror, counter))
        elif debug:
            sys.stderr.write("%f None %i\n" % (qerror, counter))
        if (old_qerror is not None and counter >= minit
                and abs(old_qerror-qerror) < rconv*qerror):
            break
        old_qerror = qerror
    return centers, qerror
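For reference, the same two core tricks translate almost verbatim to modern NumPy: the all-pairs squared distance via the expansion ||x-y||^2 = ||x||^2 + ||y||^2 - 2 x.y, and the loop-free center update via an indicator matrix. A minimal sketch (function names `sqdist` and `kmeans_step` are mine; this is one update step, not the full converging loop):

```python
import numpy as np

def sqdist(X, Y):
    # ||x - y||^2 for all pairs of rows of X and Y
    return np.add.outer((X*X).sum(-1), (Y*Y).sum(-1)) - 2*np.dot(X, Y.T)

def kmeans_step(data, centers, wegstein=0.2):
    d = sqdist(data, centers)
    # Indicator matrix: x[i, j] == 1 iff data item i is closest to center j
    x = (d.argmin(-1)[:, None] == np.arange(len(centers))).astype(float)
    new = np.dot(x.T, data) / x.sum(0)[:, None]   # per-cluster averages
    # Wegstein smoothing: blend the old centers into the new ones
    centers = (1 - wegstein)*new + wegstein*centers
    qerror = d.min(-1).mean()                      # avg squared quantization error
    return centers, qerror

data = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
centers = np.array([[0., 0.], [10., 10.]])
print(kmeans_step(data, centers, wegstein=0.0)[0])  # [[0. 0.5] [10. 10.5]]
```

As in the Numeric version, a center that attracts no data items makes `x.sum(0)` zero for that column, so a production implementation needs to handle empty clusters.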