
Introduction to Data Mining

Chapters 8 & 9

Cluster Analysis
What Is Cluster Analysis

Cluster analysis groups data objects based only on information found in the data that describes the objects and their relationships. The goal is that the objects within a group be similar to one another and different from the objects in other groups. The greater the similarity within a group and the greater the difference between groups, the better or more distinct the clustering.
Cluster analysis is also referred to as segmentation or partitioning.

2
What is Cluster Analysis?

Intra-cluster distances are minimized; inter-cluster distances are maximized.

3
Applications of Cluster Analysis

Understanding
Classes, or conceptually meaningful groups of objects that share common characteristics, play an important role in how people analyze and describe the world.
Examples: group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations.

Summarization
Reduce the size of large data sets (e.g., clustering precipitation in Australia).

4
What is not Cluster Analysis?

Supervised classification
Have class label information

Simple segmentation
Dividing students into different registration groups
alphabetically, by last name

Results of a query
Groupings are a result of an external specification

Graph partitioning
If the purpose is to divide a graph into some sub-graphs

5
Notion of a Cluster can be Ambiguous

How many clusters? The same set of points can be seen as two clusters, four clusters, or six clusters.

6
Types of Clusterings (1/2)

A clustering is a set of clusters

Important distinction between hierarchical and partitional sets of clusters
Partitional Clustering
A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
Hierarchical clustering
A set of nested clusters organized as a hierarchical tree

7
Partitional Clustering

Original Points A Partitional Clustering

8
Hierarchical Clustering

Traditional Hierarchical Clustering Traditional Dendrogram

Non-traditional Hierarchical Clustering Non-traditional Dendrogram

9
A hierarchical clustering can be viewed as a
sequence of partitional clusterings and a
partitional clustering can be obtained by taking
any member of that sequence; i.e., by cutting the
hierarchical tree at a particular level.

10
Types of Clusterings (2/2)

Exclusive versus non-exclusive
In non-exclusive clusterings, points may belong to multiple clusters.
Can represent multiple classes or border points

Fuzzy versus non-fuzzy
In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1. (In real fuzzy set theory, this point does not exist.)
Weights for a point must sum to 1
Probabilistic clustering has similar characteristics

Partial versus complete
In some cases, we only want to cluster some of the data

[Figure: membership degree, from 0 to 1, of a point in clusters A, B, C, and D]
11
Types of Clusters

Well-separated clusters

Prototype or Center-based clusters

Graph-based or Contiguous clusters

Density-based clusters

Shared-Property or Conceptual

Described by an Objective Function


12
Types of Clusters: Well-Separated

Well-Separated Clusters:
A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.
Sometimes a threshold is used to specify that all the objects in
a cluster must be sufficiently close to one another.

3 well-separated clusters

13
Types of Clusters: Center-Based (Prototype)

Center-based
A cluster is a set of objects such that an object in a cluster is closer (more similar) to the center of its cluster than to the center of any other cluster
The center of a cluster is often a centroid, the average of all the points in the cluster (used with continuous data), or a medoid, the most representative point of a cluster

4 center-based clusters

14
Types of Clusters: Contiguity-Based (Graph)

Contiguous Cluster (Nearest neighbor or Transitive)
A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

8 contiguous clusters

15
Types of Clusters: Density-Based

Density-based
A cluster is a dense region of points, which is separated by
low-density regions, from other regions of high density.
Used when the clusters are irregular or intertwined, and when
noise and outliers are present.

6 density-based clusters

16
Types of Clusters: Conceptual Clusters
(Shared Property)

Shared Property or Conceptual Clusters


Finds clusters that share some common property or represent a particular concept.

2 Overlapping Circles

17
Types of Clusters: Objective Function

Clusters Defined by an Objective Function
Finds clusters that minimize or maximize an objective function.
Enumerate all possible ways of dividing the points into clusters and evaluate the 'goodness' of each potential set of clusters using the given objective function. (NP-hard)
Can have global or local objectives.
Hierarchical clustering algorithms typically have local objectives
Partitional algorithms typically have global objectives
A variation of the global objective function approach is to fit the data to a parameterized model.
Parameters for the model are determined from the data.
18
Types of Clusters: Objective Function

Map the clustering problem to a different domain and solve a related problem in that domain
The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points

Clustering is equivalent to breaking the graph into connected components, one for each cluster.

We want to minimize the edge weight between clusters and maximize the edge weight within clusters
19
Clustering Algorithms

Self-organizing maps (SOM) neural network
This is a clustering and data visualization technique based on a neural network viewpoint.
K-means and its variants
This is a prototype-based, partitional clustering technique that attempts to find a user-specified number of clusters (K), which are represented by their centroids.
Agglomerative hierarchical clustering
Produces a hierarchical clustering by starting with each point as a singleton cluster and then repeatedly merging the two closest clusters until a single, all-encompassing cluster remains.
20
Self-organizing maps (SOM)
Neural Network

21
SOM Neural Network

Kohonen proposed the SOM neural network in the early 1980s.
A kind of unsupervised neural network.
The concept is based on the principle that birds of a feather flock together.
It can be used as a pre-processing mechanism for a supervised neural network.

[Diagram: training records without class labels → SOM neural network → obtain the class label for every record → BP neural network]

22
Network Structure

[Figure: a 5x5 output topology; output nodes are indexed (j, k), input nodes are indexed i]

23
Network Structure

Input layer: Input vector of training samples. Uses the linear transfer function f(x) = x.
Output layer: Clustering of samples with network topology and neighborhood concepts.

24
Basic Concept (1/8)

Network topology: In an SOM, the position of each output unit in the network topology is meaningful, unlike other neural networks whose output units have no positional meaning. The most commonly used output-layer topology is a 2-dimensional structure, but other shapes or numbers of layers are allowed.

25
Basic Concept (2/8)

Topology coordinate: The coordinate of an output node in the output topology. The number of dimensions can be more than 2.

[Figure: a 2-D output topology with axes Xt and Yt; node a has topology coordinate (1,3), node b has (3,1)]

26
Basic Concept (3/8)

Neighborhood area: The area centered on an output node (the winner). The nodes within this area interact with each other. The size of the area decreases during training.

[Figure: the neighborhood area at the nth, (n+1)th, and (n+2)th iterations, shrinking each time]

27
Basic Concept (4/8)

Neighborhood center: The center that controls the neighborhood area; basically, the topology coordinate of the output node that is the center (the winner).

Neighborhood radius: The parameter that controls the size of the neighborhood area.

28
Basic Concept (5/8)

Neighborhood distance: The distance between an output node and the neighborhood center in the output topology:

r_j = [ (X_j - C_x)^2 + (Y_j - C_y)^2 ]^(1/2)

where
X_j, Y_j = the coordinate values of the jth output node
C_x, C_y = the coordinate values of the neighborhood center

Neighborhood coefficient (R_factor): The parameter that controls the interaction of output nodes inside the neighborhood area.
29
Basic Concept (6/8)

Neighborhood function: Determines the neighborhood coefficient from the neighborhood distance and the neighborhood radius:

R_factor_j = f(r_j, R)

Four most used neighborhood functions:

30
Basic Concept (7/8)

[Figure: the four neighborhood functions plotted against neighborhood distance for radius R — the smokestack-hat, Mexican-hat, chef-hat, and leaf-hat functions]


31
Basic Concept (8/8)

Neighborhood shrinking: The neighborhood radius decreases gradually:

R_n = R_rate x R_(n-1)

where R_rate = neighborhood radius shrinking coefficient (< 1.0)

32
Algorithm-Training (1/5)

(1) Set up network parameters.
(2) Set up the connecting weight matrix W randomly. Set up topology coordinate values for the output layer nodes:
X_Node[ j ][ k ] = j
Y_Node[ j ][ k ] = k
(3) Input the input vector X of a training sample.

33
Algorithm Training (2/5)

(4) Select the winning output node Node[ j* ][ k* ]
(a) net[ j ][ k ] = Σ_i (X[ i ] - W[ i ][ j ][ k ])^2
    where W[ i ][ j ][ k ] is the connecting weight between the ith input node and the (j,k)th output node.
(b) Find the winning node:
    net[ j* ][ k* ] = min over j,k of { net[ j ][ k ] }

34
Algorithm-Training (3/5)

(5) Calculate the output vector Y for the output layer:

if j = j* and k = k* then
    Y[ j ][ k ] = 1
else
    Y[ j ][ k ] = 0

35
Algorithm-Training (4/5)

(6) Calculate the updating amount ΔW for each connecting weight:

ΔW[ i ][ j ][ k ] = η · (X[ i ] - W[ i ][ j ][ k ]) · neighborhood[ j ][ k ]

where
η = learning rate
R = neighborhood radius
neighborhood[ j ][ k ] = exp(-r[ j ][ k ] / R)
r[ j ][ k ] = [ (X_Node[ j ][ k ] - X_Node[ j* ][ k* ])^2 + (Y_Node[ j ][ k ] - Y_Node[ j* ][ k* ])^2 ]^(1/2)
36
Algorithm-Training (5/5)

(7) Update the connecting weight matrix W:

W[ i ][ j ][ k ] = W[ i ][ j ][ k ] + ΔW[ i ][ j ][ k ]

(8) (a) Repeat Steps 3-7 until all training samples have been input.
    (b) Shrink the learning rate and the neighborhood radius:
        η_n = η_rate · η_(n-1)
        R_n = R_rate · R_(n-1)
(9) Repeat Steps 3-8 until the termination criterion is satisfied.
37
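The training steps above can be summarized in a short program. The following is a minimal Python sketch of the SOM training loop (steps 2 through 9) and the recall procedure; the grid size, learning-rate and radius schedules, and the random stand-in data are illustrative assumptions rather than values from the slides.

```python
import numpy as np

def train_som(X, grid=(5, 5), eta=0.5, eta_rate=0.99,
              R=3.0, R_rate=0.99, epochs=100, seed=0):
    """Minimal SOM training sketch following steps (2)-(9) of the slides.

    X    : (n_samples, n_features) training vectors without class labels
    grid : output topology, e.g. 5x5
    eta  : initial learning rate; R : initial neighborhood radius
    """
    rng = np.random.default_rng(seed)
    J, K = grid
    n_features = X.shape[1]
    # (2) random connecting weights W; topology coordinates are simply (j, k)
    W = rng.random((J, K, n_features))
    jj, kk = np.meshgrid(np.arange(J), np.arange(K), indexing="ij")

    for _ in range(epochs):                        # (9) outer loop
        for x in X:                                # (3) input one training sample
            # (4) net[j][k] = sum_i (X[i] - W[i][j][k])^2 ; winner = argmin
            net = ((W - x) ** 2).sum(axis=2)
            j_star, k_star = np.unravel_index(np.argmin(net), net.shape)
            # (6) neighborhood coefficient exp(-r/R) from the topology distance r
            r = np.sqrt((jj - j_star) ** 2 + (kk - k_star) ** 2)
            h = np.exp(-r / R)
            # (7) W <- W + eta * (x - W) * neighborhood
            W += eta * (x - W) * h[:, :, None]
        # (8b) shrink the learning rate and the neighborhood radius
        eta *= eta_rate
        R *= R_rate
    return W

def recall(W, x):
    """Recall: return the winning node (j*, k*) for a test vector x."""
    net = ((W - x) ** 2).sum(axis=2)
    return np.unravel_index(np.argmin(net), net.shape)

if __name__ == "__main__":
    X = np.random.default_rng(1).random((30, 4))   # stand-in data for illustration
    W = train_som(X)
    print(recall(W, X[0]))
```

Swapping the stand-in array for the IRIS data and sweeping grid sizes such as 5x5 and 10x10 would give a starting point for the homework described a few slides later.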
Algorithm-Recall

(1) Set up network parameters.
(2) Input the connecting weight matrix W and set up the topology coordinates for the output nodes.
(3) Input the input vector X for a test sample.
(4) Find the winning node, i.e., the node with the smallest net value.
(5) Calculate the output vector Y for the output layer.

38
An Example of Output Topology

[Figure: example output topology; the frequency of training samples mapped to each node on the output X and Y axes]

39
Error Measure

Error is measured by using the total distance:

Total distance = Σ_p || X_p - W[ j*_p ][ k*_p ] ||

where p runs over the training samples and (j*_p, k*_p) is the winning node for sample X_p.
The average of the total distances can be used as the termination criterion.

40
Homework 2 (1/2)

Write a computer program for an SOM neural network and use the IRIS data set to verify your program, with the following requirements:
Test different network topologies, like 5x5 and 10x10, and find the best network structure.
Test different training rates.
Use your program to draw the learning curve as shown in Appendix 1.
Use Excel or Matlab to draw the output topology as shown in Appendix 2.
Present your work on May 12, 2016 in class.

41
Homework 2 (2/2)

Appendix 1: [Figure: learning curve, error versus training iteration]

Appendix 2: [Figure: output topology, frequency over topology coordinates j and k]
42
K-Means

43
K-means Clustering

A kind of partitional clustering approach
Each cluster is associated with a centroid (center point)
Each point is assigned to the cluster with the closest centroid
Number of clusters, K, must be specified
The basic algorithm is very simple:
1. Select K points as the initial centroids.
2. Repeat:
3.   Form K clusters by assigning each point to its closest centroid.
4.   Recompute the centroid of each cluster.
5. Until the centroids do not change.
(The stopping condition in step 5 can be replaced by a weaker one, e.g., until relatively few points change clusters.)

44
K-Means Example

Coordinates of the samples

Sample X coordinate Y coordinate


X1 -1 1
X2 3 8
X3 0 0
X4 4 9
X5 5 0
X6 3 6
X7 1 2
X8 6 2
45
K-Means Example

[Figure: scatter plot of the eight training samples x1-x8]

46
K-Means Example

Step 1: Determine the number of clusters k.
From the coordinate figure it is apparent that there are three clusters, so set k = 3.
Step 2: Generate k cluster centers randomly.
Cluster center   X coordinate   Y coordinate
Z1   -1   0
Z2    2   8
Z3    6   0

47
K-Means Example

Thus, the initial samples and the corresponding cluster centers are as follows:

[Figure: scatter plot of samples X1-X8 with the initial cluster centers Z1, Z2, Z3]

48
K-Means Example

Step 3: Calculate the Euclidean distance between each data sample x_i and each cluster center z_j:

d(x_i, z_j) = [ (x_i1 - z_j1)^2 + (x_i2 - z_j2)^2 ]^(1/2)

49
K-Means Example

Z1 Z2 Z3
X1 1.000 7.616 7.071
X2 8.944 1.000 8.544
X3 1.000 8.246 6.000
X4 10.296 2.236 9.220
X5 6.000 8.544 1.000
X6 7.211 2.236 6.708
X7 2.828 6.083 5.385
X8 7.280 7.211 2.000

50
K-Means Example

Step 4: Assign every data point x_i to the cluster whose center is closest to it.

     Z1      Z2      Z3      Assigned cluster
X1   1.000   7.616   7.071   Z1
X2   8.944   1.000   8.544   Z2
X3   1.000   8.246   6.000   Z1
X4   10.296  2.236   9.220   Z2
X5   6.000   8.544   1.000   Z3
X6   7.211   2.236   6.708   Z2
X7   2.828   6.083   5.385   Z1
X8   7.280   7.211   2.000   Z3
51
K-Means Example

Step 5: Recalculate new cluster centers.


Z1 X Y
cluster coordinate coordinate
X1 -1 1
X3 0 0
X7 1 2

52
K-Means Example

Z2 X Y
cluster coordinate coordinate
X2 3 8
X4 4 9
X6 3 6

Z3 X Y
cluster coordinate coordinate
X5 5 0
X8 6 2

53
K-Means Example

The new cluster centers

Cluster center (after 1 iteration)   X coordinate   Y coordinate
Z1_new   0       1
Z2_new   3.333   7.667
Z3_new   5.5     1

54
K-Means Example

The coordinate figure for the data points and the new cluster centers after one iteration:

[Figure: scatter plot of samples X1-X8 with the updated centers Z1_new, Z2_new, Z3_new]

55
K-Means Example

Check whether the pre-specified termination criterion is satisfied (assume that the maximal number of iterations is 50).
If it is not satisfied, go back to Step 3 and repeat until the termination criterion is met.

56
K-Means Example

After 50 iterations, we have the final cluster


centers as follows.

Cluster center (after 50 iterations)   X coordinate   Y coordinate
Z1_new   0       1
Z2_new   3.333   7.667
Z3_new   5.5     1

57
K-Means Example

The coordinate figure of the samples and the cluster centers after 50 iterations:

[Figure: scatter plot of samples X1-X8 with the final centers Z1_new, Z2_new, Z3_new]

58
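As a cross-check of the worked example above, here is a minimal Python sketch of the basic K-means loop (assign points to the nearest center, recompute centers) run on the eight samples with the same initial centers Z1-Z3; it is an illustrative implementation, not code from the slides, and it also prints the SSE measure discussed a few slides later.

```python
import numpy as np

def kmeans(X, centers, max_iter=50):
    """Basic K-means: assign points to the nearest center, then recompute
    centers as cluster means, until the centers stop changing.
    (Empty clusters are not handled in this sketch.)"""
    centers = centers.astype(float).copy()
    for _ in range(max_iter):
        # distances from every point to every center, shape (n_points, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # assign to the closest center
        new_centers = np.array([X[labels == j].mean(axis=0)
                                for j in range(len(centers))])
        if np.allclose(new_centers, centers):      # termination criterion
            break
        centers = new_centers                      # recompute the centroids
    sse = ((X - centers[labels]) ** 2).sum()       # sum of squared error
    return centers, labels, sse

# The eight samples X1..X8 and the initial centers Z1..Z3 from the example
X = np.array([[-1, 1], [3, 8], [0, 0], [4, 9],
              [5, 0], [3, 6], [1, 2], [6, 2]], dtype=float)
Z = np.array([[-1, 0], [2, 8], [6, 0]], dtype=float)

centers, labels, sse = kmeans(X, Z)
print(centers)   # ~ [[0, 1], [3.333, 7.667], [5.5, 1]], as in the slides
print(labels, sse)
```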
K-means Clustering Details

Initial centroids are often chosen randomly.
Clusters produced vary from one run to another.
The centroid is (typically) the mean of the points in the cluster.
'Closeness' is measured by Euclidean distance, cosine similarity, correlation, etc.
Most of the convergence happens in the first few iterations.
Often the stopping condition is changed to 'until relatively few points change clusters'.
59
Assigning Points to the Closest Centroid

Euclidean distance (L2) is often used for data points in Euclidean space, while cosine similarity is more appropriate for documents.
Manhattan (L1) distance can be used for Euclidean data, while the Jaccard measure is often employed for documents.

60
Centroids and Objective Functions

The goal of the clustering is typically expressed by an objective function that depends on the proximities of the points to one another or to the cluster centroids; e.g., minimize the squared distance of each point to its closest centroid.
Once we have specified a proximity measure and an objective function, the centroid that we should choose can often be determined mathematically.

61
Assigning Points to the Closest Centroid

Proximity Function        Centroid   Objective Function
Manhattan (L1)            Median     Minimize the sum of the L1 distances of objects to their cluster centroid
Squared Euclidean (L2^2)  Mean       Minimize the sum of the squared L2 distances of objects to their cluster centroid
Cosine                    Mean       Maximize the sum of the cosine similarities of objects to their cluster centroid
Bregman divergence        Mean       Minimize the sum of the Bregman divergences of objects to their cluster centroid

62
Evaluating K-means Clusters

Most common measure is the Sum of Squared Error (SSE)
For each point, the error is the distance to the nearest cluster centroid
To get SSE, we square these errors and sum them:

SSE = Σ_{i=1..K} Σ_{x in C_i} dist(m_i, x)^2

x is a data point in cluster C_i and m_i is the representative point for cluster C_i
It can be shown that m_i corresponds to the center (mean) of the cluster
Given two sets of clusters, we can choose the one with the smallest error
One easy way to reduce SSE is to increase K, the number of clusters
A good clustering with smaller K can have a lower SSE than a poor clustering with higher K

63
Two different K-means Clusterings

Original Points

Optimal Clustering Sub-optimal Clustering


64
Importance of Choosing Initial Centroids

65
Importance of Choosing Initial Centroids

66
Importance of Choosing Initial Centroids

67
Importance of Choosing Initial Centroids

68
Problems with Selecting Initial Points

If there are K 'real' clusters then the chance of selecting one centroid from each cluster is small.
The chance is relatively small when K is large
Sometimes the initial centroids will readjust themselves in the right way, and sometimes they don't
Consider an example of five pairs of clusters (see next page)

69
10 Clusters Example

Starting with two initial centroids in one cluster of each pair of clusters
70
10 Clusters Example

Starting with two initial centroids in one cluster of each pair of clusters
71
10 Clusters Example

Starting with some pairs of clusters having three initial centroids, while others have only one.

72
10 Clusters Example

Starting with some pairs of clusters having three initial centroids, while others have only one.

73
Solutions to Initial Centroids Problem

Multiple runs
Helps, but probability is not on your side
Take a sample of points and cluster them using a hierarchical clustering technique to determine initial centroids
Select more than k initial centroids and then select among these initial centroids
Select the most widely separated ones
Limitations: the sample must be relatively small, and K must be relatively small compared to the sample size

74
Select the first point at random and then repeatedly select the next point to be the one farthest from any of the initial centroids already selected (gives well-separated centroids).
But outliers may be selected.
It is expensive to compute the farthest point from the current set of initial centroids.
Post-processing (see page 78)
Bisecting K-means
Not as susceptible to initialization issues

75
Handling Empty Clusters

The basic K-means algorithm can yield empty clusters

Several strategies to handle empty clusters:
Choose the point that is farthest away from any current centroid as the new centroid.
Choose the replacement centroid from the cluster that has the highest SSE (divide the cluster with the highest SSE into two clusters).
If there are several empty clusters, the above procedure can be repeated several times.

76
Outliers

When outliers are present, the resulting cluster centroids may not be as representative as they otherwise would be, and thus the SSE will be higher as well.
Thus, it is often useful to discover outliers and eliminate them beforehand.

77
Reducing the SSE with Pre-processing
and Post-processing
Pre-processing
Normalize the data
Eliminate outliers

Post-processing
Eliminate small clusters that may represent outliers
Split loose clusters, i.e., clusters with relatively high
SSE
Merge clusters that are close and that have relatively
low SSE

78
Two strategies that decrease the total SSE:
Split a cluster: The cluster with the largest SSE is usually chosen, but we could also split the cluster with the largest standard deviation for one particular attribute.
Introduce a new cluster centroid: Often the point that is farthest from any cluster is chosen. Random selection is another approach.
Two strategies that decrease the number of clusters:
Disperse a cluster: Remove the centroid that corresponds to the cluster and reassign the points to other clusters.
Merge two clusters: The clusters with the closest centroids are typically chosen.
79
Updating Centers Incrementally

In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid

An alternative is to update the centroids after each assignment (incremental approach)
Each assignment updates zero or two centroids
Never get an empty cluster
More expensive
Introduces an order dependency

80
Bisecting K-means

Bisecting K-means algorithm
A variant of K-means that can produce a partitional or a hierarchical clustering
There are many ways to choose which cluster to split:
Choose the largest cluster at each step
Choose the one with the largest SSE
Use a criterion based on both size and SSE
We often refine the resulting clusters by using their centroids as the initial centroids for the basic K-means algorithm.

81
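The bisecting idea can be sketched on top of the `kmeans` function from the earlier K-means example. The splitting criterion below (largest SSE) is one of the options listed above; the number of trial bisections per split is an illustrative assumption.

```python
import numpy as np

def bisecting_kmeans(X, k, trials=5, seed=0):
    """Repeatedly split one cluster with 2-means until k clusters remain.
    The cluster with the largest SSE is chosen for splitting.
    (Singleton/empty-cluster handling is omitted in this sketch.)"""
    rng = np.random.default_rng(seed)
    clusters = [X]                                   # start with one all-inclusive cluster
    while len(clusters) < k:
        # pick the cluster with the largest SSE
        sses = [((c - c.mean(axis=0)) ** 2).sum() for c in clusters]
        target = clusters.pop(int(np.argmax(sses)))
        # bisect it: run 2-means several times, keep the split with the lowest SSE
        best = None
        for _ in range(trials):
            init = target[rng.choice(len(target), size=2, replace=False)]
            centers, labels, sse = kmeans(target, init)
            if best is None or sse < best[0]:
                best = (sse, labels)
        _, labels = best
        clusters += [target[labels == 0], target[labels == 1]]
    return clusters
```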
Limitations of K-means

K-means has problems when clusters are of differing
Sizes
Densities
Non-globular shapes

K-means has problems when the data contains outliers.

82
Limitations of K-means: Differing Sizes

Original Points K-means (3 Clusters)

83
Limitations of K-means: Differing Density

Original Points K-means (3 Clusters)

84
Limitations of K-means: Non-globular Shapes

Original Points K-means (2 Clusters)

85
Overcoming K-means Limitations

Original Points K-means Clusters

One solution is to use many clusters.
This finds parts of the natural clusters, but the parts then need to be put back together.
86
Overcoming K-means Limitations

Original Points K-means Clusters

87
Overcoming K-means Limitations

Original Points K-means Clusters

88
Hierarchical Clustering

89
Hierarchical Clustering

Produces a set of nested clusters organized as a hierarchical tree
Can be visualized as a dendrogram
A tree-like diagram that records the sequences of merges or splits

90
Strengths of Hierarchical Clustering

Do not have to assume any particular number of clusters
Any desired number of clusters can be obtained by cutting the dendrogram at the proper level

They may correspond to meaningful taxonomies
Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, ...)

91
Hierarchical Clustering

Two main types of hierarchical clustering
Agglomerative:
Start with the points as individual clusters
At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left

Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains a point (or there are k clusters)

Traditional hierarchical algorithms use a similarity or distance matrix
Merge or split one cluster at a time
92
Agglomerative Clustering Algorithm

More popular hierarchical clustering technique
Basic algorithm is straightforward:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains

Key operation is the computation of the proximity of two clusters
Different approaches to defining the distance between clusters distinguish the different algorithms

93
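The six-step algorithm above is what SciPy's hierarchical-clustering routines implement. The following illustrative sketch (not code from the slides) runs single-link (MIN) agglomerative clustering on made-up 2-D points and then cuts the tree to obtain a partitional clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up 2-D points purely for illustration
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [5.1, 5.3]])

# Steps 1-6: compute proximities and repeatedly merge the two closest clusters;
# 'single' = MIN linkage (other options: 'complete', 'average', 'ward')
Z = linkage(X, method="single", metric="euclidean")

# Each row of Z records one merge: (cluster a, cluster b, distance, new size)
print(Z)

# Cut the hierarchical tree to obtain a partitional clustering with 2 clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 1 2 2 2]

# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree if matplotlib is available
```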
Starting Situation

Start with clusters of individual points and a proximity matrix

[Figure: each point p1, p2, p3, ... starts as its own cluster; the proximity matrix holds the pairwise proximities]

94
Intermediate Situation

After some merging steps, we have some clusters

[Figure: five clusters C1-C5 and their proximity matrix]

95
Intermediate Situation

We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1-C5 and their proximity matrix, with C2 and C5 about to be merged]

96
After Merging

The question is: how do we update the proximity matrix?

[Figure: after merging, the proximity-matrix entries for the new cluster C2 U C5 against C1, C3, and C4 are unknown and marked '?']

97
How to Define Inter-Cluster Similarity

[Figure: two clusters and their proximity matrix over points p1, p2, p3, p4, p5, ...; how should the similarity between the two clusters be defined?]

MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
Ward's Method uses squared error

98
How to Define Inter-Cluster Similarity

[Figure: proximity matrix over points p1, p2, p3, ...]

MIN: the proximity between the closest two points that are in different clusters
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
Ward's Method uses squared error

99
How to Define Inter-Cluster Similarity

[Figure: proximity matrix over points p1, p2, p3, ...]

MIN
MAX: the proximity between the farthest two points in different clusters
Group Average
Distance Between Centroids
Other methods driven by an objective function
Ward's Method uses squared error

100
How to Define Inter-Cluster Similarity

[Figure: proximity matrix over points p1, p2, p3, ...]

MIN
MAX
Group Average: the average of the pairwise proximities of all pairs of points from different clusters
Distance Between Centroids
Other methods driven by an objective function
Ward's Method uses squared error

101
How to Define Inter-Cluster Similarity

[Figure: proximity matrix over points p1, p2, p3, ...]

MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
Ward's Method uses squared error: like distance between centroids, but it measures the proximity between two clusters in terms of the increase in the SSE that results from merging the two clusters.
102
103
Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest) points in the different clusters
Determined by one pair of points, i.e., by one link in the proximity graph.

104
Hierarchical Clustering: MIN

Nested Clusters Dendrogram

105
Strength of MIN

Original Points Two Clusters

Can handle non-elliptical shapes

106
Limitations of MIN

Original Points Two Clusters

Sensitive to noise and outliers

107
Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
Determined by all pairs of points in the two clusters

108
Hierarchical Clustering: MAX

Nested Clusters Dendrogram

109
Strength of MAX

Original Points Two Clusters

Less susceptible to noise and outliers

110
Limitations of MAX

Original Points Two Clusters

Tends to break large clusters

111
Cluster Similarity: Group Average

Proximity of two clusters is the average of the pairwise proximities between points in the two clusters.

112
Hierarchical Clustering: Group Average

Nested Clusters Dendrogram

113
Hierarchical Clustering: Group Average

Compromise between single link and complete link

Strengths
Less susceptible to noise and outliers

114
Cluster Similarity: Wards Method

Similarity of two clusters is based on the increase in squared error when two clusters are merged
Similar to group average if the distance between points is the squared distance

Less susceptible to noise and outliers

Hierarchical analogue of K-means
Can be used to initialize K-means

115
116
Hierarchical Clustering: Time and Space requirements

O(N^2) space, since it uses the proximity matrix.
N is the number of points.

O(N^3) time in many cases
There are N steps, and at each step a proximity matrix of size N^2 must be updated and searched.

117
Hierarchical Clustering: Problems and Limitations

Once a decision is made to combine two clusters, it cannot be undone

No objective function is directly minimized

Different schemes have problems with one or more of the following:
Sensitivity to noise and outliers
Difficulty handling different-sized clusters and convex shapes
Breaking large clusters

118
Cluster Validity

119
Cluster Validity

For supervised classification we have a variety of measures to evaluate how good our model is
Accuracy, precision, recall
For cluster analysis, the analogous question is how to evaluate the 'goodness' of the resulting clusters.
Each clustering algorithm defines its own type of cluster, so it may seem that each situation might require a different evaluation measure.
Then why do we want to evaluate them?
To avoid finding patterns in noise
To compare clustering algorithms
To compare two sets of clusters
To compare two clusters

120
Clusters found in Random Data

[Figure: clusters found in random data — the random points, and the clusters reported by DBSCAN, K-means, and complete link]

121
Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
   - Uses only the data
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the 'correct' number of clusters.

For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

122
Some Challenges for Numerical Measure

A measure of cluster validity may be quite limited in the scope of its applicability
Most work on measures of clustering tendency has been done for two- or three-dimensional spatial data.
A framework is needed to interpret any measure, i.e., to decide whether a given value is good or poor.
If a measure is too complicated to apply or to understand, then few will use it.

123
Measures of Cluster Validity

Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types:
External Index (Supervised): Used to measure the extent to which cluster labels match externally supplied class labels.
Example: entropy
Internal Index (Unsupervised): Used to measure the goodness of a clustering structure without respect to external information.
Example: Sum of Squared Error (SSE) (see the next two pages)
Relative Index: Used to compare two different clusterings or clusters.
Often an external or internal index is used for this purpose, e.g., SSE or entropy
124
Internal Measures: Cohesion and Separation

Cluster Cohesion: Measures how closely related the objects in a cluster are
Example: SSE
Cluster Separation: Measures how distinct or well-separated a cluster is from other clusters
Example: squared error

Cohesion is measured by the within-cluster sum of squares (SSE):
WSS = Σ_i Σ_{x in C_i} (x - m_i)^2
Separation is measured by the between-cluster sum of squares:
BSS = Σ_i |C_i| (m - m_i)^2
where |C_i| is the size of cluster i, m_i is its centroid, and m is the overall mean


125
Internal Measures: Cohesion and Separation

Example: SSE, with points 1, 2, 4, 5 on a line
BSS + WSS = constant

[Figure: number line with points 1, 2, 4, 5; cluster centroids m1 = 1.5 and m2 = 4.5; overall mean m = 3]

K=1 cluster:
WSS = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 10
BSS = 4 x (3-3)^2 = 0
Total = 10

K=2 clusters ({1,2} and {4,5}):
WSS = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1
BSS = 2 x (3-1.5)^2 + 2 x (4.5-3)^2 = 9
Total = 10
126
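A quick numerical check of BSS + WSS = constant: the sketch below computes WSS and BSS for the one-dimensional points used in the reconstructed example above (the exact points are an assumption about the original figure) and shows that their sum equals the total sum of squares for both K = 1 and K = 2.

```python
import numpy as np

def wss_bss(X, labels):
    """Within-cluster (WSS) and between-cluster (BSS) sums of squares."""
    m = X.mean(axis=0)                               # overall mean
    wss = bss = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        m_i = pts.mean(axis=0)                       # cluster centroid
        wss += ((pts - m_i) ** 2).sum()
        bss += len(pts) * ((m_i - m) ** 2).sum()
    return wss, bss

X = np.array([[1.0], [2.0], [4.0], [5.0]])
tss = ((X - X.mean(axis=0)) ** 2).sum()              # total sum of squares = 10

print(wss_bss(X, np.array([0, 0, 0, 0])))            # K=1: (10.0, 0.0)
print(wss_bss(X, np.array([0, 0, 1, 1])))            # K=2: (1.0, 9.0)
print(tss)                                           # WSS + BSS = TSS in both cases
```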
Internal Measures: SSE
Clusters in more complicated figures aren't well separated
Internal Index: Used to measure the goodness of a clustering structure without respect to external information
SSE
SSE is good for comparing two clusterings or two clusters (average SSE).
SSE can also be used to estimate the number of clusters

127
Internal Measures: SSE

SSE curve for a more complicated data set

SSE of clusters found using K-means

128
Unsupervised Cluster Evaluation Using
Cohesion and Separation

Most measures are based on cohesion or separation.
Two different kinds of cluster validity measure:
Prototype-based clustering technique
Graph-based clustering technique
General form for measuring cluster validity:
overall validity = Σ_{i=1..K} w_i · validity(C_i)

129
Graph-Based View of Cohesion and
Separation
The cohesion of a cluster can be defined as the
sum of the weights of the links in the proximity
graph that connect points within the cluster.
The separation between two clusters can be
measured by the sum of the weights of the links
from points in one cluster to points in the other
cluster.

130
[Figure: cohesion — links within a cluster; separation — links between two clusters]
131
Prototype-Based View of Cohesion and
Separation
The cohesion of a cluster can be defined as the sum of the
proximities with respect to the prototype (centroid or
medoid).
The separation between two clusters can be measured by
the proximity of the two cluster prototypes.

132
Both the cluster cohesion measure and the separation measure can be combined into an overall measure, e.g., as a weighted sum over the individual clusters (refer to Table 8.6).

A popular method, the silhouette coefficient, combines both cohesion and separation (see the next page).
133
The Silhouette Coefficient

The silhouette coefficient combines ideas of both cohesion and separation, for individual points as well as for clusters and clusterings
For an individual point i:
Calculate a = the average distance of i to the points in its own cluster
Calculate b = min (average distance of i to the points in another cluster), taken over the other clusters
The silhouette coefficient for a point is then given by
s = (b - a) / max(a, b)   (a < b is the desirable case)

Typically between -1 and 1.
A negative value is undesirable because it corresponds to a case in which a is greater than b.
The closer to 1, the better.

Can calculate the average silhouette width for all the points in a cluster or clustering.

134
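The a/b definition translates directly into code. Below is a minimal Python sketch of the silhouette coefficient computed per point, and the average silhouette width, on made-up 2-D data (the data and cluster labels are assumptions for illustration).

```python
import numpy as np

def silhouette(X, labels):
    """Silhouette coefficient s = (b - a) / max(a, b) for every point.
    (Singleton clusters are not handled in this sketch.)"""
    s = np.zeros(len(X))
    for i in range(len(X)):
        same = (labels == labels[i])
        same[i] = False                   # exclude the point itself
        # a: average distance of point i to the other points in its own cluster
        a = np.linalg.norm(X[same] - X[i], axis=1).mean()
        # b: minimum over other clusters of the average distance to that cluster
        b = min(np.linalg.norm(X[labels == c] - X[i], axis=1).mean()
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s

X = np.array([[0.0, 0.0], [0.1, 0.2], [0.3, 0.1],
              [4.0, 4.0], [4.2, 3.9], [3.9, 4.3]])
labels = np.array([0, 0, 0, 1, 1, 1])
s = silhouette(X, labels)
print(s)            # values close to 1: tight, well-separated clusters
print(s.mean())     # average silhouette width of the clustering
```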
Determining the Correct Number of
Clusters
It is necessary to use some measures to
determine the number of clusters

135
Wilks' Lambda value is another choice for determining the number of clusters.

ART2 vigilance parameter value   Number of clusters   Wilks' Lambda value   Difference
0.96       80    0.49511    0.0486
0.954      70    0.54374    0.0445
0.9494     60    0.58824    0.0655
0.9491     59    0.5899     0.0160
0.948      57    0.6059     0.0323
0.9461     55    0.63821    0.0012
0.9445     53    0.63936    0.0044
0.943      51    0.6438
0.9422     50    0.65376    0.0383
0.934      40    0.69201    0.0493
0.91851    30    0.74127
136
Clustering Tendency

Two ways of measuring clustering tendency:
Using clustering algorithms: use multiple algorithms and again evaluate the quality of the resulting clusters. If the clusters are uniformly poor, this may indeed indicate that there are no clusters in the data.
Using statistical tests, like the Hopkins statistic.

137
Supervised Measures of Cluster Validity

Classification-oriented measures
Similarity-oriented measures
(See the next pages)

138
Classification-Oriented Measure of
Cluster Validity
Measure the degree to which predicted class
labels correspond to actual class labels.
Entropy
Purity
Precision
Recall
F-measure (Combination of Precision and Recall)

139
External Measures of Cluster Validity: Entropy and Purity

140
Similarity-Oriented Measure of Cluster
Validity
Compare two matrices:
The ideal cluster similarity matrix, which has a 1 in the ijth entry if two objects i and j are in the same cluster, and 0 otherwise.
The ideal class similarity matrix, which has a 1 in the ijth entry if two objects i and j belong to the same class, and 0 otherwise.

Ideal cluster similarity matrix (clustered results):
Point  p1  p2  p3  p4  p5
p1      1   1   1   0   0
p2      1   1   1   0   0
p3      1   1   1   0   0
p4      0   0   0   1   1
p5      0   0   0   1   1

Ideal class similarity matrix (actual results):
Point  p1  p2  p3  p4  p5
p1      1   1   0   0   0
p2      1   1   0   0   0
p3      0   0   1   1   1
p4      0   0   1   1   1
p5      0   0   1   1   1

Correlation between the two matrices = 0.359
141
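The correlation of 0.359 can be reproduced by flattening the two 5x5 ideal similarity matrices and computing their Pearson correlation, as in this small illustrative check:

```python
import numpy as np

# Ideal cluster similarity matrix (clusters {p1,p2,p3} and {p4,p5})
cluster = np.array([[1, 1, 1, 0, 0],
                    [1, 1, 1, 0, 0],
                    [1, 1, 1, 0, 0],
                    [0, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1]], dtype=float)

# Ideal class similarity matrix (classes {p1,p2} and {p3,p4,p5})
klass = np.array([[1, 1, 0, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)

# Pearson correlation of the flattened matrices
r = np.corrcoef(cluster.ravel(), klass.ravel())[0, 1]
print(round(r, 3))   # 0.359
```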
Final Comment on Cluster Validity

"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage."

Algorithms for Clustering Data, Jain and Dubes

142
THE END!

143
