
(1)

Lecture Eight

Cluster Analysis: Basic Concepts and Algorithms

Dr. Hidayet Takçı

(2)
(3)

What Is Cluster Analysis?

Given a set of data points, each described by a set of attributes, and a similarity measure defined between points, the goal of clustering is to find clusters with the following properties:

Inter-cluster distances are maximized

Intra-cluster distances are minimized

(4)

What Cluster Analysis Is Not

Supervised classification
– Uses class label information

Simple segmentation
– Dividing records into different groups alphabetically by last name

Results of a query
– Groupings obtained according to an externally specified condition

(5)

Types of Clusterings

A clustering is a set of clusters.

There is an important distinction between partitional and hierarchical clusterings.

Partitional Clustering
– A division of data objects into non-overlapping subsets; each data object is in exactly one subset.

Hierarchical Clustering
– A set of nested clusters organized as a hierarchical tree.

(6)

Partitional Clustering

(7)

Hierarchical Clustering

[Figure: a traditional hierarchical clustering of points p1–p4 with its traditional dendrogram, and a non-traditional hierarchical clustering with its non-traditional dendrogram]

(8)

Other Distinctions Between Sets of Clusters

Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to more than one cluster.
– Can represent multiple classes or 'border' points.

Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with a weight between 0 and 1.
– The weights must sum to 1.
– Probabilistic clustering has similar characteristics.

Partial versus complete
– In some cases we only want to cluster a portion of the data.

Heterogeneous versus homogeneous

(9)

Types of Clusters

Well-separated clusters

Center-based clusters

Contiguous clusters

Density-based clusters

Property or conceptual clusters

Clusters described by an objective function

(10)

Types of Clusters: Well-Separated

Well-separated clusters:
– Each point is closer to the other points in its own cluster than to any point in another cluster. Such clusters are well-separated clusters.

(11)

Types of Clusters: Center-Based

Center-based
– A cluster is center-based if each point in the cluster is closer (or more similar) to the center of its own cluster than to the centers of the other clusters.
– The center of a cluster is often either a centroid, the average of all the points in the cluster, or a medoid, the most representative point of the cluster.

(12)

Types of Clusters: Contiguity-Based

Contiguous cluster (nearest neighbor or transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

(13)

Types of Clusters: Density-Based

Density-based
– A cluster is a dense region of points, separated from regions of lower density.
– Used when the clusters are irregular, intertwined, or when noise is present.

(14)

Types of Clusters: Conceptual Clusters

Shared-property or conceptual clusters
– Points in a cluster share a common property or represent a particular concept.

(15)

Types of Clusters: Objective Function

Clusters defined by an objective function
– Find clusters that minimize or maximize an objective function.
– Enumerate all possible ways of partitioning the points into clusters and evaluate the goodness of each potential set of clusters with the given objective function (NP-hard).
– Hierarchical clustering algorithms typically have local objective functions; partitional algorithms typically have global objective functions.
– A variation of the global objective-function approach is to fit the data to a parameterized model. The parameters of the model are determined from the data; mixture models treat the data as a mixture of several statistical distributions.

(16)

Clustering Algorithms

K-means and its variants

Hierarchical clustering

(17)

K-means Clustering

A partitional clustering approach

Each cluster is associated with a centroid (center point)

Each point is assigned to the cluster with the closest centroid

The number of clusters, K, must be specified

The basic algorithm is very simple
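As an illustrative addition (not part of the original slides), the basic loop can be sketched in a few lines of Python/NumPy: pick K initial centroids at random, assign each point to its nearest centroid, recompute the centroids, and repeat until they stop moving.

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal K-means (Lloyd's algorithm) sketch; points is an (n, d) array."""
    rng = np.random.default_rng(seed)
    # Initial centroids are chosen randomly from the data points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign each point to the closest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of the points assigned to it.
        new_centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # centroids no longer change
        centroids = new_centroids
    return labels, centroids

# Example usage on a small synthetic data set (two well-separated blobs):
data = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(data, k=2)
```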

(18)

K-means Clustering – Details

Initial centroids are often chosen randomly.
– The clusters produced can vary from one run to the next.

The centroid is typically the mean of the points in the cluster.

'Closeness' can be measured by Euclidean distance, cosine similarity, correlation, etc.

K-means converges for these common similarity measures; the iterations continue until the centroids no longer change.

Complexity is O( n * K * I * d )
– n = number of points, K = number of clusters, I = number of iterations, d = number of attributes

(19)

Importance of Choosing Initial Centroids

[Figure: six scatter-plot panels (Iteration 1–6) showing how the centroids move and the cluster assignments change over successive K-means iterations]

(20)
(21)
(22)

Importance of Choosing Initial Centroids

[Figure: six scatter-plot panels (Iteration 1–6) showing the K-means iterations for another choice of initial centroids]

(23)

Evaluating K-means Clusterings

The most common measure is the Sum of Squared Error (SSE)
– For each point, the error is its distance to the nearest cluster centroid.
– To compute the SSE, we square these errors and sum them:

SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)

– x is a data point in cluster C_i and m_i is the representative point (centroid) of cluster C_i.
– Given two clusterings, we compute their SSE values and choose the one with the smaller SSE.
– One easy way to reduce the SSE is to increase K, the number of clusters.

A good clustering with a smaller K can have a lower SSE than a poor clustering with a higher K.
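As a small illustration (not from the slides), the SSE can be computed directly from this definition; the helper below assumes the labels and centroids produced by the kmeans sketch shown earlier.

```python
import numpy as np

def sse(points, labels, centroids):
    """Sum of squared distances of every point to its assigned centroid."""
    diffs = points - centroids[labels]   # vector from each point to its own centroid
    return float(np.sum(diffs ** 2))     # sum of squared errors over all points

# e.g. sse(data, labels, centers) for the output of the kmeans() sketch above
```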

(24)

Handling Empty Clusters

The basic K-means algorithm can produce empty clusters.

Several strategies
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be repeated several times

(25)

Updating Centers Incrementally

In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid.

An alternative is to update the centroids after each assignment (incremental approach)
– Each assignment updates zero or two centroids
– More expensive
– Introduces an order dependency
– Never get an empty cluster

(26)

Pre-processing and Post-processing

Pre-processing
– Normalize the data
– Eliminate outliers

Post-processing
–
– Split 'loose' clusters, i.e., clusters with relatively high SSE
– Merge clusters that are 'close' and that have relatively low SSE
– Can use these steps during the clustering process

(27)

Bisecting K-means

Bisecting K-means algorithm
– A variant of K-means that can produce a partitional or a hierarchical clustering
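A rough sketch of the bisecting idea (my own illustration, not from the slides): repeatedly pick a cluster and split it in two with ordinary K-means until the desired number of clusters is reached. Which cluster to split (largest, or highest SSE) is a design choice; the sketch below splits the cluster with the highest SSE and reuses the kmeans helper defined earlier, omitting refinements such as multiple trial bisections.

```python
import numpy as np

def bisecting_kmeans(points, k):
    """Repeatedly bisect one cluster with 2-means until k clusters remain."""
    clusters = [points]                      # start with one, all-inclusive cluster
    while len(clusters) < k:
        # Pick the cluster with the highest SSE around its own centroid.
        errors = [((c - c.mean(axis=0)) ** 2).sum() for c in clusters]
        worst = clusters.pop(int(np.argmax(errors)))
        # Bisect it with ordinary K-means (K = 2) and keep both halves.
        labels, _ = kmeans(worst, k=2)
        clusters.extend([worst[labels == 0], worst[labels == 1]])
    return clusters
```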

(28)
(29)

Limitations of K-means

K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes

K-means has problems when the data contains outliers.

(30)

Limitations of K-means: Differing Sizes

(31)

Limitations of K-means: Differing Density

(32)

Limitations of K-means: Non-globular Shapes

(33)

Overcoming K-means Limitations

Original Points

K-means Clusters

One solution is to use many clusters.

(34)

Overcoming K-means Limitations

(35)

Overcoming K-means Limitations

(36)

Hierarchical Clustering

Produces a set of nested clusters organized as a hierarchical tree

Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits

[Figure: a nested clustering of points 1–6 and the corresponding dendrogram]

(37)

Strengths of Hierarchical Clustering

Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by 'cutting' the dendrogram at the proper level
– Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

(38)

Hierarchical Clustering

Two main types of hierarchical clustering
– Agglomerative:
  Start with the points as individual clusters
  At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive:
  Start with one, all-inclusive cluster
  At each step, split a cluster until each cluster contains a point (or there are k clusters)

Traditional hierarchical algorithms use a similarity or distance matrix

(39)

 Agglomerative Clustering Algorithm

More popular hierarchical clustering technique

Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains

Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms
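For illustration (an addition, not from the slides), the agglomerative procedure is available off the shelf; a typical sketch with SciPy, assuming a small 2-D data set, looks like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

data = np.random.rand(20, 2)              # 20 random 2-D points as a stand-in data set

# 'single', 'complete', 'average', and 'ward' correspond to the MIN, MAX,
# group-average, and Ward's-method proximity definitions discussed below.
Z = linkage(data, method='average')

labels = fcluster(Z, t=3, criterion='maxclust')   # cut the dendrogram into 3 clusters
# dendrogram(Z) would draw the merge tree with matplotlib, if desired.
```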

(40)

Starting Situation

Start with clusters of individual points and a proximity matrix.

[Figure: points p1–p5, … as singleton clusters, together with their proximity matrix]

(41)

Intermediate Situation

After some merging steps, we have some clusters

[Figure: clusters C1–C5 and their proximity matrix]

(42)

Intermediate Situation

We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1–C5 and their proximity matrix, with C2 and C5 about to be merged]

(43)

 After Merging

The question is “How do we update the proximity matrix?”

[Figure: the proximity matrix after merging C2 and C5; the entries between the new cluster C2 ∪ C5 and C1, C3, C4 are marked '?']

(44)

How to Define Inter-Cluster Similarity

[Figure: two clusters of points p1–p5 with their proximity matrix; how should the similarity between the clusters be defined?]

MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
– Ward's Method uses squared error

(45)

How to Define Inter-Cluster Similarity

[Figure: the same points, proximity matrix, and list of approaches as on the previous slide]

(46)

How to Define Inter-Cluster Similarity

[Figure: the same points, proximity matrix, and list of approaches as on the previous slide]

(47)

How to Define Inter-Cluster Similarity

[Figure: the same points, proximity matrix, and list of approaches as on the previous slide]

(48)

How to Define Inter-Cluster Similarity

[Figure: the same diagram with the cluster centroids marked ×, and the same list of approaches]

(49)

Cluster Similarity: MIN or Single Link 

Similarity of two clusters is based on the two most similar (closest) points in the different clusters
– Determined by one pair of points, i.e., by one link in the proximity graph.

      I1    I2    I3    I4    I5
I1   1.00  0.90  0.10  0.65  0.20
I2   0.90  1.00  0.70  0.60  0.50
I3   0.10  0.70  1.00  0.40  0.30
I4   0.65  0.60  0.40  1.00  0.80
I5   0.20  0.50  0.30  0.80  1.00

(50)

Hierarchical Clustering: MIN

[Figure: nested clusters and dendrogram produced by MIN (single link) on points 1–6]

(51)

Strength of MIN

[Figure: Original Points and the Two Clusters found by MIN]

(52)

Limitations of MIN

Original Points

Two Clusters

(53)

Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
– Determined by all pairs of points in the two clusters

      I1    I2    I3    I4    I5
I1   1.00  0.90  0.10  0.65  0.20
I2   0.90  1.00  0.70  0.60  0.50
I3   0.10  0.70  1.00  0.40  0.30
I4   0.65  0.60  0.40  1.00  0.80
I5   0.20  0.50  0.30  0.80  1.00

(54)

Hierarchical Clustering: MAX 

[Figure: nested clusters and dendrogram produced by MAX (complete link) on points 1–6]

(55)

Strength of MAX 

[Figure: Original Points and the Two Clusters found by MAX]

(56)

Limitations of MAX 

Original Points

Two Clusters

Tends to break large clusters

(57)

Cluster Similarity: Group Average

Proximity of two clusters is the average of the pairwise proximities between points in the two clusters:

proximity(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} proximity(p_i, p_j)}{|Cluster_i| \times |Cluster_j|}

Need to use average connectivity for scalability, since total proximity favors large clusters.

      I1    I2    I3    I4    I5
I1   1.00  0.90  0.10  0.65  0.20
I2   0.90  1.00  0.70  0.60  0.50
I3   0.10  0.70  1.00  0.40  0.30
I4   0.65  0.60  0.40  1.00  0.80
I5   0.20  0.50  0.30  0.80  1.00
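As a small illustration (not part of the slides), the group-average proximity of two clusters can be computed directly from a pairwise proximity function; the sketch below assumes Euclidean distance as the proximity and NumPy arrays of points.

```python
import numpy as np

def group_average_proximity(cluster_i, cluster_j):
    """Average pairwise distance between the points of two clusters ((n_i, d) and (n_j, d) arrays)."""
    # All pairwise Euclidean distances between the two clusters.
    pairwise = np.linalg.norm(cluster_i[:, None, :] - cluster_j[None, :, :], axis=2)
    # Divide the total proximity by |Cluster_i| * |Cluster_j| so large clusters are not favored.
    return pairwise.sum() / (len(cluster_i) * len(cluster_j))
```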

(58)

Hierarchical Clustering: Group Average

[Figure: nested clusters and dendrogram produced by group average on points 1–6]

(59)

Hierarchical Clustering: Group Average

Compromise between Single and Complete Link

Strengths
– Less susceptible to noise and outliers

Limitations
–

(60)

Cluster Similarity: Ward’s Method

Similarity of two clusters is based on the increase in squared error when two clusters are merged
– Similar to group average if the distance between points is distance squared

– Less susceptible to noise and outliers
– Biased towards globular clusters

Hierarchical analogue of K-means

(61)

Hierarchical Clustering: Comparison

[Figure: the clusterings of points 1–6 produced by MIN, MAX, Group Average, and Ward's Method, shown side by side]

(62)
(63)

Hierarchical Clustering: Time and Space requirements

O(N²) space, since it uses the proximity matrix.
– N is the number of points.

O(N³) time in many cases
– There are N steps, and at each step the proximity matrix of size N² must be updated and searched.
– Complexity can be reduced to O(N² log(N)) time for some approaches.

(64)

Hierarchical Clustering: Problems and Limitations

Once a decision is made to combine two clusters, it cannot be undone.

No objective function is directly minimized.

Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling different sized clusters and convex shapes

(65)

MST: Divisive Hierarchical Clustering

Build MST (Minimum Spanning Tree)
– Start with a tree that consists of any point
– In successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not
– Add q to the tree and put an edge between p and q

(66)

MST: Divisive Hierarchical Clustering

(67)

DBSCAN

DBSCAN is a density-based algorithm.
– Density = number of points within a specified radius (Eps)
– A point is a core point if it has more than a specified number of points (MinPts) within Eps. These are points that are at the interior of a cluster.
– A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point.
– A noise point is any point that is not a core point or a border point.
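For illustration (not part of the original slides), a typical DBSCAN call with scikit-learn looks like the sketch below; the eps and min_samples values are placeholders that would need tuning for a real data set.

```python
import numpy as np
from sklearn.cluster import DBSCAN

data = np.random.rand(200, 2)                 # stand-in 2-D data set

# eps plays the role of Eps and min_samples the role of MinPts from the slide.
db = DBSCAN(eps=0.1, min_samples=4).fit(data)

labels = db.labels_                  # cluster index per point; -1 marks noise points
core_idx = db.core_sample_indices_   # indices of the core points
```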

(68)
(69)

DBSCAN Algorithm

Eliminate noise points

(70)

DBSCAN: Core, Border and Noise Points

Original Points

Point types: core, border, and noise

Eps = 10, MinPts = 4

(71)

When DBSCAN Works Well

Original Points

Clusters

Resistant to Noise

(72)

When DBSCAN Does NOT Work Well

Original Points

(MinPts=4, Eps=9.75).

Varying densities

(73)

DBSCAN: Determining EPS and MinPts

Idea is that for points in a cluster, their k-th nearest neighbors are at roughly the same distance.

Noise points have their k-th nearest neighbor at a farther distance.

So, plot the sorted distance of every point to its k-th nearest neighbor.
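A common way to produce this plot (an illustrative addition, assuming scikit-learn and matplotlib are available) is to compute each point's distance to its k-th nearest neighbor and sort the values; the knee of the resulting curve suggests a value for Eps.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

k = 4                                         # corresponds to MinPts
data = np.random.rand(200, 2)                 # stand-in data set

# k+1 neighbors because each point is its own nearest neighbor at distance 0.
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(data)
distances, _ = nbrs.kneighbors(data)

k_dist = np.sort(distances[:, k])             # distance to the k-th nearest neighbor
plt.plot(k_dist)                              # the 'knee' of this curve suggests Eps
plt.xlabel('Points sorted by k-dist')
plt.ylabel(f'{k}-th nearest neighbor distance')
plt.show()
```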

(74)

Cluster Validity

For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall

For cluster analysis, the analogous question is how to evaluate the "goodness" of the resulting clusters.

But "clusters are in the eye of the beholder"!

Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters

(75)

Clusters found in Random Data

[Figure: four panels of the same 100 random points: Random Points, and the clusters found by DBSCAN, K-means, and Complete Link]

(76)
(77)

Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
   – Use only the data
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the 'correct' number of clusters.

For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

(78)

Measures of Cluster Validity

Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types.
– External Index: Used to measure the extent to which cluster labels match externally supplied class labels.
  Entropy
– Internal Index: Used to measure the goodness of a clustering structure without respect to external information.
  Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or clusters.
  Often an external or internal index is used for this function, e.g., SSE or entropy.

Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the numerical measure that implements the criterion.

(79)

Measuring Cluster Validity Via Correlation

Two matrices
– Proximity Matrix
– "Incidence" Matrix
  One row and one column for each data point
  An entry is 1 if the associated pair of points belong to the same cluster
  An entry is 0 if the associated pair of points belongs to different clusters

Compute the correlation between the two matrices.
– Since the matrices are symmetric, only the correlation between n(n-1)/2 entries needs to be calculated.

High correlation indicates that points that belong to the same cluster are close to each other.

Not a good measure for some density or contiguity based clusters.
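A compact illustration of this idea (my own, not from the slides): build the incidence matrix from the cluster labels, build the proximity (distance) matrix from the points, and correlate the entries above the diagonal.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def incidence_proximity_correlation(points, labels):
    """Correlation between the incidence matrix and the proximity (distance) matrix."""
    n = len(points)
    proximity = squareform(pdist(points))                            # n x n distance matrix
    incidence = (labels[:, None] == labels[None, :]).astype(float)   # 1 if same cluster
    # Only the n(n-1)/2 entries above the diagonal are needed (matrices are symmetric).
    iu = np.triu_indices(n, k=1)
    return np.corrcoef(incidence[iu], proximity[iu])[0, 1]

# With distances as the proximity, a good clustering gives a strongly negative correlation
# (same-cluster pairs have small distances), matching the negative values on the next slide.
```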

(80)

Measuring Cluster Validity Via Correlation

Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.

[Figure: two scatter plots with their K-means clusterings; Corr = -0.9235 and Corr = -0.5810]

(81)

Using Similarity Matrix for Cluster Validation

Order the similarity matrix with respect to cluster labels and inspect visually.

[Figure: a well-clustered data set and its sorted similarity matrix, showing crisp blocks along the diagonal]

(82)

Using Similarity Matrix for Cluster Validation

Clusters in random data are not so crisp

[Figure: random data clustered by DBSCAN and the corresponding sorted similarity matrix]

(83)

Using Similarity Matrix for Cluster Validation

Clusters in random data are not so crisp

[Figure: random data clustered by K-means and the corresponding sorted similarity matrix]

(84)

Using Similarity Matrix for Cluster Validation

Clusters in random data are not so crisp

[Figure: random data clustered by Complete Link and the corresponding sorted similarity matrix]

(85)

Using Similarity Matrix for Cluster Validation

[Figure: DBSCAN clusters on a more complicated data set and the corresponding sorted similarity matrix]

(86)

Internal Measures: SSE

Clusters in more complicated figures aren't well separated.

Internal Index: Used to measure the goodness of a clustering structure without respect to external information
– SSE

SSE is good for comparing two clusterings or two clusters (average SSE).

Can also be used to estimate the number of clusters.

[Figure: a two-dimensional data set and its SSE curve as a function of the number of clusters K]

(87)

Internal Measures: SSE

SSE curve for a more complicated data set

[Figure: the more complicated data set with regions labeled 1–7 and its SSE curve]

(88)

Framework for Cluster Validity

Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?

Statistics provide a framework for cluster validity
– The more "atypical" a clustering result is, the more likely it represents valid structure in the data.
– Can compare the values of an index that result from random data or clusterings to those of a clustering result.
  If the value of the index is unlikely, then the cluster results are valid.
– These approaches are more complicated and harder to understand.

For comparing the results of two different sets of cluster analyses, a framework is less necessary.
– However, there is the question of whether the difference between two index values is significant.

(89)

Statistical Framework for SSE

Example
– Compare SSE of 0.005 against three clusters in random data.
– Histogram shows the SSE of three clusters in 500 sets of random data points of size 100 distributed over the range 0.2 – 0.8 for x and y values.

[Figure: histogram of these SSE values (roughly 0.016 to 0.034) and one of the random data sets]

(90)

Statistical Framework for Correlation

Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.

[Figure: the two data sets with Corr = -0.9235 and Corr = -0.5810]

(91)

Internal Measures: Cohesion and Separation

Cluster Cohesion: Measures how closely related the objects in a cluster are
– Example: SSE

Cluster Separation: Measures how distinct or well-separated a cluster is from other clusters

– Cohesion is measured by the within-cluster sum of squares (SSE):

  WSS = \sum_{i} \sum_{x \in C_i} (x - m_i)^2

– Separation is measured by the between-cluster sum of squares:

  BSS = \sum_{i} |C_i| \, (m - m_i)^2

– where |C_i| is the size of cluster i, m_i is its centroid, and m is the overall mean.

(92)

Internal Measures: Cohesion and Separation

Example: SSE for the points 1, 2, 4, 5 (overall mean m = 3)
– BSS + WSS = constant

K=1 cluster (centroid m = 3):
  WSS = (1-3)² + (2-3)² + (4-3)² + (5-3)² = 10
  BSS = 4 × (3-3)² = 0
  Total = 10 + 0 = 10

K=2 clusters ({1, 2} with centroid m1 = 1.5 and {4, 5} with centroid m2 = 4.5):
  WSS = (1-1.5)² + (2-1.5)² + (4-4.5)² + (5-4.5)² = 1
  BSS = 2 × (3-1.5)² + 2 × (4.5-3)² = 9
  Total = 1 + 9 = 10
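A quick check of these numbers (an illustrative addition, not from the slides) in NumPy:

```python
import numpy as np

points = np.array([1.0, 2.0, 4.0, 5.0])
m = points.mean()                              # overall mean = 3

def wss_bss(clusters):
    """Within- and between-cluster sums of squares for a list of 1-D clusters."""
    wss = sum(((c - c.mean()) ** 2).sum() for c in clusters)
    bss = sum(len(c) * (m - c.mean()) ** 2 for c in clusters)
    return wss, bss

print(wss_bss([points]))                       # K=1: (10.0, 0.0)
print(wss_bss([points[:2], points[2:]]))       # K=2: (1.0, 9.0)
```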

(93)

Internal Measures: Cohesion and Separation

A proximity graph based approach can also be used for cohesion and separation.
– Cluster cohesion is the sum of the weights of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster.

(94)

Silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings.

For an individual point i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for the point is then given by

  s = 1 - a/b   if a < b   (or s = b/a - 1 if a ≥ b, not the usual case)

– Typically between 0 and 1.
– The closer to 1 the better.

Can calculate the average silhouette width for a cluster or a clustering.
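For completeness (an addition, not from the slides), scikit-learn provides this measure directly; the sketch below assumes the data and labels from the earlier K-means example.

```python
from sklearn.metrics import silhouette_score, silhouette_samples

avg_width = silhouette_score(data, labels)     # average silhouette width of the clustering
per_point = silhouette_samples(data, labels)   # silhouette coefficient of each individual point
```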

(95)
