NN - Lecture of W7 (09.04)

Uploaded by Amara Costachiu

NN – lecture 7 (09.04)

Clustering is unsupervised learning

We calculate the mean of each cluster and assign each point to the cluster whose mean is closest

Competitive Learning?

In unsupervised learning, we don’t know anything
about the patterns; the data is just plain samples, with no labels. We
don’t know if all inputs are necessary
Figure 1 (clustering illustration)

If we set k = 3, the algorithm divides the data into 3 clusters

If we want 5 sizes, we divide into 5 (XS, S, M, L, XL)
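The clustering idea above can be sketched in a few lines of NumPy (a minimal illustration, not code from the lecture; the function name and defaults are mine):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign every point to its nearest mean,
    then recompute each mean; repeat."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance of every point to every center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # the closest mean "wins" the point
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters where they are
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

With k = 3 this splits the data into 3 clusters; with k = 5 you get the five "sizes" from the example above.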

More important application: Image Compression

BMP image: bit-map

How many bytes for a pixel? 3 (one each for R, G, B)

That gives 2^24 = 16 million colours, far more than we can usefully
distinguish, so the colour space is used sparsely => sparsity

When R = G = B => grey

So we can’t keep all the greys

Why not have just 256 colours in total? (then 1 byte per pixel would suffice)

We take the pixels of the image and represent their colours as points in the colour
space

We’ll have at most 1 million points (because the image may have that many pixels). Then,
we do clustering

Then, for each pixel, we store the index of the closest cluster in the map, so
instead of 3 bytes per pixel we need only 1.

 now the image fits in about 1 MB (plus the small table of cluster colours)


 This is called smart sub-quantization (a form of vector quantization of the colour space)
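The encode/decode side of the scheme above can be sketched like this (my own minimal illustration; the palette itself would come from running clustering with k = 256 on the pixel colours):

```python
import numpy as np

def quantize(pixels, palette):
    """Map each RGB pixel to the index of its nearest palette colour,
    so the stored image needs 1 byte per pixel plus a small palette table."""
    d = np.linalg.norm(pixels[:, None, :].astype(float)
                       - palette[None, :, :].astype(float), axis=2)
    return d.argmin(axis=1).astype(np.uint8)   # 256 clusters fit in one byte

def reconstruct(indices, palette):
    """Decode: look each stored index back up in the palette."""
    return palette[indices]
```

The reconstructed image is only an approximation: every pixel is replaced by the centre of its colour cluster, which is exactly the lossy step that buys the 3-to-1 compression.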
Competitive Learning
 Unsupervised learning
 Nodes compete with each other in order to represent the data
 All the data which is represented by a node is called a cluster
 Each neuron wants to be responsible for a cluster; the neurons will try to
point at the centres of the clusters
 Competition of the “winner takes all” type
 We take points from the data; all neurons compete to
represent each point, and one wins
 That point is added to the winner’s cluster
 We take input data, see which neuron is the winner, and then
update just that one neuron
 Neurons become “feature detectors”

 To pick the winner we look at the angle α between a weight vector w and the input x
 w·x = |w||x| cos α, which is largest when α = 0 (the weights point straight at the input)
 Worst scenario is when α = π/2 (the dot product is 0)
 So we take the dot product between each weight vector and the input

The winner’s weights are updated as w ← w + α(x − w), where α is now the learning rate (the same symbol, but not the angle above):

if α = 0 => we have no learning

α = 1 => w + (x − w) = x => the winner jumps exactly onto the data point

α = 0.5 => the winner moves halfway toward the data point
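One winner-take-all step, combining the dot-product winner selection and the update rule above, can be sketched as (a minimal sketch; the function name and array shapes are my own assumptions):

```python
import numpy as np

def competitive_step(W, x, lr):
    """One winner-take-all step: the neuron whose (normalized) weight vector
    has the largest dot product with the input x wins, and only that
    neuron's weights move toward x by a fraction lr."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # normalization cancels the length advantage
    winner = int((Wn @ x).argmax())
    W[winner] += lr * (x - W[winner])   # lr = 0: no learning; lr = 1: jump onto x
    return winner
```

Note the normalization before picking the winner: without it, a longer weight vector would win on size alone rather than on direction, which is exactly the problem the notes raise next.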

We still have problems:

1. The size of the weight vectors: a longer vector gets a larger dot product regardless of direction.
For this reason, we normalize the weight vectors, to cancel the advantage of being
larger/longer
Kohonen says plain competition is not good enough

For SOM (Self organizing maps):


 The winner does not take it all
 Its neighbours also get updated (by smaller amounts)
 Improvement over competitive learning
 Competitive learning is a particular case of SOM (where the
neighbourhood shrinks to the winner alone)
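One SOM update step under the bullet points above can be sketched as follows (my own illustration; the Gaussian neighbourhood function is one common choice, not necessarily the one from the lecture):

```python
import numpy as np

def som_step(W, grid, x, lr, sigma):
    """One SOM step: the best-matching unit (BMU) wins, and every neuron is
    updated by an amount that decays with its distance to the BMU on the map.
    As sigma -> 0 only the BMU moves, which is plain competitive learning."""
    bmu = int(np.linalg.norm(W - x, axis=1).argmin())
    grid_d = np.linalg.norm(grid - grid[bmu], axis=1)   # distance on the neuron grid
    h = np.exp(-grid_d**2 / (2 * sigma**2))             # neighbourhood strength (1 at the BMU)
    W += lr * h[:, None] * (x - W)                      # neighbours move by smaller amounts
    return bmu
```

Here `grid` holds each neuron's coordinates on the map (e.g. a 1-D or 2-D lattice), which is separate from its weight vector in input space; the neighbourhood is measured on the map, not in the data.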
