The Self-Organizing Map (SOM) is one of the most popular neural network models. The SOM algorithm is based on unsupervised, competitive learning, and it produces a topology-preserving mapping from the high-dimensional input space onto the map units. As a clustering technique, a SOM helps uncover structure in data: the goal of learning is to cause different parts of the network to respond similarly to the kinds of input vectors expected during mapping.




Each neuron is associated with a vector called the codebook vector, which has the same dimension, n, as the input vectors.

Self-organizing map

The neurons are connected to adjacent neurons by a neighborhood relation. This dictates the topology, or the structure, of the map. Usually, the neurons are connected to each other via rectangular or hexagonal topology.


Figure 2: Different topologies. One can also define a distance between the map units according to their topology relations.
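As a minimal sketch of this idea (the function name is ours): map units can be indexed by their grid coordinates, and the distance between two units measured directly in those coordinates rather than in the input space.

```python
# Hypothetical helper: distance between two map units on a rectangular grid,
# measured in 2-D grid coordinates (row, col), not in the input space.
def grid_distance(a, b):
    """a, b: (row, col) grid coordinates. Returns their Euclidean distance."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```

With this definition, horizontally or vertically adjacent units are at distance 1, while diagonal neighbours sit slightly farther apart.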

At the same time, the new winner will probably be a neighbor of xi, which has already received a partial potentiation and can easily take the place of xi.

In our case, we always refer to a square matrix, where each cell is a receptive neuron characterised by a synaptic weight w with the dimensionality of the input patterns. During both training and working phases, the winning unit is determined according to a similarity measure between a sample and each weight vector.
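The winner selection can be sketched as follows (a minimal formulation, with our own function name, using Euclidean distance as the similarity measure):

```python
import numpy as np

# Illustrative sketch: the winning unit (best-matching unit, BMU) is the
# codebook vector closest to the sample under Euclidean distance.
def best_matching_unit(weights, sample):
    """weights: (rows, cols, dim) array of codebook vectors; sample: (dim,).
    Returns the (row, col) index of the most similar unit."""
    dists = np.linalg.norm(weights - sample, axis=2)  # distance to every unit
    return np.unravel_index(np.argmin(dists), dists.shape)
```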

Therefore, the training process is normally subdivided into two different stages: during the first, the neighborhood radius starts large and gradually shrinks, while during the second stage the radius is set to 1.
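A hedged sketch of such a two-stage schedule (the decay shape and all constants here are our own assumptions; only the second-stage radius of 1 comes from the text):

```python
# Two-stage neighborhood radius schedule: the radius decays from an assumed
# initial value down to 1 during the ordering stage, then stays at 1 during
# the fine-tuning stage. Step counts and the initial radius are illustrative.
def radius_at(step, ordering_steps=1000, initial_radius=4.0):
    if step < ordering_steps:
        # linear decay from initial_radius toward 1 over the ordering stage
        return 1.0 + (initial_radius - 1.0) * (1.0 - step / ordering_steps)
    return 1.0  # second stage: radius fixed at 1
```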

Comparison with principal component analysis (PCA): the SOM is shown as a red broken line with squares (20 nodes), the first principal component as a blue line, and the data points as small grey circles.

Because in the training phase the weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons.
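The neighborhood update described above can be sketched as one step (a minimal formulation of our own, using a Gaussian neighborhood function and assumed learning-rate and radius values):

```python
import numpy as np

# Sketch of one SOM update step: every unit is pulled toward the sample,
# scaled by a Gaussian of its squared grid distance to the winning unit (BMU),
# so the whole neighborhood moves in the same direction.
def som_update(weights, sample, bmu, lr=0.5, radius=1.0):
    rows, cols, _ = weights.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2          # squared grid distance
    h = np.exp(-d2 / (2 * radius ** 2))                 # neighborhood function
    weights += lr * h[:, :, None] * (sample - weights)  # move toward sample
    return weights
```

The BMU itself moves the most (h = 1 there), and the pull falls off smoothly with grid distance.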

Data Generation

We picked two colors, yellow and green, around which to generate random samples to form two clusters. We can visualize our color clusters using the blue and green values, which are the dimensions along which the clusters are most differentiated.
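The data-generation step can be sketched like this (the exact color centres, noise scale, and sample counts used originally are not given, so the values below are assumptions):

```python
import numpy as np

# Hedged reconstruction: sample RGB points around two reference colors
# (yellow and green) to form two clusters. Centres, noise scale (std = 20),
# and 100 samples per cluster are all illustrative choices.
rng = np.random.default_rng(0)
yellow = np.array([255.0, 255.0, 0.0])
green = np.array([0.0, 128.0, 0.0])
samples = np.vstack([
    yellow + rng.normal(0, 20, size=(100, 3)),  # cluster around yellow
    green + rng.normal(0, 20, size=(100, 3)),   # cluster around green
])
```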


Two color clusters of yellow and green, in 3D and 2D spaces. We used an 8 x 8 rectangular grid, so there were 64 neurons in total. Initially, neurons in the SOM grid start out in random positions, but they are gradually massaged into a mould outlining the shape of our data.

This is an iterative process, which we can watch in the animated GIF below. It took a number of iterations for the neurons in the SOM grid to stabilize. To get an overview of how many data points each neuron corresponds to, we can plot a frequency map of the grid, shown below.

Each neuron is represented by a square, and the pink region within the square represents the relative number of data points that neuron is positioned closest to—the larger the pink area, the more data points represented by that neuron.
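A frequency map of this kind can be computed by counting, for each neuron, how many data points have it as their best-matching unit (a sketch with our own function name):

```python
import numpy as np

# Sketch: count how many data points each unit is positioned closest to.
def frequency_map(weights, data):
    """weights: (rows, cols, dim) codebook; data: (n, dim) samples.
    Returns a (rows, cols) array of BMU hit counts."""
    counts = np.zeros(weights.shape[:2], dtype=int)
    for x in data:
        d = np.linalg.norm(weights - x, axis=2)
        counts[np.unravel_index(np.argmin(d), d.shape)] += 1
    return counts
```

Plotting these counts (e.g. as scaled pink squares) gives exactly the kind of frequency map described above.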

Frequency map of neurons in an 8 x 8 SOM grid. From the frequency map, we can see a clear divide separating a top-left neuron cluster from a smaller bottom-right cluster. This divide is marked by the neurons in between, which have little or no pink area.

When two neurons correspond to vastly different sets of data points, they are separated by a larger distance, denoted by a pink color.

On the other hand, neurons representing similar data points are separated by shorter distances, denoted by a blue color.
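The U-matrix behind this visualization can be sketched as follows (a simplified formulation of our own: for each unit, we average the weight-space distance to its 4-connected grid neighbours):

```python
import numpy as np

# Minimal U-matrix sketch: large values mark boundaries between dissimilar
# neighbouring units; small values mark regions of similar units.
def u_matrix(weights):
    rows, cols, _ = weights.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            u[i, j] = np.mean(dists)  # average distance to grid neighbours
    return u
```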


U-matrix showing similarity and dissimilarity between neurons. Some extensions of the approach can label the prepared codebook vectors, which can then be used for classification.

SOM is non-parametric, meaning that it does not rely on assumptions about the structure of the function that it is approximating.

Euclidean distance is commonly used to measure the distance between real-valued vectors, although other distance measures may be used, such as the dot product, and data-specific distance measures may be required for non-scalar attributes.

There should be sufficient training iterations to expose all the training data to the model multiple times.
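Putting the pieces together, a minimal end-to-end training loop might look like the sketch below. All parameter values (grid size, epochs, initial learning rate and radius, the decay shapes) are assumptions for illustration, not the article's settings:

```python
import numpy as np

# End-to-end sketch: present every training sample repeatedly (multiple
# epochs), find its BMU, and pull the BMU's neighborhood toward it while the
# learning rate and neighborhood radius decay over training.
def train_som(data, rows=8, cols=8, epochs=20, lr0=0.5, radius0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))  # random initial positions
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    total = epochs * len(data)
    step = 0
    for _ in range(epochs):  # expose all training data multiple times
        for x in data[rng.permutation(len(data))]:
            frac = step / total
            lr = lr0 * (1 - frac)                     # decaying learning rate
            radius = max(1.0, radius0 * (1 - frac))   # decaying radius, floor 1
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
            h = np.exp(-d2 / (2 * radius ** 2))       # neighborhood function
            weights += lr * h[:, :, None] * (x - weights)
            step += 1
    return weights
```

After enough epochs, the average distance from each data point to its nearest codebook vector (the quantization error) should be small.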