Self-organization is an unsupervised learning algorithm used by the Kohonen Feature Map neural net.


As mentioned in previous sections, a neural net tries to simulate the biological human brain, and self-organization is probably the best way to realize this.

It is commonly known that the cortex of the human brain is subdivided into different regions, each responsible for certain functions. The neural cells organize themselves into groups according to incoming information.

Incoming information is not only received by a single neural cell, but also influences other cells in its neighbourhood. This organization results in a kind of map, where neural cells with similar functions are arranged close together.


This self-organization process can also be performed by a neural network. Such neural nets are mostly used for classification purposes, because similar input values are represented in certain areas of the net's map.


A sample structure of a Kohonen Feature Map that uses the self-organization algorithm is shown below:

Kohonen Feature Map with 2-dimensional input and 2-dimensional map (3x3 neurons)

As you can see, each neuron of the input layer is connected to each neuron on the map. The resulting weight matrix is used to propagate the net's input values to the map neurons.
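This propagation step can be sketched in plain Python. The function name, the weight values, and the map size below are illustrative assumptions, not taken from any specific implementation:

```python
# Sketch: propagating an input vector to the map neurons through the
# weight matrix. Each map neuron's activation is the dot product of the
# input with that neuron's weight vector.

def propagate(input_vec, weights):
    """Return one activation value per map neuron.

    weights[i][j] is the weight from input neuron j to map neuron i.
    """
    return [sum(x * w for x, w in zip(input_vec, row)) for row in weights]

# 2 input neurons, 9 map neurons (a 3x3 map flattened row by row);
# the weight values here are arbitrary placeholders.
weights = [[0.1 * (i + j) for j in range(2)] for i in range(9)]
activations = propagate([0.5, 0.5], weights)
print(len(activations))  # one activation per map neuron
```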

Additionally, all neurons on the map are connected among themselves. These connections are used to influence neurons within a certain activation area around the most activated neuron, i.e. the one receiving the greatest activation from the input layer's output.


The amount of feedback between the map neurons is usually calculated using the Gauss function:

    feedback_ci = exp( -|xc - xi|^2 / (2 * sig^2) )
- xc is the position of the most activated neuron
- xi are the positions of the other map neurons
- sig is the activation area (radius)
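The Gauss function above can be written as a small Python helper (a minimal sketch; the function name and the example positions are assumptions):

```python
import math

def gauss_feedback(pos_c, pos_i, sig):
    """Feedback from the most activated neuron at position pos_c to a
    map neuron at position pos_i: exp(-|xc - xi|^2 / (2 * sig^2))."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(pos_c, pos_i))
    return math.exp(-dist_sq / (2.0 * sig ** 2))

# The most activated neuron influences itself fully ...
print(gauss_feedback((1, 1), (1, 1), sig=2.0))  # 1.0
# ... and its neighbours less, falling off with distance.
print(gauss_feedback((1, 1), (2, 1), sig=2.0))
```

Note how a larger `sig` (activation area) keeps the feedback high even for distant neurons, which is exactly why shrinking `sig` during learning narrows the influenced region.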

In the beginning, the activation area is large and so is the feedback between the map neurons. This results in an activation of neurons in a wide area around the most activated neuron.

As the learning progresses, the activation area is constantly decreased and only neurons closer to the activation center are influenced by the most activated neuron.


Unlike the biological model, the map neurons don't change their positions on the map. The "arranging" is simulated by changing the values in the weight matrix (the same way as other neural nets do).

Because self-organization is an unsupervised learning algorithm, there are no input/target patterns. The input values passed to the net's input layer are drawn from a specified value range and represent the "data" that should be organized.

The algorithm works as follows:

  1. Define the range of the input values
  2. Set all weights to random values taken out of the input value range
  3. Define the initial activation area
  4. Take a random input value and pass it to the input layer neuron(s)
  5. Determine the most activated neuron on the map:
     - Multiply the input layer's output with the weight values
     - The map neuron with the greatest resulting value is said to be "most activated"
     - Compute the feedback value of each other map neuron using the Gauss function
  6. Change the weight values using the formula: weight(new) = weight(old) + feedback value * ( input value - weight(old) ) * learning rate
  7. Decrease the activation area
  8. Go to step 4
  9. The algorithm ends if the activation area is smaller than a specified value
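The steps above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the applet's actual code: the function name, parameters, decay factor, and the [0, 1] input range are all assumptions, and the winner is chosen by the largest dot product as described in step 5 (many SOM implementations instead pick the weight vector with the smallest distance to the input):

```python
import math
import random

def train_som(inputs, map_side=3, dim=2, rate=0.5,
              sig_start=2.0, sig_min=0.3, decay=0.99):
    """Minimal sketch of the self-organization loop described above."""
    positions = [(r, c) for r in range(map_side) for c in range(map_side)]
    # Step 2: random weights from the input value range (assumed [0, 1]).
    weights = [[random.random() for _ in range(dim)] for _ in positions]
    sig = sig_start                           # step 3: initial activation area

    while sig > sig_min:                      # step 9: stop condition
        x = random.choice(inputs)             # step 4: random input value
        # Step 5: most activated neuron = greatest dot product with the input.
        winner = max(range(len(weights)),
                     key=lambda i: sum(a * b for a, b in zip(x, weights[i])))
        for i, w in enumerate(weights):
            d2 = sum((a - b) ** 2
                     for a, b in zip(positions[winner], positions[i]))
            fb = math.exp(-d2 / (2 * sig ** 2))   # Gauss feedback
            for j in range(dim):
                # Step 6: pull the weight toward the input value.
                w[j] += fb * (x[j] - w[j]) * rate
        sig *= decay                          # step 7: decrease the area
    return weights

random.seed(0)
data = [(random.random(), random.random()) for _ in range(50)]
trained = train_som(data)
print(len(trained))  # one weight vector per map neuron
```

Because each update is a convex step toward an input value, the weights stay inside the input value range while the map unfolds.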

Example: see sample applet


The shown Kohonen Feature Map has three neurons in its input layer that represent the values of the x-, y- and z-dimension. The feature map is initially 2-dimensional and has 9x9 neurons. The resulting weight matrix has 3 * 9 * 9 = 243 weights, because each input neuron is connected to each map neuron.

[3D SOM images 1 and 2: Screenshots of the sample applet showing a 3D Kohonen Feature Map]

In the beginning, when the weights have random values, the feature map is just an unordered mess.

After 200 learning cycles, the map has "unfolded" and a grid can be seen.

[3D SOM images 3 and 4: Screenshots of the sample applet showing a 3D Kohonen Feature Map]

As the learning progresses, the map becomes more and more structured.

It can be seen that the map neurons are trying to get closer to their nearest blue input value.

[3D SOM images 5 and 6: Screenshots of the sample applet showing a 3D Kohonen Feature Map]

At the end of the learning process, the feature map is spanned over all input values.

The grid looks distorted because the neurons in the middle of the feature map are also trying to get closer to the input values, which pulls them out of a regular arrangement.

The self-organization is finished at this point.

I recommend that you do your own experiments with the sample applet in order to understand its behaviour. A description of the applet's controls is given on the corresponding page. By changing the net's parameters, it is possible to produce situations where the feature map is unable to organize itself correctly. Try, for example, giving the initial activation area a very small value, or entering too many input values.
