The components of a neural net

Generally speaking, there are many different types of neural nets, but they all share nearly the same components.

If one wants to simulate the human brain with a neural net, it is obvious that some drastic simplifications have to be made:

First of all, it is impossible to "copy" the truly parallel processing of all neural cells. Although some computers are capable of parallel processing, the large number of processors that a faithful simulation would require is beyond today's hardware.

Another limitation is that a computer's internal structure can't be changed while it is performing a task.

And how could electrical stimulation be implemented in a computer program at all?


These facts lead to an idealized model for simulation purposes.

Like the human brain, a neural net consists of neurons and connections between them. A neuron passes incoming information along its outgoing connections to other neurons. In neural net terms, these connections are called weights. The "electrical" information is simulated by specific values stored in those weights.

By simply changing these weight values, changes in the connection structure can also be simulated.
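This idea can be sketched in a few lines of Python (the names here are purely illustrative): connections are stored as numeric weight values, and changing a value is all it takes to "rewire" the net.

```python
# Connections between neurons are simulated as numeric weight values.
# A weight of 0.0 effectively means "no connection", so changing the
# weight values also simulates changes to the connection structure.
weights = {
    ("neuron_a", "neuron_c"): 0.8,   # strong excitatory connection
    ("neuron_b", "neuron_c"): -0.3,  # inhibitory connection
    ("neuron_a", "neuron_d"): 0.0,   # effectively no connection
}

# "Rewiring" the net is just assigning a new value:
weights[("neuron_a", "neuron_d")] = 0.5
```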


The following figure shows an idealized neuron of a neural net.

Structure of a neuron in a neural network

As you can see, an artificial neuron looks similar to a biological neural cell, and it works in much the same way.

Information (called the input) is sent to the neuron on its incoming weights. This input is processed by a propagation function that sums the values arriving on all incoming weights.

The resulting value is compared with a certain threshold value by the neuron's activation function. If the input exceeds the threshold value, the neuron will be activated; otherwise it will be inhibited.

If activated, the neuron sends an output along its outgoing weights to all connected neurons, and so on.
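The three steps above (propagation, threshold comparison, activation) can be sketched as follows. This is a minimal illustrative model with a simple binary threshold, not a reference implementation:

```python
def propagate(inputs, weights):
    """Propagation function: weighted sum of all incoming values."""
    return sum(x * w for x, w in zip(inputs, weights))

def activate(net_input, threshold):
    """Threshold activation: 1 (activated) if the net input exceeds
    the threshold, 0 (inhibited) otherwise."""
    return 1 if net_input > threshold else 0

def neuron_output(inputs, weights, threshold):
    """Complete neuron: propagation followed by activation."""
    return activate(propagate(inputs, weights), threshold)

# Example: two inputs, net input = 1.0*0.6 + 0.5*0.4 = 0.8 > 0.7,
# so the neuron is activated.
neuron_output([1.0, 0.5], [0.6, 0.4], threshold=0.7)
```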


In a neural net, the neurons are grouped into layers, called neuron layers. Usually, each neuron of one layer is connected to all neurons of the preceding and the following layer (the input layer has no preceding layer, and the output layer no following one).

The information given to a neural net is propagated layer by layer from the input layer to the output layer, through zero, one, or more hidden layers. Depending on the learning algorithm, information may also be propagated backwards through the net.
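Layer-by-layer propagation can be sketched like this, using the same threshold neurons as above. The representation is an assumption made for the sketch: each layer is described by a weight matrix plus one shared threshold value (real nets typically use a per-neuron threshold or bias).

```python
def forward(inputs, layers):
    """Propagate input values layer by layer to the output layer.
    `layers` is a list of (weight_matrix, threshold) pairs; each row
    of a weight matrix holds the incoming weights of one neuron."""
    values = inputs
    for weight_matrix, threshold in layers:
        values = [
            1 if sum(x * w for x, w in zip(values, row)) > threshold else 0
            for row in weight_matrix
        ]
    return values

# Example: 2 input neurons -> 2 hidden neurons -> 1 output neuron.
layers = [
    ([[0.5, 0.5], [0.9, -0.4]], 0.4),  # hidden layer
    ([[1.0, 1.0]], 0.5),               # output layer
]
forward([1, 0], layers)
```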


The following figure shows a neural net with three neuron layers.

Neural network with three neuron layers

Note that this is not the general structure of every neural net. For example, some neural net types have no hidden layers, or their neurons are arranged as a matrix.

What is common to all neural net types is the presence of at least one weight matrix, which holds the connections between two neuron layers.
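As an illustration, such a weight matrix might be represented as a nested list, with one row per neuron of the following layer and one column per neuron of the preceding layer (this concrete layout is an assumption for the sketch, not a fixed convention):

```python
# Weight matrix connecting an input layer (2 neurons) to a hidden
# layer (3 neurons). Entry [i][j] is the weight of the connection
# from input neuron j to hidden neuron i.
weight_matrix = [
    [0.2, 0.8],    # incoming weights of hidden neuron 1
    [0.5, -0.4],   # incoming weights of hidden neuron 2
    [-0.1, 0.9],   # incoming weights of hidden neuron 3
]

inputs = [1.0, 0.5]

# Each entry of the result is the net input of one hidden neuron.
net_inputs = [sum(x * w for x, w in zip(inputs, row)) for row in weight_matrix]
```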


Next, let's see what neural nets are useful for.

Neural Net Components in an Object Oriented Class Structure