[Figure: target T and network output Y plotted against P]
Here the network is trained for 50 epochs. Again the network's output is plotted.
>> net.trainParam.epochs = 50;
>> net = train(net,P,T);
>> Y = sim(net,P);
>> figure,plot(P,T,'rs-',P,Y,'o')
>> legend('T','Y',0),xlabel('P')
[Figure: target T and network output Y plotted against P after training for 50 epochs]
Typing the name of the network object at the command prompt shows how it is organized:

>> net
net =
    Neural Network object:

    architecture:

        numInputs: 1
        numLayers: 2
        biasConnect: [1; 1]
        inputConnect: [1; 0]
        layerConnect: [0 0; 1 0]
        outputConnect: [0 1]
        numWeightElements: 31
        numOutputs: 1 (read-only)
        numInputDelays: 0 (read-only)
        numLayerDelays: 0 (read-only)

    subobject structures:

        inputs: {1x1 cell} of inputs
        layers: {2x1 cell} of layers
        outputs: {1x2 cell} containing 1 output
        biases: {2x1 cell} containing 2 biases
        inputWeights: {2x1 cell} containing 1 input weight
        layerWeights: {2x2 cell} containing 1 layer weight
First the architecture parameters and then the subobject structures are shown. The latter contain information about the individual objects of the network. Each layer consists of neurons that share the same transfer function (net.layers{i}.transferFcn) and net input function (net.layers{i}.netInputFcn), which for a perceptron are hardlim and netsum. If neurons are to have different transfer functions, they must be arranged in different layers. The parameters net.inputWeights and net.layerWeights specify, among other things, the applied learning functions and their parameters.
The next paragraph contains the training, initialization and performance functions.
functions:
adaptFcn: 'trains'
The trainFcn and adaptFcn are used for the two different learning types: batch learning and incremental (on-line) learning. By setting the trainFcn parameter you tell MatLab which training algorithm should be used; in our case it is the cyclical order incremental training/learning function trainc. The ANN toolbox includes almost 20 training functions. The performance function determines how well the ANN is doing its task; for a perceptron it is the mean absolute error performance function mae, while for linear regression the mean squared error performance function mse is usually used. The initFcn is the function that initializes the weights and biases of the network. To get a list of the available functions type help nnet. To change one of these functions to another one in the toolbox, or to one that you have created, just assign the name of the function to the corresponding parameter.
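For instance, to switch the network's performance function to the mean squared error (any function name listed by help nnet, or one of your own, can be assigned in the same way):

>> net.performFcn = 'mse';   % train will now evaluate the network with mse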
The parameters that concern these functions are listed in the next paragraph.
parameters:
adaptParam: .passes
divideParam: .trainRatio, .valRatio, .testRatio
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: .epochs, .goal, .max_fail, .mem_reduc,
.min_grad, .mu, .mu_dec, .mu_inc,
.mu_max, .show, .time
By changing these parameters you can change the default behavior of the functions mentioned above. The parameters you will use most are probably the components of trainParam. The most used of these are net.trainParam.epochs, which tells the algorithm the maximum number of epochs to train, and net.trainParam.show, which tells the algorithm how many epochs should pass between each display of the training performance. Type help train for more information.
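For example, with the network net and the data P and T used above (the values below are arbitrary):

>> net.trainParam.epochs = 100;  % train for at most 100 epochs
>> net.trainParam.show = 10;     % report the performance every 10 epochs
>> net = train(net,P,T);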
The weights and biases are also stored in the network structure. The .IW component is a two-dimensional cell array; net.IW{i,j} holds the weight matrix of the connection from input j to network layer i. Likewise, net.LW{i,j} holds the weight matrix of the connection from layer j to layer i, and the cell array .b contains the bias vector of each layer.
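For the two-layer network above these values can be inspected (or set) directly, for example:

>> net.IW{1,1}   % weights from input 1 to layer 1
>> net.LW{2,1}   % weights from layer 1 to layer 2
>> net.b{1}      % bias vector of layer 1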
[Figure: annotated example code marking where the number of hidden neurons is defined and where the network is trained]