Figure 226

The display on the left-hand side shows an input to the network set to "0 1". After the network has been run, the output becomes 1, as expected. The display on the right-hand side shows an input to the network set to "1 1". This time the output becomes 0.

Input Assignment

To simplify the training process and to avoid deeper knowledge of NSL, we assign the training set directly to the model as a training array rather than from an external file as is usually the case. (We will show this more "realistic" approach in the NSLM chapter where we go over more extensive details of the modeling language NSLM. Obviously the approach taken will be more involved when dealing with large data sets.) The training set format is shown in table 2.2.

File Format                    Example (XOR)
<num_patterns>                 4
<input1> <input2> <output>     0 0 0
<input1> <input2> <output>     0 1 1
<input1> <input2> <output>     1 0 1
<input1> <input2> <output>     1 1 0

Table 2.2 Training file format.
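Written out as a plain training file of the kind that would normally be loaded from disk, this XOR set would look roughly as follows (assuming whitespace-separated values, exactly as listed in the table above):

4
0 0 0
0 1 1
1 0 1
1 1 0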

The first row in the file specifies the number of patterns in the file. Training pairs are specified one per row; in the XOR example each pair consists of two inputs, <input1> and <input2>, and a single output, <output>. The training set input is assigned as follows:

nsl set backPropModel.train.pInput { { 0 0 } { 0 1 } { 1 0 } { 1 1 } }

nsl set backPropModel.train.pOutput { { 0 } { 1 } { 1 } { 0 } }

Note again the curly brackets separating elements in two-dimensional arrays, similar to input in the Hopfield model.

Parameter Assignment

The Backpropagation layer sizes are specified within the present implementation of the model, i.e., if the number of units in any layer changes, the model has to be modified accordingly and recompiled. An alternative would be to treat layer sizes as model parameters set interactively during simulation initialization. While that approach is more flexible, since layer sizes tend to change considerably between problems, we use the hard-coded one here to avoid further complexity at this stage. In our example we use 2 units for the input layer, 2 for the hidden layer, and 1 for the output layer.

Additionally, we set stopError to a value small enough for the network to obtain acceptable solutions. For this example we use 0.1, i.e., 10% of the output value:

nsl set backPropModel.layers.be.stopError 0.1

The learning parameter is represented by the learningRate (lRate) parameter, determining how big a step the network can take in correcting errors. The learning rate for this problem was set to 0.8 for both the hidden and output layers.

nsl set backPropModel.layers.bh.lRate 0.8
nsl set backPropModel.layers.bo.lRate 0.8

The training step, or delta, is typically set between 0.01 and 1.0. The tradeoff is that if the training step is too large (close to 1), the network tends to oscillate and will likely jump over the minimum. On the other hand, if the training step is too small (close to 0), training will require many cycles to complete, although the network should eventually learn.
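For reference, the weight update performed at each training step is the usual gradient-descent rule; writing the learning rate as $\eta$ and the output error as $E$ (standard notation, not anything specific to NSL), it has the form

\[ w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}} \]

Since each correction is proportional to $\eta$, a value near 1 produces large jumps that can overshoot and oscillate around the error minimum, while a value near 0 produces tiny corrections and slow convergence.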

This is obviously a very simple model, but quite illustrative of Backpropagation. As an exercise we encourage you to try different learningRate (lRate) and stopError values. Additionally, you can modify the training set while keeping the same structure. In section 3.5 you may try changing the layer sizes when designing new problems. Also, if you are not satisfied with the training, there are two ways to keep it going. One is to issue an initModule command, adjust trainEndTime to a new value, and then train and run again, as sketched below. The other is to save the weights, issue an initModule, load the weights again, and then type simTrain at the prompt.
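A rough, illustrative sketch of the first approach follows. The exact parameter path for trainEndTime is not given in this section, so the backPropModel.trainEndTime path and the value 1000 are assumptions to be adapted to your model, and simTrain (the command mentioned above for starting training from the prompt) is used to restart training:

initModule
nsl set backPropModel.trainEndTime 1000
simTrain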
