ANALYSIS AND DESIGN
4.1 Analysis
Before entering the Backpropagation process, the data is normalized. The steps taken are reading the contents of the master data, transforming the data to the 0-1 range, and writing the CSV file that will be used in both the learning and testing processes of Backpropagation.
In the learning process flowchart above, there are three processes in Backpropagation, namely feed forward, backward, and weight update. Before entering the first process in Backpropagation, the steps taken are reading the contents of the learning data from the CSV file, then determining the values of the learning rate, maximum epoch, and maximum error that will be used in the calculation process. The next step is generating random weights used to calculate the values in the hidden layer and the output layer. The weights range from -1 to 1.
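The random weight initialization described above can be sketched as follows. This is a minimal sketch, assuming NumPy and the layer sizes implied by the tables in Section 4.2.1 (six input nodes X1-X6, four hidden nodes Z1-Z4, three output nodes Y1-Y3); the variable names and the seed are illustrative, not part of the project's actual code.

```python
import numpy as np

# Illustrative layer sizes taken from the tables in Section 4.2.1.
n_input, n_hidden, n_output = 6, 4, 3

rng = np.random.default_rng(seed=42)

# Uniform random weights in [-1, 1]; row 0 of each matrix holds the
# bias weight (V0j / W0k in the text), the remaining rows hold Vij / Wjk.
V = rng.uniform(-1.0, 1.0, size=(n_input + 1, n_hidden))   # input -> hidden
W = rng.uniform(-1.0, 1.0, size=(n_hidden + 1, n_output))  # hidden -> output

print(V.shape, W.shape)  # (7, 4) (5, 3)
```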
In the feed forward process, the steps taken are calculating the node values for the hidden layer and the output layer. Using the values of the six input parameters and the weights between the input layer and the hidden layer, the value of each hidden node is calculated using the sigmoid activation function. Then the hidden layer values and the weights between the hidden layer and the output layer are used to calculate the output layer values with the same activation function. The error value of the output is then calculated using MSE (Mean Square Error). If the error and epoch values do not yet match those specified in the initial step, the calculation continues to the backward process.
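The feed forward pass described above can be sketched as below. This is a minimal sketch assuming NumPy; the function names `feed_forward` and `mse` and the bias-in-row-0 weight layout are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation used at both the hidden and output layers."""
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(x, V, W):
    """One forward pass; row 0 of V and W is assumed to hold the bias."""
    z = sigmoid(V[0] + x @ V[1:])   # hidden node values Zj
    y = sigmoid(W[0] + z @ W[1:])   # output node values Yk
    return z, y

def mse(target, y):
    """Mean Square Error between the target vector and the output."""
    return float(np.mean((target - y) ** 2))
```

With all-zero weights every node outputs sigmoid(0) = 0.5, which gives an MSE of 0.25 against a one-hot target; that makes a convenient sanity check before training.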
In the backward process, the step is to calculate the weight changes, then update the old weights with the new weights. The calculations are repeated until the epoch and error values reach the specified maximum epoch and maximum error.
In the testing flowchart above, the testing process is done by reading the contents of the CSV file containing the test data. Then the data is run through the Backpropagation forward pass using the optimal weights obtained from the learning process. Finally, the error percentages for 1, 2, and 3 hidden layers using Backpropagation are displayed, while the classification results are stored in a CSV file.
4.2 Design
4.2.1 Learning Process
1. The first step taken is determining the architecture of
Backpropagation. This project uses six nodes in the input
layer; 1, 2, and 3 hidden layers with four nodes in each
hidden layer (Z1-Z4); and three nodes in the output layer
(Y1-Y3, one per weather class).
Where:
X1 = temperature
X2 = pressure
X3 = humidity
X4 = wind
X5 = rain
X6 = clouds
2. Determine the learning rate coefficient, the maximum epoch,
and the maximum error.
3. Read the master data.
Table 4.1: Example Learning Data
Temp     Press  Humidity  Wind  Rain    Clouds  Weather
294.308  975    95        0     0       8       Clear
296.021  977    98        1     0       32      Clouds
299.354  974    94        1     4.1425  48      Rain
295.21   974    100       1     2.085   56      Rain
304.396  973    47        1     0       68      Clouds
The weather class is encoded as a three-bit target:
Clear  = 001
Clouds = 010
Rain   = 100
4. Normalization process using the formula:
x' = 0.8(x - a) / (b - a) + 0.1
Where:
x' = data after normalization process
x  = data that will be processed
a  = minimum data
b  = maximum data
Table 4.2: Normalized Learning Data
Temp    Press   Humidity  Wind  Rain    Clouds  Weather
0.1     0.8921  0.644     0.1   0.1     0.1     Clear
0.2358  0.9     0.8698    0.9   0.1     0.4200  Clouds
0.5001  0.3000  0.8094    0.9   0.9     0.6333  Clouds
0.1715  0.3000  0.9000    0.9   0.5026  0.7400  Rain
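The normalization formula above can be sketched as a small function; the name `normalize` is illustrative. Using the temperature column of Table 4.1 (minimum a = 294.308, maximum b = 304.396), the formula reproduces the 0.2358 entry of Table 4.2.

```python
def normalize(x, a, b):
    """Min-max normalization into [0.1, 0.9]: x' = 0.8*(x - a)/(b - a) + 0.1."""
    return 0.8 * (x - a) / (b - a) + 0.1

# Temperature column of Table 4.1: a = 294.308, b = 304.396.
print(round(normalize(296.021, 294.308, 304.396), 4))  # 0.2358, as in Table 4.2
```

The minimum maps to 0.1 and the maximum to 0.9, which keeps all inputs inside the near-linear region of the sigmoid.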
5. Initialize the weight values by generating small random
numbers between -1 and 1.
Table 4.3: Example Input-Hidden Weight
Z1 Z2 Z3 Z4
Table 4.4: Example Hidden-Output Weight
Y1 Y2 Y3
6. Calculate the weighted sum of the input values and the weights
between the input layer and the hidden layer.
z_inj = V0j + Σ(i=1..n) Xi Vij
Where:
z_inj = the weighted hidden nodes signal
V0j = weight bias between input layer and hidden layer
Xi = input value
Vij = weight between input layer and hidden layer
Example:
z_in1 = 0.2 + (0.1 x 0.1) + (-0.4 x 0.8921) + (0.3 x 0.644) +
7. Calculate the hidden node value using the sigmoid activation function.
Zj = f(z_inj) = 1 / (1 + e^(-z_inj))
Where:
Zj = hidden nodes value
z_inj = the weighted hidden nodes signal
Example:
Z1 = 1 / (1 + e^(-0.47478))
   = 0.6165
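The worked sigmoid value above can be checked with a few lines of Python; the weighted signal 0.47478 is taken from the example in step 7.

```python
import math

def sigmoid(x):
    """Logistic activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Worked value from step 7: z_in1 = 0.47478
print(round(sigmoid(0.47478), 4))  # 0.6165
```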
8. Calculate the weighted sum of the hidden values and the weights
between the hidden layer and the output layer.
y_ink = W0k + Σ(j=1..p) Zj Wjk
Where:
y_ink = the weighted output nodes signal
W0k = weight bias between hidden layer and output layer
Zj = hidden nodes value
Wjk = weight between hidden layer and output layer
9. Calculate the output node value using the same sigmoid
activation function.
Yk = f(y_ink) = 1 / (1 + e^(-y_ink))
Where:
Yk = output nodes value
Example:
Y1 = 1 / (1 + e^(-y_in1))
10. Calculate the weight change values between the output layer
and the hidden layer.
δk = (tk - Yk) f'(y_ink)
ΔWjk = α δk Zj
Where:
δk = error value that is propagated back to the hidden nodes
tk = target output
Yk = output nodes value
α = learning rate value
Zj = hidden nodes value
ΔWjk = weight change value between the output layer and the hidden layer
Example: one of the weight changes works out to
ΔWjk = -0.0293
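The output-layer delta and weight change in step 10 can be sketched as below. For the sigmoid, f'(y_ink) = Yk(1 - Yk), so the derivative can be computed from the output value alone. The sample numbers (target 1, output 0.6, α = 0.2, Z = 0.6165) are hypothetical, chosen only to illustrate the formulas.

```python
def output_delta(t_k, y_k):
    """delta_k = (t_k - y_k) * f'(y_in_k); for the sigmoid, f' = y_k * (1 - y_k)."""
    return (t_k - y_k) * y_k * (1.0 - y_k)

def weight_change(alpha, delta_k, z_j):
    """Delta W_jk = alpha * delta_k * z_j."""
    return alpha * delta_k * z_j

# Hypothetical values, not taken from the text.
d = output_delta(1.0, 0.6)              # 0.4 * 0.6 * 0.4 = 0.096
print(round(weight_change(0.2, d, 0.6165), 5))  # 0.01184
```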
11. Calculate the weight change values between the hidden layer
and the input layer.
δ_inj = Σ(k=1..m) δk Wjk
δj = δ_inj f'(z_inj)
ΔVij = α δj Xi
Where:
δ_inj = the sum of the delta inputs to hidden node j from the
output nodes
δj = error value that is propagated back to the input nodes
Wjk = weight between hidden layer and output layer
ΔVij = weight change value between the hidden layer and the input layer
Example:
ΔV35 = 0.2 x -0.0253 x 0.1
12. Update the weights.
Wjk(new) = Wjk(old) + ΔWjk
Vij(new) = Vij(old) + ΔVij
Where:
Wjk = weight between hidden layer and output layer
Vij = weight between input layer and hidden layer
ΔWjk = the weight change value between the hidden layer and the output layer
ΔVij = the weight change value between the input layer and the hidden layer
Example:
W34 = 0.3 + (-0.0180)
    = 0.282
Table 4.5: New Hidden-Output Weight
      Y1      Y2      Y3
1     0.0707  0.2705  0.0705
Z1    0.0848  0.0848  0.0847
Z2    0.283   0.1829  0.1829
Z3    0.1799  0.3798  0.1798
Z4    0.0821  0.082   0.282
V10 = 0.2 + (-0.0022)
= 0.1978
V11 = 0.1 + (-0.0002)
= 0.0998
V12 = -0.4 + (-0.0019)
= -0.4019
V13 = 0.3 + (-0.0014)
= 0.2986
V14 = 0.2 + (-0.0002)
= 0.1998
V15 = 0.1 + (-0.0002)
= 0.0998
V16 = 0.1 + (-0.0002)
= 0.0998
V20 = 0.1 + (-0.005)
= 0.095
= 0.1997
V42 = -0.2 + (-0.0031)
= -0.2031
V43 = 0.3 + (-0.0022)
= 0.2978
V44 = 0.1 + (-0.0003)
= 0.0997
V45 = 0.1 + (-0.0003)
= 0.0997
V46 = 0.2 + (-0.0003)
= 0.1997
Table 4.6: New Input-Hidden Weight
      Z1       Z2      Z3      Z4
1     0.1978   0.095   0.295   0.3965
X1    0.0998   0.2995  0.0995  0.1997
X2    -0.4019  0.0956  0.0955  -0.2031
X3    0.2986   0.0968  0.4968  0.2978
X4    0.1998   0.2995  0.0995  0.0997
X5    0.0998   0.1995  0.3995  0.0997
X6    0.0998   0.1995  0.1995  0.1997
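The weight update rule of the learning process can be checked against the worked examples above; the function name `update_weight` is illustrative.

```python
def update_weight(old, change):
    """W_jk(new) = W_jk(old) + Delta W_jk; the same rule applies to V_ij."""
    return old + change

# Worked examples from the text:
print(round(update_weight(0.3, -0.0180), 3))  # 0.282  (W34)
print(round(update_weight(0.2, -0.0022), 4))  # 0.1978 (V10)
```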
4.2.2 Testing Process
Testing is done by performing the same steps 1-9 as in the
learning process, using the optimal weights obtained from the
learning process.
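The testing pass can be sketched as below. This is a minimal sketch assuming NumPy; the helper names `predict` and `error_percentage` and the bias-in-row-0 weight layout are assumptions. Classification picks the strongest output node and compares it against the one-hot weather target.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(X, V, W):
    """Forward pass over all test rows using the learned weights."""
    Z = sigmoid(V[0] + X @ V[1:])
    return sigmoid(W[0] + Z @ W[1:])

def error_percentage(targets, outputs):
    """Share of rows whose strongest output node misses the one-hot target."""
    wrong = np.argmax(outputs, axis=1) != np.argmax(targets, axis=1)
    return 100.0 * float(wrong.mean())

# Toy illustration: one correct row, one wrong row -> 50% error.
t = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
o = np.array([[0.1, 0.2, 0.9], [0.8, 0.1, 0.2]])
print(error_percentage(t, o))  # 50.0
```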