The backpropagation algorithm was used to train the multi-layer perceptron (MLP)

(1)

Backpropagation Learning Algorithm

(2)

(figure: a feedforward network with inputs x1 … xn)

The backpropagation algorithm is used to train the multi-layer perceptron (MLP).

MLP is used to describe any general feedforward (no recurrent connections) neural network (FNN).

(3)

Architecture of BP Nets

Multi-layer, feed-forward networks have the following characteristics (a minimal sketch follows this list):

– They must have at least one hidden layer

– Hidden units must be non-linear units (usually with sigmoid activation functions)

– Fully connected between units in two consecutive layers, but no connections between units within one layer

– For a net with only one hidden layer, each hidden unit receives input from all input units and sends output to all output units

– The number of output units need not equal the number of input units
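
As a rough illustration, the sketch below encodes these characteristics as weight-matrix shapes; the layer sizes and names (v, w) are illustrative assumptions, not taken from the slides, and initialization is discussed later:

import numpy as np

# Illustrative sizes (not from the slides): n inputs, one hidden layer, m outputs.
n_in, n_hidden, n_out = 3, 5, 2

# Full connectivity between consecutive layers is expressed by dense weight
# matrices; there are no weights between units within the same layer.
v = np.zeros((n_in, n_hidden))    # input  -> hidden connections
w = np.zeros((n_hidden, n_out))   # hidden -> output connections
# Note: n_out need not equal n_in, as the last characteristic above says.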

(4)

Other Feedforward Networks

Madaline

– Multiple adalines (of a sort) as hidden nodes

Adaptive multi-layer networks

– Dynamically change the network size (# of hidden nodes)

Networks of radial basis functions (RBF)

– e.g., the Gaussian function

– Perform better than the sigmoid function (e.g., …)

(5)

Introduction to Backpropagation

• In 1969 a method for learning in multi-layer networks, backpropagation (or the generalized delta rule), was invented by Bryson and Ho.

• It is the best-known example of a training algorithm. It uses training data to adjust the weights and thresholds of neurons so as to minimize the network's prediction error.

• Slower than gradient descent.

• The easiest algorithm to understand.

(6)

How many hidden layers and hidden units per layer?

– Theoretically, one hidden layer (possibly with many hidden units) is sufficient to represent any L2 function

– There are no theoretical results on the minimum necessary # of hidden units (either problem dependent or independent)

Practical rule:

• n = # of input units; p = # of hidden units

• For binary/bipolar data: p = 2n

• For real data: p >> 2n

– Multiple hidden layers with fewer units may be preferable to a single large hidden layer

(7)

Training a BackPropagation Net

Feedforward training of input patterns

– each input node receives a signal, which is broadcast to all of the hidden units

– each hidden unit computes its activation which is broadcast to all output nodes

Back propagation of errors

– each output node compares its activation with the desired output

– based on these differences, the error is propagated back to all previous nodes (delta rule)

Adjustment of weights

(8)

Three-layer back-propagation neural network

(9)

Generalized delta rule

• The delta rule only works for the output layer. Backpropagation (the generalized delta rule) extends it by propagating error terms back to the hidden layers as well.

(10)

Description of Training BP Net:

Feedforward Stage (sketched in code after this list)

1. Initialize weights with small, random values

2. While the stopping condition is not true, for each training pair (input/output):

• each input unit broadcasts its value to all hidden units

• each hidden unit sums its input signals & applies its activation function to compute its output signal

• each hidden unit sends its signal to the output units
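
A minimal sketch of this feedforward stage, assuming a single hidden layer with sigmoid units; the array names (x, v, v0, w, w0) and the use of NumPy are illustrative assumptions, not notation from the slides:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def feedforward(x, v, v0, w, w0):
    # Each hidden unit sums its weighted inputs and applies the activation function.
    z = sigmoid(x @ v + v0)      # hidden output signals
    # Each hidden unit sends its signal to the output units.
    y = sigmoid(z @ w + w0)      # output signals of the network
    return z, y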

(11)

Training BP Net:

Backpropagation stage

3. Each output unit computes its error term, its own weight correction term and its bias (threshold) correction term & sends them to the layer below

(12)

Training a Back Prop Net:

Adjusting the Weights

5. Each output unit updates its weights and bias

6. Each hidden unit updates its weights and bias

– Each training cycle is called an epoch. The weights are updated in each cycle

– It is not analytically possible to determine where the global minimum is. Eventually the algorithm stops at a low point, which may be a local minimum rather than the global one

(13)

How long should you train?

• Goal: balance between correct responses for training patterns & correct responses for new patterns (memorization v. generalization)

• In general, the network is trained until it reaches an acceptable level of accuracy (e.g. 95%)

(14)

Graphical description of training a multi-layer neural network using the BP algorithm

(15)

• To teach the neural network we need a training data set. The training data set consists of input signals (x1 and x2) assigned with the corresponding target (desired output) z.

• Network training is an iterative process. In each iteration the weight coefficients of the nodes are modified using new data from the training data set.

• After this stage we can determine the output signal values for each neuron in each network layer.

• The pictures below illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of the connections between network input xm and neuron n in the input layer.

(16)
(17)

• Propagation of signals through the hidden layer. Symbols wmn represent the weights of the connections between the output of neuron m and the input of neuron n in the next layer.

(18)

• Propagation of signals through the output layer.

• In the next algorithm step the output signal of the network y is compared with the desired output value (the target), which is found in the training data set. The difference is the error signal δ of the output-layer neuron.

(19)

• It is impossible to compute the error signal for internal neurons directly, because the output values of these neurons are unknown. For many years an effective method for training multilayer networks was unknown.

• Only in the mid-eighties was the backpropagation algorithm worked out. The idea is to propagate the error signal δ back to all neurons whose output signals were inputs to the neuron in question.

(20)

• The weight coefficients wmn used to propagate errors back are equal to those used when computing the output value. Only the direction of data flow is changed (signals are propagated from outputs to inputs, one after the other).
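
A minimal sketch of this idea, assuming NumPy arrays and the hidden-to-output weight matrix w from the earlier feedforward sketch (the names are illustrative):

import numpy as np

def backpropagate_errors(delta_out, w):
    # Reuse the same hidden -> output weights w, only the direction is reversed:
    # each hidden unit sums the error terms it receives from the output units.
    return delta_out @ w.T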

(21)

• When the error signal for each neuron has been computed, the weight coefficients of each neuron's input connections may be modified. In the formulas below, df(e)/de represents the derivative of the activation function of the neuron whose weights are being modified.

(22)
(23)

The coefficient η affects the network teaching (learning) speed. There are a few techniques for selecting this parameter. The first method is to start the teaching process with a large value of the parameter; as the weight coefficients become established, the parameter is gradually decreased.

The second, more complicated, method starts teaching with a small parameter value; during the teaching process the parameter is gradually increased as training advances.
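
A small sketch of the first technique; the starting value and decay factor below are illustrative assumptions, not values from the slides:

def eta_schedule(epoch, eta_start=0.9, decay=0.99):
    # Start teaching with a large eta and decrease it gradually
    # as the weight coefficients become established.
    return eta_start * decay ** epoch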

(24)

Training Algorithm 1

• Step 0: Initialize the weights to small random values

• Step 1: Feed the training sample through the network and determine the final output

• Step 2: Compute the error for each output unit; for unit k it is:

δk = (tk – yk) f′(y_ink)

where tk is the required (target) output and yk is the actual output.
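
A minimal sketch of Step 2, assuming sigmoid output units so that f′(y_ink) = yk(1 – yk); the names are illustrative:

def output_deltas(t, y):
    # delta_k = (t_k - y_k) * f'(y_in_k); for sigmoid units f'(y_in_k) = y_k * (1 - y_k),
    # so the net input y_in_k is not needed here.  Works elementwise on NumPy arrays.
    return (t - y) * y * (1.0 - y)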

(25)

Training Algorithm 2

• Step 3: Calculate the weight correction term for each output unit; for unit k it is:

Δwjk = η δk zj

where η is a small constant (the learning rate) and zj is the output signal of hidden unit j.
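
A minimal sketch of Step 3, assuming NumPy and that z holds the hidden-unit outputs (illustrative names):

import numpy as np

def output_weight_corrections(eta, delta_out, z):
    # Delta w_jk = eta * delta_k * z_j, computed for all (j, k) pairs at once.
    return eta * np.outer(z, delta_out)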

(26)

Training Algorithm 3

• Step 4: Propagate the delta terms (errors) back through the weights of the hidden units, where the delta input for the jth hidden unit is:

δ_inj = Σk=1..m δk wjk

The delta term for the jth hidden unit is:

δj = δ_inj f′(z_inj)

where f′(z_inj) is the derivative of the activation function evaluated at the net input to hidden unit j.
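
A minimal sketch of Step 4, again assuming sigmoid hidden units so f′(z_inj) = zj(1 – zj), with w as the hidden-to-output weight matrix (illustrative names):

import numpy as np

def hidden_deltas(delta_out, w, z):
    # delta_in_j = sum_k delta_k * w_jk  (w has shape n_hidden x n_out)
    delta_in = w @ delta_out
    # delta_j = delta_in_j * f'(z_in_j); for sigmoid units this is z_j * (1 - z_j).
    return delta_in * z * (1.0 - z)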

(27)

Training Algorithm 4

• Step 5: Calculate the weight correction term for the hidden units:

Δwij = η δj xi

• Step 6: Update the weights

• Step 7: Test for stopping (maximum cycles, small changes, etc.)
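
A minimal sketch of Steps 5 and 6, assuming NumPy and the same illustrative names as before (v for input-to-hidden weights, w for hidden-to-output weights); Step 7 would simply wrap this in a loop that stops after a maximum number of cycles or when the corrections become small:

import numpy as np

def update_weights(x, z, delta_hidden, delta_out, v, w, eta):
    # Step 5: Delta w_ij = eta * delta_j * x_i for the input -> hidden weights.
    v = v + eta * np.outer(x, delta_hidden)
    # The corresponding correction for the hidden -> output weights (from Step 3).
    w = w + eta * np.outer(z, delta_out)
    # Step 6: the updated weights are used in the next training cycle.
    return v, w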

(28)

Options

• There are a number of options in the design of a backprop system

– Initial weights – it is best to set the initial weights (and all other free parameters) to random numbers inside a small range of values (say: –0.5 to 0.5), as sketched below

– Number of cycles – tends to be quite large for backprop systems
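
A minimal sketch of such an initialization, assuming NumPy and illustrative layer sizes:

import numpy as np

n_in, n_hidden, n_out = 2, 4, 1          # illustrative sizes
rng = np.random.default_rng()
# Random initial weights inside a small range, here -0.5 .. 0.5.
v = rng.uniform(-0.5, 0.5, size=(n_in, n_hidden))    # input  -> hidden
w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_out))   # hidden -> output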

(29)

Example

• The XOR function could not be solved by a single-layer perceptron network

• The function is:

X Y F
0 0 0
0 1 1
1 0 1
1 1 0

(30)
(31)

Initial Weights

Randomly assign small weight values:

(figure: network diagram with inputs x and y)

(32)

Feedforward – 1st Pass

Activation function f:

(33)
(34)
(35)

Final Result

After about 500 iterations:

(figure: the trained network with inputs x and y)
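
To tie the whole procedure together, here is a minimal end-to-end sketch that trains such a network on XOR; the hidden-layer size, learning rate, number of epochs and random seed are illustrative assumptions and do not reproduce the slide's exact run:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# XOR training data (inputs x, y and target F from the truth table above).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 2, 4, 1                       # illustrative sizes
v  = rng.uniform(-0.5, 0.5, (n_in, n_hidden))         # input  -> hidden weights
v0 = rng.uniform(-0.5, 0.5, n_hidden)                 # hidden biases
w  = rng.uniform(-0.5, 0.5, (n_hidden, n_out))        # hidden -> output weights
w0 = rng.uniform(-0.5, 0.5, n_out)                    # output biases
eta = 0.5                                             # learning rate

for epoch in range(5000):
    for x, t in zip(X, T):
        # Feedforward stage
        z = sigmoid(x @ v + v0)                       # hidden output signals
        y = sigmoid(z @ w + w0)                       # network output
        # Backpropagation of errors
        delta_out = (t - y) * y * (1.0 - y)           # output error terms
        delta_hid = (w @ delta_out) * z * (1.0 - z)   # hidden error terms
        # Weight and bias adjustment
        w += eta * np.outer(z, delta_out)
        w0 += eta * delta_out
        v += eta * np.outer(x, delta_hid)
        v0 += eta * delta_hid

# Outputs after training: should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ v + v0) @ w + w0).ravel(), 2))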
