
USER AUTHENTICATION USING NEURAL NETWORK IN SMART HOME

JEE TZE LING

A thesis submitted to the

Faculty of Engineering, Universiti Malaysia Sarawak, in partial fulfillment of the requirements for the degree of Bachelor of Engineering

with Honours (Electronic and Telecommunications Engineering)

2009


UNIVERSITI MALAYSIA SARAWAK

R13a THESIS STATUS DECLARATION FORM

Title: USER AUTHENTICATION USING NEURAL NETWORK IN SMART HOME

ACADEMIC SESSION: 2008/2009

I, JEE TZE LING

(CAPITAL LETTERS)

hereby agree that this thesis* shall be kept at the Pusat Khidmat Maklumat Akademik, Universiti Malaysia Sarawak, subject to the following conditions of use:

1. The thesis is the property of Universiti Malaysia Sarawak.

2. The Pusat Khidmat Maklumat Akademik, Universiti Malaysia Sarawak, is permitted to make copies for study purposes only.

3. Digitisation is permitted for the purpose of developing the Local Content Database.

4. The Pusat Khidmat Maklumat Akademik, Universiti Malaysia Sarawak, is permitted to make copies of this thesis as exchange material between institutions of higher learning.

5. ** Please tick (✓) in the appropriate box

CONFIDENTIAL (Contains information of security classification or of importance to Malaysia as stipulated in the OFFICIAL SECRETS ACT 1972).

RESTRICTED (Contains RESTRICTED information as determined by the organisation/body where the research was carried out).

✓ NOT RESTRICTED

Verified by

(AUTHOR'S SIGNATURE) (SUPERVISOR'S SIGNATURE)

Permanent address: 379, CASA MARBELLA,

LRG SETIA RAJA 4E4, 93350 KUCHING. MDM. ANNIE JOSEPH

Supervisor's Name

Date: 8 April 2009 Date: 8 April 2009

NOTES: * "Thesis" refers to a thesis for the degree of Doctor of Philosophy, Master's or Bachelor's.

** If this thesis is CONFIDENTIAL or RESTRICTED, please attach a letter from the authority/organisation concerned stating the reason and the period for which this thesis needs to be classified as CONFIDENTIAL or RESTRICTED.


This Final Year Project attached herewith:

Title: User Authentication Using Neural Network in Smart Home
Student Name: Jee Tze Ling
Matric No.: 14192

has been read and approved by:

______________________ ____________________

Mdm. Annie Joseph Date

Supervisor


Dedicated to my beloved parents


ACKNOWLEDGEMENT

First and foremost, I would like to express my gratitude to my supervisor, Mdm. Annie Joseph, for her patient support and guidance throughout this thesis. Without her advice and assistance, my thesis would not have been successful.

I would also like to take this opportunity to thank all the lecturers of the Electronics Department, Faculty of Engineering, for their guidance and help throughout my studies in UNIMAS. I also want to thank my fellow coursemates and friends for their encouragement and support during this research.

Special thanks go to the people on the MATLAB Newsreader who replied and helped to solve my problems. With their knowledge and help, the progress of this thesis was much smoother.

I wish to express my love and gratitude to my beloved family members, especially my parents, for their understanding and endless love throughout the duration of my studies.

Last but not least, I would like to thank Mr. Ting Seng Wee for his encouragement and support throughout my studies in UNIMAS.


ABSTRAK

Security is an important issue and concern for smart home systems. Because a smart home network consists of a wide range of wired or wireless devices, illegal access to confidential data or devices may occur. Password-based authentication has been widely used to identify registered users because this method is cheap, easy and accurate. Conventional password-based authentication methods that store passwords in a password or verification table are weak.

In this project, a neural network is trained to store the passwords and to replace the verification table. This method is useful in solving the security problems that occur in some authentication systems. Furthermore, it can be applied to the door lock of a smart home system. The conventional way of training the network using Backpropagation (BPN) requires a long time. Therefore, a fast training algorithm, Resilient Backpropagation (RPROP), was used in the network to speed up the training process. 200 UserIDs and passwords were used in the experiments and were converted to binary. Experiments were carried out to evaluate the performance for different numbers of hidden neurons, training sets and combinations of transfer functions. The Mean Square Error (MSE), training time and number of epochs were used to determine the network performance. From the simulation results obtained, using Tansig and Purelin in the hidden and output layers, 250 hidden neurons gave the best results. The network giving the best performance was used to build the user authentication system.


ABSTRACT

Security has been an important issue and concern in smart home systems. Because smart home networks consist of a wide range of wired and wireless devices, there is a possibility that illegal access to restricted data or devices may occur. Password-based authentication is widely used to identify authorized users because this method is cheap, easy and quite accurate. Conventional password-based authentication methods store passwords in a password or verification table, which is vulnerable. In this project, a neural network is trained to store the passwords and replace the verification table. This method is useful in solving the security problems that occur in some authentication systems. Furthermore, it can be applied to the door lock of a smart home system. The conventional way to train the network using Backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded in the network to accelerate the training process. For the experiment, 200 sets of UserIDs and passwords were created and encoded into binary as the inputs and targets. Experiments were carried out to evaluate the performance for different numbers of hidden neurons, training sets, and combinations of transfer functions. The Mean Square Error (MSE), training time and number of epochs were used to determine the network performance. From the simulation results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. The network giving the best performance was used to develop the user authentication system.


TABLE OF CONTENTS

CONTENTS PAGE

ACKNOWLEDGEMENT ii
ABSTRAK iii
ABSTRACT iv
TABLE OF CONTENTS v
LIST OF TABLES ix
LIST OF FIGURES x
LIST OF ABBREVIATIONS xiii

CHAPTER 1 INTRODUCTION
1.1 Artificial Neural Network Overview 1
1.2 The Basics of Artificial Neural Networks 2
1.3 History of ANNs 2
1.4 Applications of ANNs 4
1.5 Problem Statement 4
1.6 Research Objectives 7
1.7 Chapter Outline 7

CHAPTER 2 LITERATURE REVIEW
2.1 Basic of Neural Network 9
2.2 Biological Neurons 9
2.3 Artificial Neurons and Neural Network Architecture 10
2.4 Types of Neural Network 12
2.4.1 Feedforward Neural Network 12
2.4.2 Feedback Neural Network 17
2.4.3 Kohonen’s Self-Organizing Network 20
2.5 Learning Algorithms of Neural Network 21
2.6 Activation Function 23
2.6.1 Identity Function 23
2.6.2 Heaviside Function (Step Function) 24
2.6.3 Ramp Function 25
2.6.4 Sigmoid Function 26
2.6.5 Gaussian Function 27
2.7 Backpropagation Algorithm 28
2.8 Resilient Backpropagation (RPROP) 30
2.9 User Authentication 31

CHAPTER 3 METHODOLOGY
3.1 Overview of Methodology 33
3.2 Password-based User Authentication Development 33
3.3 Software Tools 35
3.4 Data Collection (User Registration Phase) 35
3.5 Data Conversion (Normalization) 36
3.6 Neural Network Implementation 37
3.7 Employing Local Adaptive Techniques – Rprop 39
3.8 Setup for Training a Neural Network 41
3.9 Pseudocode for RPROP Algorithm 42
3.10 Evaluate the Performance of Different Number of Hidden Neurons 43
3.11 Evaluate the Performance of Different Transfer Functions for Hidden Layer and Output Layer 44
3.12 Evaluate the Performance of Different Number of Training Sets 45
3.13 User Authentication 47

CHAPTER 4 RESULTS, ANALYSIS & DISCUSSIONS
4.1 Data and Process 49
4.2 Training Parameters 50
4.3 Compare the Performance Using Different Transfer Functions 51
4.3.1 MSE 51
4.3.2 Training Time 53
4.3.3 Number of Epochs 54
4.3.4 Analysis and Discussion 56
4.4 Compare the Performance of Different Hidden Neurons 57
4.4.1 MSE 57
4.4.2 Training Time 60
4.4.3 Number of Epochs 63
4.4.4 Analysis and Discussion 66
4.5 Compare the Performance of Different Number of Training Sets 67
4.6 The Overall Software for Password-based User Authentication 71
4.6.1 User Registration Phase 71
4.6.2 User Sign In Phase 72
4.6.3 User Authentication Phase 73
4.7 Discussions 74

CHAPTER 5 RECOMMENDATIONS & CONCLUSIONS
5.1 Conclusions 77
5.2 Recommendations 79

REFERENCES 80
APPENDIX A 83
APPENDIX B 85
APPENDIX C 87


LIST OF TABLES

4.1 MSE Results with Different Transfer Function 52
4.2 Training Time with Different Number of Hidden Neurons 53
4.3 No. of Epochs with Different Transfer Function 55
4.4 MSE with Different Number of Hidden Neurons (Combination B) 58
4.5 MSE with Different Number of Hidden Neurons (Combination C) 59
4.6 Training Time with Different No. of Hidden Neurons (Combination B) 61
4.7 Training Time with Different No. of Hidden Neurons (Combination C) 62
4.8 No. of Epochs with Different Number of Hidden Neurons (Combination B) 64
4.9 No. of Epochs with Different Number of Hidden Neurons (Combination C) 65
4.10 MSE Results with Different Number of Training Sets 67
4.11 Training Time with Different Number of Training Sets 69
4.12 Number of Epochs with Different Number of Training Sets 70


LIST OF FIGURES

FIGURE PAGE

1.1 The Neuron Structure 1

1.2 The architecture of BPN 6

2.1 Model of artificial neuron 11

2.2 The architecture of Feedforward Neural Network 13

2.3 Single-Layer Perceptron Structure 14

2.4 Multilayer Perceptron structure 15

2.5 Radial Basis Function Structure 16

2.6 Recurrent Neural Network 18

2.7 The Jordan Network 19

2.8 The Elman Network 20

2.9 Kohonen’s Self Organizing Map 21

2.10 Identity Function 24

2.11 Heaviside Function 25

2.12 Ramp Function 26

2.13 Sigmoid Function 27

2.14 Gaussian Function 28

3.1 Flowchart of Password-based User Authentication Development 34
3.2 Flowchart of choosing the number of hidden neurons 43
3.3 Flowchart of choosing the transfer functions 44
3.4 Flowchart of evaluating the number of training sets 46
3.5 The processes of login and user authentication phases 48
4.1 MSE Results with Different Transfer Function 52
4.2 Training Time with Different Number of Hidden Neurons 54
4.3 No. of Epochs with Different Transfer Function 55
4.4 Performance Graphs of Combination A for Different Hidden Neurons 83
4.5 Regression Graphs of Combination A for Different Hidden Neurons 84
4.6 Performance Graphs of Combination B for Different Hidden Neurons 85
4.7 Regression Graphs of Combination B for Different Hidden Neurons 86
4.8 Performance Graphs of Combination C for Different Hidden Neurons 87
4.9 Regression Graphs of Combination C for Different Hidden Neurons 88
4.10 MSE with Different Number of Hidden Neurons (Combination B) 58
4.11 MSE with Different Number of Hidden Neurons (Combination C) 59
4.12 Training Time with Different No. of Hidden Neurons (Combination B) 61
4.13 Training Time with Different No. of Hidden Neurons (Combination C) 62
4.14 No. of Epochs with Different Number of Hidden Neurons (Combination B) 64
4.15 No. of Epochs with Different Number of Hidden Neurons (Combination C) 65
4.16 MSE Results with Different Number of Training Sets 68
4.17 Training Time with Different Number of Training Sets 69
4.18 Number of Epochs with Different Number of Training Sets 70

4.19 User Registration Interface 71

4.20 User Sign In Interface 72

4.21 User Login Successful Interface 73

4.22 User Login Failed Interface 74


LIST OF ABBREVIATIONS

ANN - Artificial Neural Network

BPN - Backpropagation Network

CCTV - Closed Circuit Television

delt_dec - Decrement to weight change

delt_inc - Increment to weight change

GUI - Graphical User Interface

MATLAB - Matrix Laboratory

max_fail - Maximum validation failures

min_grad - Minimum performance gradient

MLP - Multilayer Perceptron

RNN - Recurrent Neural Network

RPROP - Resilient Backpropagation

User ID - User identification


CHAPTER 1

INTRODUCTION

1.1 Artificial Neural Network Overview

An artificial neural network (ANN) is a mathematical or computational model based on biological neural networks, such as the biological neuron shown in Figure 1.1, and is made up of interconnected artificial neurons.

Figure 1.1: The Neuron Structure [1]

The conceptual construct of a neural network comes from the human brain, where there are billions of interconnected neurons. Even without modelling a real biological system, an ANN can be used to solve artificial intelligence problems. An ANN consists of a network built up from simple artificial neurons that can exhibit complex global behaviour, determined by the connections between the processing elements and the element parameters [1, 2].

1.2 The Basics of Artificial Neural Networks

There are three major learning paradigms, each corresponding to a particular abstract learning task: supervised learning, unsupervised learning and reinforcement learning. The Backpropagation Network (BPN) used in the password-based user authentication follows the supervised learning method, where each training pattern includes a known input and the expected output. The training determines the network parameters and weight values, and the weight values are adjusted according to the training patterns [1, 2].
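As an illustration, the sketch below shows how such a supervised training setup might look in MATLAB's Neural Network Toolbox, assuming the pre-2010 newff/train interface; the toy input/target patterns and the layer sizes are placeholders, not the thesis data.

```matlab
% Minimal sketch (assumed interface, not the thesis code): a two-layer
% feedforward network trained by supervised learning, where each input
% pattern P(:,k) has a known target T(:,k).
P = [0 0 1 1; 0 1 0 1];                 % toy input patterns (one column each)
T = [0 1 1 0];                          % expected outputs for those patterns

net = newff(minmax(P), [4 1], {'tansig', 'purelin'}, 'trainrp');
net.trainParam.epochs = 1000;           % maximum number of training epochs
net.trainParam.goal   = 1e-3;           % stop once the MSE reaches this goal

net = train(net, P, T);                 % weights are adjusted from the patterns
Y   = sim(net, P);                      % network response after training
```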

1.3 History of ANNs

The history of neural networks is believed to start in the late 1800s with scientific attempts to study the workings of the human brain. In 1890, William James published the first work about brain activity patterns. Below is a brief history of neural network development:

1943 - Warren McCulloch and Walter Pitts modelled a simple neural network using electrical circuits in order to describe how neurons in the brain might work.


1949 - Donald Hebb pointed out the fact that neural pathways are strengthened each time they are used, a concept fundamentally essential to the ways in which humans learn (Hebbian Learning)

1959 - Bernard Widrow and Marcian Hoff developed models called "ADALINE" and "MADALINE".

1962 - Widrow & Hoff developed a learning procedure that examines the value before the weight adjusts it (i.e. 0 or 1) according to the rule: Weight Change = (Pre-Weight line value) * (Error / (Number of Inputs))

1969 - Minsky and Papert presented a discouraging analysis of the limitations of the perceptron.

1982 - The ability for bi-directional flow of inputs between neurons/nodes was introduced with Hopfield's network, and specialization of node layers for specific purposes was introduced through the first hybrid network.

1986 - Rumelhart, Hinton and Williams developed new training algorithms for multilayer perceptrons. The new method is called the generalized delta rule for learning by backpropagation.

Today - Significant progress has been made in the field of neural networks, enough to attract a great deal of attention and to fund further research. Advancement beyond current commercial applications appears to be possible, and research is advancing the field on many fronts. Neurally based chips are emerging and applications to complex problems are developing. Clearly, today is a period of transition for neural network technology [3].

1.4 Applications of ANNs

Artificial neural networks are applied in many real-life applications such as finance, consumer products, process control, security and medical diagnosis.

Below is a brief summary of neural network applications:

i. Investment analysis
ii. Signature analysis
iii. Process control
iv. Monitoring
v. Medical diagnosis [4].

1.5 Problem Statement

Recently, many crimes have been happening, especially in residential areas. This shows that the security systems available in the market are not powerful enough. For example, the security system can be easily hacked. In addition, the security system and the door lock are separate, so intruders can still break in without knowing the password for the security system. Therefore, a more powerful security system is required for home safety.

There are many methods proposed in the market for identifying the login user, such as fingerprint, voice recognition, face detection and CCTV. These systems are indeed very powerful and secure; however, they are not widely used because of the price of the product. Password-based user authentication is inexpensive and affordable. Currently, most password-based user authentication systems still use a table to keep the usernames and passwords of the authorized users. However, this password table has a potential weakness: the passwords may be read or altered by an intruder.

The password-based user authentication using a neural network introduced here is harder to hack. The neural network is used to train (generate and memorize) the identification parameters. One of the most well-known types of neural network is the Backpropagation Network (BPN). The architecture of the BPN consists of three basic layers, namely the input layer, the hidden layer and the output layer, as shown in Figure 1.2. Because the BPN requires hundreds or even thousands of epochs to finish even a simple training run, and therefore a long time to train the nodes, the Resilient Backpropagation (Rprop) technique is used in this project to accelerate the training [5 - 7].


Figure 1.2: The architecture of BPN [8]

By using the neural network system, it is safe enough for the user to combine the door lock with the security system, because it is hard for an intruder to hack the system and obtain the UserID and password. This is because there is no verification table for intruders to exploit. Hence, the user does not need a key to open the door, and no key can be lost or stolen. Furthermore, this system can be applied as the authorization step before entering the smart home control system. Therefore, even if the owner loses the hardware used for remote access to the smart home, a person who obtains the network weights will still find it difficult to access the smart home system, because it is hard to recover the owner's UserID and password from them.
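For illustration only, one plausible way to turn a UserID/password pair into the binary input and target vectors the network learns is sketched below; the 7-bit ASCII encoding, the sample strings and the fixed lengths are assumptions, since the actual encoding used in this project is described in Chapter 3.

```matlab
% Hypothetical encoding sketch: convert a UserID/password pair to binary
% vectors so that the network itself, rather than a verification table,
% maps one to the other.
userID   = 'jee01';                     % sample UserID (assumed)
password = 'secret';                    % sample password (assumed)

% Each ASCII character becomes 7 bits; the bits are concatenated in order.
bitsID   = reshape(dec2bin(double(userID),   7).' - '0', [], 1);
bitsPass = reshape(dec2bin(double(password), 7).' - '0', [], 1);

inputVec  = bitsID;                     % one input column per registered user
targetVec = bitsPass;                   % the output the network must reproduce
```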


1.6 Research Objectives

The objectives of this project are listed as follows:

• To implement a user authentication system with low computational requirements

• To implement a user authentication security system for a smart home using a neural network

• To develop a security system that is harder for intruders to hack

• To train the sets of usernames and passwords in a faster manner using the Rprop technique

• To compare the performance of neural networks using different numbers of hidden neurons, training sets and combinations of transfer functions (a comparison sketch follows this list)
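A minimal sketch of how the last two objectives could be exercised is given below, assuming the same newff/trainrp interface as in Section 1.2 and assuming P and T already hold the binary-encoded UserID and password matrices; the Rprop parameter values shown are the toolbox defaults, not results from this thesis.

```matlab
% Sketch (assumptions noted above): compare hidden-layer sizes while training
% with Rprop, recording the MSE and training time of each configuration.
hiddenSizes = [50 100 150 200 250];
results = zeros(numel(hiddenSizes), 2);        % columns: [MSE, seconds]

for k = 1:numel(hiddenSizes)
    net = newff(minmax(P), [hiddenSizes(k) size(T,1)], ...
                {'tansig', 'purelin'}, 'trainrp');
    net.trainParam.delt_inc = 1.2;             % increment to weight change
    net.trainParam.delt_dec = 0.5;             % decrement to weight change
    net.trainParam.max_fail = 5;               % maximum validation failures
    net.trainParam.min_grad = 1e-6;            % minimum performance gradient

    tic;
    net = train(net, P, T);                    % train on the encoded data
    results(k, :) = [mse(sim(net, P) - T), toc];
end
```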

1.7 Chapter Outline

In Chapter 1, the artificial neural network concept and its applications are briefly introduced and the problem statement is discussed. The main idea behind developing this system is also discussed, and the objectives of this project are stated in this chapter.

Chapter 2, the literature review, describes the artificial neural network in more detail, covering the architecture of a neural network, the types of ANN, the types of activation functions, and the BPN.

Chapter 3 presents the methodology of the project, which includes setting up the network, training the network, the employment of Rprop, and the implementation of the user authentication security system for a smart home using a neural network.

In Chapter 4, the simulation results are analyzed and discussed in detail. The development of the user authentication software is also shown in Chapter 4. Furthermore, the problems faced and the limitations of the project are discussed in this chapter.

Chapter 5 concludes the project and gives recommendations for improving it in the future.
