International Journal of Advanced Computer Engineering and Communication Technology (IJACECT)
Intrusion Detection System using Cascade Forward Neural Network with Genetic Algorithm Based Feature Selection
D. P. Gaikwad¹, Ravindra R. Thool²
¹AISSMS College of Engineering, Pune; ²SGGS Institute of Engineering and Technology, Nanded
Email: ¹[email protected], ²[email protected]
Abstract— Due to the rapid expansion of computer networks, security has become a vital issue for modern computer networks, and network intrusion detection systems play a central role in protecting them. In spite of notable progress, there are still many opportunities to improve existing intrusion detection systems in terms of false positives and classification accuracy. The objective of this work is to reduce the false positive rate and increase the classification accuracy of an intrusion detection system. In this paper, Feed Forward Backpropagation and Cascade Forward neural networks are used to implement the intrusion detection system. The NSL-KDD dataset, with separate training and testing subsets, is used to train and test these networks. The Backpropagation algorithm based on the Levenberg–Marquardt mechanism is used as the training function, and the networks were trained for 100 epochs. The results of the Cascade Forward and Feed Forward Backpropagation neural networks were evaluated in terms of classification accuracy, false positive rate, false negative rate and precision. The Cascade Forward neural network exhibits the best results (classification accuracy: 98.229737%, false positive rate: 0.0177, precision: 0.9795) on cross validation, and good results (classification accuracy: 81.205642%, false positive rate: 0.2931, precision: 0.6978) on the test dataset with 15 neurons in the hidden layer. The Feed Forward neural network exhibits 98.213860% classification accuracy on cross validation and 76.530341% on the test dataset with 25 neurons in the hidden layer. The Cascade Forward Backpropagation model exhibits the best results, followed by the Feed Forward model.
Index Terms—Intrusion detection, Cascade Architecture, Model, Genetic Algorithm, Hidden Layer, False Positives.
I. INTRODUCTION
The internet has become an integral part of our lives, and its usage is increasing rapidly across institutes, universities, businesses and government operations. However, unauthorized users can gain access to computers over the internet to steal information and damage systems. Electronic attacks on the networks and information systems of financial organizations, the military and the energy sector are increasing, and large organizational web sites are attacked by intruders. Intrusions take many forms, such as viruses, spyware, worms, malicious logins and spamware [1]. Information and network security has therefore become a very important aspect of protecting an organization's computer systems, and every organization needs security applications that effectively protect its networks from malicious attacks and misuse. An intrusion detection system is software that inspects each incoming packet from the internet for authorization. It detects access by unauthorized users, violations of security policy and illegal users [2].
An active intrusion detection system takes action when the system is intruded upon by raising an alarm to the administrator [3]. There are two types of intrusion detection techniques: misuse detection and anomaly detection. Misuse detection is knowledge or pattern based, whereas anomaly detection is behavior based. Anomaly detection can detect novel attacks, but with a high false-positive rate; misuse detection is reliable for detecting known attacks with few false positives [4], but cannot detect novel attacks. Existing intrusion detection systems have high detection rates but suffer from high false-alarm rates, and the computational complexity of current approaches is very high, which makes their practical application difficult. Machine learning algorithms are now extensively used to implement intrusion detection systems, since machine learning has the advantage of discovering useful knowledge from datasets.
Artificial Neural Networks, Genetic Algorithms, fuzzy logic with neural networks, Bayesian trees, Bayesian Belief Networks, Hidden Markov Models, association rules and K-means are some of the machine learning techniques widely used to implement intrusion detection applications. Every machine learning algorithm has its own pros and cons, and not all can easily be applied to anomaly detection for detecting novel intrusions. There are two types of machine learning algorithms: supervised and unsupervised. Supervised machine learning requires a dataset in which each tuple t is paired with its class label [5]; unsupervised machine learning algorithms require only the tuples, without class labels. Machine learning algorithms discover relationships and regularities between tuples and classes in the training phase [6]. Different machine learning algorithms can represent knowledge in the form of decision trees, rules, linear and non-linear functions, and probabilistic models. Linear and non-linear algorithms yield highly precise decisions but require more time to build a model. In this paper, a Cascade Forward Backpropagation neural network is used to implement the intrusion detection system, and the NSL-KDD dataset is used to train and evaluate the proposed network.
Paper Overview: The rest of this paper is organized as follows. Section II surveys some previous neural-network-based intrusion detection systems. In Section III, the cascade artificial neural network is introduced. Section IV describes the proposed intrusion detection approach. Section V discusses the experimental results. Finally, Section VI concludes the experimental work and summarizes its contributions.
II. RELATED WORKS
Neural networks are widely used to implement network-anomaly-based intrusion detection systems. We describe some neural network methods and systems for network anomaly detection below.
Battista Biggio et al. [7] have proposed a framework for the experimental security evaluation of classifiers for intrusion detection, addressing a one-class v-SVM classifier with an RBF kernel. They suggest including simulated attack samples in the training data to improve the security of discriminative SVM classifiers. Prasanta Gogoi et al. [8] have presented a clustering-based classification method for network anomaly detection. The training algorithm is a combination of supervised classification and unsupervised incremental clustering: it clusters a labeled training dataset into different clusters to build profiles of normal and anomalous packets, and supervised classification is then used to test objects against the cluster profiles and label them.
Weiming Hu et al. [9] have proposed two online Adaboost-based intrusion detection algorithms. The first uses decision stumps as weak classifiers; the second uses a Gaussian mixture model as the base classifier. They found that Adaboost with the Gaussian mixture model exhibited a lower false positive rate than Adaboost with decision stumps. Monowar H. Bhuyan et al. [10] have presented an adaptive outlier-based approach for coordinated scan detection, an outlier-score-based adaptive network anomaly detection method. Using this method they achieved higher detection accuracy and a low false positive rate on real-life datasets.
Susan C. Lee and David V. Heinbuch [11] have proposed an intrusion detection system composed of a hierarchy of neural networks. Three layers of neural networks implement an intrusion detection system that functions as a true anomaly detector, and the neural network detectors are able to recognize attacks that were not specifically presented during training. A hierarchy of small neural network detectors gives better results than a single large detector. M. Amini et al. [12] have proposed an artificial-neural-network-based intrusion detection system called RT-UNNID, capable of intelligent real-time intrusion detection. Its first module captures real packets and sends them to an unsupervised classifier that uses Adaptive Resonance Theory and Self-Organizing Map neural networks. S. C. Lee and D. V. Heinbuch [13] introduced a hierarchy of neural networks to detect network anomalies; the networks are trained on data covering the entire normal space and are able to recognize unknown attacks effectively. K. Labib et al. [14] introduced NSOM, a Self-Organizing-Map-based intrusion detection system. It uses a structured Self-Organizing Map to collect and classify real-time network data: routine (normal) traffic is clustered into one or more clusters, abnormal traffic is clustered into additional clusters, and these clusters are used to classify regular and irregular (intrusive) traffic. Sun et al. [15] have proposed a wavelet neural network for intrusion detection. The system reduces the number of wavelet basis functions to optimize the wavelet neural network, and the gradient descent method is used to train it.
III. INTRODUCTION TO CASCADE FORWARD BACKPROPAGATION NEURAL NETWORK
An Artificial Neural Network is a simple mathematical model of the human brain. Real neural networks employ enormous interconnections of neurons to achieve good performance. Neural networks acquire knowledge of the environment through a process of learning which systematically adjusts the synaptic weights of the network to attain a desired design objective [16]. An artificial neural network consists of three or more neuron layers: one input layer, one or more hidden layers and one output layer. In some cases, only one hidden layer is used to restrict the model building time [17]. There are different types of Artificial Neural Network architecture; the architecture of a network describes the number of layers, the number of neurons in each layer, each layer's transfer function and how the layers connect to each other [18][19]. Two main architectures are Feed Forward Backpropagation and Cascade Forward Backpropagation. A Feed Forward Backpropagation network contains one input layer, one or several hidden layers and one output layer; it has one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons [20]. A Cascade Forward Backpropagation network is similar, and also uses the Backpropagation algorithm for updating weights, but its distinguishing characteristic is that the neurons of each layer connect to the neurons of all previous layers [21]. The architecture of cascade networks is dynamic: learning starts with only one neuron, and the learning algorithm automatically adds new neurons, creating a multilayer architecture during training. The number of hidden-layer neurons increases step by step while the training error decreases, so the learning algorithm grows a network of near-optimal complexity which can generalize well [22]. Two training algorithms, Levenberg-Marquardt and Bayesian regularization backpropagation, can be used for updating network weights. By controlling the connectivity we can restrict the fan-in and keep the number of hidden neurons small, which can lead to faster learning. Figure 1 shows the architecture of a Cascade Forward Backpropagation neural network with one input layer of three neurons, one hidden layer of two neurons and one output layer.
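The connectivity described above can be made concrete with a small numerical sketch. The paper builds its models in MATLAB; the following is only an illustrative NumPy forward pass, with all weight names (W_ih, W_io, W_ho) and dimensions invented for the example. It shows the extra input-to-output connections that distinguish a cascade-forward network from a plain feed-forward one.

```python
import numpy as np

def tansig(x):
    # MATLAB's 'tansig' transfer function is the hyperbolic tangent sigmoid.
    return np.tanh(x)

def cascade_forward_pass(x, W_ih, b_h, W_io, W_ho, b_o):
    """One forward pass of a cascade-forward network with a single hidden
    layer: the output layer sees BOTH the raw inputs (via W_io) and the
    hidden activations (via W_ho), unlike a plain feed-forward network
    where only the W_ho path would exist."""
    h = tansig(W_ih @ x + b_h)       # hidden layer
    y = W_io @ x + W_ho @ h + b_o    # output: direct path + cascaded path
    return y

# Toy dimensions: 3 inputs, 2 hidden neurons, 1 output (as in Figure 1).
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W_ih = rng.normal(size=(2, 3)); b_h = rng.normal(size=2)
W_io = rng.normal(size=(1, 3))   # the extra input-to-output connections
W_ho = rng.normal(size=(1, 2)); b_o = rng.normal(size=1)
print(cascade_forward_pass(x, W_ih, b_h, W_io, W_ho, b_o))
```

Dropping the `W_io @ x` term recovers an ordinary feed-forward pass, which is exactly the structural difference between the two architectures compared in this paper.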
Figure 1. Architecture of Cascade Forward neural network
IV. PROPOSED INTRUSION DETECTION SYSTEM
The proposed intrusion detection system is divided into two phases. The first phase is dedicated to preprocessing of the NSL-KDD dataset. Feature selection is an essential data pre-processing step to reduce the dimension of the dataset, and reducing the dimension leads to a more understandable neural network model.
The NSL-KDD dataset is used for the experimental work. Not all 41 features are relevant for a network intrusion detection system, so the fourteen vital features are selected from the dataset using the Genetic Search Algorithm with the following parameters:
Probability of Crossover : 0.6
Number of Generation : 20
Probability of Mutation : 0.033
Population Size : 20
Frequency of Report : 20
Seed Value : 01
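The genetic search over feature subsets can be sketched as follows. This is a hypothetical, simplified Python illustration, not the tool used for the paper's actual selection: the fitness function below is a made-up placeholder (a real run would score each candidate subset with a filter or classifier evaluator), while the crossover probability, mutation probability, population size, generation count and seed match the parameters listed above.

```python
import random

N_FEATURES = 41          # NSL-KDD attribute count
POP_SIZE = 20
GENERATIONS = 20
P_CROSSOVER = 0.6
P_MUTATION = 0.033
random.seed(1)           # "Seed Value: 01" above

def fitness(mask):
    # PLACEHOLDER merit function for illustration only: it rewards a
    # hypothetical "informative" set of indices and penalizes subset size.
    informative = set(range(0, N_FEATURES, 3))
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    return hits - 0.1 * sum(mask)    # prefer small, informative subsets

def crossover(a, b):
    # Single-point crossover of two bit-mask chromosomes.
    point = random.randrange(1, N_FEATURES)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(mask):
    # Flip each bit independently with probability P_MUTATION.
    return [bit ^ (random.random() < P_MUTATION) for bit in mask]

def select(pop):
    # Tournament selection of size 2.
    x, y = random.sample(pop, 2)
    return x if fitness(x) >= fitness(y) else y

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    nxt = []
    while len(nxt) < POP_SIZE:
        p1, p2 = select(pop), select(pop)
        if random.random() < P_CROSSOVER:
            c1, c2 = crossover(p1, p2)
        else:
            c1, c2 = p1[:], p2[:]
        nxt.extend([mutate(c1), mutate(c2)])
    pop = nxt[:POP_SIZE]

best = max(pop, key=fitness)
selected = [i for i, bit in enumerate(best) if bit]
print(len(selected), "features selected")
```

Each chromosome is a 41-bit mask over the NSL-KDD attributes; the surviving mask determines which columns are kept for training.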
In Table 1, all selected features are listed. These selected features are finally normalized using the min-max method of normalization to reduce the model building time. In the second phase, a Cascade Forward Backpropagation neural network is used to implement the intrusion detection system. MATLAB is used to implement and evaluate the performance of the Cascade Forward Backpropagation neural network for intrusion detection. The Cascade Forward Backpropagation neural network is a very efficient, flexible, and fast algorithm for supervised learning. In general, it builds the network by adding hidden units one at a time until the desired input-output mapping is attained, connecting all the earlier neurons to each new neuron. Accordingly, each new neuron in effect adds a new layer, and the number of inputs to the hidden and output neurons keeps increasing as more neurons are added. The cascade neural network's architecture is dynamic: it automatically adds and trains new neurons, creating a multilayer network. The number of hidden layers and the number of hidden neurons increase the complexity of the network.
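The min-max normalization step mentioned above rescales every feature column into a common range so that large-magnitude attributes (such as byte counts) do not dominate training. A minimal Python sketch, with illustrative values only:

```python
import numpy as np

def min_max_normalize(X, lo=0.0, hi=1.0):
    """Rescale each column of X to [lo, hi]; constant columns map to lo."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_rng = X.max(axis=0) - col_min
    col_rng[col_rng == 0] = 1.0      # avoid division by zero
    return lo + (hi - lo) * (X - col_min) / col_rng

# Two hypothetical columns, e.g. src_bytes and dst_bytes.
X = np.array([[491.0, 0.0],
              [146.0, 12983.0],
              [0.0, 3380.0]])
print(min_max_normalize(X))
```

Each column's minimum maps to 0 and its maximum to 1, which is the property the paper relies on to shorten model building time.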
Table 1. List of Attributes Selected by Genetic Algorithm

Sr. No.  Name of the Attribute
1        src_bytes
2        dst_bytes
3        wrong_fragment
4        hot
5        logged_in
6        num_file_creations
7        count
8        srv_serror_rate
9        same_srv_rate
10       diff_srv_rate
11       dst_host_count
12       dst_host_srv_diff_host_rate
13       dst_host_serror_rate
14       dst_host_srv_serror_rate
In this paper, a Cascade Forward neural network with restricted fan-in and small depth is used by controlling the connectivity. To reduce the connectivity, only one hidden layer is used to build the network; keeping the hidden layer restricted helped to reduce the model building time. In the experiments, the number of neurons in the hidden layer is increased from one to 35. Our results reveal a tradeoff between the number of neurons in the hidden layer, the learning time and the classification accuracy. In practice, a Cascade Forward Backpropagation neural network can over-fit because of noise in the training dataset; irrelevant features in the training dataset can also over-fit the network and increase the time to build it. To overcome these problems, data preprocessing techniques have been used to select the informative features from the dataset: the Genetic Algorithm is used to select the relevant features, which helped to reduce training time and avoid over-fitting. For comparison we used the Feed Forward Backpropagation neural network technique to train a network with one hidden layer and one output neuron, with the number of hidden neurons varied from 1 to 35. All neurons implemented a standard sigmoid transfer function. For training both the Cascade Forward and Feed Forward Backpropagation networks, we used the fast Levenberg-Marquardt algorithm; its learning parameters are listed in Table 2. To prevent the networks from over-fitting, the early stopping rule is used, for which the training data have been divided into training, validation and testing subsets in the proportions 80%, 10% and 10% respectively. Both networks are evaluated using the testing subset drawn from the training dataset, which we call the cross validation method. A separate test dataset is also used to measure the performance of the networks in terms of classification accuracy, false positive rate, false negative rate and precision. The experimental results are discussed in detail in Section V.
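The 80/10/10 split and the early stopping rule described above can be sketched in Python. This is an illustration rather than the MATLAB training code: the validation-error curve is synthetic, and `max_fail = 6` is an assumption (it is MATLAB's default validation-failure limit, not a value stated in the paper).

```python
import random

def split_indices(n, train=0.8, val=0.1, seed=1):
    """Shuffle indices and split them 80/10/10 into train/validation/test,
    mirroring the proportions used in the paper."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_tr = int(n * train)
    n_va = int(n * val)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def train_with_early_stopping(epochs, step_fn, max_fail=6):
    """Generic early-stopping loop: stop once the validation error has not
    improved for `max_fail` consecutive epochs (assumed MATLAB default)."""
    best, fails, best_epoch = float("inf"), 0, 0
    for epoch in range(1, epochs + 1):
        val_err = step_fn(epoch)     # one training epoch, returns val error
        if val_err < best:
            best, fails, best_epoch = val_err, 0, epoch
        else:
            fails += 1
            if fails >= max_fail:
                break
    return best_epoch, best

# Hypothetical validation-error curve: improves, then overfits.
curve = lambda e: abs(e - 20) / 20 + 0.05
print(train_with_early_stopping(100, curve))
```

With this synthetic curve the loop halts a few epochs after the validation minimum, which is exactly the behavior that keeps the 100-epoch budget from over-fitting the network.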
All experiments in this paper were performed on a Lenovo laptop with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz, 8 GB of installed RAM and a 32-bit operating system.
Table 2. Learning Parameters

Parameter of network        Value     Comment
net.layers{1}.transferFcn   'tansig'  Activation function at input layer
net.layers{2}.transferFcn   'logsig'  Activation function at hidden layer
net.trainParam.epochs       100       Maximum number of epochs to train
net.trainParam.show         25        Epochs between displays
net.trainParam.goal         0         Performance goal
net.trainParam.mem_reduc    5         Reduce memory requirement
V. EXPERIMENTAL RESULTS AND DISCUSSIONS
The artificial neural networks have been developed using three layers. The fourteen attributes listed in Table 1 are given to the input layer. In the hidden layer, the number of neurons is varied from one to thirty-five to find the best-fitting hidden layer structure. The experiments are performed using the training, cross validation and test subsets, and also using a separate test file. For convenience, we call the first method of experimentation the cross validation method and the second the test dataset method.
Table 3 lists all experimental results in terms of false positive, false negative and true positive rates and precision, using the cross validation method of evaluation.
Table 3. Performance Evaluation of Cascade NN Classifier using Cross Validation

Neurons in    False Positive  False Negative  True Positive  True Negative  Precision
Hidden Layer  Rate            Rate            Rate           Rate
01            0.0440          0.0542          0.9458         0.9560         0.9498
02            0.0460          0.0223          0.9777         0.9540         0.9452
03            0.0451          0.0320          0.9680         0.9549         0.9474
04            0.0282          0.0211          0.9789         0.9718         0.9672
05            0.0364          0.0199          0.9801         0.9636         0.9581
10            0.0367          0.0189          0.9811         0.9633         0.9571
15            0.0177          0.0177          0.9823         0.9823         0.9795
20            0.0227          0.0228          0.9772         0.9773         0.9739
25            0.0222          0.0185          0.9815         0.9778         0.9741
30            0.0231          0.0191          0.9809         0.9769         0.9728
35            0.0249          0.0198          0.9802         0.9751         0.9715

According to Table 3, the Cascade Forward Backpropagation algorithm gradually reduces the false positive rate as the number of neurons increases from one to fifteen; beyond 15 neurons, the false positive rate gradually increases again. The hidden layer with fifteen neurons therefore gives the best result, and the result at 15 neurons is striking: the false positive and false negative rates are equal, and the trained network also gives equal true positive and true negative rates with a very good precision. Table 4 lists the corresponding results on the test dataset. Here too the false positive rate broadly decreases as the number of neurons increases from one to fifteen and increases again beyond 15, so the hidden layer with fifteen neurons again gives the best result.
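The rates reported in Tables 3 and 4 follow directly from the four confusion-matrix counts. A small Python sketch, using hypothetical counts chosen only so that the 15-neuron cross validation rates (FPR = FNR = 0.0177) are reproduced; they are not the paper's raw counts, so precision, which depends on the true class balance, will not match the table exactly.

```python
def detection_metrics(tp, fp, tn, fn):
    """Rates as used in Tables 3 and 4: FPR = FP/(FP+TN), FNR = FN/(FN+TP),
    TPR = TP/(TP+FN), TNR = TN/(TN+FP), precision = TP/(TP+FP), plus the
    overall classification accuracy."""
    return {
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
        "tpr": tp / (tp + fn),
        "tnr": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical, balanced counts for illustration only.
m = detection_metrics(tp=9823, fp=177, tn=9823, fn=177)
print(m)
```

With these balanced counts FPR = FNR = 0.0177 and TPR = TNR = 0.9823, matching the symmetry the text highlights for the 15-neuron network.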
Table 4. Performance Evaluation of Cascade Classifier on Test Dataset

Neurons in    False Positive  False Negative  True Positive  True Negative  Precision
Hidden Layer  Rate            Rate            Rate           Rate
01            0.3448          0.0971          0.9029         0.6552         0.6379
02            0.3554          0.0690          0.9310         0.6446         0.6077
03            0.3467          0.0340          0.9660         0.6533         0.6098
04            0.2984          0.0334          0.9666         0.7016         0.6883
05            0.3177          0.0340          0.9660         0.6823         0.6585
10            0.3320          0.0698          0.9302         0.6680         0.6481
15            0.2931          0.0385          0.9615         0.7069         0.6978
20            0.3268          0.0388          0.9612         0.6732         0.6453
25            0.3052          0.0320          0.9680         0.6948         0.6775
30            0.2947          0.0330          0.9670         0.7053         0.6938
35            0.3048          0.0332          0.9668         0.6952         0.6785
According to Figure 2, the precision on cross validation is higher than on the test dataset across the different numbers of neurons in the hidden layer. Figure 3 shows the false positive rates for both evaluation methods: the false positive rate of the cross validation method is lower than that of the test dataset for all hidden layer sizes.
Figure 2. Precision rate of Cascade BP Neural network
Figure 3. False Positive rates of Cascade NN with different number of nodes in Hidden Layers

Table 5 lists the classification accuracy of the proposed neural network using both methods of evaluation. According to Table 5 and Figure 4, the Cascade Forward neural network exhibits excellent classification accuracy with fifteen neurons in the hidden layer.
Table 5. Classification Accuracy of Cascade Forward Network

Sr. No.  Nodes in      Accuracy on       Accuracy on
         Hidden Layer  Cross Validation  Test Dataset
1        01            95.117885%        75.483499%
2        02            96.459474%        75.102023%
3        03            96.086370%        76.565827%
4        04            97.507343%        80.904010%
5        05            97.118362%        79.240596%
6        10            97.142177%        77.200142%
7        15            98.229737%        81.205642%
8        20            97.729618%        78.326828%
9        25            97.951893%        80.367282%
10       30            97.872509%        81.218950%
Figure 4. Classification accuracy of proposed Cascade Forward network
Table 6 and Figure 5 present the classification accuracy of the Feed Forward Backpropagation neural network on cross validation and on the test dataset. It can be observed that the Cascade Forward neural network outperforms the Feed Forward neural network on the test dataset.
Table 6. Classification Accuracy of Feed Forward Network

Sr. No.  Neurons in    Accuracy on       Accuracy on
         Hidden Layer  Cross Validation  Test Dataset
1        01            96.356275%        73.491838%
2        02            96.403906%        73.966466%
3        03            98.023339%        79.644251%
4        04            97.007224%        75.621008%
5        05            97.451774%        76.273066%
6        10            98.134476%        77.985273%
7        15            97.991585%        81.214514%
8        20            97.999524%        80.287438%
9        25            98.213860%        76.530341%
10       30            97.880448%        78.655075%
Figure 5. Classification accuracy of Feed Forward Network
According to Figure 6, the Cascade Forward network achieves its best cross validation performance with fifteen neurons in the hidden layer, outperforming the other hidden layer sizes.
Figure 7 shows the performance of proposed network with fifteen neurons in hidden layer on test dataset.
Figure 6. Performance Evaluation of Classifier on 15 Hidden nodes using Cross Validation
Figure 7. Performance Evaluation of Classifier on 15 Hidden nodes using Test Dataset
Finally, Table 7 presents the post-processing results of the proposed Cascade Forward network. The M-values and R-values are satisfactory.
Table 7. Performance Evaluation of Cascade Classifier (Post-processing)

Evaluation Criteria  M-Value  B-Value  R-Value  Model Building Time
Cross Validation     0.9430   0.0295   0.9712   8.42 minutes
Test dataset         0.6136   0.3377   0.6542   —
VI. CONCLUSIONS
Different neural-network-based intrusion detection systems were presented in Section II. In this paper, the intrusion detection system is implemented using a Cascade Forward Backpropagation neural network trained on the NSL-KDD dataset. The long training time of the neural network is mostly due to irrelevant features in the dataset and the huge number of training samples; to reduce the training time, the Genetic Search Algorithm is used to select the relevant features. The training dataset is divided into training, validation and testing subsets. The network is trained using the training subset and tested using the testing (10%) subset; we call this method cross validation. The trained network is evaluated both with this cross validation method and with a separate test file. The Backpropagation algorithm based on the Levenberg–Marquardt mechanism is used as the training function, and the networks were trained for 100 epochs. The results of the Cascade Forward and Feed Forward Backpropagation neural networks were evaluated in terms of classification accuracy, false positive rate, false negative rate and precision.
The Cascade Forward neural network exhibits the best results (classification accuracy: 98.229737%, false positive rate: 0.0177, precision: 0.9795) on cross validation. It also exhibits good results (classification accuracy: 81.205642%, false positive rate: 0.2931, precision: 0.6978) on the test dataset with 15 neurons in the hidden layer. The Feed Forward neural network exhibits 98.213860% classification accuracy on cross validation and 76.530341% on the test dataset with 25 neurons in the hidden layer. The Cascade Forward Backpropagation model thus exhibits the best results, followed by the Feed Forward model. It is found that the Cascade Forward Backpropagation neural network exhibits equal false positive and false negative rates, and equal true positive and true negative rates, with 15 neurons in the hidden layer on cross validation. On the test file, the performance of the Cascade Forward neural network is also better than that of the Feed Forward neural network. The average time to train the Cascade Forward Backpropagation neural network is 8.42 minutes.
REFERENCES
[1] Carol J. Fung, Jie Zhang and Raouf Boutaba, "Effective Acquaintance Management based on Bayesian Learning for Distributed Intrusion Detection Networks", IEEE Transactions on Network and Service Management, Vol. 9, No. 3, September 2012.
[2] Muamer N. Mohammad, Norrozila Sulaiman and Osama Abdulkarim Muhsin, "A Novel Intrusion Detection System by using Intelligent Data Mining in WEKA Environment", Procedia Computer Science 3 (2011) 1237–1242, 2011.
[3] Weiming Hu, Wei Hu and Steve Maybank, "AdaBoost-Based Algorithm for Network Intrusion Detection", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 38, No. 2, April 2008.
[4] Sung-Bae Cho, "Incorporating Soft Computing Techniques Into a Probabilistic Intrusion Detection System", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 32, No. 2, May 2002.
[5] E. Alpaydin, "Introduction to Machine Learning", MIT Press, 2004.
[6] T. Mitchell, "Machine Learning", McGraw Hill, 1997.
[7] Battista Biggio, Giorgio Fumera and Fabio Roli, "Security Evaluation of Pattern Classifiers under Attack", IEEE Transactions on Knowledge and Data Engineering, Vol. 26, No. 4, April 2014.
[8] Prasanta Gogoi, B. Borah and D. K. Bhattacharyya, "Network Anomaly Identification using Supervised Classifier", Informatica 37 (2013) 93–105.
[9] Weiming Hu, Jun Gao, Yanguo Wang, Ou Wu and Stephen Maybank, "Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection", IEEE Transactions on Cybernetics, Vol. 44, No. 1, January 2014.
[10] Monowar H. Bhuyan, Dhruba Kr. Bhattacharyya and Jugal K. Kalita, "AOCD: An Adaptive Outlier Based Coordinated Scan Detection Approach", International Journal of Network Security, Vol. 14, No. 6, pp. 339–351, November 2012.
[11] Susan C. Lee and David V. Heinbuch, "Training a Neural-Network Based Intrusion Detector to Recognize Novel Attacks", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 31, No. 4, July 2001.
[12] M. Amini, R. Jalili, and H. R. Shahriari, "RT-UNNID: A practical solution to real-time network-based intrusion detection using unsupervised neural networks", Computers & Security, Vol. 25, No. 6, pp. 459–468, 2006.
[13] S. C. Lee and D. V. Heinbuch, "Training a neural-network based intrusion detector to recognize novel attacks", IEEE Trans. Syst. Man Cybern. A, Vol. 31, No. 4, pp. 294–299, 2001.
[14] K. Labib and R. Vemuri, "NSOM: A Tool To Detect Denial Of Service Attacks Using Self-Organizing Maps", Department of Applied Science, University of California, Davis, CA, USA, Tech. Rep., 2002.
[15] J. Sun, H. Yang, J. Tian, and F. Wu, "Intrusion Detection Method Based on Wavelet Neural Network", Proc. 2nd International Workshop on Knowledge Discovery and Data Mining, IEEE CS, 2009, pp. 851–854.
[16] S. Haykin, "Neural Networks", Prentice Hall, New Jersey, 1999.
[17] D. Kriesel, "A Brief Introduction to Neural Networks (ZETA2-EN)", seminar of the University of Bonn, Germany, 2005.
[18] David Reby, Sovan Lek, Ioannis Dimopoulos, Jean Joachim, Jacques Lauga and Stephane Aulagnier, "Artificial neural networks as a classification method in the behavioural sciences", Behavioural Processes 40 (1997) 35–43, Elsevier.
[19] Howard Demuth and Mark Beale, "Neural Network Toolbox for Use with MATLAB", User's Guide, Ver. 4.
[20] Lynne E. Parker, "Notes on Multilayer Feedforward Neural Networks", CS494/594: Machine Learning, Fall 2007.
[21] Dheeraj S. Badde, Anil K. Gupta and Vinayak K. Patki, "Cascade and Feed Forward Back propagation Artificial Neural Network Models for Prediction of Compressive Strength of Ready Mix Concrete", IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), ISSN: 2278-1684, pp. 01–06.
[22] Vitaly Schetinin, "An Evolving Cascade Neural Network Technique for Cleaning Sleep Electroencephalograms", Computer Science Department, University of Exeter, Exeter, EX4 4QF, UK.