

2.5.2 Artificial neural network (ANN) modeling

A neural network is similar to the biological nervous system in that it is essentially a connectionist system in which various nodes, called neurons, are interconnected. An artificial neural network (ANN) can be defined as a model of reasoning similar to the human brain, in which a large amount of complex information can be stored and processed simultaneously by each neuron across the entire domain. A typical neuron receives one or more input signals and provides an output signal depending on the processing function of the neuron. This output is transferred to the connected neurons with varying intensities, the signal intensity being decided by the weights assigned. ANN has the advantages of (i) modeling data where the input-output relation is either unknown or nonlinear, (ii) adaptive learning during training, and (iii) real-time application aided by very fast computational speed. Conventional computational techniques follow an algorithmic approach, in which a set of instructions in a specified order is followed to solve a problem, and the relationship between each stage must be known. The neural network technique, on the other hand, is data-driven, and a solution can be obtained even if the exact relationship is unknown, provided a sufficient number of input-output data sets are available. Once the ANN architecture is fixed, the output for any combination of input variables can be predicted.
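As an illustration of the weighted-sum behaviour of a single neuron described above, the following is a minimal sketch in Python using NumPy; the input values, weights, bias, and sigmoid activation are hypothetical choices for demonstration, not values from this work.

```python
import numpy as np

def sigmoid(x):
    """Common S-shaped activation function."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a bias,
    passed through an activation function."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Hypothetical example: three input signals with assumed weights and bias.
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
b = 0.05
print(neuron_output(x, w, b))
```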

The most popular neural networks are feed-forward networks. A feed-forward network architecture consists of three distinct layers: the input layer, the hidden layer(s), and the output layer. Each layer consists of a number of neurons, and the output from the neurons of one layer is transferred as input to the neurons of the succeeding layer. The first layer, called the input layer, receives data from the outside world. The second layer, called the hidden layer, has no direct contact with the outside world and helps to extract higher-level features and to facilitate generalization of the outputs. The last layer, the output layer, sends information out to the user. For a given input vector, the network generates the output vector by a forward pass: the data are fed to the network at the input layer and propagated, through the weights and activation functions, to the output layer to produce the response. Once the output layer is reached, the mean squared error (MSE) between the network output vector and the known target vector is computed and back-propagated through the ANN to modify the weights of the entire network.
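A minimal sketch of such a forward pass and MSE computation is given below, assuming a single hidden layer, sigmoid activations, and randomly initialized weights; these are illustrative assumptions, not the architecture used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed layer sizes: 3 inputs, 4 hidden neurons, 1 output.
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    """Forward pass: propagate the input through each layer in turn."""
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # network output
    return y

x = np.array([0.2, 0.7, -0.1])   # hypothetical input vector
t = np.array([0.5])              # hypothetical target vector
y = forward(x)
mse = np.mean((y - t) ** 2)      # mean squared error at the output
print(y, mse)
```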

This process is referred to as training. Learning can be of three types: supervised, unsupervised, or reinforced. The most popular method for the supervised training of neural networks is the back-propagation training algorithm. Here, the training process involves two passes. In the forward pass, the input signals propagate from the network input to the output. In the reverse pass, the calculated error signals propagate backwards through the network, where they are used to adjust the weights. Any efficient optimization method can be used to minimize the error through weight adjustment. The calculation of the output is carried out layer by layer in the forward direction, the output of one layer being the input to the next. In the reverse pass, the weights of the output neurons are adjusted first, since the target value of each output neuron is available to guide the adjustment of the associated weights; the weights of the middle layers, which have no target values of their own, are adjusted next. Because the errors of the succeeding layers, after proper transformations, are propagated back through the network layer by layer, this algorithm is termed the back-propagation algorithm.
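A minimal sketch of one such weight update, using gradient descent on the MSE for the same kind of hypothetical two-layer sigmoid network sketched above (the learning rate and dimensions are assumed for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Assumed dimensions: 3 inputs, 4 hidden neurons, 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
lr = 0.1                                 # assumed learning rate

x = np.array([0.2, 0.7, -0.1])           # hypothetical input
t = np.array([0.5])                      # hypothetical target

# Forward pass, layer by layer.
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Reverse pass: compute the error at the output layer (target known here),
# then propagate it back to obtain the hidden-layer error.
delta_out = (y - t) * y * (1.0 - y)                      # sigmoid derivative * error
delta_hid = (W2.T @ delta_out).ravel() * h * (1.0 - h)   # error transformed back through W2

# Adjust the output-layer weights first, then the hidden-layer weights.
W2 -= lr * np.outer(delta_out, h)
b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x)
b1 -= lr * delta_hid
```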

After being presented with the sets of inputs and associated outputs, the network learns the relationships between them by changing the weights of its connections; this is termed training. Once the network has been trained according to the assigned learning rule, it is capable of computing the output values associated with new input vectors. The trained neural network then has to be tested by supplying testing data. If the testing error is much larger than the training error, the network is said to over-fit the data; a properly fitted network gives nearly equal training and testing errors.
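A minimal sketch of this check is given below, assuming the trained network is available as a `predict` function and that the data have already been split into training and testing sets; the ratio used to flag over-fitting is an arbitrary illustrative choice.

```python
import numpy as np

def mse(predict, X, T):
    """Mean squared error of the network over a data set."""
    Y = np.array([predict(x) for x in X])
    return float(np.mean((Y - T) ** 2))

def check_overfit(predict, X_train, T_train, X_test, T_test, ratio=2.0):
    """Compare training and testing errors; a much larger testing error
    suggests the network over-fits the training data."""
    e_train = mse(predict, X_train, T_train)
    e_test = mse(predict, X_test, T_test)
    print(f"training MSE = {e_train:.4g}, testing MSE = {e_test:.4g}")
    return e_test > ratio * e_train   # True -> likely over-fitted
```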

During the hot working process, the flow stress (σ) of the material is a function of three independent parameters and is expressed as σ = σ(ε, ε̇, T) (as discussed in Eq. 2.19). The σ of a particular material can be modeled by the ANN architecture shown in Figure 2.7. In such a network, the input layer consists of three neurons representing the three parameters, viz. ε, ε̇, and T, while the output layer consists of one neuron representing σ. Once the proper network architecture is determined, σ can be successfully predicted for any combination of the input variables within the process domain. The σ values thus obtained can then be used to generate processing maps for determining the safe processing zones.
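A minimal sketch of such a three-input, one-output network is given below, here built with a single hidden layer; the hidden-layer size, the input normalization, and the illustrative (ε, ε̇, T) values are assumptions for demonstration, not the architecture or data used in this work.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Assumed architecture: 3 inputs (strain, strain rate, temperature),
# one hidden layer of 8 neurons, 1 output (flow stress).
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def predict_flow_stress(strain, strain_rate, temperature):
    """Forward pass for sigma = sigma(strain, strain rate, T); the inputs are
    assumed to be scaled to comparable ranges before being fed to the network."""
    x = np.array([strain, np.log10(strain_rate), temperature / 1000.0])
    h = sigmoid(W1 @ x + b1)
    return (W2 @ h + b2).item()        # linear output neuron for sigma

# Hypothetical query within the (assumed) process domain.
print(predict_flow_stress(strain=0.3, strain_rate=1.0, temperature=900.0))
```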

Traditional curve fitting techniques have been used for obtaining these processing maps [PRAS1997]. The strain rate values generally used in such experiments cover a range of four to five orders of magnitude, so traditional curve fitting techniques may not be appropriate for modeling such highly complex phenomena. Neural network techniques, on the other hand, have been found capable of learning from a data set to describe non-linear and interaction effects with great success [ZURA1997]. The advantage of neural networks is that the functional relationship between the variables can be obtained even if the form of the non-linear relationship is unknown and some of the experimental data are faulty. This makes the neural network technique a robust approach for obtaining the functional relationship in any engineering problem. The application of neural network modeling to generating processing maps for hot workability is covered in the following section. Neural networks have been demonstrated to be more robust than conventional methods for generating processing maps [ROBI2003].

Fig. 2.7. A typical neural network architecture [ROBI2003]