RESULTS AND DISCUSSION

4.2 Inputs/Process Variables Selection

As explained in the methodology section, the selection of process variables is based on the correlation coefficient, which quantifies the strength of the linear relationship between two variables.

From the 19 available process variables, the set of inputs used in modeling is reduced to only 7. Selection is based on the highest correlation coefficients between each input and each output respectively (APPENDIX Q). A further criterion for selecting critical process variables is that the absolute correlation coefficient must not be less than 0.6.
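A minimal sketch of this screening step in MATLAB is given below, assuming the 19 candidate variables are stored column-wise in a matrix X and the measured TMT of one chamber in a vector y; the variable names are illustrative and not taken from the actual project code.

```matlab
% Screen candidate inputs by correlation with the TMT of one chamber.
% X : N-by-19 matrix of process variable measurements (one column per tag)
% y : N-by-1 vector of tube metal temperature (TMT) readings
R = corrcoef([X y]);                 % full correlation matrix
r = R(1:end-1, end);                 % correlation of each input with the output
selected = find(abs(r) >= 0.6);      % keep inputs with |r| not less than 0.6
disp([selected abs(r(selected))]);   % column indices and their |r| values
```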

The selected inputs and their correlation coefficients are listed in Table 2, and their locations in the process are depicted in Fig. 13 below.

Table 2 : Process Variables and Correlation Coefficient

Num.  Tag Number   Process Variable             Unit     Correlation Coefficient
                                                         Chamber 1   Chamber 2
1     FIA-1216     Combustion air flow          Nm³/h    0.784       0.779
2     FRCA-1202    Natural gas feed flow        Nm³/h    0.795       0.803
3     FICA-1251    Total calorific energy       GJ/h     0.766       0.767
4     TRA-1232     Reformer outlet temperature  °C       0.799       0.773
5     PICA-1253    Fuel pressure                kg G     0.679       0.626
6     FFRA-1208    S/C ratio                    -        0.791       -0.796
7     TIA-1212     Feed temperature             °C       0.704       0.711

Figure 13 : Process Flow and Instruments' Locations [4]

4.3 First Phase Artificial Neural Network (ANN) Modeling

The ANN model is developed using MATLAB's Neural Network Toolbox (refer to APPENDIX D for the programming code). The performance of an artificial neural network (ANN) model depends on several parameters, namely the network structure and the preprocessing of the data. In developing the ANN model, several heuristic approaches, as outlined in the methodology section, are performed to obtain the optimal structure of the neural network model.

i. Establish default neural network structure

Earlier work by V. N. Patel (2009) [4] developed a tuned ANN structure with the lowest RMSEs on the validation set: 5.463 for chamber 1 and 5.233 for chamber 2 respectively. This default structure is summarised in Table 3 and is further tuned here to obtain more promising results.

Table 3: Default ANN Architecture

Parameters                          Description
Network Type                        Feed forward/MLP
Hidden Layer                        1
Number of Neurons                   18 (hidden layer)
Hidden Layer Transfer Function      Logsig
Output Layer Transfer Function      Purelin
Learning Rate                       0.2
Training Function                   ALL
Normalisation                       0 to 1
Performance Function                MSE
Final Performance Indicator         RMSE
Data Structure                      200 data sets (divided 50:50)
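A sketch of how this default architecture might be set up with the Neural Network Toolbox's older newff interface is shown below. Here ptrain and ttrain are assumed to hold the normalised training inputs and TMT targets, and 'traingda' stands in for whichever of the training functions is being compared; the names are illustrative rather than copied from APPENDIX D.

```matlab
% Default ANN structure from Table 3 (illustrative sketch, older newff syntax).
% ptrain : 7-by-100 matrix of normalised inputs (7 selected process variables)
% ttrain : 1-by-100 vector of normalised TMT targets for one chamber
net = newff(minmax(ptrain), [18 1], {'logsig', 'purelin'}, 'traingda');
net.trainParam.lr     = 0.2;     % learning rate from Table 3
net.trainParam.epochs = 1500;    % maximum epochs used in the first runs
net.performFcn        = 'mse';   % performance function (RMSE reported separately)
net = train(net, ptrain, ttrain);
yhat = sim(net, ptrain);                        % simulate on the training set
rmse_train = sqrt(mean((ttrain - yhat).^2));    % final performance indicator
```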

ii. Using different training algorithms

Using the different training/learning algorithms listed in Table 4 [9], the ANN model is trained and simulated to see what patterns emerge in optimising the model's performance; a sketch of such a comparison loop is given after Table 4. The development process is carried out only for chamber 1's TMT, and performance is assessed only on the validation set, on the assumption that validation performance is more important than training performance (Table 5).

Table 4: Variation of Training Algorithms/Training Functions

Training Function   Algorithm
'trainrp'           Resilient Backpropagation
'traingda'          Variable Learning Rate Backpropagation
'traingdx'          Variable Learning Rate Backpropagation with Momentum
'traincgf'          Fletcher-Powell Conjugate Gradient
'traincgp'          Polak-Ribiere Conjugate Gradient
'traincgb'          Conjugate Gradient with Powell/Beale Restarts
'trainscg'          Scaled Conjugate Gradient
'trainbfg'          BFGS Quasi-Newton
'trainlm'           Levenberg-Marquardt
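One possible way to run this comparison is to rebuild and retrain the same network once per training function and record the training and validation RMSEs, roughly as sketched below; the split into ptrain/ttrain and pval/tval and all variable names are assumptions for illustration.

```matlab
% Illustrative comparison loop over the training functions in Table 4.
algs = {'trainrp','traingda','traingdx','traincgf','traincgp', ...
        'traincgb','trainscg','trainbfg','trainlm'};
results = zeros(numel(algs), 2);          % columns: [RMSE train, RMSE validate]
for k = 1:numel(algs)
    net = newff(minmax(ptrain), [18 1], {'logsig', 'purelin'}, algs{k});
    net.trainParam.epochs = 1500;         % maximum epochs for these runs
    net = train(net, ptrain, ttrain);
    results(k, 1) = sqrt(mean((ttrain - sim(net, ptrain)).^2));
    results(k, 2) = sqrt(mean((tval   - sim(net, pval)).^2));
end
disp(results);                            % one row per algorithm, in the order of algs
```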

Table 5 : ANN Performances on Algorithms (Chamber 1)

Training Function   RMSE (Train)   RMSE (Validate)
'trainrp'           5.6446         7.6812
'traingda'          6.6200         5.1977
'traingdx'          6.1337         6.0164
'traincgf'          6.0002         6.9162
'traincgp'          5.8960         7.0486
'traincgb'          5.9615         6.9962
'trainscg'          5.2829         7.4656
'trainbfg'          5.6364         7.7416
'trainlm'           4.2452         11.9973

Based on the validation results for chamber 1's TMT, Variable Learning Rate Backpropagation (traingda) is observed to perform the best, with the lowest validation RMSE, although it reached the maximum number of epochs (1500) before training stopped. Levenberg-Marquardt (trainlm) shows the worst performance, with the highest validation RMSE. In this case, Levenberg-Marquardt did not initially give promising results, even though in many reported studies it achieves lower root mean square error values than the other algorithms. The reason is still unclear, but it may be related to the more complex mathematical computation involved. However, these initial results support no firm conclusion, as the development process is still in its early phase. The maximum number of epochs is increased to 2000 in the subsequent steps to give ample time for the results to converge.

iii. Using different normalisation method and performance function

Normalisation of the data helps to prevent attributes with large ranges from outweighing attributes with smaller ranges, as well as speeding up the computation [10]. The effect of changing the normalisation range (0 to 1, 0.1 to 0.9 and 0.2 to 0.8) has been studied in a journal which indicates that these normalisation methods outperform the conventional -1 to 1 normalisation in terms of achieving lower RMSE values [14]. In this project, the data are normalised into the 0.1 to 0.9 range to observe the effect on performance. The performance function is also changed from the previous 'MSE' to 'MSEREG', applying a regularisation technique intended to produce smaller weights and biases and thus force the network response to be smoother [9]. A sketch of how these two settings might be applied is given below, and the resulting performances are tabulated in Table 6.
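The sketch below assumes mapminmax is used for scaling and that the regularisation ratio, which is not stated in the text, is set to 0.5; p and t denote the raw inputs and TMT targets, and all names are illustrative.

```matlab
% Illustrative 0.1-0.9 normalisation and regularised performance function.
[pn, ps] = mapminmax(p, 0.1, 0.9);            % scale each input row to [0.1, 0.9]
[tn, ts] = mapminmax(t, 0.1, 0.9);            % scale the targets likewise
net = newff(minmax(pn), [18 1], {'logsig', 'purelin'}, 'traingda');
net.performFcn         = 'msereg';            % MSE with weight/bias regularisation
net.performParam.ratio = 0.5;                 % assumed ratio, not given in the report
net.trainParam.epochs  = 2000;                % increased epoch limit
net = train(net, pn, tn);
yhat = mapminmax('reverse', sim(net, pn), ts);   % map predictions back to deg C
```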

Table 6 : ANN Performances on Performance Function

Training Function   RMSE (Train)   RMSE (Validate)
'trainrp'           6.4922         5.3060
'traingda'          6.7894         5.1145
'traingdx'          6.4765         5.3237
'traincgf'           6.4065         5.4861
'traincgp'          6.3903         5.4618
'traincgb'          6.3911         5.4600
'trainscg'          6.3898         5.4620
'trainbfg'          6.3898         5.4620
'trainlm'           6.1174         6.4963

Compared with the previous results, performance degrades on the training set while overall improvements are seen on the validation set. As this ANN modeling is carried out by means of data mining, the optimisation process is continued without drawing any conclusion from these results.

iv. Changing the number of neurons in the hidden layer

The number of neurons in the hidden layer is varied iteratively from 1 to 100 and the results are analysed for every training algorithm stated above; a sketch of this sweep is given below. The results show that every training algorithm, excluding Variable Learning Rate Backpropagation (traingda and traingdx) and Levenberg-Marquardt (trainlm), follows the same trend: as the number of neurons increases, the RMSE decreases until it reaches an optimum number of neurons, after which the RMSE starts to increase again (Fig. 14). The optimum number recorded is 4 neurons. The performance of each training algorithm at its optimum number of neurons, in terms of validation-set RMSE, is displayed in Table 7.
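A minimal sketch of the neuron sweep for a single training algorithm is shown below, with the optimum size taken as the one giving the lowest validation RMSE; the variable names and the choice of 'traincgf' are illustrative.

```matlab
% Illustrative sweep of hidden-layer size from 1 to 100 neurons.
rmse_val = zeros(1, 100);
for n = 1:100
    net = newff(minmax(ptrain), [n 1], {'logsig', 'purelin'}, 'traincgf');
    net.trainParam.epochs = 2000;
    net = train(net, ptrain, ttrain);
    rmse_val(n) = sqrt(mean((tval - sim(net, pval)).^2));   % validation RMSE
end
[best_rmse, best_n] = min(rmse_val);      % optimum number of hidden neurons
plot(1:100, rmse_val);
xlabel('Number of neurons'); ylabel('RMSE (validate)');
```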

Figure 14 : RMSE with Increasing Neuron Numbers in Hidden Layer

Table 7 : ANN Optimal Performances With Changing Number of Neurons in Hidden Layer

Training Function   Optimum Number of Neurons   RMSE (Validate)
'trainrp'           4                           5.1408
'traingda'          21                          5.0731
'traingdx'          49                          5.1154
'traincgf'           4                           5.1409

Dokumen terkait