

Chapter 6 Synthetic Streamflow Generation

6.3 Artificial Neural Network (ANN)

6.3.2 Synthetic Stream Flow Generation Using ANN

overlapped data are used for testing the network. Since there are 12 periods in a monthly series, the mean, standard deviation, average time rate of change of discharge in different periods of the series (gradient), and the maximum and minimum values of the historical flow repeat in a cycle of 12 periods. The most common and popular training algorithm for multi-layer networks is Back Propagation (BP) (Rumelhart et al., 1986; Hagan et al., 1996), which is adopted in this study. It was found that a model working well for a monthly streamflow series does not perform well for a series with a smaller time step discretization, such as ten-daily, eight-daily, six-daily, five-daily or daily. Therefore, it was decided to attempt a different model for each time step discretization.

(i) Time Step Discretizations and Input Selection for Streamflow Generation

Non-linearity of a streamflow series increases as the length of the time step over which the values are averaged decreases. Therefore, models with different numbers of input parameters have been tried to obtain the best possible model for each time step length. The models use different combinations drawn from the following set of input parameters: streamflow of the current period (It), streamflow of the previous period (It-1), mean (μt+1) and standard deviation (σt+1) of the historical streamflow of the next period, minimum (mint+1) and maximum (maxt+1) values of inflow from the given historical record, and average time rate of change of discharge of the series (Gt+1). Table 6.1 shows the combinations of input parameters used in the different models tried in this study.
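The periodic statistics named above can be assembled into ANN input vectors with a short script. The sketch below is illustrative Python, not the thesis code: the function names are ours, and the gradient Gt+1 is assumed here to be the difference between successive periodic means, since the text does not give its formula. It builds the fullest input combination (It-1, It, μt+1, σt+1, mint+1, maxt+1, Gt+1).

```python
import numpy as np

def periodic_stats(flows, n_periods=12):
    """Per-period mean, std, min, max and gradient of a periodic
    streamflow series (flows reshaped as years x periods)."""
    x = np.asarray(flows, dtype=float).reshape(-1, n_periods)
    mean = x.mean(axis=0)                  # periodic mean (mu)
    std = x.std(axis=0, ddof=1)            # periodic standard deviation (sigma)
    lo, hi = x.min(axis=0), x.max(axis=0)  # historical min/max per period
    # assumed gradient: change of the periodic mean to the next period
    grad = np.roll(mean, -1) - mean
    return mean, std, lo, hi, grad

def build_inputs(flows, n_periods=12):
    """Assemble (I_t-1, I_t, mu_t+1, sigma_t+1, min_t+1, max_t+1, G_t+1)
    rows, the fullest combination used by the D7-type models."""
    mean, std, lo, hi, grad = periodic_stats(flows, n_periods)
    rows = []
    for t in range(1, len(flows) - 1):
        p_next = (t + 1) % n_periods       # period index of the next value
        rows.append([flows[t - 1], flows[t], mean[p_next], std[p_next],
                     lo[p_next], hi[p_next], grad[p_next]])
    return np.array(rows)
```

For a 10-year monthly record (120 values), `build_inputs` yields 118 seven-component input vectors, one per predictable time step.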

Table 6.1 Different models with respect to time step discretization and input variables

Time Step                    Model     Input parameters
Monthly discretization       ANN30D1   It, μt+1 and σt+1
                             ANN30D2   It-1, It, μt+1 and σt+1
Ten-daily discretization     ANN10D1   It, μt+1 and σt+1
                             ANN10D2   It-1, μt+1 and σt+1
                             ANN10D3   It, μt+1, σt+1 and Gt+1
                             ANN10D4   It, μt+1, σt+1, mint+1 and Gt+1
                             ANN10D5   It, μt+1, σt+1, mint+1 and maxt+1
                             ANN10D6   It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
Eight-daily discretization   ANN08D1   It, μt+1 and σt+1
                             ANN08D2   It-1, μt+1 and σt+1
                             ANN08D3   It, μt+1, σt+1 and Gt+1
                             ANN08D4   It, μt+1, σt+1, mint+1 and Gt+1
                             ANN08D5   It, μt+1, σt+1, mint+1 and maxt+1
                             ANN08D6   It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
Six-daily discretization     ANN06D1   It, μt+1 and σt+1
                             ANN06D2   It-1, μt+1 and σt+1
                             ANN06D3   It, μt+1, σt+1 and Gt+1
                             ANN06D4   It, μt+1, σt+1, mint+1 and Gt+1
                             ANN06D5   It, μt+1, σt+1, mint+1 and maxt+1
                             ANN06D6   It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
Five-daily discretization    ANN05D1   It, μt+1 and σt+1
                             ANN05D2   It-1, μt+1 and σt+1
                             ANN05D3   It, μt+1, σt+1 and Gt+1
                             ANN05D4   It, μt+1, σt+1, mint+1 and Gt+1
                             ANN05D5   It, μt+1, σt+1, mint+1 and maxt+1
                             ANN05D6   It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
                             ANN05D7   It-1, It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
Daily discretization         ANN01D1   It, μt+1 and σt+1
                             ANN01D2   It-1, μt+1 and σt+1
                             ANN01D3   It, μt+1, σt+1 and Gt+1
                             ANN01D4   It, μt+1, σt+1, mint+1 and Gt+1
                             ANN01D5   It, μt+1, σt+1, mint+1 and maxt+1
                             ANN01D6   It, μt+1, σt+1, mint+1, maxt+1 and Gt+1
                             ANN01D7   It-1, It, μt+1, σt+1, mint+1, maxt+1 and Gt+1

(ii) Training and Testing

Training was initially carried out for 2500 iterations, but it was found that there was no significant improvement in the MSE value after about 2000 iterations, while the time required to train the network kept increasing; hence the network was trained up to 2200 epochs. The MRE values for training and testing were found separately, and the network was selected considering the lowest MRE and MSE values for a particular number of neurons in the hidden layer. In this study, the best model was decided by varying the number of neurons in the hidden layer from 3 to 10. For each network, different combinations of the learning rate η = 0.00, 0.01, 0.02, 0.04, 0.05, 0.07, 0.09, 0.1, 0.2, 0.3, 0.5, 0.7 and 0.9 and the momentum factor α = 0.01, 0.02, 0.04, 0.05, 0.07, 0.09, 0.1, 0.2, 0.3, 0.5, 0.7 and 0.9 were tried for the final selection of the model. The best values of the learning rate and momentum factor were found after extensive trial and error with different combinations of η and α. Table 6.2 to Table 6.7 present the MSE and MRE values for different numbers of neurons in the hidden layer.
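The selection procedure above amounts to a grid search over hidden-layer size, learning rate and momentum. The sketch below is a minimal illustration of BP training with momentum and the grid search, not the thesis implementation: the network (one tanh hidden layer, linear output), the reduced grids, and the reduced epoch count are our assumptions chosen to keep the example fast.

```python
import itertools
import numpy as np

def train_bp(X, y, n_hidden, eta, alpha, epochs=2200, seed=0):
    """One-hidden-layer backpropagation with momentum (illustrative)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1));          b2 = np.zeros(1)
    vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
    vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
    yc = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden layer
        err = (h @ W2 + b2) - yc                 # linear output minus target
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)         # backpropagated error
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        # momentum update: v <- alpha*v - eta*gradient
        vW2 = alpha * vW2 - eta * gW2; W2 += vW2
        vb2 = alpha * vb2 - eta * gb2; b2 += vb2
        vW1 = alpha * vW1 - eta * gW1; W1 += vW1
        vb1 = alpha * vb1 - eta * gb1; b1 += vb1
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

def grid_search(Xtr, ytr, Xte, yte):
    """Pick (neurons, eta, alpha) giving the lowest test MSE
    over a reduced grid (the thesis sweeps a much finer one)."""
    best = None
    for n, eta, alpha in itertools.product(range(3, 11),
                                           [0.01, 0.1, 0.5],
                                           [0.1, 0.5, 0.9]):
        f = train_bp(Xtr, ytr, n, eta, alpha, epochs=200)
        mse = float(np.mean((f(Xte) - yte) ** 2))
        if best is None or mse < best[0]:
            best = (mse, n, eta, alpha)
    return best
```

In practice each candidate is trained on the non-overlapping portion of the record and scored on the overlapped testing portion, as described above.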

Table 6.2 MSE and MRE values for ANN30D1 (monthly)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.040452   28.2546    0.0580    41.4286
 4             0.038213   28.6033    0.0571    46.7165
 5             0.037847   28.9926    0.0658    52.4647
 6             0.038409   26.9590    0.0799    54.3066
 7             0.037871   30.7061    0.0687    52.9299
 8             0.037848   25.3992    0.0709    52.8416
 9             0.032074   22.6417    0.0637    55.9729
10             0.033522   32.9324    0.0700    56.0974

Table 6.3 MSE and MRE values for ANN10D1 (ten daily)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.045185   61.0128    0.0566    44.9493
 4             0.042522   51.9333    0.0899    69.8267
 5             0.039036   54.2617    0.0776    56.7994
 6             0.035631   47.2419    0.0669    48.2199
 7             0.048346   63.9022    0.0922    67.3016
 8             0.028801   39.6045    0.0636    40.5137
 9             0.032503   45.5739    0.0704    58.0878
10             0.033765   43.5303    0.0728    47.8197

Table 6.4 MSE and MRE values for ANN08D1 (eight daily)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.036986   20.8422    0.0492    35.6744
 4             0.035716   20.6769    0.0507    33.5632
 5             0.035597   19.6389    0.0495    34.1005
 6             0.035475   19.9275    0.0482    32.6109
 7             0.034504   19.4594    0.0465    31.5748
 8             0.033317   18.9124    0.0477    32.7274
 9             0.032049   19.6566    0.0421    30.6584
10             0.032326   19.3615    0.0426    30.5810

Table 6.5 MSE and MRE values for ANN06D3 (six daily)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.030175   21.0560    0.0373    38.4291
 4             0.030748   20.5256    0.0373    34.8625
 5             0.029254   19.9299    0.0366    34.7174
 6             0.029636   20.3271    0.0370    34.8791
 7             0.028346   19.9857    0.0366    33.0956
 8             0.029192   19.3799    0.0392    31.2638
 9             0.028717   19.5433    0.0342    34.9841
10             0.027825   19.4225    0.0363    31.2843

Table 6.6 MSE and MRE values for ANN05D6 (five daily)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.031445   22.0509    0.0339    38.8050
 4             0.030142   21.7799    0.0316    39.1053
 5             0.030599   20.0876    0.0350    36.8486
 6             0.029210   21.9088    0.0339    36.9155
 7             0.029091   20.5476    0.0346    36.1137
 8             0.028419   20.2891    0.0345    36.3516
 9             0.028971   21.1404    0.0349    37.4184
10             0.027347   19.6500    0.0344    35.4664

Table 6.7 MSE and MRE values for ANN01D5 (daily)

Neurons in       Training              Testing
Hidden Layer   MSE        MRE        MSE       MRE
 3             0.009259   20.0630    0.0139    39.8489
 4             0.009062   18.3655    0.0138    39.0409
 5             0.008775   16.8718    0.0136    40.1766
 6             0.008881   17.8888    0.0132    33.5179
 7             0.008848   18.1755    0.0136    36.9351
 8             0.008806   17.3207    0.0137    37.7560
 9             0.008123   16.6544    0.0133    34.8237
10             0.008639   17.4512    0.0134    36.0884

Each table gives the network parameters of the model that performed best, for the corresponding set of input parameters, at each time step discretization.
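The two scores reported in the tables can be computed as follows. This is a sketch under an assumption: the thesis does not state the MRE formula here, and it is taken below to be the mean relative error expressed as a percentage.

```python
import numpy as np

def mse(obs, pred):
    """Mean squared error between observed and predicted flows."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean((obs - pred) ** 2))

def mre(obs, pred):
    """Assumed mean relative error, in percent of the observed value."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(obs - pred) / obs) * 100.0)
```

A network is then chosen by comparing these two values across the candidate configurations, as done in Tables 6.2 to 6.7.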

(iii) Synthetic Streamflow Generation

In this study, the trained and tested network was used to generate a series of synthetic streamflow. It was found that after several iterations the network produced a repeated streamflow series. Statistical analysis of the residual series shows that it can be adequately modeled as a normally distributed and cross-correlated series with zero mean and unit standard deviation (Ochoa-Rivera et al., 2007). Therefore, it is very important to introduce a random component into the streamflow generation model to prevent the network from generating repetitious sequences of streamflow. A small random component, calculated on the basis of the standard deviation of the observed streamflow, is added to the output produced by the network (Ahmed and Sarma, 2007). Thus, repetitive generation of streamflow was handled by introducing a random component ξtσt in the model, where ξt is an independent standard normal random variable with zero mean and unit variance, and σt is the standard deviation of the observed streamflow of the corresponding period. Synthetic streamflow series of a hundred years were generated by feeding in the known inflow of the previous period, the inflow of the current period, the periodic mean and standard deviation of the historical flow of the next period, the maximum and minimum of the historical flow of the next period, and the average time rate of change of discharge in the different periods of the series. The output of the model is the predicted inflow of the succeeding period, which then serves as input for the next iteration. If a negative flow occurs during synthetic streamflow generation, it is replaced by the minimum value of the historical flow for that particular period (Ahmed and Sarma, 2007).
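The generation loop described above can be sketched as follows. This is illustrative Python, not the thesis code: `model` stands for the trained network (here, any callable mapping an input vector to the next-period flow), the seven-component input layout matches the fullest model in Table 6.1, and the gradient term is assumed to be the change in the periodic mean.

```python
import numpy as np

def generate_synthetic(flows, model, years=100, n_periods=12, seed=42):
    """Iterative generation: predict next-period flow, add the random
    component xi_t * sigma_t, and replace any negative flow with the
    historical minimum of that period."""
    x = np.asarray(flows, dtype=float).reshape(-1, n_periods)
    mean, std = x.mean(axis=0), x.std(axis=0, ddof=1)
    lo, hi = x.min(axis=0), x.max(axis=0)
    grad = np.roll(mean, -1) - mean            # assumed gradient definition
    rng = np.random.default_rng(seed)
    series = [flows[-2], flows[-1]]            # seed with last observed flows
    for t in range(years * n_periods):
        p = (t + 1) % n_periods                # period index of the next value
        inp = np.array([series[-2], series[-1], mean[p], std[p],
                        lo[p], hi[p], grad[p]])
        q = model(inp) + rng.standard_normal() * std[p]   # xi_t * sigma_t
        series.append(q if q >= 0 else lo[p])  # negative flow -> historic min
    return np.array(series[2:])
```

Because each predicted flow is fed back as input for the next step, the added ξtσt term is what keeps the generated sequence from collapsing into a repeating cycle.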

6.4 Comparison of Results of ANN, Thomas-Fiering and Actual