
Optimization of Artificial Neural Network

Academic year: 2023


Optimization of an artificial neural network for predicting tanker market earnings. Predictions with the artificial neural network are performed for SUEZMAX and AFRMAX, respectively. Keywords: artificial neural network, tanker market forecast, VLCC, SUEZMAX, AFRMAX.

Table 4.20    Comparison on ANN performance according to correlation coefficient

Introduction

  • Background
  • Research purposes and scope
  • Predictions on shipping markets…
  • Structure of the Paper

The effect of the correlation coefficient between dirty tanker profits and the multiple input variables of the ANN model was also evaluated. To predict the dynamics and fluctuations of freight rates in tanker freight markets, many studies have applied univariate or multivariate time-series analysis techniques and ANN models [22]-[24]. The predictive results are also evaluated according to the training algorithms of the ANN architectures.

Artificial Neural Networks

An overview of Artificial Neural Networks (ANN)

The neurons in a layer receive input from the previous layer (or, for the first layer, from the network input) and feed their output to the next layer. The last, or highest, layer of neurons is called the output layer, and the one or more intermediate layers between the input and output layers are called hidden layers. In Figure 2.2 [31], the third layer is the output layer, and the first and second layers are hidden layers.
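The layer-by-layer propagation described above can be sketched as follows; the layer sizes, weights, and function names below are illustrative assumptions, not values from the study:

```python
import numpy as np

def tansig(n):
    # hyperbolic tangent sigmoid, a common hidden-layer activation
    return np.tanh(n)

def forward(p, weights, biases):
    """Propagate input vector p through a feedforward network.
    weights/biases hold one entry per layer; every layer but the
    last applies tansig, and the output layer is linear."""
    a = p
    for i, (W, b) in enumerate(zip(weights, biases)):
        n = W @ a + b                          # net input of layer i
        a = n if i == len(weights) - 1 else tansig(n)
    return a

# Illustrative 3-layer network: 2 inputs -> 3 and 2 hidden neurons -> 1 output
rng = np.random.default_rng(0)
weights = [rng.standard_normal(s) for s in [(3, 2), (2, 3), (1, 2)]]
biases = [rng.standard_normal(s) for s in [(3,), (2,), (1,)]]
output = forward(np.array([0.5, -0.2]), weights, biases)
```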

Figure 2.1    Multiple-input neuron

Design of ANN model

  • Supervised learning
  • Mean squared error (MSE)
  • Least-mean squared algorithm
  • Backpropagation learning algorithm
  • Levenberg-Marquardt algorithm
  • Generalization and Bayesian regularization algorithm

The LMS algorithm and the backpropagation algorithm for multilayer networks adjust the weights and biases of the network to minimize the mean squared error, where the error is the difference between the target output and the network output. As with the LMS learning law, the performance index of backpropagation is the mean squared error. The regularized performance index is

F(x) = βE_D + αE_W = β Σ_{q=1..Q} (t_q − a_q)ᵀ(t_q − a_q) + α Σ_{i=1..n} x_i²,

where E_D is the sum of squared errors over the Q training pairs, E_W is the sum of squares of the n network weights and biases x_i, and the ratio α/β controls the effective complexity of the network solution.
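A minimal sketch of this regularized performance index F(x) = βE_D + αE_W, assuming the weights and biases are collected in a single parameter vector:

```python
import numpy as np

def regularized_index(targets, outputs, params, alpha, beta):
    """F(x) = beta * E_D + alpha * E_W, where E_D is the sum of
    squared errors over the training pairs and E_W is the sum of
    squared weights and biases collected in `params`."""
    errors = targets - outputs
    E_D = np.sum(errors ** 2)
    E_W = np.sum(params ** 2)
    return beta * E_D + alpha * E_W

# With beta = 1 and alpha = 0, this reduces to the plain sum of squared errors.
```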

Methodology

Data and pre-processing

  • Data collection
  • Data normalization

One of the most common ways to obtain better results from a neural network is data normalization. Data are scaled to the output range of the nonlinear activation function: (0, 1) for the logistic function or (−1, 1) for the hyperbolic tangent function. When nonlinear transfer functions are used at the output nodes, the desired output values must be transformed to the range of the actual outputs of the network [25].

In this study, prediction is performed with multilayer networks using sigmoid transfer functions in the hidden layer. In the first layer, the net input is the product of the input and the weight, plus the bias. If the input values are very large, the weights must be small to keep the net input within the active range of the sigmoid; in contrast, if the input values are very small, large weights are needed to produce a large net input.

It is therefore standard practice to normalize the input before applying it to the network. When the input values are normalized, the magnitudes of the weights have a consistent meaning when regularization is used. The normalization step is applied to the input values and target values in the datasets [31][33].

The inputs and targets are scaled linearly to the range [−1, 1]:

pⁿ = 2(p − p_min) / (p_max − p_min) − 1,

where p_min is the vector containing the minimum values of each element of the input vectors in the dataset, p_max contains the maximum values, and pⁿ is the resulting normalized input vector.
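A sketch of this min-max normalization and its inverse (the inverse is applied to network outputs to recover predictions in the original units); the function names are illustrative:

```python
import numpy as np

def normalize(p, p_min, p_max):
    # map each element of p linearly from [p_min, p_max] to [-1, 1]
    return 2 * (p - p_min) / (p_max - p_min) - 1

def denormalize(pn, p_min, p_max):
    # inverse transform, recovering values in the original units
    return (pn + 1) * (p_max - p_min) / 2 + p_min
```

The same p_min and p_max, computed on the training data, should be reused when normalizing new inputs so that training and test data share one scale.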

Identification of ANN architecture…

Therefore, this paper focuses on ANN prediction with the Levenberg-Marquardt algorithm and the Bayesian regularization algorithm, and evaluates the prediction accuracy of these two training algorithms. After determining the network structure, the number of hidden layers for both learning algorithms is set to one, to allow easy comparison of performance results and features; an ANN implementing the backpropagation algorithm should not have too many layers, as the time for training the network grows exponentially. The number of neurons in the hidden layer is determined by the complexity of the function being approximated or the decision boundaries being implemented.

Therefore, to find the best forecasting performance for the VLCC tanker market, the number of neurons in the hidden layer of the ANN structure trained with the Levenberg-Marquardt algorithm is adjusted until the best performance is obtained without any overfitting. In the prediction using the Bayesian regularization algorithm, the ANN performs the prediction with 8 and 10 hidden-layer neurons, and the performance results of these two cases are evaluated. The hidden layer consists of neurons with the tan-sigmoid transfer function as their activation function, and the output layer has a linear transfer function.

The number of neurons in the output layer matches the size of the target. For multistep-ahead prediction, the output is fed back to the input of the feedforward neural network, forming a closed loop. For one-step-ahead predictions, the true output available during training of the network is used instead of feeding back the estimated output.

That is, the network is trained with ℓ-step (ℓ > 1) differenced data as input to the network.
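As an illustration of how such training pairs can be formed, the helper below (a hypothetical name, not from the study) builds lagged input/target pairs for a given lead time; the 2-lag setting mirrors the 2-month time lag used in the implementation:

```python
import numpy as np

def make_lagged_pairs(series, n_lags, horizon):
    """Build supervised (input, target) pairs from a univariate series:
    each input holds n_lags consecutive values and the target is the
    value `horizon` steps after the last lagged value."""
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    return np.array(X), np.array(y)

# e.g. with 2 lags and a 3-step lead time, inputs [s0, s1] target s4
X, y = make_lagged_pairs(np.arange(10.0), n_lags=2, horizon=3)
```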

Figure 3.1    NARX network (Closed loop) for the tanker market prediction

Training and post training validation

If the prediction errors are uncorrelated (white noise), then the error autocorrelation R_e(τ) can be expected to be close to zero except when τ = 0. If there is no correlation between the prediction errors and the input sequence, then the input-error cross-correlation R_pe(τ) can be expected to be close to zero for all τ. To determine whether R_pe(τ) is close to zero, an approximate 95% confidence interval [14] can be used as the acceptance range, on the order of ±2/√N for N error samples.
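A sketch of this validation check for the error autocorrelation, using the standard ±2/√N white-noise confidence band; the function name is illustrative:

```python
import numpy as np

def error_autocorrelation(e, max_lag=20):
    """Sample autocorrelation R_e(tau) of prediction errors e.
    For white-noise errors every value except tau = 0 should fall
    inside the approximate 95% band +/- 2 / sqrt(N)."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    N = len(e)
    denom = np.sum(e ** 2)
    r = np.array([np.sum(e[:N - tau] * e[tau:]) / denom
                  for tau in range(max_lag + 1)])
    bound = 2.0 / np.sqrt(N)
    return r, bound

rng = np.random.default_rng(1)
r, bound = error_autocorrelation(rng.standard_normal(500))
# r[0] is 1 by construction; lags outside +/- bound indicate structure
```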

Implementation of Methodology

Implementation

  • Data processing…
  • ANN networks for tanker market predictions
  • Computation
  • Validation

When applying the Bayesian regularization training technique to VLCC, SUEZMAX and AFRAMAX, the ANN prediction was performed in two cases, with the testing dataset set to 15% and to 20%, and the results of the two cases were compared. Two test-set sizes, 15% and 20%, were used to broaden the test data and compare the results as the size of the training data varies. When the Levenberg-Marquardt algorithm was applied to VLCC, the number of neurons in the hidden layer was adjusted to improve the accuracy of the prediction performance.
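Such a hold-out split can be sketched as a chronological partition; the function name is hypothetical, and the 204-month count corresponds to the January 2000 to December 2016 dataset:

```python
def split_dataset(n_samples, test_frac):
    """Chronological hold-out split: the final test_frac share of the
    monthly observations is reserved for testing (0.15 or 0.20 here)."""
    n_test = int(round(n_samples * test_frac))
    train_idx = list(range(n_samples - n_test))
    test_idx = list(range(n_samples - n_test, n_samples))
    return train_idx, test_idx

# 204 monthly observations (January 2000 - December 2016)
train_idx, test_idx = split_dataset(204, 0.15)
```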

The number of neurons in the output layer, with the linear function as its activation function, is the same as the size of the target. Experiments are conducted to identify the optimal ANN architecture for forecasting the profit of tanker markets with lead times of one step (i.e., one month), 3 steps, 6 steps, 9 steps, 12 steps and 15 steps ahead. The time lag was fixed at 2 months and was not changed during implementation.

Each implementation for the prediction was repeated several times to identify the optimal parameters and conditions of the network. When training with the Levenberg-Marquardt algorithm, the number of neurons can be adjusted to prevent overfitting or extrapolation. The training of the 9-10-1 network effectively used less than 77.3% of the total number of weights and biases.

The autocorrelation function of the prediction errors, and the cross-correlation function between the inputs and the prediction errors, were used to validate the ANN prediction model with the help of graphs.

Figure 4.1    Schematic diagram of ANN network for the tanker market prediction

Prediction performance results

  • Prediction performance results for VLCC
    • One-month ahead prediction
  • Prediction performance results for SUEZMAX
    • One-month ahead prediction
  • Prediction performance results for AFRAMAX
    • One-month ahead prediction
  • Comparison for different hidden layer size
  • Evaluation on ANN performance results for VLCC according to correlation
  • Comparison of performance error according to ship type…

In Figure 4.13, the 12-months-ahead forecast, the BRA.TDS-15.NN-10 network converged relatively well with the overall course of the peaks and troughs of the observed values. In Figure 4.14, the 15-months-ahead forecast, the BRA.TDS-15.NN-10 network converged relatively well with the overall course of the downward trend of the observed values. In the 6-months-ahead forecast, the BRA.TDS-15.NN-10 network showed more satisfactory forecasting results than the BRA.TDS-15.NN-8 network.

As shown in Figure 4.15, the six-months-ahead forecast, the performance results of the BRA.TDS-15.NN-10 network converge relatively well with the overall trend of the peaks and troughs of the observed values. In the nine-months-ahead forecast, the BRA.TDS-15.NN-10 network showed more satisfactory forecast results than the BRA.TDS-15.NN-8 network. As shown in Figure 4.18, the nine-months-ahead forecast, the performance results of the BRA.TDS-15.NN-10 network converge relatively well with the overall trend of the peaks and troughs of the observed values.

In the 12-months-ahead forecast, the BRA.TDS-15.NN-10 network showed more satisfactory forecasting results than the BRA.TDS-15.NN-8 network, and likewise in the 15-months-ahead forecast. In the case of 10 neurons in the hidden layer, the effective number of parameters out of the total number of training-algorithm parameters was 221.

As shown in Figure 4.21, the 1-month-ahead prediction, the performance results of the BRA.TDS-15.NN-8 network were good. As shown in Figure 4.24, the 9-months-ahead prediction, the performance results of the BRA.TDS-15.NN-10 network converged relatively well with the overall course of the peaks and troughs of the observed values. In the 15-months-ahead prediction, the BRA.TDS-15.NN-8 network showed more satisfactory prediction results than the BRA.TDS-15.NN-10 network.

Figure 4.2 shows the average earnings trend for the VLCC, SUEZMAX and AFRAMAX tanker markets, using monthly time-series data from January 2000 to December 2016 [3].

Conclusions

When there is a large correlation between an input variable and the target variable, the ANN prediction performance error (MSE) does not change much as the size of the input variables changes; for an input variable with a small correlation coefficient, the prediction performance error (MSE) changes with the size of the input variable. The strength of the correlation between the input variables and the target variable therefore affects the accuracy of the ANN prediction performance. When the size of the test dataset is increased in the Bayesian regularization algorithm, the R-value between outputs and targets and the average performance errors are worse than with the smaller test dataset.

Haralambides, "Econometric Modeling of Second-hand Ship Prices", Maritime Economics & Logistics, Vol. 5, Issue 4, pp.

Vergottis, "An Econometric Model of the World Market for Dry Cargo Freight and Shipping", Applied Economics, Vol.

Kavussanos, "Comparing Volatility in the Dry Cargo Shipping Sector", Journal of Transport Economics and Policy, Vol.

Kavussanos, "Time-Varying Risks Between Tanker Freight Market Segments", Maritime Economics and Logistics, Vol.

Vergottis, "An Econometric Model of the World Tanker Market", Journal of Transport Economics and Policy, Vol.

Alizadeh, "A Hypothesis of Term Structure Expectations and Risk Premia in Bulk Freight Markets: An EGARCH-M Approach", Journal of Transport Economics and Policy, Vol.

Doh, "Earnings Forecasting of VLCC Tankers Using Artificial Neural Networks", Journal of the Korean Society of Marine Engineering, Vol.

Figures

Figure 2.1    Multiple-input neuron
Figure 2.2    Multiple layers of neurons
Figure 3.1    NARX network (Closed loop) for the tanker market prediction
Figure 4.1    Schematic diagram of ANN network for the tanker market prediction