ISSN: 2338-3070, DOI: 10.26555/jiteki.v9i1.24944

LSTM Network Hyperparameter Optimization for Stock Price Prediction Using the Optuna Framework

Edi Ismanto, Vitriani

Department of Informatics, Universitas Muhammadiyah Riau, Pekanbaru, Riau, Indonesia

ARTICLE INFO

Article history:

Received September 29, 2022
Revised January 18, 2023
Published January 21, 2023

ABSTRACT

In recent years, the application of deep learning-based financial modeling tools has grown in popularity. Research on stock forecasting is crucial to understanding how a nation's economy is doing. The study of intrinsic value and stock market forecasting has significant theoretical implications and a broad range of potential applications. One of the trickiest challenges in deep learning and machine learning projects is hyperparameter search. In this paper, we evaluate and analyze the optimal hyperparameter search for a long short-term memory (LSTM) model developed to forecast stock prices, using the Optuna framework. This study contributes to developing the LSTM algorithm model for predicting stock prices; applying the Optuna framework to the LSTM model improves the search for the ideal hyperparameters. We examined a number of hyperparameters across several LSTM architectures, including the optimizer (SGD, Adagrad, RMSprop, Nadam, Adamax, and Adam), LSTM hidden units, dropout rate, epochs, batch size, and learning rate. The experimental results indicated that of the four LSTM models tested (model 1 single LSTM, model 2 single LSTM, model 1 stacked LSTM, and model 2 stacked LSTM), model 1 single LSTM was the most effective. Single LSTM variant 1 offers the lowest losses compared to the other models, with the lowest root mean square error (RMSE) score of 7.21. Automatic hyperparameter tuning yields lower losses than manual tuning.

Keywords:

Deep learning;

LSTM network;

Optuna framework;

Stock price prediction;

Classification;

Time series analysis

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Corresponding Author:

Edi Ismanto, Department of Informatics, Universitas Muhammadiyah Riau, Pekanbaru, Riau, Indonesia Email: [email protected]

1. INTRODUCTION

Stock market activity includes trading, exchanging, and circulating stocks. The stock market has caught the interest of many investors. Understanding the evolving regularity of the stock market and predicting the direction of stock prices have long been appealing topics for investors and researchers. Numerous elements, including politics, the economy, society, and the market, have an impact on the rise and fall of stock values.

Projecting the stock market's trend is closely tied to the profits of stock investors: the more precise the prediction, the more effectively risks can be mitigated. For publicly traded firms, the stock price serves as a key technical measure for studying the business, in addition to reflecting its operating environment and expectations for future growth. Research on stock forecasting is crucial to understanding how a nation's economy is doing [1][2]. Predicting the development trend of financial data is a very challenging problem since the data contain complicated, partial, and imprecise information [3][4]. The study of intrinsic value and stock market forecasting, therefore, has significant theoretical implications and a broad range of potential applications.

One of the trickiest challenges in deep learning and machine learning projects is hyperparameter search [4][5]. Deep learning techniques are becoming more sophisticated as they become more widely used, and automated frameworks for optimizing hyperparameters are in greater demand than ever [6]. However, all currently available hyperparameter optimization frameworks require the user to statically define the parameter search space for each model [7]–[9], and in these frameworks the search space can be very difficult to describe for large-scale experiments that involve many candidate models of many conditional variables and a variety of different kinds with vast parameter spaces [10][11].

In recent years, the application of deep learning-based financial modeling tools has grown in popularity [12]. Much recent research focuses on models that combine LSTM with other methods, such as DNNs [13]–[15]. In one line of work, the Attention-LSTM algorithm was optimized with a small-batch gradient descent approach during training, using a short-step iteration method [16][17]; this made it possible to estimate the model more rapidly and accurately.

Deep learning models outperform machine learning methods in terms of performance [18]–[23], and experimental findings show considerably higher classification accuracy [24]–[27]. LSTM is a well-suited algorithm for predicting the direction of stock prices since it takes into account the idea of risk-adjusted return and the recently introduced performance metric of adjusted accuracy [28][29]. Although the stock price prediction models presented in prior research can significantly increase forecast accuracy and are quite resilient, they still have drawbacks: the best sizes of parameters, such as the number of components when designing model parameters, are typically determined by trial and error [30]–[32].

The main objective of this study is to optimize an LSTM network model that can be used to predict stock prices. We therefore propose an LSTM-based deep learning model that utilizes the Optuna framework and evaluate its hyperparameter optimization. A number of LSTM models were created, trained, and tested. Our study makes the following contributions:

▪ Developing the LSTM algorithm model for predicting stock prices.

▪ Applying the Optuna framework to the LSTM model to improve the search for the ideal hyperparameters.

▪ An evaluation of the developed model is carried out.

2. METHODS

This section describes the procedures employed to obtain the desired outcome. To increase the model's accuracy, hyperparameter tuning is applied to the LSTM. This study may be used to enhance the LSTM method for stock price forecasting. A flowchart outlines the various techniques employed; Fig. 1 shows the overall framework of our approach.

Fig. 1. The overall structure of our method

Pre-processing the data is the first step in Fig. 1. Data pre-processing is the practice of processing raw data to make it suitable for use [32]. The processed data is divided into two categories: training data and test data.

The next step is building the LSTM models; four were developed, two with single structures and two with stacked structures. The LSTM models are then tested with two approaches: setting the hyperparameters using the Optuna framework, and tuning them manually without the Optuna framework.

The final step is the analysis process. The goal of the analytical procedure is to demonstrate the method's accuracy in forecasting stock prices using a variety of accuracy metrics.

(Fig. 1 flowchart nodes: Retrieved Dataset → Data Preprocessing → Training Set / Testing Set → LSTM Model with the Optuna optimization process → Predicted Stock Prices → Performance Evaluation)

2.1. Dataset

For this study, historical NEPSE stock price data from 2012 to 2020 were retrieved from http://data.opennepal.net/datasets. The attributes of the NEPSE dataset are symbol, date, open, high, low, close, and volume, and the dataset has 250,901 rows in total. It includes the closing prices of the top 10 trending NEPSE equities, as shown in Fig. 2.

Table 1 displays each ticker's annualized average return. Every ticker shows a significant increase in its closing price from 2016 to 2017, followed by a gradual decrease. Over the full period, ADBL has the highest average return of the ten tickers. Consequently, the ADBL data was selected for training and testing the model. The correlation analysis for the high-return ADBL ticker is displayed in Fig. 3; based on this analysis, every variable is worth keeping for further investigation.
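As an illustration of how the per-ticker averages in Table 1 could be derived from the raw data, the sketch below computes the mean of daily percentage changes of the closing price with pandas. It is a minimal sketch, not the authors' code: the file name is a placeholder, the column names follow the dataset attributes listed above, and the choice of mean daily return as the aggregate is an assumption.

```python
import pandas as pd

# Load the NEPSE historical data (columns: symbol, date, open, high, low, close, volume).
df = pd.read_csv("nepse.csv", parse_dates=["date"])  # file name is illustrative
df = df.sort_values(["symbol", "date"])

# Daily return per ticker: percentage change of the closing price.
df["return"] = df.groupby("symbol")["close"].pct_change()

# Average return per ticker, as reported in Table 1.
avg_returns = df.groupby("symbol")["return"].mean().sort_values(ascending=False)
print(avg_returns.head(10))
```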

Fig. 2. The trend in NEPSE stock

Table 1. Average stock returns in NEPSE

Number  Symbol  Average stock returns
1       ADBL     0.09476244967719016
2       CHCL     0.0004447407747326407
3       CZBIL    0.031137075954316824
4       EBL      0.027805842554793188
5       NABIL    0.01564970186241801
6       NIB      0.020970431795409195
7       PCBL     0.044723533540874195
8       SBI      0.029771719379631172
9       SCB     -0.008406578465089133
10      SRBL     0.06419601965209884

Fig. 3. Correlation analysis for the high-return ADBL ticker


2.2. Data Preprocessing

The data preparation stage comprised three processes: identifying the characteristics of the historical data, normalizing the data, and splitting the dataset into two halves (training data and testing data) [33]. The first stage is determining the number of attributes, i.e., how many dates n before the current date are assumed to influence the current stock price. The second step is transforming the dataset by rescaling the values into the range 0 to 1.

The third stage is separating the dataset into training and testing data based on the number of k-fold cross-validation splits. The training data is used to fit the LSTM parameters based on the optimization results, in order to find the optimum LSTM model, and the testing data is used to assess how well the LSTM model predicts the stock price [34].
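As a concrete illustration of these three steps, the sketch below rescales a closing-price series to [0, 1], builds sliding windows of the n previous dates, and splits the result chronologically. It is a minimal sketch, not the authors' code: the file name, the window length n = 5, and the 80/20 chronological split (used here in place of the paper's k-fold scheme) are assumptions.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(series: np.ndarray, n_steps: int):
    """Turn a 1-D price series into (samples, n_steps) inputs and next-day targets."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return np.array(X), np.array(y)

# Closing prices of the selected ticker (e.g., ADBL); file name is illustrative.
prices = np.loadtxt("adbl_close.csv")

# Step 2: rescale values into [0, 1].
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices.reshape(-1, 1)).ravel()

# Step 1: choose how many past dates (n) influence the current price.
X, y = make_windows(scaled, n_steps=5)       # n = 5 is an assumption

# Step 3: chronological train/test split (a k-fold scheme could replace this).
split = int(0.8 * len(X))                    # 80/20 ratio is an assumption
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# LSTM layers expect inputs of shape (samples, timesteps, features).
X_train = X_train[..., np.newaxis]
X_test = X_test[..., np.newaxis]
```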

2.3. Long Short Term Memory (LSTM)

A long short-term memory (LSTM) network is a specific type of recurrent neural network (RNN), a family of neural networks capable of processing sequential input [35][36]. The LSTM is a unique network structure with three "gate" structures (shown in Fig. 4): an LSTM unit comprises an input gate, a forgetting gate, and an output gate [37][38]. When information enters the LSTM network, rules determine which of it is retained [39][40]; information that does not comply with the algorithm is erased by the forgetting gate, leaving only the data that does.

Fig. 4. Unit Structure for LSTM

Through the gating units, the LSTM is able to add and remove information for neurons. It comprises a sigmoid neural network layer (1) and a pointwise multiplication operation that together determine selectively whether information passes. The sigmoid layer outputs each element as a real number in the range [0, 1], indicating the weight with which the corresponding piece of information passes. The LSTM neural network also contains a layer with the tanh activation function, shown in (2), which is used to update the state of the neurons.

$$\sigma(x) = \frac{1}{1 + e^{-x}} \tag{1}$$

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \tag{2}$$

The LSTM neural network's forgetting gate determines what information from $h_{t-1}$ and $x_t$ should be discarded, assigning a value between 0 and 1 to the neuron state $C_{t-1}$. The forgetting probability is calculated as shown in (3).

$$f_t = \sigma\left(w_f \cdot [h_{t-1}, x_t] + b_f\right) \tag{3}$$

Here $h_{t-1}$ is the previous neuron's output, $x_t$ is the current neuron's input, and $\sigma$ represents the sigmoid function. The input gate determines how much fresh information is added to the neuron state. Updated information is first identified by an input layer with the sigmoid activation function, candidate vectors $\hat{c}_t$ are then generated by a tanh layer, and the state of the neuron is updated, as shown in (4).

$$c_t = f_t * c_{t-1} + i_t * \hat{c}_t \tag{4}$$

Equations (5) and (6) give the calculations for $i_t$ and $\hat{c}_t$.

$$i_t = \sigma\left(w_i \cdot [h_{t-1}, x_t] + b_i\right) \tag{5}$$

$$\hat{c}_t = \tanh\left(w_c \cdot [h_{t-1}, x_t] + b_c\right) \tag{6}$$

The output gate determines how much of the unit state and the current neural unit state is filtered into the output, as illustrated in (7) and (8).

$$o_t = \sigma\left(w_o \cdot [h_{t-1}, x_t] + b_o\right) \tag{7}$$

$$h_t = o_t * \tanh(c_t) \tag{8}$$
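To make the gate equations concrete, here is a minimal NumPy sketch of one LSTM step following equations (1) through (8). It is illustrative rather than the paper's implementation; each weight matrix w_* is assumed to act on the concatenation [h_{t-1}, x_t] and to have shape (units, units + input_dim).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # Eq. (1)

def lstm_step(x_t, h_prev, c_prev, w_f, b_f, w_i, b_i, w_c, b_c, w_o, b_o):
    """One LSTM step following Eqs. (3)-(8)."""
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    f_t = sigmoid(w_f @ z + b_f)               # forgetting gate, Eq. (3)
    i_t = sigmoid(w_i @ z + b_i)               # input gate, Eq. (5)
    c_hat = np.tanh(w_c @ z + b_c)             # candidate vector, Eq. (6)
    c_t = f_t * c_prev + i_t * c_hat           # neuron state update, Eq. (4)
    o_t = sigmoid(w_o @ z + b_o)               # output gate, Eq. (7)
    h_t = o_t * np.tanh(c_t)                   # unit output, Eq. (8)
    return h_t, c_t
```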

2.4. Optuna Framework for Hyperparameter Optimization

The Optuna framework is software for deep learning and machine learning hyperparameter tuning [41]. It has the following benefits: (1) it enables users to construct the parameter search space dynamically; (2) it makes efficient use of search and pruning techniques; and (3) it has a user-friendly, adaptable architecture that can be applied to a wide range of tasks, from lightweight experiments carried out via interactive interfaces to scalable distributed computing.
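The sketch below shows how a search over the hyperparameters examined in this study (optimizer, LSTM hidden units, dropout rate, epochs, batch size, and learning rate) could be expressed with Optuna's trial API. It is a sketch under stated assumptions, not the authors' exact setup: the search ranges, the 50-trial budget, and the use of a Keras validation split as the objective are illustrative choices, and X_train/y_train are the arrays from the preprocessing sketch above.

```python
import optuna
from tensorflow import keras

def objective(trial: optuna.Trial) -> float:
    # Hyperparameters examined in this study; the ranges are illustrative.
    optimizer_name = trial.suggest_categorical(
        "optimizer", ["SGD", "Adagrad", "RMSprop", "Nadam", "Adamax", "Adam"])
    units = trial.suggest_int("lstm_units", 50, 4000, step=50)
    dropout = trial.suggest_float("dropout_rate", 0.0, 0.5)
    epochs = trial.suggest_int("epochs", 10, 100)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64, 128])
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)

    # Single-LSTM architecture: LSTM -> Dropout -> Dense(1).
    model = keras.Sequential([
        keras.layers.LSTM(units, input_shape=X_train.shape[1:]),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1),
    ])
    opt = keras.optimizers.get(
        {"class_name": optimizer_name, "config": {"learning_rate": lr}})
    model.compile(optimizer=opt, loss="mse")

    history = model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
                        validation_split=0.2, verbose=0)
    return min(history.history["val_loss"])  # loss for Optuna to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)  # trial budget is an assumption
print(study.best_params)
```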

2.5. Metrics for Evaluating Performance

Accuracy calculations are performed to assess the effectiveness of the developed LSTM model and to show how closely the forecast outcomes match the actual data. In this work, the assessment index employed was the root mean square error (RMSE) [42], [43]. The evaluation index calculates the error rate of the forecasting results, as shown in (9).

$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2} \tag{9}$$
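As a sketch, equation (9) can be computed directly with NumPy. Note that if the model predicts normalized values, they would need to be mapped back to price units with the fitted scaler before scoring; that the reported train and test RMSE scores are expressed in price units is an assumption.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error, Eq. (9)."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# If predictions are on the [0, 1] scale, invert the normalization first, e.g.:
# y_pred = scaler.inverse_transform(y_pred.reshape(-1, 1)).ravel()
```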

3. RESULTS AND DISCUSSION

In this study, an LSTM with optimized parameters is used to forecast the stock price utilizing the Optuna framework. We examined a number of hyperparameters across several LSTM architectures, including the optimizer (SGD, Adagrad, RMSprop, Nadam, Adamax, and Adam), LSTM hidden units, dropout rate, epochs, batch size, and learning rate. We created four LSTM models to be tested: model 1 single LSTM, model 2 single LSTM, model 1 stacked LSTM, and model 2 stacked LSTM. The four LSTM models are tested on the price of ADBL's stock. Fig. 5 displays the closing price of ADBL shares from 2012 to 2020.

Fig. 5. The closing price of ADBL

3.1. Analysis of a Single LSTM Variant 1 Model

The single-variant 1 LSTM model was trained and tested; its structure is shown in Table 2. The Optuna framework was used to optimize the LSTM model's hyperparameters. The evaluation results for single LSTM architecture variant 1 are shown in Fig. 6 for a number of hyperparameters, including the optimizer, hidden LSTM units, dropout rate, epochs, batch size, and learning rate. The single-variant 1 LSTM model's training and testing scores from the root mean square error (RMSE) evaluation are displayed in Table 3. The single-variant 1 LSTM model's evaluation is shown in Fig. 7, its predictions on the training set in Fig. 8, and its predictions on the test set in Fig. 9.

Table 2. Single-variant 1 LSTM model

Layer (type) Output Shape Param

lstm_50 (LSTM) (None, 3950) 62457400

dropout_50 (Dropout) (None, 3950) 0

dense_50 (Dense) (None, 1) 3951
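For reference, a hedged Keras sketch of this architecture follows. The LSTM → Dropout → Dense(1) layout and the 3,950 hidden units come from Table 2 (the 62,457,400 LSTM parameters are consistent with 3,950 units over two input features per timestep), while the dropout rate, timestep count, and optimizer stand in for the Optuna-selected values, which are assumptions here.

```python
from tensorflow import keras

def build_single_lstm(units: int = 3950, dropout: float = 0.2,
                      timesteps: int = 1, n_features: int = 2) -> keras.Model:
    """Single LSTM variant: LSTM -> Dropout -> Dense(1), as in Table 2.

    LSTM parameters = 4 * units * (units + n_features + 1) = 62,457,400
    for units=3950 and n_features=2, matching Table 2; the dropout rate
    and timestep count are assumptions.
    """
    model = keras.Sequential([
        keras.layers.LSTM(units, input_shape=(timesteps, n_features)),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")  # optimizer chosen via the search
    return model

model = build_single_lstm()
model.summary()  # reproduces the layer and parameter layout of Table 2
```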

Table 3. Loss Analysis single-variant 1 LSTM model

Loss Analysis Train Score Test Score

RMSE 14.46 7.21

Fig. 6. The significance of hyperparameters

Fig. 7. Single-variant 1 LSTM model evaluation

Fig. 8. Prediction using the on-train set single-variant 1 LSTM model


Fig. 9. Prediction using the on-test set single-variant 1 LSTM model

3.2. Analysis of a Single LSTM Variant 2 Model

The single-variant 2 LSTM model was trained and tested; its structure is shown in Table 4. The Optuna framework was used to optimize the LSTM model's hyperparameters. The evaluation results for single LSTM architecture variant 2 are shown in Fig. 10 for a number of hyperparameters, including the optimizer, hidden LSTM units, dropout rate, epochs, batch size, and learning rate. The single-variant 2 LSTM model's training and testing scores from the root mean square error (RMSE) evaluation are displayed in Table 5. The single-variant 2 LSTM model's evaluation is shown in Fig. 11, its predictions on the training set in Fig. 12, and its predictions on the test set in Fig. 13.

Table 4. Single-variant 2 LSTM model

Layer (type) Output Shape Param

lstm_50 (LSTM) (None, 2650) 28164200

dropout_50 (Dropout) (None, 2650) 0

dense_50 (Dense) (None, 1) 2651

Table 5. Loss Analysis single-variant 2 LSTM model

Loss Analysis Train Score Test Score

RMSE 14.66 7.44

Fig. 10. The significance of hyperparameters

Fig. 11. Single-variant 2 LSTM model evaluation


Fig. 12. Prediction using the on-train set single-variant 2 LSTM model

Fig. 13. Prediction using the on-test set single-variant 2 LSTM model

3.3. Analysis of a Stacked LSTM Variant 1 Model

The stacked-variant 1 LSTM model was trained and tested; its structure is shown in Table 6. The Optuna framework was used to optimize the LSTM model's hyperparameters. The evaluation results for stacked LSTM architecture variant 1 are shown in Fig. 14 for a number of hyperparameters, including the optimizer, hidden LSTM units, dropout rate, epochs, batch size, and learning rate. The stacked-variant 1 LSTM model's training and testing scores from the root mean square error (RMSE) evaluation are displayed in Table 7. The stacked-variant 1 LSTM model's evaluation is shown in Fig. 15, its predictions on the training set in Fig. 16, and its predictions on the test set in Fig. 17.

Table 6. Stacked-variant 1 LSTM model

Layer (type) Output Shape Param

lstm_120 (LSTM)         (None, 1, 2550)   26040600
dropout_120 (Dropout)   (None, 1, 2550)   0
lstm_121 (LSTM)         (None, 1, 850)    11563400
dropout_121 (Dropout)   (None, 1, 850)    0
lstm_122 (LSTM)         (None, 1, 3250)   53313000
dropout_122 (Dropout)   (None, 1, 3250)   0
lstm_123 (LSTM)         (None, 3600)      98654400
dropout_123 (Dropout)   (None, 3600)      0
dense_30 (Dense)        (None, 1)         3601

Table 7. Loss Analysis stacked-variant 1 LSTM model

Loss Analysis Train Score Test Score

RMSE 14.58 7.72


Fig. 14. The significance of hyperparameters

Fig. 15. Stacked-variant 1 LSTM model evaluation

Fig. 16. Prediction using the on-train set stacked-variant 1 LSTM model

Fig. 17. Prediction using the on-test set stacked-variant 1 LSTM model


3.4. Analysis of a Stacked LSTM Variant 2 Model

The stacked-variant 2 LSTM model was trained and tested; its structure is shown in Table 8. The Optuna framework was used to optimize the LSTM model's hyperparameters. The evaluation results for stacked LSTM architecture variant 2 are shown in Fig. 18 for a number of hyperparameters, including the optimizer, hidden LSTM units, dropout rate, epochs, batch size, and learning rate. The stacked-variant 2 LSTM model's training and testing scores from the root mean square error (RMSE) evaluation are displayed in Table 9. The stacked-variant 2 LSTM model's evaluation is shown in Fig. 19, its predictions on the training set in Fig. 20, and its predictions on the test set in Fig. 21.

Table 8. Stacked-variant 2 LSTM model

Layer (type) Output Shape Param

lstm_120 (LSTM)         (None, 1, 3250)   42341000
dropout_120 (Dropout)   (None, 1, 3250)   0
lstm_121 (LSTM)         (None, 1, 1600)   31046400
dropout_121 (Dropout)   (None, 1, 1600)   0
lstm_122 (LSTM)         (None, 1, 750)    7053000
dropout_122 (Dropout)   (None, 1, 750)    0
lstm_123 (LSTM)         (None, 400)       1841600
dropout_123 (Dropout)   (None, 400)       0
dense_30 (Dense)        (None, 1)         401

Table 9. Loss Analysis stacked-variant 2 LSTM model

Loss Analysis Train Score Test Score

RMSE 13.65 7.31

Fig. 18. The significance of hyperparameters

Fig. 19. Stacked-variant 2 LSTM model evaluation


Fig. 20. Prediction using the on-train set stacked-variant 2 LSTM model

Fig. 21. Prediction using the on-test set stacked-variant 2 LSTM model

3.5. Analysis of Overall Hyperparameter Optimization Outcomes

Four LSTM models were constructed and tested: model 1 single LSTM, model 2 single LSTM, model 1 stacked LSTM, and model 2 stacked LSTM. In comparison to the other models, single LSTM variant 1 has the lowest loss, as seen in Table 10. Table 11 displays the findings of the loss analysis for both manual and automatic hyperparameter tuning. Manual hyperparameter tuning has a substantially higher loss value than automatic tuning. Single LSTM variant 1 offers the lowest losses of all the models, with the lowest RMSE score of 7.21, and automatic hyperparameter tuning yields lower losses than manual tuning.

Table 10. Results of loss analysis for the LSTM model

Models Loss Analysis Train Score Test Score

Single LSTM Variant 1 Model RMSE 14.46 7.21

Single LSTM Variant 2 Model RMSE 14.66 7.44

Stacked LSTM Variant 1 Model RMSE 14.58 7.72

Stacked LSTM Variant 2 Model RMSE 13.65 7.31

Table 11. Results of loss analysis for both manual and automatic hyperparameter tuning

Models  Hyperparameter Tuning  Loss Analysis  Model Loss

Single LSTM Variant 1 Model Manual RMSE 10.95

Single LSTM Variant 2 Model Manual RMSE 10.83

Stacked LSTM Variant 1 Model Manual RMSE 10.53

Stacked LSTM Variant 2 Model Manual RMSE 11.94

Single LSTM Variant 1 Model Automated RMSE 7.21

Single LSTM Variant 2 Model Automated RMSE 7.44

Stacked LSTM Variant 1 Model Automated RMSE 7.72

Stacked LSTM Variant 2 Model Automated RMSE 7.31


4. CONCLUSION

This paper presents an LSTM-based network model for stock price forecasting. The Optuna framework was used for the hyperparameter optimization of four LSTM network models, which were created with distinct variations. The model structure, algorithmic framework, and experimental design are all described. The experimental results indicated that of the four LSTM models tested (model 1 single LSTM, model 2 single LSTM, model 1 stacked LSTM, and model 2 stacked LSTM), model 1 single LSTM was the most effective. Single LSTM variant 1 offers the lowest losses compared to the other models, with the lowest RMSE score of 7.21, and automatic hyperparameter tuning yields lower losses than manual tuning. Although the model produces positive results, several areas may be strengthened. For instance, a basic arithmetic mean is used to calculate total loss during training, with the intention of improving the model by lowering total loss. When the overall loss is at its lowest point, some details, such as the extreme values of each sub-loss and the oscillation during the loss-reduction process, are not taken into account by this way of calculating losses. In the next stage, the loss calculation technique will be optimized to increase the model's average accuracy, and dimension reduction of the input parameters will be studied.

REFERENCES

[1] I. K. Nti, A. F. Adekoya, and B. A. Weyori, “A novel multi-source information-fusion predictive framework based on deep neural networks for accuracy enhancement in stock market prediction,” J Big Data, vol. 8, no. 1, 2021, https://doi.org/10.1186/s40537-020-00400-y.
[2] W. Budiharto, “Data science approach to stock prices forecasting in Indonesia during Covid-19 using Long Short-Term Memory (LSTM),” J Big Data, vol. 8, no. 1, 2021, https://doi.org/10.1186/s40537-021-00430-0.
[3] D. Selvamuthu, V. Kumar, and A. Mishra, “Indian stock market prediction using artificial neural networks on tick data,” Financial Innovation, vol. 5, no. 1, 2019, https://doi.org/10.1186/s40854-019-0131-7.
[4] J. Shen and M. O. Shafiq, “Short-term stock market price trend prediction using a comprehensive deep learning system,” J Big Data, vol. 7, no. 1, 2020, https://doi.org/10.1186/s40537-020-00333-6.
[5] M. Göçken, M. Özçalıcı, A. Boru, and A. T. Dosdoğru, “Stock price prediction using hybrid soft computing models incorporating parameter tuning and input variable selection,” Neural Comput Appl, vol. 31, no. 2, pp. 577–592, 2019, https://doi.org/10.1007/s00521-017-3089-2.
[6] D. Lien Minh, A. Sadeghi-Niaraki, H. D. Huy, K. Min, and H. Moon, “Deep learning approach for short-term stock trends prediction based on two-stream gated recurrent unit network,” IEEE Access, vol. 6, pp. 55392–55404, 2018, https://doi.org/10.1109/ACCESS.2018.2868970.
[7] F. G. D. C. Ferreira, A. H. Gandomi, and R. T. N. Cardoso, “Artificial Intelligence Applied to Stock Market Trading: A Review,” IEEE Access, vol. 9, pp. 30898–30917, 2021, https://doi.org/10.1109/ACCESS.2021.3058133.
[8] F. Kamalov, “Forecasting significant stock price changes using neural networks,” Neural Comput Appl, vol. 32, no. 23, pp. 17655–17667, 2020, https://doi.org/10.1007/s00521-020-04942-3.
[9] A. Lawi, H. Mesra, and S. Amir, “Implementation of Long Short-Term Memory and Gated Recurrent Units on grouped time-series data to predict stock prices accurately,” J Big Data, vol. 9, no. 1, 2022, https://doi.org/10.1186/s40537-022-00597-0.
[10] C. Xiao, W. Xia, and J. Jiang, “Stock price forecast based on combined model of ARI-MA-LS-SVM,” Neural Comput Appl, vol. 32, no. 10, pp. 5379–5388, 2020, https://doi.org/10.1007/s00521-019-04698-5.
[11] Z. Jin, Y. Yang, and Y. Liu, “Stock closing price prediction based on sentiment analysis and LSTM,” Neural Comput Appl, vol. 32, no. 13, pp. 9713–9729, 2020, https://doi.org/10.1007/s00521-019-04504-2.
[12] Z. Hu, Y. Zhao, and M. Khushi, “A survey of forex and stock price prediction using deep learning,” Applied System Innovation, vol. 4, no. 1, pp. 1–30, 2021, https://doi.org/10.3390/ASI4010009.
[13] T. T. Nguyen and S. Yoon, “A novel approach to short-term stock price movement prediction using transfer learning,” Applied Sciences (Switzerland), vol. 9, no. 22, 2019, https://doi.org/10.3390/app9224745.
[14] Srivinay, B. C. Manujakshi, M. G. Kabadi, and N. Naik, “A Hybrid Stock Price Prediction Model Based on PRE and Deep Neural Network,” Data (Basel), vol. 7, no. 5, pp. 1–11, 2022, https://doi.org/10.3390/data7050051.
[15] V. Chandrasekara, C. Tilakaratne, and M. Mammadov, “An improved probabilistic neural network model for directional prediction of a stock market index,” Applied Sciences (Switzerland), vol. 9, no. 24, pp. 1–21, 2019, https://doi.org/10.3390/app9245334.
[16] D. Wei, “Prediction of Stock Price Based on LSTM Neural Network,” Proceedings - 2019 International Conference on Artificial Intelligence and Advanced Manufacturing, AIAM 2019, pp. 544–547, 2019, https://doi.org/10.1109/AIAM48774.2019.00113.
[17] T. Gao and Y. Chai, “Improving Stock Closing Price Prediction Using Recurrent Neural Network and Technical Indicators,” Neural Comput, vol. 30, pp. 2709–2733, 2018, https://doi.org/10.1162/NECO.
[18] N. Naik and B. R. Mohan, “Stock price movements classification using machine and deep learning techniques-the case study of indian stock market,” Springer International Publishing, vol. 1000, 2019, https://doi.org/10.1007/978-3-030-20257-6_38.
[19] Q. Jiang, C. Tang, C. Chen, X. Wang, and Q. Huang, “Stock Price Forecast Based on LSTM Neural Network,” Springer International Publishing, pp. 393–408, 2019, https://doi.org/10.1007/978-3-319-93351-1_32.
[20] P. Yu and X. Yan, “Stock price prediction based on deep neural networks,” Neural Comput Appl, vol. 32, no. 6, pp. 1609–1628, 2020, https://doi.org/10.1007/s00521-019-04212-x.
[21] S. M. Carta, S. Consoli, L. Piras, A. S. Podda, and D. R. Recupero, “Explainable Machine Learning Exploiting News and Domain-Specific Lexicon for Stock Market Forecasting,” IEEE Access, vol. 9, pp. 30193–30205, 2021, https://doi.org/10.1109/ACCESS.2021.3059960.
[22] X. Yuan, J. Yuan, T. Jiang, and Q. U. Ain, “Integrated Long-Term Stock Selection Models Based on Feature Selection and Machine Learning Algorithms for China Stock Market,” IEEE Access, vol. 8, pp. 22672–22685, 2020, https://doi.org/10.1109/ACCESS.2020.2969293.
[23] M. Nikou, G. Mansourfar, and J. Bagherzadeh, “Stock price prediction using DEEP learning algorithm and its comparison with machine learning algorithms,” Intelligent Systems in Accounting, Finance and Management, vol. 26, no. 4, pp. 164–174, 2019, https://doi.org/10.1002/isaf.1459.
[24] G. Ding and L. Qin, “Study on the prediction of stock price based on the associated network model of LSTM,” International Journal of Machine Learning and Cybernetics, vol. 11, no. 6, pp. 1307–1317, 2020, https://doi.org/10.1007/s13042-019-01041-1.
[25] T. Kabbani and E. Duman, “Deep Reinforcement Learning Approach for Trading Automation in the Stock Market,” SSRN Electronic Journal, vol. 10, no. July, pp. 93564–93574, 2022, https://doi.org/10.2139/ssrn.4100283.
[26] D. Song, A. M. Chung Baek, and N. Kim, “Forecasting Stock Market Indices Using Padding-Based Fourier Transform Denoising and Time Series Deep Learning Models,” IEEE Access, vol. 9, pp. 83786–83796, 2021, https://doi.org/10.1109/ACCESS.2021.3086537.
[27] E. K. Ampomah, Z. Qin, and G. Nyame, “Evaluation of tree-based ensemble machine learning models in predicting stock price direction of movement,” Information (Switzerland), vol. 11, no. 6, 2020, https://doi.org/10.3390/info11060332.
[28] S. Kim, S. Ku, W. Chang, and J. W. Song, “Predicting the Direction of US Stock Prices Using Effective Transfer Entropy and Machine Learning Techniques,” IEEE Access, vol. 8, pp. 111660–111682, 2020, https://doi.org/10.1109/ACCESS.2020.3002174.
[29] M. Wen, P. Li, L. Zhang, and Y. Chen, “Stock market trend prediction using high-order information of time series,” IEEE Access, vol. 7, pp. 28299–28308, 2019, https://doi.org/10.1109/ACCESS.2019.2901842.
[30] S. Chen and C. Zhou, “Stock Prediction Based on Genetic Algorithm Feature Selection and Long Short-Term Memory Neural Network,” IEEE Access, vol. 9, pp. 9066–9072, 2021, https://doi.org/10.1109/ACCESS.2020.3047109.
[31] M. R. Hossain, M. T. Ismail, and S. A. B. A. Karim, “Improving Stock Price Prediction Using Combining Forecasts Methods,” IEEE Access, vol. 9, pp. 132319–132328, 2021, https://doi.org/10.1109/ACCESS.2021.3114809.
[32] S. M. Idrees, M. A. Alam, and P. Agarwal, “A Prediction Approach for Stock Market Volatility Based on Time Series Data,” IEEE Access, vol. 7, pp. 17287–17298, 2019, https://doi.org/10.1109/ACCESS.2019.2895252.
[33] J. Qiu, B. Wang, and C. Zhou, “Forecasting stock prices with long-short term memory neural network based on attention mechanism,” PLoS One, vol. 15, no. 1, pp. 1–15, 2020, https://doi.org/10.1371/journal.pone.0227222.
[34] H. K. Choi, “Stock Price Correlation Coefficient Prediction with ARIMA-LSTM Hybrid Model,” 2018. [Online]. Available: https://doi.org/10.48550/arXiv.1808.01560.
[35] W. Lu, J. Li, Y. Li, A. Sun, and J. Wang, “A CNN-LSTM-based model to forecast stock prices,” Complexity, vol. 2020, 2020, https://doi.org/10.1155/2020/6622927.
[36] X. Yu and D. Li, “Important trading point prediction using a hybrid convolutional recurrent neural network,” Applied Sciences (Switzerland), vol. 11, no. 9, 2021, https://doi.org/10.3390/app11093984.
[37] Y. Hao and Q. Gao, “Predicting the trend of stock market index using the hybrid neural network based on multiple time scale feature learning,” Applied Sciences (Switzerland), vol. 10, no. 11, 2020, https://doi.org/10.3390/app10113961.
[38] S. Kumar and D. Ningombam, “Short-Term Forecasting of Stock Prices Using Long Short Term Memory,” Proceedings - 2018 International Conference on Information Technology, ICIT 2018, pp. 182–186, 2018, https://doi.org/10.1109/ICIT.2018.00046.
[39] R. Xiao, Y. Feng, L. Yan, and Y. Ma, “Predict stock prices with ARIMA and LSTM,” pp. 1–14, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2209.02407.
[40] X. Ji, J. Wang, and Z. Yan, “A stock price prediction method based on deep learning technology,” International Journal of Crowd Science, vol. 5, no. 1, pp. 55–72, 2021, https://doi.org/10.1108/IJCS-05-2020-0012.
[41] T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama, “Optuna: A Next-generation Hyperparameter Optimization Framework,” ACM Applied Data Science, pp. 2623–2631, 2019, https://doi.org/10.1145/3292500.3330701.
[42] S. Lahmiri, “A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling,” Fluctuation and Noise Letters, vol. 17, no. 1, pp. 1–14, 2018, https://doi.org/10.1142/S0219477518500074.
[43] Q. Chen, W. Zhang, and Y. Lou, “Forecasting Stock Prices Using a Hybrid Deep Learning Model Integrating Attention Mechanism, Multi-Layer Perceptron, and Bidirectional Long-Short Term Memory Neural Network,” IEEE Access, vol. 8, pp. 117365–117376, 2020, https://doi.org/10.1109/ACCESS.2020.3004284.


BIOGRAPHY OF AUTHORS

Edi Ismanto completed a bachelor's degree in the Informatics Engineering Department of the State Islamic University of Sultan Syarif Kasim Riau and a master's degree in computer science at Putra Indonesia University Padang. He now works as a lecturer in the Department of Informatics, Universitas Muhammadiyah Riau, with research interests in machine learning algorithms and AI.

Vitriani obtained a bachelor's degree in information technology from Putra Indonesia University Padang, completed a master's degree in computer science at the same university, and later completed a doctorate at Padang State University in the same field of study.
