Chapter 9: Conclusion

9.4 Limitations and directions for future research

3) The models explicitly remoulded for the time series of the NZX 50 Index [16] were subsequently applied to the ASX 200 Index [20]. This was done to establish whether models designed for the NZX 50 Index can be used effectively on a different time series. The results, presented in Section 8.4, confirmed the predictive superiority of the Univariate LSTM, with the Multivariate LSTM as the next best performer on every evaluation measure used in the study.

The conclusions summarised in (1) – (3) above are consistent; thus, these findings support valid, reliable and generalisable conclusions about the forecasting precision of the Univariate LSTM, with the Multivariate LSTM as the next most robust model.

Additionally, the results confirm the predictive dominance of the deep learning LSTM [25] architecture over the evaluated statistical models (HWES [22]–[24] and ARIMA [21]). The detailed evaluations are presented in Sections 8.2 – 8.4.

This research investigated the forecasting precision of the reformulated HWES [22]–[24], ARIMA [21] and LSTM [25] models when applied to the NZX 50 Index [16], which is considered the benchmark index of the New Zealand stock market. I have also investigated the forecasting precision of the same reformulated models when applied to forecasting the ASX 200 Index [20]. Multiple tests were conducted on these reformulated HWES [22]–[24], ARIMA [21] and LSTM [25] models.

These investigations confirmed the superior forecasting precision of LSTM over the HWES and ARIMA models, and the Univariate LSTM's predictive dominance over the other models assessed in this research. Consequently, I consider that this research helps to close the empirical research gaps identified in Chapter 3.
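To illustrate the univariate set-up evaluated above, the sketch below frames a price series into supervised (window, next-value) pairs of the kind a univariate LSTM consumes. The window length, array names and stand-in data are illustrative assumptions, not the exact configuration used in this research.

```python
import numpy as np

def make_windows(series, window=5):
    """Frame a univariate series into (samples, window) inputs and
    next-step targets, the supervised shape a univariate LSTM expects."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    # Recurrent layers consume 3-D input: (samples, timesteps, features=1)
    return np.asarray(X)[..., np.newaxis], np.asarray(y)

closes = np.arange(10, dtype=float)  # stand-in for NZX 50 closing prices
X, y = make_windows(closes, window=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```

The multivariate variant differs only in the final dimension, which holds one column per input feature instead of a single closing-price column.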

This research has some limitations, which create potential opportunities for future research; the remainder of this section outlines them.

In addition to the chosen HWES [22]–[24] and ARIMA [21] models, Chapter 3 also revealed that ARCH and GARCH models have been used to analyse financial time series.

Still, their applications to the New Zealand financial markets are limited. Hence, an exciting future research proposition would be to compare the predictive precision of ARCH and GARCH models, as alternatives to the chosen statistical models (HWES [22]–[24] and ARIMA [21]), by applying them to New Zealand and overseas financial markets and contrasting their forecasting efficiency.
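To make the proposition concrete, the minimal sketch below computes the GARCH(1,1) conditional-variance recursion, which captures the volatility clustering that HWES and ARIMA do not model. The parameter values and initialisation are illustrative assumptions, not fitted estimates.

```python
import numpy as np

def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """Conditional-variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # a common initialisation choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.standard_normal(500)  # stand-in for daily index returns
sigma2 = garch_variance(r)
print(sigma2[:3])
```

With alpha + beta < 1 the recursion is stationary, and its long-run level is omega / (1 - alpha - beta); in practice the parameters would be estimated by maximum likelihood rather than fixed as here.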

In addition to LSTM [25], Chapter 3 also identified other popular DL models {CNN [205]–[207], DNN [209]–[213], GRU [219]–[220], [223], GAN [226]–[227] and RNN [189], [218]–[220]} and hybrid models {ARFIMA [157], ANFIS [159], [162], PFTS–LAPSO [158], Wavelet-MARS-SVR [159]} which have not yet been applied to the New Zealand financial markets. The application of the LSTM [25] architecture to NZX 50 Index prediction produced significantly better forecasting precision than the conventional statistical models (HWES [22]–[24] and ARIMA [21]); thus, I considered that there was no fundamental reason to employ alternative DL or hybrid models. Had LSTM [25] not produced superior results compared with the tested HWES [22]–[24] and ARIMA [21] models, I would have implemented an alternative DL model (GRU, CNN, DNN, GAN) or a hybrid model (ARFIMA, ANFIS or Wavelet-MARS-SVR), in the stated order of preference. Thus, an exciting research avenue is to apply the identified DL and hybrid models to New Zealand and foreign financial markets to compare their predictive efficacy across different markets.

Another research direction could be to compare the reliability and predictive precision of the redesigned models (ARIMA [21], HWES [22]–[24] and LSTM [25]) with other popular DL models {CNN [205]–[207], DNN [209]–[213], GRU [219]–[220], [223], GAN [226]–[227] and RNN [189], [218]–[220]} and hybrid models {ARFIMA [157], ANFIS [159], [162], PFTS–LAPSO [158], Wavelet-MARS-SVR [159]}. The comparisons can be made when the proposed models are applied to the same or different indices, to time series of different frequencies, or to domestic or foreign individual stocks.

Another possible research opportunity is to focus on financial textual analysis, a methodological toolbox in finance. Financial textual analysis examines text data from news stories (sourced from forums, databases, Twitter and so forth) to capture market sentiment and predict the stock market's future direction. Deep learning methodologies have facilitated the handling of multiple modalities and the use of ensemble models that incorporate news and other forms of data.

Several new architectures have been developed in the last few years, and these models have leveraged DL in the time series forecasting domain. For example, [339] proposed Neural Basis Expansion Analysis for Interpretable Time Series Forecasting (N-Beats), which deploys a deep neural architecture built on backward and forward residual links with a deep stack of fully connected layers. N-Beats is a pure learning-based model that achieves state-of-the-art performance on multiple large-scale datasets and has many desirable properties: fast training, interpretability and applicability, without modification, to a wide array of target domains. Similarly, [340] designed a hybrid forecasting model integrating Exponential Smoothing with Recurrent Neural Networks (ES-RNN), the winning submission in the M4 forecasting competition established by [341]. This hybrid model creates a common framework that merges standard ES with LSTM to capture non-stationary trends and other effects in historical time series. Likewise, DeepAR, developed by Amazon Research [342], produces probabilistic forecasts by training an autoregressive recurrent neural network on many related time series; it can efficiently learn a global model from those series and swiftly learn complex patterns.

Transformers [343], a force among DL models, have earned attention lately due to their efficiency across various domains. Recently, a fleet of “X-former” models has been proposed (Reformer, Linformer, Performer and Longformer, to name a few), which improve on the original Transformer architecture [344]. Likewise, Spacetimeformer, proposed by [345], is one of the newest state-of-the-art sequence-to-sequence models, explicitly designed to jointly discover interactions between space, time and value information along an extended sequence.
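The decomposition idea behind ES-RNN can be sketched in a few lines: an exponential-smoothing level de-trends the series, and the level-normalised residual is what the recurrent network then models. This is a schematic of the idea only, not the M4-winning implementation; the smoothing coefficient and toy data are assumptions.

```python
import numpy as np

def es_level(series, alpha=0.3):
    """Simple exponential-smoothing level, the 'ES' half of the
    ES-RNN idea: level[t] = alpha * y[t] + (1 - alpha) * level[t-1]."""
    level = np.empty(len(series))
    level[0] = series[0]
    for t in range(1, len(series)):
        level[t] = alpha * series[t] + (1 - alpha) * level[t - 1]
    return level

y = np.array([100.0, 102.0, 101.0, 105.0, 107.0])  # toy price series
level = es_level(y)
# The level-normalised series is the de-trended signal an RNN/LSTM
# would then model, instead of the raw non-stationary prices.
normalised = y / level
print(level.round(2), normalised.round(3))
```

In the full ES-RNN the smoothing coefficients are learned jointly with the network weights, which is what makes the framework a genuine hybrid rather than a two-stage pipeline.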

Similarly, a novel attention-based architecture known as the Temporal Fusion Transformer (TFT), a transformer-based time series forecasting model, was developed by [346]; it combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. The models in [339]‒[346] are the latest additions to the time series forecasting domain; hence, this research had no opportunity to investigate them. An exciting future research opportunity would be to apply [339]‒[346] to the time series of New Zealand and foreign financial markets.