
Journal of Information Technology and Computer Science Volume 5, Number 3, December 2020, pp. 313-324

Journal Homepage: www.jitecs.ub.ac.id

Comparison of Neural Network and Recurrent Neural Network to Predict Rice Production in East Java

Andi Hamdianah1, Wayan Firdaus Mahmudy2, Eko Widaryanto3,

1,2Faculty of Computer Science, University of Brawijaya 3Faculty of Agriculture, University of Brawijaya

1[email protected], 2[email protected], 3[email protected] Received 30 January 2020; accepted 30 December 2020

Abstract. Rice is the staple food for most of the population of Indonesia and is obtained from the rice plant. To support food availability and food security in Indonesia, predictions of the annual rice yield of a region are required. Weather strongly affects production, so this study uses weather parameters as input parameters. The input parameters are fed into a Recurrent Neural Network trained with Backpropagation, and the results are compared with those of a Neural Network trained with Backpropagation to determine the more effective method. In this study, the Recurrent Neural Network gives better prediction results than the Neural Network. Based on the computational experiments, the Neural Network obtained a Mean Square Error of 0.000878 with a Mean Absolute Percentage Error of 10.8832%, while the Recurrent Neural Network obtained a Mean Square Error of 0.00104 with a lower Mean Absolute Percentage Error of 10.3804%.

Keywords: Neural Network, Recurrent Neural Network, Prediction, Rice Productivity, East Java

1 Introduction

Rice is the staple food of most people in Indonesia. Based on statistical data from the Central Statistics Agency (CSA), rice consumption per capita per week reaches 1.668 kg. This average rice consumption is higher than that of any other staple food.

According to the same data, the average weekly per capita consumption of other staples such as corn, sago and tubers does not reach 0.5 kg. CSA data for 2018 show that rice production fluctuates over the year, while rice consumption tends to be stable. A prediction method is therefore needed to forecast rice production as the basis for a strategy so that Indonesia does not need to import rice as in previous years.

Rice production prediction has been carried out using several methods. The parameters used include paddy field area, standard paddy area, harvested area, planted area, production, rainfall, rainy days, stem borer, rats, brown planthopper, blast, golden snail, bacterial leaf blight, false white pest and productivity level [1][2].

Another study used support vector machines (SVM), with precipitation, minimum temperature, average temperature, maximum temperature, crop evapotranspiration, production area and yield for the kharif season (June to November) as parameters [3]. The same authors also used a multilayer perceptron [4].


Table 1 Rice consumption and prediction

Date        Rice production    Rice consumption
1/1/2018    1550000            2510000
2/1/2018    3210000            2270000
3/1/2018    5420000            2510000
4/1/2018    4200000            2430000
5/1/2018    2720000            2510000
6/1/2018    2540000            2430000
7/1/2018    3070000            2510000
8/1/2018    2990000            2510000
9/1/2018    2780000            2430000
10/1/2018   1520000            2510000
11/1/2018   1200000            2430000
12/1/2018   1220000            2510000
Total       32420000           29560000

The three studies above, conducted on the same object, show that weather is one of the variables most often used to predict rice productivity. Therefore, this study also uses weather variables together with historical production data to predict rice production.

2 Rice

Rice (Oryza sativa L.) is the main food commodity that is cultivated in Indonesia.

Based on how they are cultivated, rice plants can be grouped into lowland rice, upland (ladang) rice and swamp rice [5]. One factor that influences rice productivity is the weather; according to Cahyaningtyas (2018), changes in air temperature affect rice productivity by 28%.

3 Methodology

This paper compares two methods, with the aim of finding the more effective method for making predictions.

3.1 Data

Based on the literature study, nine data variables are used, consisting of eight input variables and one output variable. The input variables are city, minimum temperature, maximum temperature, rainfall, irradiation time, and the time-series 1, time-series 2 and time-series 3 yields (the yields of the preceding periods); the output variable is the yield. The data are annual data from 2007 to 2018, about 12 years.
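To make the data layout concrete, the sketch below (ours, not the paper's code) shows one way the eight input variables and one output could be assembled per record, with the city encoded as an integer and the three time-series features taken as the yields of the previous one, two and three periods; all function and field names are hypothetical.

```python
# Hypothetical sketch of assembling the 8 input variables and 1 output variable.
def build_records(weather_rows, yields):
    """weather_rows: dicts with keys city, year, t_min, t_max, rain, sun.
    yields: dict mapping (city, year) -> rice yield."""
    records = []
    for row in weather_rows:
        city, year = row["city"], row["year"]
        lags = [yields.get((city, year - k)) for k in (1, 2, 3)]  # time-series 1-3
        target = yields.get((city, year))
        if None in lags or target is None:
            continue  # need a full yield history and a target value
        x = [city, row["t_min"], row["t_max"], row["rain"], row["sun"], *lags]
        records.append((x, target))
    return records
```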

3.2 Normalization

Normalization is performed before the training process. Differences in the value range of each attribute can cause attributes with much smaller values to have little effect compared to other attributes. Normalization is therefore needed so that the value ranges can be mapped onto a common scale [6]. The Min-Max normalization method is used in this paper; it is considered a good choice because it gives higher accuracy than other normalization methods [6]. Min-Max normalization uses the following formula:


Norm(x) = (maxRange − minRange) × (x − minValue) / (maxValue − minValue) + minRange    (1)

where:
Norm(x) : normalized value
maxRange : the maximum of the target range, set to 0.9
minRange : the minimum of the target range, set to 0.1
x : value before normalization
minValue : the minimum value of the attribute
maxValue : the maximum value of the attribute

Example of a manual calculation for the city variable:

Norm(x) = (0.9 − 0.1) × (1 − 1) / (5 − 1) + 0.1 = 0.1
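As an illustration of Equation (1), here is a minimal Python sketch (ours, not the paper's code) that maps a value into the range [0.1, 0.9] and reproduces the city example above:

```python
def min_max_normalize(x, min_value, max_value, min_range=0.1, max_range=0.9):
    """Scale x into [min_range, max_range] following Equation (1)."""
    return (max_range - min_range) * (x - min_value) / (max_value - min_value) + min_range

# The city example from the text: value 1 on a 1..5 scale maps to 0.1.
assert abs(min_max_normalize(1, 1, 5) - 0.1) < 1e-12
```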

3.3 Neural Network

Neural Network (NN) is an information-processing model inspired by the biological nervous system. NNs have been successfully applied in several studies for prediction and classification [1], [7], [8]. An NN acquires knowledge through a learning process.

Such knowledge is stored in the distribution of nodes and weights [9]. The application of the NN method in this paper can be seen in Figure 1 below.

The stages of the NN flow diagram with the gradient descent learning function are detailed as follows:

1. Determine parameters such as the number of hidden layers, the number of units per hidden layer, the learning rate and the number of epochs.

2. Enter the input data consisting of eight variables.

3. Generate the architecture from all parameters, input variables and output variables.

4. Initialize the weights and biases randomly.

5. Conduct training using the NN, which consists of two stages.

Figure 1 Flow diagram NN

a. Forward Pass

In the forward pass, the signal flows from each unit in a layer to the units in the next layer, producing an output value for each layer. This stage is shown in the forward pass flow diagram in Figure 2.


Figure 2 Flow Diagram Forward Pass

1) Each neuron in the input layer transmits its signal to each neuron in the hidden layer. The inputs are multiplied by the weights, summed, and the bias is added.

z_in_j = Σ_{i=1..n} x_i · w_ij + b_ij    (2)

where:
z_in_j : total input to neuron j
n : number of neurons in the previous layer; for the first hidden layer, n is the number of input neurons
x_i : output value of neuron i in the previous layer; for the first hidden layer, x_i is the input received by the network
w_ij : weight connecting neuron i and neuron j
b_ij : bias connecting neuron i and neuron j

2) Determine the activation function. In this paper, the activation function used is the binary sigmoid function.

f(x) = 1 / (1 + e^(−x))    (3)

where:

f : activation function on a hidden layer.

3) Finding the hidden layer value by applying the activation function in Equation 3.

z_j = f(z_in_j)    (4)

where:

z_j : output of neuron j

4) Skip steps 5 and 6 if there is only one hidden layer. This step calculates the values passed from one hidden layer to the next hidden layer.

z_in_k = Σ_{j=1..n} x_j · w_jk + b_jk    (5)

where:

z_in_k : total input to neuron k
w_jk : weight connecting neuron j and neuron k


b_jk : bias connecting neuron j and neuron k

5) Finding the hidden layer value by applying the activation function in Equation 3.

6) Each neuron in the hidden layer transmits signals to each neuron in the output layer.

y_in_k = Σ_{j=1..n} z_j · w_jk + b_jk    (6)

where:

y_in_k : total input to output neuron k
n : number of inputs to neuron k
w_jk : weight connecting neuron j and neuron k

7) Apply the activation function to obtain the value at the output layer. The equation used is the same as Equation 3.

8) Calculate the MSE to evaluate whether the training process should stop or continue to the next iteration.

MSE = (1/n) Σ_{k=1..n} (y_in_k − t_k)²    (7)

where:
n : the amount of data
y_in_k : output value of the network
t_k : target value
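For clarity, the forward pass of Equations (2)-(7) can be sketched with matrix operations as below. This is a minimal illustration under our own assumptions (a single hidden layer, additive bias, NumPy arrays), not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    # Binary sigmoid activation, Equation (3)
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass for one sample through a single hidden layer.
    x: (n_in,), w_hidden: (n_in, n_hidden), w_out: (n_hidden, n_out)."""
    z_in = x @ w_hidden + b_hidden   # Equation (2): weighted sum of inputs plus bias
    z = sigmoid(z_in)                # Equation (4): hidden layer output
    y_in = z @ w_out + b_out         # Equation (6): input to the output layer
    y = sigmoid(y_in)                # output activation, same form as Equation (3)
    return z, y

def mse(y_pred, t):
    # Equation (7): mean squared error over the data
    return np.mean((np.asarray(y_pred) - np.asarray(t)) ** 2)
```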

b. Backward Pass

The backward pass process is different from the forward pass: each unit in a layer propagates its error backward to the layer above it in order to update the weights. The stages of the backward pass are shown in the flow diagram in Figure 3.

1) Finding the gradient value in the output layer.

δ_k = y_in_k × (1 − y_in_k) × e_k    (8)

where:

δ_k : gradient value for neuron k in the output layer
y_in_k : total input to output neuron k
e_k : error value from the forward pass

2) Update the weights and biases from the output layer to the hidden layer: the weight change is obtained with Equation 9 and the new bias change with Equation 10.

Δw_jk = α × y_in_k × δ_k    (9)

Δb_jk = α × δ_k    (10)

where:

Δw_jk : new weight change
α : learning rate value
Δb_jk : new bias change

3) Look for the gradient value in the hidden layer.

δ_in_j = Σ_{k=1..m} δ_k × w_jk    (11)

δ_j = δ_in_j × f′(z_in_j)    (12)

where:

δ_in_j : gradient value for neuron j in the hidden layer

4) Update the weights and biases from the hidden layer to the input layer: the weight change is obtained with Equation 13 and the new bias change with Equation 14.

Δw_ij = α × δ_j × x_i    (13)


Δb_ij = α × δ_j    (14)

where:

Δw_ij : new weight change
Δb_ij : new bias change

5) Perform the forward pass again with the new weights and biases, repeating until the error approaches the target error value.
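A corresponding sketch of the backward pass, Equations (8)-(14), for a single training example is given below. It reuses the z and y values from the forward-pass sketch; note that for the hidden-to-output update we use the hidden output z_j rather than y_in_k as the multiplier, which is the common backpropagation form and an assumption on our part:

```python
import numpy as np

def backward_pass(x, z, y, t, w_out, lr):
    """Weight and bias changes for one sample (sketch of Equations 8-14).
    x: input vector, z: hidden output, y: network output, t: target, lr: learning rate."""
    e = t - y                               # error used in Equation (8)
    delta_k = y * (1.0 - y) * e             # Equation (8): output layer gradient
    dw_out = lr * np.outer(z, delta_k)      # Equation (9), using z_j as the multiplier
    db_out = lr * delta_k                   # Equation (10)

    delta_in_j = w_out @ delta_k            # Equation (11): error propagated back
    delta_j = delta_in_j * z * (1.0 - z)    # Equation (12), with f'(z_in) = z(1 - z)
    dw_hidden = lr * np.outer(x, delta_j)   # Equation (13)
    db_hidden = lr * delta_j                # Equation (14)
    return dw_out, db_out, dw_hidden, db_hidden
```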

Figure 3 Flow Diagram Backward Pass

3.4 Recurrent Neural Network

In this paper, the Recurrent Neural Network (RNN) method with gradient descent learning is applied to predict rice yields. An RNN is a network that is good at extracting information features of dynamic systems in the hidden layer [10]. Recurrent Neural Networks have been used for prediction in several studies [11], [12]. The difference between an RNN and an NN lies in the input: the RNN receives not only the external input but also the output value of the hidden layer from the previous step. The RNN architecture used is shown in Figure 4.

Figure 4 RNN architecture
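To make the role of this feedback concrete, here is a minimal Elman-style forward pass in which the previous hidden state is copied into a context (delay) layer and fed back as additional input; this is our own sketch of the idea, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_forward(sequence, w_in, w_context, b_hidden, w_out, b_out):
    """Elman-style forward pass over a sequence of input vectors."""
    context = np.zeros(b_hidden.shape[0])                 # delay/context layer, starts at zero
    outputs = []
    for x in sequence:
        z_in = x @ w_in + context @ w_context + b_hidden  # external input + fed-back hidden state
        z = sigmoid(z_in)                                 # hidden state
        outputs.append(sigmoid(z @ w_out + b_out))        # network output
        context = z                                       # copy hidden state for the next step
    return np.array(outputs)
```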


4 Testing and Discussion

The NN and RNN tests were carried out with 500 epochs, a learning rate of 0.2 [13] and an error target of 0.0001. The purpose of the tests is to find the smallest error value and determine the more effective algorithm.

4.1 Testing NN and RNN

The first test examines the number of hidden layer nodes, starting from 1 node [14]. The best number of hidden layer nodes is the one that gives the smallest MSE with a relatively fast computation time. Each node count is tested 10 times and the averages are given in Table 2.
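The following sketch shows how such a test could be run: each node count is trained several times and the MSE and wall-clock time are averaged. The helper train_nn(nodes) is hypothetical; it stands for one training run of 500 epochs that returns the final MSE:

```python
import time

def average_over_runs(train_nn, node_counts, runs=10):
    """Average MSE and training time over repeated runs per hidden layer size."""
    results = {}
    for nodes in node_counts:
        mses, durations = [], []
        for _ in range(runs):
            start = time.time()
            mses.append(train_nn(nodes))          # hypothetical training helper
            durations.append(time.time() - start)
        results[nodes] = (sum(mses) / runs, sum(durations) / runs)
    return results
```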

Table 2 Hidden layer node test results

Hidden layer nodes   MSE NN    MSE RNN   Time NN (s)   Time RNN (s)   Iterations
1                    0.03398   0.03164   4             4              500
2                    0.02401   0.02748   4             4              500
3                    0.02303   0.02627   4             4              500
4                    0.02051   0.01789   4             4              500
5                    0.02025   0.01752   4             4              500
6                    0.01998   0.01505   4             4              500
7                    0.01539   0.01006   4             4              500
8                    0.01329   0.01004   4             4              500
9                    0.01119   0.00971   4             4              500
10                   0.01060   0.00776   4             5              500
11                   0.00988   0.00712   4             5              500
12                   0.00969   0.00686   4             5              500
13                   0.00967   0.00662   4             5              500
14                   0.00967   0.00636   5             5              500
15                   0.00951   0.00633   5             5              500
16                   0.00912   0.00595   5             5              500
17                   0.00862   0.00547   5             5              500
18                   0.00858   0.00543   5             5              500
19                   0.00846   0.00524   5             5              500
20                   0.00844   0.00459   5             5              500
21                   0.00795   0.00443   5             5              500
22                   0.00789   0.00335   5             5              500
23                   0.00778   0.00333   5             5              500
24                   0.00710   0.00331   5             5              500
25                   0.00690   0.00329   5             5              500
26                   0.00589   0.00327   5             5              500
27                   0.00587   0.00325   5             5              500
28                   0.00516   0.00324   5             5              500
29                   0.00515   0.00322   5             5              500
30                   0.00514   0.00320   5             5              500
31                   0.00513   0.00318   5             5              500
32                   0.00512   0.00316   5             5              500
33                   0.00511   0.00314   5             5              500
34                   0.00510   0.00312   5             5              500
35                   0.00509   0.00310   5             5              500

Based on the test results, nodes 1 to 27 on the NN show a very significant decrease in MSE, while the decrease from nodes 28 to 35 is much less significant, as shown in Figure 5. On the RNN, nodes 1 to 21 also show a significant decrease, but the decrease from nodes 22 to 35 is no longer significant. Therefore, 28 hidden layer nodes are used for the NN and 22 hidden layer nodes for the RNN in further testing.

Figure 5 hidden layer node test results

The next test is learning rate testing. The test parameters are 28 hidden layer nodes for the NN, 22 hidden layer nodes for the RNN, 500 epochs and an error target of 0.0001. Learning rates from 0.1 to 0.9 are tested [1]. The purpose of this test is to find the learning rate with the smallest MSE. Each learning rate is tested 10 times and the averages are given in Table 3.

Table 3 Learning rate test results

Learning rate   MSE NN     MSE RNN    Time NN (s)   Time RNN (s)   Iterations
0.1             9.64E-03   5.68E-03   5             5              500
0.2             5.16E-03   3.35E-03   5             5              500
0.3             6.08E-03   3.75E-03   5             5              500
0.4             4.55E-03   2.76E-03   5             5              500
0.5             3.37E-03   2.63E-03   5             5              500
0.6             3.39E-03   2.79E-03   5             5              500
0.7             2.80E-03   2.17E-03   5             5              500
0.8             2.92E-03   2.14E-03   5             5              500
0.9             2.81E-03   2.07E-03   5             5              500

The test results show that the best learning rate is 0.7 with an MSE of 0.0028 for the NN and 0.9 with an MSE of 0.002 for the RNN, both obtained with 500 iterations and a computation time of 5 seconds. Therefore, a learning rate of 0.7 for the NN and 0.9 for the RNN is used in the next test.



Figure 6 learning rate test results

The final test for the NN and RNN in this paper is the epoch test, which determines the number of iterations needed to reach the smallest error value. The test parameters are 28 hidden layer nodes, a learning rate of 0.7 and an error target of 0.0001 for the NN, and 22 hidden layer nodes, a learning rate of 0.9 and an error target of 0.0001 for the RNN. Each epoch setting is tested 10 times and the averages are given in Table 4.

Table 4 Epoch test results

Epoch   MSE NN     MSE RNN    Time NN (s)   Time RNN (s)
1000    2.10E-03   1.67E-03   9             8
2000    1.32E-03   1.27E-03   17            16
4000    1.05E-03   1.11E-03   34            26
8000    8.78E-04   1.04E-03   68            60
16000   6.15E-04   9.10E-04   128           120
32000   5.75E-04   8.28E-04   289           259
64000   3.91E-04   7.32E-04   715           629

The test results show that the more epochs, the smaller the MSE, but the longer the computation time. Based on these results, 8000 epochs are used to keep the computation time reasonable: although 16000, 32000 and 64000 epochs produce successively smaller MSE values, their computation times are much longer. From the three tests, the chosen configuration is 28 hidden layer nodes, a learning rate of 0.7, 8000 epochs and an error target of 0.0001 for the NN, and 22 hidden layer nodes, a learning rate of 0.9, 8000 epochs and an error target of 0.0001 for the RNN.

Figure 7 Epoch test results


4.2 Test Results

Based on the tests, the RNN is quite effective for making predictions when viewed from the MAPE and the computation time, as shown in Table 5. However, the prediction results of the NN and RNN do not differ significantly, as shown in Figure 8.

Table 5 Comparison of test results

Method   MSE        MAPE (%)   Time (s)   Iterations
NN       0.000878   10.8832    68         8000
RNN      0.00104    10.3804    60         8000

Testing was carried out with 82 data records divided into two sets, training data and testing data. Each method was evaluated using the resulting MSE and MAPE, as shown in Table 5.
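The paper does not state the MAPE formula; the sketch below assumes the common definition, the mean of |actual − predicted| / actual expressed as a percentage, and evaluates it on the first three rows of Table 6 as an example:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error (assumed definition)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

# First three actual values and NN predictions from Table 6.
print(mape([747808, 860239, 770602], [708674.5, 679310.9, 707837.2]))
```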

The Backpropagation Neural Network obtains an MSE of 0.000878 with a MAPE of 10.8832% and a computation time of 68 seconds. This method needs somewhat more computation time, one cause being the number of hidden layer nodes required: the more hidden layer nodes, the greater the computation time and the more complicated the calculations, although the resulting weight updates are better. The number of nodes also has a limit beyond which the network converges and the MSE stagnates; this is caused by overtraining, where the network is trapped in a local minimum so that further training keeps producing the same value. One solution is to reduce the number of nodes in the network when it keeps producing the same value [15]. The best architecture for this method is 28 hidden layer nodes, a learning rate of 0.7, an error target of 0.0001 and 8000 epochs.

The RNN with gradient descent obtains an MSE of 0.00104, a MAPE of 10.3804% and a computation time of 60 seconds. This method is relatively faster in terms of computation time. One reason is that the RNN has a delay (context) layer whose output serves as additional input, which lets the weights be updated faster because of the additional multipliers [16]. The delay layer also makes it possible to use fewer nodes in the hidden layer, and this reduction in nodes reduces the computation time significantly. The best architecture for this network is 22 hidden layer nodes, a learning rate of 0.9, 8000 epochs and an error target of 0.0001.

Figure 8 Comparison of prediction results with actual data


Table 6 Comparison of prediction results with actual data

No   Actual Data   NN         RNN GD
1    747808        708674.5   690696.4
2    860239        679310.9   672702.2
3    770602        707837.2   702017.5
4    376553        355827     382111.3
5    395812        354106.2   373127.7
6    422343        363159.9   358462.9
7    438116        416870.5   427332.1
8    470283        401143.7   418771.1
9    446513        434474.1   448646.5
10   471760        533530.5   545522.2
11   533321        504399.2   508807.2
12   610225        558815.9   560172.5
13   661321        614917     610250.4
14   722642        595355.1   554729.6
15   721434        660225.2   627087.2
16   202309        190107.6   191917.9
17   239400        174251.5   176574
18   218900        206988.5   216144.3
19   189670        166558.8   164865.5
20   204847        171959.9   167682.9
21   200772        171359     179088.8
22   9448          19818.65   8905.009
23   11160         19930.55   9549.425
24   12381         19542.1    25156.5

Table 6 shows that data numbers 22-24 have values that differ considerably from the rest. This difference strongly influences the prediction results of both the NN and the RNN. However, the predictions of the RNN are closer to the actual data than those of the NN. As seen in Figure 8, the plot of the RNN predictions follows the trend of the actual rice production data. This indicates that the RNN is suitable as a prediction method for fluctuating data such as rice production.

5 Conclusion

Both the NN and the RNN can predict rice production with low prediction error. However, the RNN predicts rice yields better and with a faster computation time. This is because the RNN needs fewer hidden layer nodes than the NN and its learning rate is higher; the number of hidden layer nodes and the learning rate have a significant effect on computation time.

References

[1] A. N. Sihananto and W. F. Mahmudy, "Rainfall Forecasting Using Backpropagation Neural Network," J. Inform. Technol. Comput. Sci., vol. 2, no. 2, pp. 66–76, 2017.

[2] B. N. Sari, H. Permana, K. Trihandoko, A. Jamaludin, and Y. Umaidah, "Prediksi Produktivitas Tanaman Padi di Kabupaten Karawang Menggunakan Bayesian Networks," J. Infotel, vol. 9, pp. 454–460, 2017.

[3] N. Gandhi, L. J. Armstrong, O. Petkar, and A. K. Tripathy, "Rice crop yield prediction in India using support vector machines," in 2016 13th International Joint Conference on Computer Science and Software Engineering (JCSSE), 2016, pp. 1–5.

[4] N. Gandhi, O. Petkar, and L. J. Armstrong, "Rice crop yield prediction using artificial neural networks," in 2016 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR), 2016, pp. 105–110.

[5] A. Cahyaningtyas, N. Azizah, and N. Herlina, "Evaluasi Dampak Perubahan Iklim Terhadap Produktivitas Padi (Oryza sativa L.) di Kabupaten Gresik," J. Produksi Tanam., vol. 6, no. 9, pp. 2030–2037, 2018.

[6] D. A. Nasution, H. H. Khotimah, and N. Chamidah, "Perbandingan Normalisasi Data untuk Klasifikasi Wine Menggunakan Algoritma K-NN," J. Comput. Eng. Syst. Sci., vol. 4, no. 1, pp. 78–82, 2019.

[7] M. W. Alauddin, W. F. Mahmudy, and A. L. Abadi, "Extreme Learning Machine Weight Optimization using Particle Swarm Optimization to Identify Sugar Cane Disease," J. Inf. Technol. Comput. Sci., vol. 4, no. 2, p. 127, 2019.

[8] B. Priambodo, W. Firdaus, and M. Arif, "Earthquake Magnitude and Grid-Based Location Prediction using Backpropagation Neural Network," vol. 3, no. 1, pp. 28–39, 2020.

[9] J. J. Shann and H. C. Fu, "A fuzzy neural network for rule acquiring on fuzzy control systems," Fuzzy Sets Syst., vol. 71, no. 3, pp. 345–357, 1995.

[10] J. L. Elman, "Finding Structure in Time," Cogn. Sci., vol. 14, pp. 179–211, 1990.

[11] K. Park, J. Kim, and J. Lee, "Visual Field Prediction using Recurrent Neural Network," Sci. Rep., vol. 9, no. 1, pp. 1–12, 2019.

[12] A. Moghar and M. Hamiche, "Stock Market Prediction Using LSTM Recurrent Neural Network," Procedia Comput. Sci., vol. 170, pp. 1168–1173, 2020.

[13] S. Santosa, A. Widjanarko, and C. Supriyanto, "Model Prediksi Penyakit Ginjal Kronik Menggunakan Radial Basis Function," J. Pseudocode, vol. III, September, pp. 163–170, 2016.

[14] D. Jauhadi, A. Himawan, and C. Dewi, "Prediksi Distribusi Air PDAM Menggunakan Metode Jaringan Syaraf Tiruan," J. Teknol. Inf. dan Ilmu Komput., vol. 3, no. 2, pp. 83–87, 2016.

[15] M. Khashei and M. Bijari, "An artificial neural network (p, d, q) model for timeseries forecasting," Expert Syst. Appl., vol. 37, no. 1, pp. 479–489, 2010.

[16] A. A. J. Permana and W. Prijodiprodjo, "Sistem Evaluasi Kelayakan Mahasiswa Magang Menggunakan Elman Recurrent Neural Network," IJCCS (Indonesian J. Comput. Cybern. Syst.), vol. 8, no. 1, pp. 37–48, 2014.
