
Fisher Kolmogorov Equation Theory Simulation Using Deep Learning

Conny Tria Shafira*, Putu Harry Gunawan, Aditya Firman Ihsan
Faculty of Informatics, S1 Informatics, Telkom University, Bandung, Indonesia
Email: 1,*[email protected], 2[email protected], 3[email protected]
Corresponding Author Email: [email protected]

Abstract−Neural Networks (NNs), a powerful tool for identifying non-linear systems, derive their computational power from a parallel distributed structure. The Physics-Informed Neural Network (PINN) technique can solve the partial differential equation (PDE) given by the Fisher-Kolmogorov equation. By testing several hyperparameter changes, the formula is shown to be correct and the visualization results to be consistent. Accuracy is assessed from the Mean Squared Error (MSE) of the formula loss (loss f) and the data loss (loss u). In experiment 1 the MSE obtained was 0.00001657 (loss f) and 0.00000038 (loss u), and in experiment 4 it was 0.00005865 (loss f) and 0.00000216 (loss u); a result is considered accurate if the MSE is close to 0. A formula is considered verified if it produces consistent output on random input data, provided the same parameters are used. The purpose of this research is therefore to simulate the Fisher-Kolmogorov equation with the deep learning method using the PINN technique.

From the research, it can be concluded that the Fisher-Kolmogorov equation proves to be correct when the simulation is carried out with deep learning and produces visualizations that are consistent across the functions used.

Keywords: Simulation; Neural Network (NN); Physics-Informed Neural Network (PINN); Fisher-Kolmogorov; Partial Differential Equations (PDEs); Mean Squared Error (MSE).

1. INTRODUCTION

A differential equation is an equation that contains derivatives of a dependent variable with respect to one or more independent variables. Differential equations are grouped into two types based on the number of independent variables, namely ordinary differential equations and partial differential equations. Based on linearity, there are two kinds of partial differential equations, namely linear and non-linear partial differential equations [1].

Partial differential equations (PDEs) are widely used in several quantitative disciplines, from physics to economics, to describe the evolution and dynamics of phenomena [2]. Examples of non-linear differential equations are the Burgers equation, the Fisher equation, the Liouville equation, and the Boussinesq equation (Wazwaz, 2009). The Fisher-Kolmogorov equation for the reaction-diffusion process discussed here uses decomposition to determine concentration statistics [3]. The one-dimensional Fisher-Kolmogorov equation can also involve density-dependent non-linear diffusion; the Fisher equation with non-linear diffusion is known as the modified Fisher equation. Many studies in numerical analysis have aimed to efficiently compute numerical solutions of PDEs using different finite volume methods and various constraints on the domain boundaries.

Because PDEs can be high-dimensional, their computational costs are quite high, creating many challenges for applied mathematicians [4][5]. An increasing number of studies use machine learning techniques, especially deep learning algorithms, which are expected to open new horizons for computing accurate numerical solutions at reasonable cost and computation time [6].

To account for equations that are linear combinations of physical functions, we interpolate the observed data using the Physics-Informed Neural Network (PINN) technique. PINN is a universal function approximation technique that incorporates knowledge of the physical laws governing a given data set, expressed as partial differential equations (PDEs), into the learning process [7]. PINNs can overcome a lack of information in the data by exploiting the physical laws underlying the system: instead of learning purely from observed data, the technique also uses the mathematical equations that govern the physical phenomena [8]. The main advantage of PINN is that the known physical model provides additional information, so the neural network can predict the behavior of the system within the domain rather than relying heavily on data, which is not always available in large quantities [9].

Research by Alexander M. Tartakovsky et al. showed that physics-informed Deep Neural Network (DNN) methods can accurately estimate non-linear constitutive relationships from state measurements alone [10]. Muhammad Shakeel studied the one-dimensional Fisher-Kolmogorov equation with density-dependent non-linear diffusion using COMSOL, finding that the minimum wave velocity depends on the parameter values involved in the model [11]. Maziar Raissi et al. solved differential equation problems using the Runge-Kutta method [6]. Aditya Firman Ihsan found that with the PINN method, more data can give better results but consumes more computation time and memory than numerical PDE-solving methods. Based on these previous studies, the PINN method has great potential to be a powerful method.


This study implements the PDE solution of the Fisher-Kolmogorov equation using the Physics-Informed Neural Network (PINN) technique. In this paper, the standard non-linear Fisher-Kolmogorov equation is studied on a finite domain, with location (x) used for the boundary conditions and time (t) for the initial condition. We apply PINN to solve it and analyze its performance. A standard numerical solution with a finite-difference scheme is used as a comparison for the solution obtained by PINN. The results of this study can inform future applications of the PINN method, especially in comparison with other widely used numerical methods.

2. RESEARCH METHODOLOGY

2.1 Research Stages

The system design uses deep learning, taking advantage of its well-known capability as a universal approximator [12]. In this study, a system is built to simulate the prediction results of the Fisher-Kolmogorov equation using the Physics-Informed Neural Network (PINN) technique. The steps taken to design this system are shown below:

Figure 1. Research Stages System Flow

2.1.1 Generate Data

Because this research does not use a dataset, input data is generated from the Fisher-Kolmogorov equation function itself to simulate the equation. The stages of generating data are as follows (a minimal sketch is given after the list):

a. Random Data

Random data is a collection in which each sample has the same probability of being selected. The first step is to determine the data size and the number of batches; the resulting data is used to prepare the random input for the initial condition. An empty list is prepared to hold the input data, and then t and x are generated randomly, with both values in the range 0 to 1.

b. Shuffle Data

The random data is then shuffled; the generated result is called the collection data. Shuffling is carried out to ensure that the formula itself can be tested: what is tested is not the form of the function, but the formula.

c. Initial Function

The initial function creates the formula that will be used in several trial cases and determines the functions used to build the model.
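As an illustration of these three steps, a minimal Python sketch follows. It is an assumption about how the generation could be implemented, not the authors' code; the function name `generate_data` and the use of NumPy are illustrative, and the initial function uses the form f(x) = A sin(πx) introduced in Section 2.2.

```python
import numpy as np

def generate_data(data_size, amplitude=0.25, seed=0):
    """Illustrative sketch of the generate-data stage."""
    rng = np.random.default_rng(seed)
    # a. Random data: t and x drawn uniformly from the range [0, 1].
    t = rng.uniform(0.0, 1.0, size=(data_size, 1))
    x = rng.uniform(0.0, 1.0, size=(data_size, 1))
    # b. Shuffle data: permute jointly so every batch mixes the whole domain.
    idx = rng.permutation(data_size)
    t, x = t[idx], x[idx]
    # c. Initial function: u(x, 0) = A sin(pi * x) at separate points for t = 0.
    x0 = rng.uniform(0.0, 1.0, size=(data_size, 1))
    u0 = amplitude * np.sin(np.pi * x0)
    return (t, x), (x0, u0)
```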

2.1.2 Loss Function Preparation

The loss function measures how well the model performs in predicting targets [13]. In PINN, the loss function is defined using the output of the network and its derivatives, based on the equations that govern the physics of the problem [14]. Because the model does not learn from data, the loss function is not matched to data; instead, it is defined to match the physical equation. In deep learning, training tries to reduce the loss function to a value close to 0.
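As a hedged sketch of how such a physics-matched loss can look for the Fisher-Kolmogorov equation, the residual u_t − D·u_xx − R·u(1 − u) can be computed with automatic differentiation and reduced with MSE. The TensorFlow code below is an assumption about the implementation, not the paper's own code; `model` is any network mapping (x, t) to u.

```python
import tensorflow as tf

D, R = 1.0, 5.0  # diffusion and reaction coefficients used in this paper

def pde_residual_loss(model, x, t):
    """MSE of the residual u_t - D*u_xx - R*u*(1 - u) at points (x, t)."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        tape.watch(t)
        u = model(tf.concat([x, t], axis=1))
        u_x = tape.gradient(u, x)  # computed inside the tape so u_xx is traceable
    u_t = tape.gradient(u, t)
    u_xx = tape.gradient(u_x, x)
    del tape
    f = u_t - D * u_xx - R * u * (1.0 - u)
    return tf.reduce_mean(tf.square(f))
```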

2.1.3 PINN Modeling

This process creates the deep learning (neural network) model. The input data are the location (x) and time (t), and the resulting output is u. At this stage, the optimizer is determined, the MSE (Mean Squared Error) function is prepared as the loss function, the number of layers and the number of neurons (units) per layer are set, and the learning rate of the model is chosen. The model then iterates over the specified number of layers, adding dense layers with the predetermined number of neurons, followed by normalization.
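A minimal Keras sketch of such a model follows. The unit counts match Table 1, but the depth (4 hidden layers), the tanh activation, and the use of batch normalization for the normalization step are assumptions, not details stated in the paper.

```python
import tensorflow as tf

def build_model(num_layers=4, units=32):
    inputs = tf.keras.Input(shape=(2,))                    # input: (x, t)
    h = inputs
    for _ in range(num_layers):
        # Dense layer with the predetermined number of neurons ...
        h = tf.keras.layers.Dense(units, activation="tanh")(h)
        # ... followed by normalization, as described above (assumed batch norm).
        h = tf.keras.layers.BatchNormalization()(h)
    outputs = tf.keras.layers.Dense(1)(h)                  # output: u(x, t)
    return tf.keras.Model(inputs, outputs)
```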

2.1.4 Training

Training is performed on the data obtained from the generate-data function to obtain the expected results. The training step uses gradient-based optimization to minimize the loss functions, such as SGD, Adam, AdaGrad, and RMSprop [15]. Adam combines the AdaGrad and RMSProp optimization algorithms, so training was carried out with the Adam optimizer as the best choice among these methods [16][17]. With this method, we expect to obtain a trained neural network that can approximate solutions of partial differential equations. In the next section, this method is used to study the Fisher-Kolmogorov equation.
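A hedged sketch of a single Adam training step, combining the formula loss (loss f) with the data loss (loss u) on the initial-condition points; it reuses `pde_residual_loss` from the earlier sketch and is illustrative rather than the authors' exact code.

```python
optimizer = tf.keras.optimizers.Adam(learning_rate=0.005)  # rate from Section 3.1

def train_step(model, x_f, t_f, x0, u0):
    with tf.GradientTape() as tape:
        loss_f = pde_residual_loss(model, x_f, t_f)        # formula loss (loss f)
        t0 = tf.zeros_like(x0)                             # initial condition at t = 0
        u_pred = model(tf.concat([x0, t0], axis=1))
        loss_u = tf.reduce_mean(tf.square(u_pred - u0))    # data loss (loss u)
        loss = loss_f + loss_u
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_f, loss_u
```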

2.1.5 Evaluation

After testing, the lowest MSE value is analyzed: the lower the MSE, or the closer it is to 0, the more accurate the test. The MSE value is obtained from the loss of the formula (loss f) and the loss of the data (loss u). For evaluation, the test with the lowest MSE is visualized as a colour gradient of the Fisher-Kolmogorov equation at the predetermined x and t values, giving a visual of the diffusion change in the PDE. The final result of the research is a review of the test results to see whether the system can simulate and predict the diffusion-reaction equation of Fisher-Kolmogorov theory.

2.2 Problem Formulation

The case study discussed uses a one-dimensional equation: the Fisher-Kolmogorov equation describes the dynamics of a quantity U that depends on location x in a region [0, L] and time t, formally written as follows:

$U_t = D U_{xx} + R U(1 - U)$ (1)

The author determines the boundary conditions for modeling the Fisher-Kolmogorov equation as follows:

$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1 - u)$ (2)

$\frac{\partial u}{\partial x} = 0$ at $x = 0$ (3)

$\frac{dL(t)}{dt} = -k \frac{\partial u}{\partial x}$ at $x = L(t)$ (4)

So, it can be concluded that the boundary conditions from the above equations are as follows:

$\frac{\partial u}{\partial x} = 0$ at $x = 0$ (5)

$\frac{\partial u}{\partial x} = 0$ at $x = L$ (6)

$0 \le x \le L$; $x = 0$ and $x = 1$ (7)

The case study uses the sine function $f(x) = A \sin(\pi x)$, where A is the amplitude used in the formula.

This equation is a one-dimensional diffusion-reaction equation that combines linear diffusion with a non-linear logistic source term [18][19]. The "diffusion" term has coefficient D, and the "reaction" term has coefficient R. This equation, together with the boundary conditions at x = 0 and x = L and the initial condition at t = 0, forms a boundary value problem that needs to be solved.
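For the finite-difference comparison mentioned in the introduction, a minimal explicit (forward-time, centered-space) sketch of this boundary value problem is given below. The scheme and grid sizes are illustrative assumptions, not the paper's exact comparison code.

```python
import numpy as np

def fisher_fd(D=1.0, R=5.0, A=0.25, L=1.0, T=1.0, nx=101, nt=40001):
    """March u_t = D*u_xx + R*u*(1 - u) forward with an explicit scheme."""
    dx, dt = L / (nx - 1), T / (nt - 1)
    assert D * dt / dx**2 <= 0.5, "stability condition for the explicit scheme"
    x = np.linspace(0.0, L, nx)
    u = A * np.sin(np.pi * x)                  # initial condition at t = 0
    for _ in range(nt - 1):
        u_xx = np.zeros_like(u)
        u_xx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (D * u_xx + R * u * (1.0 - u))
        u[0], u[-1] = u[1], u[-2]              # zero-flux boundaries, Eqs. (5)-(6)
    return x, u
```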

2.3 Neural Network

A Neural Network (NN) is a massively parallel distributed data processing system consisting of simple processing units that have a natural tendency to store experiential knowledge and make it available for use. NNs are widely used in applications such as robotics, voice recognition, human face recognition, medicine, and physics. NNs are also a powerful tool for identifying non-linear systems, gaining their computational power through massively parallel distributed structures and their ability to learn, and they play an important role in dynamic system identification and error detection [6]. For a PDE solution, the NN is designed to accept the input variables x and t and produce the output variable u; the NN thus computes the function u(x, t). The variables x and t are inputs, u is the output, and between the inputs and outputs are the hidden layers. The network is illustrated in Figure 2.


Figure 2. Neural network used for PINN. The inputs are randomly generated values of x and t; the network can be seen as an approximation of the function u(x, t).

2.4 Physics-Informed Neural Network (PINN)

In physics problems, it is rare to have clean data or large datasets for simulations. Because no dataset is used, the authors generate data from the governing equations themselves and model it with the PINN technique. PINN uses all the known governing equations related to the function to be learned.

PINN feeds randomly generated independent variables (in this case x and t) into the physics equation. Here, the equation, the initial conditions, and the boundary conditions are used to form a loss function for the predictions, which is then minimized to reduce errors in the resulting output. For each equation, we need samples of t and x to enter into the dynamic model of a quantity U that depends on location x in a region [0, L] and time t. This approach encodes the equation, the initial conditions, and the boundary conditions of the PDE into a loss function without using any data. Another approach combines PINN with observational data, and the results show increased accuracy [20].
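As a sketch of how the zero-flux boundary conditions of Eqs. (5)-(6) could enter the loss alongside the residual and initial-condition terms (the exact form and weighting used by the authors are not stated, so this is an assumption):

```python
import tensorflow as tf

def boundary_loss(model, t_b, L=1.0):
    """MSE of du/dx at x = 0 and x = L, enforcing Eqs. (5)-(6)."""
    x0 = tf.zeros_like(t_b)
    xL = tf.fill(tf.shape(t_b), L)
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x0)
        tape.watch(xL)
        u0 = model(tf.concat([x0, t_b], axis=1))
        uL = model(tf.concat([xL, t_b], axis=1))
    ux0 = tape.gradient(u0, x0)
    uxL = tape.gradient(uL, xL)
    del tape
    return tf.reduce_mean(tf.square(ux0)) + tf.reduce_mean(tf.square(uxL))

# The total loss is then assembled from all terms, e.g.:
# loss = pde_residual_loss(...) + initial_condition_mse + boundary_loss(...)
```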

3. RESULTS AND DISCUSSION

3.1 Research Results

In this study, the PINN technique is used to solve the PDE problem given by the Fisher-Kolmogorov equation.

The author uses the PINN technique because it can encode physical equations into the loss functions through gradient calculations, which gives PINN the advantage of being able to learn without using data. We varied many aspects of the neural network as part of the analysis, such as the amplitude of the sin function, the data size, the batch size, and the number of units (neurons).

PINN uses the generated data as the input for learning. What PINN must learn is that, for any given data, all the equations that govern the system must be satisfied, so the independent variables (x and t) are distributed randomly across the whole domain and its boundaries. The PINN loss terms are the governing equation, the boundary conditions, and the initial conditions; the boundary conditions contain two different equations. The simulation results are then evaluated by calculating the Mean Squared Error (MSE), where the best result is a value close to 0: the purpose of calculating the MSE is to minimize the total loss function, and an MSE close to 0 indicates an accurate calculation. The optimization used is the Adam (Adaptive Moment Estimation) optimizer with a learning rate of 0.005; this value is the result of earlier hyperparameter tuning, since a higher learning rate tends to make the training process unstable.

Table 1. Parameters Used

Test Parameter    Value
Data Size         10,000, 50,000, and 100,000
Batch Size        1,000, 10,000, and 100,000
Unit (Neuron)     32 and 16
Amplitude Sin     0.25, 0.5, and 0.75

As Table 1 shows, the variables are assigned the values x = 0 to 1, t = 0 to 1, D = 1, and R = 5. Several hyperparameters are varied to observe performance changes in the model training process. The experiments vary 4 parameter aspects, namely data size, batch size, units, and amplitude. The diffusion speed must be small enough to observe the behavior in a short time interval, because the problem is simulated in the time range t = 0 to 1; small diffusion values are also needed for stability in numerical computations. In each case-study experiment, we compared the different variations using the varied parameters (an illustrative enumeration is given below).
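Read together with Tables 2-7 below, each of the six trials pairs a data size with a batch size and a unit count, and only the amplitude varies within a trial. An illustrative enumeration (not the authors' code) might look like:

```python
import itertools

# (data size, batch size, units) for trials 1-6; amplitude varies within each.
trials = [
    (10_000, 1_000, 32), (50_000, 10_000, 32), (100_000, 100_000, 32),
    (10_000, 1_000, 16), (50_000, 10_000, 16), (100_000, 100_000, 16),
]
for (data_size, batch_size, units), amplitude in itertools.product(
        trials, (0.25, 0.5, 0.75)):
    pass  # train a fresh model here and record the final loss_f and loss_u
```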

3.2 Analysis

Using deep learning with the PINN technique, model testing is carried out with the following scenarios: we plot the two losses, the loss of the formula (loss f) and the loss of the data (loss u), against the epoch. In each experiment, we compare different hyperparameters using 5 epochs.

The MSE value is therefore taken at the last iteration. We ran 6 experiments, performing repeated training and storing the results in an array. The hyperparameter variations used in the first experiment are shown in Table 2:

Table 2. Trial 1

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
1      10,000     1,000       32             0.25           0.00001657    0.00000038
                                             0.5            0.00028962    0.00001227
                                             0.75           0.00015159    0.00000534

The table shows that the most accurate MSE, closest to 0, is obtained at an amplitude of 0.25 with a data size of 10,000, a batch size of 1,000, and 32 neurons, giving a formula loss of 0.00001657 (loss f) and a data loss of 0.00000038 (loss u). The 2nd experiment varies the hyperparameters as shown in Table 3:

Table 3. Trial 2

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
2      50,000     10,000      32             0.25           0.00014390    0.00000940
                                             0.5            0.00071993    0.00001653
                                             0.75           0.00206686    0.00005920

The table shows that the most accurate MSE, closest to 0, is obtained at an amplitude of 0.25 with a data size of 50,000, a batch size of 10,000, and 32 neurons, giving a formula loss of 0.00014390 (loss f) and a data loss of 0.00000940 (loss u). The 3rd experiment uses the hyperparameters shown in Table 4:

Table 4. Trial 3

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
3      100,000    100,000     32             0.25           0.00023608    0.00000910
                                             0.5            0.00105623    0.00004223
                                             0.75           0.01235724    0.00044470

The table shows that the most accurate MSE, closest to 0, is obtained at an amplitude of 0.25 with a data size of 100,000, a batch size of 100,000, and 32 neurons, giving a formula loss of 0.00023608 (loss f) and a data loss of 0.00000910 (loss u). The 4th experiment uses the hyperparameters shown in Table 5:

Table 5. Trial 4

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
4      10,000     1,000       16             0.25           0.00005865    0.00000216
                                             0.5            0.00009715    0.00000270
                                             0.75           0.00087643    0.00004466

The table shows that the MSE closest to 0 is obtained at an amplitude of 0.25 with a data size of 10,000, a batch size of 1,000, and 16 neurons, giving a formula loss of 0.00005865 (loss f) and a data loss of 0.00000216 (loss u). The 5th experiment uses the hyperparameters shown in Table 6:

Table 6. Trial 5

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
5      50,000     10,000      16             0.25           0.00078782    0.00002963
                                             0.5            0.00026390    0.00000395
                                             0.75           0.00204600    0.00003127

The table shows that the MSE closest to 0 is obtained at an amplitude of 0.5 with a data size of 50,000, a batch size of 10,000, and 16 neurons, giving a formula loss of 0.00026390 (loss f) and a data loss of 0.00000395 (loss u). The 6th experiment uses the hyperparameters shown in Table 7:

Table 7. Trial 6

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
6      100,000    100,000     16             0.25           0.00048851    0.00002314
                                             0.5            0.00622612    0.00023989
                                             0.75           0.00301641    0.00010878

The table shows that the most accurate MSE, closest to 0, is obtained at an amplitude of 0.25 with a data size of 100,000, a batch size of 100,000, and 16 neurons, giving a formula loss of 0.00048851 (loss f) and a data loss of 0.00002314 (loss u). After the 6 trials, the lowest MSE values obtained are shown in Table 8:

Table 8. Lowest MSE Results

Trial  Data Size  Batch Size  Unit (Neuron)  Amplitude sin  MSE Loss (f)  MSE Loss (u)
1      10,000     1,000       32             0.25           0.00001657    0.00000038
4      10,000     1,000       16             0.25           0.00005865    0.00000216

From the tests carried out, the optimal PINN model for training on the Fisher-Kolmogorov equation uses the Adam optimizer with a learning rate of 0.005, 32 neurons, a data size of 10,000, a batch size of 1,000, and an amplitude of 0.25, which gives a formula loss (loss f) of 0.00001657 and a data loss (loss u) of 0.00000038.

After training with the PINN technique, PINN can simulate problems in the domain of the required physics functions, and it can take large diffusivity values without problems. The loss (MSE) values produced by the experiments are shown graphically in Figure 3 and Figure 4. Figure 3 shows the lowest results, obtained using a data size of 10,000, a batch size of 1,000, 32 units (neurons), and amplitudes of 0.25, 0.5, and 0.75; the smallest MSE value occurs at an amplitude of 0.25. Figure 4 shows the loss values on the data compared across experiments; the lowest MSE is again at data size 10,000, batch size 1,000, and 32 units (neurons), with the smallest value at an amplitude of 0.25.

Figure 3. Graph of Lowest MSE Value at Formula Loss

Figure 4. Graph of Lowest MSE Value on Data Loss

At the smallest MSE value, the colour gradient shows the change of the diffusion reaction as the spread of heat, which can be visualized using the sin, linear, geometric, and logarithmic functions, so it can be checked whether the formula is correct from the consistency of the visualization results. The visualization results are shown in Figure 5:

Figure 5. (a) Visualization using the sin function; (b) visualization using the linear function; (c) visualization using the geometric function.

The visualization can be read by taking a slice at each value of t, and the change in displacement can be seen from the colour change. At t = 0.8, a heat process occurs at all locations x. At t = 0.6 and x = 0.2, the heat process has decreased toward cooler values, and cold values are more dominant at t = 0.2 across all locations x. The point of heat distribution is at t = 0.6 to t = 1 with locations x = 0.4 to 1. The colours look consistent across the visualizations, so the formula in the Fisher-Kolmogorov equation proves to be correct; the consistency can be seen in the heat process at the points t and x.
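A hedged sketch of how such a colour-gradient visualization of u(x, t) could be produced with matplotlib, assuming a trained `model` as in the earlier sketches:

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the trained network on a grid over x in [0, 1] and t in [0, 1].
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 100)
X, T = np.meshgrid(x, t)
pts = np.stack([X.ravel(), T.ravel()], axis=1).astype("float32")
U = model.predict(pts).reshape(X.shape)

plt.pcolormesh(X, T, U, shading="auto")    # colour gradient of u(x, t)
plt.colorbar(label="u(x, t)")
plt.xlabel("x")
plt.ylabel("t")
plt.show()
```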

4. CONCLUSION

Heat transfer is a process in which energy flows from a higher temperature to a lower temperature. The heat transfer process can be identified by looking at the temperature distribution; calculating the temperature distribution requires a differential equation, whose solution can be found using various methods. In this research, the procedure used is deep learning with the PINN method. The results discussed the solution of a one-dimensional heat case, where the material studied is a homogeneous rod: its one-dimensional cross-section has a finite interval length, and at certain points the same diffusion and reaction values are known. The test carried out checks whether the formula is correct by looking for consistent visualization output; we do not test from a fixed dataset fed into the function, but randomize the data every time training is done. We tested the PINN technique with 6 trials, producing MSE values for the formula loss (loss f) and the data loss (loss u). The function u(x, t) in the equation $U_t = D U_{xx} + R U(1 - U)$ was tested using the same ranges, namely x = 0 to 1 and t = 0 to 1, and the same parameters, D = 1 and R = 5, while varying several parameters such as data size, batch size, amplitude of the sin function, and units (neurons). After testing 6 times, the smallest MSE values were obtained in experiments 1 and 4.

In experiment 1 the MSE obtained was 0.00001657 (loss f) and 0.00000038 (loss u), and in experiment 4 it was 0.00005865 (loss f) and 0.00000216 (loss u). MSE results close to 0 therefore indicate optimal and accurate experiments. The smallest MSE value was visualized with the sin, linear, and logarithmic functions; in all three visualizations, the colour indicator is consistent with heat propagation. This test lets the author verify that the equation is correct when the simulation is carried out with deep learning and visualized with several functions: the resulting visualization is consistent across the experimental functions used. Without this research, the author could not know whether the equation produces consistent visualizations across the formulas used.

REFERENCES

[1] M. R. Ebert and M. Reissig, Basics for Partial Differential Equations, 2018.
[2] A. Hasan, M. Pereira, R. Ravier, and V. Tarokh, "Learning partial differential equations from data using neural networks," pp. 3962–3966, 2020.
[3] H. P. Breuer, W. Huber, and F. Petruccione, "Fluctuation effects on wave propagation in a reaction-diffusion process," Phys. D Nonlinear Phenom., vol. 73, no. 3, pp. 259–273, 1994, doi: 10.1016/0167-2789(94)90161-9.
[4] T. Gallouët, R. Herbin, J. C. Latché, and T. T. Nguyen, "Playing with Burgers's equation," in Finite Volumes for Complex Applications VI Problems & Perspectives, pp. 523–531, Springer, Berlin, Heidelberg, 2011.
[5] R. Herbin, J. C. Latché, and T. T. Nguyen, "Consistent segregated staggered schemes with explicit steps for the isentropic and full Euler equations," ESAIM Math. Model. Numer. Anal., vol. 52, no. 3, pp. 893–944, 2018, doi: 10.1051/m2an/2017055.
[6] T. Nguyen, B. Pham, T. T. Nguyen, and B. T. Nguyen, "A deep learning approach for solving Poisson's equations," Proc. 2020 12th Int. Conf. Knowl. Syst. Eng. (KSE 2020), pp. 213–218, 2020, doi: 10.1109/KSE50997.2020.9287419.
[7] M. Raissi, P. Perdikaris, and G. E. Karniadakis, "Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations," pp. 1–22, 2017. [Online]. Available: http://arxiv.org/abs/1711.10566.
[8] A. F. Ihsan, "On the Neural Network Solution of One-Dimensional Wave Problem," J. RESTI (Rekayasa Sist. dan Teknol. Informasi), vol. 5, no. 6, pp. 1106–1112, 2021, doi: 10.29207/rest.v5i6.3565.
[9] A. F. Ihsan, "Performance Analysis of the Neural Network Solution of Advection-Diffusion-Reaction Problem."
[10] A. M. Tartakovsky, C. O. Marrero, P. Perdikaris, G. D. Tartakovsky, and D. Barajas-Solano, "Learning Parameters and Constitutive Relationships with Physics Informed Deep Neural Networks," 2018. [Online]. Available: http://arxiv.org/abs/1808.03398.
[11] M. Shakeel, "Travelling Wave Solution of the Fisher-Kolmogorov Equation with Non-Linear Diffusion," Appl. Math., vol. 04, no. 08, pp. 148–160, 2013, doi: 10.4236/am.2013.48a021.
[12] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989, doi: 10.1016/0893-6080(89)90020-8.
[13] D. M. Hakim, "Optical Music Recognition Pada Citra Notasi Musik Menggunakan Convolutional Neural Network," Doctoral dissertation, Universitas Komputer Indonesia, 2019.
[14] S. Amini Niaki, E. Haghighat, T. Campbell, A. Poursartip, and R. Vaziri, "Physics-informed neural network for modeling the thermochemical curing process of composite-tool systems during manufacture," Comput. Methods Appl. Mech. Eng., vol. 384, p. 113959, 2021, doi: 10.1016/j.cma.2021.113959.
[15] L. Yang and A. Shami, "On hyperparameter optimization of machine learning algorithms: Theory and practice," Neurocomputing, vol. 415, pp. 295–316, 2020, doi: 10.1016/j.neucom.2020.07.061.
[16] D. Irfan, R. Rosnelly, M. Wahyuni, J. T. Samudra, and A. Rangga, "Perbandingan Optimasi Sgd, Adadelta, Dan Adam Dalam Klasifikasi Hydrangea Menggunakan Cnn," J. Sci. Soc. Res., vol. 5, no. 2, p. 244, 2022, doi: 10.54314/jssr.v5i2.789.
[17] T. Yu and H. Zhu, "Hyper-Parameter Optimization: A Review of Algorithms and Applications," pp. 1–56, 2020. [Online]. Available: http://arxiv.org/abs/2003.05689.
[18] M. El-Hachem, S. W. McCue, W. Jin, Y. Du, and M. J. Simpson, "Revisiting the Fisher–Kolmogorov–Petrovsky–Piskunov equation to interpret the spreading–extinction dichotomy," 2019.
[19] J. H. Lagergren, J. T. Nardini, G. Michael Lavigne, E. M. Rutter, and K. B. Flores, "Learning partial differential equations for biological transport models from noisy Spatio-temporal data," Proc. R. Soc. A Math. Phys. Eng. Sci., vol. 476, no. 2234, 2020, doi: 10.1098/rspa.2019.0800.
[20] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, "DeepXDE: A deep learning library for solving differential equations," SIAM Rev., vol. 63, no. 1, pp. 208–228, 2021, doi: 10.1137/19M1274067.
