Any normal* system of differential equations can be written as a first-order normal system, which in vector notation has the form
The general solution of the set of differential equations represented by Eq. 10-1 is given by
Eq. 10-2 is subject to the initial conditions y = y0 at t = t0. Eq. 10-1 is solved numerically by substituting one or more difference equations in place of each differential equation. In general, the difference equations cannot be solved directly at any particular value of t. Instead the value of y is calculated sequentially, point by point, beginning at the initial values and continuing at intervals of the independent variable t until the solution has been extended over the required range of t (Ref. 13). The value of the dependent variable at each succeeding computation step is based on the value of that variable at one or more preceding steps.
Methods of numerical integration are divided into explicit and implicit types. Each type is discussed in the paragraphs that follow.
10-4.1.1 Explicit Methods
In an explicit method the value of the dependent variable yn+1 is determined explicitly at step n+1 in terms of the independent variable tn+1 and of the values of the dependent variable at one or more preceding time steps.
The Euler method and the Runge-Kutta method are examples of explicit numerical processes.
10-4.1.1.1 Euler Method
The Euler method is based on a difference equation having the form
Starting with the value y0, Eq. 10-3 is used to calculate yn successively at the points n = 1, 2, . . . .
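As a concrete sketch, the difference equation of Eq. 10-3 can be coded directly. The function names, the test equation dy/dt = -y, and the step size below are illustrative assumptions, not taken from this handbook:

```python
def euler_step(G, t, y, T):
    """One Euler step (Eq. 10-3): y[n+1] = y[n] + T*G(t[n], y[n])."""
    return [yi + T * gi for yi, gi in zip(y, G(t, y))]

def integrate_euler(G, t0, y0, T, n_steps):
    """Advance the state vector from (t0, y0) over n_steps steps of size T,
    returning the sequence of (t, y) points."""
    t, y = t0, list(y0)
    history = [(t, list(y))]
    for _ in range(n_steps):
        y = euler_step(G, t, y, T)
        t += T
        history.append((t, list(y)))
    return history
```

Applied to dy/dt = -y with T = 0.1, each step multiplies y by (1 - T), so after ten steps y = 0.9**10, about 0.349, versus the exact e^-1, about 0.368.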
For example, consider the differential equation that describes Newton’s second law for an application with con- stant force and constant mass
Writing Eq. 10-4 as a system of first-order equations in the form of Eq. 10-1 gives
where
v = magnitude of velocity (speed), m/s.
Thus the vector y is defined in this case as
and the vector G as
Then at the beginning of step n, the vectors y and G have values given by
Substituting Eqs. 10-8 and 10-9 into Eq. 10-3 and assuming a constant time step (allowing the subscript n to be dropped from the time step) gives the system of difference equations
Solving Eq. 10-10 algebraically yields an approximate solution to Eq. 10-4 and gives values of v and x at successive time steps. The value of t at the beginning of time step n is given by
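The difference equations of Eq. 10-10 can be stepped directly. A minimal sketch follows, with illustrative values assumed for F/m, T, and the initial conditions:

```python
# Euler difference equations for constant-force, constant-mass motion
# (Eq. 10-10):  v[n+1] = v[n] + T*(F/m),   x[n+1] = x[n] + T*v[n]
F_over_m = 2.0   # constant acceleration F/m, m/s^2 (assumed value)
T = 0.1          # time step, s (assumed value)
v, x = 0.0, 0.0  # initial speed, m/s, and position, m

for n in range(10):          # integrate from t = 0 to t = 1 s
    v_next = v + T * F_over_m
    x_next = x + T * v       # uses v at the *beginning* of the step
    v, x = v_next, x_next

# v is exact for constant acceleration (2.0 m/s at t = 1 s), but
# x = 0.9 m falls short of the exact 1.0 m, showing the truncation
# error of rectangular integration.
```

The shortfall in x illustrates why the Euler method requires small steps for acceptable accuracy.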
The Euler method, also called simple rectangular integration, introduces a truncation error at each step, as shown in Fig. 10-1. As the time step T becomes sufficiently small, the error becomes insignificant. Reducing the time step to the extremely short times required for great accuracy, however, increases the computation time and can introduce a significant rounding error, depending on the word length of the digital computer employed. The Euler method is seldom used in simulations because of these difficulties (see the improved Euler method in subpar. 10-4.1.2.1). The Euler method is presented here as a basis for later discussions of other numerical integration methods that permit the size of the integration interval T to be increased and the integration accuracy to be improved.

Figure 10-1. Truncation Error in Euler Method
10-4.1.1.2 Runge-Kutta Method
The Runge-Kutta method and its variations are very popular in missile flight simulations. The method provides good accuracy, is simple to program, requires minimum storage, and is stable under most circumstances with integration intervals of reasonable size (Refs. 10 and 14). The basic derivation of the method involves a summation of terms, the number of which is arbitrary. The most common form of the method is based on the summation of four terms; consequently, it is referred to as the fourth-order Runge-Kutta method. The derivation of the method also contains certain arbitrary constants. In the fourth-order Runge-Kutta method, the most frequently selected arbitrary constants lead to a set of difference equations of the form
Eq. 10-12 is applicable to sets of first-order differential equations that have the form of Eq. 10-1.
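A sketch of a fourth-order Runge-Kutta step follows, using the standard coefficient choice (the specific constants of Eq. 10-12 are assumed here, since the equation image is not reproduced); the four intermediate evaluations are named H1 through H4 to match the H terms referred to in this subparagraph:

```python
def rk4_step(G, t, y, T):
    """One fourth-order Runge-Kutta step for the vector system dy/dt = G(t, y).
    Each H term is T times an evaluation of G; the step is their weighted sum."""
    H1 = [T * g for g in G(t, y)]
    H2 = [T * g for g in G(t + T/2, [yi + h/2 for yi, h in zip(y, H1)])]
    H3 = [T * g for g in G(t + T/2, [yi + h/2 for yi, h in zip(y, H2)])]
    H4 = [T * g for g in G(t + T,   [yi + h   for yi, h in zip(y, H3)])]
    return [yi + (h1 + 2*h2 + 2*h3 + h4) / 6
            for yi, h1, h2, h3, h4 in zip(y, H1, H2, H3, H4)]
```

For dy/dt = -y with T = 0.1, a single step gives 0.9048375, matching the exact e^-0.1 (about 0.9048374) to roughly seven digits; note that each step costs four evaluations of G.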
There are also Runge-Kutta methods that involve more than four steps; however, they are rarely used in simulation applications because the small improvement in accuracy generally does not justify the increase in execution time (Ref. 11).
Although the Runge-Kutta method involves fairly simple equations, it has certain disadvantages:
1. If a function G is complicated, evaluation of the H terms at each computation step can be time-consuming.
2. The method will calculate a solution across points of discontinuity, giving erroneous results without giving any indication that this has been done.
3. There is no readily obtainable error analysis.
The lack of any error analysis for the fourth-order Runge-Kutta method can be partially compensated for by using certain rules of thumb. One such rule of thumb (Ref. 12) is that if the numerical value of the quantity
becomes larger than a few hundredths at any point tn, the step size T should be decreased.
10-4.1.2 Implicit Methods
The equations for implicit methods of numerical integration are in a form that cannot be solved explicitly for yn+1. An implicit solution to such equations generally can be found by iterative calculations. In practice, instead of performing many iterations to solve the equations accurately for yn+1, the same accuracy is obtained with much less effort by employing a smaller computation step size and performing only one or two iterations (Ref. 12). These implicit methods are called predictor-corrector methods.
10-4.1.2.1 One-Step Processes
One-step difference equations determine the value of the dependent variable at step (n+1) in terms of its value only at the preceding step n. Thus, to start the calculations of a one-step method, the initial value of the dependent variable is used as the preceding value to calculate the first step.
Since only one preceding value of the dependent variable is used, one-step methods are called self-starting.
The two explicit methods described in subpar. 10-4.1.1, the Euler method and the Runge-Kutta method, are both one-step processes and are, therefore, self-starting. The improved Euler method and the modified Euler method are examples of implicit methods that also use a one-step process.
10-4.1.2.1.1 Improved Euler Method
The improved Euler method is implemented by a set of predictor equations and a set of corrector equations (Ref. 12). Again, consider the general normal system of first-order differential equations given by Eq. 10-1
The Euler method, Eq. 10-3, gives a good first approxima- tion and is employed as the predictor equation for the improved Euler method:
The corrector equation is
The vector Yn+1 of predicted values of the dependent variable is calculated by using Eq. 10-13 and substituting into Eq. 10-14 to solve for the corrected dependent variable vector yn+1. The term G(tn+1,Yn+1) is simply the function G, defined by Eq. 10-1, evaluated at time tn+1 and using the value of the predicted dependent variable Yn+1. In preparation for the next computation step, G(tn+1,yn+1) is calculated by substituting the value of yn+1, calculated by Eq. 10-14, into the function G.
Eq. 10-14 is based on the trapezoidal integration equation. This equation is similar to rectangular integration (the Euler method) except that the value of G used is based on the mean of the value of G at the beginning of the step and the predicted value of G at the end of the step.
As an example, again consider the system of equations that describes Newton’s second law, assuming constant force and constant mass, as given by Eqs. 10-5
Substituting Eq. 10-15 into Eq. 10-13 yields the predictor
The values of the functions G(t,Y) needed to solve Eq. 10-14 for this example are given by
Substituting into Eq. 10-14 yields the corrected values of the dependent variables
Eq. 10-19 simplifies to
These are the one-dimensional equations of motion of a mass with constant acceleration F/m over the time period T.
Even when the acceleration is variable with time, the assumption that the acceleration changes in steps and is held constant over the duration of each incremental time step is sufficiently accurate for many applications. The improved Euler method, Eqs. 10-20 and 10-21, was used in Chapter 7 to calculate target motion by using Eqs. 7-22 and 7-23 and is often used in three-degree-of-freedom simulations to solve the equations of motion of the missile.
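The predictor-corrector sequence of Eqs. 10-13 and 10-14 can be sketched as follows (the function names are illustrative assumptions):

```python
def improved_euler_step(G, t, y, T):
    """One improved Euler step: Euler predictor (Eq. 10-13) followed by
    a trapezoidal corrector (Eq. 10-14) for the vector system dy/dt = G(t, y)."""
    g_n = G(t, y)                                  # G(t[n], y[n])
    Y = [yi + T * gi for yi, gi in zip(y, g_n)]    # predictor: Y[n+1]
    g_pred = G(t + T, Y)                           # G(t[n+1], Y[n+1])
    return [yi + (T/2) * (gi + gp)                 # corrector: mean slope
            for yi, gi, gp in zip(y, g_n, g_pred)]
```

Applied to the constant-acceleration system with state y = (v, x), one step yields v + T(F/m) and x + vT + (F/m)T^2/2, the exact constant-acceleration result noted above.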
10-4.1.2.1.2 Modified Euler Method
There are at least two different numerical integration methods in the literature referred to as modified Euler methods. One method is similar to the Euler method (subpar. 10-4.1.1.1) except that the modified method attempts to average out the truncation error at step n by integrating from (n-1) to (n+1) at each step; thus it is a multistep method. It can be shown, however, that when this particular method is used in a simulation of a component in a feedback loop, an unstable solution always results (Ref. 11); therefore, the method should not be used for missile flight simulations.
A more useful one-step method, also identified as the modified Euler method, begins by using the predictor equation for only half of a time-step interval and then processes the second half of the interval by using the corrector equation. Thus the predicted value of G is at the middle of the integration interval. The predictor and corrector equations for the modified Euler method are given by
When applied to the particular differential equation described by Eq. 10-4, this modified Euler method yields the same results as the improved Euler method, i.e., Eqs. 10-20 and 10-21.
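A sketch of this half-step predictor and midpoint corrector follows (the specific form of the handbook's equations is assumed, since the equation images are not reproduced):

```python
def modified_euler_step(G, t, y, T):
    """One modified Euler (midpoint) step for the vector system dy/dt = G(t, y)."""
    # predictor over half the interval: Y at t[n] + T/2
    Y_half = [yi + (T/2) * gi for yi, gi in zip(y, G(t, y))]
    # corrector: full step using G evaluated at the midpoint
    g_mid = G(t + T/2, Y_half)
    return [yi + T * gi for yi, gi in zip(y, g_mid)]
```

For the constant-acceleration system of Eq. 10-4 with state y = (v, x), this produces the same one-step updates as the improved Euler method, consistent with the statement above.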
10-4.1.2.2 Multistep Processes
A k-step difference equation employs the values of the dependent variable at the k preceding steps. For example, a four-step difference equation (not to be confused with fourth-order Runge-Kutta) determines the value of yn+1 by using values of yn, yn-1, yn-2, and yn-3. Because of the need for multiple preceding values of the dependent variable, multistep difference equations are not self-starting, and some auxiliary method is required to determine the preceding values (Ref. 12). A typical approach is to use a one-step method, such as the Runge-Kutta method, to calculate the first k values of the dependent variable and then to switch to the multistep method.
10-4.1.2.2.1 Milne Method
The Milne method requires multiple previous values of the dependent variable to solve the difference equations representing the differential equations; therefore, it is a multistep method. The derivation of the Milne method parallels that of the improved Euler method, except that the Milne method uses the Simpson rule in place of the trapezoidal rule (Ref. 12).
The predictor equation for the Milne method is (Ref. 15)
and the Milne corrector equation is
These equations use four previously calculated values of the dependent variable to find the succeeding value. Other forms of the Milne method are possible. For example, see Ref. 11 for Milne equations that use only two previously calculated values of the dependent variable and Ref. 15 for Milne equations that use six previous values of the dependent variable. Ref. 15 also gives special Milne equations that are applicable to second- and third-order differential equations.
Since the Milne method is a multistep process requiring previous values of the dependent variable, it is not self-starting. A one-step method, such as the Euler method, the improved Euler method, or the Runge-Kutta method, is required to calculate the values of the dependent variables for the first few computation steps.
Although it is not as popular as the Adams methods for application to missile flight simulations, the Milne method gives a perspective of the relationships among the various numerical integration methods. Other predictor-corrector methods differ from the improved Euler method and the Milne method only with respect to the polynomial interpolation equations from which the predictor and corrector equations are derived.
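A scalar sketch of the Milne predictor-corrector pair follows, using the standard fourth-order form (assumed here, since the equation images are not reproduced); four previously calculated values of y are required, consistent with the text:

```python
def milne_step(G, t, ys, T):
    """One Milne step for the scalar ODE dy/dt = G(t, y).
    ys = [y[n-3], y[n-2], y[n-1], y[n]]; t is the time at step n.
    Returns y[n+1]."""
    g = [G(t - (3 - i) * T, ys[i]) for i in range(4)]   # G at steps n-3..n
    # predictor: y[n+1] = y[n-3] + (4T/3)*(2G[n] - G[n-1] + 2G[n-2])
    Y = ys[0] + (4*T/3) * (2*g[3] - g[2] + 2*g[1])
    # corrector (Simpson rule over [n-1, n+1], using G of the predicted Y):
    # y[n+1] = y[n-1] + (T/3)*(G(t[n+1], Y) + 4G[n] + G[n-1])
    return ys[2] + (T/3) * (G(t + T, Y) + 4*g[3] + g[2])
```

With starting values taken from the exact solution of dy/dt = -y and T = 0.1, one step reproduces e^-0.4 to about six decimal places.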
10-4.1.2.2.2 Adams Methods
The Adams-Bashforth equations are a family of methods often used as predictors. A commonly used predictor equation is the fourth-order Adams-Bashforth equation, which is given by
This equation is most frequently used in conjunction with the Adams-Moulton corrector equation, which is given by
(10-27)
where the function G now depends on the predicted variable Y, i.e., Gn = G(tn, Yn), Gn-1 = G(tn-1, Yn-1), etc.
In simulations that require real-time operations, computation time is typically reduced by using the second-order Adams-Bashforth predictor in place of the fourth-order Adams-Bashforth predictor given in Eq. 10-26. This method is known as AB2; it is just a predictor method, i.e., a corrector is not used. The second-order Adams-Bashforth equation is
The Adams equations are more accurate than the Euler equations and are comparable in accuracy to the Runge-Kutta method but require about half as much computation (Ref. 10). In predictor-corrector methods the difference between the predicted and corrected values is one measure of the error being made at each computation step and therefore can be used to control the step size employed in the integration (Ref. 16). Predictor-corrector methods, however, are not self-starting, and unlike the Runge-Kutta method, they cannot be easily used alone with a variable time step. These difficulties frequently are alleviated in practice by using the Runge-Kutta method to obtain the starting values and also to compute the solution for the first few computation steps after the step size has been changed or after a discontinuity has been encountered.
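A scalar sketch of the fourth-order Adams-Bashforth-Moulton pair and the AB2 predictor follows, using the standard coefficients (assumed here; the images of Eqs. 10-26 and 10-27 and of the AB2 equation are not reproduced):

```python
def abm4_step(G, t, ys, T):
    """One fourth-order Adams-Bashforth-Moulton step for dy/dt = G(t, y).
    ys = [y[n-3], y[n-2], y[n-1], y[n]]; t is the time at step n."""
    g = [G(t - (3 - i) * T, ys[i]) for i in range(4)]   # G at steps n-3..n
    # fourth-order Adams-Bashforth predictor
    Y = ys[3] + (T/24) * (55*g[3] - 59*g[2] + 37*g[1] - 9*g[0])
    # Adams-Moulton corrector, with G evaluated at the predicted value Y
    return ys[3] + (T/24) * (9*G(t + T, Y) + 19*g[3] - 5*g[2] + g[1])

def ab2_step(G, t, ys, T):
    """One second-order Adams-Bashforth (AB2) step, predictor only.
    ys = [y[n-1], y[n]]; t is the time at step n."""
    g_n, g_nm1 = G(t, ys[1]), G(t - T, ys[0])
    return ys[1] + (T/2) * (3*g_n - g_nm1)
```

For dy/dt = -y with exact starting values and T = 0.1, the difference between the predicted and corrected values of the ABM4 step is a few parts in 10^6, the kind of per-step error measure noted above.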
Another difficulty with predictor-corrector methods is that, in some cases, they are subject to certain types of instabilities that do not occur when the Runge-Kutta method is used. Numerical instability in a simulation usually is considered to be the unbounded compounding of numerical error that results from either a truncation or rounding error or a combination of the two. One approach to resolving a truncation error is to reduce the step size of the simulation until the numerical integration process is stable and then test the process to determine whether at the small step size a rounding error introduces a significant error in the simulation. One particular type of instability manifests itself first by creating an error that is larger than expected and then by increasing this error even more when an attempt is made to reduce it by decreasing the step size. A more detailed discussion of this instability is given in Ref. 17.
The tendency of multistep methods to become numerically unstable under certain conditions can lead to disastrous results; therefore, these methods should not be used indiscriminately (Ref. 10). In many applications involving complicated equations, however, the predictor-corrector methods can result in a considerable savings of computer time over the Runge-Kutta method (Ref. 12). Another important advantage of multistep methods is that with little additional computation they provide automatic monitoring of the accuracy, which can be used as a basis for deciding whether the step size is too small, too large, or about right (Ref. 10).
A number of special forms of various numerical integration methods are available for specific use with higher order differential equations. For general-purpose computing these special methods are not very useful for solving differential equations; however, for the special cases in which they are applicable, these methods reduce the number of calculations required for numerical evaluation (see, for example, Refs. 15 and 18). Ref. 19 contains a discussion of higher order methods and variable step size methods that attempt to select an optimum integration step.
10-4.1.3 Modern Numerical Integration Methods
The dynamical system problems presently being faced are forcing classical numerical methods to their limits and leading to numerical instabilities of such drastic extent that very small step sizes are required to stabilize digital simulations. This leads to costly simulation at best and inaccurate simulation at worst, due to the propagation of rounding errors.
A digital simulation is, in itself, a discrete dynamic system. It can be filtered, tuned, stabilized, controlled, analyzed, and synthesized in the same manner as any discrete system. This viewpoint broadens the scope of numerical methods and mathematical modeling techniques applicable to simulation. This applies not only to the classically developed methods previously discussed but to all of those developed from the viewpoints of sampled-data theory and discrete system theory. This broadening of applicability has led to the development of new simulation methods that have no classical counterparts (Ref. 15).
Modern numerical methods that simulate the motion of continuous systems accurately and efficiently in both the time and frequency domains date back only to 1959, and most of the methods for simulating nonlinear processes were developed in the early and mid 1970s (Ref. 15).
One type of modern numerical integrator is based on matching the roots of the difference equation to the roots of the differential equation it simulates. Since the dynamics of a continuous system are completely characterized by its roots and final value, it is appropriate to make the roots and final value of the simulating difference equations match those of the system being simulated. If a system of difference equations is synthesized having poles, zeros, and final value that match those of the continuous system, these equations will accurately simulate the continuous system. Difference equations generated in this manner are intrinsically stable, regardless of the step size, if the system they are simulating is stable.
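As a minimal illustration of root matching (the example system and names are assumptions, not from this handbook), consider the scalar system dx/dt = -a*x, whose single continuous root s = -a maps to the discrete root z = e^(sT):

```python
import math

def root_matched_step(x, a, T):
    """One step of the root-matched difference equation for dx/dt = -a*x:
    x[n+1] = exp(-a*T) * x[n], whose discrete root z = exp(-a*T) matches
    the continuous root s = -a through z = exp(s*T)."""
    return math.exp(-a * T) * x

x, a, T = 1.0, 2.0, 0.5   # deliberately large step size
for n in range(4):        # integrate from t = 0 to t = 2 s
    x = root_matched_step(x, a, T)
# x now equals exp(-a * 2.0) to rounding: exact at every sample point
```

The difference equation is stable for any step size T > 0 whenever the continuous system (a > 0) is stable, and it reproduces the continuous solution exactly at the sample points, illustrating why root-matched equations inherit the stability of the system they simulate.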
Another type of modern numerical integrator is based on tunable digital filters whose phase and amplitude characteristics can be varied to control integration error. Tunability in both the time domain and frequency domain has no counterpart in classical numerical integration. These numerical integrators, however, reduce to certain classical integration equations when phase-shift errors are introduced. This leads to an interesting corollary that large classes of classical numerical integrators are actually the same integrator, differing only by the amount of integrand phase shift (Ref. 15).
Significant advances in techniques for evaluating nonlinear equations of motion have been developed recently by using piecewise linear difference equations in which the Jacobian of the differential equation plays a key role (Ref. 15). If the simulation involves many state vectors, however, a major difficulty can arise because these techniques require the inversion of a Jacobian matrix at each computational time step.
The details of these methods are too extensive to be included in this handbook; however, a complete discussion of a number of methods of synthesizing and applying modern numerical integration techniques is given in Ref. 15. See also Refs. 20 through 23 for discussions of the application of numerical methods to state-space equations. Although future surface-to-air missile simulations will include applications of these modern methods, there is no universal method that solves every problem. When choosing a method or methods, the user must evaluate the advantages and disadvantages of the different methods in terms of their application to the particular problem (Ref. 15).
10-4.1.4 Applications
Missile flight simulations require the simultaneous solution of several differential equations at each time step. For example, in a six-degree-of-freedom simulation, the equations of missile motion form a set of six differential equations (Eqs. 4-37 and 4-46) containing many cross connections among them. Integration of these six equations yields the velocity components of the missile. A second integration of each equation is required to solve for the displacement components; thus the number of equations to be solved doubles. The cross couplings among all of these equations are handled by applying the numerical integration procedures to the separate equations in parallel. That is, the first iteration of a numerical procedure is applied to all of the differential equations, the next iteration is applied to all of the equations, and so on, until all of the iterations for a given computational time step are completed.
The particular numerical method to employ in a given simulation depends largely on the requirements for accuracy and computation speed. Different methods in a given application behave differently. The numerical analyst must be constantly alert to indications that a numerical integration algorithm is not functioning properly (Ref. 10). The most useful comparison of the various methods of numerical integration is based on their performance, on an experimental basis, in the actual simulation being considered (Ref. 11).
Present general practice is to use a simple, straightforward, fast-running method, such as the improved or modified Euler method, for applications that do not require great accuracy. Many simulations for general systems studies of proposed or hypothetical missiles or of foreign missiles based on uncertain intelligence data fall in this category because the uncertainties in the input parameters that describe the missile make great computational accuracy unwarranted. Also the miss distance calculated by a simulation of a guided missile is likely to be affected only insignificantly by small errors introduced through numerical integration methods. The reason for this low sensitivity to errors induced by the numerical integration method is that the simulated closed-loop guidance system generates control commands based on the sum of the simulated missile heading error plus any heading errors induced by computa-