Various methods are compared for spatial discretization and time integration of the one-dimensional Schrödinger equation. The second part of the book shows the application of computer simulation methods to a variety of physical systems, with a focus on molecular biophysics.
Numerical Methods
Error Analysis
- Machine Numbers and Rounding Errors
- Numerical Errors of Elementary Floating Point Operations
- Numerical Extinction
- Addition
- Multiplication
- Error Propagation
- Stability of Iterative Algorithms
- Example: Rotation
- Truncation Error
Taylor series expansion to first order propagates the input errors and the rounding errors of the first step through the algorithm. The product of the matrices Dϕ(r)· · ·Dϕ(1) is the matrix containing the derivatives of the output data with respect to the input data (chain rule).
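A minimal sketch (not from the book) of the rotation example: a 2×2 rotation matrix is applied repeatedly and the drift of the vector norm is monitored, illustrating that this iteration is well behaved because the Jacobian of each step is an orthogonal matrix, which does not amplify errors.

```python
import numpy as np

# stability of an iterative algorithm: repeated application of a rotation
phi = 2.0 * np.pi / 360.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
x = np.array([1.0, 0.0])
for n in range(1, 100001):
    x = R @ x
    if n % 20000 == 0:
        print(n, abs(np.linalg.norm(x) - 1.0))  # deviation from unit norm stays tiny
```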
Problems
Machine Precision
The approximation (1.63) deviates from the exact solution by a term of the order O(Δt³); therefore the local error order of this algorithm is O(Δt³).
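The specific algorithm (1.63) is not reproduced here; as a stand-in with the same O(Δt³) local error, the following sketch applies a single explicit midpoint step to dx/dt = −x and shows the one-step error shrinking by roughly a factor of eight when Δt is halved.

```python
import numpy as np

# local error of one explicit midpoint step for dx/dt = -x, x(0) = 1
for dt in (0.1, 0.05, 0.025):
    x = 1.0
    x_mid = x + 0.5 * dt * (-x)           # half step
    x_new = x + dt * (-x_mid)             # full step using the midpoint slope
    print(dt, abs(x_new - np.exp(-dt)))   # error ~ dt**3 / 6
```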
Maximum and Minimum Integers
Truncation Error
Interpolation
Interpolating Functions
The parameters must be determined so that the interpolation function has the correct values at all sampling points. An interpolation problem is called linear if the interpolation function is a linear combination of given basis functions.
Polynomial Interpolation
- Lagrange Polynomials
- Barycentric Lagrange Interpolation
- Newton’s Divided Differences
- Neville Method
- Error of Polynomial Interpolation
Once the weights have been calculated, evaluating the polynomial requires only O(n) operations, whereas computing all the Lagrange polynomials requires O(n²) operations. The weights depend only on the sample points, not on the function values, as can be seen from their defining formula.
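A sketch of barycentric Lagrange interpolation along these lines; the weight formula and the second barycentric form are standard, while the test function is arbitrary.

```python
import numpy as np

def barycentric_weights(x):
    """Weights w_i = 1 / prod_{j != i}(x_i - x_j); computed once in O(n^2)."""
    n = len(x)
    w = np.ones(n)
    for i in range(n):
        w[i] = 1.0 / np.prod(x[i] - np.delete(x, i))
    return w

def barycentric_eval(t, x, w, y):
    """Evaluate the interpolating polynomial in barycentric form, O(n) per point."""
    d = t - x
    if np.any(d == 0.0):                 # t coincides with a sample point
        return y[np.argmax(d == 0.0)]
    q = w / d
    return np.sum(q * y) / np.sum(q)

x = np.linspace(0.0, 1.0, 7)
y = np.exp(x)
w = barycentric_weights(x)
print(barycentric_eval(0.35, x, w, y), np.exp(0.35))
```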
Spline Interpolation
The most important example is the cubic spline, which is given in the interval xi ≤ x < xi+1 by (2.44). The most common choice is natural boundary conditions s''(x0) = s''(xn) = 0, but periodic boundary conditions s(x0) = s(xn), s'(x0) = s'(xn), s''(x0) = s''(xn) or prescribed derivative values s'(x0) and s'(xn) are also often used.
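A short demonstration of how the boundary conditions change the interpolating spline, using scipy's CubicSpline (an assumption of this sketch, not the book's own code); 'natural' imposes s''(x0) = s''(xn) = 0, and the tuple form prescribes first derivatives at the boundaries.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

# natural boundary conditions: s''(x0) = s''(xn) = 0
s_nat = CubicSpline(x, y, bc_type='natural')
# prescribed first derivatives at the boundaries instead
s_clamped = CubicSpline(x, y, bc_type=((1, np.cos(x[0])), (1, np.cos(x[-1]))))

xs = np.linspace(0.0, 4.0, 9)
print(s_nat(xs) - s_clamped(xs))  # the boundary conditions change the interpolant
```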
Rational Interpolation
- Padé Approximant
- Barycentric Rational Interpolation
- Rational Interpolation of Order [M, N]
- Rational Interpolation without Poles
All these tridiagonal systems can easily be solved with a specialized Gaussian elimination method (sections 5.3 and 5.4). Polynomial interpolation of larger data sets can behave poorly, especially in the case of equidistant x values (Runge's phenomenon).
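The poor behavior for equidistant points can be reproduced with Runge's classic example f(x) = 1/(1 + 25x²); the maximum interpolation error grows with the polynomial degree.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Runge's example: polynomial interpolation on equidistant points
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xs = np.linspace(-1.0, 1.0, 200)
for n in (5, 11, 21):
    x = np.linspace(-1.0, 1.0, n)
    p = Polynomial.fit(x, f(x), n - 1)           # degree n-1 through n points
    print(n, np.max(np.abs(p(xs) - f(xs))))      # error grows near the ends
```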
Multivariate Interpolation
In the uniform case this simplifies to the weights of Table 2.1, and the result can be written as a two-dimensional polynomial. Consider, for example, two-dimensional spline interpolation on a rectangular grid of data (2.111) to create a new set of finer-resolution data: first perform spline interpolation in the x direction for each data row j to compute new data sets, then interpolate the results in the y direction.
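A sketch of this row-then-column procedure using 1D cubic splines from scipy; the test grid and function are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# coarse rectangular grid; rows are indexed by y, columns by x
x = np.linspace(0.0, 1.0, 6)
y = np.linspace(0.0, 1.0, 5)
data = np.sin(np.pi * x[None, :]) * np.cos(np.pi * y[:, None])

x_fine = np.linspace(0.0, 1.0, 25)
y_fine = np.linspace(0.0, 1.0, 20)

# step 1: spline interpolation in the x direction for each data row j
rows = np.array([CubicSpline(x, data[j])(x_fine) for j in range(len(y))])
# step 2: spline interpolation in the y direction for each refined column
fine = np.array([CubicSpline(y, rows[:, i])(y_fine) for i in range(len(x_fine))]).T

print(fine.shape)  # finer-resolution data set, here (20, 25)
```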
Polynomial Interpolation
Examine the oscillatory behavior of the interpolating polynomial for a discontinuous pulse or step function as given by the data in Table 2.4.
Two-dimensional Interpolation
Numerical Differentiation
- One-Sided Difference Quotient
- Central Difference Quotient
- Extrapolation Methods
- Higher Derivatives
- Partial Derivatives of Multivariate Functions
- Numerical Differentiation
The errors are uncorrelated, and the relative error of the result can be estimated by (3.5). If we assume that the magnitudes of the function and of the derivative are comparable, we obtain a simple rule of thumb for the optimum step size.
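A quick numerical check of this rule of thumb: the total error of the one-sided quotient is smallest near h ≈ √ε, that of the central quotient near h ≈ ε^(1/3), where ε is the machine precision. The test function is arbitrary.

```python
import numpy as np

f, df = np.exp, np.exp          # test function and its exact derivative
x = 1.0
for h in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10):
    one_sided = (f(x + h) - f(x)) / h
    central = (f(x + h) - f(x - h)) / (2.0 * h)
    print(f"{h:.0e}  {abs(one_sided - df(x)):.2e}  {abs(central - df(x)):.2e}")
# truncation error dominates for large h, rounding error for small h
```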
Numerical Integration
Equidistant Sample Points
- Closed Newton–Cotes Formulae
- Open Newton–Cotes Formulae
- Composite Newton–Cotes Rules
- Extrapolation Method (Romberg Integration)
Fig. 4.1 (Trapezoid rule and midpoint rule): the trapezoid rule (left) approximates the integral by the average of the function values at the interval boundaries; the midpoint rule (right) evaluates the function in the middle of the interval and has the same error order.
Applying the trapezoid rule to each subinterval gives the composite trapezoid rule (4.21); applying Simpson's rule to pairs of subintervals correspondingly gives the composite Simpson's rule.
Fig. (numerical quadrature of ∫0 sin(x²) dx): circles show the absolute error of the composite trapezoid rule (4.22) for the step size sequence hi+1 = hi/2.
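A sketch of this experiment, assuming an upper integration limit of 2 (the limit is not specified above); the error of the composite trapezoid rule drops by about a factor of four per halving of h, consistent with its O(h²) global error.

```python
import numpy as np
from scipy.integrate import quad

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1])

f = lambda x: np.sin(x**2)
exact, _ = quad(f, 0.0, 2.0)            # reference value
n = 4
for _ in range(6):                      # step size sequence h_{i+1} = h_i / 2
    print(n, abs(trapezoid(f, 0.0, 2.0, n) - exact))
    n *= 2
```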
Optimized Sample Points
- Clenshaw–Curtis Expressions
- Gaussian Integration
- Gauss–Legendre Integration
- Other Types of Gaussian Integration
- Connection with an Eigenvalue Problem
The Fourier coefficients are calculated, and the integral is then approximated by (4.51). Very high order Clenshaw–Curtis weights can be calculated efficiently [20, 21]. Gaussian integration achieves the maximum possible order by using a set of polynomials which are orthogonal with respect to the scalar product. Further integration rules can be obtained by using other sets of orthogonal polynomials, for example Chebyshev or Hermite polynomials.
The determination of quadrature points and weights can be formulated as an eigenvalue problem [22,23].
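A compact sketch of this eigenvalue formulation (Golub–Welsch) for Gauss–Legendre integration: the nodes are the eigenvalues of the symmetric tridiagonal Jacobi matrix built from the recurrence coefficients of the Legendre polynomials, and the weights follow from the first components of the eigenvectors.

```python
import numpy as np

def gauss_legendre(n):
    # Golub-Welsch: nodes and weights from the Jacobi matrix eigenproblem
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)   # off-diagonal recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = 2.0 * V[0, :]**2             # mu_0 = integral of the weight = 2
    return nodes, weights

x, w = gauss_legendre(5)
print(np.sum(w * x**4), 2.0 / 5.0)  # exact for polynomials up to degree 9
```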
Romberg Integration: use the trapezoid rule and improve the result by extrapolation.
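A minimal Romberg sketch along these lines: the composite trapezoid values for h, h/2, h/4, … form the first column of the extrapolation table.

```python
import numpy as np

def romberg(f, a, b, kmax=8):
    """Romberg integration: trapezoid values plus Richardson extrapolation."""
    R = np.zeros((kmax, kmax))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, kmax):
        h *= 0.5
        # refine the trapezoid sum by adding the new midpoints
        R[k, 0] = 0.5 * R[k-1, 0] + h * sum(f(a + (2*i + 1) * h)
                                            for i in range(2**(k-1)))
        for m in range(1, k + 1):        # extrapolate to h -> 0
            R[k, m] = R[k, m-1] + (R[k, m-1] - R[k-1, m-1]) / (4**m - 1)
    return R[kmax-1, kmax-1]

print(romberg(np.sin, 0.0, np.pi))  # exact value 2
```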
Systems of Inhomogeneous Linear Equations
Gaussian Elimination Method
- Pivoting
- Direct LU Decomposition
A series of linear combinations of the equations transforms the matrix into an upper triangular matrix. Begin by subtracting ai1/a11 times the first row from rows i = 2 · · · n, which can be written as multiplication by a lower triangular matrix. With partial pivoting this gives the LU decomposition of the matrix PA, where P is a permutation matrix. P is not explicitly required; it is sufficient to keep track of the row interchanges.
If the matrix elements are of very different magnitudes, it may be necessary to balance the matrix, for example by normalizing all rows of A.
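A sketch of LU decomposition with partial pivoting; storing the multipliers in place of the eliminated elements is a common convention, not necessarily the book's.

```python
import numpy as np

def lu_decompose(A):
    """LU decomposition with partial pivoting, PA = LU.
    Returns L, U and the permutation vector p."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for k in range(n - 1):
        # partial pivoting: bring the largest remaining element to the diagonal
        j = k + np.argmax(np.abs(A[k:, k]))
        A[[k, j]], p[[k, j]] = A[[j, k]], p[[j, k]]
        A[k+1:, k] /= A[k, k]                    # store the multipliers in place
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U, p

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
L, U, p = lu_decompose(A)
print(np.allclose(L @ U, A[p]))  # PA = LU
```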
QR Decomposition
- QR Decomposition by Orthogonalization
- QR Decomposition by Householder Reflections
Numerically stable algorithms use a series of transformations with unitary matrices, mostly Householder reflections (Fig. 5.1) [2], which have the form of a reflection at a hyperplane. Alternatively, Givens rotations [39] can be used, which require slightly more floating point operations. In the first step the first column vector of A is transformed into a vector along the 1-axis. To avoid numerical extinction, the sign of k is chosen according to (5.70); the Householder transformation of the first column vector of A then eliminates the elements below the diagonal.
In the next n−2 steps, further Householder reflections are applied in the subspace k ≤ i, j ≤ n to eliminate the remaining subdiagonal elements.
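A compact Householder QR sketch; the sign choice in the reflection vector avoids numerical extinction as described above.

```python
import numpy as np

def householder_qr(A):
    """QR decomposition by Householder reflections, A = QR."""
    R = A.astype(float).copy()
    m, n = R.shape
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        # choose the sign to avoid cancellation (numerical extinction)
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        # apply the reflection I - 2 v v^T to R and accumulate it in Q
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.rand(4, 4)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))
```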
Linear Equations with Tridiagonal Matrix
If the orthogonal matrix Q is explicitly required, additional numerical operations are needed to form the product of the reflections. For the tridiagonal system we multiply the first row by a2/b1 and subtract it from the second row. This algorithm is only well behaved if the matrix is diagonally dominant, |bi| > |ai| + |ci|.
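The resulting forward elimination and back substitution (often called the Thomas algorithm) in a few lines; the test system is diagonally dominant as required.

```python
import numpy as np

def thomas(a, b, c, r):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a (a[0] unused), b, c (c[-1] unused) and right-hand side r."""
    n = len(b)
    b, r = b.astype(float).copy(), r.astype(float).copy()
    for i in range(1, n):              # forward elimination
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        r[i] -= m * r[i-1]
    x = np.empty(n)
    x[-1] = r[-1] / b[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = (r[i] - c[i] * x[i+1]) / b[i]
    return x

a = np.array([0., 1., 1., 1.])
b = np.array([4., 4., 4., 4.])
c = np.array([1., 1., 1., 0.])
r = np.array([5., 6., 6., 5.])
print(thomas(a, b, c, r))  # solution close to [1, 1, 1, 1]
```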
Cyclic Tridiagonal Systems
Linear Stationary Iteration
- Richardson Iteration
- Matrix Splitting Methods
- Jacobi Method
- Gauss-Seidel Method
- Damping and Successive Over-relaxation
The free parameter α must be chosen so that the diagonal elements do not become too small. This is also known as the method of successive over-relaxation (SOR) and differs from the damped Gauss–Seidel method, which follows from (5.113). The order of the method is O(N^(3/2)), which is comparable to the most efficient matrix inversion methods [44].
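A plain SOR sweep as a sketch; this is the standard formulation, and the book's damped Gauss–Seidel variant (5.113) differs slightly. Test matrix and tolerance are illustrative.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, maxiter=10000):
    """Successive over-relaxation for Ax = b (A with nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxiter):
        for i in range(n):
            # sum over j != i, using the components already updated in this sweep
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([3., 2., 3.])
print(sor(A, b))  # converges to [1, 1, 1]
```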
Non Stationary Iterative Methods
- Krylov Space Methods
- Minimization Principle for Symmetric Positive Definite Systems
- Gradient Method
- Conjugate Gradients Method
- Non Symmetric Systems
General iterative methods search for the minimum residual in a subspace of R^N that increases with each step. In principle the solution is reached after N = dim(A) steps, but rounding errors may require more steps, and the method should be considered iterative. The generalized minimal residual method (GMRES) directly searches for the minimum of ||Ax − b|| in the Krylov spaces of increasing order Kn(A, r(0)).
If qn+1 = 0, the algorithm must stop, and the Krylov space has the full dimension of the matrix.
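As the simplest concrete Krylov-space method, here is a conjugate-gradient sketch for the symmetric positive definite case discussed above; GMRES itself is longer and is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Conjugate gradient method for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual, spans the Krylov space
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))
```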
Matrix Inversion
Problem
The maximum difference maxi=1..n |xi − xi,exact| increases only slightly with dimension for the well-behaved matrix (5.224), but quite dramatically for the ill-conditioned Hilbert matrix (5.226).
Fig. 5.3 (Condition numbers): the condition number cond(A) increases only linearly with dimension for the well-behaved matrix (5.224, full circles), but exponentially for the ill-conditioned Hilbert matrix (5.226, open circles).
The Hilbert matrix is positive definite, and therefore the inverse matrix exists and can even be written down explicitly [50].
Its column vectors are very nearly linearly dependent, and the condition number grows exponentially with the dimension (Fig. 5.3).
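The exponential growth of the condition number is easy to observe numerically; the choice of the exact solution x = (1, …, 1) is illustrative.

```python
import numpy as np

# condition number and solution error for the Hilbert matrix H_ij = 1/(i+j+1)
for n in (4, 8, 12):
    H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
    x_exact = np.ones(n)
    x = np.linalg.solve(H, H @ x_exact)          # right-hand side from x_exact
    print(n, np.linalg.cond(H), np.max(np.abs(x - x_exact)))
```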
Roots and Extremal Points
Root Finding
- Bisection
- Regula Falsi (False Position) Method
- Newton–Raphson Method
- Secant Method
- Interpolation
- Inverse Interpolation
- Combined Methods
- Dekker’s Method
- Brent’s Method
- Chandrupatla’s Method
- Multidimensional Root Finding
- Quasi-Newton Methods
The root of the polynomial, p(xr+1) = 0, determines the next iterate xr+1 (6.15). Quadratic interpolation of three function values is known as Muller's method [55].
Fig. 6.7 (Validity of inverse quadratic interpolation): inverse quadratic interpolation is useful only if the interpolating polynomial p(y) is monotonic in the range of values of the interpolated function f1 … f3. Panels (a) and (b) show the limiting cases where the polynomial has a horizontal tangent at f1 or f3. The limiting condition is that the polynomial p(y) has a horizontal tangent at one of the boundaries x1,3.
In certain cases Dekker's method converges very slowly, making a long sequence of small steps.
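A sketch of the inverse quadratic interpolation step that combined methods such as Brent's use; the cubic test equation is arbitrary, and a production method would guard against coincident function values and fall back to bisection.

```python
import numpy as np

def inverse_quadratic_step(x1, x2, x3, f):
    """One step of inverse quadratic interpolation: fit x = p(y) through
    the points (f(x_i), x_i) and evaluate at y = 0 (Lagrange form)."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    return (x1 * f2 * f3 / ((f1 - f2) * (f1 - f3))
          + x2 * f1 * f3 / ((f2 - f1) * (f2 - f3))
          + x3 * f1 * f2 / ((f3 - f1) * (f3 - f2)))

f = lambda x: x**3 - 2.0          # root at 2**(1/3)
x1, x2, x3 = 1.0, 1.5, 2.0
for _ in range(6):
    x1, x2, x3 = x2, x3, inverse_quadratic_step(x1, x2, x3, f)
print(x3, 2.0 ** (1.0 / 3.0))
```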
Function Minimization
- The Ternary Search Method
- The Golden Section Search Method (Brent’s Method)
- Minimization in Multidimensions
- Steepest Descent Method
- Conjugate Gradient Method
- Newton–Raphson Method
- Quasi-Newton Methods
The location of the minimum can be determined iteratively by choosing a new value ξ inside the current bracketing interval. Both update methods can be inverted with the help of the Sherman–Morrison formula. Alternatively, the DFP method by Davidon, Fletcher and Powell directly updates the inverse Hessian matrix B = H⁻¹. The iteration stops if the gradient norm falls below 10⁻¹⁴ or if the line search does not find a lower function value.

The last equation can be interpreted as an interpolation of the function f(t) at the sampling points tn by a linear combination of trigonometric functions. Its transmission function is obtained by applying the z-transform [75] (the discrete version of the Laplace transform), which yields y(z) as the transmission function times f(z). The order of the iteration (7.44) can be reversed (7.54), which is very useful for real-time filtering applications. If the number of samples is N = 2^p, the Fourier transform can be performed very efficiently with this method, since the phase factor can take only N different values.

Since the Gaussian extends to infinity, it must be cut off for practical calculations. For a real-valued, even window function such as the Gaussian (8.6), the STFT can therefore be calculated from (8.19). Alternatively, the STFT can be formulated in terms of a wavepacket (8.21).

Fig. 8.5 (Gabor wavepacket): Left: real (solid curve) and imaginary (dashed curve) part of the wavepacket (8.21) for a Gaussian window function with ω = 5 and 2d² = 3. Right: in the frequency domain the wavepacket acts as a band pass filter at ω0 = ω.

Using a Gaussian window, the Fourier transform can be calculated explicitly (a computer algebra program is very useful here). For larger values of the time window, the frequency resolution becomes higher, but the time resolution lower. Assuming that the window function Wn(t) = 0 outside the interval [tn − d, tn + d], we apply (7.5) and expand Wn(t)f(t) inside the interval as a Fourier series (8.30). We extend this expression to all times by introducing the characteristic function of the interval.

Fig. (spectrograms): sampling frequency 44100 Hz, 512 samples per window, Hann windows with an offset of 8 samples (0.18 ms), respectively 2 samples (0.045 ms), between neighboring windows. A time resolution of 6 ms and a frequency resolution of 1.1 kHz are insufficient to resolve the 100 Hz modulation.

In general, however, the underlying signals are not rectangular, which makes the determination of the coefficients complicated. The determination of the dual window γ for a given window function can be simplified by using the Zak transform [82].

Wavelet Analysis

Another popular (continuous) wavelet is the "Mexican hat" (also known as the Ricker or Marr wavelet), which is given by the normalized negative second derivative of a Gaussian (Fig. 8.13).

Fig. 8.14 (Wavelet analysis): Top: for d = 1 ms the frequency resolution is high for stationary parts of the signal, but the time resolution is poor. Middle: for d = 0.25 ms the pulsating component at 300 Hz can be resolved, but the time resolution is still poor. Bottom: for d = 0.0625 ms the time resolution is sufficient to show all modulations, while the frequency resolution is reduced.

The normalization in (8.82) is chosen [86, 87] such that the Φ0,n form an orthonormal basis of the space of their linear combinations. Equation (8.84) shows that knowledge of M0 is sufficient to determine the scaling function (see also p. 182). The same holds for the wavelet function, which is orthogonal to the scaling function.
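Returning briefly to the discrete STFT described above, here is a sketch using Hann windows of 512 samples at 44100 Hz with an offset of 8 samples, matching the figure parameters; the amplitude-modulated test signal is an assumption.

```python
import numpy as np

def stft(signal, nwin=512, offset=8, fs=44100):
    """Discrete STFT sketch: Hann windows of nwin samples, shifted by
    `offset` samples, transformed with the FFT (nwin = 2^p assumed)."""
    window = np.hanning(nwin)
    starts = range(0, len(signal) - nwin, offset)
    spec = np.array([np.fft.rfft(window * signal[s:s+nwin]) for s in starts])
    freqs = np.fft.rfftfreq(nwin, d=1.0/fs)
    times = (np.array(starts) + nwin / 2) / fs
    return times, freqs, np.abs(spec)

# test signal: 1 kHz tone with 100 Hz amplitude modulation
t = np.arange(0, 0.5, 1.0/44100)
sig = (1 + 0.5*np.cos(2*np.pi*100*t)) * np.sin(2*np.pi*1000*t)
times, freqs, S = stft(sig)
print(S.shape)  # (number of windows, number of frequency bins)
```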
However, since the scaling function obeys the orthonormality condition, the wavelet satisfies it as well. Repeating this recursion makes it possible to calculate the wavelet coefficients even without explicit knowledge of the scaling and wavelet functions. Equations (8.168) and (8.170) take the form of discrete digital filters with subsequent downsampling by a factor of two (dropping the samples with odd n). This can be seen by defining the downsampled coefficients.
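A sketch of one such filter-and-downsample recursion for the simplest case of the Haar filters (the book's general filter coefficients are not reproduced); each level splits the data into scaling and wavelet coefficients of half the length.

```python
import numpy as np

def haar_step(s):
    """One level of the fast wavelet transform with Haar filters:
    low-pass and high-pass filtering followed by downsampling by two."""
    s = np.asarray(s, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # scaling coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # wavelet coefficients
    return approx, detail

def haar_dwt(s):
    """Repeat the recursion down to a single scaling coefficient."""
    coeffs = []
    while len(s) > 1:
        s, d = haar_step(s)
        coeffs.append(d)
    return s, coeffs

a, details = haar_dwt(np.array([4., 6., 10., 12., 8., 6., 5., 5.]))
print(a, [d.tolist() for d in details])
```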
Root Finding Methods
Fourier Transformation
Time-Frequency Analysis
Short Time Fourier Transform (STFT)
Discrete Short Time Fourier Transform
Gabor Expansion
Wavelet Synthesis
Discrete Wavelet Transform and Multiresolution Analysis
Discrete Data and Fast Wavelet Transform