Restriction 3: Fluids, kinematic condition, and massless interface
3.1 Non-normality and generalized linear stability analysis
3.1.1 Autonomous systems
Let us consider the behavior of linear systems of the form
∂tu=Lu, (3.1)
where L is an autonomous linear operator, i.e., one independent of time. For an ordinary differential equation, u in this expression would be a vector and L a matrix; for a partial differential equation, u would be a function and L a linear differential or integral operator.
Autonomous linear systems frequently arise when linearizing about the steady state of a nonlinear ordinary or partial differential equation (when linearizing about a time-dependent base state, the resulting linear operator is typically also time-dependent, and thus nonautonomous).
Equation (3.1) can be solved formally by u(t) = exp(tL)u(0) using the operator exponential.
Assuming L is diagonalizable, u(t = 0) can be decomposed into a sum of eigenfunctions vi of L: u(0) = Σi ai vi, and the time-dependent u(t) can be immediately solved as u(t) = Σi ai vi exp(λi t), where each λi is the eigenvalue of vi. In normal systems, the vi are all orthogonal, and hence ∥u(t)∥² = Σi ai² exp(2λi t)∥vi∥². Therefore, it is the eigenvalues λi that bound the relative growth rate of u(t). But in non-normal systems, the cross-terms ⟨vi, vj⟩ do not necessarily vanish for i ≠ j. In this case, the behavior can be complicated. Even if every λi is negative, u(t) can still experience growth due to the cross-terms (Farrell and Ioannou, 1996b).
The time evolution of the magnitude of u, measured with a given inner product norm ∥ · ∥,¹ can be written out explicitly without any eigenvector decomposition as

g(t) = ∥u(t)∥/∥u(0)∥ = ∥exp(tL)u(0)∥/∥u(0)∥ = [⟨exp(tL†) exp(tL)u(0), u(0)⟩/⟨u(0), u(0)⟩]^(1/2) ≤ ∥exp(tL)∥, (3.2)
where L† denotes the adjoint of L under the given inner product and ∥exp(tL)∥ uses the operator norm. Equality at time t is achieved for an initial state u(0) = arg maxu(∥exp(tL)u∥/∥u∥) (Farrell and Ioannou, 1996b).
¹More complicated inner products, in particular those containing derivatives, such as the H1 inner product ⟨u, v⟩H1 = ∫(uv + ∇u · ∇v), are beyond the scope of this work.
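For a finite-dimensional system, the bound (3.2) can be checked directly with the matrix exponential. The sketch below is a minimal illustration, using an arbitrary 2×2 non-normal matrix as a stand-in for L: no initial condition grows by more than ∥exp(tL)∥, and the leading right singular vector of exp(tL) attains the bound.

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary small non-normal matrix standing in for the operator L.
L = np.array([[-1.0, 6.0],
              [0.0, -2.0]])
t = 0.5
M = expm(t * L)                       # exp(tL)
op_norm = np.linalg.norm(M, ord=2)    # operator (spectral) norm ||exp(tL)||

# No initial condition can grow by more than the operator norm -- (3.2).
rng = np.random.default_rng(0)
for _ in range(1000):
    u0 = rng.standard_normal(2)
    g = np.linalg.norm(M @ u0) / np.linalg.norm(u0)
    assert g <= op_norm + 1e-12

# Equality is attained by the leading right singular vector of exp(tL).
u_best = np.linalg.svd(M)[2][0]       # unit vector maximizing ||exp(tL) u||
g_best = np.linalg.norm(M @ u_best)
assert np.isclose(g_best, op_norm)
```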
The spectral abscissa of L, denoted βmax, is the real part of the eigenvalue of L with maximum real part, i.e., βmax ≡ max Re eig L. The spectral abscissa provides a lower bound on the maximum growth of u:
exp(βmax t) ≤ ∥exp(tL)∥. (3.3)

Equality holds when L is normal, in which case the eigenvectors of L are orthogonal and thus the contribution of each eigenvector to the magnitude measurement is independent of every other eigenvector. In this case, exp(βmax t) = ∥exp(tL)∥, meaning that a vector cannot grow faster than the rate of the spectral abscissa: g(t) = ∥u(t)∥/∥u(0)∥ ≤ exp(βmax t) (Farrell and Ioannou, 1996b).
In general, however, exp(βmax t) is not an upper bound on the growth of vectors. When L is diagonalizable (i.e., when L has a complete set of eigenvectors), it can be written as L = SΛS⁻¹, with Λ diagonal. Then we can construct a (typically not tight) upper bound on ∥exp(tL)∥:

exp(βmax t) ≤ ∥exp(tL)∥ = ∥S exp(tΛ)S⁻¹∥ ≤ ∥S∥∥S⁻¹∥ exp(βmax t). (3.4)

Again, when L is normal, S is unitary in the given norm, and so the upper bound reduces to exp(βmax t). When S is not unitary, L is non-normal, and u(t) may achieve growth exceeding exp(βmax t) (Farrell and Ioannou, 1996b).
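The chain of inequalities in (3.4) is easy to verify numerically, since ∥S∥∥S⁻¹∥ is the condition number of the eigenvector matrix. The following sketch uses a small diagonalizable non-normal matrix chosen only for illustration:

```python
import numpy as np
from scipy.linalg import expm

# A small diagonalizable (but non-normal) matrix chosen only for illustration.
L = np.array([[-1.0, 6.0],
              [0.0, -2.0]])
lam, S = np.linalg.eig(L)            # L = S @ diag(lam) @ inv(S)
beta_max = lam.real.max()            # spectral abscissa

t = 1.0
lower = np.exp(beta_max * t)                       # exp(beta_max t)
middle = np.linalg.norm(expm(t * L), ord=2)        # ||exp(tL)||
upper = np.linalg.cond(S) * np.exp(beta_max * t)   # ||S|| ||S^-1|| exp(beta_max t)

# The chain of inequalities in (3.4):
assert lower <= middle + 1e-12
assert middle <= upper + 1e-12
```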
The instantaneous growth rate ω is given by

ω = ∂t ln g = ∂t∥u(t)∥/∥u(t)∥ = ∂t[⟨exp(tL)u(0), exp(tL)u(0)⟩]^(1/2)/∥u(t)∥
= [⟨L exp(tL)u(0), exp(tL)u(0)⟩ + ⟨exp(tL)u(0), L exp(tL)u(0)⟩]/(2∥u(t)∥²)
= (1/∥u(t)∥²)⟨[(L + L†)/2]u(t), u(t)⟩
≤ max eig[(L + L†)/2] ≡ ωmax, (3.5)
where ωmax is the maximum instantaneous growth rate, known as the numerical abscissa. When L is normal, the eigenvalues of L and L† are complex conjugates, and so ωmax = βmax (Farrell and Ioannou, 1996b). When L is non-normal, βmax ≤ ωmax. That ωmax cannot be less than βmax follows immediately from its definition. Letting v be a unit eigenvector of L corresponding to βmax, ωmax ≥ ⟨[(L + L†)/2]v, v⟩ = (⟨Lv, v⟩ + ⟨v, Lv⟩)/2 = βmax. Regardless of normality or non-normality, lim(t→∞) ω ≤ βmax. Hence, the maximum asymptotic growth rate is the spectral abscissa, while the maximum instantaneous growth rate at t = 0 is the numerical abscissa.
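Both abscissae are one-line computations for a matrix. The sketch below (using a small non-normal matrix chosen for illustration) checks βmax ≤ ωmax, and exhibits the case βmax < 0 < ωmax in which transient growth occurs despite eventual decay:

```python
import numpy as np

L = np.array([[-1.0, 6.0],
              [0.0, -2.0]])                 # non-normal: L @ L.T != L.T @ L

beta_max = np.linalg.eigvals(L).real.max()  # spectral abscissa (asymptotic rate)
H = (L + L.T) / 2                           # Hermitian part (L is real here)
omega_max = np.linalg.eigvalsh(H).max()     # numerical abscissa (rate at t = 0)

assert beta_max <= omega_max                # always holds
assert beta_max < 0 < omega_max             # eventual decay, but transient growth
```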
Indeed, it may be the case that βmax < 0 while ωmax > 0, indicating transient growth even when all eigenvalues are negative.
It turns out that the initial condition u(0) which induces the maximum growth rate at t = 0 (i.e., the mode corresponding to the numerical abscissa) need not be the same as the initial condition which grows the most between t = 0 and a later time t. For any given t, the maximum relative growth achievable at that time is simply ∥exp(tL)∥. The initial condition, or input mode, inducing that maximum is umax,t(0) = arg maxu ∥exp(tL)u∥/∥u∥. After evolving from t = 0 to a later time t, the state of the system may look very different from the initial condition; this output mode is given by vmax,t(t) = exp(tL)umax,t(0)/∥exp(tL)umax,t(0)∥.
Numerically, the maximum-growth input and output modes can be computed from the singular value decomposition (SVD) of exp(tL). In particular, if the SVD is given by exp(tL) = UΛV†, then the columns of V give the input modes, the columns of U the output modes, and Λ the total relative growth of each non-normal mode at time t (Farrell and Ioannou, 1996b).
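A minimal sketch of this computation (the matrix and time are illustrative choices; note that NumPy's `svd` returns V†, so the input modes are the rows of `Vh`):

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[-1.0, 6.0],
              [0.0, -2.0]])   # illustrative non-normal matrix
t = 0.656

M = expm(t * L)
U, s, Vh = np.linalg.svd(M)   # exp(tL) = U @ diag(s) @ Vh

u_in = Vh[0]                  # optimal input mode (leading right singular vector)
u_out = U[:, 0]               # optimal output mode (leading left singular vector)
growth = s[0]                 # maximum relative growth ||exp(tL)|| at this time

# Evolving the optimal input mode yields the output mode, scaled by the growth
# (singular vectors carry an arbitrary sign, hence the abs comparison).
evolved = M @ u_in
assert np.isclose(np.linalg.norm(evolved), growth)
assert np.allclose(np.abs(evolved / np.linalg.norm(evolved)), np.abs(u_out))
```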
Any matrix A which is non-normal under a given inner product and has no defective eigenvalues may be made normal by changing to a particular different inner product (the converse is not true; some matrices, such as the identity and zero matrices, are normal under any inner product).
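One explicit construction (a sketch, not the only choice): for diagonalizable A = SΛS⁻¹, the weighted inner product ⟨u, v⟩ = u†Wv with W = (SS†)⁻¹ renders A normal, since the adjoint in that inner product is A* = W⁻¹A†W = SΛ̄S⁻¹, which commutes with A.

```python
import numpy as np

A = np.array([[-1.0, 6.0],
              [0.0, -2.0]])
# Non-normal in the standard inner product:
assert not np.allclose(A @ A.T, A.T @ A)

lam, S = np.linalg.eig(A)          # A = S diag(lam) inv(S); no defective eigenvalues
W = np.linalg.inv(S @ S.conj().T)  # weight of the new inner product <u, v> = u^H W v

# Adjoint of A with respect to the weighted inner product: A* = W^{-1} A^H W.
A_star = np.linalg.solve(W, A.conj().T @ W)

# In the new inner product, A commutes with its adjoint, i.e., A is normal there.
assert np.allclose(A @ A_star, A_star @ A)
```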
Indeed, when a classical linear stability analysis is carried out, it is implicitly assumed that ωmax = βmax, and thus that such a normalizing inner product is being used. But the choice of inner product should not, in general, be arbitrary. For physical systems, there often exists a natural choice of inner product norm which measures energy, volume distribution, or physical height variation of an object. The normalizing inner product is frequently unbalanced, giving certain parts of a physical system much more measurement weight than other parts (as will be seen in the stability analysis of flow in V-grooves, in Chapter 5). In the case of linear stability analysis of nonlinear systems, there may be a natural choice of measurement indicating when a perturbation becomes large enough for nonlinear effects to become important. This occurs, for example, in the destabilization of Couette flow in the laminar-turbulent transition (Butler and Farrell, 1992). In this case, the normalizing inner product may not effectively measure the nonlinear importance of small perturbations, whereas a different choice of inner product allows better prediction of the nonlinear destabilization which may lead to turbulence.
Non-normal operators with significant differences between transient and asymptotic stability are by no means rare occurrences. Chalker and Mehlig (1998) considered random matrices drawn from either a Gaussian ensemble of complex matrices or from a comparable ensemble of Hermitian (self-adjoint) matrices. For the equation ∂t u = (A − 1)u, the normalized expectation value √(4π)⟨u(t), u(t)⟩ was equal to t^(−3/2) for the normal matrices and the much more slowly decaying t^(−1/2) for the general matrices.
Fun Fact: Normal second-order ordinary differential operators
It is well-known (see, e.g., Birkhoff and Rota, 1989) that the Sturm-Liouville operator LSL ≡ w⁻¹(x)∂x[p(x)∂x] + w⁻¹(x)q(x) is the unique second-order ordinary differential linear operator which is self-adjoint with weight w(x) (given the correct boundary conditions).
However, it is not the unique normal second-order operator; normality is a more general condition than self-adjointness (for example, skew-adjoint operators are also normal). Letting L ≡ a(x)∂xx + b(x)∂x + c(x), the adjoint with weight w(x) is given by

L† ≡ a(x)∂xx + [−b(x) + 2a′(x) + 2a(x)w′(x)/w(x)]∂x + [c(x) − b′(x) − b(x)w′(x)/w(x) + a(x)w′′(x)/w(x) + 2a′(x)w′(x)/w(x) + a′′(x)].
Directly computing the commutator [L, L†] yields a result of the form (f1)∂xx + (f2)∂x + (f3), where each f is a somewhat complicated expression. Solving for f1 = f2 = f3 = 0 yields three distinct cases:
Case 1: L is Sturm-Liouville, i.e., a(x) = p(x)/w(x), b(x) = p′(x)/w(x). This is the only case in which c(x) is a free function.
Case 2: L is skew-adjoint plus a constant (denoted c1 here). In this case,

L = b(x)∂x + (1/2)[b′(x) + b(x)w′(x)/w(x)] + c1,
L† = −b(x)∂x − (1/2)[b′(x) + b(x)w′(x)/w(x)] + c1.
Case 3: L has the following specific form:

L = (1/w(x))∂x[p(x)∂x] + c1 √(p(x)/w(x)) [∂x + p′(x)/(4p(x)) + w′(x)/(4w(x))] + c2 + p′′(x)/(4w(x)) − [p′(x)]²/(16p(x)w(x)) + p′(x)w′(x)/(8w²(x)) − 5p(x)[w′(x)]²/(16w³(x)) + p(x)w′′(x)/(4w²(x)),

L† = (1/w(x))∂x[p(x)∂x] − c1 √(p(x)/w(x)) [∂x + p′(x)/(4p(x)) + w′(x)/(4w(x))] + c2 + p′′(x)/(4w(x)) − [p′(x)]²/(16p(x)w(x)) + p′(x)w′(x)/(8w²(x)) − 5p(x)[w′(x)]²/(16w³(x)) + p(x)w′′(x)/(4w²(x)).
In this final case, L is a sum of a Sturm-Liouville operator with a specific value of q and a skew-adjoint operator (the terms with coefficient c1). Incidentally, balancing the operator leaves only a single functional degree of freedom, p, along with two constants. Clearly this form is quite tuned and should not be expected to arise as often as Sturm-Liouville operators. In general, a second-order operator which is not Sturm-Liouville is very likely to be non-normal.
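The normality claim for Case 2 can be spot-checked symbolically. The sketch below (assuming SymPy is available, and taking unit weight w(x) = 1 for brevity) applies L and L† to a test function and confirms that the commutator vanishes:

```python
import sympy as sp

x, c1 = sp.symbols('x c1')
b = sp.Function('b')(x)        # arbitrary smooth coefficient
f = sp.Function('f')(x)        # test function

# Case 2 with unit weight w(x) = 1:
#   L  =  b d/dx + b'/2 + c1,     L(adj) = -b d/dx - b'/2 + c1
L_  = lambda g:  b * sp.diff(g, x) + (sp.diff(b, x) / 2 + c1) * g
Ld_ = lambda g: -b * sp.diff(g, x) - (sp.diff(b, x) / 2 - c1) * g

commutator = sp.expand(L_(Ld_(f)) - Ld_(L_(f)))
assert commutator == 0         # [L, L(adj)] = 0: the operator is normal
```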
Example: An autonomous ODE
To get a better intuitive picture of normality and non-normality, let us consider a 2×2 matrix, based on an example from Farrell and Ioannou (1996b):
A = [−1, 6; 0, −2]. (3.6)
Being upper triangular, A clearly has eigenvalues −1 and −2, and it has corresponding eigenvectors {1, 0} and {−6, 1}. The spectral abscissa is hence βmax = −1. The transient operator is given by
(A + A†)/2 = [−1, 3; 3, −2], (3.7)
which has eigenvalues (−3 ± √37)/2. Thus the numerical abscissa is ωmax = (−3 + √37)/2 ≈ 1.54. Because ωmax > 0, the system ∂t u = Au will undergo transient growth for certain initial u(0). The initial condition which grows the most at any time may be computed numerically and turns out to be umax ≈ {0.334, 0.943}.
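These numbers are straightforward to reproduce. The sketch below uses SciPy's matrix exponential; the time grid for locating the peak growth is an illustrative choice:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 6.0],
              [0.0, -2.0]])

beta_max = np.linalg.eigvals(A).real.max()
omega_max = np.linalg.eigvalsh((A + A.T) / 2).max()
assert np.isclose(beta_max, -1.0)
assert np.isclose(omega_max, (-3 + np.sqrt(37)) / 2)   # approx. 1.54

# G(t) = ||exp(tA)||: transient growth above 1 despite both eigenvalues < 0.
ts = np.linspace(0.0, 6.0, 601)
G = np.array([np.linalg.norm(expm(t * A), ord=2) for t in ts])
assert G.max() > 1.0 and G[-1] < 1.0

# Input mode achieving the largest overall growth: the leading right singular
# vector of exp(tA) at the peak time, matching u_max of roughly {0.334, 0.943}
# (up to sign, which the SVD leaves arbitrary).
t_star = ts[np.argmax(G)]
u_max = np.linalg.svd(expm(t_star * A))[2][0]
assert np.allclose(np.abs(u_max), [0.334, 0.943], atol=0.01)
```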
The maximum growth G(t) = maxu(0) ∥u(t)∥/∥u(0)∥ is shown in Figure 3.1 (orange line), with the evolution of the magnitudes of the two eigenvectors, exp(−t) and exp(−2t), shown in green and blue, respectively. Keep in mind that G(t) does not plot a single trajectory; instead, at each time it shows the maximum growth by any initial condition.
The left and center columns of Figure 3.2 show the evolution of umax (red) and the two eigenvectors (green and blue) over time, while the right column shows the optimal input and output modes at each time (orange). The unit circle is denoted by the dotted black line. It is clear from the first column that while the two eigenvectors quickly decay to the origin over time, umax follows a curved trajectory, moving far to the right before turning around and falling back into the origin. The middle column aims to elucidate this behavior by displaying the decomposition of umax (red arrow) into its eigenvector components (green and blue arrows). The key fact is that the eigenvectors are nonorthogonal and, counterintuitively, the sum of two shrinking nonorthogonal vectors may actually grow. The right column displays the optimal input and output modes at each time, labeled V and U, respectively. At t = 0, the two points coincide and represent the point which experiences the fastest growth rate (the numerical abscissa). But the initial condition that induces the fastest growth at t = 0 is not the same as the one which grows the most overall; it is evident from the plot that the initial condition which grows the most by t = 0.656 is much higher in y than the point which induces the fastest t = 0 growth. Finally, note from the last plot of the right column that the optimal output mode at late times coincides with the least-stable eigenvector. Thus, at late times, the modal stability result is restored.
Figure 3.1: Maximum growth achievable at any time (orange line), as compared to the growth of the two eigenvectors (blue and green lines), for the autonomous system ∂t u = Au, with A = [−1, 6; 0, −2]. Note that the orange line does not represent the magnitude of a single initial condition evolved over time. Rather, at each time t, the orange line gives the greatest magnitude achievable by any initial condition at that time.
Figure 3.2: Evolution of the autonomous system ∂t u = Au, with A = [−1, 6; 0, −2]. The top row displays time t = 0, the center row t = 0.656, and the bottom row t = 6.56.
Left column: Background gray arrows display a stream plot of A. The eigenvector {1, 0} is shown as a green dot on the unit circle (dotted line), and the evolution of the corresponding initial condition u(0) = {1, 0} over time is the green line emerging from that dot. The second eigenvector {0.986, −0.164} is shown as a blue dot on the unit circle (dotted line), and its evolution over time is the blue line emerging from that dot. The red dot indicates an initial condition of u(0) = {0.334, 0.943}, and the curved red line is its corresponding evolution.
Middle column: The red arrow now represents the evolution of the initial condition u(0) = {0.334, 0.943} (as did the red line in the left column). The horizontal green and tilted blue arrows show the eigenvector decomposition of the red arrow. These plots aim to emphasize that the sum of two shrinking vectors may be a transiently growing vector.
Right column: Green and blue lines represent the eigenvectors, as before. The orange dots now represent the optimal input mode V and output mode U at each time. The orange line displays the maximum growth achieved at any time.