6.2.3 Jittering Error Analysis
The model of Eq. (6.16) is suitable for analyzing the effect of the jittering variables on the periodic mean extracted by the SA estimator. Making use of Euler's identity, from Eq. (6.16) the absolute value of the synchronously averaged unit-amplitude component $f^*$ reads:
$$\left|X_{SA}(f^*)\right| = \sqrt{\left[\frac{1}{N}\sum_i \cos(k_i)\,\mathrm{sinc}(\delta_i)\,T\right]^2 + \left[\frac{1}{N}\sum_i \sin(k_i)\,\mathrm{sinc}(\delta_i)\,T\right]^2} \quad (6.17)$$

With:

$$k_i = 2\pi f\left(iT + \frac{\varepsilon_i + \varepsilon_{i+1}}{2}\right) \quad (6.18)$$

$$\delta_i = \pi\left[(f - f^*)\,T + f\left(\varepsilon_{i+1} - \varepsilon_i\right)\right] \quad (6.19)$$

²We make use of the sifting property of the delta function: $\int f(x)\,\delta(x-a)\,dx = f(a)$.

$$\mathrm{sinc}(x) = \frac{\sin(x)}{x} \quad (6.20)$$
Similarly, the phase reads:
$$\varphi_{SA}(f^*) = \varphi^* + r \quad (6.21)$$

Where:

$$r = \tan^{-1}\left(\frac{\frac{1}{N}\sum_i \sin(k_i)\,\mathrm{sinc}(\delta_i)\,T}{\frac{1}{N}\sum_i \cos(k_i)\,\mathrm{sinc}(\delta_i)\,T}\right) \quad (6.22)$$
To derive statistics for the amplitude and phase errors, depending on the frequency of the considered component and on the jittering variables, it is necessary to assign a probability density function to the random variables. In this work, the following assumptions are made:
1. The jitter variables $\varepsilon_i$ are independent, identically distributed (i.i.d.);
2. The jitter variables are uniformly distributed as $U[-\Delta/2, \Delta/2]$.
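Under these assumptions, the estimator of Eq. (6.17) can be simulated directly. The sketch below (an illustration added here, not part of the original derivation) evaluates the synchronous case $f = f^* = 1$, $T = 1$, where the $i$-th averaged term reduces to an in-phase part $\cos(\pi(\varepsilon_i+\varepsilon_{i+1}))\,\mathrm{sinc}(\pi(\varepsilon_i-\varepsilon_{i+1}))$ and the corresponding quadrature part with the sine; the jitter width $\Delta = 0.1$ is an assumed value:

```python
import numpy as np

# Monte Carlo sketch of the jittered SA amplitude, Eq. (6.17), in the
# simplified synchronous case f = f* = 1, T = 1. The i-th term is modeled
# (assumption) as cos(pi*(e_i + e_{i+1})) * sinc(pi*(e_i - e_{i+1})) for
# the in-phase part and sin(...) * sinc(...) for the quadrature part.
rng = np.random.default_rng(0)

def sa_amplitude(N, delta):
    """Amplitude |X_SA(f*)| for N averages, jitter ~ U[-delta/2, delta/2]."""
    eps = rng.uniform(-delta / 2, delta / 2, size=N + 1)
    s = eps[:-1] + eps[1:]   # e_i + e_{i+1}
    d = eps[:-1] - eps[1:]   # e_i - e_{i+1}
    # np.sinc is the normalized sinc, np.sinc(x) = sin(pi x)/(pi x), so
    # np.sinc(d) equals sinc(pi*d) in the chapter's convention (6.20).
    g1 = np.cos(np.pi * s) * np.sinc(d)
    g2 = np.sin(np.pi * s) * np.sinc(d)
    return np.hypot(g1.mean(), g2.mean())

amp = sa_amplitude(N=200_000, delta=0.1)
print(amp)  # close to 1: the jitter only slightly attenuates the component
```

For small $\Delta$ the simulated amplitude shows a mild attenuation of the synchronous component rather than a cancellation, consistent with the discussion below.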
While the exact distribution of the jitter error is unknown, it will be shown in Sect. 6.3 that assuming a uniform distribution has practical relevance for some configurations of the acquisition system. We observe that both statistics are made up of the sum of $N$ functions of random variables, each one depending on the two i.i.d. random variables $\varepsilon_i$, $\varepsilon_{i+1}$. We denote by $g_{1i}$ and $g_{2i}$ the $i$-th functions of the random variables $\varepsilon_i$, $\varepsilon_{i+1}$ which appear inside the summations in the denominator and the numerator of Eq. (6.22), respectively, and introduce the index $k = \{1, 2\}$. The dependency of $g_{ki}$ on the random variables and the frequency is omitted for ease of notation. Given the assumptions, we now compute the distribution for the absolute value. The two squared terms under the root in Eq. (6.17) are both in the form of the average of a series of random variables. Two distinct cases arise: when analyzing a synchronous component, the product $Tf^*$ is an integer and, owing to the periodicity of the trigonometric functions, it can be dropped from the equation. For asynchronous components, the product is not an integer and therefore cannot be neglected. In such a case, it is difficult to derive appropriate statistics for the solution. However, we can still expect the asynchronous terms to be attenuated in a similar way as in the absence of jittering, though the case of the phases summing up to exactly zero becomes extremely unlikely, since the frequency of each component undergoes slight fluctuations at each revolution. Consequently, we cannot expect any equivalent of the zeros of the transfer function of Eq. (6.5): to zero out a component $f^*$ with certainty, the SA kernel would have to be exactly zero for some value of $f^*$, which is impossible due to the presence of the random variables. Conversely, if we focus on the synchronous components, that is, the part of the signal that we are interested in extracting, then the processes $g_{ki}$ fulfil mixing conditions³ such that the central limit theorem holds for $N$ large enough:
$$\sqrt{N}\left(\mathbf{g} - E[\mathbf{g}_i]\right) \rightarrow \mathcal{N}_2\left(0, \Sigma_i\right) \quad (6.23)$$
Where:

$$\mathbf{g} = \left[\frac{1}{N}\sum_i g_{1i},\ \frac{1}{N}\sum_i g_{2i}\right]^T = \left[g_1, g_2\right]^T \quad (6.24)$$
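The $\sqrt{N}$ scaling in Eq. (6.23) can be checked empirically. The sketch below (an added illustration; $\Delta = 0.1$ is an assumed value) estimates $N\,\mathrm{Var}(g_2)$ for two different numbers of averages; although the terms $g_{2i}$ are 1-dependent rather than independent, the mixing argument of footnote 3 still yields a CLT, so the scaled variance should stabilize:

```python
import numpy as np

# Sketch: empirical check of the sqrt(N) scaling of Eq. (6.23) for the
# quadrature component g2, synchronous case f = f* = T = 1 (assumed model).
rng = np.random.default_rng(1)
delta = 0.1  # assumed jitter width

def g2_mean(N):
    eps = rng.uniform(-delta / 2, delta / 2, size=N + 1)
    # consecutive terms share one jitter sample -> 1-dependent sequence
    return np.mean(np.sin(np.pi * (eps[:-1] + eps[1:]))
                   * np.sinc(eps[:-1] - eps[1:]))

def scaled_var(N, reps=4000):
    means = np.array([g2_mean(N) for _ in range(reps)])
    return N * means.var()

v_small, v_large = scaled_var(50), scaled_var(500)
print(v_small, v_large)  # both estimate the same asymptotic CLT variance
```

The two estimates agree within Monte Carlo error, supporting the normal limit used in the sequel.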
$\mathcal{N}_2$ denotes the bivariate normal distribution and $\Sigma$ the associated covariance matrix (not to be confused with the number of averages $N$). Given the independence of the random variables (i.e. $p_{X,Y}(x,y) = p_X(x)\,p_Y(y)$, where $p(\cdot)$ denotes a probability density function), and omitting from the notation the dependency of $\mathbf{g}$ on all the deterministic variables, the expected value of $g_{ki}$ is given as:
$$E[g_{ki}] = \frac{1}{\Delta^2}\int_{-\Delta/2}^{\Delta/2}\int_{-\Delta/2}^{\Delta/2} g_{ki}(x,y)\,dx\,dy;\quad k = \{1, 2\} \quad (6.25)$$
³Here it is sufficient to observe that $g_{ki}$, $g_{kj}$ are mutually independent for every $|i - j| > 1$.
In the following, we consider the solution for the frequency line $f = f^*$ so as to simplify the computations. Moreover, we set $f^* = 1$ and recognize that the dependency of the attenuation and phase shift on the frequency can be incorporated without loss of generality in the jitter statistics, since linear relationships hold for the transformation of uniform random variables. Similarly, we consider the unitary period $T = 1$. Under the mentioned conditions, the integrals of Eq. (6.25) read:
$$E[g_{1i}] = \frac{1}{\Delta^2}\int_{-\Delta/2}^{\Delta/2}\int_{-\Delta/2}^{\Delta/2} \cos\left(\pi(x+y)\right)\,\mathrm{sinc}\left(\pi(x-y)\right)dx\,dy \quad (6.26)$$

$$E[g_{2i}] = \frac{1}{\Delta^2}\int_{-\Delta/2}^{\Delta/2}\int_{-\Delta/2}^{\Delta/2} \sin\left(\pi(x+y)\right)\,\mathrm{sinc}\left(\pi(x-y)\right)dx\,dy \quad (6.27)$$

The closed-form solution of the second integral (Eq. 6.27) is identically zero, whereas the first integral (Eq. 6.26) takes the following expression:
$$E[g_{1i}] = \frac{1}{4\lambda^2}\left(-4\gamma\cos(\lambda) + 4\cos(\lambda)\,\mathrm{Ci}(2\lambda) - 4\cos(\lambda)\ln(2\lambda) + 4\sin(\lambda)\,\mathrm{Si}(2\lambda)\right) \quad (6.28)$$

Where $\mathrm{Ci}(x)$ and $\mathrm{Si}(x)$ are the cosine and sine integral functions, $\gamma$ is the Euler–Mascheroni constant, and $\lambda = \pi\Delta$. Considering that the random variables take small values bounded in the region $[-\Delta/2, \Delta/2] \times [-\Delta/2, \Delta/2]$, we propose to approximate the integrals by expanding the sine cardinal function in a power series and truncating the expansion at the second order. Hence, Eq. (6.26) becomes:
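The closed form can be cross-checked numerically. The sketch below (an added check, with $\Delta = 0.2$ an assumed value) compares a direct quadrature of Eq. (6.26) against the $\mathrm{Si}$/$\mathrm{Ci}$ expression as reconstructed above, using `scipy.special.sici`, which returns both integral functions:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import sici

delta = 0.2            # assumed jitter width
lam = np.pi * delta    # lambda = pi * Delta

# E[g1i] by direct double quadrature of cos(pi(x+y)) * sinc(pi(x-y)).
# np.sinc(x - y) = sin(pi(x-y)) / (pi(x-y)), i.e. sinc(pi(x-y)) in (6.20).
integrand = lambda y, x: np.cos(np.pi * (x + y)) * np.sinc(x - y)
val, _ = dblquad(integrand, -delta / 2, delta / 2, -delta / 2, delta / 2)
e_quad = val / delta ** 2

# Closed form: (sin(lam) Si(2 lam) + cos(lam)(Ci(2 lam) - ln(2 lam) - gamma)) / lam^2
si, ci = sici(2 * lam)
e_closed = (np.sin(lam) * si
            + np.cos(lam) * (ci - np.log(2 * lam) - np.euler_gamma)) / lam ** 2

print(e_quad, e_closed)
```

The two values agree to quadrature precision, and both approach 1 as $\Delta \to 0$, i.e. the synchronous component is recovered without attenuation in the jitter-free limit.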
$$E[g_{1i}] = \frac{1}{\Delta^2}\int_{-\Delta/2}^{\Delta/2}\int_{-\Delta/2}^{\Delta/2} \cos\left(\pi(x+y)\right)\left[1 - \frac{\pi^2 x^2}{6} - \frac{\pi^2 y^2}{6} + \frac{\pi^2}{3}xy + q(x,y)\right]dx\,dy \quad (6.29)$$
Using trigonometric identities for $\cos(x+y)$, considering the integration limits and observing that $\cos(x)$ is a bounded function, it is possible to show that the residual $q(x,y)$ integrates to a small error in the considered region. Therefore, we give the following approximation for the expected value:
$$E[g_{1i}] \approx \frac{8 - 8\cos(\lambda) - \lambda^2}{3\lambda^2} \quad (6.30)$$
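The quality of the second-order truncation can be inspected directly. The sketch below (an added check; the jitter widths are assumed values) compares the approximation of Eq. (6.30) with a quadrature of the full integrand of Eq. (6.26):

```python
import numpy as np
from scipy.integrate import dblquad

# Sketch: accuracy of the second-order approximation (6.30) versus a
# direct quadrature of the exact integral (6.26), for assumed widths.
for delta in (0.05, 0.1, 0.2):
    lam = np.pi * delta
    val, _ = dblquad(lambda y, x: np.cos(np.pi * (x + y)) * np.sinc(x - y),
                     -delta / 2, delta / 2, -delta / 2, delta / 2)
    e_quad = val / delta ** 2
    e_approx = (8 - 8 * np.cos(lam) - lam ** 2) / (3 * lam ** 2)
    print(delta, e_quad, e_approx)
```

Since the discarded residual $q(x,y)$ is of fourth order in $\pi(x-y)$, the error stays small over the considered range of jitter widths.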
With similar reasoning, we derive approximate expressions for the terms of the covariance matrix as:
$$\sigma_{1i}^2 \approx \frac{5}{12} - \frac{\lambda^2}{36} + \frac{7\left(1 - \cos(2\lambda)\right)}{24\lambda^2} - \left(\frac{8 - 8\cos(\lambda) - \lambda^2}{3\lambda^2}\right)^2 \quad (6.31)$$

$$\sigma_{2i}^2 \approx \frac{7}{12} - \frac{\lambda^2}{36} - \frac{7\left(1 - \cos(2\lambda)\right)}{24\lambda^2} \quad (6.32)$$

$$\sigma_{12i} \approx 0 \quad (6.33)$$
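As a sanity check on this covariance structure (an added sketch; $\Delta = 0.05$ is an assumed value), one can estimate the moments of $g_{1i}$ and $g_{2i}$ by Monte Carlo; the cross-covariance vanishes by symmetry, and for small $\Delta$ the variance of $g_{2i}$ approaches the leading-order value $(\pi\Delta)^2/6$:

```python
import numpy as np

# Monte Carlo check of the covariance structure of (g1i, g2i):
# cross-covariance ~ 0, Var(g2i) ~ (pi*delta)^2 / 6 for small delta.
rng = np.random.default_rng(2)
delta = 0.05  # assumed jitter width
x, y = rng.uniform(-delta / 2, delta / 2, size=(2, 500_000))
g1 = np.cos(np.pi * (x + y)) * np.sinc(x - y)
g2 = np.sin(np.pi * (x + y)) * np.sinc(x - y)
cov = np.cov(g1, g2)
print(cov[0, 1], cov[1, 1])
```

The estimated cross-covariance is numerically negligible, in agreement with Eq. (6.33), which is what later allows the quadratic form to be handled with a diagonal-dominant covariance.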
Where Eq. (6.31) is the variance of $g_{1i}$, Eq. (6.32) the variance of $g_{2i}$, and Eq. (6.33) the covariance. Next, considering the distribution of the absolute value, we have:
$$\left|X_{SA}\left(f^*\right)\right|^2 = \mathbf{g}^T I\,\mathbf{g} \quad (6.34)$$
The random vector $\mathbf{g}$ is distributed as $\mathcal{N}_2(E[\mathbf{g}_i], \Sigma)$ according to Eq. (6.23), where the terms in $\Sigma$ are given by Eqs. (6.31)–(6.33), and the quantity in Eq. (6.34) is a quadratic form of normal variables. Such distributions are widely studied in the literature [9]. As for this work, the probability density function was approximated using the moment-based method proposed in [9]. The procedure can be summarized as follows⁴:
⁴The general expression for a quadratic form in the multivariate normal variable $X$ is $Q(X) = X^T A X$. In the considered case $A \equiv I$ and, since $\Sigma > 0$, its eigenvalues are always positive. For the more general case, refer to [9].
1. Determine the eigenvalues $\kappa_k$ and the matrix of normalized eigenvectors $\Lambda$ of $\Sigma$;
2. Compute $\Sigma^{-1/2} = \Lambda\,\mathrm{diag}\left(\kappa_k^{-1/2}\right)\Lambda^T$;
3. Compute $\mathbf{b} = \Sigma^{-1/2} E[\mathbf{g}]$;
4. Consider the transformed quadratic form $Q_1 = W_1^T A_1 W_1$, where the random normal variables have now identity