
1.6 Adaptive interference suppression

directions. The form of these weightings shows the close relationship to ABF in (1.49) and (1.55).

The form of the projection also shows the relation between the number of elements and the number of pattern nulls. If the directions $u_1, \ldots, u_M$ are all different and if the array has no ambiguities, then $\mathrm{rank}\{A\} = M$. Therefore, if $M$ is equal to the number of elements, then $A$ is square and regular, $P$ is the null matrix and no beamforming will be produced. With $N$ elements one can maximally produce $N-1$ nulls. The rank of $A$ is also called the required number of degrees of freedom (dof) of the weighting $w$.

match operation: If $z$ contains only interference, i.e. if $E\{zz^H\} = Q$, then $E\{(Lz)(Lz)^H\} = I$, and the multiplication by $L$ is a pre-whitening operation. The operation of $L$ on the (matched) signal vector $a_0$ accounts for the distortion from the pre-whitening operation and restores just the matching.
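As a minimal numerical illustration of this pre-whiten-and-match interpretation (a sketch with an assumed one-jammer covariance, not taken from the text), one can take $L$ as the inverse Cholesky factor of $Q$, so that $L^H L = Q^{-1}$, verify that $Lz$ has identity covariance, and check that $w = Q^{-1} a_0 = L^H(L a_0)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                        # number of array elements (assumed)

# Assumed interference-plus-noise covariance: one strong jammer plus white noise
a_jam = np.exp(1j * np.pi * np.arange(N) * 0.3)            # jammer steering vector (hypothetical)
Q = 100.0 * np.outer(a_jam, a_jam.conj()) + np.eye(N)

# Pre-whitening filter: Q = C C^H (Cholesky), L = C^{-1}, hence L^H L = Q^{-1}
C = np.linalg.cholesky(Q)
L = np.linalg.inv(C)

# Interference snapshots with covariance Q; after multiplication by L they are white
Z = C @ (rng.standard_normal((N, 10000)) + 1j * rng.standard_normal((N, 10000))) / np.sqrt(2)
cov_white = (L @ Z) @ (L @ Z).conj().T / Z.shape[1]
print(np.allclose(cov_white, np.eye(N), atol=0.1))         # approximately the identity

# Pre-whiten and match: w = Q^{-1} a0 equals L^H (L a0)
a0 = np.exp(1j * np.pi * np.arange(N) * (-0.2))            # look direction (hypothetical)
print(np.allclose(np.linalg.solve(Q, a0), L.conj().T @ (L @ a0)))
```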

Sub-arrays. The formulation of weight vectors at the array elements can easily be extended to sub-arrays with digital outputs. As mentioned in Section 1.3.3, a sub-arrayed array can be viewed as a super-array with directive elements positioned at the centres of the sub-arrays. This means that we only have to replace the quantities $a$, $n$ by $\tilde{a} = T^H a$ and $\tilde{n} = T^H n$. However, there is a difference with respect to receiver noise. If the noise at the elements is white with covariance matrix $\sigma^2 I$, then at the sub-array outputs it will be $\tilde{Q} = \sigma^2 T^H T$, which is not diagonal for overlapping sub-arrays and which has unequal diagonal elements for non-overlapping unequal sub-arrays. Adaptive processing will turn this into white noise. In particular, if we apply at the elements some weighting for low sidelobes, which is contained in the matrix $T$, then ABF will reverse this operation by the pre-whiten and match principle. After ABF at sub-array level the residual noise will then be white and the desired low sidelobe pattern is reversed. This effect can be avoided by simply normalizing the matrix $T$ such that $T^H T = I$. For non-overlapping sub-arrays this can be achieved by normalizing the element weights as mentioned in Section 1.3.3. We call a configuration with interference suppression by adaptively weighting the sub-array outputs a direct sub-array weighting (DSW) configuration.
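The normalization $T^H T = I$ for non-overlapping sub-arrays can be sketched as follows; the element count, sub-array partition and taper are assumptions for illustration, not values from the text:

```python
import numpy as np

N = 16                                        # elements (assumed)
subarrays = [range(0, 4), range(4, 8), range(8, 12), range(12, 16)]   # non-overlapping (assumed)
g = np.hamming(N)                             # element taper for low sidelobes (hypothetical choice)

# Sub-array forming matrix T: column l applies the taper within sub-array l and sums
T = np.zeros((N, len(subarrays)))
for l, idx in enumerate(subarrays):
    T[list(idx), l] = g[list(idx)]

# Normalize each column so that T^H T = I (possible because the sub-arrays do not overlap)
T /= np.linalg.norm(T, axis=0, keepdims=True)
print(np.allclose(T.T @ T, np.eye(len(subarrays))))        # True

# White element noise sigma^2 I then stays white at the sub-array outputs
sigma2 = 1.0
print(np.allclose(sigma2 * T.T @ T, sigma2 * np.eye(len(subarrays))))   # True
```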

Sidelobe canceller configurations. A very particular sub-array configuration for ABF is the sidelobe canceller (SLC). The basic idea is that any antenna (even a reflector) can be turned into an adaptive antenna by adding some auxiliary antennas. This is a cheap and simple method to provide any antenna with adaptivity. The additional antennas (called auxiliary antennas) are used to estimate the interference, which is then subtracted from the main antenna output. Figure 1.14 shows the principle. The SLC has been described in [7, p. 24.11] and has been studied in detail in [25, Chapter 4].

Actually, this is a way of post-beamforming adaptation. The adaptive weights are estimated by minimizing $E\{|S_{sum} - \hat{w}_{aux}^H z_{aux}|^2\}$. The solution of this problem is:

$$\hat{w}_{aux} = Q_{aux}^{-1} r, \quad \text{where} \quad Q_{aux} = E\{z_{aux} z_{aux}^H\} \quad \text{and} \quad r = E\{z_{aux} S_{sum}^{*}\}$$

One can show that this is the same as the SNIR-optimum solution of (1.46) if the desired signal has the form $a_0 = (1, 0, \ldots, 0)^T$, i.e. if we assume that the signal to be detected is only present in the main channel and not in the auxiliary channels:

$$Q^{-1} = E\left\{ \begin{pmatrix} S_{sum} \\ z_{aux} \end{pmatrix} \begin{pmatrix} S_{sum} \\ z_{aux} \end{pmatrix}^H \right\}^{-1} = \begin{pmatrix} E\{|S_{sum}|^2\} & r^H \\ r & Q_{aux} \end{pmatrix}^{-1} = \begin{pmatrix} (Q^{-1})_{11} & -\left(Q_{aux}^{-1} r\right)^H (Q^{-1})_{11} \\ -Q_{aux}^{-1} r\,(Q^{-1})_{11} & (Q^{-1})_{22} \end{pmatrix} \qquad (1.47)$$

Therefore $\hat{w} = Q^{-1} a_0 = (Q^{-1})_{11} \begin{pmatrix} 1 \\ -Q_{aux}^{-1} r \end{pmatrix} = (Q^{-1})_{11} \begin{pmatrix} 1 \\ -w_{aux} \end{pmatrix}$ is the solution of (1.46) for a special choice of $\mu$.
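A sample-based version of this SLC weight, $\hat{w}_{aux} = \hat{Q}_{aux}^{-1}\hat{r}$, might look as follows. The main/auxiliary channel simulation (gains, powers, snapshot count) is an assumption for illustration, not the book's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
K, P = 2000, 3                               # training snapshots and auxiliary antennas (assumed)

# One common jammer waveform leaks into the main channel (via a sidelobe) and the auxiliaries
jam = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) * np.sqrt(50)
g_main = 0.5                                 # jammer gain through a main-antenna sidelobe (hypothetical)
g_aux = rng.standard_normal(P) + 1j * rng.standard_normal(P)      # jammer gains in the aux channels
noise = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

S_sum = g_main * jam + noise(K)              # main channel output
z_aux = np.outer(g_aux, jam) + noise(P, K)   # auxiliary channel outputs

# Sample estimates of Q_aux = E{z_aux z_aux^H} and r = E{z_aux S_sum^*}
Q_aux = z_aux @ z_aux.conj().T / K
r = z_aux @ S_sum.conj() / K
w_aux = np.linalg.solve(Q_aux, r)

# Cancelled output S_sum - w_aux^H z_aux: residual jammer power is strongly reduced
residual = S_sum - w_aux.conj() @ z_aux
print(np.mean(np.abs(S_sum) ** 2), np.mean(np.abs(residual) ** 2))
```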

If, as indicated in Figure 1.14, the main antenna is also an array, it is not necessary to use extra auxiliary elements; these elements can be taken out of the whole array. One then arrives at a configuration as in Figure 1.15, called the generalized SLC (GSLC). Like the SLC, the GSLC is a jammer suppression method after beamforming. The key feature is that the main channels (sum and difference beams) and the auxiliary channels are all generated from the sub-arrays. Note that the sub-arrays in Figure 1.15 could also be single elements.

Figure 1.14 Principle of sidelobe canceller (SLC)

Figure 1.15 Generalized sidelobe canceller (GSLC)

The important feature of the GSLC system is the option to form the sum and difference beams by analogue beamforming networks. This reduces the danger of ADC limiting for strong sidelobe jammers. However, the danger of ADC limiting for mainbeam jammers is increased by this concept.

The auxiliary channels for the GSLC need not be single elements or sub-arrays, but may be generated from the whole array by an auxiliary transformation matrix $M$ as indicated in Figure 1.15. Suppose we have $L$ sub-arrays and want to have $P$ auxiliary channels. The $L \times P$ auxiliary transformation matrix $M$ can be any matrix, from a simple selection matrix for single sub-arrays to a matrix that forms full beams with all sub-array outputs. The auxiliary channel outputs are $z_{aux} = M^H \tilde{z} = M^H T^H z$ and the main channel output is $S_{sum} = \tilde{m}^H T^H z$. The optimum GSLC weight is then:

$$w_{aux} = Q_{aux}^{-1} r, \quad \text{where} \quad Q_{aux} = M^H \tilde{Q} M \quad \text{and} \quad r = M^H \tilde{Q} \tilde{m}$$

Several observations can be made for the GSLC:

(i) The columns of $M$ and $\tilde{m}$ must be linearly independent, i.e. the auxiliaries must provide additional information. In particular, the number of columns of $M$ must be smaller than the number of sub-arrays $L$.

[Proof: If $\tilde{m} = Mc$ for some coefficient vector $c$, then $w_{aux} = c$ and therefore $S_{sum} - w_{aux}^H z_{aux} = 0$.]

(ii) If the auxiliary channels block the vector used for beamforming, i.e. if $M^H \tilde{m} = 0$ (reference blocking condition), and if the auxiliaries preserve all dof, i.e. if $M$ is of rank $L-1$, then one can calculate that the GSLC weight is the same as the DSW weight for the transformed sub-array configuration. More precisely, for the beam space sub-arrays

$$\tilde{z}_{BS} = \begin{pmatrix} \tilde{m}^H \tilde{z} \\ M^H \tilde{z} \end{pmatrix}$$

one has

$$\tilde{Q}_{BS} = \begin{pmatrix} \tilde{m}^H \tilde{Q} \tilde{m} & \tilde{m}^H \tilde{Q} M \\ M^H \tilde{Q} \tilde{m} & M^H \tilde{Q} M \end{pmatrix} \quad \text{and} \quad \tilde{s}_{BS} = \begin{pmatrix} \tilde{m}^H \tilde{m} \\ M^H \tilde{m} \end{pmatrix} = \tilde{m}^H \tilde{m} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

and therefore

$$w_{DSW,BS} = \tilde{Q}_{BS}^{-1} \tilde{s}_{BS} = \mu \begin{pmatrix} 1 \\ -\left(M^H \tilde{Q} M\right)^{-1} M^H \tilde{Q} \tilde{m} \end{pmatrix} = \mu \begin{pmatrix} 1 \\ -w_{aux} \end{pmatrix}$$

This means that under these conditions the GSLC is exactly the same as the SNIR-optimum weight applied to a special (modified) sub-array configuration.

(iii) In the special case that the auxiliaries are adapted to the interference, in the sense that the columns of the matrix $M$ span the interference sub-space, the GSLC is the same as the DSW interference suppression by a projection.

[Proof: If the columns of the matrix $M$ span the interference sub-space, then the covariance matrix can be written as $\tilde{Q} = I + MBM^H$ with a certain interference cross-correlation matrix $B$, and one obtains:

$$\begin{aligned}
w_{aux} &= \left(M^H M + M^H M B M^H M\right)^{-1}\left(M^H + M^H M B M^H\right)\tilde{m} \\
&= \left(M^H M\right)^{-1}\left(I + M^H M B\right)^{-1}\left(I + M^H M B\right) M^H \tilde{m} \\
&= \left(M^H M\right)^{-1} M^H \tilde{m}
\end{aligned}$$

from which it follows that $S_{sum} - w_{aux}^H z_{aux} = \tilde{m}^H \left(I - M\left(M^H M\right)^{-1} M^H\right)\tilde{z}$, which is the projection operation.]

As mentioned before, $\tilde{m}$ need not be equal to the desired signal, i.e. the reference blocking condition is not necessarily a signal blocking condition. The term signal blocking is often used in the literature.
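Observation (iii) can be checked numerically: when the columns of $M$ span the jammer sub-space, the effective GSLC weight $\tilde{m} - M w_{aux}$ equals the projected weight $\left(I - M(M^H M)^{-1}M^H\right)\tilde{m}$. A small sketch with assumed dimensions and jammer powers:

```python
import numpy as np

rng = np.random.default_rng(2)
L_sub, M_jam = 8, 2                           # number of sub-arrays and jammers (assumed)

# Jammer sub-space M and covariance Q = I + M B M^H (noise power normalized to 1)
M = rng.standard_normal((L_sub, M_jam)) + 1j * rng.standard_normal((L_sub, M_jam))
B = np.diag([200.0, 80.0])                    # jammer powers (hypothetical)
Q = np.eye(L_sub) + M @ B @ M.conj().T

m = rng.standard_normal(L_sub) + 1j * rng.standard_normal(L_sub)   # beamforming vector

# GSLC weight: w_aux = (M^H Q M)^{-1} M^H Q m, effective main weight m - M w_aux
w_aux = np.linalg.solve(M.conj().T @ Q @ M, M.conj().T @ Q @ m)
w_gslc = m - M @ w_aux

# Projection weight: (I - M (M^H M)^{-1} M^H) m
w_proj = m - M @ np.linalg.solve(M.conj().T @ M, M.conj().T @ m)

print(np.allclose(w_gslc, w_proj))            # True: GSLC equals the projection here
```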

The shape of the sub-arrays of the equivalent beamspace representation should be irregular to avoid grating effects, i.e. possible periodic repetitions of the jammer nulls. The main difference between GSLC and DSW lies in the dynamic range of the adaptive channels because of the different point of AD-conversion in the processing chain. This leads to differences in the sensitivity and in the suppression of strong jammers. Limiting of the AD-converters must be avoided, because any non-linearity degrades the adaptive suppression. For the GSLC, strong sidelobe jammers are attenuated before adaptation by the sidelobe level, and limiting effects will occur only for very strong jammers, whereas with DSW the jammer will in general be located within the sub-array mainlobe. Conversely, for mainbeam jammers the GSLC will soon reach limiting in the mainbeam, while for DSW, with its much lower sub-array gain, adaptive suppression may still be possible. The number and size of the sub-arrays thus determine the performance against mainbeam jammers.

However, neither system fails completely if ADC limiting occurs. In this case, the gain of the sub-arrays or of the main beam would be reduced by an AGC device.

This produces some SNR degradation resulting in a range reduction, but the losses are different for DSW and GSLC for sidelobe and mainlobe jammers.

The GSLC is not suited to reducing the necessary dof for nulling the interference.

The required number of dof depends on the number of jammers and the system errors, as has been shown in [12] and in Section 1.4. Any reduction of the dof below the necessary number will result in some performance loss. As a rule of thumb, it has been found in [12] that a number of dof of two to three times the number of jammers is necessary to compensate for channel errors. This is primarily a requirement for strong jammers, especially mainbeam jammers, because only then do the leakage eigenvalues due to the channel errors emerge from the noise. Another consequence is that an attempt to reduce cost by reducing the number of adaptive channels may fail because of the higher accuracy requirements for each channel.

With respect to channel errors there is another difference between the two concepts, DSW and GSLC. The analogue beamforming networks of the GSLC tend to be more broadband. For DSW, all bandpass filtering and AD-conversion errors have an impact on beamforming, in particular with respect to the null depth and the low sidelobe level. DSW with channel errors can therefore perform significantly worse than the GSLC.

This error sensitivity is basically a problem of digital beamforming and is not specific to ABF. It is known that suitable calibration procedures are the key solution to this problem. Simple phase and amplitude calibration is not the problem.

Channel inequalities have to be reduced over the receiving (signal) bandwidth to a sufficiently low level, if high jammer suppression is desired. Passband equalization techniques may be required.

ABF as a constrained optimization. Sometimes interference suppression is realized by minimizing only the jamming power subject to additional constraints, e.g. $w^H c_i = k_i$ for suitable vectors $c_i$ and numbers $k_i$, $i = 1, \ldots, r$. Although this is an intuitively reasonable criterion, it does not necessarily give the maximum SNIR.

For certain constraints however, both solutions are equivalent. The constrained optimization problem can be written in general terms as:

$$\min_w\ w^H Q w \quad \text{s.t.} \quad w^H C = k \quad \text{or} \quad w^H c_i = k_i,\ i = 1, \ldots, r \qquad (1.48)$$

This minimization problem has the solution:

$$w = \sum_{i=1}^{r} \lambda_i Q^{-1} c_i = Q^{-1} C \left(C^H Q^{-1} C\right)^{-1} k \qquad (1.49)$$

A numerical sketch of this solution is given after the examples below.

Examples of special cases:

Single unit gain directional constraint: $w^H a_0 = 1 \Rightarrow w = \left(a_0^H Q^{-1} a_0\right)^{-1} Q^{-1} a_0$. This is obviously equivalent to the SNIR-optimum solution (1.46) with the specific normalization $\mu = \left(a_0^H Q^{-1} a_0\right)^{-1}$.

Gain and derivative constraint: $w^H a_0 = 1$, $w^H a_0' = 0 \Rightarrow w = \mu Q^{-1} a_0 + \lambda Q^{-1} a_0'$ with suitable values of the Lagrange parameters $\mu$, $\lambda$. A derivative constraint is added to make the weight less sensitive to a directional mismatch of the steering direction.

Gain and norm constraint: $w^H a_0 = 1$, $w^H w = c \Rightarrow w = \mu \left(Q + \delta I\right)^{-1} a_0$. The norm constraint is added to make the weight numerically stable. In fact, this is equivalent to the famous diagonal loading technique which we will consider later.

Norm constraint only: $w^H w = 1 \Rightarrow w = \mathrm{minEV}(Q)$, i.e. the eigenvector belonging to the smallest eigenvalue of $Q$. Without a directional constraint the weight vector produces a nearly omni-directional pattern which possesses nulls only in the interference directions. This is also called the power inversion weight, because the pattern displays the inverted interference power.
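The general constrained solution (1.49) can be sketched numerically as follows; the array size, jammer directions and powers are assumptions for illustration, and the single unit-gain constraint case is checked against the SNIR-optimum weight of the first special case:

```python
import numpy as np

N = 10                                        # elements (assumed)

def steer(u):
    # Steering vector for half-wavelength element spacing (assumed geometry)
    return np.exp(1j * np.pi * np.arange(N) * u)

# Assumed covariance: two jammers plus unit-power white noise
A = np.column_stack([steer(0.5), steer(-0.35)])
Q = np.eye(N) + A @ np.diag([100.0, 60.0]) @ A.conj().T

# General constrained solution (1.49): w = Q^{-1} C (C^H Q^{-1} C)^{-1} k
def constrained_weight(Q, C, k):
    QinvC = np.linalg.solve(Q, C)
    return QinvC @ np.linalg.solve(C.conj().T @ QinvC, k)

# Single unit-gain constraint w^H a0 = 1 reproduces the SNIR-optimum (MVDR-normalized) weight
a0 = steer(0.0)
w = constrained_weight(Q, a0[:, None], np.array([1.0]))
w_opt = np.linalg.solve(Q, a0) / (a0.conj() @ np.linalg.solve(Q, a0))
print(np.allclose(w, w_opt))                  # True
print(np.isclose(w.conj() @ a0, 1.0))         # constraint satisfied
```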

As we mentioned before, fulfilling the constraints may imply a loss in SNIR.

Therefore several techniques have been proposed to mitigate the loss. The first idea is to allow a compromise between power minimization and constraints by introducing coupling factors $b_i$ and solving a soft-constraint optimization:

$$\min_w\ w^H Q w + \sum_{i=1}^{r} b_i \left|w^H c_i - k_i\right|^2 \quad \text{or equivalently} \quad \min_w\ w^H Q w + \left(w^H C - k\right)^H B \left(w^H C - k\right) \qquad (1.50)$$

with $B = \mathrm{diag}\{b_1, \ldots, b_r\}$. The solution of this soft-constraint optimization is:

$$w = \left(Q + C B C^H\right)^{-1} C B k \qquad (1.51)$$
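A small sketch of the soft-constraint solution (1.51), using a single gain constraint and an assumed one-jammer covariance, illustrating that for large coupling factors it approaches the hard-constrained solution (1.49):

```python
import numpy as np

N = 10                                        # elements (assumed)
steer = lambda u: np.exp(1j * np.pi * np.arange(N) * u)

Q = np.eye(N) + 100.0 * np.outer(steer(0.4), steer(0.4).conj())    # one jammer (assumed)
C = steer(0.0)[:, None]                       # single unit-gain constraint (assumed)
k = np.array([1.0])

# Soft-constraint solution (1.51): w = (Q + C B C^H)^{-1} C B k
def soft_weight(b):
    B = np.diag([b])
    return np.linalg.solve(Q + C @ B @ C.conj().T, C @ B @ k)

# Hard-constrained solution (1.49) for comparison
QinvC = np.linalg.solve(Q, C)
w_hard = QinvC @ np.linalg.solve(C.conj().T @ QinvC, k)

for b in (0.01, 1.0, 1e6):
    print(b, abs(soft_weight(b).conj() @ C[:, 0] - 1.0))   # constraint violation shrinks as b grows
print(np.allclose(soft_weight(1e8), w_hard, atol=1e-6))    # approaches the hard-constrained weight
```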

One may extend the constrained optimization by adding inequality constraints.

This leads to additional solutions with improved robustness properties, but these problems often cannot be solved analytically. A number of methods of this kind have been proposed; see [28–31]. As we are only presenting the principles here, we do not go into further details.

Performance criteria and displays. The performance of ABF is often displayed by the adapted antenna pattern. A typical adapted antenna pattern with three jammers of 20 dB SNR is shown in Figure 1.16(a) for the generic array of Figure 1.7. This pattern does not show how the actual jamming power is compensated by the null depth.

Plots of the SNIR are better suited to displaying this effect. The SNIR is typically plotted for varying target direction while the interference scenario is held fixed, as seen in Figure 1.16(b). The SNIR is normalized to the SNR in the clear (i.e. in the absence of any jamming) and without ABF. In other words, this pattern shows the insertion loss arising from the jamming scenario with ABF applied.

One can see that in this ideal case an insertion loss only occurs for targets close to the interference direction, i.e. when the jammer lies on the skirt of the main beam.

The 3 dB width of the sum beam is indicated by the shaded area. For sidelobe jammers the insertion loss is virtually zero.

The effect of target and steering direction mismatch is not accounted for in the SNIR plot. This effect is displayed by the scan pattern, i.e. the pattern arising if the adapted beam scans over a fixed target and interference scenario. Such a plot is rarely shown because of the many parameters to be varied. In this context, we note that if the training data contain only interference plus noise, the main beam of the adapted pattern is fairly broad, similar to the unadapted sum beam, and is therefore fairly insensitive to pointing mismatch. Obtaining an interference-alone covariance matrix is a matter of proper selection of the training data, as discussed in the following section.

Figure 1.16 shows the case of an untapered (uniformly weighted) planar antenna. The first sidelobes of the unadapted antenna pattern are at the typical level of -17 dB and they are nearly unaffected by the adaptation process. If we have an antenna with low sidelobes, the peak sidelobe level is affected and increases in this case by 15 dB, as seen in Figure 1.17. Due to the tapering we have a loss in SNIR of 1.2 dB compared to the reference antenna (untapered, without ABF and jamming).

For comparison we have also plotted the insertion loss with a non-adaptive antenna which reproduces virtually the inverted antenna pattern. This shows the significant SNIR loss in spite of the low sidelobes.

1.6.2 Estimation of adaptive weights

In reality the interference covariance matrix is not known and must be estimated from some training data $Z = (z_1, \ldots, z_K)$. To avoid signal cancellation, the training data should contain the interference alone. Techniques for achieving this will be considered in Section 1.6.4.

Figure 1.16 Untapered beamforming: antenna and normalized SNIR patterns for a three-jammer configuration and generic array (from [4]). (a) Adapted antenna pattern and (b) SNIR

Figure 1.17 Tapered beamforming: antenna and normalized SNIR patterns for a three-jammer configuration for the generic array with low sidelobes, 35 dB Taylor weighting (from [4]). (a) Adapted antenna pattern and (b) SNIR

The ML estimate of the covariance matrix is:

$$\hat{Q}_{SMI} = \frac{1}{K}\sum_{k=1}^{K} z_k z_k^H \qquad (1.52)$$

This is called the sample matrix inversion (SMI) algorithm. The SMI estimate is a good estimate only asymptotically; for small sample size it is known to be not very stable. To guarantee that the matrix can be inverted we need at least $K = N$ samples.

According to Brennan's rule [22,23], one needs $K = 2N$ samples to obtain an average SNIR loss below 3 dB. For smaller sample size the performance can be considerably worse. However, by simply adding a multiple of the identity matrix to the SMI estimate, close to optimum performance can be achieved. This is called the loaded sample matrix inversion (LSMI) method:

$$\hat{Q}_{LSMI} = \frac{1}{K}\sum_{k=1}^{K} z_k z_k^H + \delta I \qquad (1.53)$$

The drastic difference between SMI and LSMI is shown in Figure 1.18 for the generic array (Figure 1.7) for three jammers of 20 dB input JNR with 32 sub-arrays and only 32 data snapshots (the minimum number). It can be shown that for a

‘reasonable’ choice of the loading factor (a rule of thumb is $\delta = 2\sigma^2 \ldots 4\sigma^2$ for an untapered antenna) we need only $2M$ snapshots to obtain a 3 dB SNIR loss, where $M$ denotes the number of jammers (or, better, the number of dominant eigenvalues) present [23,32]. So, with diagonal loading the sample size can be considerably lower than the dimension of the matrix. The beneficial effect of the loading factor is based on the fact that the dynamic range of the small eigenvalues is compressed. The small eigenvalues possess the largest statistical fluctuation, but have the greatest influence on the weight fluctuation due to the matrix inversion.

Figure 1.18 SNIR for SMI, LSMI ($4\sigma^2$) and eigenvector projection with dimJSS = 3 (from [4])
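The SMI/LSMI comparison can be sketched as follows. The scenario (32 channels, 32 snapshots, three 20 dB jammers) mirrors the one described above, but the jammer directions, the loading $\delta = 4\sigma^2$ and the SNIR-loss computation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 32, 32                                 # sub-array channels and snapshots (minimum number)
steer = lambda u: np.exp(1j * np.pi * np.arange(N) * u)

# Three 20 dB jammers plus unit-power noise (assumed directions)
A = np.column_stack([steer(0.3), steer(-0.45), steer(0.7)])
Q_true = np.eye(N) + A @ (100.0 * np.eye(3)) @ A.conj().T

# Training snapshots drawn from Q_true
Z = np.linalg.cholesky(Q_true) @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

Q_smi = Z @ Z.conj().T / K                    # SMI estimate (1.52)
Q_lsmi = Q_smi + 4.0 * np.eye(N)              # LSMI estimate (1.53) with delta = 4*sigma^2

a0 = steer(0.0)
snir_opt = (a0.conj() @ np.linalg.solve(Q_true, a0)).real

def snir_loss_db(Q_est):
    w = np.linalg.solve(Q_est, a0)
    snir = np.abs(w.conj() @ a0) ** 2 / (w.conj() @ Q_true @ w).real
    return 10 * np.log10(snir / snir_opt)

print("SMI  loss:", snir_loss_db(Q_smi), "dB")    # typically a large loss at K = N
print("LSMI loss:", snir_loss_db(Q_lsmi), "dB")   # close to 0 dB
```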

One may go even further and ignore the small eigenvalue estimates completely, i.e. one tries to find an estimate of the inverse covariance matrix based only on the dominant eigenvectors and eigenvalues. For high JNR we can replace the inverse covariance matrix by a projection matrix. Suppose we have $M$ jammers with amplitudes $b_1(t_k), \ldots, b_M(t_k)$ in directions $u_1, \ldots, u_M$ and received data $z_k = A b_k + n_k$; then:

$$E\{zz^H\} = Q = ABA^H + I \qquad (1.54)$$

For convenience we have normalized the noise power to 1. With this normalization the diagonal elements of $B = E\{bb^H\}$ represent the signal-to-noise ratio. Using the matrix inversion lemma we have:

$$Q^{-1} = I - A\left(B^{-1} + A^H A\right)^{-1} A^H \;\xrightarrow{\;B \to \infty\;}\; I - A\left(A^H A\right)^{-1} A^H = P_A^{\perp} \qquad (1.55)$$

$P_A^{\perp}$ is a projection onto the space orthogonal to the columns of $A$. For strong jammers the space spanned by the columns of $A$ will be the same as the space spanned by the dominant eigenvectors, if we have no channel errors. The matrix $X = A\left(A^H A\right)^{-1/2}$ is just the matrix of the corresponding orthonormalized vectors. We may therefore replace the estimated inverse covariance matrix by a projection $P_X^{\perp} = I - XX^H$ and approximate $X$ by the dominant eigenvectors, because the eigenvectors are orthonormalized. This is called the EVP method. In the asymptotic case one has, up to a complex factor, $X = A\left(A^H A\right)^{-1/2}$.

Figure 1.18 shows the performance of the EVP method in comparison with SMI and LSMI. Note the small difference between LSMI and EVP. The results with the three methods are based on the same realization of the covariance estimate.

For EVP we have to know the dimension of the jammer sub-space (dimJSS).

In complicated scenarios and with channel errors present this value can be difficult to determine. If dimJSS is grossly overestimated, a loss in SNIR occurs; if dimJSS is underestimated, the jammers are not fully suppressed. One is therefore interested in sub-space methods with low sensitivity to the choice of the sub-space dimension. This property is achieved by a ‘weighted projection’, i.e. by replacing the projection by:

$$P_{LMI} = I - XDX^H \qquad (1.56)$$

where $D$ is a diagonal weighting matrix and $X$ is a set of orthonormal vectors spanning the interference sub-space. $P_{LMI}$ is of course not a projection. Methods of this type are called lean matrix inversion (LMI) methods. Comparing (1.56) with (1.55) one can see that this is just a simplified (lean) estimate of the inverse covariance matrix. A number of methods have been proposed that can be interpreted as LMI methods with different weighting matrices $D$. The LMI matrix can also be calculated economically by an eigenvector-free QR-decomposition method [32,33].
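A sketch of a weighted projection (1.56). The choice $D = \mathrm{diag}(1 - \sigma^2/\lambda_i)$ used here is one plausible weighting that mimics the dominant-mode expansion of $Q^{-1}$; it is an illustrative assumption, not a specific method from the text:

```python
import numpy as np

rng = np.random.default_rng(7)
N, K, dimJSS, sigma2 = 32, 64, 3, 1.0         # assumed scenario
steer = lambda u: np.exp(1j * np.pi * np.arange(N) * u)

A = np.column_stack([steer(0.3), steer(-0.45), steer(0.7)])
Q_true = sigma2 * np.eye(N) + A @ (100.0 * np.eye(dimJSS)) @ A.conj().T
Z = np.linalg.cholesky(Q_true) @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

eigval, eigvec = np.linalg.eigh(Z @ Z.conj().T / K)
X = eigvec[:, -dimJSS:]                       # dominant eigenvectors
lam = eigval[-dimJSS:]                        # dominant eigenvalues

# Weighted projection P_LMI = I - X D X^H with D = diag(1 - sigma^2 / lambda_i)
D = np.diag(1.0 - sigma2 / lam)
P_lmi = np.eye(N) - X @ D @ X.conj().T

a0 = steer(0.0)
w = P_lmi @ a0
snir = np.abs(w.conj() @ a0) ** 2 / (w.conj() @ Q_true @ w).real
snir_opt = (a0.conj() @ np.linalg.solve(Q_true, a0)).real
print("LMI SNIR loss:", 10 * np.log10(snir / snir_opt), "dB")
```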

One of the most efficient methods for pattern stabilization while maintaining a low desired sidelobe level is the constrained adaptive pattern synthesis (CAPS) algorithm [34], which is also a sub-space method. Let $m$ be the vector for beamforming with low sidelobes in a certain direction. In full generality the CAPS weight can be written as:

$$w_{CAPS} = \frac{1}{m^H \hat{Q}_{SMI}^{-1} m}\,\hat{Q}_{SMI}^{-1} m - X_{\perp}\left(X_{\perp}^H C X_{\perp}\right)^{-1} X_{\perp}^H C \left(\frac{1}{m^H \hat{Q}_{SMI}^{-1} m}\,\hat{Q}_{SMI}^{-1} m - m\right) \qquad (1.57)$$

where the columns of the matrix $X_{\perp}$ span the space orthogonal to $[X, m]$ and $X$ is again a unitary $L \times M$ matrix with columns spanning the interference sub-space, which is assumed to be of dimension $M$. $C$ is a directional weighting matrix, $C = \int_{W} a(u)\, a(u)^H p(u)\, du$, where $W$ denotes the set of directions of interest and $p(u)$ is a directional weighting function. If we use no directional weighting, $C = I$, the CAPS weight vector simplifies to:

$$w_{CAPS} = m + P_{[X,m]}\left(\frac{1}{m^H \hat{Q}_{SMI}^{-1} m}\,\hat{Q}_{SMI}^{-1} m - m\right) \qquad (1.58)$$

where $P_{[X,m]}$ denotes the projection onto the space spanned by the columns of $X$ and $m$. This method is particularly effective for very low sidelobes and mainbeam jammers.
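A sketch of the simplified CAPS weight (1.58); the taper, jammer directions, dimensions and the use of the dominant SMI eigenvectors as $X$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
L_dim, M_jam, K = 32, 2, 128                  # sub-arrays, jammers, snapshots (assumed)
steer = lambda u: np.exp(1j * np.pi * np.arange(L_dim) * u)

# Low-sidelobe beamforming vector m (Hamming taper as a stand-in for a Taylor weighting)
m = np.hamming(L_dim) * steer(0.0)

A = np.column_stack([steer(0.35), steer(-0.5)])
Q_true = np.eye(L_dim) + A @ (100.0 * np.eye(M_jam)) @ A.conj().T
Z = np.linalg.cholesky(Q_true) @ (rng.standard_normal((L_dim, K)) + 1j * rng.standard_normal((L_dim, K))) / np.sqrt(2)
Q_smi = Z @ Z.conj().T / K

# Interference sub-space estimate X: dominant eigenvectors of the SMI estimate
X = np.linalg.eigh(Q_smi)[1][:, -M_jam:]

# SMI weight normalized to the gain of m: (1 / m^H Q^{-1} m) Q^{-1} m
w_smi = np.linalg.solve(Q_smi, m) / (m.conj() @ np.linalg.solve(Q_smi, m))

# Projection onto span[X, m] via an orthonormal basis from a QR decomposition
Qb, _ = np.linalg.qr(np.column_stack([X, m]))
P = Qb @ Qb.conj().T

# Simplified CAPS weight (1.58): correct m only inside span[X, m]
w_caps = m + P @ (w_smi - m)

print("unadapted jammer response:", np.abs(m.conj() @ A))
print("CAPS jammer response     :", np.abs(w_caps.conj() @ A))   # much smaller: nulls preserved
```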

1.6.3 Determination of the dimension of jammer sub-space (dimJSS)

Sub-space methods require an estimate of the dimension of the interference sub-space. Usually this is derived from the sample eigenvalues. For complicated scenarios and small sample size a clear decision on what constitutes a dominant eigenvalue may be difficult. There are two principal approaches to determine the number of dominant eigenvalues: information theoretic criteria and noise power tests.

The information theoretic criteria are often based on the sphericity test criterion; see e.g. [35]:

$$T(m) = \frac{\dfrac{1}{N-m}\displaystyle\sum_{i=m+1}^{N} \lambda_i}{\left(\displaystyle\prod_{i=m+1}^{N} \lambda_i\right)^{1/(N-m)}} \qquad (1.59)$$

where $\lambda_i$ denote the eigenvalues of the estimated covariance matrix ordered in decreasing magnitude. This ratio of the arithmetic to the geometric mean of the $N-m$ smallest eigenvalues is always at least one, and it is close to one when these eigenvalues are nearly equal, i.e. when they represent noise only.
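A sketch of the sphericity statistic (1.59) evaluated over candidate dimensions $m$ for an assumed three-jammer scenario (the decision threshold of the actual information theoretic criteria is not shown):

```python
import numpy as np

rng = np.random.default_rng(9)
N, K = 16, 200                                # channels and snapshots (assumed)
steer = lambda u: np.exp(1j * np.pi * np.arange(N) * u)

A = np.column_stack([steer(0.3), steer(-0.5), steer(0.75)])        # three jammers (assumed)
Q_true = np.eye(N) + A @ (100.0 * np.eye(3)) @ A.conj().T
Z = np.linalg.cholesky(Q_true) @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# Sample eigenvalues in decreasing order
lam = np.sort(np.linalg.eigvalsh(Z @ Z.conj().T / K))[::-1]

def sphericity(m):
    tail = lam[m:]                            # eigenvalues lambda_{m+1} ... lambda_N
    return tail.mean() / np.exp(np.mean(np.log(tail)))   # arithmetic / geometric mean

for m in range(6):
    print(m, round(float(sphericity(m)), 3))  # drops towards 1 once m reaches the true dimJSS (3)
```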
