
5.3 Dithered GMD Transform Coder for Low Rate Applications

5.3.1 Dithered GMD Quantizer

It is shown in [27, 116] that (i) if the dither w is uniformly distributed on (−∆/2, ∆/2], then the quantization error of the subtractive dithered quantizer, Q(x + w) − (x + w), is independent of the input signal and is uniformly distributed on (−∆/2, ∆/2]; and (ii) if the dither w is the sum of n ≥ 2 independent random variables uniformly distributed on (−∆/2, ∆/2], then the kth moment of the quantization error of the nonsubtractive dithered quantizer, Q(x + w) − x, is independent of the input distribution for k = 1, 2, . . . , n, and its variance is equal to (n + 1)∆²/12. In this subsection, we propose to use dithered quantization along with the GMD coder. We also propose a design method for the predictors in the PLT structure used in the GMD coder that accounts for the low-rate mismatch.
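As a quick numerical illustration of properties (i) and (ii), the following sketch (our own illustration, not part of the coder design) applies a uniform mid-tread quantizer to an arbitrary non-uniform input, with a uniform dither in the subtractive configuration and a triangular dither (n = 2) in the nonsubtractive configuration; the measured error variances should be close to ∆²/12 and (n + 1)∆²/12 = ∆²/4, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.25                               # quantizer step size Delta
x = rng.laplace(scale=1.0, size=200_000)   # arbitrary non-uniform input signal

def uq(v, delta):
    """Uniform (mid-tread) quantizer with step size delta."""
    return delta * np.round(v / delta)

# (i) Subtractive dither: w uniform on (-delta/2, delta/2]; error = Q(x + w) - (x + w).
w = rng.uniform(-delta / 2, delta / 2, size=x.size)
e_sd = uq(x + w, delta) - (x + w)
print("SD  error variance:", e_sd.var(), "theory:", delta**2 / 12)

# (ii) Nonsubtractive dither: w = sum of two independent uniforms (triangular pdf);
#      error = Q(x + w) - x, with variance (n + 1)*delta^2/12 = delta^2/4 for n = 2.
wt = (rng.uniform(-delta / 2, delta / 2, size=x.size)
      + rng.uniform(-delta / 2, delta / 2, size=x.size))
e_nsd = uq(x + wt, delta) - x
print("NSD error variance:", e_nsd.var(), "theory:", delta**2 / 4)
```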

We consider transform coder structures with dithered quantizers combined with the GMD coder. Fig. 5.12 shows an example of the structure of a GMD subtractive dithered (GMD-SD) coder with M = 3. The dither w_i is added to the input of the uniform quantizer Q_i for i = 1, 2, . . . , M. After the uniform quantizer, the dither is subtracted from the quantized signal. The resulting signal is then multiplied by the predictor coefficients s_{ji} (j > i) for use in the following substream quantizers. The quantized signal ŷ_i is then stored or transmitted to the decoder side. The subtractive dithered quantizer assumes that the decoder has knowledge of the dither signal. In the decoder, the dither is first subtracted. The resulting signal then undergoes the inverse of the prediction process and the M × M matrix P, which yields the reconstructed signal x̂.

In the nonsubtractive dithered quantizer, knowledge of the dither is not available at the decoder. Fig. 5.13 shows the structure of the GMD nonsubtractive dithered (GMD-NSD) coder. In the encoder of the GMD-NSD coder, the quantized signal is directly multiplied by the predictor coefficients for use in the following substream quantizers, without the dither being subtracted first. In both cases, the transform coder can be modeled as in Fig. 5.14, where the ith dithered quantizer is modeled as an additive noise source n_i. Note that the statistics of the noise source n_i differ between the two cases.
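The per-substream processing described above can be sketched in code as follows. This is our own reading of Figs. 5.12–5.14 rather than code from the text: we assume the encoder adds to each substream a prediction formed from the already-quantized earlier substreams, with coefficients collected in a strictly lower triangular matrix C (related to the s_{ji} in the figures; how these map to the matrix S introduced below is left to the text), the decoder removes the same prediction before applying P, and, for brevity, the dither subtraction in the GMD-SD case is folded into the encoder. All names (uq, gmd_dithered_codec, C) are hypothetical.

```python
import numpy as np

def uq(v, delta):
    """Uniform quantizer with step size delta."""
    return delta * np.round(v / delta)

def gmd_dithered_codec(x, P, C, delta, subtractive=True, rng=None):
    """Encode and decode one vector x with a dithered GMD-style coder.
    P: orthogonal precoder (M x M); C: strictly lower triangular prediction
    coefficients; delta: common quantizer step size for all substreams."""
    rng = rng if rng is not None else np.random.default_rng()
    M = x.size
    z = P.T @ x                        # transformed substreams z = P^T x
    yhat = np.zeros(M)
    for i in range(M):
        pred = C[i, :i] @ yhat[:i]     # prediction from already-quantized substreams
        if subtractive:
            w = rng.uniform(-delta / 2, delta / 2)
            yhat[i] = uq(z[i] + pred + w, delta) - w     # dither removed after quantization
        else:
            w = (rng.uniform(-delta / 2, delta / 2)
                 + rng.uniform(-delta / 2, delta / 2))   # triangular-pdf dither, never removed
            yhat[i] = uq(z[i] + pred + w, delta)
    # Decoder: undo the prediction, then invert the orthogonal precoder.
    zhat = yhat - C @ yhat             # zhat[i] = yhat[i] - sum_{j<i} C[i,j]*yhat[j] = z[i] + n[i]
    xhat = P @ zhat
    return yhat, xhat

# Tiny usage example with M = 3 (values arbitrary).
rng = np.random.default_rng(1)
P, _ = np.linalg.qr(rng.standard_normal((3, 3)))         # any orthogonal precoder
C = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0], [0.2, 0.3, 0.0]])
x = np.array([1.0, -0.5, 0.25])
yhat, xhat = gmd_dithered_codec(x, P, C, delta=0.1, rng=rng)
print(xhat - x)                        # reconstruction error = P @ n, entries on the order of delta
```

Since P is orthogonal, the reconstructed signal in this sketch is x̂ = x + Pn, so the reconstruction error has the same energy as the effective quantization noise n.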

In the following, we will use the model in Fig. 5.14 to design the predictor coefficients. The transformed signal z is passed through a prediction-based lower triangular transform coder [79] implemented using the MINLAB structure [82]. The resulting encoded signal is ŷ = S(P^T x + n), where ŷ = [ŷ_1, ŷ_2, · · · , ŷ_M]^T, n = [n_1, n_2, · · · , n_M]^T, and the prediction matrix S is a unit-diagonal lower triangular matrix that consists of the prediction coefficients in the MINLAB structure.

The covariance matrix of ŷ can be written as

R_ŷ = S E[(z + n)(z + n)^T] S^T = S(R_z + R_n)S^T,

Figure 5.12: Subtractive dithered GMD transform coder.

Figure 5.13: Nonsubtractive dithered GMD transform coder.

where we assume that the noise samples are uncorrelated with the signal z. Therefore, the MMSE prediction matrix can be obtained by viewing the signal covariance matrix of the middle part (the PLT part) as R_z + R_n instead of R_z. By a derivation similar to that in Sec. III of [79], the MMSE lower triangular prediction matrix can be written as

S = L_1^{-1}, (5.13)

where L_1 is the unit-diagonal lower triangular matrix in the LDU decomposition of R_z + R_n:

R_z + R_n = L_1 D L_1^T. (5.14)
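Numerically, L_1, D, and hence S = L_1^{-1} can be obtained from a Cholesky factorization of R_z + R_n, as in the short sketch below (our own illustration; the covariance and noise values are arbitrary). It also checks that the resulting S makes S(R_z + R_n)S^T equal to the diagonal matrix D.

```python
import numpy as np

def ldu_unit_lower(R):
    """LDU factorization R = L1 @ D @ L1.T with unit-diagonal lower triangular L1,
    obtained from the Cholesky factor G of R (L1 = G scaled to unit diagonal)."""
    G = np.linalg.cholesky(R)          # R = G @ G.T, G lower triangular
    d = np.diag(G)
    L1 = G / d                         # divide column j by G[j, j]
    D = np.diag(d**2)
    return L1, D

# Arbitrary example: random R_z and white quantization noise R_n = sigma^2 I.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
Rz = B @ B.T
Rn = 0.05 * np.eye(4)

L1, D = ldu_unit_lower(Rz + Rn)        # Eq. (5.14)
S = np.linalg.inv(L1)                  # Eq. (5.13): MMSE prediction matrix
Ry = S @ (Rz + Rn) @ S.T               # covariance of the encoded signal
print(np.allclose(Ry, D))              # True: S leaves a diagonal covariance D
```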

Suppose that the noise variance depends only on the quantizer step size, and the noise samples are uncorrelated with each other. This is achievable by using a uniformly distributed dither for the subtractive dithered quantizer and a triangular-pdf dither for the nonsubtractive dithered quantizer (see [27, 116]). We will first assume that the step size of each quantizer can be made the same (same step-size rule). This implies that the signal substream to each quantizer has the same variance, or equivalently R_ŷ = dI, where d is some constant. Later we will prove that this is possible without bit allocation, but with the aid of a properly designed precoder P^T. Under this same step-size assumption, the noise covariance matrix is R_n = σ²I, where σ² is the noise variance, which depends on the step size. With the use of the prediction matrix S in (5.13), the covariance matrix of the encoded signal ŷ is R_ŷ = D, where D is a diagonal matrix. The question now is whether there exists an orthogonal matrix P such that R_ŷ = dI. The following theorem asserts the existence of such a P.

Theorem 5.3.1 Suppose A is some Cholesky factor of R_x, i.e., R_x = AA^T. Consider the geometric mean decomposition

[A^T; σI] = Q R P^T, (5.15)

where [A^T; σI] denotes A^T stacked on top of σI, and R has equal diagonal entries r, i.e., its diagonal is [r, r, · · · , r]. If the precoder P^T in Fig. 5.14 is taken as the one in (5.15), then R_ŷ = D = dI for some constant d. ♦

Figure 5.14: The equivalent model of the dithered GMD transform coder.

Proof: From (5.14), by using the MMSE prediction matrix S = L_1^{-1}, the covariance matrix is R_ŷ = D. Thus we only need to prove that the LDU decomposition of R_z + σ²I = P^T R_x P + σ²I

has D = dI. To prove this, observe that

P^T R_x P + σ²I = P^T [A σI] [A^T; σI] P
= P^T P R^T Q^T Q R P^T P
= R^T R = L_1 D L_1^T,

where L_1 is taken as r^{-1} R^T, and D = r²I. This completes the proof.
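The theorem can also be checked numerically. The sketch below uses a simple Givens-rotation construction of the geometric mean decomposition (written from the general GMD idea; it is not the specific algorithm used in [122] or in the text), forms P from the stacked matrix in (5.15), and verifies that the LDU decomposition of P^T R_x P + σ²I has D = r²I. The helper gmd() and all variable names are our own.

```python
import numpy as np

def gmd(A):
    """Geometric mean decomposition A = Q @ R @ P.T, with R upper triangular and
    every diagonal entry equal to the geometric mean of the singular values of A.
    Plain Givens-rotation construction; A is assumed to have full column rank."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    n = s.size
    sbar = np.exp(np.log(s).mean())                    # geometric mean of singular values
    R, Q, P = np.diag(s), U.copy(), Vt.T.copy()
    for k in range(n - 1):
        # Move a diagonal entry lying on the other side of sbar into position k + 1.
        rest = np.arange(k + 1, n)
        dr = np.diag(R)[rest]
        p = rest[np.argmin(dr)] if R[k, k] >= sbar else rest[np.argmax(dr)]
        R[:, [k + 1, p]] = R[:, [p, k + 1]]; R[[k + 1, p], :] = R[[p, k + 1], :]
        Q[:, [k + 1, p]] = Q[:, [p, k + 1]]; P[:, [k + 1, p]] = P[:, [p, k + 1]]
        d1, d2 = R[k, k], R[k + 1, k + 1]
        if abs(d1 - d2) < 1e-12:                       # both already equal to sbar
            c, t = 1.0, 0.0
        else:
            c2 = np.clip((sbar**2 - d2**2) / (d1**2 - d2**2), 0.0, 1.0)
            c, t = np.sqrt(c2), np.sqrt(1.0 - c2)
        G2 = np.array([[c, -t], [t, c]])               # right (column) rotation
        G1 = np.array([[c * d1, -t * d2],
                       [t * d2,  c * d1]]) / sbar      # left (row) rotation
        R[:, k:k + 2] = R[:, k:k + 2] @ G2
        R[k:k + 2, :] = G1.T @ R[k:k + 2, :]           # sets R[k, k] = sbar, R[k+1, k] = 0
        Q[:, k:k + 2] = Q[:, k:k + 2] @ G1
        P[:, k:k + 2] = P[:, k:k + 2] @ G2
    return Q, R, P

# Numerical check of Theorem 5.3.1 with an arbitrary covariance and noise level.
rng = np.random.default_rng(0)
M = 4
B = rng.standard_normal((M, M))
Rx = B @ B.T                                           # input covariance
A = np.linalg.cholesky(Rx)                             # Rx = A @ A.T
sigma = 0.2
stacked = np.vstack([A.T, sigma * np.eye(M)])          # the stacked matrix in (5.15)
Q, R, P = gmd(stacked)
r = R[0, 0]                                            # common diagonal value of R

G = np.linalg.cholesky(P.T @ Rx @ P + sigma**2 * np.eye(M))
D = np.diag(np.diag(G)**2)                             # D from the LDU decomposition
print(np.allclose(Q @ R @ P.T, stacked))               # valid decomposition
print(np.allclose(np.diag(R), r))                      # equal-diagonal R
print(np.allclose(D, r**2 * np.eye(M)))                # D = r^2 I, as the theorem asserts
```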

This theorem suggests a method for finding the precoder P^T. With such a P^T as the precoder and S in (5.13) as the prediction matrix, we are able to have R_ŷ = r²I. Therefore, the scalar quantizers of the dithered GMD transform coder can use a uniform step size without bit allocation.

The design procedure for the GMD-SD (or GMD-NSD) transform coder, given the input covariance matrix, is summarized in the following (an end-to-end code sketch of the procedure follows the list):

1. Determine the uniform quantizer step size ∆ according to the bit rate.

2. Determine σ from the quantizer step size, e.g., σ = √(∆²/12) for GMD-SD using the uniform-pdf dither, and σ = √(∆²/4) for GMD-NSD using the triangular-pdf dither.

3. Compute the Cholesky factor A of R_x: AA^T = R_x.

4. Compute the geometric mean decomposition as in (5.15).

5. Compute the LDU decomposition P^T R_x P + σ²I = L_1 D L_1^T, and take S = L_1^{-1}.

6. Construct the transform coder structure using P^T and S as in Fig. 5.12 and Fig. 5.13 for the GMD-SD and GMD-NSD transform coders, respectively.
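Assuming the gmd() helper sketched after the proof of Theorem 5.3.1, the six steps above can be coded almost line by line, as in the following illustration; the input covariance, step size, and bit-rate choice (Step 1) are placeholders.

```python
import numpy as np
# Reuses the gmd() helper from the sketch after Theorem 5.3.1; everything else
# here (covariance, step size, function name) is a hypothetical illustration.

def design_dithered_gmd(Rx, delta, subtractive=True):
    """Steps 2-5 of the design procedure: returns the precoder P^T, the MMSE
    prediction matrix S, and the common substream variance d = r^2."""
    M = Rx.shape[0]
    # Step 2: effective noise standard deviation from the step size.
    sigma = np.sqrt(delta**2 / 12) if subtractive else np.sqrt(delta**2 / 4)
    # Step 3: Cholesky factor A of Rx (Rx = A @ A.T).
    A = np.linalg.cholesky(Rx)
    # Step 4: geometric mean decomposition of the stacked matrix in (5.15).
    _, R, P = gmd(np.vstack([A.T, sigma * np.eye(M)]))
    # Step 5: LDU of P^T Rx P + sigma^2 I via Cholesky, then S = L1^{-1}.
    G = np.linalg.cholesky(P.T @ Rx @ P + sigma**2 * np.eye(M))
    L1 = G / np.diag(G)
    S = np.linalg.inv(L1)
    return P.T, S, R[0, 0]**2

# Example (Step 1, choosing delta from the bit rate, is application dependent
# and simply fixed here; Step 6 amounts to wiring P^T and S into Fig. 5.12/5.13).
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
Rx = B @ B.T
delta = 0.1
Pt, S, d = design_dithered_gmd(Rx, delta, subtractive=True)
sigma2 = delta**2 / 12
Ry = S @ (Pt @ Rx @ Pt.T + sigma2 * np.eye(4)) @ S.T
print(np.allclose(Ry, d * np.eye(4)))      # every substream has the same variance d = r^2
```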

The complexity of the successive decompositions in the above algorithm is of the same order as that of the GMD transform coder described in [122]. A complete comparison of the design and implementation costs of the different transform coders is given in the previous section.