
New approaches to the analysis and design of Reed-Solomon related codes

Academic year: 2024

Full text

Algebraic soft-decision decoding of Reed-Solomon codes is a class of algorithms spurred by Sudan's breakthrough. The weight enumerators of (7, 5)^2 Reed-Solomon product codes are compared to those of binary codes of the same dimension.

Introduction

Contributions

This motivated us to study the ultimate performance of such soft-decision list decoding algorithms, and to study the performance of linear product codes in general and of RS product codes in particular.

Thesis Outline

The weight enumerator for the ensemble of binary images of product Reed-Solomon codes was also derived. Unfortunately, it is very difficult to determine the weight enumerator of the binary image of a specific RS code.

Preliminaries

How can one analyze the maximum likelihood performance of the binary images of RS codes? This motivated us to study the minimum distance of the ensemble of binary images of RS codes (Section 2.3).

Average Binary Image of Reed-Solomon Codes

The generating function of the average weight enumerator of the binary image of a nonzero symbol is derived. The ensemble average weight enumerator of (2.10) is compared with the weight enumerator of a random code (2.16) and with the upper bound of Theorem 2.1.

Figure 2.1: True BWE versus the averaged BWE for the (7, 5) RS code over F_8.
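The averaging argument can be sketched numerically: assuming, as in the ensemble, that each nonzero symbol maps uniformly to one of the 2^m − 1 nonzero m-bit patterns, the average BWE follows by convolving the per-symbol bit-weight distribution over the symbol-level MDS weight distribution. The sketch below (function names are illustrative, not from the thesis) computes the averaged BWE of the (7, 5) RS code over F_8.

```python
from math import comb

def mds_weight_enumerator(n, k, q):
    # Symbol-level weight distribution A_h of an (n, k) MDS code over F_q
    d = n - k + 1
    A = [0] * (n + 1)
    A[0] = 1
    for h in range(d, n + 1):
        A[h] = comb(n, h) * sum((-1)**j * comb(h, j) * (q**(h - d + 1 - j) - 1)
                                for j in range(h - d + 1))
    return A

def average_bwe(n, k, m):
    # Ensemble-average binary weight enumerator: each nonzero symbol is
    # assumed to map uniformly to a nonzero m-bit pattern; returns expected
    # multiplicities indexed by bit weight 0 .. n*m
    q = 2**m
    A = mds_weight_enumerator(n, k, q)
    # p[j]: probability that a single nonzero symbol has bit weight j
    p = [comb(m, j) / (q - 1) for j in range(m + 1)]
    p[0] = 0.0
    bwe = [0.0] * (n * m + 1)
    for h, Ah in enumerate(A):
        if Ah == 0:
            continue
        # Convolve h copies of p to get the bit-weight distribution
        dist = [1.0]
        for _ in range(h):
            new = [0.0] * (len(dist) + m)
            for i, di in enumerate(dist):
                for j, pj in enumerate(p):
                    new[i + j] += di * pj
            dist = new
        for w, pw in enumerate(dist):
            bwe[w] += Ah * pw
    return bwe
```

The total multiplicity is preserved: the average BWE sums to q^k, the number of codewords.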

The Binary Minimum Distance of the Ensemble of Binary Images of Reed-Solomon Codes

The minimum distance of the ensemble of binary images of an (n, k, d) RS code over F_{2^m} is lower bounded, and the relative minimum binary distance of the ensemble is plotted against the code rate.

Figure 2.3: The ensemble binary minimum distance of Reed-Solomon codes.

Performance of the Maximum-Likelihood Decoders

We also compare the performance with that of the symbol-level HD-BM and HD-GS algorithms. Bounds on the performance of the maximum likelihood decoder provide a benchmark to compare the performance of other suboptimal algorithms.

Figure 2.6: Performance of a binary image of the (15, 11) RS code over F_16 when transmitted over a binary-input AWGN channel.

Conclusion

We then proceed in Section 3.3 to derive a strong symmetry property for MDS codes (Theorem 3.8) which allows us to obtain improved bounds on the symbol error probability for RS codes. In Section 3.7, we conclude the chapter and give some insight into the results in this chapter.

Figure 3.1: A multiuser scenario where a code is shared among many users.

Weight Enumerators

In Section 3.6 we prove that if systematic MDS (e.g. RS) codes are used in a multi-user setting, the unconditional symbol or bit error probabilities of all the users will be the same regardless of the size of the partitions allocated to them. Since each non-zero symbol in the redundancy part of the code contributes to both its output and redundancy weights, R(X,Y) and O(X,Y) are related by the following.

Figure 3.2: Partitioning of a code defined over F_q^7.

Partition Weight Enumerator of Maximum-Distance-Separable Codes

In the next theorem, we show that for an arbitrary partition of the coordinates of an MDS code, and for any number of partitions, the partition weight enumerator of MDS codes admits a closed-form formula. It can be checked that the sum of the coefficients equals the total number of codewords, 8^3.

Figure 3.3: Theorem 3.1.

A Relationship Between Coordinate Weight and Codeword Weight

The total weight of all s-coordinates of C_h is the sum of the weights of the individual coordinates, s(h/n)E(h). Counting the total weight of the codewords of Hamming weight h in C⊥ in two different ways yields the identity.
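The two-way counting argument implies that, for an MDS code, each coordinate carries an equal h/n share of the weight of the weight-h subcode. This can be verified exhaustively on a hypothetical small example, a (4, 2) RS code over F_5 built by evaluating all polynomials of degree < 2 at four points (the construction and names are illustrative):

```python
from itertools import product
from collections import defaultdict

# A (4, 2) RS code over F_5: evaluations of all degree-<2 polynomials
q, n, k = 5, 4, 2
points = [1, 2, 3, 4]

codewords = []
for coeffs in product(range(q), repeat=k):
    cw = [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q for x in points]
    codewords.append(cw)

A = defaultdict(int)       # A[h]: number of codewords of Hamming weight h
coord0 = defaultdict(int)  # weight carried by coordinate 0 among weight-h words
for cw in codewords:
    h = sum(1 for s in cw if s != 0)
    A[h] += 1
    if cw[0] != 0:
        coord0[h] += 1

# MDS symmetry: each coordinate carries an equal (h/n) share of A[h]
for h in A:
    assert coord0[h] * n == h * A[h]
```

For this code the check passes for every weight class, including h = 3 (the minimum distance) and h = 4.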

Binary Partition Weight Enumerator of MDS Codes

We emphasize that the weight profile of the binary image cannot be easily derived from the symbol-level enumerator. The IOWE of the binary image will be useful in the analysis of the bit error probability.

Figure 3.4: Partitioning of a code and its binary image.

Symbol and Bit Error Probabilities

As we will see in the next section, the result of Theorem 3.16 simplifies the analysis of the bit error probability of MDS codes. It is likewise determined by the weight enumerator and takes the form of a union bound, as in (3.31).
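The union-bound form referenced here can be sketched for BPSK over AWGN, given a binary weight enumerator (an illustrative computation of the generic union bound, not the exact expression (3.31)):

```python
import math

def Q(x):
    # Gaussian tail function Q(x) = Pr{N(0,1) > x}
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_cep(bwe, n, k, ebno_db):
    # Union bound on codeword error probability for BPSK over AWGN:
    # sum over nonzero weights w of A_w * Q(sqrt(2 w R Eb/N0))
    R = k / n
    ebno = 10 ** (ebno_db / 10.0)
    return sum(Aw * Q(math.sqrt(2.0 * R * ebno * w))
               for w, Aw in enumerate(bwe) if w > 0 and Aw > 0)
```

As expected of a union bound, the value decreases monotonically with SNR and is dominated by the low-weight terms at high SNR.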

Multiuser Error Probability

We will now calculate the conditional symbol error probability of the third user under different scenarios. It follows that the SEP of the third user is conditioned on the first two users being decoded with zero probability of error.

Figure 3.5: Conditional multiuser decoder error probability for Example 3.4.

Conclusion

Section 4.5 provides a brief overview of previously proposed multiplicity assignment algorithms for algebraic soft-decision decoding. In short, we conclude that our method is theoretically superior to these previously proposed algorithms, although its practicality remains to be seen.

Preliminaries

The underlying (discrete-input, memoryless) channel model has input alphabet F_q, output alphabet R (which could be of infinite size for continuous channels), and transition probabilities Pr{Y = r | X = β}, where X and Y denote the channel input and output, respectively. It should be noted that the probabilities π_i(β) can be calculated from the soft channel output, as is the case for additive white Gaussian noise (AWGN) channels.
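For BPSK over AWGN, the symbol-level APPs can be assembled from per-bit posteriors. The sketch below assumes independent bit channels and a natural bits-to-symbol map; both assumptions, and all names, are illustrative rather than taken from the thesis:

```python
import math

def bit_posterior(y, sigma):
    # Posterior Pr{b = 1 | y} for BPSK (bit 0 -> +1, bit 1 -> -1) over AWGN
    # with noise std sigma, assuming equal priors
    llr = 2.0 * y / sigma**2          # LLR in favor of bit 0
    return 1.0 / (1.0 + math.exp(llr))

def symbol_app(ys, sigma, m):
    # APP vector pi(beta) over F_{2^m}, from the m soft outputs ys of one
    # symbol, assuming the i-th bit of beta is its i-th binary digit
    p1 = [bit_posterior(y, sigma) for y in ys]
    pi = []
    for beta in range(2**m):
        prob = 1.0
        for i in range(m):
            bit = (beta >> i) & 1
            prob *= p1[i] if bit else (1.0 - p1[i])
        pi.append(prob)
    return pi
```

Because each symbol APP is a product of Bernoulli bit posteriors, the vector sums to one by construction.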

The Guruswami-Sudan Algorithm

If complexity is not an issue and the interpolation cost tends to infinity, the sufficient condition (4.4) for a codeword c to be on the decoder's list reduces to [72, 31] (see Theorem 4.7).
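In this infinite-cost limit the condition takes a simple form. The sketch below uses the asymptotic sufficient condition from the Koetter-Vardy analysis, S_M(c) > sqrt(2(k−1)C(M)), where S_M is the score and C(M) the interpolation cost; the function names are ours, not the thesis's:

```python
import math

def score(M, c_support):
    # Score of codeword c under multiplicity matrix M (q rows, n columns):
    # sum of the multiplicities at the points (c_j, j) lying on c
    return sum(M[beta][j] for j, beta in enumerate(c_support))

def cost(M):
    # Interpolation cost C(M) = sum over entries of m(m+1)/2
    return sum(m * (m + 1) // 2 for row in M for m in row)

def on_list_asymptotic(M, c_support, k):
    # Asymptotic sufficient list-membership condition (cost -> infinity)
    return score(M, c_support) > math.sqrt(2 * (k - 1) * cost(M))
```

For example, a multiplicity matrix concentrated on the positions of a candidate codeword easily satisfies the condition.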

Upper Bounds on the Minimum Weighted Degree

From the derivation of the above lemma, it is clear that the upper bound of [72] is attained.

Figure 4.1: Bounds on the function D_v(Ω(M)) as a function of Ω(M) for v = 6.

A Mathematical Model for ASD Decoding of Reed-Solomon Codes

The following theorem shows that Pr{E_A | Π} depends only on C, Π and M, so we introduce the notation. In Theorem 4.3, it was implicitly assumed that the channel is memoryless and that the components of c are uniformly drawn from the field F_q.

Algebraic Soft-Decision Decoding

Gaussian approximation: by the definition of the score, (4.2), the score of a random vector with respect to a multiplicity matrix M is a sum of random variables. Assuming these random variables are independent, the distribution of the score can be approximated by a Gaussian distribution.
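As a rough illustration of this approximation (not the thesis's derivation), the mean and variance of the score factor over positions, and the error event {score ≤ D} is estimated by a normal tail:

```python
import math

def gaussian_error_estimate(M, Pi, D):
    # Under the Gaussian approximation, the score S = sum_j M[X_j][j] of the
    # transmitted word is a sum of independent per-position terms; estimate
    # Pr{S <= D} by a normal tail. M and Pi are q x n (rows indexed by the
    # field element, columns by position); Pi holds the APPs.
    q, n = len(M), len(M[0])
    mean = var = 0.0
    for j in range(n):
        e1 = sum(Pi[b][j] * M[b][j] for b in range(q))
        e2 = sum(Pi[b][j] * M[b][j]**2 for b in range(q))
        mean += e1
        var += e2 - e1**2
    if var == 0.0:
        return 0.0 if mean > D else 1.0
    z = (D - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

When D equals the mean score, the estimate is 1/2, as expected of a symmetric approximation.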

Optimum Multiplicity Matrices

  • Optimization Problem

Here P(Π, γ) is the minimum possible error probability of the ASD decoder, given Π and an upper bound γ on the cost of M. The matrix M(Π, γ) is the optimal multiplicity matrix of cost at most γ corresponding to the APP matrix Π.

The Chernoff Bound Multiplicity Assignment Algorithm

  • The Chernoff Bound—Finite Cost
  • The Chernoff Bound—Infinite Cost
  • The Lagrangian
  • Convexity
  • Iterative Algorithm
  • Implementation Issues

G∗(Π, √(k−1)) is therefore the minimum possible decoder error probability for the ASD decoder, given the APP matrix Π. As we saw in the previous section, the case of an optimal multiplicity matrix with infinite cost is the special case with L2 = 1.
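The Chernoff-bound machinery can be illustrated directly on the score: for any s > 0, Pr{S ≤ D} ≤ e^{sD} E[e^{−sS}], and since the score is a sum of independent per-position terms the moment generating function factors. The minimal sketch below optimizes the exponent by grid search over s rather than by the iterative algorithm of the text (names and grid are illustrative):

```python
import math

def chernoff_error_bound(M, Pi, D, s_grid=None):
    # Chernoff bound on Pr{S <= D} for the score S = sum_j M[X_j][j],
    # where position j takes value b with probability Pi[b][j].
    q, n = len(M), len(M[0])
    if s_grid is None:
        s_grid = [0.01 * t for t in range(1, 500)]
    best = 1.0  # the trivial bound
    for s in s_grid:
        log_mgf = 0.0
        for j in range(n):
            # log E[exp(-s * M[X_j][j])] for position j
            log_mgf += math.log(sum(Pi[b][j] * math.exp(-s * M[b][j])
                                    for b in range(q)))
        best = min(best, math.exp(s * D + log_mgf))
    return best
```

The bound is monotone in D and never exceeds one, which makes it a convenient surrogate objective for multiplicity assignment.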

Numerical Results

Their performance is also compared to an averaged upper bound on the performance of the ML decoder. A decoding instance of the (15, 11) RS code, BPSK-modulated over an AWGN channel at a fixed SNR of 6 dB, is decoded using the Chernoff ASD algorithm.

Figure 4.2: Performance of ASD algorithms when decoding a (15, 11) RS code BPSK-modulated over an AWGN channel, for both finite and infinite interpolation costs.

Discussion

The Chernoff bound is generally an exponentially tight upper bound on the tail of the error probability and closely approximates the true error probability.

Conclusion

For a brief review of the GS algorithm and of algebraic soft-decision decoding, in particular the Koetter-Vardy algorithm, we refer the reader to Section 4.2 and Section 4.5 in the previous chapter. In Section 5.1, we review some notation and describe the technique used to derive the binary images of Reed-Solomon codes.

Preliminaries

  • A Binary Image of the Reed-Solomon Code

In many cases, it is the binary image of the RS code that is modulated and transmitted over the channel. We present here a valid binary representation of an RS code and the corresponding parity-check matrix.
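One standard way to obtain such a binary representation (sketched here under a polynomial-basis assumption; the thesis may use a different basis) replaces each field element of the parity-check matrix by the m × m binary matrix of multiplication by that element:

```python
def gf_mul(a, b, m, prim):
    # Multiply a and b in F_{2^m}; `prim` is the primitive polynomial bitmask
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:
            a ^= prim
    return r

def binary_matrix(h, m, prim):
    # m x m binary matrix of multiplication by h over the polynomial basis:
    # column i is the binary expansion of h * x^i
    cols = [gf_mul(h, 1 << i, m, prim) for i in range(m)]
    return [[(cols[i] >> row) & 1 for i in range(m)] for row in range(m)]
```

Substituting these matrices entry-wise into a q-ary parity-check matrix yields a valid binary parity-check matrix for the binary image.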

Adaptive Belief Propagation

If only one BP iteration is performed on the parity-check matrix, the vertical step is eliminated. Initialize log-BP on the parity-check matrix Ĥ_P with initial LLRs Λ_in, for a maximum number of iterations It_H and a vertical-step damping factor θ.

Modifications to the Jiang-Narayanan Algorithm

However, running more iterations of the JN ABP algorithm can move the updated received vector within decoding distance of the transmitted codeword. Each of these N2 iterations (decoders) starts with a different random permutation of the channel LLRs in the first inner iteration.

The Hybrid ABP-ASD List Decoding Algorithm

If the generated codeword list is not empty, it is appended to the global codeword list. The algorithm attempts to eliminate decoder errors (cases where the decoded codeword is not the transmitted one) by iteratively adding codewords to the global list and choosing the most likely one.

A Low Complexity ABP Algorithm

Rearrange the columns of the binary parity-check matrix Q0j so that the unit-weight columns are the leftmost columns. For clarity, the implementation of the algorithm also assumes that the columns of the parity-check matrix corresponding to the least reliable bits are independent and can therefore be reduced to unit-weight columns.
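The adaptive reduction step can be sketched as GF(2) Gaussian elimination that targets the least reliable columns first (a simplified illustration; `reliab` holds the LLR magnitudes, and dependent columns are simply skipped, as noted in the text):

```python
def reduce_unreliable_columns(H, reliab):
    # Reduce the columns of the binary parity-check matrix H (list of rows
    # of 0/1) corresponding to the least reliable bits to unit weight via
    # GF(2) Gaussian elimination; reliab[i] is the reliability of bit i.
    H = [row[:] for row in H]          # work on a copy
    rows = len(H)
    order = sorted(range(len(reliab)), key=lambda i: reliab[i])
    r = 0
    for col in order:
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if H[i][col]), None)
        if pivot is None:
            continue                    # column dependent on earlier ones
        H[r], H[pivot] = H[pivot], H[r]
        for i in range(rows):
            if i != r and H[i][col]:
                H[i] = [a ^ b for a, b in zip(H[i], H[r])]
        r += 1
    return H
```

After the reduction, each processed unreliable column contains a single one, so BP on those positions behaves like a repetition of the reliable information.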

Numerical Results and Discussion

  • Fast Simulation Setup
  • Bounds on the Maximum-Likelihood Error Probability
  • Numerical Results

When the binary image of an RS code is transmitted over a channel, the performance of the maximum-likelihood decoder depends on the weight enumerator of the transmitted binary image. The performance of our algorithm was studied for both finite and infinite interpolation costs of the algebraic soft-decision decoding algorithm.

Figure 5.1: The performance of iterative ASD of (15, 11) RS code, BPSK modulated over an AWGN channel, is compared to that of other ASD algorithms and ABP-BM list decoding.

Preliminaries

We proceed in Section 6.6 to derive the average binary weight enumerators of Reed-Solomon product codes defined over finite fields of characteristic two. As in Section 3.5, the codeword error probability (CEP) and bit error probability (BEP) of the decoder are denoted by Φc(EC(h), γ) and Φb(γ).

Exact IOWE of Low-Weight Codewords

It therefore follows that the input weight of p is the product of the input weights of c and r, while the total output weight is given by (6.12). If the codes C and R have the multiplicity property and P = R × C is their product code, then the subcode of P consisting of its minimum-weight codewords also has this property.
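The structure of low-weight codewords can be checked exhaustively on a toy product code. Here both component codes are the (3, 2) binary single-parity-check code (an illustrative stand-in for the RS components): the minimum weight of the product code is the product of the component minimum distances, and the number of minimum-weight codewords is the product of the component counts.

```python
from itertools import product as iproduct

def spc_codewords(n):
    # All codewords of the (n, n-1) binary single-parity-check code
    return [w for w in iproduct((0, 1), repeat=n) if sum(w) % 2 == 0]

rows_code = spc_codewords(3)

# Product code: 3x3 arrays whose rows and columns are all SPC codewords
def product_codewords():
    cws = []
    for rows in iproduct(rows_code, repeat=3):
        if all(sum(r[j] for r in rows) % 2 == 0 for j in range(3)):
            cws.append(rows)
    return cws

cws = product_codewords()
wmin = min(sum(map(sum, c)) for c in cws if any(map(any, c)))
count = sum(1 for c in cws if sum(map(sum, c)) == wmin)
```

For this [9, 4] product code the minimum weight is 2 × 2 = 4, and the 9 = 3 × 3 minimum-weight codewords are exactly the outer products of the weight-2 component codewords.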

Average IOWE of Product Codes

  • Representing a Product Code as a Concatenated Code
  • Uniform Interleavers over F q
  • Computing the Average Enumerators

In other words, the symbols of the extended codeword are interleaved by the interleaver row by column. It is straightforward to derive the average IOWE function of the product code P from this construction.

Figure 6.1: Construction 1: Serial concatenation.

Merging Exact and Average Enumerators into Combined Enumerators

It follows from (6.20) that any exponent with a nonzero coefficient in Υ(X, Y) also has a nonzero coefficient in RkCr(X, Y), or equivalently in EkCr(Y). Similarly, every exponent with a nonzero coefficient in Υ0(X, Y) also has a nonzero coefficient in EnCr−kr.

Split Weight Enumerators of Linear Codes

  • Hamming and Simplex Codes
  • Extended Hamming and Reed-Muller Codes
  • Reed-Solomon Codes

The following theorem gives a closed-form formula for the IRWE function of Hamming codes in terms of Krawtchouk polynomials. We observe that the extended Hamming codes are the duals of the (2^m, m+1, 2^{m−1}) first-order Reed-Muller (RM) codes, for which the IRWE function can be shown to be the same.
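As a sanity check of the Krawtchouk-polynomial machinery, the sketch below computes the plain weight enumerator of a Hamming code (rather than the IRWE of the theorem) via MacWilliams and the simplex dual, whose nonzero words all have weight 2^{m−1}:

```python
from math import comb

def krawtchouk(k, x, n, q=2):
    # Krawtchouk polynomial K_k(x; n) for an alphabet of size q
    return sum((-1)**j * comb(x, j) * (q - 1)**(k - j) * comb(n - x, k - j)
               for j in range(k + 1))

def hamming_weight_enumerator(m):
    # Weight distribution of the (2^m - 1, 2^m - 1 - m) binary Hamming code
    # via MacWilliams: the dual is the simplex code, with one all-zero word
    # and n words of weight 2^(m-1), so
    #   A_w = (K_w(0) + n * K_w(2^(m-1))) / (n + 1),  K_w(0) = C(n, w)
    n = 2**m - 1
    return [(comb(n, w) + n * krawtchouk(w, 2**(m - 1), n)) // (n + 1)
            for w in range(n + 1)]
```

For m = 3 this reproduces the well-known distribution 1 + 7z^3 + 7z^4 + z^7 of the (7, 4) Hamming code.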

IRWE of Binary Images of Product Reed-Solomon Codes

The weight enumerator of the average binary image of the code, defined over finite fields of characteristic two, can be derived by assuming a binomial distribution of the bits in the nonzero symbols (cf. (2.9)). We assume that the distribution of nonzero bits in a nonzero symbol is binomial and that the nonzero symbols are independent.

Numerical Results

  • Combined Input-Output Weight Enumerators
  • Maximum-Likelihood Performance

The average binary weight enumerator of the (147, 75) binary image, obtained by applying Corollary 6.11, is shown in Figure 6.6. Using Poltyrev's bound for the BSC, hard-decision ML bounds are also plotted.

Figure 6.4: The combined weight enumerator of the (16, 11)^2 extended Hamming product code is compared with that of a random binary code of the same dimension.

Conclusion

Reed-Solomon (RS) product codes are product codes where the component codes are Reed-Solomon codes. Maximum-likelihood performance analysis of Reed-Solomon product codes for both hard-decision and soft-decision decoding shows the potential of devising improved polynomial-time algorithms for decoding them [26].

Figure 6.11: BER of some Reed-Solomon product codes over the AWGN channel.
