
3.3 Some Results Derived using the Basic ANN Configuration

3.3.3 ANN-based channel estimation

Principles governing training of the ANN have been included in Section 2. Tables 3.5 and 3.6 show the data sets used to train an ANN for four different path delays and four different frequency-selective paths of Rayleigh and Rician faded channels generated using the Clarke-Gans model [53]. The size of the data involved in the training is not very large, hence a careful selection of training sessions is an important criterion to prevent overtraining of the ANN and thereby ensure optimum performance. If the ANN is overtrained, it will fail to model the channel in general; rather, it might memorize a few coefficients with higher probability. The number of training epochs required is therefore limited to a few hundred, depending upon the size of the data considered. For

Figure 3.3: A few samples of channel path gains used for training the MLP (four panels; path gain vs. sample index, 0-20)

Figure 3.4: Channel path gains generated by the MLP after a few hundred training sessions when used with 64-bit data blocks

64-bit data blocks, the training epochs are within a few hundred, but for larger data sizes the training is extended to a few thousand epochs to enable the ANN to learn the patterns appropriately. Figure 3.3 shows a set of four channel path-gain samples used with signals for training the MLP. At the end of around 150-200 sessions (averaged over ten trials), the MLP estimates a set of channel coefficients whose path gains are as shown in Figure 3.4.
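As an illustrative sketch only (not the exact generator used in this work), path-gain samples of the kind shown in Figure 3.3 can be produced with a sum-of-sinusoids approximation of Clarke's model; the function and parameter names below are assumptions, and the normalized Doppler value is arbitrary.

```python
import numpy as np

def clarke_rayleigh_gains(n_samples, n_paths=4, fd_ts=0.01,
                          n_sinusoids=32, seed=0):
    """Generate |h| path-gain samples for a frequency-selective Rayleigh
    channel via a sum-of-sinusoids approximation of Clarke's model.
    fd_ts is the normalized Doppler (Doppler frequency x sample period)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples)
    gains = np.zeros((n_paths, n_samples))
    for p in range(n_paths):
        # each scatterer arrives from a random angle with random phases
        theta = rng.uniform(0, 2 * np.pi, n_sinusoids)
        phi_i = rng.uniform(0, 2 * np.pi, n_sinusoids)
        phi_q = rng.uniform(0, 2 * np.pi, n_sinusoids)
        doppler = 2 * np.pi * fd_ts * np.cos(theta)
        # in-phase and quadrature components summed over scatterers
        h_i = np.cos(np.outer(t, doppler) + phi_i).sum(axis=1)
        h_q = np.sin(np.outer(t, doppler) + phi_q).sum(axis=1)
        h = (h_i + 1j * h_q) / np.sqrt(n_sinusoids)
        gains[p] = np.abs(h)
    return gains

gains = clarke_rayleigh_gains(n_samples=21)  # sample index 0..20 as in Fig. 3.3
print(gains.shape)  # (4, 21)
```

The filtered-Gaussian (spectrum-shaping) form of the Clarke-Gans method cited in the text would produce statistically similar envelopes; the sum-of-sinusoids variant is used here only because it is compact.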

These patterns show that with training up to a few hundred sessions, the MLP is able to track the channel coefficients properly. But when the training is extended to a few thousand sessions (1000-1300 over ten trials), the proper tracking that the MLP showed in the previous

Figure 3.5: Channel path gains generated by the MLP with extended training up to a few thousand sessions when used with 64-bit data blocks

case is not observed. Instead, the MLP captures a few variations with greater probability, memorizes them, and reproduces them in response to any set of inputs applied to it (Figure 3.5).

This is a case of overtraining, with the MLP failing to generalize.
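The overtraining described above is commonly avoided by capping the epoch count and monitoring a held-out error; a minimal sketch, in which the function names, the patience value and the toy error sequence are all illustrative assumptions rather than the procedure actually used:

```python
def train_with_early_stopping(train_step, val_mse, max_epochs=400,
                              target_mse=6e-5, patience=3):
    """Stop when the validation MSE reaches the target or stops improving,
    so the network cannot keep memorizing the training coefficients."""
    best = float("inf")
    stale = 0
    for epoch in range(max_epochs):
        train_step()          # one pass of backpropagation (stubbed here)
        mse = val_mse()       # error on held-out samples
        if mse < best - 1e-12:
            best, stale = mse, 0
        else:
            stale += 1        # no improvement this epoch
        if mse <= target_mse or stale >= patience:
            break
    return epoch + 1, best

# toy demo: validation error improves for four epochs, then plateaus
errors = iter([0.5, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2])
epochs_run, best = train_with_early_stopping(lambda: None, lambda: next(errors))
print(epochs_run, best)  # 7 0.2
```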

The performance achieved during these epochs is noted as follows: the ANN training considers the MSE convergence and the precision generated in channel estimation, symbol recovery and the BER calculation. If the MSE has converged to the fixed target value, the precision levels and the associated BER values are determined. If the values fall within the desired levels, the training is extended to include more samples representing varying channel conditions. The iterations are confined to only a few hundred, during which a minimum MSE convergence of 0.006 × 10⁻² has been attained using the GDMALRBP training method. The ANN trained following these considerations is taken for performing the channel estimation. The MSE values attained using the different methods during training are shown in Table 3.7. The MSE obtained using GDMALRBP is the lowest within the given training sessions, but the values generated by the other training methods are comparable too. Table 3.8 shows the precision performance of a one-hidden-layer MLP trained with the four mentioned training methods. The highest precision is around 92.5%, attained by the GDMALRBP-based training. After the training aspects are properly addressed, a validation process is carried out, followed by testing. Out of the total data, about 30% are used for training, 15% for validation and the rest for testing. The greatest strength of the ANN in such applications is the opportunity the system derives in extending the performance domain by adopting a better configuration and allowing an increased number of sessions to continue
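The 30% / 15% / 55% partition mentioned above can be sketched as follows; the helper name and the shuffling scheme are hypothetical, introduced only to make the proportions concrete.

```python
import numpy as np

def split_data(x, y, train=0.30, val=0.15, seed=0):
    """Shuffle the samples and split them 30% training / 15% validation /
    55% testing, matching the proportions stated in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_tr = int(train * len(x))
    n_val = int(val * len(x))
    # np.split cuts the shuffled index array at the two boundaries
    tr, va, te = np.split(idx, [n_tr, n_tr + n_val])
    return (x[tr], y[tr]), (x[va], y[va]), (x[te], y[te])

x = np.arange(200).reshape(100, 2)  # 100 dummy two-feature samples
y = np.arange(100)
(tr, _), (va, _), (te, _) = split_data(x, y)
print(len(tr), len(va), len(te))  # 30 15 55
```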

Table 3.7: MSE attained during training by a one-hidden-layer MLP with a learning rate of 0.4

MSE attained with four different training methods

Sessions   GDBP          GDMBP         GDALBP        GDMALRBP
100        4.1 × 10⁻²    3.1 × 10⁻²    2.6 × 10⁻²    1.23 × 10⁻²
200        0.42 × 10⁻²   0.31 × 10⁻²   0.21 × 10⁻²   0.12 × 10⁻²
300        0.21 × 10⁻²   0.07 × 10⁻²   0.04 × 10⁻²   0.03 × 10⁻²
400        0.02 × 10⁻²   0.01 × 10⁻²   0.01 × 10⁻²   0.006 × 10⁻²

Table 3.8: Precision performance (%) of channel estimation during training by a one-hidden-layer MLP trained with four different training methods with a learning rate of 0.4

Precision (%) for different training methods

Sessions   GDBP   GDMBP   GDALBP   GDMALRBP
1000       76.1   79.3    83.3     84.2
2000       77.9   81.2    84.2     85.7
3000       79.1   82.3    86.6     89.4
4000       81.9   84.4    88.6     92.5

the learning till the desired performance levels are attained. The learning patterns, and thereby the performance of the ANN, vary with the training method. The faster the learning, the greater the chance of the ANN falling into a local trap where the convergence curve oscillates around a local minimum. No such problems have been observed in the present case with any of the four training methods. A set of learning curves of the MLPs during training is shown in Figure 3.6 (a)-(d). A BER plot (Figure 3.7) is generated using training and testing samples to ascertain the performance of the system designed to perform channel estimation. The MLP trained for a few hundred sessions tracks the variations in the input samples. The performance difference between the training and the testing samples is less than 10%, despite the fact that the sample variation is around 15-20%. Another set of test samples is used to generate a BER plot (Figure 3.8), which is compared with an estimator with perfect channel state information (CSI). The perfect-CSI state is given to an estimator by training it with a set of BER values generated for the considered MIMO set-up and fading conditions with a priori knowledge derived from the theoretical limits. A few results are derived for the statistical channel estimation methods as well. The results shown are derived from about fifty trials with a host of variations including SNR and background noise. The MLP-based system is found to be resilient enough in dealing with multipath variations observed in MIMO channels. Figure 3.9 shows a comparative plot between the LS, MMSE and MLP based methods of BER value

Figure 3.6: ANN learning curves (MSE convergence vs. epoch, 200 epochs) for four MIMO-OFDM channels with (a) GDALRBP, (b) GDMBP, (c) GDBP, (d) GDALRMBP

Figure 3.7: BER vs SNR plot of the estimated channel (4×4 MIMO using OFDM) obtained with the MLP during training and testing, compared to an estimator with perfect CSI

calculation related to MIMO channel estimation. The MLP-based values show better results, which can be attributed to the fact that such a system has used TCRSI better

Figure 3.8: BER vs SNR plot of the estimated channel (4×4 MIMO using OFDM) obtained with the LS, MMSE and MLP methods, compared to an estimator with perfect CSI

than the conventional methods. In all the simulation cases, results are shown considering the channel to be Rayleigh. Since the results derived from the Rician channel are close to those obtained for the Rayleigh case, they are not shown.
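For reference, the two statistical baselines compared above admit a compact per-subcarrier sketch. The pilot values, SNR, subcarrier count and the uncorrelated unit-power channel assumption (which collapses the MMSE correlation matrix to a scalar Wiener weight) are all illustrative choices, not parameters from this work.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                   # pilot subcarriers (assumed)
# unit-power Rayleigh channel taps
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
x = np.ones(n)                           # known BPSK pilots, all +1 (assumed)
snr_db = 10
sigma2 = 10 ** (-snr_db / 10)            # noise power for unit signal power
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                               + 1j * rng.standard_normal(n))
y = h * x + noise                        # received pilots

# LS: per-subcarrier division by the known pilot symbol
h_ls = y / x

# MMSE: with uncorrelated unit-power taps (R_hh = I), the estimator
# reduces to a per-tap Wiener scaling of the LS estimate
h_mmse = h_ls / (1 + sigma2)

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_mmse = np.mean(np.abs(h_mmse - h) ** 2)
print(round(mse_ls, 4), round(mse_mmse, 4))
```

On average the MMSE shrinkage cannot do worse than LS (its expected error is sigma2/(1+sigma2) versus sigma2), which is consistent with the ordering of the curves in Figure 3.8.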

3.3.4 Limitation of MLP based symbol recovery and channel es-