
1.4 Classification of ECG Compression Methods

1.4.1 Time Domain Compression Methods

1.4.1.3 Adaptive Sampling Techniques

Adaptive sampling is a technique for adjusting the sampling rate of a given source to its information rate [44, 51]: the sampling rate is allowed to change according to the activity of the source.

Because of the problems with uniform sampling of cardiac signals, the alternative of sampling at irregular intervals is intrinsically attractive. Signals are sampled more frequently during transient periods and less frequently otherwise. ECG signals are well suited to adaptive sampling since they are periodic and contain segments of both rapid and slow change. The basic difference between adaptive sampling and redundancy reduction is that in adaptive sampling the sampling rate of the original data waveform is varied, while in redundancy reduction the waveform is initially sampled at a constant rate and nonessential samples are eliminated later [44]. In adaptive sampling, an output is provided only when the data change exceeds a predetermined tolerance. Adaptive sampling has been attempted using the following algorithms: the voltage-triggered [52, 53], the two-point projection [52, 54], the second differences [48, 54], the amplitude zone time epoch coding (AZTEC) [55], the turning point (TP) [56], the coordinate reduction time encoding system (CORTES) [43], the scan-along polygonal approximation (SAPA) [57], the fan [58–61], the adaptive AZTEC [62], the slope detection algorithm [63], the corner detection [64], the modified SAPA (MSAPA) [65], the combined SAPA (CSAPA) [65], the cubic-splines and SAPA (CUSAPA) [67–69], the improved adaptive AZTEC [70], etc.

Each adaptive sampling method processes the closely spaced samples to select a smaller set of samples, called permanent samples, together with their associated time values. All methods are based on the idea of replacing the sequence of transient samples by line segments, and all use at least one parameter, called 'ERR', to specify the degree of approximation allowed in replacing the transient samples by line segments [71–73]. The permanent samples and time values produced by each method are used to reconstruct the waveform; the reconstruction uses linear interpolation between the permanent samples. These techniques are based on comparison against a specified error tolerance or on significant-point extraction. In general, a higher value of the specified error threshold results in a higher CR and lower compressed-signal quality, and vice versa [74]. An excellent review and summary of several statistical, redundancy reduction and adaptive sampling techniques can be found in [74]. Clinical evaluations of the AZTEC, TP and fan algorithms are presented in [75]. Some of the experimental results and the limitations of the techniques are presented here.
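To make the general scheme concrete, the sketch below is a minimal, hypothetical Python example (not any of the published algorithms): a sample is retained only when the signal has changed by more than a tolerance ERR since the last permanent sample, and the record is rebuilt by linear interpolation between the permanent samples. The function names, the 360 Hz rate and the synthetic signal are illustrative assumptions.

```python
import numpy as np

def adaptive_sample(x, fs, err):
    """Retain a sample only when the signal has moved by more than `err`
    since the last retained (permanent) sample.  Returns the permanent
    sample times and values."""
    t = np.arange(len(x)) / fs
    keep_t, keep_v = [t[0]], [x[0]]
    for n in range(1, len(x)):
        if abs(x[n] - keep_v[-1]) > err:
            keep_t.append(t[n])
            keep_v.append(x[n])
    # Always retain the last sample so the whole record can be rebuilt.
    if keep_t[-1] != t[-1]:
        keep_t.append(t[-1])
        keep_v.append(x[-1])
    return np.array(keep_t), np.array(keep_v)

def reconstruct(keep_t, keep_v, fs, n_samples):
    """Rebuild a uniformly sampled signal by linear interpolation
    between the permanent samples, as described in the text."""
    t = np.arange(n_samples) / fs
    return np.interp(t, keep_t, keep_v)

# Synthetic example at an assumed sampling rate of 360 Hz: the faster
# component forces dense sampling, the slow component does not.
fs = 360
t = np.arange(2 * fs) / fs
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
kt, kv = adaptive_sample(sig, fs, err=0.05)
rec = reconstruct(kt, kv, fs, len(sig))
print(len(sig), "samples reduced to", len(kv), "permanent samples")
```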

Several researchers have compared the performance of different adaptive sampling algorithms on previously measured cardiac signals. The step (voltage-triggered), two-point projection and fan methods have been applied to normal, abnormal and noisy ECG signals [52]. The AZTEC [55] decomposes raw ECG samples into plateaus and slopes. AZTEC provides a significant compression ratio, but the reconstructed ECG signal is unacceptable for accurate analysis by the cardiologist because AZTEC introduces significant step-like discontinuities and distortion in the reconstruction of the P and T waves, owing to their slowly varying slopes [74]. The reconstructed signals therefore need postprocessing to produce smoothed signals; a least-squares (parabolic filter) technique produces a smoothed signal by finding the best polynomial fit to each set of seven sample points in the reconstructed signal [43]. This smoothing produces a new waveform with reduced noise and no discontinuities, but it reduces the amplitudes of QRS peaks and valleys. The amplitude information is very important in some diagnoses, such as ventricular hypertrophy. At compression ratios of 4:1 and 5:1, the AZTEC technique reproduces the QRS configuration fairly well; reproduction of ST/T complexes is less satisfactory, making the calculation of depression less accurate [75]. The TP algorithm [56] is based on the notion that ECG signals are normally oversampled at a rate four or five times higher than the highest frequency component present. Thus, the TP algorithm is used to reduce the sampling frequency of an ECG signal from 200 Hz to 100 Hz without diminishing the elevation of large-amplitude QRS complexes. The TP method is simple but introduces short-term time distortion (local time shifts) because the retained samples are unequally spaced in time. The CORTES algorithm [43] is a hybrid of the TP and AZTEC algorithms. It employs the idea of AZTEC to discard clinically insignificant samples in the isoelectric region with a high CR, and then applies the TP method to reduce the amount of data in the clinically significant high-frequency regions [74]. CORTES provides nearly as much data reduction as AZTEC with approximately the same small reconstruction error as TP. Parabolic smoothing is applied only to the AZTEC portion of the CORTES signal to eliminate the step-like distortion. The adaptive AZTEC algorithm [62] is a modification of the AZTEC technique, extended with several statistical parameters used to calculate a variable threshold. This algorithm calculates the statistical parameters of the ECG signal (mean value (µ), standard deviation (σ) and third moment (M)), which are used on-line. Experimental results show that noisy signals compressed by the adaptive AZTEC algorithm [62] are reconstructed as well as noise-free signals, but the compression ratio is poor and depends strongly on the noise level. The corner detection algorithm [64] is an efficient algorithm that locates significant samples and at the same time encodes the linear segments between them using linear interpolation. It is used for real-time ECG compression, and the results are compared with the AZTEC algorithm. The performance evaluation shows that, at the same bit rate, a considerable improvement in signal-to-noise ratio (SNR) and root mean square error (RMSE) can be achieved by employing the corner detection algorithm. The improved adaptive AZTEC [70] is presented to enhance the effectiveness of the adaptive AZTEC technique, and a comparative study is carried out on the fidelity of the reconstructed signals using smoothing filters. The compression results of the adaptive AZTEC are improved by incorporating the following two steps:
i) For the evaluation of the statistical parameters (the mean, the standard deviation and the third moment) for the next compression cycle, the Xmax and Xmin values are initialized to the first sample of the segment under consideration after each plateau or slope.

ii) The statistical parameters of previous segments are not considered in the evaluation of the parameters for the next compression cycle.

Least-squares polynomial smoothing filters are employed to produce a smooth reconstruction; however, the smoothing may introduce amplitude distortion.
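Of the methods discussed above, the turning-point rule is the simplest to state precisely. The sketch below is a minimal Python reading of that rule as summarized here (process incoming samples in pairs and retain the one that preserves a slope sign change, halving the data); it is an illustrative sketch, not the published implementation, and the helper names and test signal are invented.

```python
import numpy as np

def turning_point(x):
    """2:1 turning-point (TP) style reduction: from each incoming pair
    (x1, x2), keep x1 if it is a turning point (slope sign change) with
    respect to the last saved sample, otherwise keep x2."""
    sign = lambda v: 1 if v >= 0 else -1   # treat flat segments as non-negative
    saved = [x[0]]
    x0 = x[0]
    i = 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        kept = x1 if sign(x1 - x0) * sign(x2 - x1) < 0 else x2
        saved.append(kept)
        x0 = kept
        i += 2
    return np.array(saved)

# A 200 Hz record is reduced to an effective 100 Hz, as described above.
fs = 200
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 5 * t)
print(len(sig), "->", len(turning_point(sig)))
```

Because the retained samples are stored as if they were evenly spaced at the half rate, this sketch also shows where the short-term time distortion mentioned above comes from.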

The Fan is an adaptive sampling technique that selects samples with irregular temporal spacing, representing each waveform with the minimum number of samples required for a given maximum error or tolerance ε [58–61]. It draws the longest possible line between the starting point and the ending point such that all intermediate samples lie within the specified error ε of that line. Since the Fan technique guarantees that the error is less than or equal to the preset tolerance, it produces better signal fidelity at the same compression ratio.

Three SAPA algorithms (SAPA-1, SAPA-2 and SAPA-3) have been developed for real-time ECG compression, and their performance is compared with the AZTEC algorithm [57]. Although SAPA-3 achieves the highest compression, it tends to smooth out Q-wave details. The theoretical basis of the SAPA algorithm is that a good approximation to a curve is obtainable from piecewise linear segments lying inside the corridor formed by a preset error tolerance [65]. A major disadvantage of the SAPA-2 algorithm is that three division operations are required for calculating the slopes. This problem is later solved in the modified SAPA (MSAPA) algorithm, in which integer division is used instead of real division; it can be performed rapidly using a table-searching technique and also decreases the number of comparisons required. Although the MSAPA algorithm achieves fast and efficient data compression by approximating the original ECG signal with line segments, it occasionally loses the P wave and ST segment [65]. This is addressed in the combined SAPA (CSAPA) algorithm, which combines the MSAPA and TP algorithms. CSAPA first applies MSAPA to the ECG signal while simultaneously detecting the R wave and S point. Once the S point is detected, TP is substituted for ECG data reduction until the end of the ST segment. The reconstruction procedures of MSAPA and CSAPA are similar to that of the SAPA-2 algorithm; the decompression of CSAPA is the same as that of MSAPA except that the ST segment is recovered by a method similar to the TP algorithm. The MSAPA and CSAPA algorithms have been tested on 15 different ECG signals, including ischaemic episodes, tachycardia, inverted QRS complexes and powerline noise. A curve-smoothing technique can also be used to smooth the peak and valley points of the curve, but its disadvantage is the introduction of amplitude distortion in the ECG waveform.
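The corridor idea behind the Fan and SAPA family can be sketched compactly. The following Python fragment is an illustrative reading of the Fan as described above (tighten an upper and a lower slope drawn from the last permanent sample until a new sample falls outside the fan, then store the previous sample); it is a sketch of the principle under these assumptions, not the published Fan or SAPA-2 code.

```python
import numpy as np

def fan(x, eps):
    """Fan-style selection of permanent samples (illustrative sketch).
    From the last permanent sample, keep the most restrictive pair of
    slopes that still pass within +/-eps of every sample seen so far.
    When a new sample falls outside this fan, the previous sample is
    stored as permanent and the fan is restarted from it."""
    perm = [0]                        # indices of permanent samples
    origin = 0
    upper, lower = np.inf, -np.inf    # current fan slopes
    n = 1
    while n < len(x):
        dt = n - origin
        slope = (x[n] - x[origin]) / dt
        if lower <= slope <= upper:
            # Sample lies inside the fan: tighten the fan and continue.
            upper = min(upper, slope + eps / dt)
            lower = max(lower, slope - eps / dt)
            n += 1
        else:
            # Fan violated: close the segment at the previous sample and
            # retest the current sample against a fresh fan from there.
            perm.append(n - 1)
            origin = n - 1
            upper, lower = np.inf, -np.inf
    if perm[-1] != len(x) - 1:
        perm.append(len(x) - 1)
    return np.array(perm)

# Reconstruction uses linear interpolation between permanent samples; by
# construction every discarded sample lies within +/-eps of that line.
```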

The cubic spline is a popular method for data approximation, interpolation, curve fitting and smoothing. Spline approximation for ECG data compression was reported in [68]. Data compression is achieved with cubic splines by calculating the spline coefficients, which are later used to reconstruct the original data; the number of spline coefficients is much smaller than the number of original data points. These coefficients are computed so that the approximation error between the original and the reconstructed signal is minimized, with a least-squares norm used as the measure of closeness. The CUSAPA combines the SAPA-1 algorithm and cubic-spline piecewise polynomial approximation [67]. It applies the SAPA algorithm to the high-frequency portions of the ECG signal (i.e., the QRS complex) and the cubic-spline approximation to the lower-frequency segments (i.e., the S-Q segment). To ensure a low reconstruction error for the spline-processed segments and to achieve fast compression, an attribute grammar is developed to locate the best initial spline knot locations. These initial locations are then used to determine the optimal knot locations and the corresponding spline coefficients. The selective application of either the SAPA or the cubic-spline algorithm requires the detection and isolation of the QRS segments. The CUSAPA algorithm produces the best reconstructed signal among these techniques, with superior noise reduction [67]; its disadvantage is the extended processing time.
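As a rough illustration of the spline idea, the sketch below fits a least-squares cubic spline to a synthetic low-frequency segment and keeps only the knots and coefficients. It uses SciPy's LSQUnivariateSpline with uniformly spaced knots chosen for the example; the attribute-grammar knot placement of CUSAPA is not reproduced, and the sampling rate, signal and knot count are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# A synthetic low-frequency segment at an assumed 360 Hz sampling rate.
fs = 360
t = np.arange(fs) / fs
seg = 0.1 * np.sin(2 * np.pi * 1.0 * t) + 0.02 * np.random.randn(fs)

# 12 uniformly spaced interior knots (an arbitrary choice for this sketch).
knots = np.linspace(t[0], t[-1], 14)[1:-1]
spline = LSQUnivariateSpline(t, seg, knots, k=3)   # least-squares cubic fit

coeffs = spline.get_coeffs()          # stored instead of the raw samples
stored = len(coeffs) + len(knots)
rmse = np.sqrt(np.mean((seg - spline(t)) ** 2))
print(f"{len(seg)} samples -> {stored} stored values, RMSE = {rmse:.4f}")
```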

From the above survey of direct data compression methods, the following conclusions can be drawn.

Many adaptive sampling techniques have been reported for the reduction of the large amount of ECG data. In all of these methods, a threshold tolerance ε is applied to select the significant (permanent) samples. Some of the techniques are sensitive to the sampling rate, the quantization step size, and the noise and artifacts in the signal; moreover, the selection of the tolerance is difficult for noisy ECG signals.

Some of the methods use more than one parameter to achieve the desired data reduction, but the simultaneous selection and tuning of these parameters is difficult in real-time applications. Furthermore, in most of the methods the reconstructed signal is obtained by interpolating the stored samples, so designing an interpolation scheme that does not alter the shape and duration features of the ECG signal is a practical problem with these methods. Although the direct time-domain methods are simple to implement, they produce serious distortion at high compression ratios.
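The tolerance/distortion trade-off described above can be made concrete with a small experiment. The sketch below (a condensed version of the earlier Fan-style selection, applied to a synthetic signal) sweeps the tolerance ε and reports the resulting compression ratio together with the RMSE of the linearly interpolated reconstruction; the signal and numbers are purely illustrative.

```python
import numpy as np

def fan_select(x, eps):
    """Condensed Fan-style selection (see the earlier sketch)."""
    perm, origin, upper, lower, n = [0], 0, np.inf, -np.inf, 1
    while n < len(x):
        dt = n - origin
        slope = (x[n] - x[origin]) / dt
        if lower <= slope <= upper:
            upper = min(upper, slope + eps / dt)
            lower = max(lower, slope - eps / dt)
            n += 1
        else:
            perm.append(n - 1)
            origin, upper, lower = n - 1, np.inf, -np.inf
    if perm[-1] != len(x) - 1:
        perm.append(len(x) - 1)
    return np.array(perm)

fs = 360
t = np.arange(2 * fs) / fs
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)

for eps in (0.01, 0.05, 0.1, 0.2):                 # increasing tolerance
    idx = fan_select(sig, eps)
    rec = np.interp(np.arange(len(sig)), idx, sig[idx])
    cr = len(sig) / len(idx)
    rmse = np.sqrt(np.mean((sig - rec) ** 2))
    print(f"eps = {eps}  CR = {cr:.1f}:1  RMSE = {rmse:.4f}")
```

As expected, a looser tolerance raises the compression ratio at the cost of a larger reconstruction error, which is the trade-off noted throughout this section.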