
1.4 Classification of ECG Compression Methods

1.4.5 Quality Controlled ECG Compression Methods

The main goal of any compression method is to control either the data rate or the quality of the compressed signal by automatically adjusting the coding parameters. If the quality of a compressed ECG signal is not guaranteed, the compression process itself becomes less useful [129]. Since signal quality is indispensable for diagnosis, some ECG compression methods attempt to control the compressed quality using a squared-error distortion criterion. In this section, we discuss different approaches to quality controlled ECG compression and illustrate the advantages and disadvantages of those approaches.

Nagarajan et al. [127] selected the best wavelet packet basis that minimizes the number of bits needed to represent the ECG, subject to the constraint that the PRD always stays within an a priori bound PRDb. Such a constrained minimization problem belongs to rate-distortion theory. The bound is based on the initial performance of the compression method and could be defined by the clinician after correlating the information contained in the compressed signal with the resulting PRD. However, ensemble averages do not ensure the capture of clinically relevant features in the PQ, QRS and ST portions, since each portion makes a widely differing contribution to the variance [97].

Blanchett et al. [97] presented a KLT-based quality controlled compression (KLT+QCC) of the ECG, in which a beat-by-beat quality control criterion is shown to be necessary to ensure clinically adequate reconstruction of each beat. The KLT+QCC method employs the relative mean square error (rMSE) as a quality measure, and the number of retained transform coefficients is chosen such that the rMSE for each block (PQ, QRS and ST) is less than or equal to a prespecified tolerance. Noise filtering occurs during the sampling-rate conversion of the PQ, QRS and ST blocks; thus, the quality control criterion may yield a lower CR over noisy PQ, QRS and ST blocks.

Chen and Itoh [129] presented a compression method guaranteeing desired signal quality (GDQ) based on the discrete orthonormal wavelet transform (DOWT) and an adaptive quantization strategy, by which a user-specified PRD of the reproduced ECG segments is guaranteed at a high compression ratio (CR). The authors observed that if the same PRD value is specified for a noiseless block and a noisy one, the CR obtained for the noisy block will be lower, because a smaller quantization bin size is used for the noisy signal, spending extra bits on approximating the noise to the specified accuracy. Although a noise elimination process is applied prior to compression, the filtering algorithm may degrade the quality of the signal, and there is a practical difficulty in measuring the quality of the denoised signal.
Moreover, the PRD and similar error measures have many disadvantages, which result in poor diagnostic relevance [83]. In [129] a weighted PRD is suggested to characterize the distortion of the selected local waves. However, an open problem with this criterion is how to determine the weights for each local wave, viz. P, Q, QRS, etc.
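
For concreteness, the squared-error measures discussed above can be computed as follows (a minimal numpy sketch; the function names are ours, and PRD1 denotes the zero-mean variant referred to later in this section):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage RMS difference between original x and reconstruction x_rec."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prd1(x, x_rec):
    """PRD normalized by the zero-mean original signal block (PRD1)."""
    x0 = x - np.mean(x)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x0 ** 2))

def rmse_rel(x, x_rec):
    """Relative mean square error (rMSE), as used in the KLT+QCC method."""
    return np.sum((x - x_rec) ** 2) / np.sum(x ** 2)
```

Because these measures pool the error over the whole block, a beat with a small global PRD can still have a large localized error in a diagnostically important wave, which is the weakness the weighted variants try to address.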

Zigel et al. [83] presented an analysis-by-synthesis ECG compressor (ASEC) based on analysis-by-synthesis coding, consisting of a beat codebook, long- and short-term predictors, and an adaptive residual quantizer. The algorithm uses a weighted diagnostic distortion (WDD) measure in order to encode every heartbeat efficiently, with minimum bit rate, while maintaining a predetermined distortion level. However, although the WDD is claimed to be better matched to diagnostic distortion than the PRD, it requires complex parameter extraction to calculate, as is done within the ASEC procedure [145], [151].

Miaou and Yen proposed a quality-driven gold-washing adaptive vector quantization (QDGW+AVQ) scheme for wavelet-based ECG compression [151], in which multiple distortion thresholds (dth's) are automatically adjusted according to a user-specified reconstruction quality (desired PRD). Later, Miaou and Lin proposed a wavelet-based quality-on-demand algorithm for ECG compression [146] using set partitioning in hierarchical trees (SPIHT), in which the resulting PRD falls within the preset bound of the desired PRD. Note that the PRD may not be a good measure of the clinical acceptability of the compressed signal [151]. Moreover, Lu et al. ([145], 2000) observed, in their evaluation of compressed-signal quality, that the chief effect of wavelet compression is the smoothing of low-level background noise: although the compression error of SPIHT (in PRD) is larger, the clinical features of the compressed signal appear to be faithfully preserved. Blanco et al. developed a filter-bank based algorithm for ECG compression with retrieved quality guaranteed (RQG), where the threshold is chosen so that the quality of the retrieved signal is guaranteed. A target PRD (PRDtarget) is established a priori to control the quality of the reconstruction.

Tests are carried out using noisy ECG records from the MIT-BIH arrhythmia database, with the PRD criterion used to evaluate the quality of the retrieved signal. In RQG [121], following Ref. [83], it is assumed that for this database a compressed signal with a PRD1 value (computed with the zero-mean original signal block) under 9% represents a good, or very good, result, whereas if the value is greater than 9% its quality group cannot be determined. One can argue that these assumptions are not valid under global error criteria, since the noise-filtering capability and the distortion of the local waves differ across lossy compression methods. Thus, it should be noted that optimum thresholding, where the threshold value is obtained from the noise characteristics, can provide better rate-distortion performance [125].

Alesanco et al. [139] presented a real-time ECG coding scheme compatible with packetized telecardiology applications. The compression scheme comprises a preprocessing stage (baseline removal, QRS detection, beat segmentation and noise measurement), template matching, wavelet transform, coefficient selection based on rate or RMS error, adaptive pulse code modulation and Huffman coding. The threshold is selected according to the noise power estimated in the preprocessing stage: the noise power of each beat is estimated as the power remaining after high-pass filtering (cutoff frequency 20 Hz) in the repolarization interval. Automatically updating the threshold according to the measured noise level avoids spending bit rate on coding noise. However, measuring the noise power of ECG beats with time-varying characteristics may be difficult using a high-pass filter, because a clinical environment contains many noise sources with different frequency characteristics, such as power-line interference, muscle contraction noise, poor electrode contact and baseline wandering due to respiration. Later, Alesanco and García [149] presented a simple method for guaranteeing ECG quality in real-time wavelet lossy coding based on the SPIHT algorithm and the WWPRD measure. The effect of ECG baseline artifacts on compression performance is discussed, and the artifacts are removed before the compression phase. However, a noise-measurement criterion may be difficult to incorporate into the well-designed SPIHT strategy, which codes the wavelet coefficients by exploiting the redundancies among wavelet subbands; thus, noise degrades the compression performance of the SPIHT coder. The authors conclude [149] that, in order to avoid the effect of noise coding, an adaptive approach that varies the threshold should be used. However, this may not be a remedy for a quality controlled SPIHT coding strategy.
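
The per-beat noise estimate used in [139] can be sketched roughly as follows (the sampling rate, the filter order and the caller-supplied repolarization window are our assumptions, not details specified by the source):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def beat_noise_power(beat, repol, fs=360.0, fc=20.0):
    """Estimate a beat's noise power as the power remaining after
    high-pass filtering (cutoff fc Hz), measured over the
    repolarization interval given by the slice `repol`."""
    b, a = butter(4, fc / (fs / 2.0), btype='highpass')
    residual = filtfilt(b, a, beat)       # keep only content above fc
    return float(np.mean(residual[repol] ** 2))
```

The estimate is only as good as the assumption that everything above 20 Hz in the repolarization interval is noise, which is exactly where the objection in the text applies: noise sources such as baseline wander lie mostly below that cutoff and are invisible to this measure.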

Blanco et al. [142] developed a simple wavelet packets feasibility study for the design of an ECG compressor (WPFDEC) scheme, adopting the same thresholding and entropy coding schemes and using the PRD as the target to terminate compression. The compression performance of the WPFDEC is compared with that of the SPIHT and filter-bank methods. Benzid et al. [143] proposed an ECG compression method based on adaptive wavelet coefficient quantization combined with a modified two-role encoder (ALQ-TRE). The resulting wavelet coefficients are thresholded iteratively, using the iterative algorithm given in [121], until a user-specified PRD is matched. The authors compared this method with the filter-bank [121] and SPIHT [145] based methods and concluded that the ALQ-TRE method performs better than both.
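
The iterative PRD-matching idea of [121] and [143] can be sketched as a bisection search over a hard threshold (illustrative only: a plain DCT with zeroing of sub-threshold coefficients stands in here for the actual transform, quantizer and encoder of those methods):

```python
import numpy as np
from scipy.fft import dct, idct

def prd(x, x_rec):
    """Percentage RMS difference."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def threshold_for_target_prd(x, prd_target, iters=50):
    """Bisect on a hard threshold applied to transform coefficients until
    the reconstruction PRD sits just below the user-specified target."""
    c = dct(x, norm='ortho')
    lo, hi = 0.0, float(np.max(np.abs(c)))
    best = x.copy()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        c_t = np.where(np.abs(c) >= t, c, 0.0)   # discard small coefficients
        x_rec = idct(c_t, norm='ortho')
        if prd(x, x_rec) <= prd_target:
            best, lo = x_rec, t    # quality met: try a larger threshold
        else:
            hi = t                 # too much distortion: lower the threshold
    return best
```

Since the PRD grows monotonically with the threshold, the search converges to the largest threshold (hence the fewest retained coefficients) that still meets the quality bound; this is also why a noisy record forces a smaller threshold and a lower CR, as noted above for the GDQ method.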

Benzid et al. ([111], 2008) proposed a constrained ECG compression algorithm using the block-based discrete cosine transform (DCT). It employs the block-based DCT, adaptive quantization, a look-up table and arithmetic coding. The iterative algorithm, similar to that of [121], uses the PRD as the controlled quality criterion. The experiments were carried out using 2 min of data (each) from the MIT-BIH records 100, 101, 102, 103, 107, 109, 111, 115, 117, 118 and 119. The authors reported that the method achieves higher compression performance than the filter-bank [121] and SPIHT [145] based methods.

Many ECG compression methods show and illustrate the effect of smoothing of background noise [126, 134, 145, 158, 160]. A well-designed compression method may quickly discard the wavelet coefficients corresponding to high-frequency noise, which dominates the higher subbands. Thus, the effect of noise removal, i.e., the amount of noise removed by compression filtering, may not be the same for every method and test record. Under these circumstances, comparisons of compression methods or compression results based on the PRD measure are not meaningful for real-time applications; indeed, the employment of the PRD measure in evaluating ECG compression methods has been argued to have no practical value [74]. These facts have motivated a great deal of research on objective quality measures based on weighted distortion criteria.

Recently, a wavelet-based weighted PRD (WWPRD) measure [190] has been reported that provides a local, or subband, error estimation, with weights equal to the normalized absolute sum of the wavelet coefficients in the corresponding subbands. However, the local PRD values of insignificant errors in the higher subbands can dominate the global value, while the local PRD values of significant errors in other subbands may contribute little to it. This may lead to confusion in judging the quality of the compressed signal.
Thus, the WWPRD criterion does not provide an optimal set of coding parameters in a rate-distortion optimization procedure for a given quality specification. The effectiveness of these quality criteria is investigated thoroughly, and their disadvantages are described, in the next chapter.
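
The WWPRD weighting described above can be sketched as follows (a plain multilevel Haar decomposition, implemented inline to keep the example self-contained, stands in for the wavelet decomposition of [190]; the decomposition depth is our assumption):

```python
import numpy as np

def haar_dwt(x, levels):
    """Multilevel Haar DWT; returns subbands [approx, detail_L, ..., detail_1]."""
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        pairs = a[:len(a) // 2 * 2].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # detail band
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)              # approximation
    return [a] + details[::-1]

def wwprd(x, x_rec, levels=4):
    """Wavelet-based weighted PRD: per-subband PRDs combined with weights
    equal to the normalized absolute sums of the original subband coefficients."""
    cx, cr = haar_dwt(x, levels), haar_dwt(x_rec, levels)
    abs_sums = np.array([np.sum(np.abs(c)) for c in cx])
    weights = abs_sums / np.sum(abs_sums)
    sub_prd = np.array([100.0 * np.sqrt(np.sum((a - b) ** 2) / np.sum(a ** 2))
                        for a, b in zip(cx, cr)])
    return float(np.sum(weights * sub_prd))
```

The sketch also makes the objection above concrete: because each subband's PRD is normalized by that subband's own energy, a low-energy detail band carrying only insignificant error can still report a large local PRD and pull the weighted sum upward.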