1.4 Classification of ECG Compression Methods
1.4.1 Time Domain Compression Methods
1.4.1.4 Parameter Extraction Compression Techniques
noise and artifacts in the signal. However, the selection of the tolerance is difficult for noisy ECG signals.
Some of the methods use more than one parameter to achieve the desired data reduction, but the simultaneous selection and tuning of these parameters is difficult in real-time applications. Furthermore, in most of the methods the reconstructed signal is obtained by interpolating the stored samples, so designing an interpolation scheme that does not alter the shape and duration features of the ECG signal is a practical problem in these methods. Although the direct time-domain methods are simple to implement, they produce serious distortion at high compression ratios.
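As an illustration only (the helper names below are not taken from any of the cited methods), the following sketch reconstructs a segment by linear interpolation of the retained (sample index, amplitude) pairs; a practical scheme must additionally preserve the shape and duration features discussed above.

```python
# A minimal sketch of reconstruction from the samples retained by a direct
# time-domain method, using linear interpolation between the stored points.
import numpy as np

def reconstruct_by_interpolation(kept_idx, kept_amp, n_samples):
    """Rebuild an ECG segment of length n_samples from retained samples."""
    t = np.arange(n_samples)
    # np.interp performs piecewise-linear interpolation between kept points
    return np.interp(t, kept_idx, kept_amp)

# Example: 5 retained samples of a 10-sample segment (illustrative values)
kept_idx = np.array([0, 2, 5, 7, 9])
kept_amp = np.array([0.0, 0.4, 1.2, 0.3, 0.1])
x_hat = reconstruct_by_interpolation(kept_idx, kept_amp, 10)
```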
are also used to exploit the intra-channel correlation in the multichannel case. This correlation consists of both intra-beat and inter-beat correlation. An ECG signal compression system based on VQ schemes has two parts: the encoder and the decoder. First, the codebook is generated by a training procedure, and it can be updated according to the input vectors. This codebook is available to both the encoder and the decoder. For a given input vector, the encoder finds a representative vector in the codebook such that the distortion is as small as possible. Each codevector is associated with a unique index, and the index, rather than the codevector itself, is transmitted during the encoding process. A binary codeword (Huffman codeword) is assigned to each permissible vector template, which is then transmitted. To obtain the reconstructed signal at the decoder, the index serves as a pointer to the corresponding vector in the codebook. The goal of VQ methods is to produce the best possible reproduction vector for a given rate R with minimal distortion.
However, a distortion criterion, which represents a distance or error between the input vector and the codevector, should be subjectively relevant. The choice of distortion criterion permits us to quantify the performance of a VQ in a manner that can be computed and used in analysis and design optimization. For simplicity, the squared-error distortion criterion is employed in most of the VQ-based methods.
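As a simple illustration of the encoder/decoder structure described above, the following sketch performs VQ encoding and decoding under the squared-error criterion; the codebook here is random and purely illustrative, not one obtained from the training procedures of the cited works.

```python
# A minimal sketch of VQ encoding/decoding with a shared codebook and the
# squared-error distortion criterion; only the index is transmitted.
import numpy as np

def vq_encode(x, codebook):
    """Return the index of the codevector with minimum squared error to x."""
    dist = np.sum((codebook - x) ** 2, axis=1)   # squared-error distortion
    return int(np.argmin(dist))

def vq_decode(index, codebook):
    """The index serves as a pointer into the (shared) codebook."""
    return codebook[index]

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 64))        # 256 codevectors of dimension 64
x = rng.standard_normal(64)                      # one input block/beat vector
idx = vq_encode(x, codebook)                     # only idx is transmitted (8 bits here)
x_hat = vq_decode(idx, codebook)
```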
3) Template matching (or beat subtraction) methods: These methods include ECG beat segmentation, beat normalization, beat matching and subtraction, and coding of the residual signal. The residual signal is obtained by subtracting the average beat (or a beat from the template database) from the detected ECG beat. Run-length and Huffman coding schemes are employed for the residual data or the differenced residual data and for the period of the ECG complex [76, 80, 84]. Note that the compression performance of these methods depends on the repetition of the ECG beats, the detection of the beat endpoints and the updating of the beat templates.
The residual template pattern library is synchronously updated in both the encoder and the decoder via a simple updating rule [84]. Note that, in the real-time case, the ECG signal can be either regular or irregular [83]. Changes in the QRS complex can be handled efficiently in these compression systems.
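The following sketch illustrates the beat-subtraction idea with run-length coding of a coarsely quantized residual; the quantizer step and the toy template are assumptions made for illustration, not the exact procedures of [76, 80, 84].

```python
# A minimal sketch of beat subtraction followed by run-length coding of the
# quantized residual signal.
import numpy as np

def beat_residual(beat, template, step=8):
    """Subtract the template beat and coarsely quantize the residual."""
    return np.round((beat - template) / step).astype(int)

def run_length_encode(residual):
    """Encode the residual as (value, run length) pairs."""
    runs, i = [], 0
    while i < len(residual):
        j = i
        while j + 1 < len(residual) and residual[j + 1] == residual[i]:
            j += 1
        runs.append((int(residual[i]), j - i + 1))
        i = j + 1
    return runs

# Mostly-zero residuals compress well when the current beat matches its template
template = np.zeros(16)
beat = template.copy()
beat[5:8] += 20                       # small morphology change around the QRS
print(run_length_encode(beat_residual(beat, template)))
```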
Short-term prediction (STP) and long-term prediction (LTP) algorithms [79] are employed for compression of ECG signals. The STP and LTP algorithms exploit the intra-beat correlation and the inter-beat correlation, respectively. The complexity of the algorithm is dominated by the detection of ECG features, the template matching [84] and the encoding of the residual by a VQ. Zigel et al. [83] introduced the analysis by synthesis ECG compressor (ASEC), which uses the weighted diagnostic distortion (WDD) measure [81] in order to encode every heartbeat efficiently, with minimum bit rate, while maintaining a predetermined distortion level. The MIT-BIH arrhythmia database was used to evaluate the compression algorithm, and the results were compared with other known compression methods such as the AZTEC algorithm [55], SAPA-2 [57] and LTP [80]. A mean compression rate of approximately 100 bps (a CR of about 30:1) was achieved with good reconstructed signal quality (WDD below 4% and PRD below 9%). Two types of tests were performed for the evaluation: a quantitative test (the PRD and the WDD) and a qualitative test (a mean opinion score (MOS) given by cardiologists), which included blind and semi-blind tests. An accurate QRS detector that is invariant to different noise sources and varying morphologies is an important issue in these methods. Predictive coding operates by coding an error term formed as the difference between the current sample and its prediction. These techniques are not optimal, since the processed samples remain somewhat correlated or dependent, and their performance depends mainly on accurate beat segmentation and detection of the endpoints of the ECG beats.
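The following sketch illustrates the predictive-coding principle with a first-order short-term predictor (DPCM-style); the prediction order and quantizer step are illustrative assumptions rather than the STP/LTP configurations of [79].

```python
# A minimal sketch of predictive coding: each sample is predicted from the
# previous reconstructed sample and only the prediction error is coded.
import numpy as np

def stp_encode(x, step=4):
    """First-order predictor: quantized error between sample and prediction."""
    errors, prev = [], 0
    for sample in x:
        e = int(round((sample - prev) / step))   # quantized prediction error
        errors.append(e)
        prev = prev + e * step                   # track the decoder's reconstruction
    return errors

def stp_decode(errors, step=4):
    out, prev = [], 0
    for e in errors:
        prev = prev + e * step
        out.append(prev)
    return np.array(out)

x = np.array([0, 3, 9, 20, 38, 30, 12, 4])       # toy ECG samples
print(stp_decode(stp_encode(x)))                 # approximate reconstruction
```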
VQ algorithms are widely employed for compression of image and speech signals [40, 41]. They have been extended to ECG signal compression, where VQ is applied either directly to the samples or to their transform coefficients. A novel ECG signal compression scheme based on VQ coding of a feature vector is reported in [85, 86]. In the mean-shape vector quantization (MSVQ) and gain-shape vector quantization (GSVQ) based methods [87, 89], each block is preprocessed by subtracting its mean value and normalizing by its dynamic range. The shape and the mean value are then coded using VQ and scalar quantization, respectively.
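The block preprocessing step can be sketched as follows; the dynamic-range definition and helper names are assumptions made for illustration, not the exact MSVQ/GSVQ formulations of [87, 89].

```python
# A minimal sketch of mean removal and gain (dynamic range) normalization:
# the shape vector would be coded by VQ, the mean and gain by scalar quantizers.
import numpy as np

def split_mean_gain_shape(block, eps=1e-12):
    mean = block.mean()
    zero_mean = block - mean
    gain = zero_mean.max() - zero_mean.min()      # dynamic range of the block
    shape = zero_mean / (gain + eps)              # normalized shape vector
    return mean, gain, shape

def merge_mean_gain_shape(mean, gain, shape):
    return shape * gain + mean

block = np.array([0.1, 0.3, 1.4, 0.9, 0.2, 0.0])
m, g, s = split_mean_gain_shape(block)
print(np.allclose(merge_mean_gain_shape(m, g, s), block))   # True
```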
In most of the algorithms, a simple squared-error distortion measure is used to select codevectors from the codebook, even though the PQRST complex morphology varies with time. The selection of the distortion measure is important in VQ-based coding schemes [86]. A large training sequence is needed so that all the statistical properties of the PQRST complexes are captured. The performance of a VQ algorithm depends crucially on the vector dimension, the codebook size, the choice of codevectors in the codebook and the distortion measure. The best compression results are obtained with VQ on scales with long duration and low dynamic range, and scalar quantization on scales of short duration and high dynamic range. Note that the usual duration of computer-evaluated ECG records is 10 seconds [17]. Many VQ algorithms do not address difficulties such as the controversial issue of selecting a distortion measure and the creation and updating of the codebook in the encoder and decoder for a continuous ECG signal transmission system.
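The codebook is typically trained offline from such a training sequence; the following sketch uses a k-means iteration (in the spirit of the LBG algorithm) under the squared-error measure, with illustrative data and sizes rather than a recipe from the cited works.

```python
# A minimal sketch of codebook training by k-means under squared error.
import numpy as np

def train_codebook(training_vectors, codebook_size, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(training_vectors), codebook_size, replace=False)
    codebook = training_vectors[idx].copy()       # initial codevectors
    for _ in range(iters):
        # nearest-codevector assignment (squared-error distortion)
        d = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # centroid update; keep the old codevector for empty cells
        for k in range(codebook_size):
            members = training_vectors[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook

training = np.random.default_rng(1).standard_normal((1000, 32))
codebook = train_codebook(training, codebook_size=16)
```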
Chen et al. [84] introduced an efficient data compression scheme for biomedical ECG and arterial pulse waveforms. The compression consists of the following processes: beat segmentation and normalization, two-stage pattern matching with a template replenishing mechanism, and residual beat coding. Three different residual beat coding methods are employed: Huffman/run-length coding (method 1), Huffman/run-length coding in the discrete cosine transform domain (method 2) and vector quantization (method 3).
The coding performance, in terms of CR versus the PRD distortion measure, was compared with other methods, including the TP, Fan, AZTEC, m-AZTEC and CORTES algorithms, using 20 files of ECG data sampled at 300 samples per second and quantized at 16 bits per sample. It was concluded that method 3 is superior to both method 1 and method 2.
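The two figures of merit used in such comparisons can be computed as in the sketch below; note that whether the signal mean (or a recording offset) is removed in the PRD denominator varies between studies, so the flag here is an assumption.

```python
# A minimal sketch of the compression ratio (CR) and the percent
# root-mean-square difference (PRD).
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(x, x_hat, remove_mean=True):
    """PRD (%) between the original and the reconstructed signal."""
    ref = x - x.mean() if remove_mean else x
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(ref ** 2))

# e.g. 10 s of ECG at 300 samples/s and 16 bits/sample compressed to 4800 bits
print(compression_ratio(300 * 10 * 16, 4800))    # CR = 10.0, i.e. 10:1
```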
In [76, 79, 80, 82–84], each ECG beat is treated as a separate vector. These methods utilize the correlation between adjacent samples and the correlation between adjacent beats to achieve a low bit rate at the cost of increased computation time. The methods require that the endpoints of each ECG beat be detected prior to the compression phase, which is a difficult task under noisy conditions. Accurate beat segmentation, beat normalization and beat replenishing schemes maximize the correlation between the current and previous beats, or between the current beat and the beat patterns available in the template database. This processing stage before the compression phase provides a better choice of a similar beat vector from the template database, or results in a residual signal with a smaller dynamic range. Template matching is performed using a simple normalized correlation coefficient (NCC) criterion: the template assigned to be subtracted from the current beat vector is the one with the largest NCC. However, as will be demonstrated, the presence of some degree of noise diminishes the NCC as a measure of similarity in the real-time case. Two-stage pattern matching, consisting of beat vector matching and residual vector matching, is employed to reduce the bit rate. Finally, the authors concluded that the approach is not well suited to irregular ECG signals, such as those with drastically varying pattern shapes [84]. Therefore, an algorithm for the detection and compression of irregular signals, such as ventricular fibrillation (VF) and ventricular tachycardia (VT), is needed and is discussed in [83]. That compression system is computationally more complex than most of the reported ECG compression algorithms. It includes the following stages: regularity detection, beat segmentation, period estimation and beat normalization, beat pattern matching and subtraction, residual pattern matching, residual coding, side information coding, updating of the algorithms according to beat morphology variations, and an irregular-signal compression algorithm.
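The NCC-based template selection can be sketched as follows; the helper names and the toy template library are assumptions made for illustration, not code from [84].

```python
# A minimal sketch of template selection by the normalized correlation
# coefficient (NCC): the template with the largest NCC to the current beat
# is chosen for subtraction.
import numpy as np

def ncc(a, b, eps=1e-12):
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + eps))

def best_template(beat, template_library):
    scores = [ncc(beat, t) for t in template_library]
    return int(np.argmax(scores)), max(scores)

rng = np.random.default_rng(2)
library = [rng.standard_normal(64) for _ in range(4)]
beat = library[2] + 0.05 * rng.standard_normal(64)   # noisy copy of template 2
print(best_template(beat, library))                  # index 2, NCC close to 1
```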