
A study of Performance Analysis of Image Compression Methods and Denoising Natural Image


Academic year: 2023



Full text

The thesis of Abdul Khalek in the Department of Electronics and Telecommunication Engineering at Daffodil International University has been accepted as satisfactory in partial fulfillment of the requirements for the degree of Bachelor of Science in Electronics and Telecommunication Engineering. We thank Shahina Haque, Department of Electronics and Telecommunication Engineering, Daffodil International University, for her help and guidance throughout the year. We would also like to express our sincere gratitude to the other faculty members and staff of the Department of Electronics and Telecommunication Engineering, Daffodil International University.

In lossless compression, the data (binary data such as executables, documents, etc.) is compressed in such a way that, when decompressed, it produces an exact replica of the original data. In lossy compression, by contrast, an approximation of the original image is sufficient for most purposes, as long as the error between the original and the compressed image is acceptable. Some of the finer details in the image may also be sacrificed to save a little more bandwidth or storage space.
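The lossless guarantee can be demonstrated with a minimal sketch using Python's standard `zlib` library; the byte string here is hypothetical stand-in data, not from the thesis:

```python
import zlib

# Hypothetical stand-in for "binary data such as executables, documents, etc."
original = b"image data image data image data " * 100

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# Lossless: decompression yields an exact replica of the original data.
assert restored == original
# The repetitive input also compresses to far fewer bytes.
print(len(original), len(compressed))
```

A lossy codec would relax the equality above, trading exact reconstruction for a smaller compressed size.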

Compression is achieved by removing one or more of the three basic data redundancies: coding redundancy, interpixel redundancy, and psychovisual redundancy. The goal of compression is to reduce the number of bits as much as possible, while keeping the resolution and visual quality of the reconstructed image as close as possible to the original image.

Figure 1-1: Intensity level representation of image
Figure 1-2: Outline of lossy compression techniques

Principle: Intensity-level representation of an image

Image compression addresses the problem of reducing the amount of data required to represent a digital image. It is a process designed to represent an image compactly, thereby reducing image storage and transfer requirements. Psychovisual redundancy results from information that is ignored by the human visual system (i.e., visually irrelevant information).

Image compression techniques reduce the number of bits needed to represent an image by exploiting these redundancies. The reverse process, called decompression (decoding), is applied to the compressed data to obtain the reconstructed image. Digitizing a one-inch-square image at 256 dpi with 8 bits per pixel requires 8 × 256 × 256 = 524,288 bits of storage. From this example it is clear that image data compression is a huge advantage when many images need to be stored, transferred, or processed.
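The storage arithmetic above can be checked directly; this short sketch just reproduces the calculation for the one-inch, 256 dpi, 8-bit example:

```python
# Storage needed to digitize a one-inch-square image at 256 dpi, 8 bits/pixel.
dpi = 256
bits_per_pixel = 8

pixels = dpi * dpi                    # 256 x 256 = 65,536 pixels per square inch
bits = pixels * bits_per_pixel        # 8 x 256 x 256 = 524,288 bits
kilobytes = bits / 8 / 1024           # 64 KB for a single small grayscale image

print(bits, kilobytes)                # -> 524288 64.0
```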

As commonly stated, “Image compression algorithms aim to remove redundancy in data in a way that allows image reconstruction.” This essentially means that image compression algorithms try to exploit redundancies in the data: they determine which data must be retained to reconstruct the original image, and therefore which data can be discarded. By removing the redundant data, the image can be represented in a smaller number of bits and thus compressed.

Figure 1-1: Intensity level representation of image

Image compression techniques

Lossless compression technique

Lossy compression technique

Lossy compression techniques include the following schemes:

Image Compression and Reconstruction

The problem of image compression is straightforward to define, as shown in the figure. In the first step, a transform maps the image into a representation whose redundancy is easier to remove; the compressor removes this redundancy and stores the result in a compressed file or data stream. In the second step, the quantization block reduces the accuracy of the transformed output according to a predetermined fidelity criterion.

The quantization operation is an irreversible process and is therefore omitted when error-free or lossless compression is required. In the final stage of the data compression model, the symbol encoder creates a fixed- or variable-length code to represent the quantized output and maps the output according to the code. It assigns the shortest code words to the most frequently occurring output values, thus reducing coding redundancy.
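The irreversibility of quantization can be illustrated with a minimal uniform-quantizer sketch; the step size and coefficient values below are assumed toy numbers, not parameters from the thesis:

```python
import numpy as np

step = 16.0  # assumed uniform quantization step size

x = np.array([3.2, 17.9, 100.4, 250.0])   # hypothetical transform coefficients
q = np.round(x / step).astype(int)        # quantization: accuracy is reduced
x_hat = q * step                          # dequantization: approximations only

# The original values cannot be recovered exactly; only the quantization
# error is bounded (by half the step size for a uniform quantizer).
print(x_hat, np.abs(x - x_hat))
```

Omitting this block, as noted above, is what makes an error-free (lossless) mode possible.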

Decompression reverses the compression process to produce a reconstructed image, as shown in Figure 1-4. The restored image may have lost some information due to compression and may have an error or distortion compared to the original image.

Figure 1-3: Image compression system

The Need for Image Compression

Chapter 2: Theory of Wavelet Compression

2.1 Wavelet transform

  • History of wavelet transform
  • Three types of wavelet transform
    • Continuous wavelet transform
    • Semi-discrete wavelet transform
    • Discrete wavelet transform
  • Parameters and equations
    • Mean square error
    • Root mean square error
    • Peak signal-to-noise ratio
  • EZW method
  • Wavelets and compression
  • Wavelet compression techniques
  • Thresholding in wavelet compression
  • Principles of using a transform as a source encoder
    • Haar wavelet
    • Symlets
    • Coiflets
    • Daubechies
    • Dmey

Fourier's statement played an essential role in the evolution of mathematicians' ideas about functions. A property of the Haar wavelet is that it has compact support, meaning that it vanishes outside a finite interval. We take a wavelet and compare it with a portion at the beginning of the original signal.

We calculate a coefficient C(a, b), which represents how closely correlated the wavelet is with this portion of the signal. The higher C is, the closer the agreement; note that the results depend on the shape of the wavelet we choose. Mean square error measures the average squared difference between the original image values x(i, j) and the reconstructed values x̂(i, j): MSE = (1/MN) Σᵢ Σⱼ (x(i, j) − x̂(i, j))².

Peak signal-to-noise ratio (PSNR) estimates the quality of the reconstructed image compared to the original image and is a standard way to measure image fidelity: PSNR = 20 log₁₀(S / RMSE), where S is the maximum pixel value (255 for an 8-bit image) and RMSE is the root mean square error of the image. The EZW algorithm was one of the first algorithms to demonstrate the full power of wavelet-based image compression.
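The MSE, RMSE, and PSNR definitions above can be sketched in a few lines; the two 2×2 arrays are toy data chosen only to make the arithmetic easy to follow:

```python
import numpy as np

def psnr(original, reconstructed, S=255.0):
    """MSE, RMSE, and PSNR = 20 log10(S / RMSE) for an 8-bit image."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    rmse = np.sqrt(mse)
    return 20.0 * np.log10(S / rmse), mse, rmse

orig = np.array([[52, 55], [61, 59]], dtype=np.uint8)    # toy "original"
recon = np.array([[54, 55], [60, 57]], dtype=np.uint8)   # toy "reconstruction"

p, mse, rmse = psnr(orig, recon)
print(mse, rmse, p)   # MSE = (4+0+1+4)/4 = 2.25, RMSE = 1.5
```

A higher PSNR indicates that the reconstruction is closer to the original; values above roughly 30 dB are usually considered acceptable for lossy image compression.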

An embedded encoding is a process for encoding the transform magnitudes that allows progressive transmission of the compressed image. Separating the smooth variations and details of the image can be done in many ways. One such way is the decomposition of the image using a Discrete Wavelet Transform (DWT). Digital image compression is based on the ideas of subband decomposition or discrete wavelet transform (DWT).

For some signals, many wavelet coefficients are close to or equal to zero. A technical disadvantage of the Haar wavelet is that it is not continuous and therefore not differentiable. Daubechies wavelet names are written dbN, where N is the order and db is the "surname" of the wavelet.
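The observation that many wavelet coefficients are near zero can be seen with one level of the 1-D Haar transform, sketched here with assumed toy data (a slowly varying signal):

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal 1-D Haar transform:
    pairwise averages (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

smooth = np.linspace(0.0, 1.0, 8)   # slowly varying (smooth) signal
a, d = haar_level(smooth)
print(d)  # every detail coefficient is small for this smooth signal
```

It is these small detail coefficients that thresholding sets to zero, which is what makes wavelet-based compression effective on smooth regions of natural images.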

This wavelet is an FIR-based approximation of the Meyer wavelet, enabling fast wavelet coefficient calculation using DWT. You can get an overview of the main properties of this wavelet by typing waveinfo('dmey').

Figure 2-1: The demonstration of CWT according to the equation

Chapter 3: Theory of JPEG Compression

3.1 JPEG

JPEG compression

The compression method is usually lossy, which means that some original image information is lost and cannot be recovered, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard; however, this mode is not widely supported in products. This is ideal for large images that will be displayed while downloading over a slow connection, providing a reasonable preview after only receiving part of the data.
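The lossy JPEG pipeline for a single 8×8 block (level shift, 2-D DCT, quantization) can be sketched as follows; note that the uniform quantization step used here is a placeholder assumption, not the standard JPEG luminance table:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix, built explicitly so the sketch is self-contained.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

block = np.full((N, N), 128.0)        # a flat, maximally smooth image block
coeffs = C @ (block - 128.0) @ C.T    # level shift, then 2-D DCT
Q = 16.0                              # placeholder uniform quantization step
quantized = np.round(coeffs / Q)

# For a smooth block nearly all quantized coefficients are zero; the entropy
# coder then represents these long runs of zeros very compactly.
print(int(np.count_nonzero(quantized)))  # -> 0 for this flat block
```

The quantization step is where the irrecoverable information loss described above occurs; decoding multiplies by Q and applies the inverse DCT, recovering only an approximation of the block.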

However, progressive JPEGs are not as widely supported, and not all software handles them correctly. There are also many medical imaging and traffic systems that create and process 12-bit JPEG images, usually grayscale images. JPEG files are usually JPEG File Interchange Format (JFIF) files or Exchangeable Image File Format (Exif) files, the latter being used by most digital cameras.

Both formats are based on the JPEG Interchange Format (JIF) as specified in Annex B of the standard. The tag differences are unimportant to the compression algorithm, so both formats are readily supported.

Effects of JPEG compression

Compression algorithm

It has been shown that this quantization is optimal for a continuous signal with a Laplacian distribution, such as DCT or wavelet coefficients. Each sub-band is divided into small rectangular blocks called code blocks, and each is encoded independently by an adaptive binary arithmetic encoder. Finally, the output of the arithmetic encoder is organized as a compressed bit stream, which provides a considerable degree of flexibility.

JPEG Wizard Compression

JPEG Compression Table: 1

Applications of JPEG

Chapter 4: Applying Wavelet Compression to a Selected Natural Image

4.1 Wavelet Image Compression

Natural Image

  • For True Compression 2-D haar-4

Haar

Transform Table: 2

Residual

Matlab Compression Image

Matlab Program Compression Table: 4

Matlab Denoised Image

For Wavelet Transform

Conclusion

