Digital Image Processing

The content of the following chapters can be presented in either a one- or two-semester sequence. Chapters 18-20 offer a complete introduction to the basic spectral techniques used in image processing and are essentially independent of the other material in the text.

Digital Images

Programming with Images

Digital image editing, or as it is sometimes referred to, digital imaging, is the manipulation of digital images using an existing software application such as Adobe Photoshop or Corel Paint. Computer graphics, in contrast to digital image processing, focuses on the synthesis of digital images from geometric representations such as three-dimensional (3D) object models.

Image Analysis and Computer Vision

While today's graphics professionals are interested in topics such as realism and, especially in the context of computer games, rendering speed, the field relies on many methods originating from image processing, such as image transformation (morphing), reconstruction of 3D models from image data and specialized techniques such as pictorial and non-photorealistic rendering [180, 248]. Similarly, image processing uses many ideas originating from computational geometry and computer graphics, such as volumetric (voxel) models in medical image processing.

Types of Digital Images

Ultimately, you'll find that image processing is both intellectually and professionally challenging, as the field is filled with problems that were originally thought to be relatively simple to solve but that, to this day, refuse to give up their secrets. With the background and techniques presented in this text, you will not only be able to develop complete image processing solutions, but you will also have the knowledge necessary to tackle unsolved problems and a real opportunity to expand the horizons of science: while image processing itself may not change the world, it is likely to be the foundation that supports the wonders of the future.

Image Acquisition

  • The Pinhole Camera Model
  • The “Thin” Lens
  • Going Digital

The distance f ("focal length") between the aperture and the image plane determines the scale of the projection. One of the problems with a pinhole camera is that it needs a very small aperture to produce a sharp image.
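As a minimal illustration of this model, the following plain-Java sketch (class and method names are our own) projects a 3D scene point onto the image plane; doubling the focal length f doubles the scale of the projection:

```java
public class PinholeProjection {
    // Projects a 3D scene point (X, Y, Z) onto the image plane of a
    // pinhole camera with focal length f. Z is the distance along the
    // optical axis; the projected image is inverted, hence the signs.
    public static double[] project(double f, double X, double Y, double Z) {
        double x = -f * X / Z;
        double y = -f * Y / Z;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double[] p1 = project(10.0, 2.0, 1.0, 100.0);
        double[] p2 = project(20.0, 2.0, 1.0, 100.0); // doubled f
        System.out.println(p1[0] + ", " + p1[1]);
        System.out.println(p2[0] + ", " + p2[1]);
    }
}
```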

Spatial sampling

Thus, while this simple model is sufficient for our purposes (i.e., understanding the mechanics of image acquisition), much more detailed models incorporating these additional complexities can be found in the literature (see, e.g., [126]). Finally, the resulting values must be quantized to a finite range of integers (or floating-point values) so that they can be represented by digital numbers.
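A quantization step of this kind can be sketched as follows (plain Java; the 8-bit range [0, 255] is assumed purely for illustration):

```java
public class Quantize {
    // Quantizes a continuous light intensity a in [0, 1] to a discrete
    // 8-bit pixel value in [0, 255], clamping out-of-range inputs.
    public static int quantize(double a) {
        int q = (int) Math.round(a * 255);
        if (q < 0) q = 0;       // clamp below
        if (q > 255) q = 255;   // clamp above
        return q;
    }
}
```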

Temporal sampling

What is projected onto the image plane of our camera is essentially a two-dimensional (2D), time-dependent continuous distribution of light energy. In the simplest case, a plane of sensor elements is arranged in a uniformly spaced grid, and each element measures the amount of light falling on it.

Quantization of pixel values

  • Image Size and Resolution
  • Image Coordinate System
  • Pixel Values
  • Image File Formats
    • Raster versus Vector Data
    • Tagged Image File Format (TIFF)
    • Graphics Interchange Format (GIF)
    • Portable Network Graphics (PNG)
    • JPEG
    • Portable Bitmap Format (PBM)
    • Additional File Formats
    • Bits and Bytes
  • Exercises

Most image file formats can be identified by inspecting the first few bytes of the file.
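For example, the leading "magic" bytes of a few common formats can be checked like this (a plain-Java sketch; the signatures for PNG, JPEG, and GIF are standard, while the class and method names are our own):

```java
public class ImageFormatSniffer {
    // Identifies common image file formats from their leading bytes:
    // PNG starts with 0x89 'P' 'N' 'G', JPEG with 0xFF 0xD8,
    // and GIF with the ASCII characters "GIF8".
    public static String sniff(byte[] head) {
        if (head.length >= 4 && (head[0] & 0xFF) == 0x89
                && head[1] == 'P' && head[2] == 'N' && head[3] == 'G')
            return "PNG";
        if (head.length >= 2
                && (head[0] & 0xFF) == 0xFF && (head[1] & 0xFF) == 0xD8)
            return "JPEG";
        if (head.length >= 4 && head[0] == 'G' && head[1] == 'I'
                && head[2] == 'F' && head[3] == '8')
            return "GIF";
        return "unknown";
    }
}
```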

ImageJ

Software for Digital Imaging

ImageJ Overview

  • Key Features
  • Interactive Tools
  • ImageJ Plugins
  • A First Example: Inverting an Image
  • Plugin My_Inverter_A (using PlugInFilter)
  • Plugin My_Inverter_B (using PlugIn)
  • Executing ImageJ “Commands”

As is common in most image-editing programs, all interactive operations are applied to the currently active image, that is, the image most recently selected by the user. Plugins can be created, edited, compiled, invoked, and organized through the Plugins menu in ImageJ's main window (Fig. 2.1).
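The inversion example itself is a one-line point operation, p′ = 255 − p for 8-bit pixels. A minimal sketch on a raw pixel array (without the ImageJ API, whose ImageProcessor would normally supply the pixels):

```java
public class Invert {
    // Inverts an 8-bit grayscale image stored as a flat int array:
    // each pixel value p is replaced by 255 - p.
    public static void invert(int[] pixels) {
        for (int i = 0; i < pixels.length; i++)
            pixels[i] = 255 - pixels[i];
    }
}
```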

Additional Information on ImageJ and Java

  • Resources for ImageJ
  • Programming with Java

ImageJ's built-in command recorder can be configured to log commands in a variety of ways, including as Java, JavaScript, BeanShell, or ImageJ macro code.

Exercises

To display the modified image after each shift, a reference to the corresponding ImagePlus object is required (ImageProcessor has no display methods). The ImagePlus object is only accessible through the plugin's setup() method, which is automatically called before the run() method.

Histograms and Image Statistics

  • What is a Histogram?
  • Interpreting Histograms
    • Image Acquisition
    • Image Defects
  • Calculating Histograms
  • Histograms of Images with More than 8 Bits
    • Binning
    • Implementation
  • Histograms of Color Images
    • Intensity Histograms
    • Individual Color Channel Histograms
    • Combined Color Histograms
  • The Cumulative Histogram
  • Statistical Information from the Histogram
    • Mean and Variance
    • Median
  • Block Statistics
    • Integral Images
    • Practical Calculation of Integral Images
  • Exercises

Since the intensity values start at zero (like arrays in Java) and are all positive, they can be used directly as the index i ∈ [0, N−1] of the histogram array. The binnedHistogram() method returns the histogram of the image object passed to it as an array of size B. Some common statistical parameters of an image can be calculated easily and directly from its histogram.
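A binnedHistogram() of this kind can be sketched as follows (plain Java; a value a in [0, K−1] is mapped to bin (a·B)/K using integer division):

```java
public class Histogram {
    // Computes a histogram with B bins for pixel values in [0, K-1].
    // Each value a is mapped to bin (a * B) / K (integer division),
    // so for K = B the value is used directly as the index.
    public static int[] binnedHistogram(int[] pixels, int K, int B) {
        int[] h = new int[B];
        for (int a : pixels)
            h[(a * B) / K]++;
        return h;
    }
}
```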

Point Operations

Modifying Image Intensity

Contrast and Brightness

Limiting Values by Clamping

Inverting Images

Threshold Operation

Point Operations and Histograms

While this seems fairly straightforward, it can be helpful to look a little more closely at the relationship between point operations and the resulting changes in the histogram. If a given histogram line is shifted as a result of some point operation, then all the pixels in the corresponding set are changed in the same way, and vice versa. But what happens if two histogram lines are mapped onto the same position? The answer is that the corresponding pixel sets are merged, and the new common histogram entry is the sum of the two (or more) contributing entries (i.e., the size of the combined set).

Automatic Contrast Adjustment

Modified Auto-Contrast Operation

Histogram Equalization

The goal of histogram equalization is to find and apply a point operation such that the histogram of the modified image approximates a uniform distribution (see Fig. 4.8). We can thus reformulate the goal as finding a point operation that shifts the histogram lines such that the resulting cumulative histogram is approximately linear, as illustrated in the accompanying figure. First, the histogram of the image ip is obtained using the standard ImageJ method ip.getHistogram() in line 7.
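The resulting point operation can be sketched as a lookup table built from the cumulative histogram H (plain Java; the mapping a → H(a)·(K−1)/MN is the standard equalization function, while the method name is our own):

```java
public class Equalize {
    // Builds the lookup table for histogram equalization:
    // value a is mapped to floor(H(a) * (K-1) / MN), where H is the
    // running cumulative histogram and MN the total number of pixels.
    public static int[] equalizeLut(int[] hist, int numPixels) {
        int K = hist.length;
        int[] lut = new int[K];
        long H = 0;                     // cumulative histogram
        for (int a = 0; a < K; a++) {
            H += hist[a];
            lut[a] = (int) (H * (K - 1) / numPixels);
        }
        return lut;
    }
}
```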

Histogram Specification

  • Principle of Histogram Specification
  • Adjusting to a Piecewise Linear Distribution
  • Adjusting to a Given Histogram (Histogram Matching)
  • Examples

The value in each histogram cell describes the observed frequency of the corresponding intensity value, i.e., the histogram is a discrete frequency distribution. The mapping function fhs is obtained not by inverting but by "complementing" the reference distribution function PR(i). For each possible pixel value, starting with a = 0, the corresponding probability pA(a) is placed layer by layer "under" the reference distribution PR.

Gamma Correction

  • Why Gamma?
  • Real Gamma Values
  • Applications of Gamma Correction
  • Implementation
  • Modified Gamma Correction

The ITU 709 standard is based on a slightly modified version of the gamma correction (see section 4.7.6). The gamma correction in the sRGB standard [224] is specified on the same basis (with different parameters; see section 14.4). Gamma correction parameters for the ITU and sRGB standards based on the modified mapping in Eqns.
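Plain gamma correction maps a normalized intensity a ∈ [0, 1] to b = a^γ; the ITU and sRGB standards modify this mapping with a linear segment near zero (not shown here). A minimal sketch of the unmodified version:

```java
public class Gamma {
    // Applies plain (unmodified) gamma correction b = a^gamma to a
    // normalized intensity value a in [0, 1].
    public static double applyGamma(double a, double gamma) {
        return Math.pow(a, gamma);
    }
}
```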

Point Operations in ImageJ

  • Point Operations with Lookup Tables
  • Arithmetic Operations
  • Point Operations Involving Multiple Images
  • Methods for Point Operations on Two Images
  • ImageJ Plugins Involving Multiple Images

In the latter case, the currently active image is passed as an object of type ImageProcessor (or one of its subclasses) to the plugin's run() method (see also Sec. 2.2.3). Thus only a single image I1 can be passed directly to the plugin's run() method, and any additional images I2 must be obtained by other means. The background image IBG is overlaid with the foreground image IFG, whose transparency is controlled by the value of α.
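The blending itself can be sketched per pixel as I′ = α·IFG + (1 − α)·IBG (plain Java; clamping is unnecessary here since the result stays within the input range for α ∈ [0, 1]):

```java
public class AlphaBlend {
    // Blends a foreground pixel over a background pixel:
    // result = alpha * fg + (1 - alpha) * bg, with alpha in [0, 1].
    public static int blend(int bg, int fg, double alpha) {
        return (int) Math.round(alpha * fg + (1 - alpha) * bg);
    }
}
```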

Exercises

Define a new object class with all necessary instance variables to represent the distribution function, and implement the required function PL(i) (Eq. ...). The plugin is applied to the (currently active) background image, and the foreground image must also be open when the plugin is run. The background image (bgIp), which is passed to the plugin's run() method, is multiplied by α (line 22).

Filters

What is a Filter?

The coordinates of the source pixels are fixed relative to the current image position (u, v) and usually form a contiguous region, as illustrated in Fig. ... Each new pixel value I′(u, v) is calculated as a function of the pixel values within a specified region Ru,v of source pixels in the original image I. Another option is to assign different weights to the pixels in the support region, giving stronger weight to pixels that are closer to the center of the region.

Linear Filters

  • The Filter Kernel
  • Applying the Filter
  • Filter Plugin Examples

With a point operation (e.g., in Progs. 4.1 and 4.2), each new pixel value depends only on the corresponding pixel value in the original image, so it was no problem to simply store the results back into the same image; the calculation is done "in place" without the need for intermediate storage. Version A: the result of the filter operation is first stored in an intermediate image and then copied back to the original image (a). Version B: the original image is first copied to an intermediate image that serves as the source for the filter operation.

3×3 averaging filter ("box" filter)

The result of the filter calculation is initially stored in a new image, the content of which is eventually copied back to the original image. The original image is first copied to an intermediate image that serves as the source for the actual filter operation. Also, no clamping (see Sec. 4.1.2) of the results is necessary, because the sum of the filter coefficients is 1 and therefore no pixel values outside the allowed range can be created.
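Version B can be sketched as follows for a 3×3 box filter (plain Java on a 2D int array; border pixels are simply left unchanged here, one of several possible border-handling strategies):

```java
public class BoxFilter {
    // Applies a 3x3 averaging ("box") filter to an 8-bit grayscale
    // image. The source is copied first so the filter always reads
    // original values; border pixels are left unchanged.
    public static void filter(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] src = new int[h][];
        for (int v = 0; v < h; v++)
            src[v] = img[v].clone();        // intermediate copy
        for (int v = 1; v < h - 1; v++) {
            for (int u = 1; u < w - 1; u++) {
                int sum = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        sum += src[v + j][u + i];
                img[v][u] = (sum + 4) / 9;  // rounded integer average
            }
        }
    }
}
```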

3×3 smoothing filter

  • Integer Coefficients
  • Filters of Arbitrary Size
  • Types of Linear Filters
  • Formal Properties of Linear Filters
    • Linear Convolution
    • Formal Properties of Linear Convolution
    • Separability of Linear Filters
    • Impulse Response of a Filter
  • Nonlinear Filters
    • Minimum and Maximum Filters
    • Median Filter
    • Weighted Median Filter
    • Other Nonlinear Filters
  • Implementing Filters
    • Efficiency of Filter Programs
    • Handling Image Borders
    • Debugging Filter Programs
  • Filter Operations in ImageJ
    • Linear Filters
    • Gaussian Filters
    • Nonlinear Filters
  • Exercises

As a simple example, let's assume the filter is composed of the 1D kernels hx and hy. Original signal (top) and result after filtering (bottom), where the colored bars indicate the size of the filter. The middle element of the sorted vector (A[n]) is taken as the median value and stored in the original image (line 33).
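The median computation mentioned above can be sketched for a single 3×3 neighborhood (plain Java; the nine values are collected into a vector A, sorted, and the middle element A[4] returned):

```java
import java.util.Arrays;

public class MedianFilter {
    // 3x3 median filter for one pixel position (u, v): the nine
    // neighborhood values are sorted and the middle element A[4]
    // of the sorted vector is the median.
    public static int median3x3(int[][] src, int u, int v) {
        int[] A = new int[9];
        int k = 0;
        for (int j = -1; j <= 1; j++)
            for (int i = -1; i <= 1; i++)
                A[k++] = src[v + j][u + i];
        Arrays.sort(A);
        return A[4];    // middle of 9 sorted values
    }
}
```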

Edges and Contours

What Makes an Edge?

Gradient-Based Edge Detection

  • Partial Derivatives and the Gradient
  • Derivative Filters

In mathematics, the amount of change with respect to spatial distance is known as the first derivative of a function, so we start with this concept to develop our first simple edge detector. To estimate the first derivative of a discrete function, the slope of the straight line (dashed) between the neighboring function values f(u−1) and f(u+1) is taken as an estimate of the slope of the tangent (i.e., the first derivative) at f(u). The components of the gradient function (Eqn. (6.4)) are simply the first derivatives of the image rows (Eqn. (6.1)) and columns along the horizontal and vertical axes, respectively.
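The estimate described above is the central difference f′(u) ≈ (f(u+1) − f(u−1)) / 2, which can be sketched as:

```java
public class Gradient {
    // Estimates the first derivative of a discrete 1D function f at
    // position u as the slope between its two neighbors:
    // f'(u) ~ (f(u+1) - f(u-1)) / 2.
    public static double derivative(double[] f, int u) {
        return (f[u + 1] - f[u - 1]) / 2.0;
    }
}
```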

Simple Edge Operators

  • Roberts Operator
  • Compass Operators
  • Edge Operators in ImageJ

The edge strength E(u, v) corresponds to the length of the vector obtained by combining the two orthogonal gradient components (filter results) D1(u, v) and D2(u, v). The design of linear edge filters involves a trade-off: the stronger a filter's response to edge-like structures, the more sensitive it is to orientation. The edge strength at position (u, v) is defined as the maximum of the eight filter outputs.
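Combining the two orthogonal filter results into the edge strength is simply the Euclidean vector length, E(u, v) = sqrt(D1² + D2²):

```java
public class EdgeStrength {
    // Edge strength as the length of the gradient vector formed by
    // the two orthogonal filter results D1 and D2.
    public static double magnitude(double d1, double d2) {
        return Math.sqrt(d1 * d1 + d2 * d2);
    }
}
```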

Other Edge Operators

  • Edge Detection Based on Second Derivatives
  • Edges at Different Scales
  • From Edges to Contours

Typical small edge operators, such as the Sobel operator, can only respond to intensity differences that occur within their 3×3 pixel filter regions. Whatever method is used for edge detection, the result is usually a continuous value for the edge strength for each image position and possibly also the angle of local edge orientation. Despite this weakness, global thresholding is often used at this point due to its simplicity, and some common post-processing methods, such as the Hough transform (see Chap. 8), can handle incomplete edge maps well.

Canny Edge Operator

  • Pre-processing
  • Edge localization
  • Edge tracing and hysteresis thresholding
  • Additional Information
  • Implementation

The similarity of the Prewitt and Sobel operators is evident in the corresponding results. Pre-processing: smooth the image with a Gaussian filter of width σ, which determines the scale level of the edge detector. Candidate edge pixels are isolated by local "non-maximum suppression" of the edge magnitude Emag.

Edge Sharpening

  • Edge Sharpening with the Laplacian Filter
  • Unsharp Masking

Note that HL in Eqn. (6.32) is not separable in the usual sense. The correct choice of w also depends on the specific Laplacian filter used in Eqn. (6.35), because none of the aforementioned filters are normalized. Laplacian sharpening with a weight factor w as defined in Eqn. (6.35) can therefore (with a little manipulation) be expressed as follows.

Exercises

Corner Detection

  • Points of Interest
  • Harris Corner Detector
    • Local Structure Matrix
    • Corner Response Function (CRF)
    • Determining Corner Points
    • Examples
  • Implementation
    • Step 2: Selecting “Good” Corner Points
    • Step 3: Cleaning up
    • Summary
  • Exercises

From Eqn. (7.7) we see that the difference between the two eigenvalues of the local structure matrix is λ1 − λ2 = √((A − B)² + 4C²). To avoid the explicit calculation of the eigenvalues (and the square root), the Harris detector instead defines a corner response function. The figure shows the result of the gradient calculation and the three components A, B, and C of the structure matrix M(u, v).
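A common form of the Harris corner response (not necessarily the exact variant used in this text) is Q(u, v) = det(M) − α·trace(M)², which avoids eigenvalue computation entirely; a sketch with M = [A C; C B]:

```java
public class Harris {
    // Harris corner response Q = det(M) - alpha * trace(M)^2 for the
    // symmetric local structure matrix M = [A C; C B]. Typical values
    // for alpha are around 0.04 to 0.06.
    public static double cornerResponse(double A, double B, double C,
                                        double alpha) {
        double det = A * B - C * C;
        double trace = A + B;
        return det - alpha * trace * trace;
    }
}
```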

Finding Simple Curves: The Hough Transform

  • Salient Image Structures
  • The Hough Transform
    • Parameter Space
    • Accumulator Map
    • A Better Line Representation
  • Hough Algorithm
    • Processing the Accumulator Array
    • Hough Transform Extensions
  • Java Implementation
  • Hough Transform for Circles and Ellipses
    • Circles and Arcs
    • Ellipses
  • Exercises

The dimensions of the original image (a) are 360×240 pixels, so the maximum radius (measured from the center of the image) is rmax ≈ 216. Our simple version of the Hough transform determines the parameters of the lines in the image but not their endpoints. The Hough transform yields the parameters of the recovered lines in Hessian normal form (that is, as pairs Lk = ⟨θk, rk⟩).
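The Hessian normal form underlying the accumulator is r = u·cos θ + v·sin θ: a point (u, v) lies on the line ⟨θ, r⟩ exactly when this relation holds, and each edge point votes for all satisfying ⟨θ, r⟩ pairs. A minimal sketch of the relation:

```java
public class HoughLine {
    // Hessian normal form of a line: for a given angle theta, an
    // image point (u, v) lies on the line <theta, r> where
    // r = u * cos(theta) + v * sin(theta). The Hough transform
    // increments the accumulator cell for each such (theta, r) pair.
    public static double radius(int u, int v, double theta) {
        return u * Math.cos(theta) + v * Math.sin(theta);
    }
}
```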
