ACCENT JOURNAL OF ECONOMICS ECOLOGY & ENGINEERING
Peer Reviewed and Refereed Journal, ISSN NO. 2456-1037
Available Online: www.ajeee.co.in/index.php/AJEEE
Vol. 07, Issue 02, February 2022, IMPACT FACTOR: 7.98 (INTERNATIONAL JOURNAL)

AN ANALYTICAL STUDY BASED ON IMAGE PROCESSING CONCEPTS

Dr. Savyasachi
Assistant Professor, Department of Information Technology, L N Mishra College of Business Management, Muzaffarpur, Bihar

Abstract - Interest in digital image processing stems from two major application areas: improving pictorial information for human interpretation, and processing image data for storage, transmission, and display for autonomous machine perception. The purpose of this article is to define the meaning and scope of image processing, explain the steps and methods of a typical image processing chain, and describe the application of image processing tools and procedures in current areas of research.

Keywords: image processing, image analysis, applications, research.

1. INTRODUCTION

An image can be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, the image is called a digital image. The field of digital image processing refers to the processing of digital images by a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels; pixel is the term most widely used to denote an element of a digital image.
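To make the definition concrete, the short sketch below represents a grayscale digital image as a 2-D NumPy array and reads a single pixel value; the array contents are illustrative, not taken from the paper.

```python
import numpy as np

# A 4x4 digital image: a finite grid of elements (pixels), each with a
# position (x, y) and an intensity (gray level) value.
f = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 200,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

x, y = 1, 2
print("gray level at (%d, %d):" % (x, y), f[x, y])  # the value of one pixel
print("image size:", f.shape)                        # a finite number of elements
```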

Vision is the most advanced of our senses, so it is not surprising that images play the most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging devices cover almost the entire EM spectrum, from gamma rays to radio waves, and they can operate on images generated by sources that people are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Digital image processing therefore encompasses a wide variety of applications.

2. FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING

Digital image processing steps can be divided into two major areas: methods where the inputs and outputs are images, and methods where the inputs are images but the outputs are attributes extracted from those images.

Image acquisition is the first process in digital image processing. Note that acquisition can be as simple as being given an image that is already in digital form. Generally, the image acquisition stage also involves preprocessing, such as scaling.
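As a hedged illustration of such preprocessing, the sketch below scales an acquired image to a new size by nearest-neighbor sampling; the function name and the input array are made up for this example.

```python
import numpy as np

def scale_nearest(img: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Resize by picking, for each output pixel, the nearest source pixel."""
    rows = np.arange(new_h) * img.shape[0] // new_h
    cols = np.arange(new_w) * img.shape[1] // new_w
    return img[rows][:, cols]

acquired = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in for a captured image
scaled = scale_nearest(acquired, 4, 4)
print(acquired.shape, "->", scaled.shape)                # (8, 8) -> (4, 4)
```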

The next step is image enhancement, which is one of the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better". It is important to keep in mind that enhancement is a very subjective area of image processing.
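One widely used contrast-enhancement technique is global histogram equalization; the NumPy-only sketch below is a minimal illustration of that idea (the paper does not prescribe a specific method), with an artificial low-contrast image as input.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Spread the gray levels of an 8-bit image to increase its contrast."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map every original gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[img]

low_contrast = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", enhanced.min(), enhanced.max())
```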

Image restoration is also an area that deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, by contrast, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
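As a small illustration of this model-based view (not a method prescribed by the paper), the sketch below degrades an image with a known uniform blur plus noise and restores it with a simple Wiener-style filter in the frequency domain; the kernel, noise level, and constant k are assumptions made for the example.

```python
import numpy as np

def wiener_restore(degraded: np.ndarray, kernel: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Approximate inverse of a known blur, damped where noise would dominate."""
    H = np.fft.fft2(kernel, s=degraded.shape)
    G = np.fft.fft2(degraded)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(0)
f = rng.random((64, 64))                                  # original image
h = np.ones((5, 5)) / 25.0                                # degradation model: uniform blur
g = np.real(np.fft.ifft2(np.fft.fft2(h, s=f.shape) * np.fft.fft2(f)))
g += rng.normal(0.0, 0.01, g.shape)                       # additive noise
restored = wiener_restore(g, h)
print("mean squared error after restoration:", float(np.mean((restored - f) ** 2)))
```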

Color image processing is an area that has been gaining importance because of the significant increase in the use of digital images over the Internet. It covers the fundamental concepts of color models and basic color processing in the digital domain. The colors in an image can be used as a basis for extracting features of interest.
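A hedged sketch of that idea follows: a synthetic RGB image is masked for "reddish" pixels by comparing the color channels, so that color alone selects the region of interest; the thresholds and the test image are invented for the example.

```python
import numpy as np

rgb = np.zeros((32, 32, 3), dtype=np.uint8)
rgb[8:16, 8:16] = (200, 30, 30)          # a red square on a black background
rgb[20:28, 20:28] = (30, 30, 200)        # a blue square

r = rgb[..., 0].astype(int)
g = rgb[..., 1].astype(int)
b = rgb[..., 2].astype(int)
red_mask = (r > 100) & (r > g + 50) & (r > b + 50)   # red dominates green and blue
print("red pixels selected:", int(red_mask.sum()))    # 64, i.e. the red square only
```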

Wavelets are the foundation for representing images in various degrees of resolution. In particular, wavelets can be used for image data compression and for pyramidal representation, in which an image is subdivided successively into smaller regions.
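The sketch below shows a one-level 2-D wavelet decomposition, assuming the third-party PyWavelets package (pywt) is installed; the Haar wavelet and the random test image are illustrative choices.

```python
import numpy as np
import pywt

image = np.random.default_rng(0).random((64, 64))
approx, (horiz, vert, diag) = pywt.dwt2(image, 'haar')
# The approximation subband is a half-resolution version of the image; the three
# detail subbands hold what is needed to rebuild the finer resolution level.
print(approx.shape, horiz.shape, vert.shape, diag.shape)   # each (32, 32)
```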

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is particularly true of uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most computer users in the form of image file extensions, such as the .jpg extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
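As a toy illustration of the principle (real standards such as JPEG are far more sophisticated), the sketch below applies lossless run-length encoding to a flat synthetic image, collapsing runs of identical pixel values into (value, count) pairs.

```python
import numpy as np

def run_length_encode(img: np.ndarray) -> list:
    """Collapse runs of identical pixel values into (value, count) pairs."""
    flat = img.ravel()
    runs = []
    start = 0
    for i in range(1, len(flat) + 1):
        if i == len(flat) or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 255                       # one bright block on a dark background
encoded = run_length_encode(img)
print("raw pixel count:", img.size, "-> number of runs:", len(encoded))
```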

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. Morphological image processing marks the beginning of the transition from processes that output images to processes that output image attributes.
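A minimal sketch of two basic morphological operations, using scipy.ndimage on a synthetic binary image, is shown below; the 3x3 structuring element is an assumption for the example.

```python
import numpy as np
from scipy import ndimage

binary = np.zeros((12, 12), dtype=bool)
binary[3:9, 3:9] = True                            # one filled square component

structure = np.ones((3, 3), dtype=bool)            # structuring element
eroded = ndimage.binary_erosion(binary, structure=structure)
dilated = ndimage.binary_dilation(binary, structure=structure)
boundary = binary & ~eroded                        # simple boundary extraction
print(binary.sum(), eroded.sum(), dilated.sum(), boundary.sum())  # 36 16 64 20
```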

Segmentation partitions an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
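One classical (and simple) segmentation technique is Otsu's global threshold; the NumPy sketch below, run on a synthetic two-class image, is only an illustration of the segmentation step, not a method the paper singles out.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Pick the gray level that maximizes the between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * hist[:t]).sum() / w0
        m1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = (w0 / total) * (w1 / total) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64))                 # dark background
img[16:48, 16:48] = rng.normal(180, 10, (32, 32))  # bright object
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t                                      # segmented object
print("threshold:", t, "object pixels:", int(mask.sum()))
```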

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (that is, the set of pixels separating one image region from another) or all the points in the region itself. In either case, the data must be converted to a form suitable for computer processing.

The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Region representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or that are basic for differentiating one class of objects from another.
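To make the distinction concrete, the sketch below computes two region (internal) descriptors, area and centroid, and one boundary descriptor, a crude perimeter estimate from the boundary pixels, for a synthetic region; the 4-connected boundary definition is an assumption for the example.

```python
import numpy as np

region = np.zeros((10, 10), dtype=bool)
region[2:8, 3:9] = True                        # a 6x6 rectangular region

# Region descriptors: internal properties of the set of pixels.
area = int(region.sum())
ys, xs = np.nonzero(region)
centroid = (float(ys.mean()), float(xs.mean()))

# Boundary descriptor: region pixels with at least one background 4-neighbor.
padded = np.pad(region, 1)
interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
            padded[1:-1, :-2] & padded[1:-1, 2:])
boundary = region & ~interior
print("area:", area, "centroid:", centroid, "boundary pixels:", int(boundary.sum()))
```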

3. APPLICATIONS OF IMAGE PROCESSING

There are many applications of image processing in a wide range of human activities, from remote sensing to biomedical image interpretation. This section only gives an overview of some of these applications.

3.1. Automatic Visual Inspection System

Automated visual inspection systems are essential to improve the productivity and product quality of manufacturing and related industries [2]. Here is a brief description of some visual inspection systems.

Automatic inspection of incandescent lamp filaments: One interesting application of automatic visual inspection is in the bulb manufacturing process. Often the filament of a bulb fuses after a short time because of a defect in the filament geometry, for example when the lead-in wires supporting the filament are tilted unevenly. Manual inspection is not efficient for detecting such anomalies. An automated vision-based inspection system captures a binary image slice of the filament, from which a silhouette of the filament is generated. This silhouette is analyzed to identify non-uniformities in the pitch of the filament coil in the bulb. One such system was designed and installed by the General Electric Corporation.

Faulty component identification: Automated visual inspection can also be used to identify faulty components in an electronic or electromechanical system. Faulty components usually generate more thermal energy. Infrared (IR) images can be generated from the distribution of thermal energy in the assembly, and by analyzing these IR images the faulty components in the assembly can be identified.

Automatic surface inspection systems: Detection of flaws on surfaces is an important requirement in many metal industries. For example, in the hot or cold rolling mills of a steel plant, it is necessary to detect any aberration on the rolled metal surface. This can be accomplished using image processing techniques such as edge detection, texture identification, fractal analysis, and so on.
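A hedged sketch of the edge-detection part of such a system is given below: a Sobel gradient magnitude flags the abrupt intensity change caused by a simulated scratch on an otherwise uniform surface. The surface image and the threshold are invented for the example; a production system would be considerably more elaborate.

```python
import numpy as np
from scipy import ndimage

surface = np.full((64, 64), 120.0)
surface[30:32, 10:54] = 40.0                   # a dark scratch across the surface

gx = ndimage.sobel(surface, axis=1)            # horizontal gradient
gy = ndimage.sobel(surface, axis=0)            # vertical gradient
gradient = np.hypot(gx, gy)
flaw_mask = gradient > 0.5 * gradient.max()    # strong edges mark the defect
print("pixels flagged as defective:", int(flaw_mask.sum()))
```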

3.2. Remotely Sensed Scene Interpretation

Remote sensing image analysis can extract information on natural resources such as agriculture, water, minerals, forestry, and geological resources. In the analysis of the remote sensing scene, an image of the surface of the earth is captured by a sensor of a remote sensing satellite or a multispectral scanner mounted on an aircraft and sent to an earth station for further processing [3, 4].

Figure 1 shows two examples of remotely sensed images. Figure 1(a) shows the delta of the river Ganges in India: the light blue segment represents the sediments of the river delta, the deep blue segment represents the water body, and the deep red regions are the mangrove swamps of the adjacent islands. Figure 1(b) shows the glacier flow in the Bhutan Himalayas; the white region shows the stagnant ice with lower basal velocity.

Fig. 1: Examples of remotely sensed images: (a) delta of the river Ganges, (b) glacier flow in the Bhutan Himalayas

3.3 Content-Based Image Retrieval

Retrieval of a query image from a large image archive is an important application of image processing. With the advent of large multimedia collections and digital libraries, there is an increasing need to develop search tools for indexing and retrieving information from them. While many good search engines are available today for retrieving machine-readable text, there are few fast tools for retrieving intensity and color images. Traditional approaches to image retrieval and indexing are time consuming and costly. There is therefore an urgent need for algorithms that retrieve an image using the content embedded in the image itself.

Characteristics of a digital image, such as shape, texture, color, and object topology, can be used as index keys to index and retrieve image information from large image databases. Retrieval of images based on such image content is popularly known as content-based image retrieval [8, 9].
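A minimal sketch of this idea follows: a normalized per-channel color histogram serves as the index key, and images in a small synthetic "database" are ranked by histogram intersection with the query. The image names, the data, and the choice of similarity measure are all assumptions made for the example.

```python
import numpy as np

def color_histogram(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated, normalized per-channel histograms used as an index key."""
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    return float(np.minimum(h1, h2).sum())     # histogram intersection, in [0, 1]

rng = np.random.default_rng(0)
database = {name: rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
            for name in ("img_a", "img_b", "img_c")}
query = database["img_b"].copy()

query_key = color_histogram(query)
ranked = sorted(database,
                key=lambda name: similarity(query_key, color_histogram(database[name])),
                reverse=True)
print("retrieval order:", ranked)              # img_b should rank first
```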

3.4 Moving-Object Tracking

Tracking of moving objects, for measuring motion parameters and obtaining a visual record of the moving object, is an important area of application in image processing [13, 14]. In general there are two different approaches to object tracking:

(i) Recognition-based tracking (ii) Motion-based tracking.

Systems for tracking fast-moving targets (such as military aircraft and missiles) have been developed based on motion-based predictive techniques such as the Kalman filter, the extended Kalman filter, and particle filters. Image processing-based object tracking systems detect target objects in the sensor's field of view automatically, without human intervention. In recognition-based tracking, the object pattern is detected in successive image frames and tracking is carried out using its positional information.
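The sketch below is a minimal one-dimensional constant-velocity Kalman filter of the kind used in motion-based tracking; the motion model, noise levels, and simulated measurements are assumptions for the example, not parameters from the paper.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition: position, velocity
H = np.array([[1.0, 0.0]])                     # only position is measured
Q = 0.01 * np.eye(2)                           # process noise covariance
R = np.array([[4.0]])                          # measurement noise covariance

x = np.array([[0.0], [0.0]])                   # initial state estimate
P = np.eye(2)                                  # initial estimate covariance

rng = np.random.default_rng(0)
true_positions = 3.0 * np.arange(20)           # target moving at 3 units per frame
for z in true_positions + rng.normal(0.0, 2.0, 20):
    # Predict from the motion model, then correct with the noisy measurement.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated position and velocity:", x.ravel().round(2))  # velocity near 3
```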

3.5 Neural Aspects of the Visual Sense

The optic nerve of our visual system enters the eyeball and connects with the rods and cones at the back of the eye.

Neurons contain dendrites (inputs) and a long axon (output) with branches at its end. Neurons communicate through synapses: signal transmission involves the diffusion of chemicals across the junction, and the receiving neuron is either stimulated or inhibited by these chemicals. The optic nerve begins as a bundle of axons from the ganglion cells on one side of the retina. On the other side, the rods and cones are connected to the ganglion cells through bipolar cells, and there are also horizontal neurons that make lateral connections. The signals from neighboring receptors in the retina are grouped by the horizontal cells to form receptive fields with opposing responses in the center and the surround. As a result, uniform illumination of the field produces no net stimulation, whereas non-uniform illumination produces a difference between the central and the surrounding stimulation. Some receptive fields use color differences, such as red-green or yellow-blue, so this differencing applies to both color and brightness. In the visual cortex the receptive-field responses are grouped further, including selectivity for line endings and orientation. These are low-level processes that precede high-level interpretation, whose mechanisms are still unknown; nevertheless they show the important role that differencing plays in the sensations underlying contrast phenomena. If the retina is illuminated uniformly in brightness and color, there is little neural activity.

The normal human retina has 6 to 7 million cones and 110 to 130 million rods. The optical signals from the rods and cones are transmitted through the fibers of the optic nerve. The optic nerves from the two eyes cross at the optic chiasm, where all signals from the right sides of the two retinas are sent to the right half of the brain and all signals from the left sides are sent to the left half. Each half of the brain therefore receives half of the image, which ensures that the loss of one eye does not remove half of the field of view. The optic nerve terminates in the lateral geniculate nucleus in the middle of the brain, from which the signals are relayed to the visual cortex. The visual cortex retains the topology of the retina and is only the first of several levels of perception at which this information is processed. The two half fields of view are joined through the corpus callosum, which connects the two hemispheres.

4. CONCLUSION

Image processing has a wide variety of applications, allowing researchers to select an area of interest. Many research findings have been published, yet many research areas remain untouched. In addition, the high-speed computers and signal processors that became available in the 2000s have made digital image processing the most common form of image processing; it is widely used because it is not only the most versatile method but also the cheapest.

REFERENCES

1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2002.
2. D. T. Pham and R. Alcock, Smart Inspection Systems: Techniques and Applications of Intelligent Vision, Academic Press, Oxford, 2003.
3. T. M. Lillesand and R. W. Kiefer, Remote Sensing and Image Interpretation, 4th Edition, John Wiley and Sons, 1999.
4. J. R. Jensen, Remote Sensing of the Environment: An Earth Resource Perspective, Prentice Hall, 2000.
5. P. Suetens, Fundamentals of Medical Imaging, Cambridge University Press, 2002.
6. P. F. Van Der Stelt and Qwil G. M. Geraets, "Computer aided interpretation and quantification of angular periodontal bone defects on dental radiographs," IEEE Transactions on Biomedical Engineering, 38(4), April 1998, 334-338.
7. M. A. Kupinski and M. Giger, "Automated Seeded Lesion Segmentation on Digital Mammograms," IEEE Trans. Med. Imag., Vol. 17, 1998, 510-517.
8. S. Mitra and T. Acharya, Data Mining: Multimedia, Soft Computing, and Bioinformatics, Wiley, Hoboken, NJ, 2003.
9. A. K. Ray and T. Acharya, Information Technology: Principles and Applications, Prentice Hall of India, New Delhi, India, 2004.
