AN IMAGE PROCESSING TOOL FOR THE ANALYSIS OF ACOUSTIC BUBBLE CLOUD DYNAMICS
Amir Hossein Foruzan¹,* & Abazar Hajnorouzi²
¹Department of Biomedical Engineering, Engineering Faculty, Shahed University, Tehran, Iran, P.O. Box: 18155/159
²Department of Physics, Faculty of Basic Science, Shahed University, Tehran, Iran, P.O. Box: 18155/159
*Address all correspondence to: Amir Hossein Foruzan, Department of Biomedical Engineering, Engineering Faculty, Shahed University, Tehran, Iran, E-mail: a.foruzan@shahed.ac.ir

Quantification of ultrasonic bubble clouds in different liquid environments reveals the properties of the liquids and the characteristics of the ultrasonic set. In this paper, we develop a toolbox to visualize and process bubble cloud videos. Quantifications include segmentation of the cloud region, speed calculation of different points on the front of the cloud, and calculation of the cloud cross section. The segmentation algorithm is based on the distance regularized level-set evolution technique. A user can adapt the parameter values to the cloud evolution condition to obtain accurate results. The average Dice measure of the segmentation results was 0.950 ± 0.023. The analysis of point speeds complies with manual outcomes as well. We plan to quantify more ultrasonic vibration parameters and reconstruct the bubble's 3D volume using two orthogonal videos.

KEY WORDS: acoustic bubble cloud, level-set segmentation, image processing, ultrasonic horn

Original Manuscript Submitted: 12/16/2020; Final Draft Received: 5/7/2021

1. INTRODUCTION

The cavitation phenomenon is a major research topic in the physics of ultrasound. Using hydrodynamic principles, researchers have shown that a fast-flowing current causes a pressure drop in fluids. Fluid pressure is the sum of the dynamic and static pressures. The dynamic pressure increases with the fluid's velocity; therefore, the static pressure decreases accordingly. The liquid then evaporates locally, which in turn creates bubbles. This phenomenon is termed cavitation (Van Wijngaarden, 2016; Halliday et al., 2013; Verhaagen and Rivas, 2016). The oscillation frequency of an ultrasonic probe is at least 20,000 Hz. The ultrasonic waves increase or decrease the fluid pressure. When the acoustic pressure is in its negative phase and lower than the critical pressure of the fluid, small bubbles are generated (Bai et al., 2014; Kauer et al., 2017). The inertial forces and the spherical convergence of the liquid's elements then focus the energy when the bubbles collapse in the positive phase of the acoustic pressure (Suslick and Price, 1999; Kuppa and Moholkar, 2010). The bubbles collapse suddenly and energetically, which results in tremendous local pressure and temperature (Hajnorouzi et al., 2014; Hajnorouzi and Afzalzadeh, 2019).
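The hydrodynamic onset condition described above can be illustrated with a short numerical sketch. The constants are textbook values for water at room temperature, used purely for illustration; they are not measurements from this work.

```python
# Bernoulli's relation: static pressure = total pressure - dynamic pressure.
# Hydrodynamic cavitation onset: the static pressure drops below the
# vapor pressure of the liquid. All constants are textbook values.

RHO_WATER = 1000.0   # density of water, kg/m^3
P_TOTAL = 101325.0   # total (stagnation) pressure, Pa
P_VAPOR = 2339.0     # vapor pressure of water at ~20 degrees C, Pa

def static_pressure(v):
    """Static pressure remaining after the dynamic term 0.5*rho*v^2 (Pa)."""
    return P_TOTAL - 0.5 * RHO_WATER * v * v

def cavitates(v):
    """True when the static pressure falls below the vapor pressure."""
    return static_pressure(v) < P_VAPOR

print(static_pressure(5.0))             # 88825.0 Pa: no cavitation yet
print(cavitates(5.0), cavitates(15.0))  # False True
```

At 5 m/s the dynamic term removes only 12.5 kPa, but at 15 m/s it exceeds the total pressure, so the static pressure falls below the vapor pressure and bubbles can form.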
Investigating a cloud reveals important information, including its symmetry property, the relation between bubble size and distance to the probe, the distribution of bubble size and speed, and the dispersion pattern of returning bubbles.
2. PREVIOUS WORK
Researchers have analyzed the dynamics of clouds by acquiring images of the acoustic bubbles. Maeda and Colonius (2019) used high-speed images to study the cloud's symmetry properties under different ultrasonic wavelength regimes. They also investigated the distance from the center of a probe to the cloud. They modified their theories and introduced new parameters to describe the dynamics of clouds by analyzing bubble images. Xin et al. (2018) used image processing techniques to analyze the morphology and size of the bubble formed by acoustic droplet vaporization in order to quantify thermal lesion shape. The image processing techniques employed chiefly segmented the cloud and calculated its volume from B-mode images. The experiments consisted of vaporization by pulsed-wave ultrasound before continuous-wave ultrasound heating. Ohl et al. (2015) reviewed the characterization of bubbles with shock waves and ultrasound. In one paper, the authors obtained their images at speeds of up to 150,000 fps. Qualitative assessment of the acquired images was the primary use of image processing techniques. There are also ultrasound applications in medical imaging (Liu et al., 2019) and underwater acoustic image processing (Song et al., 2016).
Image processing techniques in acoustic experiments are usually restricted to qualitative assessment of the evolution of the cloud. Moreover, the input videos are obtained with high-speed cameras, which are not available in common physics labs. This paper develops an image processing tool to analyze the bubble clouds produced by acoustic waves. To the best of our knowledge, this is the first time that such a software research tool has been developed and made available to researchers. Moreover, the developed tool can handle the restrictions of general-purpose video acquisition devices. The software's analysis includes cloud size, measurement of cloud speed, and qualitative measures such as symmetry/asymmetry properties. This tool is available as a low-cost application for students and general acoustics labs to study the characteristics of different liquids under acoustic vibrations.
Moreover, the conventional approach is to employ a high-speed camera and image processing techniques for analysis; we instead propose using traditional image acquisition equipment. The short duration and 3D nature of cavitation make the phenomenon difficult to quantify and qualify. Therefore, quantifying a cloud is mainly performed by numerical simulation.
This paper is organized as follows. In Section 3, we give the details of the developed tool and the corresponding image processing algorithm. Section 4 presents and discusses the results. We conclude the paper and outline future work in Section 5.
3. METHOD
3.1 Outline of the Proposed Method
We introduce an image processing tool, developed in the MATLAB environment, to quantify and classify bubble cloud images. The pipeline of our approach and a snapshot of the image processing tool are presented in Figs. 1 and 2, respectively. After reading the input video ("Load Video" button in Fig. 2), the user browses the whole video to observe the cloud front evolution ("Browse Slices," "Slow Motion Show," and "Go to a Slice" sections in Fig. 2).
FIG. 1: The pipeline of the proposed method
FIG. 2: A snapshot of the developed GUI
The input parameters of the segmentation algorithm are the kernel size, the start/end frame numbers, the illumination enhancement thresholds, and the initial contour. The kernel size is in pixels, and it defines the contour growth from one slice to the next. For slow/fast evolutions, the kernel size is decreased/increased accordingly. The evolution of a cloud may contain several fragments based on the average speed of growth. It starts at an initial velocity; then, its speed of evolution increases suddenly after some frames. By browsing the video, the evolution speed of the cloud is estimated in pixels and entered as the "Kernel Size" parameter in Fig. 2. We treat each fragment individually with its own contour parameters to achieve the best segmentation results. We set the start/end frames ("Start Frame" and "End Frame" edit boxes in Fig. 2) and the kernel size, and click the "Add Fragment" button to define a fragment.
In some cases, the illumination of the environment is low, which can be compensated by contrast enhancement. To improve the contrast of a video, the user clicks the "Improve Contrast" checkbox and presses the "Show Frame" button (Fig. 2). A typical frame is then shown in a new figure. The user reads the maximum intensity of the background pixels and the minimum gray level of the cloud region and sets these values in the "Low-Int" and "High-Int" edit boxes, respectively (Fig. 2).
Next, the user clicks the "Set Initial Contour" button and draws a polygon that encompasses the tip of the probe and the cloud. Segmentation starts by clicking the "Segment Frames" button. The segmentation results may be displayed using the "Browse" button. The boundaries of distinct fragments are put together using the "Integrate Results" button. The outcome of the algorithm is saved to disk as separate PNG files for further survey ("Save as PNG" button in Fig. 2).
After segmentation of the bubble clouds in the video frames, wave speed calculation starts with the software calibration ("Calibrate the Camera" in Fig. 2). The user puts two points defining the width of the probe and sets this distance in millimeters; the size of a pixel is thereby determined. The "Frame per Second" parameter is set in the calibration stage as well. The user then puts one or more points on the front of the ultrasound probe ("Wave Speed Calculation" in Fig. 2); the software follows each point in subsequent frames to calculate its speed. The "Calculate" button (Fig. 2) draws curves showing the distance (in mm) of the point relative to its initial position. It also reveals the cross section of the cloud (in mm, Fig. 3).
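The calibration and speed steps above reduce to simple arithmetic, sketched below. The helper names are ours, not the toolbox's, and the numbers are illustrative.

```python
import math

def pixel_size_mm(p1, p2, probe_width_mm):
    """Calibration: two clicked points (x, y) span the known probe width."""
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return probe_width_mm / dist_px

def point_speeds_mm_s(positions_px, fps, mm_per_px):
    """Per-frame speeds of a tracked front point, in mm/s."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions_px, positions_px[1:]):
        d_mm = math.hypot(x1 - x0, y1 - y0) * mm_per_px
        speeds.append(d_mm * fps)
    return speeds

# A 22 mm probe spanning 220 px gives 0.1 mm/px; a point moving 5 px per
# real frame at 480 fps then travels about 240 mm/s.
mmpp = pixel_size_mm((0, 0), (220, 0), 22.0)
print(mmpp)
print(point_speeds_mm_s([(0, 0), (3, 4), (6, 8)], 480, mmpp))
```

The same conversion gives the cloud cross section in mm² once the mask area in pixels is multiplied by the square of the pixel size.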
3.2 Segmentation Algorithm
Segmentation is a class of image processing algorithms that labels image pixels into two groups: object (foreground) and background. Object and background pixels are assigned 1 and 0, respectively. The output of segmentation is a binary mask that reveals the location of the object. There is a wide spectrum of segmentation algorithms, from low-level techniques (such as thresholding) to high-level methods [e.g., the active shape model (ASM)]. Simple approaches are appropriate for images with a flat appearance in both the object and background regions. Sometimes the intensities of the image follow a complex statistical model, the signal-to-noise ratio is insufficient, and parts of the object and background have similar intensity models. Enhanced segmentation algorithms such as ASM, deep neural networks (DNN), and active contours (AC) are preferable in such cases.
FIG. 3: (a) Initial contour; (b)–(h) evolution of the cloud boundary in the subsequent frames
Active contour methods are classified into parametric and geometric algorithms. Geometric active contours (GACs) are the newer variant and remove some of the challenges of the parametric approaches. They are based on a mechanical notion that models the flow, merging, and splitting of sea waves by a differential equation. The approach considers a segmentation problem in Rⁿ as a problem in Rⁿ⁺¹. Compared to parametric active contours, GACs extend easily from 2D to 3D and higher dimensions, and they solve the problem of merging and splitting.
There are several GAC variants, including edge-based, region-based, and shape-aware versions. In the conventional GAC algorithm, a user draws an initial contour located inside or outside the intended object. This boundary defines a level-set function φ(x) that represents the signed distance from the contour. The zero level-set (i.e., φ(x) = 0) determines the boundary of the object. The GAC algorithm deforms the level-set, and consequently the initial contour, until it reaches the true edges of the foreground. A partial differential equation governs the evolution of the contour:

∂φ/∂t = −α A(x)·∇φ − β P(x)|∇φ| + γ Z(x) κ |∇φ|.   (1)
In Eq. (1), φ is the level-set function, and A(x), P(x), and Z(x) are the advection, propagation, and curvature maps, respectively. The advection map displaces the level-set function on the image. The propagation term expands or shrinks the level-set, and the curvature term controls the smoothness of the zero level-set. The coefficients α, β, and γ are the weights controlling the advection, propagation, and curvature terms. Variants of the GAC add further terms to Eq. (1) to find true edges in low-contrast and inhomogeneous images.
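To make the roles of the terms concrete, the sketch below takes one explicit Euler step of the propagation term −β P(x)|∇φ| from Eq. (1) on a toy grid. Advection and curvature are omitted for brevity; this is an illustration of the update rule, not the toolbox's solver.

```python
def grad_mag(phi, i, j):
    """|grad(phi)| at an interior grid point, by central differences."""
    gx = (phi[i + 1][j] - phi[i - 1][j]) / 2.0
    gy = (phi[i][j + 1] - phi[i][j - 1]) / 2.0
    return (gx * gx + gy * gy) ** 0.5

def propagation_step(phi, P, beta, dt):
    """One Euler step of d(phi)/dt = -beta * P(x) * |grad(phi)|."""
    out = [row[:] for row in phi]  # boundary values kept fixed
    for i in range(1, len(phi) - 1):
        for j in range(1, len(phi[0]) - 1):
            out[i][j] = phi[i][j] - dt * beta * P[i][j] * grad_mag(phi, i, j)
    return out

# A distance-like phi; with P = 1 everywhere each interior value drops by
# dt * beta * |grad phi|, which shifts the zero level-set of phi.
phi = [[0.0, 1.0, 2.0],
       [1.0, 2.0, 3.0],
       [2.0, 3.0, 4.0]]
P = [[1.0] * 3 for _ in range(3)]
print(propagation_step(phi, P, beta=1.0, dt=0.1)[1][1])  # 2 - 0.1*sqrt(2)
```

In a full GAC solver the same stencil is combined with the advection and curvature terms and iterated until the zero level-set settles on the object edges.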
Our segmentation approach uses the level-set algorithm (Balcilar, 2020; Li et al., 2010). Since the GAC method is sensitive to leakage in low-contrast images, we modify it to obtain fast and robust behavior in bubble cloud segmentation. First, the user draws a polygon around the probe tip and the cloud by placing some points (Fig. 3a). The first two points are important since they represent a line precisely on the edge of the probe. We use this initial contour in the next steps: it prevents the GAC algorithm from being trapped in a local minimum, and it defines a region of interest (ROI) as well. The ROI reduces the processed image to 1/10 of its original size; therefore, it reduces the memory requirement of the algorithm and increases the speed of the code. A morphological filter dilates the segmented cloud region by a structuring element ("Kernel Size" parameter in Fig. 2) to define the maximum evolution region of the GAC contour in each frame.
The level-set evolves the contour towards the prominent regions of an image, namely the edges of an object. Among the variations of the level-set, Li et al. (2010) proposed a variant called "distance regularized level set evolution" (DRLSE) to tackle the problem of signed distance function initialization (Balcilar, 2020). They argued that explicit initialization of the distance function has several issues, including when and how to perform the initialization. Moreover, the straightforward approach results in instability of the solution. They solved these problems by adding a distance term to the original level-set formulation:
∂φ/∂t = μ div(d_p(|∇φ|) ∇φ) − ∂ε_ext/∂φ.   (2)
In Eq. (2), the parameters φ, d_p, and ε_ext are the level-set function, the derivative of the potential function corresponding to the distance regularizer, and the external energy function derived from the image, respectively. The image gradient usually drives the external energy function. The potential function d_p is defined as
d_p(s) = p′(s)/s.   (3)
The authors defined three variants of the distance potentials and performed several experiments to verify the stability of their level-set (Balcilar, 2020; Li et al., 2010). The segmentation of a frame is dilated and used as the initial contour of the next frame.
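For concreteness, one of the potentials proposed by Li et al. (2010) is the double-well form, whose derivative is p′(s) = sin(2πs)/(2π) for s ≤ 1 and s − 1 for s > 1; Eq. (3) then gives a d_p(s) that stays bounded (its limit at s = 0 is 1). A minimal sketch:

```python
import math

def dp_double_well(s):
    """d_p(s) = p'(s)/s for the double-well regularizer of Li et al. (2010).

    p'(s) = sin(2*pi*s)/(2*pi) for s <= 1, and s - 1 for s > 1.
    The singularity at s = 0 is removable: sin(x)/x -> 1.
    """
    if s == 0.0:
        return 1.0
    if s <= 1.0:
        return math.sin(2.0 * math.pi * s) / (2.0 * math.pi * s)
    return (s - 1.0) / s

# d_p vanishes at s = 1 (the signed-distance state |grad phi| = 1 sits in a
# well of the potential) and approaches 1 as s grows large:
print(dp_double_well(1.0), dp_double_well(2.0))
```

Because d_p is bounded, the diffusion coefficient in Eq. (2) never blows up near |∇φ| = 0, which is the stability property the distance regularization provides.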
A commonly available mobile phone with a frame rate of 960 fps performed the image acquisition. The real rate is 480 fps, and the remaining frames are copies of the original images (false frames). To calculate the wave speed, we employ only the real frames, in which the cloud has shifted.
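The duplicated frames can be filtered by comparing each frame with the last retained one; how the toolbox detects them is not detailed in the text, so the mean-absolute-difference threshold below is our assumption.

```python
def real_frame_indices(frames, tol=0.0):
    """Indices of 'real' frames: those that differ from the last kept frame.

    A 960 fps stream that is really 480 fps contains each image twice; a
    frame whose mean absolute difference from the previously kept frame is
    <= tol is treated as a duplicated 'false' frame and skipped.
    (Assumed detection rule; the toolbox's internal test is not documented.)
    """
    keep = [0]
    for k in range(1, len(frames)):
        prev, cur = frames[keep[-1]], frames[k]
        diff = sum(abs(a - b)
                   for row_p, row_c in zip(prev, cur)
                   for a, b in zip(row_p, row_c))
        if diff / (len(cur) * len(cur[0])) > tol:
            keep.append(k)
    return keep

# Five 1x1 'frames' where each real image appears twice:
frames = [[[0]], [[0]], [[5]], [[5]], [[9]]]
print(real_frame_indices(frames))  # [0, 2, 4]
```

A small positive `tol` would also absorb sensor noise, so that nearly identical frames are still treated as duplicates.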
4. RESULTS AND DISCUSSION
We employed the proposed GUI for the quantification of bubble clouds. We used 40 movies in MP4 format taken in our laboratory and processed them with the toolbox. The size of the input videos was 1280 × 720 pixels, and the frame rate was 960 fps. The developed GUI was implemented in MATLAB 2014a and was run on a personal computer with an Intel Core™ i7 (2.20 GHz) CPU and 8 GB DRAM running 64-bit Windows 7. The time needed to segment a frame depends on the size of the cloud: it took 3 s for initial frames and between 8 and 10 s for final frames. In Fig. 4, we show typical segmentation results.
The ultrasonic reactor contained a glass vessel of 244 (height) × 930 (width) × 720 (thickness) mm³. The horn tip was submerged 10 mm into the liquid. The ultrasonic horn (FAPAN Co., Iran, Model: 1200 UT) generated the acoustic waves in the vessel.
FIG. 4: Typical evolutions of the contour are shown; every fourth frame is given. The size of the probe is 22 mm.
A power meter socket (UNI-T UT230B) measured the root-mean-square power, which was 250 W. The transducer frequency, measured with an oscilloscope (Tektronix TDS220, 100 MHz), was 19.8 kHz.
The probe of the ultrasonic transducer had three parts: a sandwich transducer (Langevin type), a titanium booster, and a titanium horn tip screwed onto the end of the titanium booster. The sandwich transducer (converter) generated an ultrasonic wave by piezoelectric ceramics. The booster amplifies the vibrations mechanically and then transmits them to the titanium horn tip. The tip of the titanium horn is a two-step cylinder with an end diameter of 20 mm and a length of 141 mm.
A cellular phone captured the images of the bubble clouds. A white light (24 W) illuminated the container, and black paper covered the back of the container. We filled the container with deionized water.
For quantitative evaluation, we used the Dice measure defined in the following equation:
Dice = 2|A ∩ M| / (|M| + |A|).   (4)
In Eq. (4), M and A refer to the manual and automatic segmentations, respectively, and |·| is the number of pixels. In Fig. 5, we have plotted the frame-by-frame manual segmentation results of one typical video and compared them with the automatic segmentation. If the automatic segmentation completely matches the manually defined boundary, the Dice measure is 100%. As shown in Fig. 5, the minimum Dice index is about 90%. In the initial stages of the cloud evolution, the results are lower than in the final stages, because over-segmentation has more effect on smaller clouds than on large clouds.
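Eq. (4) amounts to a few sums over the binary masks; a minimal sketch:

```python
def dice(auto_mask, manual_mask):
    """Dice = 2|A intersect M| / (|A| + |M|) over 0/1 masks, as in Eq. (4)."""
    inter = sum(a * m for row_a, row_m in zip(auto_mask, manual_mask)
                for a, m in zip(row_a, row_m))
    size_a = sum(sum(row) for row in auto_mask)
    size_m = sum(sum(row) for row in manual_mask)
    return 2.0 * inter / (size_a + size_m)

# 3 automatic pixels, 2 manual pixels, 2 shared -> Dice = 4/5.
A = [[1, 1], [1, 0]]
M = [[1, 1], [0, 0]]
print(dice(A, M))  # 0.8
```

The measure penalizes both over- and under-segmentation symmetrically, which is why over-segmented small clouds pull the early-frame scores down.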
The average Dice for the test frames was 0.950 ± 0.023, which is an excellent result. As Fig. 5 shows, in some cases the results are lower than the mean value. These cases are shown in Fig. 6, in which the yellow (solid) arrows mark the defective regions. If the size of the ROI were increased and the user could change the number of iterations of the GAC algorithm, these imperfections would not occur.
The proposed tool measures the traverse of cloud points at the edge of the ultrasound probe (Fig. 7). It quantifies the speed of the cloud at the sides and center of the probe. Moreover, we compute the area of the cloud cross section for further use.
The input to our algorithm can be high-frame-rate videos from conventional mobile phones as well as from expensive high-speed cameras.
To evaluate the performance of our algorithm, we changed the experimental conditions, including the probe size, the distance between the camera and the probe, and the environment illumination (Fig. 8). As shown in Fig. 8, the results are appropriate when the algorithm parameters are selected correctly.
A user can change the parameters of the segmentation algorithm when the results are not good enough, repeating this several times until appropriate outcomes are obtained. In some cases, one can divide a single video into three or more pieces and process each piece individually to achieve good results.
FIG. 5: Segmentation results of typical frames. The X axis is the frame number.
FIG. 6: Images (a)–(d) correspond to frames 6, 18, 28, and 42
We evaluated the sensitivity of the algorithm's parameters. As stated earlier, the segmentation result of a frame is expanded and used as the initial contour in the next frame. A dilation filter with a disk-shaped structuring element expands the contour; we call the size of the structuring element the kernel size. When the kernel size is small, the initial contour may not encircle the cloud, and part of the cloud is missed (Fig. 9). If the start and end of a fragment are not selected correctly, we may lose the cloud too. Improper initialization of the contour would also result in missing part of the cloud region.
FIG. 7: (a) Three points were chosen on the ultrasonic probe; (b) the developed application calculates their traverse.
FIG. 8: Segmentation of the cloud evolution in different conditions. Upper row: the probe size is 3 mm. Lower row: the probe size is 22 mm, with the camera near the vessel.
5. CONCLUSIONS
This paper developed an image processing toolbox in the MATLAB environment to process videos of bubble clouds. The proposed GUI provides a simple and efficient environment to study ultrasonic bubble cloud dynamics in different environments. In the future, we will add more capabilities to the developed toolbox, including reconstruction of the 3D cloud volume from two videos taken in orthogonal directions.
REFERENCES
Bai, L., Xu, W., Deng, J., Li, C., Xu, D., and Gao, Y., Generation and Control of Acoustic Cavitation Structure, Ultrason. Sonochem., vol. 21, no. 5, pp. 1696–1706, 2014.
Balcilar, M., DRLSE-Image-Segmentation, GitHub, accessed December 4, 2020, from https://github.com/balcilar/DRLSE-Image-Segmentation, 2020.
Hajnorouzi, A., Afzalzadeh, R., and Ghanati, F., Ultrasonic Irradiation Effects on Electrochemical Synthesis of ZnO Nanostructures, Ultrason. Sonochem., vol. 21, no. 4, pp. 1435–1440, 2014.
Hajnorouzi, A. and Afzalzadeh, R., A Novel Technique to Generate Aluminum Nanoparticles Utilizing Ultrasound Ablation, Ultrason. Sonochem., vol. 58, p. 104655, 2019.
Halliday, D., Resnick, R., and Walker, J., Fundamentals of Physics, New York: John Wiley & Sons, 2013.
Kauer, M., Belova-Magri, V., Cairós, C., Schreier, H.-J., and Mettin, R., Visualization and Optimization of Cavitation Activity at a Solid Surface in High Frequency Ultrasound Fields, Ultrason. Sonochem., vol. 34, pp. 474–483, 2017.
Kuppa, R. and Moholkar, V.S., Physical Features of Ultrasound-Enhanced Heterogeneous Permanganate Oxidation, Ultrason. Sonochem., vol. 17, no. 1, pp. 123–131, 2010.
FIG. 9: A small kernel size makes the evolving contour miss part of the cloud boundary. Left: frame 106; right: frame 107.
Li, C., Xu, C., Gui, C., and Fox, M.D., Distance Regularized Level Set Evolution and Its Application to Image Segmentation, IEEE Trans. Image Process., vol. 19, no. 12, pp. 3243–3254, 2010.
Liu, S., Wang, Y., Yang, X., Lei, B., Liu, L., Li, S.X., Ni, D., and Wang, T., Deep Learning in Medical Ultrasound Analysis: A Review, Engineering, vol. 5, no. 2, pp. 261–275, 2019.
Maeda, K. and Colonius, T., Bubble Cloud Dynamics in an Ultrasound Field, J. Fluid Mech., vol. 862, p. 1105, 2019.
Ohl, S.W., Klaseboer, E., and Khoo, B.C., Bubbles with Shock Waves and Ultrasound: A Review, Interface Focus, vol. 5, no. 5, p. 20150019, 2015.
Song, Z., Bian, B., and Zielinski, A., Application of Acoustic Image Processing in Underwater Terrain Aided Navigation, Ocean Eng., vol. 121, pp. 279–290, 2016.
Suslick, K.S. and Price, G.J., Applications of Ultrasound to Materials Chemistry, Annu. Rev. Mater. Sci., vol. 29, no. 1, pp. 295–326, 1999.
Van Wijngaarden, L., Mechanics of Collapsing Cavitation Bubbles, Ultrason. Sonochem., vol. 29, pp. 524–527, 2016.
Verhaagen, B. and Rivas, D.F., Measuring Cavitation and Its Cleaning Effect, Ultrason. Sonochem., vol. 29, pp. 619–628, 2016.
Xin, Y., Zhang, A., Xu, L.X., and Fowlkes, J.B., The Effects on Thermal Lesion Shape and Size from Bubble Clouds Produced by Acoustic Droplet Vaporization, Biomed. Eng. Online, vol. 17, no. 1, pp. 1–14, 2018.