
Nontarget-Based Displacement Measurement Using Computer Vision and 3-D Point Cloud

Academic year: 2023


This paper proposes non-target-based 6-DOF and in-plane structural displacement measurement approaches using a three-dimensional (3-D) point cloud. In the proposed non-target-based in-plane method, the laser-scanned 3-D point cloud of the structure, combined with high-resolution RGB information, is used to determine the precise physical displacement, tailored for field measurement.

Introduction

Motivation and Scope of the Research

A conversion algorithm for non-target-based 3-D displacement measurement that allows flexible camera positions for improved field usability needs to be developed. In addition, a non-target-based approach using LiDAR and a camera is presented to estimate the precise in-plane structural displacement in a field test.

Dissertation Organization

Literature Review

Displacement Measurement Methods

Among non-contact direct measurement methods, laser-based displacement sensors, such as total stations and light detection and ranging (LiDAR), and computer vision using optical devices have the advantage of remote sensing and offer more effective alternatives than the previously described non-contact methods [42]. As such, laser-based and computer-vision-based approaches are considered promising alternatives to conventional non-contact methods for practical full-scale displacement measurement applications.

Figure 2. Configuration of LDV for displacement measurement [41]

Computer Vision-based Displacement Measurement

  • Coordinate Transform

Finally, the total 6-DOF displacement of the structure can be estimated based on the translation to the world coordinate system. The in-plane displacement of the concrete specimen was estimated by the proposed method and compared with the reference measurement.

Figure 5. Artificial target for displacement measurement of structure [21]

Nontarget-Based Displacement Measurement

  • Feature Tracking in Nontarget-Based Methods
  • Coordinate Transform in Nontarget-based Method

Summary and Discussion

  • RGB-D Camera
  • Coordinate Systems
  • Coordinate Transform from Camera to World Coordinates
  • Estimation of Displacement

The main enabler of the proposed non-target-based method, for which prior information about the target structure is unnecessary, is the infrared (IR) camera that records depth information. For example, the temperature of the IR emitter in Kinect strongly affects the measured distance, as the emitter heats up during operation. The precision of the measured distance decreases as the physical distance increases, whereas the accuracy remains constant regardless of physical distance.

The obtained RGB camera coordinates of the measurement points must then be transformed into the world coordinate system to determine the displacement; the world coordinate system is independent of the camera position. For example, the horizontal and vertical axes on the surface of the target structure are specified as the X and Y axes of the world coordinate system, and their cross product gives the transverse Z axis, as shown in Figure 20 [73]. To determine the orientation of the world coordinates, points on the X and Y axes (e.g., the red circles in Figure 20) are selected in the first frame of the RGB images, for which the 3-D coordinates in the RGB camera coordinate system can be obtained using Equation (12).

The Y-axis unit direction vector dY can be obtained by following the same process.
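The construction of the world-coordinate orientation from the selected axis points can be sketched numerically. In this minimal example, `origin`, `pt_x`, and `pt_y` are hypothetical points already expressed in RGB camera coordinates (in practice obtained via Equation (12)); the helper name is ours, not the dissertation's:

```python
import numpy as np

def world_axes_from_points(origin, pt_x, pt_y):
    """Build the camera-to-world rotation matrix from three selected 3-D
    points: one on the X axis and one on the Y axis of the structure surface."""
    dX = (pt_x - origin) / np.linalg.norm(pt_x - origin)  # X-axis unit vector
    dY = pt_y - origin
    dY = dY - np.dot(dY, dX) * dX            # remove any component along X
    dY = dY / np.linalg.norm(dY)             # Y-axis unit vector
    dZ = np.cross(dX, dY)                    # transverse Z axis (cross product)
    return np.column_stack([dX, dY, dZ])     # columns are the world axes

# Axis-aligned example: axes 2 m in front of the camera.
R = world_axes_from_points(np.array([0.0, 0.0, 2.0]),
                           np.array([1.0, 0.0, 2.0]),
                           np.array([0.0, 1.0, 2.0]))
```

Orthogonalizing `dY` against `dX` before normalizing guards against slightly non-perpendicular point selections.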

Figure 17. Process of displacement measurement using RGB-D camera [73]

Validation

  • Shear Building Test
  • Pan-Tilt Motion Test

For the non-target-based displacement measurement using Kinect, the proper size and shape of the ROI were investigated through four different cases under the conditions of Test 1, as shown in Figure 22. From the results, the displacement measurement accuracy tends to increase when the ROI covers wider areas of the target. Although the Kinect results showed a high level of noise due to real-time depth measurement error, all displacement measurements using the proposed method showed good agreement with those from DSLR and LDV.

The RMSE values of the measured displacements from the proposed method compared to the references were calculated as shown in Table 6. The measuring plate on top of the pan-tilt unit consisted of two parts: the printed image of a concrete bridge for the Kinect on the left in Figure 26 and the checkerboard for the DSLR camera on the right. To limit the errors due to the low resolution of the Kinect, 10 images were captured for each movement and the average was used.

The RMSE values of the Y- and Z-rotations estimated by the Kinect and DSLR in each test are summarized in Table 8.

Figure 21. Experiment setup of the shear building motion test [73]

Summary and Discussion

In-plane Displacement Measurement Using 3-D Point Cloud

Displacement Measurement System

For the in-plane structural displacement measurement, the experimental system consisted of a LiDAR and a camera mounted on a holder so that they share a portion of the angular field of view (FOV), as shown in Figure 30. The camera was equipped with a mirror lens to improve the resolution of the RGB image for long-distance measurements. A LiDAR, a Leica BLK360, was used to obtain the laser-scanned 3-D point cloud of the structure.

The BLK360 acquires 3-D information of the environment, called a 3-D point cloud, by repeatedly emitting a pulse of laser beams with a wavelength of 830 nm and a divergence of 0.4 mrad. Due to the vertically rotating mirror and horizontally rotating base, the BLK360 can obtain unidirectional parallel scanning lines in a wide angular field of view. To measure the 3-D point cloud of the environment, the radial distance and angles of the digitized reflected signals are calculated based on the time-of-flight estimation improved by Waveform Digitization (WFD) technology and then converted to the corresponding spatial Cartesian coordinates around the LiDAR sensor.
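The range-to-Cartesian conversion described above can be sketched as follows. The angle conventions and variable names here are illustrative assumptions, not the BLK360's internal definitions (which also involve the WFD refinement of the time-of-flight estimate):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def range_from_tof(t):
    """Radial distance from a round-trip time-of-flight measurement t (s)."""
    return C * t / 2.0

def polar_to_cartesian(r, theta, phi):
    """Map a range r with vertical angle theta (rotating mirror) and
    horizontal angle phi (rotating base), both in radians, to Cartesian
    coordinates centered on the LiDAR sensor."""
    x = r * np.cos(theta) * np.cos(phi)
    y = r * np.cos(theta) * np.sin(phi)
    z = r * np.sin(theta)
    return np.array([x, y, z])

# A pulse returning after ~66.7 ns corresponds to a point roughly 10 m
# straight ahead of the sensor.
p = polar_to_cartesian(range_from_tof(66.7e-9), 0.0, 0.0)
```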

Scanning method: vertically rotating mirror (30 Hz) on a horizontally rotating base (2.5 mHz). Field of view: horizontal 360°, vertical 300°.

Figure 31. Leica BLK360

Coordinate Transform

To convert the image coordinates of the measurement points to the LiDAR coordinate system, let i be a point with coordinates (u, v) in the image coordinate system. Since the 3D point cloud obtained from the LiDAR represents the physical distance z1 between the camera and the structure, the scale factor can be estimated based on the calibration process performed before the measurement. For example, the relationship between two normal vectors of planes formed by the LiDAR and camera coordinates of the ROI is used, as shown in Figure 33.
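This conversion can be sketched with a standard pinhole camera model. Here `K`, `R`, `T`, and the depth `z` are stand-ins for the calibrated quantities, not the exact symbols of the dissertation's Equations (28) and (29):

```python
import numpy as np

def image_to_lidar(u, v, z, K, R, T):
    """Back-project pixel (u, v) with depth z (taken from the LiDAR point
    cloud, which supplies the scale factor) into camera coordinates, then
    map the point into the LiDAR frame with the calibrated rotation R and
    translation T."""
    p_cam = z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # camera coords
    return R @ p_cam + T                                     # LiDAR coords

# Example: the principal-point pixel maps straight down the optical axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = image_to_lidar(320.0, 240.0, 5.0, K, np.eye(3), np.zeros(3))
```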

To detect an approximate plane for each 3-D point cloud of the ROI, random sample consensus (RANSAC) is applied [86]. The approximate plane is derived by iteratively fitting a consensus model based on the inliers within the 3-D point cloud of the ROI. Based on the rotation R and translation T from Equations (28) and (29) and the scale factor s from the calibration process, the image coordinates can be converted to LiDAR coordinates regardless of the camera position.
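The consensus step can be sketched as below. This is a minimal illustration of RANSAC plane detection, not the implementation used in the dissertation [86]:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, seed=0):
    """Minimal RANSAC plane detection for a 3-D point cloud: repeatedly fit
    a candidate plane through 3 random points and keep the model with the
    most inliers. Returns (unit normal, point on plane) and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # skip degenerate (collinear) samples
        n = n / norm
        inliers = np.abs((points - p0) @ n) < tol  # point-to-plane distance
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, p0)
    return best_model, best_inliers
```

In practice the inliers of the best model are then refined with a least-squares plane fit.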

To determine the structural displacement, the obtained LiDAR coordinates of the measurement points must be transformed to the world coordinate system.

Figure 32. Coordinate system of displacement measurement system

Displacement Estimation

Finally, the rotation matrix R from Equation (20) and the translation vector T from Equation (18) can be estimated from the defined world coordinate system. Because the camera has a small field of view, the RGB image typically does not contain the information necessary for camera rotation angle estimation, such as a static background and standard structural axes. Moreover, the camera rotation information is difficult to obtain from the 3-D point cloud due to the automatic tilt correction by the integrated inertial measurement unit (IMU) sensor of the LiDAR [87].

Let (α1, β1) and (α2, β2) be the coordinates of the two endpoints of the longest line segment. Structural in-plane displacement can be estimated based on changes in the corrected world coordinate system relative to the initial position of the structure. The center of the tracked feature points in the ROI, expressed in the world coordinate system, is used as the representative position of the structure for each measurement.

Therefore, the difference between the centroids of the initial position and each subsequent measurement in the corrected world coordinates becomes the displacement in the plane of the structure.
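As a sketch of this centroid-difference step: `R_world` below denotes the rotation from LiDAR coordinates to the corrected world coordinates; the function and variable names are ours, not the dissertation's:

```python
import numpy as np

def in_plane_displacement(ref_points, cur_points, R_world):
    """In-plane displacement as the difference between the centroids of the
    tracked feature points at the initial position and at the current
    measurement, expressed in the corrected world coordinate system."""
    c0 = ref_points.mean(axis=0)   # centroid of the initial ROI features
    c1 = cur_points.mean(axis=0)   # centroid at the current measurement
    d = R_world @ (c1 - c0)        # express the motion in world axes
    return d[:2]                   # keep only the in-plane X, Y components

# Example: three tracked features shifted by 5 mm in X and -2 mm in Y.
pts = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 5.0], [0.0, 0.1, 5.0]])
d = in_plane_displacement(pts, pts + np.array([0.005, -0.002, 0.0]), np.eye(3))
```

Using the centroid of many tracked features, rather than a single point, averages out individual tracking errors.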

Experimental Validation

  • Concrete Specimen Movement Test
  • Out-of-Plane Motion Test
  • Field Test

The RMSE values of the measured displacement from the proposed method with respect to the reference were calculated as shown in Table 12. The test results indicate that the proposed non-target-based method can accurately estimate the in-plane displacement of the structure. Note that the 3-D point cloud of the wall was used to provide the reference plane and estimate the rotation angle of the concrete specimen.

RANSAC was applied to detect the estimated planes for the 3-D point clouds of the ROI and the wall. The in-plane displacement of the concrete specimen was estimated by the proposed method and compared with that of the DSLR measurement. The RMSE values of the corrected displacements from the proposed method with respect to the references were calculated as shown in Table 13.

Because out-of-plane motion occurred due to the inclined surface of the ROI, as shown in Figure 44, the errors in the measured in-plane displacement were taken into account.

Figure 35. Initial experimental setup (the photo at the top right is a panoramic photo); see also Table 11

Conclusions and Future Work

Conclusions

The proposed method was validated in two laboratory tests using a two-story shear building and a pan-tilt unit with a measuring plate for translational and rotational movements. The displacement measured by the proposed method showed good agreement with the reference data obtained by a DSLR camera and an LDV. Thus, the proposed approach using an RGB-D camera showed the potential to measure 6-DOF structural displacement without an artificial target in a field measurement environment.

Although the proposed non-target-based approach in Section 3 could measure structural displacement without using an artificial target, it had practical limitations: the measurement accuracy was negatively affected by the accuracy of the measured depth and by the low RGB resolution. First, the feature points in the image coordinate system were converted to the LiDAR coordinate system using the rotational and translational relationship between the two approximate planes formed by the ROIs in the two coordinate systems. The proposed method was validated in two series of laboratory tests on a concrete specimen and then in a field test on a large railway bridge.

Therefore, the proposed non-target-based method accurately measured the in-plane displacement of the structure without using a target, which provides a promising technique for displacement measurement in practice.

Future Work

References

Sim, "Computer vision-based measurement of structural displacements robust to light-induced image degradation for in-service bridges," Sensors, vol.
Myung, "Vision-Based Displacement Measurement Method for Tall Building Structures Using a Distribution Approach," NDT E Int., vol.
Masri, "A vision-based approach for direct measurement of displacements in vibrating systems," Smart Mater.
Tanaka, "Vision-Based Displacement Sensor for Monitoring Dynamic Response Using Robust Object Search Algorithm," IEEE Sens.
Catbas, "Computer-Based Displacement and Vibration Monitoring without Using Physical Target on Structures," Struct.
Kim, "Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures," Sensors, vol.
Shinozuka, "Cost-effective vision-based system for monitoring the dynamic response of civil engineering structures," Struct.

Figures

Figure 5. Artificial target for displacement measurement of structure [21]
Figure 7. The coordinate system of the camera model
Figure 8. Scale factor for pinhole cameras [68]
Figure 9. Planar homography transform between image and marker [11]
