
A LiDAR-assisted Smart Car-following Framework for Autonomous Vehicles

Item Type Conference Paper

Authors Yi, Xianyong; Ghazzai, Hakim; Massoud, Yehia Mahmoud

Citation Yi, X., Ghazzai, H., & Massoud, Y. (2023). A LiDAR-assisted Smart Car-following Framework for Autonomous Vehicles. 2023 IEEE International Symposium on Circuits and Systems (ISCAS). https://doi.org/10.1109/iscas46773.2023.10181437

Eprint version Post-print

DOI 10.1109/iscas46773.2023.10181437

Publisher IEEE

Rights This is an accepted manuscript version of a paper before final publisher editing and formatting. Archived with thanks to IEEE.

Download date 2024-01-16 23:19:50

Link to Item http://hdl.handle.net/10754/693197


A LiDAR-assisted Smart Car-following Framework for Autonomous Vehicles

Xianyong Yi, Hakim Ghazzai, and Yehia Massoud

Innovative Technologies Laboratories (ITL), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia. Email: {xianyong.yi, hakim.ghazzai, yehia.massoud}@kaust.edu.sa

Abstract—In this paper, we investigate an innovative car-following framework where a self-driving vehicle, identified as the follower, autonomously follows another leading vehicle. We propose to design the car-following strategy based only on the environmental LiDAR data captured by the follower and the GNSS input. The proposed framework is composed of several modules, including the detection module using the PointNet++ neural network, the continuous calculation of the leader's driving trajectory, and the trajectory following control module using the BP-PID method. Comparison experiments and analysis have been performed on the Carla simulator. Results show that our proposed framework can work effectively and efficiently in the defined car-following tasks, and its success rate exceeds that of the Yolo-v5-based method by more than 13% under night conditions or rainy weather settings.

Index Terms—Intelligent Car-following, Autonomous vehicle, LiDAR and GNSS fusion, PointNet++.

I. INTRODUCTION

Intelligent car-following refers to the scenario in which the following vehicle maintains a specific relative driving relationship under the guidance of a leading vehicle. Since intelligent following can significantly reduce the occupancy rate of road resources, improve traffic efficiency, and further reduce energy consumption and traffic accidents, it is one of the most prominent research topics in autonomous driving [1].

It enables effective self-driving in traffic jams and highway situations as well as in convoys. Its application is not limited to intelligent transportation systems; it also covers other services in logistics, robotics, and other fields [2].

Basically, an intelligent car-following system realizes the driving of the following vehicle through the coordination of the perception, decision-making, motion planning, and control systems. The guidance information provided by the leading vehicle can be regarded as a high-level reference control signal. With the development of data-driven and artificial intelligence technologies, many researchers have proposed machine learning methods to learn human driving behavior in car-following situations. Wu et al. proposed a car-following model based on Deep Neural Networks (DNN) [3].

Tang et al. suggested an actor-critic network trained on real driving data [4]. Lin et al. designed a humanoid maneuver decision method based on the combination of a Long Short-Term Memory (LSTM) neural network and a conditional random field model [5]. Li et al. collected and analyzed real car-following driving data, and developed a DDPG car-following model with a reward function based on human driving characteristics [6].

Car-following achieves the tracking of the leading vehicle by comprehensively processing the acquired sensor information, through the detection of the leading vehicle and the perception of the environment [7]. Among these factors, the acquisition of data information directly affects the following performance of the following vehicle. However, current research on car-following is mainly based on image data and Vehicle-to-Vehicle (V2V) communication, and cannot perform well in situations where light is lacking or communication with the leading vehicle is unavailable, which significantly constrains the operation of the car-following system.

Fig. 1. The architecture includes three main phases: Information Perception, Guidance Information Inference, and Trajectory Following Control. The input of the whole framework is based on LiDAR data and GNSS location.

In order to overcome the misoperation of car-following systems under conditions where image-based methods cannot work well, we propose a novel intelligent car-following framework based on Light Detection and Ranging (LiDAR) and Global Navigation Satellite System (GNSS) information, as shown in Figure 1. The input of the system includes the environmental LiDAR data and the following car's GNSS information. We design a Leading Vehicle Detection module based on PointNet++ to detect the front vehicle from the massive LiDAR point cloud, and we develop a Leading Vehicle Global Coordinates Calculation module that uses the GNSS and relative position information to obtain the global position of the leading vehicle. The Trajectory Generation module then generates the driving trajectory as the reference information for the following vehicle. After that, the system executes Trajectory Following control, which includes lateral driving control and longitudinal control.

In a nutshell, the contributions of this study are as follows:

• We design a novel car-following framework for autonomous vehicles, where the system uses only LiDAR and GNSS information. As far as we know, it is the first proposed framework relying only on the LiDAR and GNSS of the following vehicle to successfully perform car-following tasks.

• We make use only of the information collected by the following vehicle, which reduces the system constraints. In addition, we propose an effective method to calculate the leading vehicle's global coordinates and, further, to obtain the driving trajectory of the leading vehicle.

• The PointNet++ neural network is adopted in our framework to perform leader detection from raw LiDAR point cloud data.

• We implement the experiments on the open-source simulator Carla, train PointNet++ using our generated dataset for cars, and execute car-following in four different weather settings to validate the effectiveness and efficiency of our proposed LiDAR-based framework.

Fig. 2. (a) and (b) show LiDAR data samples collected by the following vehicle. The center of both figures represents the following vehicle, and the up direction represents the front of the vehicle. (a) In a straight driving scenario, a vehicle has been detected to the right front of the following vehicle. (b) The following vehicle is driving at an intersection.

II. METHODOLOGY

To accomplish the LiDAR-based intelligent car-following task, our proposed framework requires the cooperation of three main functional phases: Information Perception, Guidance Information Inference, and Trajectory Following Control. In the Guidance Information Inference phase, PointNet++ is integrated for leading vehicle detection from the point cloud.

The Information Perception phase mainly includes self-positioning and environmental LiDAR data collection. The GNSS information of the autonomous car itself is an important reference, and we use it to calculate the global coordinates of the leading vehicle. A sample of the collected environmental LiDAR point data is shown in Figure 2.

In the Guidance Information Inference phase, the system processes the collected information to obtain, in real time, the richest possible information related to the leading vehicle, including the oriented bounding box fitting, the relative distance, and the relative angle. This information is the basis for determining the relative driving relationship between the leading vehicle and the following vehicle. Specifically, there are two key modules in this phase: the Leading Vehicle Detection module and the Trajectory Generation module. The Leading Vehicle Detection module detects and identifies the leading vehicle using the LiDAR sensor mounted on the following vehicle. Through the GNSS information of the following vehicle, the global position of the leading vehicle can be continuously obtained while driving. The trajectory of the leading vehicle is then naturally obtained in the Trajectory Generation module.

The aforementioned phases are mainly designed for the acquisition of information, which can be considered as the understanding of the world model. On the basis of this understanding, during the Trajectory Following Control phase, the system makes the next driving decision for the following vehicle and determines the control of its speed and steering.

The Trajectory Following Control phase consists of two modules: the Longitudinal Control module and the Lateral Control module. Longitudinal control refers to the control of vehicle speed during driving. The core concept is to coordinately control the accelerator and brake pedals of the vehicle, so as to achieve accurate speed tracking. Lateral control refers to the control of the lateral position of the vehicle, that is, the steering during car-following. To achieve this goal, the vehicle needs to manipulate the steering angle in real time, so as to ensure that the following car always stays on the desired path safely, smoothly, and accurately.

Although there is a coupling between the longitudinal and lateral dynamics of the vehicle, the vehicle performs emergency braking while simultaneously controlling the steering only when it faces emergency obstacle avoidance or other extreme situations. In order to reduce the system complexity, we decouple the longitudinal control and lateral control in this work.
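To make the cooperation of the three phases concrete before the module details, the following minimal sketch outlines one possible per-frame control loop; the helper callables (detection, coordinate conversion, lateral and longitudinal controllers, actuator interface) are hypothetical placeholders we introduce for illustration, not the authors' implementation.

```python
from typing import Callable, List, Tuple

def follow_one_step(
    lidar_points,                              # raw point cloud from the follower's LiDAR
    gnss_xy: Tuple[float, float],              # follower position (x_G, y_G) from GNSS
    heading: float,                            # follower heading in the global frame (assumed available)
    trajectory: List[Tuple[float, float]],     # accumulated leader positions (guidance path)
    detect_leader: Callable,                   # placeholder for the PointNet++-based detection -> (d, theta)
    to_global: Callable,                       # placeholder for the Eq. (1)-(2) conversion -> (x, y)
    lateral_control: Callable,                 # placeholder PID on the path deviation -> steering
    longitudinal_control: Callable,            # placeholder BP-PID on the spacing -> (throttle, brake)
    apply_control: Callable,                   # placeholder actuator interface of the follower
) -> None:
    # Information Perception + Guidance Information Inference:
    # detect the leader and extend the guidance trajectory in global coordinates.
    d, theta = detect_leader(lidar_points)
    if d is not None:
        trajectory.append(to_global(d, theta, gnss_xy, heading))

    # Trajectory Following Control (decoupled, as in this work).
    # A real system would also handle the case where the leader is not detected.
    steer = lateral_control(gnss_xy, trajectory)
    throttle, brake = longitudinal_control(d)  # e.g., regulate toward a 15 m spacing
    apply_control(steer=steer, throttle=throttle, brake=brake)
```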

III. MODULE COMPONENTS

A. Leading Vehicle Detection

Many recent studies have proposed methods to directly process 3D point cloud data. PointNet directly takes the 3D point cloud as input, introduces the T-Net structure to achieve rotation invariance of the point cloud, and uses a shared multi-layer perceptron to extract the features of each input point [8]. It then aggregates the information of all points through max pooling to obtain global features. This method effectively addresses the poor rotation invariance of the raw point cloud and is widely used in classification, part segmentation, semantic segmentation, and other tasks. However, it has notable issues: it pays too much attention to global features while ignoring local features, does not consider the structural information between points, and does not fully account for the adverse effects caused by the uneven density of point clouds. In addition, it has difficulty adapting to complex scenarios, so it is not suitable for our car-following task.

To address these issues, Qi et al. further proposed PointNet++ based on PointNet [9]. PointNet++ captures local geometric details through hierarchical downsampling. In addition, it adopts multi-scale grouping and multi-resolution grouping strategies to overcome the problem that sparse point information may be overlooked due to the uneven density of point cloud data. In our Leading Vehicle Detection module, we adopt PointNet++ to perform semantic segmentation on the input point cloud data, and the output is the probability score of the category to which each point belongs. The point cloud in each view frustum contains only one target object [10]. A binary evaluation is then adopted to determine whether a point belongs to the leading-vehicle target point cloud or to the non-target point cloud (such as ground, vegetation, or points that are occluded or located behind the target object), thereby removing non-target points such as occlusion and clutter. Finally, the required leading vehicle point cloud is segmented.
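As a rough illustration of this step, the sketch below assumes that PointNet++ has already produced a per-point foreground probability for the frustum points; it thresholds the scores, keeps the leading-vehicle points, and derives the relative distance and azimuth used by the later modules. The threshold value and the axis convention (x forward, y left) are our assumptions, not details taken from the paper.

```python
import numpy as np

def extract_leader(points: np.ndarray, scores: np.ndarray, thresh: float = 0.5):
    """Segment the leading-vehicle points from a frustum point cloud.

    points : (N, 3) LiDAR points in the follower's local frame
    scores : (N,) per-point probability of belonging to the leader
             (e.g., the PointNet++ segmentation output)
    Returns the leader points, the relative distance d, and the azimuth theta.
    """
    mask = scores > thresh                     # binary evaluation of each point
    leader = points[mask]
    if leader.shape[0] == 0:
        return leader, None, None              # leader not detected in this frame

    cx, cy = leader[:, 0].mean(), leader[:, 1].mean()  # centroid in the x-y plane
    d = float(np.hypot(cx, cy))                # relative distance
    theta = float(np.arctan2(cy, cx))          # relative azimuth (x forward, y left assumed)
    return leader, d, theta
```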



Fig. 3. (a) Car-following representation. (b) Mathematical representation of car-following, in which point M and point P represent the following vehicle and the leading vehicle, respectively.


B. Trajectory Generation of Leading Vehicle

The following vehicle is equipped with a LiDAR sensor, which is used to detect the distance d and azimuth θ of the leading vehicle relative to itself. Combined with the GNSS information of the following vehicle, the global position of the leading vehicle is calculated, and the guidance path is generated in real time.

As shown in Figure 3(b), let the leading vehicle be point P and the following vehicle be point M. To convert the radial measurement into plane rectangular coordinates, the system calculates the coordinates of P in the local LiDAR coordinate system, (x_P, y_P):

$$x_P = d\cos\theta, \qquad y_P = d\sin\theta \qquad (1)$$

We suppose that the coordinates of point M in the global coordinate system are (x_G, y_G), and that the rotation angle between the local LiDAR frame and the global plane Cartesian coordinate system (i.e., the heading of the following vehicle) is θ. The system can then obtain the coordinates (x, y) of point P in the global coordinate system as follows:

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x_G \\ y_G \end{bmatrix} + \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_P \\ y_P \end{bmatrix} \qquad (2)$$

In our car-following problem, the global position of the leading vehicle is continuously calculated, the guidance trajectory of the leading vehicle is then obtained, and it is used in the Trajectory Following Control phase.
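A minimal numerical sketch of Eqs. (1) and (2) is given below. It assumes that the rotation angle is the follower's heading in the global frame (e.g., derived from consecutive GNSS fixes), and the function name and example values are ours; it is an illustration under these assumptions, not the paper's code.

```python
import math

def leader_global_position(d, azimuth, follower_xy, follower_heading):
    """Convert the leader's polar measurement (d, azimuth) in the LiDAR frame
    into global coordinates, following Eqs. (1)-(2).

    d                : relative distance to the leader (m)
    azimuth          : leader azimuth in the follower's local frame (rad)
    follower_xy      : (x_G, y_G) follower position from GNSS
    follower_heading : rotation between the local and global frames (rad)
    """
    # Eq. (1): polar -> local Cartesian coordinates.
    x_p = d * math.cos(azimuth)
    y_p = d * math.sin(azimuth)

    # Eq. (2): rotate into the global frame and translate by the follower position.
    x_g, y_g = follower_xy
    c, s = math.cos(follower_heading), math.sin(follower_heading)
    x = x_g + c * x_p - s * y_p
    y = y_g + s * x_p + c * y_p
    return x, y

# Example: leader about 15 m ahead and slightly to the left of a follower heading east.
print(leader_global_position(15.0, 0.1, (100.0, 200.0), 0.0))
```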

C. Lateral Control

After acquiring the leading vehicle's trajectory, the system can directly use it to keep the following vehicle driving along the guidance path. In this paper, we choose to use the Proportional-Integral-Derivative (PID) controller for the Lateral Control. The PID algorithm takes the error between the target value and the actual value as input, and obtains the output value through proportional, integral, and derivative operations to perform control tasks [11]. Because of its simple structure, stability, and reliability, it is widely used in automatic control systems.

Generally, the following vehicle obtains its own GNSS information in real time and calculates the deviation of its real-time position from the guidance trajectory. According to this deviation, the controller performs PID adjustment to obtain the steering angle value to be regulated and sends it to the steering controller, which then performs the steering adjustment. These steps are repeated to perform path tracking control of the vehicle.

Fig. 4. BP-PID controller acting on the accelerator/brake pedals of the following vehicle.

Specifically, the controller uses the following vehicle's global position coordinates to calculate the shortest distance between it and the guidance trajectory, and determines whether the current position is on the left or right side of the path curve. We define the convention that left is positive and right is negative, and the signed distance error is used as the deviation signal. The system then uses this deviation signal to perform PID adjustment that keeps the vehicle driving on the trajectory.
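A minimal sketch of this lateral control step is shown below. The signed cross-track error uses the 2D cross product against the local path direction, and the PID gains, time step, and steering sign convention are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def signed_cross_track_error(pos, path):
    """Shortest distance from the follower's global position to the guidance
    path, signed positive on the left of the path and negative on the right."""
    path = np.asarray(path, dtype=float)
    pos = np.asarray(pos, dtype=float)
    dists = np.linalg.norm(path - pos, axis=1)
    i = int(np.argmin(dists))
    # Local path direction around the nearest trajectory point.
    nxt = path[i + 1] if i + 1 < len(path) else path[i]
    prv = path[i - 1] if i > 0 else path[i]
    tangent = nxt - prv
    to_car = pos - path[i]
    # 2D cross product decides the side: positive = left, negative = right.
    side = np.sign(tangent[0] * to_car[1] - tangent[1] * to_car[0])
    return float(side * dists[i])

class SteeringPID:
    """Discrete PID acting on the signed lateral deviation (illustrative gains)."""
    def __init__(self, kp=0.8, ki=0.01, kd=0.2, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        steer = self.kp * err + self.ki * self.integral + self.kd * deriv
        # The sign mapping to the steering actuator depends on the simulator convention.
        return float(np.clip(steer, -1.0, 1.0))
```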

D. Longitudinal Control

To perform the longitudinal control, we also adopt a PID controller. However, unlike the PID controller used in the Lateral Control module, which needs a very quick response so as not to drift far from the desired trajectory, we assume that the PID controller in the Longitudinal Control requires smoother adjustment and should avoid rapid speed changes. In order for PID control to achieve a good control effect in the Longitudinal Control, it is necessary to tune the proportional, integral, and derivative actions so that they cooperate with and constrain one another. Neural networks have the ability to approximate any nonlinear function, realize parallel cooperative processing, and perform strong self-learning. Hence, with the help of neural networks, we can find the most suitable combination of PID gains through the system performance measurements. In this framework, we employ a Back Propagation (BP) neural network, which aims to minimize the total error of the network by adjusting the weights, that is, by using gradient descent to minimize the error between the expected value and the actual value [12].

To efficiently measure the error in the Longitudinal Control, the system uses the distance between the leading vehicle and the following vehicle as the measurement criterion. The ideal distance is expected to be 15 meters, while the leading vehicle drives at a speed below 30 km/h. This paper adopts BP neural network PID control to realize the self-learning and self-adjustment of the parameters Kp, Ki, and Kd. The structure of the BP neural network PID controller is shown in Figure 4.

If Kp, Ki, and Kd are regarded as adjustable coefficients based on the operating state of the control system, the PID control law can be expressed as:

$$u(k) = f\left[u(k-1),\, K_p,\, K_i,\, K_d,\, e(k),\, e(k-1),\, e(k-2)\right] \qquad (3)$$

where f(·) is a nonlinear function associated with coefficients such as Kp, Ki, Kd, u(k−1), and y(k) [13]. The optimal parameters can be found through the BP neural network.
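As a rough sketch of how a BP network can adapt the PID gains online, the following minimal implementation maps the recent errors e(k), e(k−1), e(k−2) through a small fully connected network whose three sigmoid outputs are interpreted as Kp, Ki, and Kd in an incremental PID update. The network size, the learning rate, and the approximation of the plant Jacobian by its sign are our simplifying assumptions, not the paper's exact design.

```python
import numpy as np

class BPPID:
    """Incremental PID whose gains (Kp, Ki, Kd) are produced by a small
    back-propagation neural network and adapted online (illustrative sketch)."""

    def __init__(self, n_hidden=5, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_hidden, 3))   # inputs: e(k), e(k-1), e(k-2)
        self.w2 = rng.normal(scale=0.5, size=(3, n_hidden))    # outputs: Kp, Ki, Kd
        self.lr = lr
        self.e1 = self.e2 = 0.0      # e(k-1), e(k-2)
        self.u = 0.0                 # previous control u(k-1)

    def step(self, error):
        x = np.array([error, self.e1, self.e2])
        h = np.tanh(self.w1 @ x)                      # hidden layer
        gains = 1.0 / (1.0 + np.exp(-(self.w2 @ h)))  # sigmoid keeps gains in (0, 1)

        # Incremental PID: u(k) = u(k-1) + Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
        de = np.array([error - self.e1, error, error - 2 * self.e1 + self.e2])
        self.u += float(np.dot(gains, de))

        # BP update: push the gains in the direction that reduces 0.5*e^2,
        # approximating the plant Jacobian d(output)/d(u) by its sign (+1).
        grad_z = -error * de * gains * (1.0 - gains)   # gradient at the output pre-activations
        self.w2 -= self.lr * np.outer(grad_z, h)
        grad_h = (self.w2.T @ grad_z) * (1.0 - h ** 2)
        self.w1 -= self.lr * np.outer(grad_h, x)

        self.e2, self.e1 = self.e1, error
        return float(np.clip(self.u, -1.0, 1.0))   # normalized throttle/brake command

# Toy call: regulate the spacing toward the 15 m target used in this paper.
ctrl = BPPID()
cmd = ctrl.step(25.0 - 15.0)   # positive spacing error -> accelerate
```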

IV. EXPERIMENT RESULTS

In order to validate the smart vehicle-following framework proposed in this paper, we resort to the Carla open-source driving simulator, an urban driving simulation platform specially built for autonomous driving research and testing. Carla provides interfaces that can be used to set up the required simulation maps, traffic scenarios, vehicles, and sensors. In addition, the simulation platform supports flexible configuration of sensors and environmental conditions, as well as full control over all dynamic and static actors in the map. We use Carla to collect data for training PointNet++ on the semantic segmentation task, and to perform car-following simulation testing and analysis.

Fig. 5. Comparison between the PID method and the BP-PID method in terms of longitudinal control. The smoother the line, the better the result.

In our simulations, we define a car-following execution as successful if the following car can follow the leading car along an approximately 100-meter straight or turning road without crashing into it, losing detection of it, or driving off the road.
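Purely for illustration, a check of this success criterion could look like the sketch below; the flags for collision, lost detection, and off-road driving are assumptions about what the simulator exposes, not the authors' evaluation code.

```python
def is_successful_run(distance_driven_m: float,
                      collided: bool,
                      lost_detection: bool,
                      off_road: bool,
                      required_m: float = 100.0) -> bool:
    """A run counts as successful if roughly 100 m were covered behind the
    leader without a crash, a lost detection, or leaving the road."""
    return (distance_driven_m >= required_m
            and not collided
            and not lost_detection
            and not off_road)
```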

In our design, (i) there is no need to consider issues such as traffic lights, dynamic obstacles, or lane changes; (ii) the following vehicle needs to maintain the safe distance under the guidance of the leading vehicle; and (iii) the car-following scenario is run under four different weather settings: sunny day, dark night, rainy day, and rainy night.

We first evaluate the performance of the BP-PID method integrated into our system. It can be seen from Figure 5 that, compared with the traditional PID controller, BP-PID adaptive control significantly improves the control accuracy, significantly shortens the time for the error to approach zero, and makes the response more stable. The simulation results demonstrate the strong self-learning ability of the BP neural network, which shows that our proposed method can make the actual output of the control system reach the desired goal more quickly, and makes the control more accurate and efficient.

To evaluate the main performance metric, that is, the success rate of autonomous driving, we build a comparison framework that uses the Yolo-v5 algorithm as the object detection module [14]. The Yolo-v5-based framework is fed with RGB-D depth images and GNSS data in the Information Perception phase. Then, in the Guidance Information Inference phase, the Yolo-v5 algorithm performs the object detection, while the global position of the leading vehicle is also obtained using the depth data and GNSS information collected by the following vehicle.

To better assess the effectiveness of our proposed method and the Yolo-v5-based method, we measure their driving performance under the weather settings of sunny day, dark night, rainy day, and rainy night. For each method, we perform 100 car-following driving tests. The results are shown in Figure 6.

The 97% success rate shown in Figure 6 illustrates that our proposed PointNet++ with BP-PID framework can effectively perform the car-following tasks on a sunny day. Without the help of the BP neural network for longitudinal control, however, the Yolo-v5-based method outperforms the PointNet++ framework without BP.

Fig. 6. Car-following performance under the weather settings of sunny day, dark night, rainy day, and rainy night, respectively. Our PointNet++ with BP-PID method outperforms the other two methods in all of the scenarios.

TABLE I

FAILURE CASE COUNT IN 100 TESTS FOR EACH FRAMEWORK

When the weather setting is changed to night, the performance of the PointNet++-based methods remains stable, even under rainy conditions. In contrast, the Yolo-v5-based method is affected dramatically, and its performance decreases considerably, especially on rainy nights. Therefore, our proposed method can achieve the desired car-following performance when facing different weather conditions.

To better explain the results, we explore the reasons for the failure cases. For each method, 100 independent car-following tests are performed, and Table I shows the counts of the different failure types for each framework. Some failures are losses of the leading vehicle, for which the direct reason is that the record of the leading vehicle's global positions is incomplete and the guidance trajectory is broken; the fundamental reason is that the Longitudinal Control module does not catch up with the leading vehicle. Besides, there are some failures in which the following vehicle drives off the road, and the reason lies with the Lateral Control module. With the integration of BP-PID, our method successfully avoids crashing into the leading vehicle and reduces the rate of target loss.

V. CONCLUSION

We designed a novel car-following framework in which the system uses only LiDAR and GNSS information. PointNet++ and Back-Propagation neural networks are adopted to perform the car-following tasks. The qualitative and quantitative analysis shows that our proposed framework can work effectively and efficiently in the car-following tasks under different weather conditions. In terms of success rate, its performance exceeds that of the Yolo-v5-based method by more than 13% under night or rainy weather.

However, the accuracy of the leading vehicle’s trajectory depends on the positioning accuracy and azimuth measurement accuracy of the following vehicle, so robustness cannot be guaranteed currently. In the future, we may take the Inertial Measurement Unit (IMU) information into consideration to improve our system.


REFERENCES

[1] J. Han, H. Shi, L. Chen, H. Li, and X. Wang, “The car-following model and its applications in the v2x environment: A historical review,” Future Internet, vol. 14, no. 1, p. 14, 2021.

[2] M. Masmoudi, H. Friji, H. Ghazzai, and Y. Massoud, “A reinforcement learning framework for video frame-based autonomous car-following,” IEEE Open Journal of Intelligent Transportation Systems, vol. 2, pp. 111–127, 2021.

[3] Y. Wu, H. Tan, X. Chen, and B. Ran, “Memory, attention and prediction: a deep learning architecture for car-following,” Transportmetrica B: Transport Dynamics, vol. 7, no. 1, pp. 1553–1571, 2019.

[4] T.-Q. Tang, Y. Gui, and J. Zhang, “Atac-based car-following model for level 3 autonomous driving considering driver’s acceptance,” IEEE Transactions on Intelligent Transportation Systems, 2021.

[5] Y. Lin, P. Wang, Y. Zhou, F. Ding, C. Wang, and H. Tan, “Platoon trajectories generation: A unidirectional interconnected lstm-based car-following model,” IEEE Transactions on Intelligent Transportation Systems, 2020.

[6] D. Li and O. Okhrin, “Modified ddpg car-following model with a real-world human driving experience with carla simulator,” Available at SSRN 4072706, 2021.

[7] H. Friji, H. Ghazzai, H. Besbes, and Y. Massoud, “A dqn-based autonomous car-following framework using rgb-d frames,” in 2020 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT), pp. 1–6, 2020.

[8] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660, 2017.

[9] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” Advances in Neural Information Processing Systems, vol. 30, 2017.

[10] X. Yi, H. Ghazzai, and Y. Massoud, “End-to-end neural network for autonomous steering using lidar point cloud data,” in 2022 IEEE 65th International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 1–4, IEEE, 2022.

[11] K. H. Ang, G. Chong, and Y. Li, “Pid control system analysis, design, and technology,” IEEE Transactions on Control Systems Technology, vol. 13, no. 4, pp. 559–576, 2005.

[12] V. R. Duddu, S. S. Pulugurtha, A. S. Mane, and C. Godfrey, “Back-propagation neural network model to predict visibility at a road link-level,” Transportation Research Interdisciplinary Perspectives, vol. 8, p. 100250, 2020.

[13] X. Han, X. Zhang, Y. Du, and G. Cheng, “Design of autonomous vehicle controller based on bp-pid,” in IOP Conference Series: Earth and Environmental Science, vol. 234, p. 012097, IOP Publishing, 2019.

[14] P. Jiang, D. Ergu, F. Liu, Y. Cai, and B. Ma, “A review of yolo algorithm developments,” Procedia Computer Science, vol. 199, pp. 1066–1073, 2022.
