Trajectory Planning for Improving Vision-Based Target Geolocation Performance Using a Quad-Rotor UAV

LELE ZHANG
FANG DENG, Senior Member, IEEE
JIE CHEN, Fellow, IEEE
Beijing Institute of Technology, Beijing, China
YINGCAI BI
National University of Singapore, Singapore
SWEE KING PHANG, Member, IEEE
Taylor's University, Subang Jaya, Malaysia
XUDONG CHEN
Carnegie Mellon University, Pittsburgh, PA, USA

This paper describes a novel method to improve target location accuracy by imaging the target from an aircraft. The method focuses on improving the estimation accuracy of the heading angle bias and thereby improving geolocation performance. A particle swarm optimization algorithm is employed to derive an expression of the optimal trajectory, which serves as a guide for trajectory planning. Thanks to the maneuverability of quad-rotor unmanned aerial vehicles, the aircraft is commanded to follow the path generated by trajectory planning to acquire multiple bearing measurements of the ground object. The main result is that the aircraft's heading angle bias can be more accurately estimated using trajectory planning. Hence, the target is more accurately geolocated. The efficacy of this technique is verified and demonstrated by simulation results and a flight test.

Manuscript received November 21, 2017; revised June 29, 2018; released for publication December 3, 2018. Date of publication December 13, 2018; date of current version October 10, 2019.

DOI. No. 10.1109/TAES.2018.2886617

Refereeing of this contribution was handled by L. Pollini.

This work was supported in part by the China Scholarship Council, in part by the National Natural Science Foundation of China under Grant U1613225, and in part by the Beijing NOVA Program xx2016B027.

Authors' addresses: L. Zhang, F. Deng, and J. Chen are with the Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing 100081, China, and also with the School of Automation, Beijing Institute of Technology, Beijing 100081, China, E-mail: ([email protected]; [email protected]; [email protected]); Y. Bi is with the Temasek Laboratories, National University of Singapore, Singapore 117411, E-mail: ([email protected]); S. K. Phang is with the School of Engineering, Taylor's University, Subang Jaya 47500, Malaysia, E-mail: ([email protected]); X. Chen is with the Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, E-mail: ([email protected]). (Corresponding author: Fang Deng.)

0018-9251 © 2018 IEEE
NOMENCLATURE
β Depression angle.
XYfly Flying area of aircraft.
XYsearch Search area of aircraft.
(xv, yv, zv) Cartesian coordinates of aircraft.
(ψ, θ, φ) Euler angles of aircraft.
δψ Yaw-angle bias of aircraft.
f Focal length of the camera.
(ζ, ξ) Line-of-sight angles.
α Arc angle of the orbit (rad).
η Spin angle of aircraft.
s Level flight distance of aircraft.
Subscripts
f Focal plane.
b Body frame.
p Ground object of interest.
c Camera frame.
Superscripts
n Navigation frame.
b Body frame.
I. INTRODUCTION
Unmanned aerial vehicles (UAVs) are increasingly being used for a wide variety of missions, such as intelligence, surveillance, and reconnaissance. The camera is one of the most common sensors employed to acquire information about the surroundings in order to achieve autonomy. Several research groups are using UAVs with vision systems for tasks such as cargo transfer [1], object detection [2]–[4], and vision-aided navigation [5]–[7]. For the target localization application, target localization using a camera-equipped UAV was first reported in [8], and a real-time method to calculate the location of a fixed target detected by a gimbaled camera on a fixed-wing UAV equipped with a single-antenna GPS is described in [9]. A framework of vision processing for geolocation, coupled with path planning to realize coverage of the target, was then developed in [10]. A vision-based tracking geolocation system for long-endurance UAVs is reported in [11], and geolocation of multiple targets from airborne video without terrain data is described in [12]. However, the aforementioned literature does not take sensor measurement bias into account.
There are two solutions to handle this problem. One method of calculating the heading angle bias using visual measurements for a single UAV is developed based on the batch least squares technique [13]. The other method uses an extended information filter and presents a decentralized manner to jointly estimate the sensor biases and the unknown point-of-interest location [14], where it is indicated that not all states are observable using a single UAV, a stationary point of interest (POI), and a POI-centered UAV. Hence, in this paper, the batch least squares technique is employed to estimate the yaw bias by taking visual measurements of a stationary ground object. Based on this technique, we focus on designing trajectories to improve the estimation performance of the yaw bias and thereby further enhance the target geolocation accuracy.
Related research has mainly been devoted to developing trajectory planning for enhanced target estimation [15]–[19]. A comprehensive overview of trajectory planning algorithms available in the literature and applied to UAVs is presented in [20], which is helpful for researchers to understand implementation details of the different algorithms. An information potential is generated from Shannon information in [21] and used to develop a switched feedback control law for integrated sensor path planning and control. An optimized-visibility motion planning approach is proposed using an extended Kalman filter (EKF), and within this estimation framework a control law is derived to optimize the target tracking and robot localization performance [22].

Adaptive robotic control actions are provided based on maximizing the map information by simultaneously maximizing the expected Shannon information gain on the occupancy grid map and minimizing the uncertainty in the simultaneous localization and mapping (SLAM) process [23]. An information-theoretic approach to distributed and coordinated control of multirobot sensor systems is described in [24], where the control objective becomes maximization of the Fisher information gained by the system.

The amount of Fisher information is used as the performance metric in [25], where a receding horizon optimal control formulation is developed and solved for trajectories that yield maximum information. These "information-based" trajectory planning approaches [21]–[25] use information as a measure of utility for taking control actions; the information measures are formally related to both Shannon information and Fisher information. These methods, however, are infeasible for trajectory optimization problems related to parameter selection. Some work has been done to optimally choose the trajectory parameters of UAVs in target orbit and tracking problems by using information metrics [26]. This paper attempts to extend the circular orbit optimization to more general trajectory configurations by taking into account a bounded field-of-view (FOV) region and the yaw bias estimation problem. Particular attention is given to the estimation performance of the single parameter "yaw bias", since the estimation error covariance captures the correlations with the other parameters, such as the ground object position; we therefore choose the yaw bias estimation error to build the performance metric. This provides a simple, straightforward representation of parameter estimation performance based on the measurements acquired along the trajectory. Trajectory planning is treated as generating an optimal trajectory along which the UAV takes bearing measurements of the ground object over time to obtain the best heading angle bias estimate.
There is no doubt that trajectory planning can be viewed as an optimization problem under a certain index (i.e., minimizing the estimation error of the bias). Particle swarm optimization (PSO) is a population-based optimization algorithm and has been used successfully to solve many optimization problems [27]–[31]. Because of its relatively fast convergence and global search property, PSO is used to obtain optimal choices of the trajectory parameters.

In a continuation of our previous research [32]–[35], the contribution of this paper is that we design trajectory planning strategies to improve the target localization accuracy, considering FOV constraints and a performance metric dependent on the yaw bias estimation error; all processing steps, including trajectory planning, data collection, and bias estimation, are implemented online on the UAV's onboard NUC.

More specifically, PSO is first run offline to solve for the optimal trajectory, and a general expression is given as a guide for online trajectory planning. Trajectory planning is then started onboard the UAV once the ground object of interest (GOI) is detected. The main result is that the estimation accuracy of the heading angle bias can be improved significantly by using the bearing measurements taken along the optimal trajectory. As a result, the target is more accurately geolocated than without trajectory planning. To clarify, the yaw measurement bias is introduced by the attitude-heading reference system (AHRS) and may vary every time the AHRS is initialized; the assumption of an unknown and constant bias is justified over a short time window, as in the case of a UAV flying over a ground object.
This paper is organized as follows. In Section II, we give an overview of the geometry between a quad-rotor UAV and the GOI, including yaw-angle bias estimation and the field-of-view constraint. In Section III, the PSO employed to solve for the optimal trajectory in different scenarios is described. In Section IV, we detail trajectory planning for the UAV according to different cases, followed by the results of simulations performed to validate the efficacy of the novel method. In Section V, the results obtained from the flight test data are presented to verify the method further. The paper ends with some concluding remarks.
II. GEOMETRY OF GEOLOCATION
The dynamic relationship between the UAV and the GOI will be discussed in two parts, which are, respectively, yaw-angle bias estimation and field-of-view constraint.
A. Yaw-Angle Bias Estimation
The UAV target geolocation model can be visualized in Fig. 1. Mathematically, it can be expressed as follows:

$$
\begin{bmatrix} x_p \\ y_p \end{bmatrix}
=
\begin{bmatrix} x_v \\ y_v \end{bmatrix}
+
\frac{z_p - z_v}{(0,0,1)\, C_b^n \begin{bmatrix} x_f \\ y_f \\ f \end{bmatrix}}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
C_b^n
\begin{bmatrix} x_f \\ y_f \\ f \end{bmatrix}
\tag{1}
$$

where $x_f$ and $y_f$ are the coordinates of the image of the GOI's corresponding point $p$ in the image plane. The navigation frame $n$ has axes aligned with the directions of north ($x$), east ($y$), and down ($z$). $(x_v, y_v, z_v)^T$ and $(x_p, y_p, z_p)^T$ represent the UAV's position and the GOI's position in the navigation frame $n$, respectively. $f$ is the focal length of the camera. The UAV navigation state $(x_v, y_v, z_v, \psi, \theta, \phi)$ is measured by the onboard GPS receiver and the AHRS. The rotation matrix $C_b^n$ represents the transformation from the body frame $b$ to the navigation frame $n$.

Fig. 1. UAV target geolocation model.
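As a concrete illustration, the following minimal Python sketch evaluates (1) numerically. It assumes the camera frame coincides with the body frame (so the pixel ray $[x_f, y_f, f]^T$ can be rotated directly by $C_b^n$) and a Z-Y-X (yaw-pitch-roll) Euler convention; the function and variable names are illustrative and not part of the original implementation.

```python
import numpy as np

def rotation_body_to_nav(psi, theta, phi):
    """Z-Y-X (yaw-pitch-roll) rotation from body frame b to navigation frame n."""
    cps, sps = np.cos(psi), np.sin(psi)
    cth, sth = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    return np.array([
        [cps*cth, cps*sth*sph - sps*cph, cps*sth*cph + sps*sph],
        [sps*cth, sps*sth*sph + cps*cph, sps*sth*cph - cps*sph],
        [-sth,    cth*sph,               cth*cph],
    ])

def geolocate(uav_pos, euler, pixel, f, z_p):
    """One-shot target geolocation per eq. (1): project the image-plane line of
    sight through the camera centre until it reaches the known target altitude z_p."""
    x_v, y_v, z_v = uav_pos
    los_b = np.array([pixel[0], pixel[1], f])        # LOS in the body/camera frame
    los_n = rotation_body_to_nav(*euler) @ los_b     # rotate to the navigation frame
    scale = (z_p - z_v) / los_n[2]                   # (z_p - z_v) / ((0,0,1) C_b^n [x_f, y_f, f]^T)
    return np.array([x_v, y_v]) + scale * los_n[:2]  # horizontal target position [x_p, y_p]
```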
For the UAV level-flight mode, the pitch and roll angles of the UAV can be accurately measured using accelerometers, but the yaw-angle measurement will introduce large measurement errors when a miniaturized magnetic flux gate or a compass is employed. To facilitate the estimation of the critical yaw-angle bias $\delta\psi$, the pitch and roll measurement errors are ignored in our studies and only the bias in the yaw angle is considered.
It is assumed that the GOI is identified and is traced from frame to frame using a feature-tracking algorithm [36]. Taking these multiple bearing measurements of the GOI ensures that the heading angle bias can be estimated using the optical method proposed in [13]. The process of the yaw-angle bias estimation is illustrated as follows.
We estimate the parameter $\gamma = [\gamma_1, \gamma_2]^T$, where

$$\gamma_1 = [x_p, y_p]^T$$

is the position of the ground object and

$$\gamma_2 = \delta\psi$$

is the bias in the yaw measurement provided by the AHRS. The measured variables are $y = [y_1, y_2]^T$, where $y_1 = [x_v, y_v, z_v, x_f, y_f, \theta, \phi]^T$ and $y_2 = \psi$. The measurement equation has the general form

$$\gamma_1 = F(y_1, y_2) \tag{2}$$

and the actual measurements are

$$z_1 = y_1 + w_1, \quad w_1 \sim N(0, R_1) \tag{3}$$

$$z_2 = y_2 + \gamma_2 + w_2, \quad w_2 \sim N(0, R_2) \tag{4}$$
where the Gaussian measurement noise covariance $R_1$ is a $7 \times 7$ symmetric positive definite matrix and $R_2$ is a positive number. Substituting (3) and (4) into the measurement equation (2), we obtain the following nonlinear measurement equation:

$$\gamma_1 = F\bigl(z_1 - w_1,\; z_2 - (\gamma_2 + w_2)\bigr). \tag{5}$$

Using Taylor's theorem then yields the following linearized measurement equation:

$$F(z_1, z_2) \approx \gamma_1 + \frac{\partial F}{\partial y_2}\bigg|_{z_1, z_2} \cdot \gamma_2 + \frac{\partial F}{\partial y_1}\bigg|_{z_1, z_2} \cdot w_1 + \frac{\partial F}{\partial y_2}\bigg|_{z_1, z_2} \cdot w_2. \tag{6}$$

Suppose that $N\,(\geq 2)$ bearing measurements of the GOI are taken at the discrete times $k = 1, \ldots, N$, that is, $(z_{11}, z_{21}), \ldots, (z_{1N}, z_{2N})$; then a linear regression in the parameter $\gamma \in \mathbb{R}^3$ can be formulated as follows:
$$
\begin{pmatrix}
F(z_{11}, z_{21}) \\
\vdots \\
F(z_{1N}, z_{2N})
\end{pmatrix}
=
\begin{bmatrix}
I_2 & \dfrac{\partial F}{\partial y_2}\Big|_{z_{11}, z_{21}} \\
\vdots & \vdots \\
I_2 & \dfrac{\partial F}{\partial y_2}\Big|_{z_{1N}, z_{2N}}
\end{bmatrix}
\gamma + W
\tag{7}
$$
where $W \sim N(0, R)$ is the equation error with covariance

$$
R = \operatorname{diag}\!\left(
\frac{\partial F}{\partial y_1}\bigg|_{z_{1k}, z_{2k}} R_1 \frac{\partial F}{\partial y_1}\bigg|_{z_{1k}, z_{2k}}^T
+
\frac{\partial F}{\partial y_2}\bigg|_{z_{1k}, z_{2k}} R_2 \frac{\partial F}{\partial y_2}\bigg|_{z_{1k}, z_{2k}}^T
\right).
$$
From (7), the yaw bias can be solved effectively with the weighted least squares method [37]. The partial derivatives are denoted by

$$A_k = \frac{\partial F}{\partial y_1}\bigg|_{z_{1k}, z_{2k}}, \qquad B_k = \frac{\partial F}{\partial y_2}\bigg|_{z_{1k}, z_{2k}}.$$

The parameter estimate is given by

$$
\hat{\gamma} =
\left(
\sum_{k=1}^{N}
\begin{bmatrix} I_2 \\ B_k^T \end{bmatrix}
\left(A_k R_1 A_k^T + B_k R_2 B_k^T\right)^{-1}
[I_2,\; B_k]
\right)^{-1}
\times
\sum_{k=1}^{N}
\begin{bmatrix} I_2 \\ B_k^T \end{bmatrix}
\left(A_k R_1 A_k^T + B_k R_2 B_k^T\right)^{-1}
F(z_{1k}, z_{2k})
\tag{8}
$$

and the parameter estimation error covariance is

$$
P =
\left(
\sum_{k=1}^{N}
\begin{bmatrix} I_2 \\ B_k^T \end{bmatrix}
\left(A_k R_1 A_k^T + B_k R_2 B_k^T\right)^{-1}
[I_2,\; B_k]
\right)^{-1}.
\tag{9}
$$
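For concreteness, the following Python sketch implements the batch weighted least squares solution (8)–(9) over the $N$ stacked bearing measurements. The function name and data layout are illustrative assumptions; $R_2$ is treated as a scalar, as stated above.

```python
import numpy as np

def estimate_bias_wls(F_vals, A_list, B_list, R1, R2):
    """Batch weighted least squares of eqs. (8)-(9): stack N bearing
    measurements and solve for gamma = [x_p, y_p, delta_psi]^T.
    F_vals: list of 2-vectors F(z1k, z2k); A_list/B_list: the 2x7 and 2x1
    partial derivatives A_k and B_k; R1: 7x7 noise covariance; R2: scalar."""
    info = np.zeros((3, 3))     # running sum of [I2; Bk^T] Wk [I2, Bk]
    vec = np.zeros(3)           # running sum of [I2; Bk^T] Wk F(z1k, z2k)
    for Fk, Ak, Bk in zip(F_vals, A_list, B_list):
        Bk = Bk.reshape(2, 1)
        Hk = np.hstack([np.eye(2), Bk])                         # [I_2, B_k]
        Wk = np.linalg.inv(Ak @ R1 @ Ak.T + R2 * (Bk @ Bk.T))   # per-measurement weight
        info += Hk.T @ Wk @ Hk
        vec += Hk.T @ Wk @ Fk
    P = np.linalg.inv(info)     # estimation error covariance, eq. (9)
    gamma_hat = P @ vec         # parameter estimate, eq. (8)
    return gamma_hat, P
```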
Fig. 2. Flying area and search area for a forward-looking camera.
B. Field-of-View Constraint
A bounded FOV region hinges on the depression angle $\beta$, the field-of-view angle FOV, and the UAV pose. Since $\beta$ and FOV are fixed, only the UAV pose needs to be considered for this region. To keep the same GOI in the camera field of view, the trajectory is limited to the space called the flying area,

$$XY_{\mathrm{fly}} : \{(x_v, y_v) \mid x_{\inf} \leq x_v \leq x_{\sup},\; y_{\inf} \leq y_v \leq y_{\sup}\}.$$

Similarly, at an arbitrary point along the trajectory, a potential GOI will be detected only if it appears in the coverage called the search area,

$$XY_{\mathrm{search}} : \{(x_p, y_p) \mid x_{\inf} \leq x_p \leq x_{\sup},\; y_{\inf} \leq y_p \leq y_{\sup}\}.$$

These areas are different for a forward-looking camera and a side-looking camera, so they are discussed separately. We define $(x_0, y_0)$ as the UAV's current horizontal position in the body frame $b$ and $h$ as the relative altitude of the UAV above the GOI. Furthermore, we set the current heading of the UAV to $\psi = 0^\circ$.

For a forward-looking camera, one obtains

$$XY_{\mathrm{search}} : \{(x_1, y_1) \mid x_{1f\text{-}\inf} \leq x_1 \leq x_{1f\text{-}\sup},\; y_{1f\text{-}\inf} \leq y_1 \leq y_{1f\text{-}\sup}\}$$

$$XY_{\mathrm{fly}} : \{(x_2, y_2) \mid x_{2f\text{-}\inf} \leq x_2 \leq x_{2f\text{-}\sup},\; y_{2f\text{-}\inf} \leq y_2 \leq y_{2f\text{-}\sup}\}.$$

For a side-looking camera, one obtains

$$XY_{\mathrm{search}} : \{(x_1, y_1) \mid x_{1s\text{-}\inf} \leq x_1 \leq x_{1s\text{-}\sup},\; y_{1s\text{-}\inf} \leq y_1 \leq y_{1s\text{-}\sup}\}$$

$$XY_{\mathrm{fly}} : \{(x_2, y_2) \mid x_{2s\text{-}\inf} \leq x_2 \leq x_{2s\text{-}\sup},\; y_{2s\text{-}\inf} \leq y_2 \leq y_{2s\text{-}\sup}\}$$

where $(x_1, y_1)$ and $(x_2, y_2)$ represent the GOI and the UAV horizontal positions restricted to the search area and the flying area in the body frame $b$, respectively. The green triangle shows the trajectory in the flying area and the blue rectangle represents the search area at the current point along the trajectory, as shown in Figs. 2 and 3.
III. OPTIMAL TRAJECTORY
The ground object’s altitude is assumed to be known.
Thus, the geolocation method described in Section II reduces to the two-dimensional (2-D) case. The main equation for the 2-D case is obtained in level-flight mode (setting $\theta = \phi = 0^\circ$). The rotation matrix $C_b^n$ becomes

$$C_b^n = \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix}$$

and (1) reduces to

$$
\begin{bmatrix} x_p \\ y_p \end{bmatrix}
=
\begin{bmatrix} x_v \\ y_v \end{bmatrix}
+
\frac{z_p - z_v}{f}
\begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix}
\begin{bmatrix} x_f \\ y_f \end{bmatrix}.
\tag{10}
$$

The target line of sight can be measured by the azimuth and elevation angles $\xi$ and $\zeta$, as shown in Fig. 1. In the 2-D case, the partial derivatives $A_k$ and $B_k$ are listed in detail in the appendix.

Fig. 3. Flying area and search area for a side-looking camera.
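The following minimal Python sketch applies (10) with a heading corrected by the estimated bias; per the measurement model (4), the true heading is approximately the measured yaw minus the estimated bias. The function name and argument layout are illustrative assumptions.

```python
import numpy as np

def geolocate_2d(uav_xy, z_v, psi_meas, delta_psi_hat, pixel, f, z_p):
    """2-D geolocation per eq. (10), using the bias-corrected heading
    psi = psi_meas - delta_psi_hat (cf. measurement model (4))."""
    psi = psi_meas - delta_psi_hat
    c, s = np.cos(psi), np.sin(psi)
    rot = np.array([[c, -s], [s, c]])                  # 2-D rotation C_b^n
    return np.asarray(uav_xy) + (z_p - z_v) / f * (rot @ np.asarray(pixel))
```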
From the projection relations

$$x_f = \sqrt{x_f^2 + y_f^2 + f^2}\,\cos(\zeta)\cos(\xi), \quad
y_f = \sqrt{x_f^2 + y_f^2 + f^2}\,\cos(\zeta)\sin(\xi), \quad
\frac{f}{\sqrt{x_f^2 + y_f^2 + f^2}} = \sin(\zeta)$$

we obtain

$$\cot(\xi) = \frac{(x_v - x_p)\cos\psi + (y_v - y_p)\sin\psi}{-(x_v - x_p)\sin\psi + (y_v - y_p)\cos\psi} \tag{11}$$

$$\sin(\zeta) = \frac{z_v - z_p}{\sqrt{(x_v - x_p)^2 + (y_v - y_p)^2 + (z_v - z_p)^2}} \tag{12}$$

$$\cot(\xi + \psi) = \frac{x_v - x_p}{y_v - y_p}. \tag{13}$$
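A short Python transcription of (11)–(12), given here only to make the dependence of the line-of-sight angles on the relative geometry explicit; the helper name and the use of arctan2 (to resolve the quadrant) are illustrative choices.

```python
import numpy as np

def los_angles(uav_pos, target_pos, psi):
    """Line-of-sight angles per eqs. (11)-(12): elevation zeta from the full
    3-D range, azimuth xi measured in the yawed body frame."""
    dx, dy, dz = np.asarray(uav_pos) - np.asarray(target_pos)
    zeta = np.arcsin(dz / np.linalg.norm([dx, dy, dz]))          # eq. (12)
    xi = np.arctan2(-dx * np.sin(psi) + dy * np.cos(psi),
                    dx * np.cos(psi) + dy * np.sin(psi))         # from eq. (11)
    return zeta, xi
```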
Generally speaking, the accuracy of a parameter estimate is characterized by the estimation error $e = |\hat{\gamma} - \gamma|$ and its covariance $P$. The evolution of $P$ can be described in information form [21]–[26]. Particular attention is given to the estimation performance of the single parameter "yaw bias", since $P$ captures the correlations among the three parameters $(x, y, \delta\psi)$; we therefore choose the yaw bias estimation error $e_{\delta\hat{\psi}} = |\hat{\gamma}_2 - \gamma_2|$ to build the performance metric.

From the appendix, $A_k$ and $B_k$ are related to the parameters $(\xi, \zeta, \psi)$. It is assumed that the UAV flies at a constant altitude, so we confine our attention to the horizontal plane $xy$. It can then be inferred from (11)–(13) that $e_{\delta\hat{\psi}}$ is related only to the parameters $(x_v, y_v, \psi)$. The performance metric is therefore expressed as

$$e_{\delta\hat{\psi}} = J(x_v, y_v, \psi). \tag{14}$$

As noted in [26], this optimization problem is generally not analytical and must be solved numerically under a specific setting. Similarly, in this paper, the PSO algorithm is used offline to numerically obtain the optimal choice of trajectory parameters based on flight simulation data.
A. Assumptions
The following assumptions were made on the collected data: the UAV's airspeed is $V_a = 3.44$ m/s, the constant flight altitude above the GOI is $h = 45$ m, the field-of-view angle is FOV $= 30^\circ$, the depression angle is $\beta = 45^\circ$, the GPS data acquisition frequency is $f_{\mathrm{GPS}} = 4$ Hz, $\sigma_\psi = \sigma_\theta = \sigma_\phi = 5^\circ$, the bias in the yaw-angle measurement is $\delta\psi = 30^\circ$, $\sigma_\zeta = \sigma_\xi = 0.5^\circ$, the differential GPS (DGPS) horizontal uncertainty is $\sigma_x = \sigma_y = 0.6$ m, and the barometric altitude uncertainty is $\sigma_z = 1.5$ m.
B. Search the Optimal Path With PSO
The PSO was first proposed by Kennedy and Eberhart [38]. The algorithm simulates the foraging behavior of birds. Each particle has two characteristics, velocity and position. The process mainly includes four parts: particle initialization, evaluation, state update, and a judgment step. The position and velocity vector of each particle are randomly initialized. The core of the evaluation process is to estimate the proximity of the particle's current position to the desired optimal position; this is represented by a selected fitness function value, and the smaller the fitness value, the closer the particle is to the desired optimal position. The state update is divided into velocity and position updates, and the global best particle is used as a guide to influence the update of each particle's state, so that the particles gradually move toward the optimal position. In our implementation, the position of a particle represents a complete UAV trajectory. The flowchart of the PSO implementation is shown in Fig. 4. The equations used to compute the velocity and position of a single particle at each iteration are as follows:

$$v_i^{t+1} = w \cdot v_i^t + c_1 \cdot \mathrm{rand}_1 \cdot (\alpha_{pi}^t - \alpha_i^t) + c_2 \cdot \mathrm{rand}_2 \cdot (\alpha_g^t - \alpha_i^t) \tag{15}$$

$$\alpha_i^{t+1} = \alpha_i^t + v_i^{t+1} \tag{16}$$

where $v_i^t$ is the velocity of particle $i$ and $\alpha_i^t$ is its position at iterative step $t$; $\alpha_{pi}^t$ is the best position of the particle and $\alpha_g^t$ is the best position of the swarm at iterative step $t$. $N_p$ is the population size and $N_c$ is the maximum number of iterative steps ($1 \leq i \leq N_p$, $1 \leq t \leq N_c$). $\mathrm{rand}_1$ and $\mathrm{rand}_2$ are random values between 0 and 1, $w$ is the inertia weight, and $c_1$ and $c_2$ are the personal and social influence
Fig. 4. Flowchart of the PSO.
TABLE I
Scenarios Considered in the Simulation
TABLE II
Variable Range for Each Scenario
factors. In our implementation, the parameters are set as follows: $N_p = 40$, $N_c = 100$, $w = 0.4$, and $c_1 = c_2 = 2$. The fitness function is obtained from (14) as follows:

$$\mathrm{fit} = J(\alpha_i^t) = J(x_{vi}^t, y_{vi}^t, \psi_i^t). \tag{17}$$

Our implementation of the PSO is discussed in the following scenarios: overflight, flyby, and loitering, as shown in Table I. For the sake of simplicity, the yaw angle is set to $\psi = 0$ for the overflight and flyby cases, so that the fitness function is related only to the parameter $y_v$. Similarly, the orbit arc angle is set to $\alpha = 1.5\pi$ for loitering, so that the fitness function is related only to the radius $R$. Hence, our implementation of the PSO is simplified to a single-variable optimization problem over the search space $(x_v, y_v) \in XY_{\mathrm{fly}}$. The range of the single variable in each scenario is listed in Table II. Fig. 5 illustrates the generation of the initial particle population, where the line represents the UAV trajectory and the asterisk is the position of the GOI.
Fig. 5. Generation of the particles initial population.
Fig. 6. Evolution curve of the PSO.
TABLE III Optimization Results
For each scenario, there exists an optimal trajectory at which the estimate of the yaw-angle bias reaches its optimal value. Compared with the other scenarios, the optimal value of the estimate in the loitering scenario is improved significantly. For each scenario, the relationship between the objective function and the PSO iteration step is described in Fig. 6, and the experimental results from simulation are summarized in Table III.
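As an illustration of how such a single-variable search can be set up, the sketch below follows the update rules (15)–(16) with the parameter settings listed above. The `fitness` callback stands in for $J$ of (17), and `bounds` corresponds to the per-scenario ranges of Table II; function and argument names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pso_trajectory_parameter(fitness, bounds, n_particles=40, n_iter=100,
                             w=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO sketch following eqs. (15)-(16) for a single trajectory
    parameter (e.g. y_v for overflight/flyby, or R for loitering)."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pos = rng.uniform(low, high, n_particles)          # particle positions alpha_i
    vel = np.zeros(n_particles)
    pbest = pos.copy()                                 # personal best positions
    pbest_val = np.array([fitness(p) for p in pos])
    g = pbest[np.argmin(pbest_val)]                    # global best alpha_g
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)   # eq. (15)
        pos = np.clip(pos + vel, low, high)                             # eq. (16)
        val = np.array([fitness(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()
```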
Optimal trajectory parameters are thus obtained, providing guidelines for the simulations and the real-time flight test. Specifically, trajectory planning is started to make the UAV transit from its current trajectory to the optimal trajectory when a ground object is detected. This guarantees that the yaw bias estimation performance is improved by trajectory planning, so that the accuracy of the target geolocation can be enhanced.
IV. TRAJECTORY PLANNING
The conclusions developed in Section III are now applied to UAV trajectory planning to ensure that the accuracy of the attitude-heading bias estimate can be effectively improved along the generated path. To make the trajectory planning algorithm clear, it is illustrated in the following steps.
Step 1: Calculate the current relative position of the GOI with respect to the UAV if the GOI is identified.
Step 2: Depending on the onboard camera installation, two cases are distinguished. For a forward-looking camera, the UAV flies a distance $s_1$ along the right side or $s_2$ along the left side to reach the starting point of the optimal trajectory. For a side-looking camera, there are two options, chosen by a tradeoff between accuracy and computation cost: if flyby flight is chosen, the UAV flies a distance $s$ along the left side; if loitering flight is chosen, the UAV first turns by an angle $\eta$ and then flies a distance $s_3$ along the right side or $s_4$ along the left side.
Step 3: The UAV follows the optimal trajectory to take multiple bearing measurements of the GOI.
Step 4: After the completion of the yaw bias estimation, the UAV will fly forward to realize target localization.
First, the parameters in Step 1 are obtained as follows. Let $p_b = [x_b\; y_b\; z_b]^T$ denote the relative position of the GOI with respect to the UAV in the body frame $b$. Considering Fig. 1, the pinhole camera model, with an assumption of fixed zoom and known camera intrinsic parameters, is

$$\begin{bmatrix} x_f \\ y_f \end{bmatrix} = \frac{f}{z_b} \begin{bmatrix} x_b \\ y_b \end{bmatrix}. \tag{18}$$

The relative altitude between the UAV and the GOI [39] is given by

$$h = -z_b \sin\phi + x_b \sin\theta \cos\phi + y_b \cos\theta \cos\phi \tag{19}$$

where $h$, $\theta$, and $\phi$ are directly measured by an onboard barometer and the AHRS. Combining (18) with (19), the relative position $p_b$ between the UAV and the GOI can be obtained.
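A minimal sketch of this combination, under the document's equations (18)–(19): substituting $x_b = x_f z_b / f$ and $y_b = y_f z_b / f$ into (19) gives $z_b$ directly, after which the horizontal components follow. Names are illustrative.

```python
import numpy as np

def relative_position_body(pixel, f, h, theta, phi):
    """Step 1 sketch: recover the GOI position p_b = [x_b, y_b, z_b]^T in the
    body frame from the pinhole model (18) and the relative-altitude
    relation (19), with h, theta, phi from the barometer and AHRS."""
    x_f, y_f = pixel
    # Substitute x_b = x_f*z_b/f and y_b = y_f*z_b/f into (19) and solve for z_b.
    denom = -np.sin(phi) + (x_f / f) * np.sin(theta) * np.cos(phi) \
            + (y_f / f) * np.cos(theta) * np.cos(phi)
    z_b = h / denom
    return np.array([x_f * z_b / f, y_f * z_b / f, z_b])
```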
Next, the parameters in Step 2 are analyzed under different conditions.

A. Forward-Looking Camera

The UAV flies a distance $s_1$ along the right side under the condition $y_b \geq 0$; otherwise, the UAV flies a distance $s_2$ along the left side, and then the UAV flies forward. The distances are defined as

$$s_1 = s_2 = |y_b|.$$

B. Side-Looking Camera
We define $d_1 = y_b - y_{1s\text{-}\sup}$ and $d_2 = y_b - y_{1s\text{-}\inf}$. If flyby flight is chosen, the UAV flies a distance $s$ along the left side and then flies forward, where

$$s = |d_1| - \sigma_y.$$
Fig. 7. One-shot location of a target for a forward-looking camera. (a) No planning. (b) Overflight planning.
Fig. 8. One-shot location of a target for a side-looking camera. (a) No planning. (b) Flyby planning. (c) Loitering planning.
If instead loitering flight with radius $R$ is chosen, the UAV first turns by an angle $\eta$ at the current point, exploiting the easy rotation of the quad-rotor aircraft. One obtains

$$\eta = \arctan\!\left(\frac{-x_b}{y_b}\right)$$

where a clockwise rotation, viewed looking along the UAV axis $z_b$ from the origin, is taken as positive ($\eta \geq 0$), whereas a negative rotation acts in the opposite sense ($\eta < 0$). Then, the UAV flies a distance $s_3$ along the right side under the condition $d_2 \geq 0$; otherwise, it flies a distance $s_4$ along the left side. The UAV then orbits the GOI along the loitering trajectory through the arc angle $\alpha$. Eventually, the UAV switches to forward flight to localize a target. The parameters are defined as

$$R = \sqrt{x_b^2 + y_b^2} + s_3 \quad (d_2 \geq 0), \qquad R = \sqrt{x_b^2 + y_b^2} - s_4 \quad (d_2 < 0)$$

$$s_3 = |d_2| - \sigma_y, \qquad s_4 = |d_2| + \sigma_y.$$
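A short sketch of the side-looking loitering computation just described, using the stated definitions of $\eta$, $d_2$, $s_3$, $s_4$, and $R$; the helper name and the use of arctan2 (to handle the quadrant) are illustrative choices, not the authors' implementation.

```python
import numpy as np

def loitering_parameters(p_b, y1s_inf, sigma_y):
    """Step 2 sketch for the side-looking loitering case: returns the spin
    angle eta, the lateral shift (s3 or s4), and the orbit radius R, given
    the body-frame GOI position p_b and the search-area bound y_{1s-inf}."""
    x_b, y_b = p_b[0], p_b[1]
    eta = np.arctan2(-x_b, y_b)          # spin angle (clockwise positive about z_b)
    d2 = y_b - y1s_inf
    rho = np.hypot(x_b, y_b)             # horizontal distance to the GOI
    if d2 >= 0:
        s3 = abs(d2) - sigma_y           # shift to the right side
        return eta, s3, rho + s3
    s4 = abs(d2) + sigma_y               # shift to the left side
    return eta, s4, rho - s4
```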
Monte Carlo simulations were performed to validate the efficacy of the presented technique with respect to the scenarios introduced in Section IV-A. Typical runs are presented in Figs. 7 and 8, in which the solid circle shows the actual position of a GOI measured to estimate the bias, the black star is the target's actual position, and the red asterisk represents the geolocated target position. In Tables IV and V, the statistical results
TABLE IV Estimation Performance for a
Forward-Looking Camera
TABLE V Estimation Performance for a
Side-Looking Camera
Fig. 9. T-Lion.
are reported in terms of the RMS error associated with the yaw-angle bias estimation error $e_{\delta\hat{\psi}}$ and the resulting localization error $e_{xy}$. From the simulation results, we can see that the target geolocation accuracy from a one-shot measurement is enhanced, provided that the yaw-angle bias is more accurately estimated through UAV trajectory planning.
V. FLIGHT TEST
The quad-rotor UAV used in this flight test is an experimental platform called T-Lion, designed by the Control Science Group of Temasek Laboratories at the National University of Singapore, as shown in Fig. 9. It was specially catered to carry a high payload (>2 kg) over a long-endurance flight (>20 min). The T-Lion can fly predefined waypoint and circular paths autonomously, so it can be used as a test platform to validate our method. The T-Lion is equipped with a power distributor, a flight controller, an onboard computer, and a gimbal-controlled camera, as shown in Fig. 9. An Intel NUC minicomputer with a powerful i7 processor is selected as the onboard computer and mounted on the T-Lion; it runs the ROS system [40], which enables image processing and trajectory planning in real time during flight. A monocular camera is used for
visual measurements during flight. The camera is set to point directly downward at the GOI.
The proposed method is now verified with actual flight data. The flight scenario is shown in Fig. 10. The quad-rotor UAV flies different trajectories in an outdoor environment. For each trajectory, the UAV first relies on feature detection to take multiple measurements of the GOI to estimate the yaw bias, and then localizes the target accurately when it pops up in the video. To identify the GOI and the target easily, one AprilTag [41], [42] is used as the GOI and another AprilTag is mounted on the target. It should be noted that not all information provided by the AprilTags is employed in this experiment, only the pixel position of the center point of each AprilTag.
The UAV is commanded to fly all three trajectories presented in Section IV, as shown in Fig. 11. Specifically, the three trajectories have common initial segments. For the no-planning case, the UAV flies straight without changing its direction of motion. For the flyby planning case, the UAV flies along the right side to finish the transition from the current trajectory to the optimal trajectory generated by flyby planning once the GOI is detected. For the loitering planning case, the UAV first flies along the left side to transition to the optimal trajectory from loitering planning, and then switches to straight flight after orbiting the GOI for one circle. The red trajectory represents the DGPS measurement of the UAV, providing the ground truth for the experiment. The arrows represent the yaw measured by the AHRS. The red square and red star represent the actual positions of the GOI and the target measured by the DGPS, respectively. The triangle is the UAV DGPS position at which the target is detected and geolocated. The black asterisk represents the result of target geolocation.
The following assumptions are used in the measurement process: $\sigma_x = \sigma_y = 0.2$ m, $\sigma_z = 1$ m, $\sigma_\psi = \sigma_\theta = \sigma_\phi = 1^\circ$, and $\sigma_\xi = \sigma_\zeta = 0.5^\circ$.
The assumption of a constant bias in the yaw measurement is justified by Fig. 11. Before eliminating the bias, the arrows show a significant angular offset with respect to the direction of motion. After obtaining an estimate of the yaw-angle bias, the angular offset decreases to varying extents for the different trajectories. The main result is that the bias in the heading measurement is estimated more accurately using trajectory planning than with no planning at all. As a result, the accuracy of target geolocation is enhanced greatly.
For the three cases shown in Fig. 11, $T_1$ represents the length of the selected video record, with $N_1$ frames/data points available, in which the GOI is tracked frame by frame. The estimate of the bias in the yaw measurement and the localization errors of the target in the horizontal direction are reported in Table VI.
The successful results in actual flight trials illustrate that the proposed method is capable of improving the yaw-angle bias accuracy and obtaining a significant advantage in terms of target geolocation accuracy after eliminating the bias. Besides, the flight data results show that the method is robust to unknown sensor
Fig. 10. Flight test: an outdoor scenario. (a) UAV takes visual measurements of GOI. (b) GOI detection. (c) Target detection.
Fig. 11. Target geolocation. (a) No planning. (b) Flyby planning. (c) Loitering planning.
TABLE VI Target Geolocation Results
noise. Considering the computation cost, PSO is computed only once offline to obtain an optimal path, which serves as a guideline for the trajectory planning run onboard the UAV. Hence, the method is practical for UAV real-time applications.
VI. CONCLUSION
The geolocation accuracy of a target obtained from a one-shot measurement is generally poor, due to the bias in the heading measurement provided by the AHRS onboard UAVs. In order to obtain a more accurate estimate of the heading angle bias, the proposed vision-based geolocation method relies on UAV trajectory planning strategies to take multiple bearing measurements of a GOI along the generated trajectory. This significantly improves the target geolocation accuracy.
The flight test results show that bearing measurements of the GOI taken over time along the generated trajectory enable a more accurate calculation of the heading angle bias and thus further enhance the target geolocation accuracy. With a DGPS horizontal position accuracy of 0.2 m and a bearing-angle accuracy of 0.5°, the geolocation accuracy achieved using the method is nearly 0.5 m. Compared with no planning, the target is more accurately geolocated, with a largest improvement of nearly 3 m. That is, a significant advantage in terms of target geolocation uncertainty reduction is achieved using our method. In future work, we will consider extending this method to a moving ground object scenario.
APPENDIX
The aim of this appendix is to provide the computation of the matrices $A_k$ and $B_k$ presented in Section III. The main equation is

$$
\begin{bmatrix} x_p \\ y_p \end{bmatrix}
=
\begin{bmatrix} x_v \\ y_v \end{bmatrix}
+
\frac{z_p - z_v}{(0,0,1)\, C_b^n V_b}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
C_b^n V_b
$$

where

$$V_b = \begin{bmatrix} \cos(\zeta)\cos(\xi) \\ \cos(\zeta)\sin(\xi) \\ \sin(\zeta) \end{bmatrix}.$$

The estimated parameters are $\gamma_1 = [x_p, y_p]^T$, the position of the ground object, and $\gamma_2 = \delta\psi$, the bias error in the attitude-heading measurement provided by the AHRS. The measured variables are $y = [y_1, y_2]^T$, where $y_1 = [x_v, y_v, z_v, x_f, y_f, \theta, \phi]^T$ and $y_2 = \psi$.

The partial derivatives are obtained as follows:

$$A_k = \frac{\partial F}{\partial y_1}\bigg|_{z_{1k}, z_{2k}} =
\begin{bmatrix}
1 & 0 & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} \\
0 & 1 & a_{23} & a_{24} & a_{25} & a_{26} & a_{27}
\end{bmatrix},
\qquad
B_k = \frac{\partial F}{\partial y_2}\bigg|_{z_{1k}, z_{2k}} =
\begin{bmatrix} b_{11} \\ b_{21} \end{bmatrix}.$$

By setting $\theta = \phi = 0$, we obtain $A_k$ and $B_k$ for the 2-D case as follows:

$$a_{13} = \frac{-\cos(\psi+\xi)\cos\zeta}{\sin\zeta}, \qquad
a_{14} = (z_p - z)\,\frac{-\cos(\psi+\xi)}{\sin^2\zeta}$$

$$a_{15} = (z_p - z)\,\frac{-\sin(\psi+\xi)\cos\zeta\sin\zeta}{\sin^2\zeta}, \qquad
a_{16} = (z_p - z)\,\frac{\cos\psi\sin^2\zeta + \cos(\psi+\xi)\cos^2\zeta\cos\xi}{\sin^2\zeta}$$

$$a_{17} = (z_p - z)\,\frac{\sin\psi\sin^2\zeta - \cos(\psi+\xi)\cos^2\zeta\sin\xi}{\sin^2\zeta}, \qquad
a_{23} = \frac{-\sin(\psi+\xi)\cos\zeta}{\sin\zeta}$$

$$a_{24} = (z_p - z)\,\frac{-\sin(\psi+\xi)}{\sin^2\zeta}, \qquad
a_{25} = (z_p - z)\,\frac{\cos(\psi+\xi)\cos\zeta\sin\zeta}{\sin^2\zeta}$$

$$a_{26} = (z_p - z)\,\frac{\sin\psi\sin^2\zeta + \sin(\psi+\xi)\cos^2\zeta\cos\xi}{\sin^2\zeta}, \qquad
a_{27} = (z_p - z)\,\frac{-\cos\psi\sin^2\zeta - \sin(\psi+\xi)\cos^2\zeta\sin\xi}{\sin^2\zeta}$$

$$b_{11} = (z_p - z)\,\frac{-\sin(\psi+\xi)\cos\zeta}{\sin\zeta}, \qquad
b_{21} = (z_p - z)\,\frac{\cos(\psi+\xi)\cos\zeta}{\sin\zeta}.$$
REFERENCES
[1] S. Y. Zhao, Z. Y. Hu, B. M. Chen, and T. H. Lee
A robust real-time vision system for autonomous cargo transfer by an unmanned helicopter
IEEE Trans. Ind. Electron., vol. 62, no. 2, pp. 1210–1218, Feb.
2015.
[2] Y. F. Shen, Z. Rahman, D. Krusienski, and J. Li
A vision-based automatic safe landing-site detection system IEEE Trans. Aerosp. Electron. Syst., vol. 49, no. 1, pp. 294–311, Jan. 2013.
[3] S. Huh, S. Cho, Y. Jung, and D. H. Shim
Vision-based sense-and-avoid framework for unmanned aerial vehicles
IEEE Trans. Aerosp. Electron. Syst., vol. 51, no. 4, pp. 3427–
3439, Oct. 2015.
[4] M. ElMikaty and T. Stathaki
Car detection in aerial images of dense urban areas
IEEE Trans. Aerosp. Electron. Syst., vol. 54, no. 1, pp. 51–63, Feb. 2018.
[5] L. Zhao, D. Wang, B. Huang, and L. Xie
Distributed filtering-based autonomous navigation system of UAV
Unmanned Syst., vol. 3, no. 1, pp. 17–34, Jan. 2015.
[6] L. R. G. Carrillo, I. Fantoni, E. Rondon, and A. Dzul
Three-dimensional position and velocity regulation of a quad-rotorcraft using optical flow
IEEE Trans. Aerosp. Electron. Syst., vol. 51, no. 1, pp. 358–371, Jan. 2015.
[7] M. K. Kaiser, N. R. Gans, and W. E. Dixon
Vision-based estimation for guidance, navigation, and control of an aerial vehicle
IEEE Trans. Aerosp. Electron. Syst., vol. 46, no. 3, pp. 1064–
1077, Jul. 2010.
[8] B. D. Barber, J. D. Redding, T. W. McLain, R. W. Beard, and N.
Clark
Vision-based target geolocation using a fixed-wing miniature air vehicle
J. Intell. Robot. Syst., vol. 47, no. 4, pp. 361–382, Dec. 2006.
[9] S. Sohn, B. Lee, J. Kim, and C. Kee
Vision-based real-time target localization for single-antenna GPS-guided UAV
IEEE Trans. Aerosp. Electron. Syst., vol. 44, no. 4, pp. 1391–
1401, Oct. 2008.
[10] J. A. Ross, B. R. Geiger, G. L. Sinsley, J. F. Horn, L. N. Long, and A. F. Niessner
Vision-based target geolocation and optimal surveillance on an unmanned aerial vehicle
In Proc. AIAA Guid. Navgat. Control Conf., Honolulu, HI, USA, 2008.
[11] M. E. Campbell and M. Wheeler
Vision-based geolocation tracking system for uninhabited aerial vehicles
J. Guid. Control Dyn., vol. 33, no. 2, pp. 521–532, 2010.
[12] K. M. Han and G. N. DeSouza
Geolocation of multiple targets from airborne video without terrain data
J. Intell. Robot. Syst., vol. 62, no. 1, pp. 159–183, Apr. 2011.
[13] M. Pachter, N. Ceccarelli, and P. R. Chandler
Vision-based target geolocation using micro air vehicles J. Guid. Control Dyn., vol. 31, no. 3, pp. 597–615, 2008.
[14] W. W. Whitacre and M. E. Campbell
Decentralized geolocation and bias estimation for uninhabited aerial vehicles with articulating cameras
J. Guid. Control Dyn., vol. 34, no. 2, pp. 564–573, Mar. 2011.
[15] E. Frew and S. Rock
Trajectory generation for constant velocity target motion estimation using monocular vision
In Proc. IEEE Int. Conf. Robot. Automat., Taipei, Taiwan, 2003, pp. 3479–3484.
[16] L. Paull, C. Thibault, A. Nagaty, M. Seto, and H. Li
Sensor-driven area coverage for an autonomous fixed-wing unmanned aerial vehicle
IEEE Trans. Cybern., vol. 44, no. 9, pp. 1605–1618, Nov. 2014.
[17] S. Twigg, A. Calise, and E. N. Johnson
On-line trajectory optimization including moving threats and targets
In Proc. AIAA Guid. Navgat. Control Conf., Providence, RI, USA, 2004, pp. 2019–2030.
[18] Y. Oshman and P. Davidson
Optimization of observer trajectories for bearings-only target localization
IEEE Trans. Aerosp. Electron. Syst., vol. 35, no. 3, pp. 892–902, Jul. 1999.
[19] H. Oh, D. Turchi, S. Kim, A. Tsourdos, L. Pollini, and B. White Coordinated standoff tracking using path shaping for multiple UAVs
IEEE Trans. Aerosp. Electron. Syst., vol. 50, no. 1, pp. 348–363, Jan. 2014.
[20] M. Radmanesh, M. Kumar, P. H. Guentert, and M. Sarim Overview of path-planning and obstacle avoidance algorithms for UAVs: A comparative study
Unmanned Syst., vol. 6, no. 2, pp. 95–118, 2018.
[21] W. Lu, G. Zhang, and S. Ferrari
An information potential approach to integrated sensor path planning and control
IEEE Trans. Robot., vol. 30, no. 4, pp. 919–934, Aug. 2014.
[22] H. Wei, W. Lu, P. Zhu, G. Huang, J. Leonard, and S. Ferrari Optimized visibility motion planning for target tracking and localization
In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Chicago, IL, USA, 2014, pp. 76–82.
[23] F. Bourgault, A. A. Makarenko, S. B. Williams, B. Grocholsky, and H. F. Durrant-Whyte
Information based adaptive robotic exploration
In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Lausanne, Switzerland, 2002, pp. 540–545.
[24] B. Grocholsky, A. Makarenko, and H. Durrant-Whyte
Information-theoretic coordinated control of multiple sensor platforms
In Proc. IEEE Int. Conf. Robot. Automat., Taipei, Taiwan, 2003, pp. 1521–1526.
[25] W. W. Whitacre, G. Tillinghast, and M. E. Campbell
Information-theoretic optimization of periodic orbits for persistent cooperative geolocation
In Proc. AIAA Guid. Navgat. Control Conf., Honolulu, HI, USA, 2008, Paper AIAA2008-7020.
[26] J. Ousingsawat and M. E. Campbell
Optimal cooperative reconnaissance using multiple vehicles J. Guid. Control Dyn., vol. 30, no. 1, pp. 122–132, 2007.
[27] X. Bing, M. Zhang, and W. N. Browne
Particle swarm optimization for feature selection in classification: A multi-objective approach
IEEE Trans. Cybern., vol. 43, no. 6, pp. 1656–1671, Dec. 2013.
[28] V. Roberge, M. Tarbouchi, and G. Labonte
Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning
IEEE Trans. Ind. Inf., vol. 9, no. 1, pp. 132–141, Feb. 2013.
[29] H. B. Duan and S. Q. Liu
Non-linear dual-mode receding horizon control for multiple unmanned air vehicles formation flight based on chaotic particle swarm optimisation
IET Control Theory Appl., vol. 4, no. 11, pp. 2565–2578, 2010.
[30] L. Wang, Y. Liu, H. Deng, and Y. Xu
Obstacle-avoidance path planning for soccer robots using particle swarm optimization
In Proc. IEEE Int. Conf. Robot. Biomimetics, Kunming, China, Dec. 2006, pp. 1233–1238.
[31] J. L. Foo, J. Knutzon, V. Kalivarapu, J. Oliver, and E. Winer Path planning of unmanned aerial vehicles using B-splines and particle swarm optimization
J. Aerosp. Comput. Inf. Commun., vol. 6, no. 4, pp. 271–290, Apr. 2009.
[32] L. Zhang, J. Chen, and F. Deng
Aircraft trajectory planning for improving vision-based target geolocation performance
In Proc. IEEE Int. Conf. Control Automat., Ohrid, Macedonia, Jul. 2017, pp. 379–384.
[33] L. Zhang et al.
Vision-based target three-dimensional geolocation using unmanned aerial vehicles
IEEE Trans. Ind. Electron., vol. 65, no. 10, pp. 8052–8061, Oct.
2018.
[34] X. Yue, F. Deng, and Y. Xu
Multidimensional zero-crossing interval points: A low sampling rate acoustic fingerprint recognition method
Sci. China Inf. Sci., vol. 62, Jan. 2019, Art. no. 19202.
[35] F. Deng et al.
Energy-based sound source localization with low power consumption in wireless sensor networks
IEEE Trans. Ind. Electron., vol. 64, no. 6, pp. 4894–4902, Jun.
2017.
[36] D. G. Lowe
Distinctive image features from scale-invariant keypoints Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, Nov. 2004.
[37] N. R. Draper and H. Smith
Applied Regression Analysis. New York, NY, USA: Wiley-Interscience, 1998, pp. 108–111.
[38] J. Kennedy and R. C. Eberhart Particle swarm optimization
In Proc. IEEE Int. Conf. Neural Netw., Perth, WA, Australia, Nov. 1995, pp. 1942–1948.
[39] I. Kaminer, W. Kang, O. Yakimenko, and A. Pascoal
Application of nonlinear filtering to navigation system design using passive sensors
IEEE Trans. Aerosp. Electron. Syst., vol. 37, no. 1, pp. 158–172, Jan. 2001.
[40] M. Quigley et al.
ROS: An open-source Robot Operating System In Proc. ICRA Workshop Open-Source Softw., 2009.
[41] E. Olson
AprilTag: A robust and flexible visual fiducial system In Proc. IEEE Int. Conf. Robot. Automat., Shanghai, China, 2011, pp. 3400–3407.
[42] J. Wang and E. Olson
AprilTag 2: Efficient and robust fiducial detection
In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Daejeon, South Korea, 2016, pp. 4193–4198.
Lele Zhang received the B.E. degree in automation from Inner Mongolia University, Inner Mongolia Autonomous Region, China, in 2013. He is currently working toward the Ph.D.
degree in control science and engineering at the School of Automation, Beijing Institute of Technology, Beijing, China.
He engaged in research work as a visiting Ph.D. student with the Department of Electrical and Computer Engineering, National University of Singapore, from November 2016 to November 2017. His current research interests include camera calibration, SLAM, and deep learning, specifically in the area of aerial robotics.
Fang Deng (M’15–SM’18) received the B.E. and Ph.D. degrees in control science and engineering from Beijing Institute of Technology, Beijing, China, in 2004 and 2009, respectively.
He is currently a Professor with the School of Automation, Beijing Institute of Technology. His research interests cover intelligent fire control, intelligent information processing, and smart wearable devices.
Jie Chen (SM’12–F’19) received the B.S., M.S., and Ph.D. degrees in control science and engineering from Beijing Institute of Technology, Beijing, China, in 1986, 1996, and 2001, respectively.
He is currently a Professor with the School of Automation, Beijing Institute of Tech- nology. His current research interests include complex systems, multiagent systems, mul- tiobjective optimization and decision, constrained nonlinear control, and optimization methods.
Yingcai Bi received the B.Eng. degree in automation engineering from Nanjing Uni- versity of Aeronautics and Astronautics, Nanjing, China, in 2011, the M.Eng. degree in control science and engineering from Beihang University, Beijing, China, in 2014, and the Ph.D. degree in electrical and computer engineering from the NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore, in 2018.
He is currently a Research Scientist with the Intelligent Unmanned System group, Temasek Laboratories, National University of Singapore. His research focuses on multisensor fusion, visual-inertial odometry, and SLAM, specifically in the area of micro aerial vehicles.
Swee King Phang (M’14) received the B.Eng. and Ph.D. degrees in electrical engineering from the National University of Singapore, Singapore, in 2010, and 2014, respectively.
He is currently a Lecturer with Taylor’s University, Subang Jaya, Malaysia. His research interests are in system and control of unmanned aerial systems (UAS), including modeling and flight controller design of UAS, indoor navigation of UAS, and trajectory optimization.
Xudong Chen received the B.Eng. degree in mechanical engineering and the M.Eng.
degree in electrical engineering from the National University of Singapore, Singapore, in 2015 and 2018, respectively. He is currently working toward the master's degree in computer science, focusing on computer vision, at Carnegie Mellon University, Pittsburgh, PA, USA.
His research interests include autonomous system localization and navigation, control and sensor fusion, computer vision, and deep learning.