Contour Tracking of Targets with Large Aspect Change
Behzad Jamasbi, Seyed Ahmad Motamedi
Electrical Engineering Department, AMIRKABIR University of Technology, 424 Hafez Ave. ,TEHRAN,IRAN [email protected] [email protected]
Alireza Behrad
Engineering Department, SHAHED University, Tehran-Qom Freeway, TEHRAN, IRAN [email protected]
Abstract—In this paper we present a novel method for tracking the contour of a moving rigid object in natural video sequences from a mobile camera. While a mobile camera is keeping track of a moving target, any change in the relative orientation or position of the target and camera may change the view of the target captured by the camera. This is called target aspect change and is a known challenge in tracking.
In particular, a large aspect change may reveal some previously hidden parts of the target, and this is the case we handle in our algorithm. In the proposed contour tracking method the motion model of the target is estimated using corner matching and the LMedS statistical method. This model is used together with an active contour (snake) to track the target efficiently.
The snake we propose is capable of swelling locally, so that it can swell and include newly appeared parts of the target in case of a large aspect change. Detecting the target's newly appeared parts is based on the fact that a part of the target should move consistently with the other parts and inconsistently with the background. When a new part is detected, the nearby part of the snake is stimulated to swell. In this way the snake will include any new part of the target revealed by a large aspect change. Several experiments have been conducted to show the promise of the proposed algorithm.
Index Terms—contour tracking, moving target, aspect change, active contour, corner matching
I. INTRODUCTION
Visual tracking of moving objects is a significant task in computer vision applications. Among these, applications in which the camera is in motion while tracking are newer and more challenging. The solutions usually consist of extraction and tracking different image features of the target, such as regions, edges, or corners. One major but less studied problem of these solutions is the target aspect change.
A mobile camera facilitates a long period tracking, during which the aspect of the target viewed by the camera may change gradually. A large aspect change may cause some target features to disappear and some previously hidden
ones to appear. So, tracking a constant set of features while the target aspect is changing will increase the tracking error.
In many applications the target is distinct from the background by color or intensity. Consequently, for methods based on tracking these features [1,2,3] an aspect change of the target does not cause much of a problem, because every side of the target is supposed to appear in the same color or intensity. But this is not the general case, since many targets may be colored or textured differently on different sides and/or may be moving against a background crowded with different objects of similar appearance.
Some other methods assign a 3-D geometric model to the target [4]. This may help in detecting new aspects of the target, but these methods restrict the generality of the targets, besides having their own complexities.
In this paper we discuss a contour tracking method for rigid objects while the camera is moving and the target aspect is changing. Contours are well-known descriptors of the position and apparent shape of an object in the image. Contour tracking of a target in natural video sequences has been the subject of much research, where the “Active Contour Model”
(snake) [5] is proven to be a powerful tool, especially in combination with other techniques [6-11]. Although active contour based methods can usually handle target shape deformations, in most of contour tracking algorithms the target aspect change is assumed to be small or negligible.
Different algorithms have been proposed for tracking objects using active contours. Peterfreund [6] introduced a Kalman-filter based active contour for object tracking.
Tsechpenakis [7] proposed a method to extract an uncertainty region for the snake activity in every image frame, using optical flow and contour velocity information. However, these algorithms are not suitable for tracking targets with large aspect change. These methods use the target motion history and track only previously detected target parts, while in the case of large aspect change some previously hidden parts of the target may be revealed.
Lili Qiu [8] used active contour to refine the result of fused motion and color segmentation. Araki [9] defined the
tracking snake energy based on an image obtained by subtracting current frame from background motion compensated previous frame. These methods may be able to track targets with large aspect change, but they re-detect the target at every frame which demands high computation overhead.
Our method consists of a robust motion model estimator proposed by Behrad [12] and an active contour. Behrad used target corner features and their matches in consecutive frames to estimate an affine motion model for the target. We combined this method with an active contour. To deal with large aspect change of the target we introduce a novel snake with local swelling property and a stimulation method which drives the snake to include newly appeared parts of the target.
This paper is organized as follows. In the next section we introduce a contour tracking method based on corner matching and an active contour. In Section III we discuss the contour tracking problem caused by large aspect change of the target and explain our idea to solve it. In Section IV a method for detecting the target's newly appeared parts is explained. The idea proposed in Section III is developed in Section V by proposing a snake with a local swelling property and its stimulation method. Experimental results are provided in Section VI and the conclusion appears in Section VII.
II. PROPOSED CONTOUR TRACKING METHOD
Snake (active contour) is known as a suitable tool for tracking moving boundary of a target in a video sequence.
The snake operates by minimizing an energy function which drives it to edges in the image as well as maintaining its continuity and smoothness.
Note that the global minimum in the snake energy may not always be the desired solution, especially in natural video sequences where the target boundary does not necessarily include strong edges. Therefore, energy minimizing algorithms which try to find the global minimum and decrease the effect of initialization [13,14] are not usually suitable for tracking targets in cluttered backgrounds.
In most of the methods that use snake for tracking, the snake initialization in each frame is a prediction of the desired solution, and the snake activity is limited so that it won’t go far away from initialization, for example by using a prediction energy [10] or a restricted search band [7].
In our method we use the target motion model to predict the target contour in each frame and use it as the snake initialization. We use the simple and fast GREEDY algorithm [15] for energy minimization. The result is the nearest local minimum of the snake energy, which is the most probable to be the correct target boundary. In fact, the role of the snake in this method is to correct the minor prediction error. To ensure that the snake won't lose the target boundary, the energy reduction iterations are limited.
To start tracking, an approximate contour of the desired target in the first frame is outlined by the user. The rough contour is then refined by an ordinary snake to obtain the exact target contour. Tracking begins at the second frame. The tracking algorithm consists of the snake initialization and energy minimization in each frame. For the snake initialization, we first estimate an affine model for the target motion between the previous and current frames.
A. Target Motion Model Estimation
Using the method in [12] we can estimate an affine transform to model the target motion between two consecutive frames. The affine transformation accounts for translation, rotation, scaling and skew between two image frames as follows:
\begin{bmatrix} X_i \\ Y_i \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_4 & a_5 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} a_3 \\ a_6 \end{bmatrix} \qquad (1)

where (x_i, y_i) are coordinates of points in the previous frame, (X_i, Y_i) are coordinates of the corresponding points in the current frame, and a_1, ..., a_6 are the motion parameters. This transformation has six parameters; therefore, three matching pairs are required to fully recover the motion. We estimate the target motion using LMedS (Least Median of Squares) statistics [16] as follows:
1. Extract corner features located inside the target contour at the previous frame (Fig. 1.a).
2. Find the match points of the extracted corners in the current frame (Fig. 1.b).
3. Select M random sets of three feature points and their matches: {(x_i, y_i, X_i, Y_i) | i = 1, 2, 3}, where (x_i, y_i) are coordinates of feature points in the previous frame, and (X_i, Y_i) are their matches in the current frame. M is the minimum number of data points to make a robust estimate.
4. For each data set, calculate the affine parameters.
5. Calculate the median residual over all data.
6. Select the affine transformation with lowest median residual value.
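As a sketch, the six-step LMedS estimation above can be implemented as follows. This is an illustrative reimplementation, not the authors' code; the corner extraction and matching steps are assumed to have already produced the `src`/`dst` correspondence arrays, and `n_trials` is a hypothetical parameter standing in for M:

```python
import numpy as np

def solve_affine(src, dst):
    """Solve a1..a6 exactly from three point pairs (Eq. 1)."""
    # X = a1*x + a2*y + a3 and Y = a4*x + a5*y + a6 give a 6x6 linear system.
    A = np.zeros((6, 6)); b = np.zeros(6)
    for r, ((x, y), (X, Y)) in enumerate(zip(src, dst)):
        A[2 * r]     = [x, y, 1, 0, 0, 0]; b[2 * r]     = X
        A[2 * r + 1] = [0, 0, 0, x, y, 1]; b[2 * r + 1] = Y
    return np.linalg.solve(A, b)          # (a1, a2, a3, a4, a5, a6)

def apply_affine(p, pts):
    a1, a2, a3, a4, a5, a6 = p
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a1 * x + a2 * y + a3, a4 * x + a5 * y + a6], axis=1)

def lmeds_affine(src, dst, n_trials=200, rng=None):
    """Least-Median-of-Squares affine fit over random 3-point samples."""
    rng = rng or np.random.default_rng(0)
    best_p, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(src), size=3, replace=False)
        try:
            p = solve_affine(src[idx], dst[idx])
        except np.linalg.LinAlgError:     # degenerate (collinear) sample
            continue
        # Median of squared residuals over ALL correspondences.
        r2 = np.sum((apply_affine(p, src) - dst) ** 2, axis=1)
        med = np.median(r2)
        if med < best_med:
            best_p, best_med = p, med
    return best_p, best_med
```

Because the score is the median of all residuals, up to roughly half of the correspondences can be gross mismatches without corrupting the selected model.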
The resultant affine model is then used to transform the target contour at the previous frame to obtain the snake initialization at the current frame. Each snaxel (snake or contour point) is transformed separately to obtain its position at the current frame.
B. Snaxel Manipulation
Target motion may include magnification or shrinking, thus contour length may get longer or shorter while the number of snaxels remains constant. This causes the snake properties to vary as a function of target size. To avoid this change we should keep the distance of adjacent snaxels in a constant range. To manage this, we apply the following
algorithm to all contour snaxels in each frame before the snake energy minimization:
1. A snaxel is discarded if its distance to the nearest neighbor is less than a low threshold (TL).
2. A snaxel is added between two adjacent snaxels, if their distance is larger than a high threshold (TH).
Above manipulation also facilitates local stretching or contraction as demanded by the snake deformation.
After applying the above algorithm to the snaxels, the snake energy is minimized using the GREEDY algorithm to obtain the target contour at the current frame. We may use an ordinary snake, or the snake proposed in Section V in order to track targets with large aspect change.
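A minimal sketch of the two snaxel manipulation rules, assuming the contour is stored as an ordered sequence of points on a closed curve; the threshold defaults here are hypothetical:

```python
import numpy as np

def resample_snaxels(pts, t_low=12.0, t_high=20.0):
    """One pass of the snaxel manipulation rules on a closed contour."""
    pts = [np.asarray(p, float) for p in pts]
    # Rule 1: discard a snaxel whose distance to its predecessor is below t_low.
    kept = []
    for p in pts:
        if not kept or np.linalg.norm(p - kept[-1]) >= t_low:
            kept.append(p)
    if len(kept) > 1 and np.linalg.norm(kept[0] - kept[-1]) < t_low:
        kept.pop()                        # wrap-around pair too close
    # Rule 2: insert a midpoint between adjacent snaxels farther than t_high.
    out = []
    for a, b in zip(kept, kept[1:] + kept[:1]):
        out.append(a)
        if np.linalg.norm(b - a) > t_high:
            out.append((a + b) / 2.0)
    return np.array(out)
```

Repeated per frame, this keeps neighbouring snaxel distances inside [t_low, t_high] regardless of how the target magnifies or shrinks.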
III. ASPECT CHANGE PROBLEM IN CONTOUR TRACKING
A continuous motion in target and/or camera may cause a gradual change in the target aspect which is captured by the camera. As tracking is in progress, aspect change causes the target boundary to deform slightly through consecutive frames. Snake as a deformable model can follow these changes to some extent. However, during a large aspect change, some previously hidden parts of the target may appear, and some previously observable parts may disappear. When disappearing, some parts of the target move behind other parts and the old boundaries are replaced by new ones. Therefore, a snake can easily detect new boundaries in this case. However, when a new part of the target appears as a result of large aspect change, the edge points of the old target boundary remain observable. In this case the snake keeps tracking with the old boundary and fails to include the newly appeared parts of the target. This is the problem we aim to solve.
In the case of a large aspect change, new parts of the target appear gradually from behind the target boundary which the snake is ‘locked on’. Assuming a slow aspect change, we can say that a newly appeared part of the target at its emerging period is located somewhere at the outer boundary of the snake. Therefore, in order to include newly appeared parts of the target, the snake should swell at places where there is a probability for the existence of new parts.
This is the idea we have developed in our proposed snake (Sec.V).
IV. DETECTING TARGET NEWLY APPEARED PARTS
New parts of the target which appear as a result of large aspect change usually remain outside the target contour. For detecting these parts, we manage to find corners located outside the target contour which may belong to the target.
Assume a_1, ..., a_6 are the six parameters of the affine transformation which models the target motion between frames k−1 and k. This model is obtained as in Section II.A, and we have used it previously to predict the snake initialization for frame k. In a similar way we obtain the background motion model, represented by a'_1, ..., a'_6. Now, assume

{ C_i(k−1) = [cx_i(k−1)  cy_i(k−1)]^T | 1 ≤ i ≤ Q }

are the corner points at frame k−1 located outside the target contour. If C_i(k−1) is a corner on the target, we may predict its match at frame k as follows:
Figure 1. Steps 1 and 2 of the target motion model estimation:
(a) Step 1: corner features inside the target contour at the previous frame are extracted.
(b) Step 2: match points of the extracted corners are found at the current frame.

P_i(k) = \begin{bmatrix} a_1 & a_2 \\ a_4 & a_5 \end{bmatrix} C_i(k-1) + \begin{bmatrix} a_3 \\ a_6 \end{bmatrix} \qquad (2)
Similarly, if C_i(k−1) is a corner on the background, its match at frame k should be close to:

P'_i(k) = \begin{bmatrix} a'_1 & a'_2 \\ a'_4 & a'_5 \end{bmatrix} C_i(k-1) + \begin{bmatrix} a'_3 \\ a'_6 \end{bmatrix} \qquad (3)
Finally, consider M_i(k) as the true match point of C_i(k−1) at frame k, which is found using a feature matching algorithm. Some useful distances between the true and predicted match points are:

d_i(k) = ‖M_i(k) − P_i(k)‖,    d'_i(k) = ‖M_i(k) − P'_i(k)‖ \qquad (4, 5)
If d_i(k) is small enough, we may conclude that the motion of M_i(k) is consistent with the target motion. The following rule is used in [12] to find consistent feature points:

“If d_i(k)² < 2.5² σ̂² then the motion of M_i(k) is consistent with the target motion”
where σ̂ is the robust standard deviation estimate [16] of the LMedS algorithm used to find the target affine model, and is given by:

σ̂ = 1.4826 [1 + 5/(N_f − 3)] √(M_f) \qquad (6)

in which M_f is the minimal median and N_f is the total number of features used for calculation of the affine model.
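Eq. (6) and the consistency rule can be sketched directly; `robust_sigma` and `is_consistent` are hypothetical helper names used here for illustration:

```python
import numpy as np

def robust_sigma(min_median, n_features):
    """LMedS robust standard deviation estimate, Eq. (6):
    sigma_hat = 1.4826 * (1 + 5/(N_f - 3)) * sqrt(M_f)."""
    return 1.4826 * (1.0 + 5.0 / (n_features - 3)) * np.sqrt(min_median)

def is_consistent(d, sigma_hat):
    """Target-consistency rule: d(k)^2 < (2.5 * sigma_hat)^2."""
    return d ** 2 < (2.5 * sigma_hat) ** 2
```

The 1.4826 factor rescales the median of squared residuals to an unbiased standard deviation under Gaussian noise, and the 5/(N_f − 3) term is a small-sample correction.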
Since the intended targets in this work are rigid objects such as vehicles, all the corners on the target should have a motion consistent with the target. Yet, the reverse statement is not always valid, i.e. consistency with the target motion does not necessarily imply belonging to the target. For example, if the target ceases to move for a period of time, there might be an apparent motion in the image due to the camera motion, but there will be no difference between the target and background motions. Therefore, all the features on the background appear to move consistently with the target. So, in order to gain a factor of belonging to the target for feature points, besides checking for consistency with the target motion we also check for inconsistency with the background motion. The rule for background motion inconsistency can be stated as follows:
“If d'_i(k)² > 2.5² σ̂'² then the motion of M_i(k) is inconsistent with the background motion”

where σ̂' is the robust standard deviation estimate of the LMedS algorithm used to find the background motion model (a'_1, ..., a'_6), and is calculated similarly to σ̂.
Provided that we use a pixel matching algorithm to find M_i(k), its coordinates are integer numbers. So, as a pixel value, M_i(k) has a tolerance (inaccuracy) of ±1/2 pixel in each direction (x or y). In other words, the real match point of C_i(k−1) (which cannot be calculated due to the limited image resolution) is not distanced from M_i(k) by more than 1/2 pixel in each direction, or √2/2 pixel in the image plane. On the other hand, P'_i(k) is a real-valued point obtained by transforming C_i(k−1) with the background motion model (Eq. 3). Therefore, considering the worst case, we have a tolerance of ±√2/2 pixel for the value d'_i(k) = ‖M_i(k) − P'_i(k)‖.
So, the condition “greater than” (‘>’) cannot be validly applied to d'_i(k)² values smaller than 1/2, and the rule of inconsistency should be restated as follows:

“If d'_i(k)² > max(2.5² σ̂'², 0.5) then the motion of M_i(k) is inconsistent with the background motion”
Using the two rules of consistency with the target motion and inconsistency with the background motion, we define the “Factor of Belonging to Target” for feature point M_i(k) as follows:

Ψ_i(k) = \begin{cases} 1 - \dfrac{d_i(k)^2}{2.5^2 \hat{\sigma}^2} & \text{if } d_i(k)^2 < 2.5^2 \hat{\sigma}^2 \text{ and } d'_i(k)^2 > \max(2.5^2 \hat{\sigma}'^2,\ 0.5) \\ 0 & \text{else} \end{cases} \qquad (7)

Obviously, Ψ_i(k) is nonzero only if the motion of M_i(k) is consistent with the target and inconsistent with the background, which means M_i(k) may belong to the target. When nonzero, Ψ_i(k) is a value between zero and one which represents the degree of consistency of M_i(k) with the target motion. So, Ψ_i(k) may be considered as the probability of M_i(k) belonging to the target, i.e. feature points with larger values of Ψ_i(k) are more probable to be parts of the target. Therefore, Ψ_i(k) values can be used to determine the location of newly appeared target parts.
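The factor of belonging can be sketched as below. The shape of the nonzero branch, 1 − d_i(k)²/(2.5² σ̂²), follows our reading of Eq. (7), and `belonging_factor` is a hypothetical helper name:

```python
def belonging_factor(d_target, d_background, sigma_t, sigma_b):
    """Factor of Belonging to Target, Eq. (7): nonzero only when the match
    is consistent with the target motion AND inconsistent with the
    background motion; then it grades consistency into (0, 1]."""
    t2 = (2.5 * sigma_t) ** 2                 # target consistency bound
    b2 = max((2.5 * sigma_b) ** 2, 0.5)       # background bound, floored at 1/2
    if d_target ** 2 < t2 and d_background ** 2 > b2:
        return 1.0 - d_target ** 2 / t2
    return 0.0
```

A perfect target-consistent, background-inconsistent match scores 1.0; a match that also fits the background motion (e.g. when the target is momentarily stationary) scores 0 even if it fits the target model.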
V. PROPOSED SNAKE
As a solution for aspect change problem discussed in Sec.
III, we propose a snake which is capable of swelling locally, and a stimulation method to determine which parts of the snake should swell in order to include newly appeared parts of the target.
A. Snake Energy Definition
A parametric representation for a closed snake with N ordered snaxels can be given as follows:

{ v_j = [vx_j  vy_j]^T | 1 ≤ j ≤ N+1,  v_{N+1} = v_1 } \qquad (8)

where v_j is the snake point (snaxel) indexed j. To use the GREEDY algorithm for energy reduction, the snake energy should be defined in the neighborhood of each snaxel. So we propose the following snake energy:
E_{Snake}(v, j) = E_{int}(v, j) + E_{edge}(v, j) + E_{bal}(v, j) \qquad (9)

where v is a neighbor of snaxel v_j at which the energy is defined, E_int is the snake internal energy, E_edge is the edge energy and E_bal is the balloon energy. We use the following definition for the snake internal energy:
E_{int}(v, j) = α [h̄ − ‖v − v_{j−1}‖]² + β ‖v_{j−1} − 2v + v_{j+1}‖² \qquad (10)

where α and β are the snake continuity and smoothness coefficients respectively and h̄ is the average distance between snaxels. The edge energy is defined as follows:

E_{edge}(v, j) = −γ ‖∇I(vx, vy) · n̂_j‖ \qquad (11)

where ∇I(vx, vy) is the image gradient at v, n̂_j is the normal vector of the snake at v_j and γ is the edge energy coefficient. This edge energy definition has a special effect in our solution, i.e. the snake with this energy is more probable to detect new parts of the target that appear as a result of aspect change, because this snake tends to be collinear with image edges and thus tries to find new extensions of previously detected boundaries of the target.
Finally, E_bal is the balloon energy [17], defined as follows:

E_{bal}(v, j) = −λ_j [n̂_j · (v − v_j)] \qquad (12)

where λ_j > 0 is the balloon energy coefficient which determines the snake tendency to swell at the location of snaxel v_j. In other words, the snake swells at areas where the λ_j values are large enough. Therefore, the snake local swelling can be controlled by determination of the balloon energy coefficients, which we refer to as “swelling stimulation”.
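One GREEDY pass over this energy (Eqs. 9-12) might look like the following simplified sketch. It assumes a 3×3 search neighborhood, a counter-clockwise contour (so the rotated tangent gives the outward normal), and precomputed image-gradient arrays; it illustrates the energy terms rather than reproducing the authors' implementation:

```python
import numpy as np

def greedy_step(snx, grad_x, grad_y, alpha=5.0, beta=2.0, gamma=1.0, lam=None):
    """One GREEDY pass: move each snaxel to the 3x3 neighbour that
    minimises E_int + E_edge + E_bal (Eqs. 9-12)."""
    snx = np.asarray(snx, float)
    n = len(snx)
    lam = np.zeros(n) if lam is None else lam
    # h_bar: average snaxel spacing used by the continuity term of Eq. (10).
    h_bar = np.mean(np.linalg.norm(np.roll(snx, -1, 0) - snx, axis=1))
    new = snx.copy()
    for j in range(n):
        prev, nxt = new[(j - 1) % n], snx[(j + 1) % n]
        tang = nxt - prev
        nrm = np.array([tang[1], -tang[0]])          # outward normal (CCW assumed)
        nrm = nrm / (np.linalg.norm(nrm) + 1e-9)
        best, best_e = snx[j], np.inf
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                v = snx[j] + np.array([dx, dy], float)
                # Eq. (10): continuity + smoothness.
                e_int = (alpha * (h_bar - np.linalg.norm(v - prev)) ** 2
                         + beta * np.linalg.norm(prev - 2 * v + nxt) ** 2)
                # Eq. (11): reward gradient component along the normal.
                gx = grad_x[int(v[1]), int(v[0])]
                gy = grad_y[int(v[1]), int(v[0])]
                e_edge = -gamma * abs(gx * nrm[0] + gy * nrm[1])
                # Eq. (12): balloon term, swelling where lam[j] is large.
                e_bal = -lam[j] * float(nrm @ (v - snx[j]))
                e = e_int + e_edge + e_bal
                if e < best_e:
                    best, best_e = v, e
        new[j] = best
    return new
```

With zero image gradient and a large λ, the balloon term dominates and every snaxel steps outward, which is exactly the local swelling behaviour described above.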
B. Swelling Stimulation Method
In this section we propose a method to determine which parts of the snake should swell in order to include newly appeared parts of the target.
As discussed before, the balloon energy coefficients should be increased where there is a probability for the existence of newly appeared parts of the target. In other words, if a snaxel v_j is in the neighborhood of a match point M_i(k) with a large value of Ψ_i(k), we should assign a large value to λ_j(k). The population of feature points near v_j, their distances to the snaxel, and the corresponding Ψ_i(k) values are factors which should be involved in assigning a value to λ_j(k). Our algorithm for assigning a value to λ_j(k) consists of the following three steps:
Step 1: Construction of Stimulation Image
To create the stimulation image, we first construct an image I for frame k as follows:

I(x, y, k) = \begin{cases} \Psi_i(k) & \text{if } [x \; y]^T = M_i(k) \\ 0 & \text{else} \end{cases} \qquad (13)
The only bright pixels in this image are the locations of match points which may belong to the target, and the degree of brightness represents the probability of this belonging.
Since the match points M_i(k) are discretely distributed in the resultant image, we apply a Gaussian filter to the image to distribute the Ψ_i(k) probability values in the neighborhood of the match points and obtain the stimulation image S as follows:

S(x, y, k) = G_σ(x, y) * I(x, y, k) \qquad (14)

in which G_σ represents a Gaussian low-pass filter with standard deviation σ (note that I and S pixel values are not discrete). Pixel values of S can be considered as the probability for the existence of newly appeared target parts at the corresponding pixel locations. The stimulation image is so called because it is used to stimulate the snake to swell. An example of a stimulation image appears in Fig. 2.b. Shining areas correspond to match points whose motion is consistent with the target and inconsistent with the background, and thus belong to a part of the target which lies outside the contour.
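Eqs. (13)-(14) can be sketched with a plain separable Gaussian (numpy only); `stimulation_image` and `gaussian_blur` are hypothetical helper names:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=3):
    """Separable Gaussian low-pass filter, G_sigma of Eq. (14)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()                                   # normalised 1-D kernel
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def stimulation_image(shape, matches, psi, sigma=1.0):
    """Eq. (13): impulses of belonging factors at match-point pixels,
    then Eq. (14): Gaussian spreading into their neighbourhoods."""
    I = np.zeros(shape)
    for (mx, my), p in zip(matches, psi):
        I[my, mx] = p                              # bright only at M_i(k)
    return gaussian_blur(I, sigma)
```

Because the kernel is normalised, blurring redistributes each Ψ_i(k) value over a neighbourhood without changing its total mass (away from the image border).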
Figure 2. (a) A frame of a tracking video sequence: white squares indicate the match points M_i(k) of corners located outside the target contour and close to it, and the white line indicates the initial contour for this frame; a part of the target is excluded. (b) The stimulation image of the same frame, scaled to be visible, and the initial contour; the contour brightness is proportional to the λ_j(k) coefficients.
Step 2: Stimulation
The stimulation image is sampled at the pixels where the contour snaxels are located. The sampled values are used to update the balloon energy coefficients of the corresponding snaxels as follows:

λ_j(k) = m λ_j(k−1) + (1 − m) c_λ S(vx_j, vy_j, k) \qquad (15)

where c_λ is a sensitivity coefficient which controls the extent of the snake reaction (swelling) to stimulation, and 0 ≤ m ≤ 1 is a ratio which may be called the memory coefficient.
Remember that k indicates the frame number in the sequence. Using (15) to update the balloon energy coefficients in each frame, the snake swelling tendency accumulates at areas where the stimulation continues for several consecutive frames. These areas are most probable to be newly appeared parts of the target. So, when the swelling tendency (represented by the λ_j(k) values) grows enough, the snake starts swelling at that area. Stimulation and swelling continue until the snake includes the target totally. At that point the stimulation stops and the swelling tendency fades.
In this way equation (15) models the gradual nature of aspect change and prevents the snake from swelling in response to temporary stimulations, which are usually wrong ones. A further discussion about this equation and the parameter m is provided in Sec. VI. Note that to reset the swelling property, at the first tracking frame the λ_j(1) values should be set to zero.
Step 3: Swelling Property Manipulation
To have a uniform local swelling, the balloon energy coefficients of adjacent snaxels should be close in value. To manage this, the following low-pass (moving average) filter is applied on the λ_j(k) values twice:

λ_j(k) ← [λ_{j−1}(k) + λ_j(k) + λ_{j+1}(k)] / 3 \qquad (16)

Finally, a threshold is applied on the λ_j(k) values to avoid over-swelling, as follows:

λ_j(k) = \begin{cases} λ_H & \text{if } λ_j(k) > λ_H \\ λ_j(k) & \text{else} \end{cases} \qquad (17)

where λ_H is the high threshold. Having determined the balloon energy coefficients of the snake at frame k, energy reduction drives the snake to the target boundaries, including new ones that appear as a result of the target's large aspect change.
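The whole per-frame stimulation update (Eqs. 15-17) can be sketched as follows, with the authors' m = 0.5 as the default memory coefficient and hypothetical defaults for the remaining parameters:

```python
import numpy as np

def update_lambdas(lam_prev, S, snx, m=0.5, c_lam=1000.0, lam_high=1.0):
    """Steps 2-3 of swelling stimulation (Eqs. 15-17)."""
    # Eq. (15): leaky accumulation of stimulation sampled at snaxel pixels.
    s = np.array([S[int(y), int(x)] for x, y in snx])
    lam = m * lam_prev + (1.0 - m) * c_lam * s
    # Eq. (16): moving-average smoothing over neighbouring snaxels, twice.
    for _ in range(2):
        lam = (np.roll(lam, 1) + lam + np.roll(lam, -1)) / 3.0
    # Eq. (17): clip to the high threshold to avoid over-swelling.
    return np.minimum(lam, lam_high)
```

With zero stimulation the coefficients decay by the factor m each frame, while a persistent bright spot in S accumulates until the clipped maximum λ_H is reached, which is the gradual, noise-resistant behaviour described above.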
VI. EXPERIMENTAL RESULTS
The proposed algorithm was tested on several video sequences. The results show that the proposed method is capable of tracking moving vehicle targets with large aspect change. Fig. 3 shows the contours of a moving target, tracked using different tracking methods. In this experiment we compared three tracking methods:
a) Single ordinary snake: In this method an ordinary snake [5] is used for tracking. The resultant target contour at each frame is used as the initial snake for the next frame.
b) Proposed tracking method using ordinary snake: It is the same as the previous method except the snake initialization method, for which the proposed algorithm of Sec. II is used.
c) Proposed tracking method using proposed snake:
Proposed tracking method of Sec. II is used with the snake proposed at Sec. V to form a promising method for tracking targets with large aspect change.
As depicted in Fig. 3.a, single ordinary snake easily loses the target boundary and is not suitable for tracking in real world video sequences. Fig. 3.b shows that proposed tracking method using ordinary snake is successful as long as the target aspect change is negligible. But, continuation of aspect change causes the algorithm to lose track of the target boundary partially. Fig. 3.c shows the advantage of using proposed snake with proposed tracking method: the resultant contour includes the target totally.
To have a numeric evaluation of the results we define the “tracking error percentage” as follows:

TE\% = \frac{n(R \cup C) - n(R \cap C)}{n(R \cup C)} \times 100 \qquad (18)

where R is the set containing the target pixels, C is the set of pixels inside the tracked contour (the result of the algorithm), and n(B) indicates the number of members of set B. Fig. 4 depicts the tracking error percentage for the different methods tested on the video sequence of Fig. 3.
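On boolean pixel masks, Eq. (18) reduces to a few set operations; this sketch assumes the reading of (18) as symmetric difference over union:

```python
import numpy as np

def tracking_error_pct(R, C):
    """Eq. (18): TE% = (n(R u C) - n(R n C)) / n(R u C) * 100,
    computed on boolean target/contour masks."""
    R, C = np.asarray(R, bool), np.asarray(C, bool)
    union = np.count_nonzero(R | C)
    inter = np.count_nonzero(R & C)
    return 100.0 * (union - inter) / union
```

The measure is 0% for a perfect contour and penalises both missed target pixels and background pixels wrongly enclosed by the contour.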
Figure 3. Contour tracking using different methods (frames 40, 120, 160): (a) single ordinary snake, (b) proposed tracking method using ordinary snake, (c) proposed tracking method using proposed snake.
Figure 4. Tracking error percentage of Fig. 3 experiments
Figure 5. Contour tracking using the proposed tracking method with the proposed snake (frames 40, 80, 110, 130).
Fig. 5 shows the result of tracking a vehicle with drastic aspect change against a cluttered background, and Fig. 6 depicts the tracking error percentage in this experiment. Obviously, using the proposed tracking method with the proposed snake yields the lowest tracking error. Although during a fast aspect change (frames 100 through 120) the algorithm partially loses track of the target boundary, it is able to recover and accomplish tracking successfully.
Figures 7 and 8 show the results of using the aforementioned tracking methods on a video which includes zooming effects and camera vibration. As can be seen in Fig. 8, camera vibration causes the ordinary snake to lose the target very soon.
However, since the proposed tracking method is based on target motion estimation between each two consecutive frames, it is robust to camera vibration. Still, when the target starts to change aspect the tracking error increases. Using the proposed snake helps maintain the error at a low level.
Although we assumed that the target contour at the first tracking frame is correctly known, the proposed snake is capable of recovering from error in the initial target contour.
In an experiment shown in Fig. 9 the initial contour at the first frame includes just a part of the target. However, using proposed tracking method with proposed snake, the target contour has been recovered while tracking.
We tried to use optimal parameters for proposed tracking algorithm. Parameter values used for experiments are as follows:
Figure 6.
Tracking error percentage for experiments on Fig. 5 video.
α = 5, β = 2, σ = 1, T_L = 12, T_H = 20, c_λ = 1000, λ_H = 1

(σ is the Gaussian filter standard deviation used in Eq. (14)).
For the ordinary snake, γ = 1.2 yields the best results. However, the swelling property in the proposed snake acts against the internal energy, which forces the snake to shrink. So, to balance these effects we use γ = 1 for the proposed snake. Also, the number of energy reduction iterations for both the ordinary and proposed snakes is limited to 10.
Figure 8.
Tracking error percentage for experiments on Fig. 7 video.
Figure 7. Contour tracking using the proposed tracking method with the proposed snake (frames 5, 65, 115, 205).
Figure 9. Contour tracking of a target using the proposed tracking method with the proposed snake (frames 5, 15, 35, 105); the contour includes the target only partially at the starting frame.
Discussion: parameter ‘m’
As explained in Section V.B, Eq. (15) gives the snake a memory to remember which parts of it have been stimulated in previous frames, and the value of the parameter m defines the share of the stimulation history in the effective stimulation at the current frame. Consequently, the larger m is, the more inertia the snake swells with, and the smaller m is, the faster the snake responds to stimulation. In the case of a fast aspect change, e.g. when a car makes a fast turn, fast swelling and thus a small m is required. But a small m makes the snake too sensitive to stimulation errors. Fig. 10 depicts the tracking error percentage for a typical target aspect change using different values of m. It can be seen that m = 0.4 yields the lowest error. Larger values of m increase the tracking error during aspect change because of the delay in swelling.
But when the aspect change is over, the algorithm has time to include the newly appeared parts of the target and decrease the tracking error. Using values of m smaller than 0.4 increases the tracking error in another way: wrong stimulations easily make the snake swell, and may cause the snake to get trapped on strong edges in the background. So, although m = 0.4 yields the least tracking error, in order to have better stability we moved to the more secure region, that is, larger values of m.
So, we use m = 0.5 to have both low error and high stability.
VII. CONCLUSION
In this paper we proposed an active contour based tracking method that utilizes motion information obtained from corner matching in the form of an affine motion model. We also proposed a snake with a local swelling property and a stimulation method which enables the snake to include newly appeared parts of the target. These new parts appear as a result of the target's large aspect change and are distinguished by their motion, which is consistent with the main target body motion and inconsistent with the background motion. Experimental results show that the proposed tracking algorithm using an ordinary snake can track targets with minor aspect change successfully and much better than an ordinary snake alone. Moreover, the proposed snake combined with the proposed tracking method shows good tracking results for targets with drastic aspect change in cluttered backgrounds.
Furthermore, this method can accomplish tracking while the camera is moving, zooming or vibrating or even if the target is roughly selected at the first frame.
REFERENCES
[1] P. Moallem, A. Memarmoghadam, Target tracking in Successive Video Frames Using Probabilistic Segmentation of Gray Levels, Proceedings, Fourteenth Iranian Conference on Electrical Engineering, May 2006
[2] B. Heisele, U. Kressel, W. Ritter, Tracking non-rigid, moving objects based on color cluster flow, cvpr, p. 257, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97), 1997
[3] Shiuh-Ku Weng, Chung-Ming Kuo and Shu-Kang, Video object tracking using adaptive Kalman filter, Journal of Visual Communication and Image Representation, Volume 17, Issue 6, Pages 1190-1208, December 2006
[4] Zheng Li, Han Wang, Real-time 3D motion tracking with known geometric models, Real-Time Imaging archive , Volume 5 , Issue 3, Pages: 167 - 187, June 1999
[5] M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active Contour Models, Proceedings of the International Conference on Computer Vision, London, England, vol. 1, June 1987 [6] N. Peterfreund, Robust tracking of position and velocity with
Kalman snakes , IEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 21, No. 6, June 1999
[7] G. Tsechpenakis, K. Rapantzikos, N. Tsapatsoulis and S.
Kollias , A snake model for object tracking in natural sequences , Signal Processing: Image Communication, Volume 19, Issue 3, March 2004
[8] Lili Qiu, Li Li, Contour extraction of moving objects, Proceedings of the Fourteenth International Conference on Pattern Recognition, 1998
[9] S. Araki, T. Matsuoka, N. Yokoya and H. Takemura, Real-time tracking of multiple moving object contours in a moving camera image sequence, IEICE Trans. Inf. & Syst., Vol. E83-D, No. 7, July 2000
[10] Sang-Cheol Park, Sung-Hoon Lim, Bong-Kee Sin and Seong- Whan Lee , Tracking non-rigid objects using probabilistic Hausdorff distance matching , Pattern Recognition, Volume 38, Issue 12, December 2005
Figure 10. Effect of parameter m
[11] Ning Xu and Narendra Ahuja. Object contour tracking using graph cuts based active contours, Proceedings of IEEE International Conference on Image Processing, Vol. 3, Rochester, NY. September 2002
[12] A. Behrad, A. Motamedi, Moving target detection and tracking using edge features detection and matching, IEICE Trans. Inf. & Syst. , Vol. E86-D, No. 12, December 2003 [13] Jierong Cheng and Say Wei Foo, Dynamic Directional
Gradient Vector Flow for Snakes , IEEE Transactions on Image Processing, Vol. 15, No. 6, June 2006
[14] Ronghua Yang, Majid Mirmehdi, and Xianghua Xie, A Charged Active Contour based on Electrostatics, Proceedings of the 5th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS), Springer LNCS, September 2006.
[15] D. J. Williams and M. Shah, A fast algorithm for active contours and curvature estimating, CVGIP: Image Understanding. Vol. 55, No. 1, January 1992
[16] P.H.S Torr and D. Murray, Outlier detection and motion segmentation, Sensor Fusion VI, SPIE, Vol. 2059, 1993 [17] L.D. Cohen, On active contour models and balloons,
Computer Vision Graphics Image Processing: Image Understanding 53, 2, 1991