CITR at Tamaki Campus (http://www.citr.auckland.ac.nz)

CITR-TR-78 November 2000

The Background Subtraction Problem for Video Surveillance Systems

Alan McIvor 1, Qi Zang 2, and Reinhard Klette 2

Abstract

This paper reviews work on tracking people in video surveillance systems, and it presents a new system designed to cope with shadows in a real-time people-counting application; handling shadows is one of the main remaining problems of adaptive background subtraction in such systems.

1 Reveal Ltd., Level 1, Tudor Mall, 333 Remuera Rd, Auckland, New Zealand

2 CITR, Department of Computer Science, Tamaki Campus, University of Auckland, Private Bag 92019, Auckland, New Zealand


Keywords: video surveillance, background subtraction, counting people

1 Introduction

Video surveillance systems seek to automatically identify events of interest in a variety of situations. Example applications include intrusion detection, activity monitoring, and pedestrian counting. The capability of extracting moving objects from a video sequence is a fundamental and crucial problem of these vision systems. For systems using static cameras, background subtraction is the method typically used to segment moving regions in the image sequences, by comparing each new frame to a model of the scene background; see, e.g., [6,13].

1.1 Classification of Existing Approaches

There are many different approaches to solving the problem of tracking people in a video surveillance system. These can be classified into three main groups: feature-based tracking, background subtraction, and optical flow techniques. Feature-based tracking can be further divided depending on the scale of the features. These various approaches are discussed below. Note that this classification is based on the methods used for detecting objects of interest. The other component of a successful system is the methodology used for solving the correspondence of features between frames. In almost all approaches, this is solved using a method based on Kalman filter tracking [1].

Large Scale Feature Based Tracking. In this approach, a large-scale feature of an object is identified and subsequently tracked. Typically, this is the bounding silhouette of the object, and it is tracked using a snake-type tracker, e.g., [3]. In the context of tracking people, another useful feature is to detect faces by

Small Scale Feature Based Tracking. In this approach, features such as edge fragments or corners are identified in the images and these are independently tracked through the image sequence. Then these features have to be grouped into clusters that correspond to independent moving objects. For example, in [2], corner points are tracked in a road surveillance application and then grouped together on the basis that all corners on a vehicle will have the same motion, and that this motion will be significantly different from that of other vehicles over the scene, even when travelling in the same highway lane. Such an approach would be difficult to transfer to a pedestrian tracking application because people are highly articulated objects, and hence individual components exhibit significant relative motion. Moreover, given the smoother exterior of people, the extracted corner points will not have stable surface generators, and they will tend to be viewpoint sensitive.

The significant problem in this approach is the clustering of the tracked primitive features into groups that correspond to objects. One possible approach is to use a perceptual grouping technique [15].

Optical Flow Based Techniques. The basis of this approach is to estimate the optical flow at all points of the image plane. Significant points are then grouped based on principles of motion coherence, etc. This approach is closely related to the small-scale feature-based tracking approach.

Background Subtraction. Background subtraction is a region-based approach where the objective is to identify parts of the image plane that are significantly different to the background, i.e., the scene as it would appear if there were no foreground objects in it. The significant problem to be addressed in this approach is that of estimating the background image, especially when the illumination levels change. The method described in this paper falls into this category. Another important aspect of the background subtraction approach is the definition of what constitutes a significant difference from the background. This is a threshold selection problem [19].

1.2 Evaluation Criteria

When the above methods are applied to a video surveillance problem, there are a number of key attributes and scenarios that must be handled. These are:

- The choice of models for key components.
- Initialization of the model parameters when the system is first started.
- Distinguishing objects of interest from illumination artifacts such as shadows and highlights.
- Handling uncontrollable illumination level changes, such as occur in outdoor scenes.
- Adapting to changes in the scene, such as when a new chair is introduced into a restaurant.
- Detecting when something is wrong with the processing, and re-initialization.

The rest of this paper will concentrate on approaches based on background subtraction. Firstly, existing methods will be reviewed with respect to the above criteria. Then a new method will be described and its performance evaluated.

2 Review of Existing Methods

In a background subtraction based approach, the field of view can be divided into three components:

- background = the parts of the static scene that are still visible.
- objects = things of interest to the application, e.g., pedestrians.
- artifacts = image changes such as shadows and highlights.

The combination of the "objects" and the "artifacts", i.e., everything that is not "background", is often called the foreground. In most existing methods, only the background is explicitly modelled. The foreground is divided into object and artifact on the basis of various image properties. However, [16] takes the opposite approach of explicitly modelling the color distribution of each object using a mixture of Gaussians and uses this to detect the objects. In [18,22], both the background and the foreground are modelled as Gaussian distributions.

In this section, we discuss existing methods with respect to the choice of a background model and how it is maintained in the face of illumination changes, how the foreground is differentiated from the background, and what strategies are used to correct classification errors. Finally, the classification of foreground into objects and artifacts is discussed. The following notation will be used throughout this section:

    I_t = the incoming image at time t.

    B_t = the background estimate at time t.

The coordinates associated with a pixel are not shown in the equations unless necessary.

2.1 Background Models

The simplest and most common model for the background is to use a point estimate of the color at each pixel location, e.g., [12]. This point estimate is usually taken to be the mean of a Gaussian distribution. In some systems, e.g., [17], the variance of the intensity is also modelled.

To cope with variation in the illumination, the background estimate has to be continually updated. In [12], this is updated using

    B_{t+1} = \alpha I_t + (1 - \alpha) B_t    (1)

where \alpha is kept small to prevent artificial "tails" forming behind moving objects.
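As a concrete illustration, the running-average update (1) is a single per-frame operation. The sketch below is not from the paper; the function name and the value \alpha = 0.05 are illustrative assumptions (the text only says that \alpha is kept small).

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average update B_{t+1} = alpha*I_t + (1 - alpha)*B_t.

    alpha is kept small so that moving objects do not leave artificial
    "tails" in the background estimate; 0.05 is an illustrative value.
    """
    return alpha * frame + (1.0 - alpha) * background

# A pixel that stays constant converges to its observed value over time.
bg = np.zeros((2, 2))
for _ in range(200):
    bg = update_background(bg, np.full((2, 2), 100.0))
```

With a small \alpha the estimate converges slowly; after 200 frames of a constant scene it is within a fraction of a gray level of the true value.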

In [5], the background is maintained as the temporal median of the last N frames. The updating of the background estimate is often restricted to pixels which have been classified as background. In [9-11], three parameters are estimated at each pixel: M, N, and D, which represent the minimum, maximum, and largest interframe absolute difference observable in the background scene.

Such simple background estimates fail to cope with scenes which contain regularly changing intensities at a pixel, such as occur with flashing lights and swaying branches. Several more complex background models have been developed to handle such scenarios. In [7,20,14], each pixel is separately modelled by a mixture of K Gaussians

    P(I_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(I_t; \mu_{i,t}, \Sigma_{i,t})    (2)

where K = 4 in [14] and K = 3...5 in [20]. In [7,20], it is assumed that \Sigma_{i,t} = \sigma_{i,t}^2 I. The background is updated, before the foreground is detected, as follows:

1. If I_t matches component i, i.e., I_t is within \lambda standard deviations of \mu_{i,t} (where \lambda is 2 in [7] and 2.5 in [14,20]), then the i-th component is updated as follows:

       \omega_{i,t} = \omega_{i,t-1}    (3)

       \mu_{i,t} = (1 - \rho) \mu_{i,t-1} + \rho I_t    (4)

       \sigma_{i,t}^2 = (1 - \rho) \sigma_{i,t-1}^2 + \rho (I_t - \mu_{i,t})^T (I_t - \mu_{i,t})    (5)

   where \rho = \alpha \Pr(I_t | \mu_{i,t-1}, \sigma_{i,t-1}).

2. Components which the incoming image doesn't match are updated by

       \omega_{i,t} = (1 - \alpha) \omega_{i,t-1}    (6)

       \mu_{i,t} = \mu_{i,t-1}    (7)

       \sigma_{i,t}^2 = \sigma_{i,t-1}^2    (8)

3. If I_t does not match any component, then the least likely component (the one with the smallest \omega_{i,t}) is replaced with a new one which has \mu_{i,t} = I_t, \sigma_{i,t} large, and \omega_{i,t} low.

The weights then have to be renormalized.
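The per-pixel mixture update of equations (2)-(8) can be sketched as follows for a grayscale pixel, where \Sigma_{i,t} = \sigma_{i,t}^2 I reduces to a scalar variance. This is an illustrative reconstruction, not code from the paper; the replacement variance (900.0) and replacement weight (0.01) are assumptions.

```python
import numpy as np

def gaussian(x, mean, var):
    """1-D normal density, standing in for the mixture component density."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def update_mog(weights, means, variances, x, alpha=0.01, lam=2.5):
    """One per-pixel update step of the K-Gaussian mixture background model.

    lam = 2.5 matches the matching threshold quoted from [14,20]; alpha
    and the replacement values are illustrative. Arrays are updated in place.
    """
    dist = np.abs(x - means) / np.sqrt(variances)
    matched = np.flatnonzero(dist < lam)
    if matched.size > 0:
        i = matched[np.argmin(dist[matched])]          # best-matching component
        rho = alpha * gaussian(x, means[i], variances[i])
        means[i] = (1 - rho) * means[i] + rho * x                        # eq. (4)
        variances[i] = (1 - rho) * variances[i] + rho * (x - means[i]) ** 2  # eq. (5)
        unmatched = np.arange(len(weights)) != i
        weights[unmatched] *= (1 - alpha)              # eq. (6); eq. (3) keeps w_i
    else:
        i = int(np.argmin(weights))                    # replace least likely component
        means[i], variances[i], weights[i] = x, 900.0, 0.01
    weights /= weights.sum()                           # renormalize the weights
    return weights, means, variances

weights = np.array([0.5, 0.3, 0.2])
means = np.array([10.0, 50.0, 200.0])
variances = np.full(3, 25.0)
update_mog(weights, means, variances, x=10.0)
```

A sample that matches the first component leaves its mean in place while the weights of the unmatched components decay before renormalization.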

In [4], three background models are simultaneously kept: a primary, a secondary, and an old background. They are updated as follows:

1. The primary background is updated as

       B_{t+1} = \alpha I_t + (1 - \alpha) B_t    (9)

   where the pixel is not marked as foreground, and is updated as

       B_{t+1} = \alpha I_t + (1 - \alpha) B_t    (10)

   where the pixel is marked as foreground. In the above, \alpha was selected from within the range [0.0000610351 ... 0.25], with the default value

2. The secondary background is updated as

       B_{t+1} = \alpha I_t + (1 - \alpha) B_t    (11)

   at pixels where the incoming image is not significantly different from the current value of the secondary background, where \alpha is as for the primary background. At pixels where there is a significant difference, the secondary background is updated by

       B_{t+1} = I_t    (12)

   What constitutes a significant difference is not defined. It is also noted that there were problems with this background estimator.

3. The old background is a copy of the incoming image from 9000 to 18000 frames ago. It is not updated.

They claim that adding more than one secondary background actually reduces the sensitivity of the system because there is a greater range of values that a pixel can take on without being marked as foreground.

In [21], two background estimates are used which are based on a linear predictive model:

    B_t = \sum_{k=1}^{p} a_k I_{t-k}   and   \hat{B}_t = \sum_{k=1}^{p} a_k B_{t-k}    (13)

where the coefficients a_k are re-estimated after each frame is received, so as to minimize the prediction error. The second estimator \hat{B}_t is introduced because the primary estimate B_t can become corrupted if a part of the foreground covers a pixel for a significant period of time. As well as the above, an estimate of the prediction error variance is also maintained:

    E(e_t^2) = E(I_t^2) + \sum_{k=1}^{p} a_k E(I_t I_{t-k}).    (14)

2.2 Foreground Detection

The method used for detecting foreground pixels is highly dependent on the background model. But, in almost all cases, pixels are classified as foreground if the observed intensity in the new image is substantially different from the background model, e.g.,

    |I_t - B_t| > \tau    (15)

where \tau is a "predefined" threshold, or a factor dependent on the variance estimate also maintained within the background model. In systems that maintain multiple models for the background, a pixel must be substantially different from all background values to be classified as foreground. In [4], an adaptive

In the mixture of Gaussians approach, the foreground is detected as follows. All components in the mixture are sorted into the order of decreasing \omega_{i,t} / \|\sigma_{i,t}\|, so that higher importance gets placed on components with the most evidence and lowest variance, which are assumed to be the background. Then let

    B = \arg\min_b \left( \sum_{i=1}^{b} \omega_{i,t} \Big/ \sum_{i=1}^{K} \omega_{i,t} > T \right)    (16)

for some threshold T. Components 1, ..., B are then assumed to be background, and if I_t does not match one of these components, the pixel is marked as foreground.
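The prefix selection of equation (16) can be sketched as follows. The threshold T = 0.7 and the use of a scalar \sigma per component are illustrative assumptions.

```python
import numpy as np

def background_components(weights, sigmas, T=0.7):
    """Select the mixture components assumed to model the background (eq. 16).

    Components are sorted by decreasing omega/||sigma|| (most evidence,
    least variance first); the smallest prefix whose normalized weight
    mass exceeds T is taken as the background. T = 0.7 is illustrative.
    """
    order = np.argsort(-(weights / sigmas))
    mass = np.cumsum(weights[order]) / weights.sum()
    b = int(np.searchsorted(mass, T, side="right")) + 1  # smallest prefix with mass > T
    return order[:b]

# Two stable components carry 90% of the weight and become the background;
# the transient third component is left to be flagged as foreground.
bg = background_components(np.array([0.6, 0.3, 0.1]), np.array([1.0, 1.0, 1.0]))
```

A pixel whose value matches none of the returned components would then be marked as foreground.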

In [21], which uses two predictions for the background value, a pixel is marked as foreground if the observed value in the new image differs from both estimates by more than a threshold \tau = 4 \sqrt{E(e_t^2)}, where E(e_t^2) is calculated in (14).

2.3 Error Recovery Strategies

The section above on background models describes how the various methods are designed to handle gradual changes in illumination levels. Sudden changes in illumination must be detected and the model parameters changed in response. In [21], the strategy used is to measure the proportion of the image that is classified as foreground; if this is more than 70%, then the current model parameters are abandoned and a re-initialization phase is invoked. In [12], if a pixel is marked as foreground for most of the last couple of frames (the values of `most' and `couple' are not given), then the background is updated as B_{t+1} = I_t. This correction is designed to compensate for sudden illumination changes and the appearance of static new objects.
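The 70% rule from [21] amounts to a one-line global check per frame; a minimal sketch follows, in which the mask representation and function name are assumptions.

```python
def needs_reinitialization(foreground_mask, max_fraction=0.7):
    """Global-failure test from [21]: if more than 70% of the image is
    classified as foreground, the current model parameters are abandoned
    and a re-initialization phase is invoked.

    foreground_mask is a list of rows of 0/1 values.
    """
    total = sum(len(row) for row in foreground_mask)
    foreground = sum(sum(row) for row in foreground_mask)
    return foreground / total > max_fraction
```

Such a check is cheap enough to run every frame alongside the normal foreground detection.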

Another common problem is regular changes in illumination level, such as those from moving foliage. Multiple model schemes such as the mixture of Gaussians explicitly handle such problems. However, they present a problem for single model schemes. In [12], to compensate for these problems, a pixel is masked out from inclusion in the foreground if it changes state from foreground to background frequently.

2.4 Foreground Classification

The purpose of foreground classification is to distinguish objects of interest such as pedestrians from illumination artifacts such as shadows and highlights. This is usually accomplished by evaluating the color distribution within a foreground region [13]:

- A shadow region has similar hue and saturation to the background but a lower intensity (see Fig. 1).
- A highlight region has similar hue and saturation to the background but a higher intensity.
- Consequently, an object region has a different hue and saturation to the background.

Fig. 1. Value distribution in the Blue channel of a region of the background: left - with shadow, right - without. This shows a typical situation: shadow means reduced intensity and reduced `dynamics'.
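The hue/saturation rules above can be sketched as a per-pixel test. The tolerance values below are illustrative assumptions, not taken from [13].

```python
def classify_foreground_pixel(hsv_pixel, hsv_background,
                              hue_tol=10.0, sat_tol=0.1, val_tol=0.05):
    """Hue/saturation rules for a pixel already known to differ from the
    background: similar hue and saturation with lower intensity means
    shadow, with higher intensity means highlight, otherwise object.

    Pixels are (hue in degrees, saturation in [0,1], value in [0,1]);
    all tolerance values are illustrative.
    """
    dh = abs(hsv_pixel[0] - hsv_background[0])
    ds = abs(hsv_pixel[1] - hsv_background[1])
    dv = hsv_pixel[2] - hsv_background[2]
    if dh <= hue_tol and ds <= sat_tol:
        if dv < -val_tol:
            return "shadow"      # same chromaticity, lower intensity
        if dv > val_tol:
            return "highlight"   # same chromaticity, higher intensity
    return "object"              # hue or saturation differs
```

In practice the decision would be made over a whole foreground region rather than a single pixel, as the text describes.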

3 An Approach Incorporating Foreground Classification

The section before has indicated that there are many background subtraction methods. Most of them are only concerned about detecting pixels where the background is no longer visible, without attempting to understand what is `covering' such pixels. Because background subtraction is based on observed intensities, such techniques normally misclassify illumination artifacts such as shadows and highlights. If these are considered as objects of interest in a surveillance application, then this results in three types of errors:

- A false alarm due to a visible shadow but not an object (that generated it).
- Shadows increase the apparent size of an object, perhaps resulting in a false labelling as multiple objects.
- Missing separation of objects because a shadow bridges the visible area between them.

In this section, a method is described which addresses these problems. In the following section its performance is evaluated.

3.1 The Background Model

The background observed at each pixel is modelled by a multidimensional Gaussian distribution in RGB space. This requires estimates of the mean and standard deviation. The estimates of these parameters are updated with each new frame, but only at pixels that have been classified as background. The updating is done with the following linear filter:

    \mu = (1 - \alpha) \mu + \alpha I_i

    \sigma^2 = (1 - \alpha) \sigma^2 + \alpha (I_i - \mu)^2

Here, \alpha is the predefined learning rate.
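A vectorized sketch of this masked update over an RGB frame follows. The learning rate \alpha = 0.02 and the use of the freshly updated mean inside the variance update are assumptions (the paper leaves that choice implicit).

```python
import numpy as np

def update_model(mean, var, frame, background_mask, alpha=0.02):
    """Linear-filter update of the per-pixel RGB Gaussian background model,
    applied only at pixels classified as background. alpha = 0.02 is an
    illustrative learning rate.

    mean, var, frame have shape (H, W, 3); background_mask has shape (H, W).
    """
    m = background_mask[..., None]                     # broadcast mask over RGB
    new_mean = np.where(m, (1 - alpha) * mean + alpha * frame, mean)
    new_var = np.where(m, (1 - alpha) * var + alpha * (frame - new_mean) ** 2, var)
    return new_mean, new_var

mean = np.zeros((1, 2, 3))
var = np.ones((1, 2, 3))
frame = np.full((1, 2, 3), 10.0)
mask = np.array([[True, False]])                       # only the first pixel is background
mean2, var2 = update_model(mean, var, frame, mask)
```

The second pixel is masked out (it was classified as foreground), so its model is left untouched.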

Initially the distributions of the background are not known. We initialize all unknown variables in the following way:

Standard deviation = a high value (the actual value will be obtained during updating).

Brightness distortion = a high value (the actual value will be obtained during updating).

3.2 Pixel Classification

Each pixel in a new image is classified as one of background, object, shadow, or highlight. Pixels are classified as background if the observed color value is within the ellipsoid of probability concentration (the region of minimum volume, centered around the background mean, that contains 99% of the probability mass for the Gaussian distribution).

The distinction between objects, shadows, and highlights among the pixels not classified as background is made using the brightness distortion [13]. This detects shifts in RGB space between a current pixel and the corresponding background pixel. Let E_i represent the expected background pixel's RGB value, I_i the current pixel's RGB value in the current frame, and \sigma_i its standard deviation, with

    E_i = [E_r(i), E_g(i), E_b(i)],   I_i = [I_r(i), I_g(i), I_b(i)],   and   \sigma_i = [\sigma_r(i), \sigma_g(i), \sigma_b(i)].

The brightness distortion is a scalar value that brings the observed color close to the expected chromaticity line (the line from the origin to E_i). It is obtained by minimizing

    \Phi(\alpha_i) = (I_i - \alpha_i E_i)^2,

where \alpha_i represents the pixel's strength of brightness with respect to the expected value: \alpha_i is 1 if the brightness of the given pixel in the current image is the same as in the reference image, less than 1 if it is darker, and greater than 1 if it is brighter than the expected brightness. If the current pixel's intensity is greater than the background intensity and the current pixel's brightness distortion value is less than the corresponding background pixel's brightness distortion value, this pixel is marked as highlight. If the current pixel's intensity is less than the background intensity and the current pixel's brightness distortion value is less than the corresponding background pixel's brightness distortion value, this pixel is marked as shadow. Otherwise the pixel is marked as an object of interest. In the figures below, the produced output is marked as follows: moving objects (people) in light green, and shadow or highlight in dark red.
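Minimizing \Phi(\alpha_i) = (I_i - \alpha_i E_i)^2 has the closed form \alpha_i = (I_i \cdot E_i) / (E_i \cdot E_i). The sketch below uses this together with a simplified darker/brighter split; the tolerance is an assumption, and the paper's full rule additionally compares against the background pixel's own brightness distortion value.

```python
import numpy as np

def brightness_distortion(I, E):
    """The scalar alpha_i minimizing (I - alpha*E)^2, i.e. the projection
    of the observed color I onto the chromaticity line through E:
    alpha_i = (I . E) / (E . E)."""
    I, E = np.asarray(I, dtype=float), np.asarray(E, dtype=float)
    return float(np.dot(I, E) / np.dot(E, E))

def classify_pixel(I, E, tol=0.05):
    """Simplified split for a pixel already known not to be background:
    alpha < 1 means darker (shadow), alpha > 1 means brighter (highlight);
    alpha near 1 means the difference is chromatic, i.e. an object.
    tol is an illustrative tolerance band."""
    a = brightness_distortion(I, E)
    if a < 1.0 - tol:
        return "shadow"
    if a > 1.0 + tol:
        return "highlight"
    return "object"
```

For example, a pixel at half the expected background brightness gives \alpha_i = 0.5 and is classified as shadow.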

4 Our Results

The algorithm as discussed above has been tested for indoor lab scenes. Figure 2 illustrates the effects of different lighting on a static scene (i.e., no moving people). It shows how different lighting affects our test area. Figure 3 shows that the person is marked as light green and the shadow is marked as dark red successfully.

Fig. 2. Different background lighting: left - uniform lighting with all curtains closed; right - lighting coming just from the window.

Fig. 3. Background subtraction for uniform lighting: left - the original frame; right - the result after background subtraction.

Figure 4 illustrates that the algorithm still works fine after adding one extra light. Figure 5 shows how the algorithm is able to cope with global illumination changes after adding two extra lights. Figure 6 shows a different lighting situation. The algorithm can not only detect moving people and shadows accurately, it can also cope with highlights. Figure 7 illustrates the highlight abilities after adding an extra light. It still works fairly well and robustly for moving-people subtraction and shadow detection. Figure 8 shows a result after initialization by using static frames without moving people. We obtain good foreground subtraction and shadow detection results in real time. Figure 9 shows that after static initialization, the background model can be updated adaptively. The algorithm updates a new background model while frames with moving people are captured. Figure 10 illustrates the robustness of background updating: even when we capture a very different frame with moving people, this algorithm can still refine the background model.

Fig. 4. Background subtraction for uniform lighting plus one extra light: left - the original frame; right - the result after background subtraction.

Fig. 5. Background subtraction for uniform lighting plus two extra lights: left - the original frame; right - the result after background subtraction.

Fig. 6. Background subtraction for lighting coming from the window plus one extra light: left - the original frame; right - the result after background subtraction.

Fig. 7. Background subtraction for lighting coming from the window plus two extra lights: left - the original frame; right - the result after background subtraction.

Fig. 8. Here, all frames before the current frame have been static (i.e., without moving people): left - the current frame; right - the result after background subtraction.

Fig. 9. Here, starting with static frames (i.e., without moving people) we continue with 5 frames showing moving people until the current frame: left - the current frame; right - the result after background subtraction.

Fig. 10. Same situation as in the figure before, but different people are walking in the scene before the current frame is taken: left - the current frame; right - the result after background subtraction.

5 Conclusion and Further Work

This paper presents and discusses a background subtraction approach which can detect moving people on a background while also allowing for shadows and highlights. The background is adaptively updated. The approach has been tested in RGB space. The method is efficient with respect to computation time and storage: it only requires storing a background model and brightness distortion values. This allows real-time video processing.

The method has a number of limitations that will be addressed in future work. A static background without moving people is needed during the initialization phase; otherwise it takes a substantial amount of time to reliably classify pixels as foreground. Very dark parts of people are still classified as either background or shadows. Another limitation is that shadows on very dark backgrounds, or several shadows added together, will not be detected effectively. Bright highlights are also a problem because of saturation in the camera sensor.

References

1. Y. Bar-Shalom, T. E. Fortmann: Tracking and Data Association. Academic Press, New York (1988).
2. D. Beymer, P. McLauchlan, B. Coifman, J. Malik: A real-time computer vision
3. A. Blake, M. Isard: Active Contours. Springer, Berlin (1998).
4. T. E. Boult, R. Micheals, X. Gao, P. Lewis, C. Power, W. Yin, A. Erkan: Frame-rate omnidirectional surveillance and tracking of camouflaged and occluded targets. Second IEEE Workshop on Visual Surveillance (1999) 48-55.
5. R. Cutler, L. Davis: View-based detection and analysis of periodic motion. Internat. Conf. Pattern Recognition (1998) 495-500.
6. A. Elgammal, D. Harwood, L. A. Davis: Non-parametric model for background subtraction. ICCV'99 (1999) ??-??.
7. W. E. L. Grimson, C. Stauffer, R. Romano, L. Lee: Using adaptive tracking to classify and monitor activities in a site. Computer Vision and Pattern Recognition (1998) 1-8.
8. W. E. L. Grimson, C. Stauffer: Adaptive background mixture models for real-time tracking. CVPR'99 (1999) ??-??.
9. I. Haritaoglu, D. Harwood, L. S. Davis: W4: Who? When? Where? What? A real-time system for detecting and tracking people. 3rd Face and Gesture Recognition Conf. (1998) 222-227.
10. I. Haritaoglu, R. Cutler, D. Harwood, L. S. Davis: Backpack: Detection of people carrying objects using silhouettes. Internat. Conf. Computer Vision (1999) 102-107.
11. I. Haritaoglu, D. Harwood, L. S. Davis: Hydra: Multiple people detection and tracking using silhouettes. 2nd IEEE Workshop on Visual Surveillance (1999) 6-13.
12. J. Heikkila, O. Silven: A real-time system for monitoring of cyclists and pedestrians. 2nd IEEE Workshop on Visual Surveillance (1999) 74-81.
13. T. Horprasert, D. Harwood, L. A. Davis: A statistical approach for real-time robust background subtraction and shadow detection. ICCV'99 Frame Rate Workshop (1999) 1-19.
14. Y. Ivanov, C. Stauffer, A. Bobick, W. E. L. Grimson: Video surveillance of interactions. 2nd IEEE Workshop on Visual Surveillance (1999) 82-90.
15. M.-S. Lee: Detecting people in cluttered indoor scenes. CVPR'00 (2000) ??-??.
16. S. J. McKenna, Y. Raja, S. Gong: Tracking color objects using adaptive mixture models. Image and Vision Computing (1999) 780-785.
17. J. Orwell, P. Remagnino, G. A. Jones: Multi-camera color tracking. 2nd IEEE Workshop on Visual Surveillance (1999) 14-24.
18. R. Rosales, S. Sclaroff: 3D trajectory recovery for tracking multiple objects and trajectory guided recognition of actions. Computer Vision and Pattern Recognition (1999) 117-123.
19. P. L. Rosin: Thresholding for change detection. ICCV'98 (1998) 274-279.
20. C. Stauffer, W. E. L. Grimson: Adaptive background mixture models for real-time tracking. Computer Vision and Pattern Recognition (1999) 246-252.
21. K. Toyama, J. Krumm, B. Brumitt, B. Meyers: Wallflower: Principles and practice of background maintenance. Internat. Conf. Computer Vision (1999) 255-261.
22. C. Wren, A. Azarbayejani, T. Darrell, A. Pentland: Pfinder: Real-time tracking of the human body. IEEE Trans. Pattern Analysis and Machine Intelligence (1997) 780-785.
