AN OPTIMISED CAMERA-OBJECT SETUP FOR 3D OBJECT RECOGNITION SYSTEM
M. Khusairi Osman, M. Yusoff Mashor, M. Rizal Arshad
Center for Electronic Intelligent System (CELIS), School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang, Malaysia.
Tel: +604-5937788 ext. 5742, Fax: +604-5941023, E-mail: [email protected]
Abstract- This paper proposes an effective method for the recognition and classification of 3D objects using the multiple views technique and a neuro-fuzzy system. First, a proper condition for the camera-object setup is investigated to select an optimal number of views and viewpoint condition. 2D intensity images taken from multiple sensors are used to model the 3D objects. In the processing stage, we propose to use moment invariants as the features for modeling 3D objects. Moments have been commonly used for 2D pattern recognition. However, we have proved that, with some adaptation to the multiple views technique, 2D moments are sufficient to model 3D objects. In addition, the simplicity of the 2D moment calculation reduces the processing time for feature extraction, hence increasing the system efficiency. In the recognition stage, we propose to use a neuro-fuzzy classifier called Multiple Adaptive Network based Fuzzy Inference System (MANFIS) for matching and classification. The proposed method has been tested using two groups of objects, polyhedral and free-form objects. Computer simulation results showed that the proposed method can be successfully applied to 3D object recognition.

Keywords- 3D object recognition, neuro-fuzzy system, moment invariants.

I. INTRODUCTION
A model-based object recognition system finds a correspondence between certain features of the input image and comparable features of the model [1]. Such a process involves extracting features from images and comparing them with a stored representation of the object. Recognizing 3D objects from images is a difficult problem, primarily because of the inherent loss of information in the projection from 3D to 2D [2]. In addition, the image of a 3D object depends on various factors such as the camera viewpoint and the viewing geometry, since handling 3D scenes allows additional degrees of freedom (DOF) for the orientation of the object in space [3]. Most model-based 3D object recognition systems consider the problem of recognizing objects from the image of a single view [4][5][6][7]. A single view may not be sufficient to recognize an object unambiguously, since only one side of an object can be seen from any given viewpoint [3]. Sometimes, two objects may have all views in common with respect to a given feature set, and may be distinguished only through a sequence of views [2]. In addition, because of the dependency on a single image view, this approach requires complex features to represent the object.

To overcome this problem, modeling 3D objects using the multiple views technique in a recognition task has been proposed by some researchers. This paper continues our previous work [9] on developing a system that applies the multiple views technique to the 3D object recognition task. In this work, a modification to the camera-object setup has been made. A suitable camera-object setup for multiple views is proposed for a better recognition rate.
In general, objects can be recognized not only by their shape, but also based on other visual cues such as color, texture, characteristic motion, their location relative to other objects in the scene, context information and expectation [29]. Our work focuses on the recognition of isolated objects using shape information. Some research on 3D object recognition limits the objects to polyhedral objects only [17][18][19][20][21] because of their shape simplicity compared to free-form objects. However, our proposed system is not limited to polyhedral objects, but also considers objects with free-form shapes.

Due to the inherent loss of information in the 3D to 2D imaging process, an effective representation of the properties of a 3D object should be considered. We choose 2D moments as the features for 3D object modeling. Although moments are commonly applied to 2D object or pattern recognition, an adaptation to the multiple views technique enables them to be used in 3D object modeling. The simplicity of the 2D moment calculation reduces the processing time, hence increasing the system efficiency.

Recently, many researchers have focused on applying neural networks to 3D object recognition. Compared with conventional 3D object recognition, neural networks provide a more general and parallel implementation paradigm [8]. In this work, we propose to use a neuro-fuzzy classifier called Multiple Adaptive Network based Fuzzy Inference System (MANFIS). A neuro-fuzzy classifier combines a fuzzy reasoning system and neural networks into an integrated functional model [33]. The integrated system possesses the advantages of both neural networks and fuzzy systems. In the recognition stage, using moments as inputs, MANFIS recognizes an input object by matching input and model features.

II. RELATED RESEARCH
Most 3D object recognition systems use a model-based approach [8]. In this section, some related works on 3D object recognition based on the model-based approach are briefly discussed.

Earlier works on 3D object recognition were influenced by Marr's philosophy [10]. Marr claims that to recognize 3D objects, one must have enough 3D information about the object to be recognized. Based on this, most early approaches attempt to describe the full 3D shape before performing the recognition task. Some examples are the wire-frame and surface-edge-vertex (SEV) representations [4].

A wire-frame model consists of object edges. It represents an object using its possible edge junctions. Jong and Buurman [12] used a stereo vision system to acquire 3D wire-frames of polyhedral objects consisting of straight lines. The system acquires images of objects in certain stable conditions using stereo vision, and combines the two observations in the learning stage. This system does not require exact knowledge about poses. However, recognition is limited to polyhedral objects only. The SEV representation is a large data structure which contains a list of the edges and vertices of an object and some form of topological relationship [11]. Flynn and Jain [7] developed a system called BONSAI which identifies and localizes 3D objects in range images of one or more parts that have been designed on a computer-aided design (CAD) system. Recognition is performed by a constrained search of the interpretation tree, using unary and binary constraints derived automatically from the CAD models to prune the search space.

The 3DPO (3D Part Orientation) system was developed for recognizing and locating 3D objects in range images [22]. This algorithm uses a CAD model with simple features such as type and size. The CAD model describes edges, surfaces, vertices and their relationships. The limitation of this algorithm is that the CAD model requires a complex data structure and user intervention.
In contrast to methods that rely on a predefined geometry model for recognition, view-based methods have been proposed by some researchers. In the view-based technique, a 3D object is described using a set of 2D characteristic views or aspects. Poggio and Edelman [23] showed that 3D objects can be recognized from the raw intensity values in 2D images, using a network of generalized radial basis functions. They demonstrated that the full 3D structure of an object can be estimated if enough 2D views of the object are provided. Murase and Nayar [24] developed a parametric eigenspace method to recognize 3D objects directly from their appearance. Eigenvectors are computed from a set of images in which the object appears in different poses. An important advantage of this method is the ability to handle the combined effects of shape, pose, reflection properties and illumination. One of the main disadvantages of the view-based technique is the inherent loss of information in the projection from the 3D object to the 2D image. Furthermore, the image of a 3D object depends on factors such as the camera viewpoint and the viewing geometry. A single view-based approach may not be applicable for 3D object recognition, since only one side of an object can be seen from any given viewpoint [3]. To overcome this, multiple-view techniques have been proposed by several researchers [17][25][26][28][29].
Recently, neural networks have been used to solve the 3D object recognition problem. Yuan and Niemann [27] developed an appearance based neural image processing algorithm for recognizing 3D objects with arbitrary pose in 2D images. A wavelet transform is used to extract compact features for object representation, and a feed-forward network using the resilient backpropagation training algorithm is used to train the extracted features. Kawaguchi and Setoguchi [28] proposed a new algorithm for 3D object recognition based on Hopfield nets and the multiple view approach. The algorithm computes the surface matching score between the input image and the object model. The object with the smallest matching error is considered the best matched model. Ham and Park [8] proposed a hybrid hidden Markov model and neural network (HMM-NN) to recognize 3D objects from range images. 3D features such as surface type, moments, surface area and line length are extracted in the image processing step. Features are trained using the HMM, and a neural network is used as a post-processing step to increase the recognition rate.

III. CAMERA-OBJECT SETUP
In this section, the proposed methodology for the camera-object setup is discussed. Each object to be recognized must be placed in its stable condition at the centre of the turntable. The turntable is a circular horizontal platform that can be rotated 360 degrees. A, B, C and D represent the coordinates where the cameras are placed around the turntable. A, B and C are located on the same horizontal plane, but differ by 45° from each other. Point D is perpendicular to the turntable. Figure 1 shows the locations of the points and the object. Since all points have the same distance from the centre of the turntable, all cameras must have the same focal lengths. For feature stability¹, the cameras at points A, B and C are proposed to be fixed at 45° from the perpendicular view. The camera at point D is fixed at the top of the object. Figure 2 shows how all these cameras are fixed. Since we have four cameras at different locations, we consider eight possible camera-object setup conditions. Each condition is investigated to find the most effective condition for our system. Table 1 describes these conditions.

¹ A small change in shape should produce a small change in description. This will simplify the problem of comparing shapes.
Figure 1: Image acquisition set-up

Figure 2: Camera positions for points A, B, C and D

Table 1: Possible conditions for the camera setup

Condition   Camera position (at point)   No. of cameras used
1           A                            1
2           A & B                        2
3           A & C                        2
4           A & D                        2
5           A, B & C                     3
6           A, B & D                     3
7           A, C & D                     3
8           A, B, C & D                  4
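To make the viewpoint geometry concrete, the short sketch below computes illustrative 3D coordinates for cameras A-D around the turntable centre, assuming a unit camera-to-centre distance; the 45° azimuth spacing and 45° tilt follow the description above, while the actual mounting distance is not specified in the paper.

```python
import math

def camera_positions(distance=1.0):
    """Illustrative 3D coordinates (in turntable units) for cameras A-D.

    Assumes the turntable centre is the origin, that A, B and C share one
    horizontal arc at 45-degree azimuth steps and are tilted 45 degrees
    from the vertical, and that D looks straight down from above.
    """
    positions = {}
    tilt = math.radians(45.0)                     # 45 deg from perpendicular
    for name, azimuth_deg in (("A", 0.0), ("B", 45.0), ("C", 90.0)):
        az = math.radians(azimuth_deg)
        r = distance * math.sin(tilt)             # horizontal offset from centre
        z = distance * math.cos(tilt)             # height above the table
        positions[name] = (r * math.cos(az), r * math.sin(az), z)
    positions["D"] = (0.0, 0.0, distance)         # directly overhead
    return positions

for name, xyz in camera_positions().items():
    print(name, tuple(round(c, 3) for c in xyz))
```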
After an object of interest is placed at the centre of the turntable, the object's images are acquired. Then, the object is rotated by 5° and the same process is repeated; each rotation turns the object a further 5°, and so on until a full 360° rotation is complete. Hence, for each object, we have 72 image sets. These images are partitioned into two groups: 36 image sets for training data and 36 image sets for testing data. For the training data, we use the images acquired at the 0°, 10°, 20°, ..., 350° conditions, and the rest of the images (those at the 5°, 15°, 25°, ..., 355° conditions) are used for testing. The training data set is used to build the 3D object model in the recognition stage.
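As a small illustration of this acquisition protocol, the snippet below enumerates the training and testing rotation angles and tags each captured view; the object name and the camera tuple (condition 7 of Table 1) are hypothetical placeholders.

```python
# Training poses: 0, 10, ..., 350 degrees; testing poses: 5, 15, ..., 355 degrees.
TRAIN_ANGLES = list(range(0, 360, 10))    # 36 image sets per object
TEST_ANGLES = list(range(5, 360, 10))     # 36 image sets per object

def image_set_ids(object_name, cameras=("A", "C", "D")):
    """Enumerate (object, angle, camera) identifiers for one object.

    The camera tuple is illustrative (condition 7 from Table 1); swap in
    whichever setup condition is being evaluated.
    """
    train = [(object_name, angle, cam) for angle in TRAIN_ANGLES for cam in cameras]
    test = [(object_name, angle, cam) for angle in TEST_ANGLES for cam in cameras]
    return train, test

train, test = image_set_ids("cylinder")
print(len(TRAIN_ANGLES), len(TEST_ANGLES), len(train), len(test))
```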
IV. FEATURES EXTRACTION

Captured images are then digitized by the DT3155 frame grabber from Data Translation Inc. and sent to the pre-processing and feature extraction stages. In the pre-processing stage, images are thresholded automatically using the iterative thresholding method [13][14]. This method leads to a good separation between object and background in several applications [30].
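A minimal sketch of the iterative threshold selection of [13][14] is given below; it re-implements the published idea (start from the global mean and move the threshold to the midpoint of the two class means until it stops changing) and is not the authors' exact code.

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Iterative (Ridler-Calvard style) threshold selection [13][14]."""
    img = np.asarray(image, dtype=np.float64)
    t = img.mean()                          # initial guess: global mean
    while True:
        fg = img[img > t]                   # tentative object pixels
        bg = img[img <= t]                  # tentative background pixels
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:            # threshold has stopped moving
            t = t_new
            break
        t = t_new
    return t

# Example: segment a synthetic 8-bit-like image into object and background.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(60, 10, (128, 128))                # dark background
    img[32:96, 32:96] = rng.normal(180, 10, (64, 64))   # bright object
    t = iterative_threshold(img)
    binary = (img > t).astype(np.uint8)
    print(f"threshold = {t:.1f}, object pixels = {int(binary.sum())}")
```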
In the feature extraction stage, we choose Hu's moments [15] as the features for 3D modeling. Although Hu's moments are commonly and widely used for 2D object recognition, we prove that, with some adaptation to the multiple views technique, 2D moments are sufficient to model 3D objects.
In order to understand how to utilize the moment invariant method, let f(i, j) be a digital image with i = 1, 2, 3, ..., M and j = 1, 2, 3, ..., N. The two-dimensional moments and central moments of order (p + q) of f(i, j) are defined as:

    m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} i^{p} j^{q} f(i, j)    (1.1)

    \mu_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} (i - \bar{i})^{p} (j - \bar{j})^{q} f(i, j)    (1.2)

where

    \bar{i} = m_{10} / m_{00}, \quad \bar{j} = m_{01} / m_{00}    (2)

From the second and third order moments, a set of seven moments, invariant to translation, rotation and scale, was derived by Hu as follows:

    \phi_{1} = \eta_{20} + \eta_{02}    (3.1)

    \phi_{2} = (\eta_{20} - \eta_{02})^{2} + 4\eta_{11}^{2}    (3.2)

    \phi_{3} = (\eta_{30} - 3\eta_{12})^{2} + (3\eta_{21} - \eta_{03})^{2}    (3.3)

    \phi_{4} = (\eta_{30} + \eta_{12})^{2} + (\eta_{21} + \eta_{03})^{2}    (3.4)

    \phi_{5} = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}]    (3.5)

    \phi_{6} = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})    (3.6)

    \phi_{7} = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}]    (3.7)

where \eta_{pq} are the normalized central moments defined by

    \eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma}    (4.1)

    \gamma = (p + q)/2 + 1, \quad p + q = 2, 3, 4, ...    (4.2)
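As a sanity check on the definitions above, here is a minimal NumPy transcription of Equations (1.1)-(4.2) and the first invariant (3.1); the 1-based indexing follows those definitions, while the three placeholder views and the choice of cameras are illustrative assumptions rather than the paper's actual acquisition pipeline.

```python
import numpy as np

def raw_moment(img, p, q):
    """m_pq as in Equation (1.1): sum of i^p * j^q * f(i, j), with 1-based indices."""
    M, N = img.shape
    i = np.arange(1, M + 1, dtype=np.float64).reshape(-1, 1)
    j = np.arange(1, N + 1, dtype=np.float64).reshape(1, -1)
    return np.sum((i ** p) * (j ** q) * img)

def central_moment(img, p, q):
    """mu_pq as in Equation (1.2), centred on the centroid from Equation (2)."""
    m00 = raw_moment(img, 0, 0)
    ibar = raw_moment(img, 1, 0) / m00
    jbar = raw_moment(img, 0, 1) / m00
    M, N = img.shape
    i = np.arange(1, M + 1, dtype=np.float64).reshape(-1, 1)
    j = np.arange(1, N + 1, dtype=np.float64).reshape(1, -1)
    return np.sum(((i - ibar) ** p) * ((j - jbar) ** q) * img)

def eta(img, p, q):
    """Normalized central moment, Equations (4.1)-(4.2)."""
    gamma = (p + q) / 2.0 + 1.0
    return central_moment(img, p, q) / (central_moment(img, 0, 0) ** gamma)

def hu_phi1(img):
    """First Hu invariant, Equation (3.1); phi_2..phi_7 follow (3.2)-(3.7) the same way."""
    return eta(img, 2, 0) + eta(img, 0, 2)

# Feature vector for one object pose under a three-camera condition
# (e.g. condition 7 in Table 1): one phi_1 value per thresholded view.
views = [np.zeros((64, 64)) for _ in range(3)]   # placeholder binary views
for v in views:
    v[16:48, 20:44] = 1.0                        # a simple rectangular silhouette
features = [hu_phi1(v) for v in views]
print(features)
```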
V. RECOGNITION
In the recognition stage, we propose to use a neuro-fuzzy classifier named MANFIS (Multiple Adaptive Network based Fuzzy Inference System). MANFIS contains a number of ANFIS networks [16] arranged in a parallel combination to produce a network with multiple outputs, since ANFIS is a single-output network. Figure 3 shows an example MANFIS network with three inputs, x1, x2, x3, and eleven outputs, f1, f2, f3, ..., f11. For our recognition purpose, the number of inputs depends on the number of cameras used (see Table 1), while the number of outputs depends on the number of objects to be recognized. A hybrid learning algorithm which combines gradient descent and the least squares estimator is used for the learning procedure. In the recognition step, if the output node with the largest value is greater than 0.5, that node is set to 1; otherwise, the node is considered as 0. A detailed discussion of the algorithm and learning of ANFIS can be found in [16][31], and a brief discussion of MANFIS can be found in our previous work in [9].
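Since MANFIS is simply several single-output networks run side by side, the parallel arrangement and the 0.5 decision rule can be sketched as follows. Each ANFIS network [16] is replaced here by a plain least-squares stand-in (so only the least-squares half of the hybrid learning rule survives), and the class count, feature dimension and toy data are assumptions for illustration only.

```python
import numpy as np

class LinearConsequent:
    """Stand-in for one single-output ANFIS network [16].

    A real implementation would also tune membership functions by gradient
    descent; here only the least-squares consequent fit is kept so that the
    parallel MANFIS arrangement and the decision rule stay visible.
    """
    def fit(self, X, y):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # affine consequent
        self.w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return Xb @ self.w

class MANFIS:
    """K single-output networks in parallel, one per object class."""
    def __init__(self, n_classes):
        self.nets = [LinearConsequent() for _ in range(n_classes)]

    def fit(self, X, labels):
        for k, net in enumerate(self.nets):
            net.fit(X, (labels == k).astype(float))     # one-vs-all targets
        return self

    def classify(self, X):
        outputs = np.column_stack([net.predict(X) for net in self.nets])
        best = outputs.argmax(axis=1)
        # Decision rule: the largest output is accepted only if it exceeds 0.5.
        accepted = outputs[np.arange(len(best)), best] > 0.5
        return np.where(accepted, best, -1)              # -1 = rejected

# Toy usage: 3-view phi_1-style features for 11 hypothetical object classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(220, 3)) + np.repeat(np.arange(11), 20)[:, None]
y = np.repeat(np.arange(11), 20)
model = MANFIS(n_classes=11).fit(X, y)
print((model.classify(X) == y).mean())
```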
Figure 3: MANFIS network with 11 outputs

VI. RESULTS AND DISCUSSION
We chose two types of objects in order to analyze our system's performance. Each type consists of eleven 3D objects. The first type, Type 1 objects, contains simple 3D shapes such as cylinders, boxes, trapezoids, spheres, etc. The second type, Type 2 objects, contains free-form objects. Figures 4 and 5 show these two types of objects.

In order to find the best camera-object position for our system, we conducted eight analyses using the different camera positions given in Table 1. In these cases, all MANFIS parameters are set to default values (number of membership functions, MF = 2; initial step size = 0.1). Hu's first moment, φ1 (Equation 3.1), is used as the feature. We chose the first type of object for this analysis. Table 2 summarizes the performance of the system for each condition.

Figure 4: Type 1 - simple 3D shapes

As we can see from the table, condition 1 gave the poorest recognition rates compared to the other conditions. In this case, recognition is performed through a single view. Since the features depend only on the single view, this method was insufficient to model 3D objects. Generally, adding more views increases the recognition rate up to a certain point. However, too many views, for example condition 8, decrease the recognition rate. So, an optimum number of views with suitable camera positions is required to achieve the best recognition rate. The system with camera condition 7 gives the highest recognition rate, with 95.71% for training and 95.20% for testing. Since condition 7 gives the best performance, we choose this condition for further analysis.
Table 3 and Table 4 show the system performance using the simple objects (Type 1) and the free-form objects (Type 2) for different orders of moments after some refinement (by selecting an appropriate initial step size and number of MFs). Our results show that a better recognition rate is achieved when using lower order moments compared to higher order moments. Generally, higher order moments are more sensitive to noise than lower order ones [32]. As a result, the feature stability decreases, and this factor reduces the recognition rate. For Type 1 objects, the maximum recognition rate is 100%, and for Type 2 objects it is 98.99%, both using Hu's first moment. Since the free-form objects have complex shapes, their recognition rate is lower than that of the polyhedral objects. However, in terms of overall performance, the recognition rate achieved using this method is better than that of our previous work in [9].
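The refinement step mentioned above (choosing the number of membership functions and the initial step size) can be organised as a small grid search. The candidate grids and the train_manfis/accuracy helpers below are hypothetical placeholders, not the authors' routines.

```python
# Hedged sketch of the parameter refinement: the candidate values and the
# train_manfis/accuracy callables are assumptions, not the paper's code.
def refine(train_data, val_data, train_manfis, accuracy):
    best = None
    for n_mf in (2, 3, 4, 5):                 # number of membership functions
        for step in (0.01, 0.1, 0.2, 0.3):    # initial step size
            model = train_manfis(train_data, n_mf=n_mf, step_size=step)
            score = accuracy(model, val_data)
            if best is None or score > best[0]:
                best = (score, n_mf, step)
    return best  # (best accuracy, chosen MF count, chosen step size)

# Tiny demo with dummy stand-ins so the sketch runs end to end.
if __name__ == "__main__":
    dummy_train = lambda data, n_mf, step_size: (n_mf, step_size)
    dummy_acc = lambda model, data: 1.0 - abs(model[1] - 0.1) - 0.01 * model[0]
    print(refine(None, None, dummy_train, dummy_acc))
```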
Table 2: System performance under different camera setup conditions

Condition   Training accuracy (%)   Testing accuracy (%)
1           42.68                   40.40
2           66.16                   66.92
3           64.14                   64.65
4           90.91                   90.91
5           88.64                   86.87
6           90.91                   91.41
7           95.71                   95.20
8           88.64                   88.13
Figure 5: Type 2 - free-form objects
Table 3: System performance for object Type 1 using different Hu's moments

Hu's moment   No. of MF   Initial step size   Training accuracy (%)   Testing accuracy (%)
φ1            2           0.01                100.00                  100.00
φ2            2           0.20                91.92                   88.38
φ3            3           0.10                99.75                   99.49
φ4            –           0.10                57.83                   50.76
φ5            2           0.10                70.96                   59.60
φ6            2           0.10                71.46                   59.85
φ7            2           0.10                60.35                   46.97
Table 4: System performance for object Type 2 using different Hu's moments

Hu's moment   No. of MF   Initial step size   Training accuracy (%)   Testing accuracy (%)
φ1            5           0.10                99.75                   98.99
φ2            –           0.30                98.23                   96.72
φ3            2           0.20                89.14                   –
φ4            2           0.10                76.52                   –
φ5            2           0.10                74.24                   72.22
φ6            2           0.10                73.99                   69.70
φ7            2           0.10                51.20                   45.96

VII. CONCLUSION AND FUTURE WORK

This paper proposes an efficient 3D object recognition system using the multiple views technique and a neuro-fuzzy system. A new method for camera-object positioning is proposed to increase the recognition rate. In the feature extraction stage, Hu's moments are used to model the 3D objects. Our proposed method also shows that, with some adaptation to the multiple views technique, Hu's moments are able to model the 3D objects well. The simplicity of the moment calculation proved that our proposed system does not require complex features for 3D representation, hence reducing the processing time in the feature extraction stage. In the recognition stage, we propose to use a neuro-fuzzy classifier called MANFIS. Computer simulation results for two types of objects show that this method can be successfully applied to 3D object recognition. Currently, we are using Hu's moments as the features. In the future, we will try other variations of moments, such as Zernike and Legendre moments, for comparison. Future work also aims to compare the MANFIS performance with other neural network types such as the multilayer perceptron (MLP) and radial basis function (RBF) networks.

REFERENCES
[1] Pope, A. R. (1994) Model Based Object Recognition: A Survey of Recent Research. Technical Report TR-91-01. University of British Columbia.
[2] Roy, S. D., Chaudhury, S. & Banerjee, S. (2003) Active Recognition Through Next View Planning: A Survey. Pattern Recognition (accepted for publication).
[3] Büker, U. and Hartmann, G. (1996) Knowledge-Based View Control of a Neural 3D Object Recognition System. Proceedings of the International Conference on Pattern Recognition. D:24-29.
[4] Besl, P. J. and Jain, A. C. (1985) Three-Dimensional Object Recognition. ACM Computing Surveys. 17:76-145.
[5] Eljen, L., Kraiss, K.-F. and Krumbeigel, D. (1997) Pixel Based 3D Object Recognition with Bidirectional Associative Memories. International Conference on Neural Networks. 3:1679-1684.
[6] Roh, K. S., You, B. J. and Kweon, I. S. (1998) 3D Object Recognition Using Projective Invariant Relationship by Single View. Proceedings of the IEEE International Conference on Robotics and Automation. 3394-3399.
[7] Flynn, P. J. & Jain, A. K. (1991) BONSAI: 3D Object Recognition Using Constrained Search. IEEE Transactions on Pattern Analysis and Machine Intelligence. 13(10):1066-1075.
[8] Ham, Y. K. and Park, R.-H. (1999) 3D Object Recognition in Range Images Using Hidden Markov Models and Neural Networks. Pattern Recognition. 32:729-742.
[9] Osman, M. K., Mashor, M. Y. & Arshad, M. R. (2003) Multi-View Technique for 3-D Object Recognition Using Neuro-Fuzzy System. AIAI 2003, 24th-25th June 2003, Kuala Lumpur, Malaysia, paper no. 6 [invited session].
[10] Marr, D. (1982) Vision. San Francisco: W. H. Freeman.
[11] Haralick, R. & Shapiro, L. (1993) Computer and Robot Vision. Vol. I, II. MA: Addison-Wesley.
[12] Jong, J. J. & Buurman, J. (1992) Learning 3D Object Descriptions from a Set of Stereo Vision Observations. International Conference on Pattern Recognition, ICPR'92. 1:768-771.
[13] Ridler, T. W. and Calvard, S. (1978) Picture Thresholding Using an Iterative Selection Method. IEEE Transactions on Systems, Man and Cybernetics. 8:630-632.
[14] Trussell, H. J. (1979) Comments on "Picture Thresholding Using an Iterative Selection Method". IEEE Transactions on Systems, Man and Cybernetics. 9(5):311.
[15] Hu, M. K. (1962) Visual Pattern Recognition by Moment Invariants. IRE Transactions on Information Theory. 8(2):179-187.
[16] Jang, J.-S. R. (1993) ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Transactions on Systems, Man and Cybernetics. 23(3):665-685.
[17] Farias, M. F. S. & de Carvalho, J. M. (1999) Multi-view Technique for 3-D Polyhedral Object Recognition Using Surface Representation. Revista Controle & Automação. 10(2):101-117.
[18] Park, K. & Cannon, D. J. (1996) Recognition and Localization of a 3D Polyhedral Object Using a Neural Network. Proceedings of the 1996 IEEE International Conference on Robotics and Automation. 3613-3618.
[19] Kawaguchi, T. & Baba, T. (1996) 3D Object Recognition Using a Genetic Algorithm. Circuits and Systems, ISCAS'96. 3:321-324.
[20] Wang, P. S. P. (1997) Parallel Matching of 3D Articulated Object Recognition. International Journal of Pattern Recognition and Artificial Intelligence. 13(4):431-444.
[21] Procter, S. (1998) Model Based Polyhedral Object Recognition Using Edge-Triple Features. PhD Thesis. UK: University of Surrey.
[22] Bolles, R. C. & Horaud, P. (1986) 3DPO: A Three-Dimensional Part Orientation System. International Journal of Robotics Research. 5(3):3-26.
[23] Poggio, T. & Edelman, S. (1990) A Network That Learns to Recognize 3D Objects. Nature. 343:263-266.
[24] Murase, H. & Nayar, S. K. (1995) Visual Learning and Recognition of 3-D Objects from Appearance. International Journal of Computer Vision. 14:5-24.
[25] Abbasi, S. & Mokhtarian, F. (2001) Affine-Similar Shape Retrieval: Application to Multiview 3D Object Recognition. IEEE Transactions on Image Processing. 10(1):131-139.
[26] Seibert, M. & Waxman, A. M. (1992) Adaptive 3-D Object Recognition from Multiple Views. IEEE Transactions on Pattern Analysis and Machine Intelligence. 14(2):107-124.
[27] Yuan, C. & Niemann, H. (2000) An Appearance Based Neural Image Processing Algorithm for 3-D Object Recognition. International Conference on Image Processing, ICIP 2000. 344-347.
[28] Kawaguchi, T. & Setoguchi, T. (1994) A Neural Network Approach for 3-D Object Recognition. IEEE International Symposium on Circuits and Systems, ISCAS 1994. 6:315-318.
[29] Ullman, S. (1998) Three-Dimensional Object Recognition Based on the Combination of Views. Cognition. 67:21-44.
[30] Klette, R. & Zamperoni, P. (1996) Handbook of Image Processing Operators. England: John Wiley & Sons.
[31] Jang, J.-S. R., Sun, C. T. & Mizutani, E. (1997) Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. New Jersey: Prentice Hall.
[32] Prokop, R. J. & Reeves, A. P. (1992) A Survey of Moment-Based Techniques for Unoccluded Object Representation and Recognition. CVGIP: Graphical Models and Image Processing. 54(5):438-460.
[33] Lin, C. T. & Lee, C. S. G. (1996) Neuro-Fuzzy Systems. Prentice Hall.