
LOCAL INTERPOLATION RADIAL BASIS FUNCTION NETWORKS

Vu Thi Thanh Thuy
College of Education - TNU

ABSTRACT

There is, hitherto, no efficient method to approximate multivariate functions or to classify patterns, especially for dynamic problems in which new training data are often added in real time. In order to construct an efficient method, this paper considers local interpolation RBF networks, in which the artificial neural network approach and instance-based learning are combined. In these networks, training data are clustered into relatively small sub-clusters, and on each sub-cluster an interpolation RBF network is trained using the algorithm recently proposed by the authors: a two-phase algorithm for training interpolation RBF networks using Gaussian basis functions, with complexity $O(N^2)$, where $N$ is the number of nodes. The training time of this new architecture is effectively short, and its generality is superior to that of global RBF networks. Furthermore, its universal approximation property is proven. Especially, this new architecture can be used efficiently for dynamic training.

Keywords: radial basis function; width parameters; output weights; contraction transformation; k-d tree; local interpolation RBF networks

INTRODUCTION

Artificial neural networks [1,2], such as MLP (Multi-Layered Perceptron) and RBF (Radial Basis Function) networks, and instance-based learning methods [3], such as k-nearest neighbour and locally weighted regression, are common methods for approximating multivariate functions and for classifying patterns. Neural networks require long training times when the training pattern size is large, or when new data are added in real-time applications. Instance-based learning methods rely on the information of new patterns; therefore, they cannot be trained in advance on the training data. Thus, these methods are not efficient in practice for real-time problems. Unfortunately, there are no known methods for these situations.

Among the above methods, RBF networks are considered as a bridge connecting MLP networks and instance-based learning methods (see chapter 8 of [3]), with the following advantages: 1. their learning time is much less than that of MLP networks; 2. although the influence of each radial function is local, network training does not depend on new patterns, as opposed to instance-based methods. When the pattern size is small, RBF interpolation networks are often used, wherein the interpolation nodes are employed as the centers of the radial basis functions [1,2,4,5]. In the case of large pattern size, the number of radial basis functions in use is less than the number of interpolation nodes. However, these networks have large errors, and it is difficult to find a suitable center for each basis function.

It is also difficult to re-train them when new data are added. Therefore, it is almost impossible to apply them to real-time problems.

Recently, a new efficient two-phase algorithm to train interpolation RBF networks was proposed in [4,5], named the HDH algorithm from now on. It has many advantages, such as a significant improvement of the network training time with small error, easy estimation of the error, and simple parallelizability. However, its training time is of order $O(N^2)$, where $N$ is the number of interpolation nodes, and this is a serious inconvenience for large $N$. In order to reduce the training time, it is natural to partition the interpolation domain into sub-domains by using a clustering algorithm based on the k-d tree (see [6,7,8]); the data in each sub-domain form a sub-cluster. The partition


must be done in such a way that the size of each sub-cluster is smaller than a predetermined constant M. Then, an interpolation RBF network is built for each sub-cluster. Such a network is called a local interpolation RBF network. The HDH algorithm applied to each sub-cluster performs very fast, and the simulation results show that the local network generality is better than the global one. When it comes to dynamic problems, i.e. when new data are added during the network training period, we then only need to reinforce the training on the clusters containing the new data. This significantly shortens the training time.

The rest of this paper is organized as follows.

In Section 2, the interpolation RBF network and the HDH algorithm are briefly introduced as the tools for building local networks. In Section 3, local interpolation RBF networks and their training algorithm are presented. Simulation results are shown in Section 4.

INTERPOLATION RBF NETWORK AND THE HDH TRAINING ALGORITHM

To be self-contained, this section briefly recounts the results obtained in [5].

The multivariate interpolation problem and the interpolation RBF network architecture

The multivariate interpolation problem

Consider a multivariate function $f: D(\subset R^n) \to R^m$ and the sample set $\{x^k, y^k\}_{k=1}^{N}$, $\{x^k\} \subset D$, such that $f(x^k) = y^k$ for $k = 1,\ldots,N$. Let $\varphi$ be a function of a known form satisfying:

$\varphi(x^k) = y^k, \quad k = 1,\ldots,N$  (1)

The points $x^k$ are called interpolation nodes, and the function $\varphi$ is called the interpolation function of $f$; it is used to approximate $f$ on the domain $D$. Powell proposed to exploit radial basis functions for the interpolation problem [9]. In the following, the Powell technique using the Gaussian radial function is introduced (for further details see [1,2,10,5]).


Interpolation technique based on radial basis functions

Without loss of generality, it is assumed that $m$ is equal to 1. The interpolation function $\varphi$ has the following form:

$\varphi(x) = \sum_{k=1}^{N} w_k \varphi_k(x) + w_0$  (2)

where

$\varphi_k(x) = e^{-\|x - x^k\|^2 / (2\sigma_k^2)}, \quad k = 1,\ldots,N$  (3)

and $\|u\|$ is the Euclidean norm of $u$. The point $x^k$ is called the center of the radial basis function $\varphi_k$, and $w_k$ and $\sigma_k$ are parameters such that $\varphi$ satisfies the interpolation conditions (1):

$\varphi(x^i) = \sum_{k=1}^{N} w_k \varphi_k(x^i) + w_0 = y^i, \quad i = 1,\ldots,N$  (4)

For each $k$, the parameter $\sigma_k$ (called the width parameter of the radial basis function) is used to control the width of the basis function $\varphi_k$; when $\|x - x^k\| > 3\sigma_k$, $\varphi_k(x)$ is almost negligible.

Consider the $N \times N$ matrix $\Phi = [\varphi_{k,i}]$, where

$\varphi_{k,i} = \varphi_k(x^i) = e^{-\|x^i - x^k\|^2 / (2\sigma_k^2)}$  (5)

with chosen parameters $\sigma_k$. If all nodes $x^k$ are pairwise different, then the matrix $\Phi$ is positive-definite. Therefore, with a given $w_0$, the solution $w_1,\ldots,w_N$ of (4) always exists and is unique.

In the case where the number of radial basis functions is less than $N$, their centers might not be interpolation nodes and (4) may not have any solution; the problem then turns into the best approximation of $f$ under some optimality criterion. Usually, the parameters $w_k$ and $\sigma_k$ are determined by the least mean square method (see [1]), which is not suitable for our situation. Furthermore, determining the optimum centers is still an open problem.
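To make the scheme above concrete, the following is a minimal NumPy sketch of the classical Gaussian RBF interpolation of (2)-(5), solving the linear system (4) by a direct solve; the function names and the fixed width values are illustrative assumptions, and the direct solve is exactly the step that the HDH algorithm later replaces.

```python
import numpy as np

def gaussian_rbf_interpolant(X, y, sigma):
    """Solve the interpolation system (4) for the Gaussian interpolant (2).

    X     : (N, n) array of interpolation nodes x^k
    y     : (N,) array of target values y^k
    sigma : (N,) array of width parameters sigma_k (given here, not trained)
    """
    w0 = y.mean()                         # bias chosen as the average, cf. (10)
    # Phi[i, k] = exp(-||x^i - x^k||^2 / (2 sigma_k^2)), cf. (5)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma[None, :] ** 2))
    w = np.linalg.solve(Phi, y - w0)      # direct solve of (4)

    def phi(x):                           # evaluate (2) at a new point x
        r2 = ((X - x) ** 2).sum(axis=-1)
        return w @ np.exp(-r2 / (2.0 * sigma ** 2)) + w0

    return phi

# Usage: the interpolation conditions (1) hold at the nodes.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 2))
y = X.sum(axis=1)
phi = gaussian_rbf_interpolant(X, y, sigma=np.full(20, 0.3))
print(abs(phi(X[0]) - y[0]))   # close to zero, up to the conditioning of Phi
```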


Interpolation RBF network architecture

An interpolation RBF neural network (hereafter referred to as an interpolation RBF network for short) is a 3-layer feedforward network used to interpolate an n-variable real function $f: D(\subset R^n) \to R^m$. It is composed of $n$ nodes of the input layer, represented by the input vector $x \in R^n$; there are $N$ neurons in the hidden layer, of which the $k$-th neuron's center is the interpolation node $x^k$ and its output is $\varphi_k(x)$; finally, the output layer contains $m$ neurons, which determine the interpolated values of $f(x)$. Given the fact that in the HDH algorithm each neuron of the output layer is trained independently when $m > 1$, we can assume $m = 1$ without loss of generality. There are different training methods for interpolation RBF networks, but as shown in [5], the HDH algorithm offers the best known performance. It will be used to train the local networks. The HDH algorithm is briefly presented in the following.

The HDH algorithm

In the first phase of the two-phase HDH algorithm, the radial parameters $\sigma_k$ are determined by balancing between the error and the convergence rate. In the second phase, the weight parameters $w_k$ are obtained by finding the fixed point of a given contraction transformation. Let us denote by $I$ the $N \times N$ identity matrix, and by $W = [w_1,\ldots,w_N]^T$ and $Z = [z_1,\ldots,z_N]^T$ two vectors in the $N$-dimensional space $R^N$, where

$z_k = y^k - w_0, \quad \forall k \le N$  (6)

and let

$\Psi = I - \Phi = [\psi_{k,j}]_{N \times N}$  (7)

where $\Phi$ is given in (5); then

$\psi_{k,j} = 0$ when $k = j$; $\psi_{k,j} = -e^{-\|x^j - x^k\|^2 / (2\sigma_k^2)}$ when $k \ne j$  (8)

Equation (4) can now be rewritten as

$W = \Psi W + Z$  (9)

$w_0$ in the expression (6) is chosen as the average of the $y^k$ values:

$w_0 = \frac{1}{N}\sum_{k=1}^{N} y^k$  (10)

Now, for each $k \le N$, let us define $q_k$ as the sum of the absolute values of the entries of the $k$-th row of $\Psi$:

$q_k = \sum_{j=1}^{N} |\psi_{k,j}|$  (11)

With a given error $\varepsilon$ and positive constants $q, \alpha < 1$, the algorithm computes the parameters $\sigma_k$ and $W^*$. In the first phase, for each $k \le N$, $\sigma_k$ is determined such that $q_k \le q$ while, if $\sigma_k$ is replaced by $\sigma_k/\alpha$, we have $q_k > q$. With these values, the norm $\|\Psi\|_*$ of the matrix $\Psi$ induced by (14) is less than $q$, so that an approximate solution $W^*$ of (9) can be found in the next phase by a simple iterative method. The algorithm is described in Figure 1 (for further details see [5]).

Procedure Training RBF network
Begin
  For k = 1 to N do
    Initialize σ_k;
    Determine σ_k such that q_k ≤ q and, if σ_k is replaced by σ_k/α, then q_k > q; // phase 1
  End for
  Find W* by a simple iterative method; // phase 2
End

Figure 1. Network training procedure

(4)

To find an approximate solution $W^*$ of equation (9) in phase 2, an iterative procedure is employed. It is described as follows:

1. Initialize $W^0 = Z$;
2. Compute $W^{t+1} = \Psi W^t + Z$;  (13)
3. If the ending condition is not satisfied yet, then set $W^t := W^{t+1}$ and return to step 2.

For each $N$-dimensional vector $u$,

$\|u\|_* = \sum_{j=1}^{N} |u_j|$  (14)

is a norm of $u$. The ending condition is chosen by the following expression:

$\frac{q}{1-q}\|W^{t+1} - W^t\|_* \le \varepsilon$  (15)

The above algorithm always ends after a finite number of steps, and the obtained solution satisfies the following inequality:

$\|W^t - W\|_* < \varepsilon$  (16)

Its complexity is $O((T + c)nN^2)$, where $c$ and $T$ are given constants [5]. The training time of phase 1 only depends on the number of interpolation nodes, and that of phase 2 depends on $\|Z\|_*$, but not on the function $f$ to be interpolated.
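Read as code, the two phases might look as follows. This is a schematic Python sketch of the procedure described above, not the implementation of [5]: in particular, the geometric shrinking of each σ_k by repeated multiplication with α is an assumption about how the phase-1 search could be realized.

```python
import numpy as np

def hdh_train(X, y, q=0.9, alpha=0.9, eps=1e-6):
    """Schematic two-phase HDH training (a sketch of the text above).

    Assumes pairwise distinct nodes X, as the interpolation problem requires.
    Returns the weights W*, the bias w0 of (10) and the trained widths sigma.
    """
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    off = ~np.eye(N, dtype=bool)

    def q_k(k, s):
        # Row sum of |Psi| for row k, cf. (8) and (11).
        return np.exp(-d2[k][off[k]] / (2.0 * s ** 2)).sum()

    # Phase 1: contract each sigma_k until q_k <= q
    # (so that sigma_k / alpha would still give q_k > q).
    sigma = np.full(N, np.sqrt(d2.max()) + 1.0)   # start wide, then contract
    for k in range(N):
        while q_k(k, sigma[k]) > q:
            sigma[k] *= alpha

    # Phase 2: fixed-point iteration (13) with the ending condition (15).
    Psi = np.where(off, -np.exp(-d2 / (2.0 * sigma[:, None] ** 2)), 0.0)
    w0 = y.mean()            # bias as in (10)
    Z = y - w0               # cf. (6)
    W = Z.copy()             # step 1: W^0 = Z
    while True:
        W_next = Psi @ W + Z                                  # step 2, eq. (13)
        if q / (1.0 - q) * np.abs(W_next - W).sum() <= eps:   # stop rule (15)
            return W_next, w0, sigma
        W = W_next

# Usage on the paper's first test function (illustrative sizes):
rng = np.random.default_rng(0)
X = rng.uniform([0, 0], [8, 10], size=(50, 2))
y = (X[:, 0] ** 2 + X[:, 0] * X[:, 1]) / 5.0 + 0.5
W, w0, sigma = hdh_train(X, y)
```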

LOCAL INTERPOLATION RBF NETWORK

Interpolation RBF networks trained by the above iterative algorithm have several advantages: a significant improvement of the training time, easy estimation of the final error, and simple parallelizability. However, with a complexity of order $O(N^2)$, they would not be very efficient for real-time problems using new training data (see [5]). By partitioning the data into clusters with size not larger than $M$, and by building an interpolation RBF network for each cluster, the overall computing time and the global training error will then be improved. The obtained network will be called a local interpolation RBF network.

Network architecture, method of building, and network operation

Assume that the interpolation node set, with $N$ nodes, is in $D = \prod_{i=1}^{n}[a_i, b_i]$. With a given integer $M$, based on the k-d tree clustering algorithm introduced in section 3.2 (see [6,7,8]), the domain $D$ is divided into sub-domains which are n-dimensional parallelepipeds $D_j$ ($j = 1,2,\ldots,k$), in such a way that each sub-domain contains no more than $M$ nodes. To determine the sub-domain $D_j$ for newly coming data $x$, a procedure (called the input sub-set locator) is included in the clustering algorithm. The HDH algorithm is then used to train an RBF network for each sub-domain $D_j$. With new data belonging to the sub-domain $D_j$, only the local interpolation RBF network associated to $D_j$ needs to be retrained. When the number of nodes in a sub-domain, taking into account the newly coming data, is larger than $M$, the k-d tree clustering algorithm is reused to partition this sub-domain into two with smaller sizes. The above procedure can be specified as follows:

Procedure Building local RBF network
Begin
  Divide D into sub-domains D_1,...,D_k; // using the k-d tree clustering algorithm, such that the number of nodes in each sub-domain is not over M.
  Build the input sub-sets;
  Train an interpolation RBF network on each input sub-set; // using the HDH algorithm.
  Connect the sub-set locator outputs to the RBF networks, which are connected to the output selector;
End

Figure 2: Procedure of building a local interpolation RBF network


The network architecture is described in Fig. 3.

During the training phase, the output selector is inactive; each subnet is trained independently.

For the interpolation, the output of the input sub-set locator directs the input $x$ to the corresponding subnet, while all the others have null input; that is, the outputs of all sub-nets are null except the one receiving $x$. Then the output of the selector is the sum of all sub-net outputs.
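The locator/selector wiring just described can be sketched as follows; the class and attribute names are illustrative assumptions, with each trained sub-net represented simply as a callable interpolant.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class LocalRBFNetwork:
    """Locator / sub-net / selector wiring of Fig. 3 (names are illustrative)."""
    boxes: List[np.ndarray]   # per sub-domain D_j: array [[lower bounds], [upper bounds]]
    subnets: List[Callable[[np.ndarray], float]]   # one trained interpolant per D_j

    def locate(self, x: np.ndarray) -> int:
        # Input sub-set locator: find the parallelepiped D_j containing x.
        for j, (lo, hi) in enumerate(self.boxes):
            if np.all(lo <= x) and np.all(x <= hi):
                return j
        raise ValueError("x lies outside the interpolation domain D")

    def predict(self, x: np.ndarray) -> float:
        # Only the located sub-net receives x; all others output null, so the
        # output selector's sum reduces to the single active sub-net output.
        outputs = [0.0] * len(self.subnets)
        j = self.locate(x)
        outputs[j] = self.subnets[j](x)
        return sum(outputs)
```

In an actual k-d tree the locator descends the tree in $O(\log \frac{N}{M})$ steps; the linear scan over the boxes above is only for brevity.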


Fig. 3. Local RBF network diagram

The clustering algorithm based on the k-d tree

The k-d tree was introduced by Bentley in [11], and its variants are efficient data clustering tools (see [6,7,8]). The following algorithm possesses several adjustments of the k-d technique proposed in [6,7]. Specifically, we divide an n-dimensional domain $D$ with $N$ data points into sub-parallelepipeds $D_1,\ldots,D_k$ such that each $D_j$ contains $M$ or fewer data points.

The algorithm can be described as the process of building an n-dimensional binary tree, in which the root of the tree is the parallelepiped $D$ containing the $N$ data objects; each node at depth $d$ is an n-dimensional parallelepiped, which is divided by a cut orthogonal to its largest edge into two child parallelepipeds containing the same quantities of data.

The procedure for two-dividing an n-dimensional parallelepiped $D_j = \prod_{i=1}^{n}[a_i, b_i]$ is as follows:

Step 1. Select $i$ such that the edge $[a_i, b_i]$ is the largest; // selects the cutting plane.

Step 2. Divide the edge $[a_i, b_i]$ by an orthogonal cut through at least one data point such that the two resulting parallelepipeds contain the same quantity of data, taking into account the data points on the cut. These points can be considered as belonging to both parallelepipeds if necessary.

The dividing procedure is applied repeatedly so that, at the final step, the number of data points in each sub-parallelepiped does not exceed $M$. During the process of dividing, the number of interpolation nodes in each parallelepiped at depth $d$ is approximately equal to $N/2^d$. When the dividing procedure ends, the quantity of data in each sub-domain belongs to the interval $[M/2, M]$. Fig. 4 describes the result of dividing a 2-dimensional parallelepiped $[0,10] \times [0,6]$ that contains 38 data points ($N=38$), where $M=10$. It results in 4 sub-parallelepipeds, which all contain 10 data points, where the points (1,4) and (8,2) are considered to belong to both adjacent parallelepipeds, while the point (3,1) belongs only to $D_1$.


Fig. 4. The k-d tree partition of a data set in a 2-dimensional space, where N=38, M=10.

The number of sub-domains, instead of the size M of a sub-domain, can also be used as the ending condition of the dividing procedure.
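A compact recursive sketch of this two-dividing procedure is given below, assuming median splits along the largest extent of the current point set; unlike the description above, this sketch assigns each boundary point to exactly one half rather than to both.

```python
import numpy as np

def kd_partition(points, M):
    """Two-divide a node set recursively, as in Steps 1-2 above (a sketch).

    points : (N, n) array of interpolation nodes in one parallelepiped
    M      : maximum number of nodes allowed per sub-domain
    Returns the list of leaf point sets (the sub-parallelepipeds' contents).
    """
    if len(points) <= M:
        return [points]
    # Step 1: choose the cutting direction orthogonal to the largest edge.
    i = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    # Step 2: cut through the median data point so both halves hold the same
    # quantity of data (each point goes to exactly one half in this sketch).
    order = np.argsort(points[:, i])
    half = len(points) // 2
    return (kd_partition(points[order[:half]], M)
            + kd_partition(points[order[half:]], M))

# Usage, echoing Fig. 4: 38 points in [0,10] x [0,6] with M = 10.
rng = np.random.default_rng(1)
pts = rng.uniform([0.0, 0.0], [10.0, 6.0], size=(38, 2))
print([len(leaf) for leaf in kd_partition(pts, M=10)])   # [10, 9, 10, 9]
```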

Network training algorithm complexity

The algorithm complexity is mainly accounted for by the complexity of the clustering based on the k-d tree structure and by the RBF network training on the sub-parallelepipeds. The k-d tree building process used in this paper is almost the same as those presented in [6,7,8]. However, in this


paper, the cut at each step is applied to the largest edge of the parallelepiped, while in the others the cut is applied successively to each edge. Therefore, the complexities of both techniques are almost the same and are given by $O(N \log N)$. The training time of each RBF network on a sub-domain is $O((T + c)nM^2)$. If the training is performed sequentially on each cluster, the complexity for training the RBF networks is $O((T + c)nMN)$. With the final k-d tree structure, when adding new data, the complexity for determining the cluster to which the new data belong is of order $O(\log \frac{N}{M})$.

Universal property of local RBF interpolation networks

The universal approximation property of interpolation RBF networks was proved in [12,13] for arbitrary RBF parameters. In the HDH algorithm, these parameters are constrained such that $q_k \in [\alpha q, q]$. As discussed in [5], this choice does not limit its usefulness in practice. However, due to this constraint, the generality of a network trained by the HDH algorithm could not be confirmed. Fortunately, this universal property is proved in the following theorem for local RBF interpolation networks trained by the HDH algorithm. To state the theorem, the $\delta$-grid concept is used. A set of nodes is called a $\delta$-grid on $D$ if $\forall x \in D$ there exists a node $x^j$ such that $d(x, x^j) < \delta$, where $d(x, x^j)$ is the distance induced by the norm defined in (14).

Theorem 1 (Universal property of local RBF interpolation networks). Let $f$ be a continuous function on $D$ and $M$ the maximum cardinality of each sub-cluster. For all $\varepsilon > 0$, there exists $\delta > 0$ such that, with any $\delta$-grid data set over $D$, a local interpolation RBF network can be trained well enough with this data set to give

$\forall x \in D, \quad |f(x) - \varphi(x)| < \varepsilon$  (17)

The proof of the above theorem is given in the appendix.

Dynamic problems

For dynamic problems, there will be new interpolation data added after the first training period. When new data are to be processed, the input locator determines the parallelepiped $D_j$; we then have to verify that the number of nodes of $D_j$ is smaller than or equal to $M$; otherwise, $D_j$ will be divided into two child parallelepipeds using the k-d tree technique. The corresponding local RBF networks are finally retrained. Based on the k-d tree, adding data into a suitable domain does not noticeably increase the running time. In addition, with the mentioned processing method, only one sub-network needs to be retrained. Experiment results presented in the next section show that the retraining time for added data is very short.
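A sketch of this update path, reusing hdh_train from the earlier sketch, is shown below; the Leaf container and the caller-supplied locate and split helpers are hypothetical names introduced only for illustration. (The paper reuses the previous width parameters when retraining; for brevity, the sketch retrains from scratch.)

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Leaf:
    X: np.ndarray       # interpolation nodes currently in this sub-domain D_j
    y: np.ndarray
    net: tuple = None   # (W, w0, sigma) as returned by hdh_train

def insert_and_retrain(leaves, x_new, y_new, M, locate, split):
    """Route new data to its sub-domain and retrain only that sub-net."""
    j = locate(x_new)                       # input sub-set locator
    leaf = leaves[j]
    leaf.X = np.vstack([leaf.X, x_new])
    leaf.y = np.append(leaf.y, y_new)
    if len(leaf.X) > M:
        # Over-full sub-domain: two-divide it with the k-d cut and train a
        # fresh network on each child; all other sub-nets stay untouched.
        children = split(leaf)
        for child in children:
            child.net = hdh_train(child.X, child.y)
        leaves[j:j + 1] = children          # replace the full leaf by its halves
    else:
        # Reinforcement training: retrain this sub-net only.
        leaf.net = hdh_train(leaf.X, leaf.y)
    return leaves
```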

EXPERIMENT RESULTS

As shown in [5], to reduce the complexity, we use the norm defined by $\|u\| = \max_j |u_j|$ instead of $\|u\|_*$ in (14). The goal of the simulation is to compare the training time, the error, and the generality of local networks with respect to those of global interpolation RBF networks (here the word "global" is used to distinguish them from local networks). For the HDH algorithm, the ending condition defined by (15) provides the error on the final weight vectors, but not the interpolation error. Nonetheless, the comparison of these two network architectures must be based on the final interpolation error. For dynamic problems, the total training time with new training data must also be compared for both methods.

In the following, we consider different functions. The first one, with two variables, $y = (x_1^2 + x_1 x_2)/5 + 1/2$, where $x_1 \in [0,8]$ and $x_2 \in [0,10]$, provides a case where the diameter of the sub-domains quickly reduces when the number of domains increases, as remarked in section 3.3. The second one, with three variables,


$y = x_1 x_2 + \sin(x_2 + x_3 + 1) + 4$, with $x_1 \in [0,3]$, $x_2 \in [0,4]$ and $x_3 \in [0,5]$, gives a more complex case. These two examples are representative for analyzing the efficiency of the different networks in accordance with the complexities evaluated in section 3.3.

The simulation is performed on a personal computer equipped with an Intel Pentium IV processor, 3.0 GHz, 512 MB DDR RAM. The chosen parameters for the different scenarios are specified by the constants $q$ and $\alpha = 0.9$, with the ending condition in expression (15) corresponding to a fixed small $\varepsilon$. The RBF sub-networks are sequentially trained.

Comparison of training time and error

Experiment results are presented in Tables 1 and 2, respectively for the first and the second example. After the training is finished, we take 10 interpolation nodes to compute the resulting training error, in order to compare the efficiency of the different networks. The first and the second examples respectively have 4096 and 19683 equally spaced interpolation nodes. In Table 1, the size $M$ of the sub-domains is fixed, while for Table 2 the k-d tree construction is completed when a given number of sub-domains is reached.

Comments: From the results shown in Tables 1 and 2, we can observe that:

1. The network training time rapidly decreases as the number of sub-domains increases.

2. The training error noticeably decreases as the number of sub-domains increases.

These two effects are more pronounced for cases with small sub-domain data size.

Comparison of generality

Experiment results are presented in Table 3 and Table 4, respectively for the two-variable and three-variable functions. For these simulations, the training leaves out 10 interpolation nodes (points far away from the centers). When the training is finished, the interpolation error is computed at these nodes. In Table 3, the size $M$ of the sub-domains is fixed, while in Table 4 the k-d tree construction is completed when reaching a given number of sub-domains.

Comment: The network generality, characterized by the interpolation error, is better when the number of sub-domains increases, especially for cases with small sub-domain data size (as in the case of the two-variable function).

Reinforcement training for dynamic problems

To analyze the efficiency of the local interpolation RBF network, we consider the three-variable function $y = x_1 x_2 + \sin(x_2 + x_3 + 1) + 4$; $x_1 \in [0,3]$, $x_2 \in [0,4]$, $x_3 \in [0,5]$, where $x_1$ and $x_2$ are equally spaced and $x_3$ is randomly chosen in the range $[0,5]$. The simulations are done for different scenarios associated with different numbers of interpolation nodes.

The training in the first period is done using all but five interpolation nodes, which are considered as new training data. For the newly coming data, the width parameters $\sigma_k$ obtained in the first training period are reused as initial parameters for the second training period, namely the reinforcement training. The simulation results are shown in Table 5. Note that for the scenario of 200 nodes, the running time is determined only up to a unit of one second.

Comments: When there are new training data, if we use the whole set of data, including the newly coming ones, to retrain the networks, the training time must be larger than the one corresponding to the first training period. But if we use the reinforcement training technique, the training time becomes very small. This strong advantage of local networks is inherited from the HDH algorithm.


[Tables 1 and 2 (training time and training error versus the number of sub-domains, for the first and the second example) and Tables 3 and 4 (generality, i.e. interpolation error at the left-out nodes, for the two-variable and three-variable functions): values illegible in the source scan.]


Table 5. Enhanced training time comparison

Number of   Training time, first period     Reinforcement training time,
nodes       (5 nodes discarded)             adding 5 new nodes
            Phase 1   Phase 2   Total       Phase 1   Phase 2   Total
200         2"        1"        3"          1"        1"        2"
500         7"        1"        8"          2"        1"        3"
1000        34"       2"        36"         6"        2"        8"
1600        68"       3"        71"         13"       3"        16"
2025        126"      4"        130"        19"       4"        23"

REFERENCES

1. C.G. Looney, Pattern Recognition Using Neural Networks: Theory and Algorithms for Engineers and Scientists, Oxford University Press, New York, 1997.
2. S. Haykin, Neural Networks: A Comprehensive Foundation, second ed., Prentice-Hall Inc., 1999.
3. T.M. Mitchell, Machine Learning, McGraw-Hill, 1997.
4. Hoang Xuan Huan, Dang Thi Thu Hien, An iterative algorithm for training an interpolation RBF network, in: Proc. Vietnamese National Workshop on Some Selected Topics of Information Technology, Haiphong, Vietnam, 2005, pp. 314-323.
5. Hoang Xuan Huan, Dang Thi Thu Hien and Huu Tue Huynh, A novel two-phase efficient algorithm for training interpolation radial basis function networks, Signal Processing, Vol. 87, Issue 11 (November 2007), pp. 2708-2717.
6. M. Dikaiakos and J. Stadel, A performance study of cosmological simulations on message-passing and shared-memory multiprocessors, in: Proceedings of the 10th ACM International Conference on Supercomputing, ACM, May 1996, pp. 94-101.
7. S. Berchtold, H.P. Kriegel, Indexing the solution space: a new technique for nearest neighbor search in high-dimensional space, IEEE Transactions on Knowledge and Data Engineering, Vol. 12, No. 1, 2000, pp. 45-57.
8. Andrzej Chmielewski, Slawomir T. Wierzchon, V-detector algorithm with tree-based structures, in: Proceedings of the International Multiconference on Computer Science and Information Technology, 2006, pp. 11-16.
9. M.J.D. Powell, Radial basis function approximations to polynomials, in: Proc. Numerical Analysis 1987, Dundee, UK, 1988, pp. 223-241.
10. E. Blanzieri, Theoretical interpretations and applications of radial basis function networks, Technical Report DIT-03-023, Informatica e Telecomunicazioni, University of Trento, 2003.
11. J.L. Bentley, Multidimensional binary search trees used for associative searching, Commun. ACM, 18(9): 509-517, September 1975.
12. E.J. Hartman, J.D. Keeler and J.M. Kowalski, Layered neural networks with Gaussian hidden units as universal approximations, Neural Comput. 2(2) (1990), 210-215.
13. J. Park and I.W. Sandberg, Approximation and radial-basis-function networks, Neural Comput., Vol. 5, No. 3, 305-316, 1993.

TÓM TẮT

THE INTERPOLATION PROBLEM AND RBF NEURAL NETWORKS

Vu Thi Thanh Thuy
Thai Nguyen University of Education

In this paper, I propose an algorithm and build a method for approximating multivariate functions, treating local interpolation RBF networks as a neural-network approach. In these networks, the training data are clustered into small sub-clusters, and on each sub-cluster an interpolation RBF network is trained using the following new algorithm: a two-phase algorithm for training interpolation RBF networks using Gaussian basis functions, whose complexity depends on N, the number of nodes. Using this method, the training is shorter and the generality is better than with a global RBF network; moreover, this kind of interpolation RBF network is more efficient for many variables.

Keywords: radial basis function; width parameters; output weights; contraction transformation; interpolation problem; RBF network

Tel: 0989 948283, Email: thuychoat@gmail.com
