
Neural networks through the hourglass

Tatyana Turova ¹

Mathematical Center, Lund University, 22100 Lund, Sweden

E-mail address: tatyana@maths.lth.se (T. Turova).

¹ On leave from the Institute of Mathematical Problems in Biology, RAS, Russia.

Abstract

The effect of synaptic plasticity on the dynamics of a large neural network is studied. Our approach is analytic but inspired by the data, both simulated and experimental. We explain the formation of small, strongly connected assemblies within a dynamical network following Hebb's rule. Also, we find the conditions for the synchrony effect in a stochastic network in the absence of large synchronized input. © 2000 Elsevier Science Ireland Ltd. All rights reserved.

Keywords: Attractor; Neural network; Ornstein–Uhlenbeck process


1. Introduction

The concept of using the theory of random processes to describe neural activity has been elaborated since the 1960s. The analysis reported by Griffith (1971) already indicated the remarkable variability of the behaviour of stochastic biological neural networks, but it was not until the 1990s that the existence of phase transitions was proved rigorously (Cottrell, 1992; Karpelevich et al., 1995; Malyshev and Turova, 1997).

Given the numerous simulation results on the behaviour of large stochastic networks, it is natural to try to make more links between computational and analytic approaches. For this purpose, I consider here two quite different examples from the literature: one is a result of numerical simulations by Xing and Gerstein (1996), and the other is reported as physiological data by Charpier et al. (1994). In both these examples, the effect of synaptic plasticity on the cooperative behaviour of a large network is studied. In this paper a mathematical model is presented by means of which one can observe qualitatively the same phenomena. I shall discuss which features of this model are essential and even necessary for possessing the same qualities. Then, based on the known results and also on the presented analysis, I will be able to answer some of the questions posed by Charpier et al. (1994) and Xing and Gerstein (1996).

2. The model

Consider a model that fulfils a minimum requirement to be called 'neuronal', i.e. it takes into account the spiking nature of the neuronal activity, the exponential decay of the post-synaptic potentials, and the exponential decay of the deviation of the membrane potential from the rest potential. As an example, I take a model introduced by Kryukov (1978) under the assumption that the electrical activity (the membrane potential) of a single independent neuron can be described by some Markov process. In fact, a model for a single neuron similar to that described in Section 2.1 appears already as a 'realistic neuronal model' in the monograph by Griffith (1971). In the form presented here, the model was formulated and studied by Turova (1996, 1997).

2.1. A model neuron

The neuron is modelled as one point, neglecting the propagation of membrane current along the axons and the dendrites. The activity of each neuron is described by the stochastic point process of the consecutive firing moments (spikes). Assume that any individual neuron is 'active', i.e. the sequence of its spikes forms a renewal process. More exactly, in the absence of interactions, let the inter-spike intervals be independent random variables with a generic variable

$$Y \overset{d}{=} \inf\{r > 0 : \eta(r) = y\,e^{-\alpha r}\},$$

where $\eta(r)$ is the Markov process (an Ornstein–Uhlenbeck process) describing the membrane potential of a single independent neuron, and $\alpha > 0$ is the decay rate of the threshold.

Varying the parameter $y$, one can adjust the mean of the inter-spike interval to the data. Roughly speaking, the mean inter-spike interval is an increasing function of $y$.
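As an illustration, one can sample $Y$ numerically as the first hitting time of the decaying threshold by a discretized Ornstein–Uhlenbeck trajectory. In the Python sketch below, the drift rate, the noise level, the Euler step and the function name `sample_Y` are illustrative assumptions, not quantities fixed by the model:

```python
# A minimal sketch of sampling the inter-spike interval Y as the first
# moment an Ornstein-Uhlenbeck membrane potential eta(r) hits the
# exponentially decaying threshold y*exp(-alpha*r). The OU parameters
# and the Euler step are illustrative choices, not taken from the paper.
import numpy as np

def sample_Y(y=2.0, alpha=0.1, sigma=1.0, dt=1e-3, rng=None, t_max=100.0):
    """Return one sample of Y (first hitting time), or t_max if no hit."""
    rng = np.random.default_rng() if rng is None else rng
    eta, r = 0.0, 0.0
    while r < t_max:
        # Euler-Maruyama step for the OU process d(eta) = -eta dt + sigma dW
        eta += -eta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        r += dt
        if eta >= y * np.exp(-alpha * r):   # threshold crossed: a spike
            return r
    return t_max

# The mean inter-spike interval grows with the threshold parameter y:
rng = np.random.default_rng(0)
for y in (1.0, 2.0, 3.0):
    print(y, np.mean([sample_Y(y=y, rng=rng) for _ in range(200)]))
```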

2.2. A network

The neurons of the network are enumerated by the sites $i \in L \subseteq \mathbf{Z}^2$. The evolution of the $i$th neuron is described by a membrane potential $x_i(t) \in \mathbf{R}$ and a threshold function $y_i(t) \in \mathbf{R}_+$, $t \ge 0$, which are continuous stochastic processes except for, at most, a countable number of points of discontinuity. It is assumed that $x_i(t) < y_i(t)$ for all $t \ge 0$, except for the random moments $0 < t_i^1 < t_i^2 < \cdots$ of firing of the $i$th neuron. At any moment of firing of the $i$th neuron, the membrane potential $x_i(t)$ and the threshold $y_i(t)$ are reset jumpwise to the values 0 and $y$, and the process of the membrane potential accumulation is repeated until the next firing. Set

$$S_i(t) := t - \max\{t_i^k : t_i^k \le t\}.$$

Thus $S_i(t)$ denotes the time elapsed since the last firing of the $i$th neuron until the moment $t$. The threshold function $y_i(t)$ is defined by

$$y_i(t) := y\,e^{-\alpha S_i(t)}, \qquad t \ge 0.$$

The evolution of the membrane potentials of the interacting neurons is given by the system of Eq. (3). This form of interaction, suggested by Kryukov et al. (1990), has the advantage of being mathematically tractable while still resembling physiological data: it takes into account the exponential decay of the post-synaptic potentials, and the connection constants $a_{ij}$ can be chosen positive as well as negative, to model excitatory and inhibitory connections, correspondingly.

In order to study the dynamics of the spike trains generated by the model of Eq. (3), I introduce an embedded process $R(t) = [R_i(t),\ i \in L] \in \mathbf{R}_+^L$, each of whose components $R_i(t)$ equals the residual time until the next firing of the $i$th neuron, assuming no interaction takes place meanwhile.

It has been proved by Turova (1996) that

$$[R_i(t),\ i \in L]_{t \ge 0} \overset{d}{=} [X_i(t),\ i \in L]_{t \ge 0}, \tag{5}$$

where the process $X(t)$ is known as an 'hourglass model'. (The name is suggested by the straightforward analogy with a sand-glass and a spin-glass, and is used in the sense of a 'time-glass'. In fact, 'timglas' means sand-glass in Swedish.) The hourglass model has a very clear description, which is now outlined briefly, referring for the details to Turova (1996, 1997). As long as all the components of $X(t)$ are strictly positive, they decrease from the initial state $X(0)$ linearly in time with rate one until the first time, $t_z$, that one of the components reaches zero for some $z \in L$: $X_z(t_z) = 0$. Then $X_z$ is reset to $X_z(t_z^+) = Y_z^{(1)}$, where $Y_z^{(1)}$ is an independent copy of the variable $Y$ already defined. At the same moment, every trajectory $X_j(t)$ with $j$ in the interacting neighbourhood of $z$ receives, instantaneously, a random increment $u_{zj}(t)$. After the moment $t_z$, the foregoing dynamics are repeated.

Observe that the positive, i.e. excitatory, connections $a_{zj}$ result in a negative sign of $u_{zj}(t)$, which shortens the time until the next firing and might even cause a simultaneous firing of the $j$th neuron, in which case $X_j$ is also reset to $X_j(t_z^+) = Y_j^{(1)}$. On the contrary, the inhibitory connections, resulting in a positive sign of $u_{zj}$, increase $X_j(t)$ instantaneously, and therefore delay the next firing. The distributions of $u_{ij}(t)$ are derived from the model. In particular, if $a_{ij} = -a < 0$, then given that the post-synaptic $j$th neuron is in a state $X_j(t) = v > 0$, one can derive the following formula for the density of $u_{ij}(t)$:

$$\frac{2\alpha^{3/2} e^{2\alpha v}\, a\, e^{-\alpha u}}{\sqrt{\pi\,(e^{2\alpha v} - 1)^3}}\ \exp\left\{-\frac{\alpha a^2 e^{-2\alpha u}}{e^{2\alpha v} - 1}\right\}, \qquad v > 0. \tag{6}$$

Notice that the hourglass model with time-independent interactions had been introduced and studied independently, as a model for neuronal activity, by Cottrell (1992). With the results of Turova (1996), it became possible to choose the parameters of this model consistently with those of the standard model of interacting membrane potentials. The equality of Eq. (5) implies that the spike trains of the model of Eq. (3) and the process $X(t)$ are equal in distribution. But the process $X(t)$ is much easier to treat analytically, since it has piecewise linear trajectories. I shall emphasize that the definition of the hourglass process does not require any conditions on the original model. Therefore, it appears to be a quite useful probabilistic tool.
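Indeed, the piecewise linear trajectories make the hourglass process straightforward to simulate event by event. The sketch below runs the dynamics on a ring of neurons with nearest-neighbour coupling; the exponential laws standing in for $Y$ and for the increments $u_{zj}$, and the coupling weight `w`, are illustrative choices rather than the distributions derived from the model (such as Eq. (6)):

```python
# A sketch of the event-driven hourglass dynamics. Between firings every
# component of X decreases linearly at rate one; the component reaching
# zero is reset to a fresh copy of Y, and its nearest neighbours receive
# instantaneous increments (negative = excitatory, positive = inhibitory).
import numpy as np

def hourglass(n=10, steps=200, w=0.3, inhibitory=False, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.exponential(1.0, size=n)          # initial residual times
    spikes, t = [], 0.0
    for _ in range(steps):
        z = int(np.argmin(X))                 # next neuron to reach zero
        dt = X[z]
        t += dt
        X -= dt                               # linear decrease at rate one
        spikes.append((t, z))
        X[z] = rng.exponential(1.0)           # reset to independent copy of Y
        for j in ((z - 1) % n, (z + 1) % n):  # nearest-neighbour coupling
            u = rng.exponential(w)
            if inhibitory:
                X[j] += u                     # positive increment: delay
            else:
                X[j] -= u                     # negative increment: advance
                if X[j] <= 0.0:               # excitatory kick may force a
                    X[j] = rng.exponential(1.0)
                    spikes.append((t, j))     # simultaneous firing
    return spikes

print(len(hourglass(inhibitory=True)), "firing events")
```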

3. Temporal coding of the dynamics

It has been observed in many simulations (see, for example, Xing and Gerstein, 1996; Cottrell et al., 1997), as well as proved analytically for some classes of the hourglass models (Karpelevich et al., 1995; Cottrell and Turova, 2000), that a network with strong enough inhibition splits, with time, into two subsets. These are a subset of active, i.e. infinitely often firing, neurons and a subset of inactive neurons. Furthermore, the subset of the inactive neurons remains unchanged once it reaches a certain state, so that "special manipulations are required to activate these silent neurons", as noticed by Xing and Gerstein (1996).

The subsets of inactive neurons can be used for the description of the limiting states of the dynamics. Recall the definition of the attractors given by Malyshev and Turova (1997) for the transient hourglass models. Let $(\Omega, \Sigma, \mathbf{P})$ be the underlying probability space of the process $X(t)$. For any $A \subseteq L$ let $\Omega(A) \subseteq \Omega$ be the set of all trajectories $\omega \in \Omega$ such that

$$X_i(t, \omega) \to \infty \ \text{ as } \ t \to \infty, \ \text{ if and only if } \ i \in A. \tag{7}$$

Here $X_i(t, \omega)$ denotes a particular realization of the random trajectory $X_i(t)$. It is clear that $\Omega = \cup_A \Omega(A)$, where $A$ runs over all subsets of $L$.

Definition 1. We call an 'attractor' any non-empty set $A$ such that $\mathbf{P}(\Omega(A)) > 0$.

The attractors defined here are the only stable patterns of the silent neurons for the transient hourglass models. Let us rewrite this definition. For any $T > 0$, $n \in \mathbf{Z}_+$ and $i \in L$ set

$$p_i(n, T) := \frac{1}{T}\,\#\{(n-1)T < t \le nT : \text{the } i\text{th neuron fires at time } t\}.$$

Definition 2. We call an 'attractor' any non-empty set $A$ such that

$$\lim_{n \to \infty} p_i(n, 1) = 0 \quad \text{if } i \in A,$$

while for any $j \notin A$, there exists

$$\lim_{T \to \infty} p_j(1, T) > 0.$$

According to the results by Turova (1996), any finite model (Eqs. (3) and (4)) is ergodic for any fixed connection constants. Therefore, in this case, there are no attractors in the sense of the previous definition. Instead, we introduce a 'meta-stable state'. Choose a constant $T_c > 0$ large enough when compared with the other time characteristics of the network, and modify Definition 2 as follows.

Definition 3. We call a 'meta-stable state' any non-empty set $A$ for which, with a positive probability, there exists an infinite sequence $\{n_l\}_{l \ge 1}$ such that

$$p_i(n_l, T_c) = 0, \quad l \ge 1, \quad \text{if } i \in A.$$
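In simulations, candidate meta-stable sets can be located by estimating the empirical rates $p_i(n, T)$ from recorded spike trains. A minimal sketch, assuming spike trains stored as (time, neuron) pairs as in the earlier listing:

```python
# Estimate the firing rates p_i(n, T) of Definitions 2 and 3 from a
# recorded spike train, and flag neurons that are silent on every
# observed window of length Tc: candidates for a meta-stable set A.
import numpy as np

def firing_rates(spikes, n_neurons, T, n_windows):
    """p[i, n] = (1/T) * number of spikes of neuron i in ((n-1)T, nT]."""
    p = np.zeros((n_neurons, n_windows))
    for t, i in spikes:
        n = int(np.ceil(t / T)) - 1           # 0-based window index
        if 0 <= n < n_windows:
            p[i, n] += 1.0 / T
    return p

def silent_set(spikes, n_neurons, Tc, n_windows):
    """Neurons with p_i(n, Tc) = 0 on all observed windows."""
    p = firing_rates(spikes, n_neurons, Tc, n_windows)
    return {i for i in range(n_neurons) if np.all(p[i] == 0.0)}
```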

Here is the first conclusion one can draw on the basis of the hourglass model. If the interactions of the system are such that the increments $u_{ij}(t)$ of the corresponding hourglass model are time and space homogeneous, then in the presence of strong enough inhibition the system moves into one of its attractors and stays there 'forever'. Recall that for a one-dimensional model with nearest-neighbour inhibitory interactions, all the possible attractors were classified by Karpelevich et al. (1995). These attractors are random; we observe them on a microscopic scale. However, on a larger scale, we obtain a deterministic macro image due to the law of large numbers. The structure of equilibrium measures on attractors has been studied by Malyshev and Turova (1997).

4. Random graphs and Hebb’s rule

The discussion in this section is inspired by the results of Xing and Gerstein (1996), who have observed, in particular, that in the presence of strong enough inhibition a homogeneous network becomes, after training based on the Hebb rule, composed of stable groups of neurons. The neurons within a group are strongly connected, while the connections between the groups are weak. The striking feature is that, when compared with the size of the whole network, the size of any group is small.

A natural question arises: why do we observe only ‘small’ groups after the training based on the Hebb rule?

To answer this question, I shall analyze the behaviour of our network under similar conditions. To eliminate the boundary effect, let $L$ be a two-dimensional torus. We assume that the $i$th neuron sends excitatory impulses to the neurons enumerated by the sites in $D_E(i) = \{j \in L : 0 < \|i - j\| \le d\}$, and it sends inhibitory impulses to $D_I(i) = \{j \in L : d < \|i - j\| \le D\}$, where the cardinalities $|D_E(i)| = D_E$ and $|D_I(i)| = D_I$ are independent of $i$.

To be able to demonstrate the attractors, I shall consider a network such that the corresponding hourglass process has interactions that become state-independent after training based on the following rule attributed to Hebb (see also Xing and Gerstein (1996)):

1. synaptic strength is increased when pre- and post-synaptic neurons fire in near synchrony;

2. the total outward synaptic strength from a neuron remains constant.

In the corresponding hourglass model, let the interactions $u_{ij}(t)$ be modulated by activity-dependent coefficients $q_{ij}(t)$. They are built from independent random variables $u_{ij}$, $i \in L$, $j \in D_I(i) \cup D_E(i)$, such that

$$u_{ij} < 0 \ \text{ with } \ \mathbf{E}u_{ij} = -B_I, \quad \text{if } j \in D_I(i),$$

$$u_{ij} > 0 \ \text{ with } \ \mathbf{E}u_{ij} = B_E, \quad \text{if } j \in D_E(i),$$

and $0 \le q_{ij}(t) \le D_E$ are the following random functions with piecewise constant right-continuous trajectories. Set $q_{ij}(0) = 1$ for all $i \in L$, $j \in D_I(i) \cup D_E(i)$, and choose $0 < \varepsilon < \mathbf{E}Y/D_E$. Then $t + \varepsilon$ is a discontinuity point of $q_{ij}(\cdot)$, $j \in D_E(i)$, if the $i$th neuron fires at time $t$ and the $j$th neuron fires within the time interval $(t, t + \varepsilon]$. More precisely, let $F_i(t) \subseteq D_E(i)$ be the set of the neurons which fire within $(t, t + \varepsilon]$; at the moment $t + \varepsilon$ the training rule (Eq. (8)) increases $q_{ij}$ for $j \in F_i(t)$ while keeping the total outward strength constant, in accordance with the Hebb rule above.

Cottrell and Turova (2000) have proved, for the model with a specific architecture of the interacting neighbourhood, that for any value $w_E$ of the excitatory connections there exists a critical value $w_I^{cr}(w_E)$ of the inhibitory connections that separates the ergodic and transient cases. Furthermore, it is proved by the same authors, and simulated for other types of interacting neighbourhoods by Cottrell et al. (1997), that $w_I^{cr}(w_E)$ is a non-increasing function. It is natural then to conjecture here also that, when $B_E$ is fixed and $q_{ij}(t) \equiv 1$, there is a constant $B_I^{cr}$, independent of the size of the network, such that the system is transient if $B_I > B_I^{cr}$ and ergodic when $B_I < B_I^{cr}$.

I shall use random graphs to illustrate the dynamics of accumulating the 'strong' connections in the network due to the Hebb rule. The random graph $G(t)$ consists of the set of vertices $L$ and the set $E(t)$ of the directed edges $(i, j) \in L \times L$. Set $E(0) = \emptyset$. Then, for any $t > 0$, there is a directed edge at time $t$ from $i$ to $j$ if $q_{ij}(t) \ge [1/(1 + \varepsilon)]\,D_E$. Thus, the neurons at the nodes of a connected component of $G(t)$ fire in near synchrony.
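To extract these assemblies from the coefficients, one can draw an edge whenever $q_{ij}(t)$ exceeds the threshold and collect the weakly connected components, for instance with a union-find structure. A minimal sketch, reusing the dictionary layout of the previous listing:

```python
# Build G(t) from the coefficients q and return its weakly connected
# components: the groups of neurons firing in near synchrony.
def components(q, threshold):
    """q: dict (i, j) -> strength. Returns a list of vertex sets."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    for (i, j), strength in q.items():
        find(i), find(j)                      # register both vertices
        if strength >= threshold:             # edge i -> j is in E(t)
            parent[find(i)] = find(j)         # merge the two components
    groups = {}
    for v in list(parent):
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())
```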

The dynamics of the graph $G(t)$ basically has two phases:

1. accumulation of the connected components, when the excitatory connections play the major role; and

2. formation of the stable groups of the connected neurons along the connected components, when the inhibitory connections come into play.

Consider these phases more closely. Notice that the probability of the appearance of an edge in this graph is at most the probability that, in the neighbourhood of the $i$th firing neuron, there will be a neuron that fires in near synchrony. Simple analysis shows that this probability is proportional to $\varepsilon \max_x p_Y(x)$, where $p_Y$ is the density of $Y$. Then it is not difficult to obtain the following bound for the length of any connected component $\mathcal{L}$ of the graph $G(t)$ at time $t \le \mathbf{E}Y$:

$$\mathbf{E}\,\mathcal{L} \le C\,\varepsilon\,D_E \max_x p_Y(x)$$

for some positive constant $C$ independent of $L$. But as soon as there is at least one edge in the graph $G(t)$, those neurons that are in the $D_I$-neighbourhoods of the connected component receive, roughly speaking, twice as many inhibitory impulses as there were from a neuron in a free node of the graph. Thus, when the size of the connected subgraph becomes of the order of $B_I^{cr}/B_I$, this group will be surrounded by the silent neurons, and the dynamics of this group will be independent of the rest of the active neurons. This is in perfect agreement with the fact observed by Xing and Gerstein (1996) that the size of the connected group decreases when the inhibition increases.

I conclude that, after the training (Eq. (8)), any limiting structure of the network will be composed of small groups of the connected neurons, unless the coefficients $q_{ij}(t)$ are kept bounded from above by a small constant independent of $D_E$.


5. Role of weak inhibitory connections

It has been reported by Charpier et al. (1994) that a set of inhibitory interneurons "is capable of repetitive discharges and that evoked as well as spontaneous firing in this population is synchronized". Furthermore, this synchrony is achieved in the absence of large synchronous input. Charpier et al. (1994) conjectured that the underlying mechanism of this phenomenon is based on activity-dependent plasticity, and is possible in the presence of excitatory pathways to these interneurons.

I shall now show that a similar effect can be observed in our model (Eq. (3)) with a properly designed architecture.

Consider the network composed of two types of neurons: inhibitory, which transmit only inhibitory impulses and are enumerated by the sites of $L_I := \{(k, 0) : 1 \le k \le N\}$, and excitatory, which transmit only excitatory impulses and are enumerated by the sites of $L_E := \{(k, 1) : 1 \le k \le N\}$. Let $L = L_I \cup L_E$. Notice that a neural model where inhibitory and excitatory neurons are distinct is regarded as a more realistic one (see, for example, Xing and Gerstein, 1996). For fixed positive constants $b$, $c$ and $d$, we set

$$a_{ij} = \begin{cases} b > 0, & \text{if } i \in L_E \text{ and } 0 < \|i - j\| \le d, \\ -c < 0, & \text{if } i \in L_I \text{ and } 0 < \|i - j\| \le d, \\ 0, & \text{otherwise.} \end{cases}$$

Choose the initial conditions such that, for some constant $L > 0$,

$$-2L < x_i(0) < -L, \ \text{ if } i \in L_E, \quad \text{and} \quad x_i(0) \le y, \ \text{ if } i \in L_I.$$
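This architecture is easy to set up explicitly. A minimal sketch, where the Euclidean distance between sites and the uniform draws for the initial potentials are illustrative readings of the conditions above (the constant $L$ of the initial conditions appears as `Lc` to avoid a clash with the lattice $L$):

```python
# Two rows of N neurons: inhibitory on L_I = {(k, 0)}, excitatory on
# L_E = {(k, 1)}. A sender in L_E contributes a_ij = b > 0, a sender in
# L_I contributes a_ij = -c < 0, within distance d. Initial potentials
# satisfy -2L < x_i(0) < -L on L_E and x_i(0) <= y on L_I.
import numpy as np

def build_network(N=20, b=1.0, c=1.5, d=2.0, Lc=10.0, y=2.0, seed=0):
    rng = np.random.default_rng(seed)
    L_I = [(k, 0) for k in range(1, N + 1)]   # inhibitory row
    L_E = [(k, 1) for k in range(1, N + 1)]   # excitatory row
    a = {}
    for i in L_I + L_E:
        for j in L_I + L_E:
            dist = np.hypot(i[0] - j[0], i[1] - j[1])
            if 0 < dist <= d:
                a[(i, j)] = b if i[1] == 1 else -c   # sign set by sender type
    x0 = {i: rng.uniform(-2 * Lc, -Lc) for i in L_E}
    x0.update({i: rng.uniform(0.0, y) for i in L_I})
    return a, x0

a, x0 = build_network()
```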

By choosing $L$ large enough, only the inhibitory neurons are allowed to be active from the initial moment. Then the firings in the network will evolve as follows. From Eq. (6), I derive the bounds for the mean:

$$C_1\, a_{ij}\, \exp\{-\alpha X_j(t)\} \le \mathbf{E}\,u_{ij}(t) \le C_2\, a_{ij}\, \exp\{-\alpha X_j(t)\},$$

where $0 < C_1 < C_2$ are some constants. This implies that if the excitatory $j$th neuron has been receiving inhibitory impulses for a long time, say $T$, then $X_j(T)$ is large, meaning that the expectation of the inhibitory impulse $\mathbf{E}\,u_{ij}(T)$ is small. In other words, this $j$th neuron does not react strongly to the incoming inhibitory impulses after $X_j(T)$ reaches a certain level. Thus, our particular choice of interactions models the effect of 'fatigue'. Furthermore, $X(t)$, $t > T$, drifts with a positive probability towards the state where the components $X_j$, $j \in L_E$, are equal.

(For a detailed explanation of this mechanism, refer to Turova (1997).) Therefore, with probability one, there will be a moment when the firing of some excitatory neurons causes the synchronized firing of the subnet $L_E$. Notice that the synchronization in the excitatory network was described in detail by Turova et al. (1994). Due to this synchronized firing, every inhibitory neuron receives a large excitatory impulse, which in turn yields a synchronized response in the inhibitory subsystem. Observe, however, that this synchrony breaks after the first synchronous firing of the inhibitory subsystem, but it will be repeated again under the same conditions.

Thus, the activity-dependent connections (Eq. (4)) allow us to design a synchrony effect in a stochastic network in the absence of large synchronized input.

6. Conclusions

State-dependent and activity-dependent connections are necessary for the simulation of a dynamical network which possesses a large (with respect to the size of the network) number of meta-stable states, each of which it can visit within a finite time without external interruption.

Acknowledgements


References

Charpier, S., Behrends, J., Chang, Y.-T., Sur, C., Korn, H., 1994. Synchronous bursting in a subset of interneurons inhibitory to the goldfish Mauthner cell: synaptic mediation and plasticity. J. Neurophysiol. 72, 531–541.

Cottrell, M., 1992. Mathematical analysis of a neural network with inhibitory coupling. Stoch. Proc. Appl. 40, 103–126.

Cottrell, M., Turova, T.S., 2000. Use of hourglass model in neuronal coding. J. Appl. Probab. 37, 168–186.

Cottrell, M., Piat, F., Rospars, J.-P., 1997. A stochastic model for interconnected neurons. BioSystems 40, 29–35.

Griffith, J.S., 1971. Mathematical Neurobiology. An Introduction to the Mathematics of the Nervous System. Academic Press, London.

Karpelevich, F.I., Malyshev, V.A., Rybko, A.N., 1995. Stochastic evolution of neural networks. Markov Process. Relat. Fields 1, 141–161.

Kryukov, V.I., 1978. Markov interaction processes and neuronal activity. In: Lecture Notes in Mathematics, vol. 653. Springer, Berlin, pp. 122–139.

Kryukov, V.I., Borisyuk, G.N., Borisyuk, R.M., Kirillov, A.B., Kovalenko, Ye.I., 1990. Metastable and unstable states in the brain. In: Dobrushin, R.L., Kryukov, V.I., Toom, A.L. (Eds.), Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis. Manchester University Press, Manchester, UK, pp. 226–357.

Malyshev, V.A., Turova, T.S., 1997. Gibbs measures on attractors in biological neural networks. Markov Process. Relat. Fields 3, 443–464.

Turova, T.S., 1996. Analysis of a biologically plausible neural network via an hourglass model. Markov Process. Relat. Fields 2, 487–510.

Turova, T.S., 1997. Stochastic dynamics of a neural network with inhibitory and excitatory connections. BioSystems 40, 197–202.

Turova, T.S., Mommaerts, W., van der Meulen, E.C., 1994. Synchronization of firing times in a stochastic neural network model with excitatory connections. Stoch. Proc. Appl. 50, 173–186.

Xing, J., Gerstein, G.L., 1996. Networks with lateral connectivity I, II, III. J. Neurophysiol. 75, 184–231.
