
6.1 Maxent for Equilibrium Statistical Mechanics

In the chapters that follow we construct models of specific systems and apply maxent methods to elucidate their behavior.

In this chapter we provide a brief introduction to the formalism of the maxent methods for equilibrium and non-equilibrium systems. In Chapter 7 we apply maxent to a variation of the famous dog-flea model of the Ehrenfests [131] and derive transport equations for particle diffusion (Fick’s law), heat diffusion (Fourier’s law), momentum diffusion (Newton’s law of viscosity), and chemical flux (rate equation). In Chapter 8 we develop simple microtrajectory models for biological problems like molecular motors and voltage-gated ion channels, and for physical problems like the diffusion of a Brownian particle in dual optical traps, which is an analog of these biological examples. We then apply maxent to these models, subject to the external constraints, and obtain the probabilities of the microtrajectories.

Since the number of equations m is less than the total number of variables n, this is clearly a problem of missing information and has no unique solution. To obtain a solution that is least biased with respect to the known constraints (Eqs. 6.1 and 6.3), Jaynes proposes maximizing the “information measure” S_I = -\sum_i p_i \ln p_i introduced by Shannon. This quantity is known to have the properties of consistency and uniqueness, which make it the correct measure of the “amount of uncertainty” in a probability distribution. Thus, the scheme to obtain a distribution p_i involves maximization of the information-theory entropy

S_I = -\sum_{i=1}^{n} p_i \ln p_i, \qquad (6.4)

subject to Eqs. 6.1 and 6.3. The solution can be obtained by the method of Lagrange multipliers.
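As a small illustration (ours, not from the thesis), the information entropy of Eq. 6.4 can be evaluated for any discrete distribution; the function name and the treatment of zero-probability states below are our own choices. For a uniform distribution over n states it returns ln n, the largest possible value.

```python
import numpy as np

def shannon_entropy(p):
    """Information entropy S_I = -sum_i p_i ln(p_i) of Eq. 6.4.

    `p` is a normalized probability distribution; states with p_i = 0
    contribute nothing to the sum (0 ln 0 -> 0 by convention).
    """
    p = np.asarray(p, dtype=float)
    nonzero = p > 0
    return -np.sum(p[nonzero] * np.log(p[nonzero]))
```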

The entropy subject to the constraints can be written as

S_I = -\sum_{i=1}^{N} p_i \ln p_i \;-\; \lambda_0 \sum_{i=1}^{N} p_i \;+\; \sum_{k=1}^{m} \lambda_k \sum_{i=1}^{N} f_k(x_i)\, p_i. \qquad (6.5)

One can maximize this entropy by taking its variation and setting it equal to zero, i.e.,

\delta S_I = \sum_i \Big( -1 - \ln p_i - \lambda_0 + \sum_{k=1}^{m} \lambda_k f_k(x_i) \Big)\, \delta p_i = 0. \qquad (6.6)

This implies that the probability p_i is given by

p_i = \exp(-1 - \lambda_0)\, \exp\!\Big( \sum_{k=1}^{m} \lambda_k f_k(x_i) \Big). \qquad (6.7)

The normalization condition implies that

\exp(-1 - \lambda_0) = \frac{1}{\sum_{i=1}^{N} \exp\!\Big( \sum_{k=1}^{m} \lambda_k f_k(x_i) \Big)}. \qquad (6.8)

The denominator on the right-hand side (RHS) of the above equation is called the partition function and is denoted by Z(\lambda_1, \ldots, \lambda_m). The probability of the microstate i is thus

p_i = \frac{\exp\!\Big( \sum_{k=1}^{m} \lambda_k f_k(x_i) \Big)}{Z(\{\lambda_k\})}. \qquad (6.9)
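To make the structure of Eq. 6.9 concrete, the following sketch (ours, not from the thesis) evaluates the maxent probabilities for a given set of Lagrange multipliers; the array layout and names are illustrative assumptions. The shift by the maximum exponent cancels in the ratio, so the probabilities are unchanged while overflow is avoided.

```python
import numpy as np

def maxent_probs(lambdas, f_values):
    """Maxent probabilities of Eq. 6.9: p_i = exp(sum_k lambda_k f_k(x_i)) / Z.

    lambdas  : shape (m,)   -- Lagrange multipliers lambda_1, ..., lambda_m
    f_values : shape (m, N) -- f_k(x_i) tabulated over the N microstates
    """
    exponent = lambdas @ f_values                # sum_k lambda_k f_k(x_i), one entry per state
    weights = np.exp(exponent - exponent.max())  # shift exponent for numerical stability
    return weights / weights.sum()               # dividing by the sum plays the role of Z
```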

The Lagrange multipliers are obtained by noting that

\langle f_k \rangle = \sum_{i=1}^{N} f_k(x_i)\, p_i
= \frac{\sum_i f_k(x_i) \exp\!\big( \sum_l \lambda_l f_l(x_i) \big)}
       {\underbrace{\sum_j \exp\!\big( \sum_l \lambda_l f_l(x_j) \big)}_{Z(\lambda_1, \ldots, \lambda_m)}}. \qquad (6.10)

It can easily be seen that the above equation can also be written as

\langle f_k \rangle = \frac{\partial}{\partial \lambda_k} \ln Z(\lambda_1, \ldots, \lambda_m), \qquad (6.11)

where k runs from 1 to m. The solution of these m equations provides the Lagrange multipliers and hence the probabilities of the microstates. The variance of the constrained quantities can likewise be seen, without much difficulty, to be

\Delta f_k^2 = \langle f_k^2 \rangle - \langle f_k \rangle^2 = \frac{\partial^2}{\partial \lambda_k^2} \ln Z. \qquad (6.12)
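In practice Eqs. 6.11 and 6.12 can be solved numerically. The sketch below (ours, with illustrative names, reusing maxent_probs from the sketch after Eq. 6.9) finds the multipliers that reproduce prescribed averages by root-finding on the difference between the model averages and the targets, and then evaluates the variance directly from the resulting distribution.

```python
import numpy as np
from scipy.optimize import root

def solve_multipliers(f_values, f_targets):
    """Solve <f_k> = d(ln Z)/d(lambda_k)  (Eq. 6.11) for the Lagrange multipliers.

    f_values  : shape (m, N) -- constraint functions tabulated on the microstates
    f_targets : shape (m,)   -- the prescribed averages <f_k>
    """
    def residual(lambdas):
        p = maxent_probs(lambdas, f_values)
        return f_values @ p - f_targets     # model averages minus the target values

    sol = root(residual, np.zeros(len(f_targets)))
    return sol.x

def variance(lambdas, f_values, k):
    """Variance of f_k (Eq. 6.12), computed directly as <f_k^2> - <f_k>^2."""
    p = maxent_probs(lambdas, f_values)
    mean = f_values[k] @ p
    return (f_values[k] ** 2) @ p - mean ** 2
```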

This was a completely general derivation, without any reference to the nature of x and f_k(x), and it tells us how to obtain a “least biased” distribution for an underconstrained problem (or a problem with missing information). If we compare the probability distribution p_i and the partition function Z with those obtained in Chapter 2 for the canonical distribution, the mathematical similarities will be apparent. Specifically, let the energy levels of a system be E(\alpha_1, \alpha_2, \ldots), where the external parameters \alpha_i may include the volume, strain tensor, electric field, etc. Then, if we are given only the average energy \langle E \rangle, the maxent probabilities are given by a special case of Eq. 6.9, which is the Boltzmann distribution when the Lagrange multiplier conjugate to the energy equals -1/k_B T (in the sign convention of Eq. 6.9). Thus, the maximum entropy approach provides us with an information-theoretic interpretation of equilibrium statistical mechanics without any reference to the ergodic properties of the system. It can also be shown that at thermodynamic equilibrium the information entropy S_I is the same as the experimental entropy S_E [5].
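As a toy consistency check (ours, not from the thesis), one can feed a single energy constraint into the sketches above: the resulting maxent distribution is the canonical one, and the fitted multiplier can be read as -1/k_B T. The energy values and the target average below are arbitrary.

```python
import numpy as np

# Toy energy levels (arbitrary units) and a prescribed average energy <E>.
energies = np.array([0.0, 1.0, 2.0, 3.0])
f_values = energies[np.newaxis, :]          # a single constraint function: f_1(x_i) = E_i
target = np.array([1.2])

lam = solve_multipliers(f_values, target)   # multiplier conjugate to the energy (negative here)
p = maxent_probs(lam, f_values)
# p_i is proportional to exp(lam * E_i); identifying lam = -1/(k_B T) makes this
# the Boltzmann distribution.  The reconstructed average reproduces the constraint:
print(lam, p, energies @ p)                 # energies @ p is approximately 1.2
```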

Jaynes approaches the probabilities in a Bayesian manner and maintains that the obtained probability p_i, more than anything else, represents the “belief” of the experimenter that the system is in microstate i. But how can the belief of an experimentalist dictate the course of an experiment? The answer lies in recognizing that we are concerned with the prediction of the “reproducible” macroscopic behavior. Whereas the macroscopic behavior is experimentally reproducible under the applied constraints, the microstates are not. That the macroscopic behavior is reproducible under given constraints or experimental “knobs” implies that it is a characteristic of each of the vast number of microstates compatible with those constraints. It hence follows that the vast majority of the microscopic details are irrelevant for the prediction of the macroscopic quantities. When the application of the Jaynes procedure fails, the situation is informative, because it signals the presence of new constraints that had not been taken into account.

The take-away message of this section is that it is possible to interpret the entropy in an information-theoretic manner and recover the results of equilibrium statistical mechanics. If we obtain a “reproducible” experiment by controlling certain “knobs,” then the maximization of the information entropy subject to the constraints imposed by the “knobs” provides us with “least biased” estimates of the probabilities of the underlying microstates. In the equilibrium scenario the “knobs” are time independent. If we extend this logic by replacing the “time independent” constraints with “time dependent” constraints and the “microstates” with “microtrajectories,” we should be able to obtain a general formulation for systems that are far from equilibrium. That is the subject of the next section.
