
Interleaving of Program Graphs

From The MIT Press, Principles of Model Checking (pages 59-67)

Modelling Concurrent Systems

Definition 2.21. Interleaving of Program Graphs

Let PGi = (Loci, Acti, Effecti, ↪i, Loc0,i, g0,i), for i = 1, 2, be two program graphs over the variables Vari. Program graph PG1 ||| PG2 over Var1 ∪ Var2 is defined by

PG1 ||| PG2 = (Loc1 × Loc2, Act1 ⊎ Act2, Effect, ↪, Loc0,1 × Loc0,2, g0,1 ∧ g0,2)

where ↪ is defined by the rules:

      ℓ1 −g:α→1 ℓ1′
  ─────────────────────────
  ⟨ℓ1, ℓ2⟩ −g:α→ ⟨ℓ1′, ℓ2⟩

and

      ℓ2 −g:α→2 ℓ2′
  ─────────────────────────
  ⟨ℓ1, ℓ2⟩ −g:α→ ⟨ℓ1, ℓ2′⟩

and Effect(α, η) = Effecti(α, η) if α ∈ Acti.
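The interleaving construction of Definition 2.21 can be sketched in code. The following is a minimal illustration, not the book's implementation: a program graph is modeled as a dict with keys `locs`, `init`, and `edges` (these names are assumptions), where each edge is a tuple (location, guard, effect, successor location) with guard and effect as Python functions on a variable valuation given as a dict. The usage example is Example 2.22: PG1 performs x := x + 1 and PG2 performs x := 2·x, starting from x = 3.

```python
def interleave(pg1, pg2):
    """Edges of PG1 ||| PG2: each step moves exactly one component."""
    edges = []
    for (l1, g, eff, l1p) in pg1["edges"]:
        for l2 in pg2["locs"]:
            edges.append(((l1, l2), g, eff, (l1p, l2)))
    for (l2, g, eff, l2p) in pg2["edges"]:
        for l1 in pg1["locs"]:
            edges.append(((l1, l2), g, eff, (l1, l2p)))
    return {
        "locs": [(l1, l2) for l1 in pg1["locs"] for l2 in pg2["locs"]],
        "init": [(l1, l2) for l1 in pg1["init"] for l2 in pg2["init"]],
        "edges": edges,
    }

def reachable(pg, eta0):
    """Unfold PG into the reachable states (location, valuation) of TS(PG)."""
    frozen = lambda eta: tuple(sorted(eta.items()))  # make valuations hashable
    seen, todo = set(), [(l, frozen(eta0)) for l in pg["init"]]
    while todo:
        state = todo.pop()
        if state in seen:
            continue
        seen.add(state)
        loc, eta_t = state
        eta = dict(eta_t)
        for (l, g, eff, lp) in pg["edges"]:
            if l == loc and g(eta):
                todo.append((lp, frozen(eff(dict(eta)))))
    return seen

# Example 2.22: PG1 performs x := x + 1, PG2 performs x := 2 * x.
true = lambda eta: True
pg1 = {"locs": [1, 2], "init": [1],
       "edges": [(1, true, lambda e: {**e, "x": e["x"] + 1}, 2)]}
pg2 = {"locs": [1, 2], "init": [1],
       "edges": [(1, true, lambda e: {**e, "x": 2 * e["x"]}, 2)]}

states = reachable(interleave(pg1, pg2), {"x": 3})
finals = {dict(eta)["x"] for (loc, eta) in states if loc == (2, 2)}
print(finals)  # the two possible final values of x
```

Depending on which action is executed first, x ends up as 2·3 + 1 = 7 or 2·(3 + 1) = 8, matching the transition system depicted in Figure 2.5.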

The program graphs PG1 and PG2 have the variables Var1 ∩ Var2 in common. These are the shared (sometimes also called "global") variables. The variables in Var1 \ Var2 are the local variables of PG1, and similarly, those in Var2 \ Var1 are the local variables of PG2.

Example 2.22. Interleaving of Program Graphs

Consider the program graphs PG1 and PG2 that correspond to the assignments x := x + 1 and x := 2·x, respectively. The program graph PG1 ||| PG2 is depicted in the bottom left of Figure 2.5. Its underlying transition system TS(PG1 ||| PG2) is depicted in the bottom right of that figure, where it is assumed that initially x equals 3. Note that the nondeterminism in the initial state of the transition system does not represent concurrency but just the possible resolution of the contention between the statements x := 2·x and x := x + 1 that both modify the shared variable x.

The distinction between local and shared variables has also an impact on the actions of the composed program graph PG1 ||| PG2. Actions that access (i.e., inspect or modify) shared

Parallelism and Communication 41

[Figure: the program graphs PG1 (action x := 2·x) and PG2 (action x := x + 1), their interleaving PG1 ||| PG2, and the transition system TS(PG1 ||| PG2) with states x = 3, x = 4, x = 6, x = 7, and x = 8.]

Figure 2.5: Interleaving of two example program graphs.

variables may be considered as "critical"; otherwise, they are viewed as noncritical. (For the sake of simplicity, we are a bit conservative here and consider the inspection of shared variables as critical.) The difference between the critical and noncritical actions becomes clear when interpreting the (possible) nondeterminism in the transition system TS(PG1 ||| PG2). Nondeterminism in a state of this transition system may stand either for

(i) an "internal" nondeterministic choice within program graph PG1 or PG2,

(ii) the interleaving of noncritical actions of PG1 and PG2, or

(iii) the resolution of a contention between critical actions of PG1 and PG2 (concurrency).

In particular, a noncritical action of PG1 can be executed in parallel with critical or noncritical actions of PG2, as it will only affect its local variables. By symmetry, the same applies to the noncritical actions of PG2. Critical actions of PG1 and PG2, however, cannot be executed simultaneously, as the value of the shared variables depends on the order of executing these actions (see Example 2.20). Instead, any global state where critical actions of PG1 and PG2 are enabled describes a concurrency situation that has to be resolved by an appropriate scheduling strategy. (Simultaneous reading of shared variables could be allowed, however.)

Remark 2.23. On Atomicity

For modeling a parallel system by means of the interleaving operator for program graphs, it is decisive that the actions α ∈ Act are indivisible. The transition system representation only expresses the effect of the completely executed action α. If there is, e.g., an action α with its effect being described by the statement sequence

x := x + 1; y := 2x + 1; if x < 12 then z := (x − z)² · y ,

then an implementation is assumed which does not interlock the basic substatements x := x + 1, y := 2x + 1, the comparison "x < 12", and, possibly, the assignment z := (x − z)² · y with other concurrent processes. In this case,

Effect(α, η)(x) = η(x) + 1
Effect(α, η)(y) = 2(η(x) + 1) + 1
Effect(α, η)(z) = (η(x) + 1 − η(z))² · (2(η(x) + 1) + 1)   if η(x) + 1 < 12
                = η(z)                                      otherwise

Hence, statement sequences of a process can be declared atomic by program graphs when put as a single label to an edge. In program texts, such multiple assignments are surrounded by brackets ⟨ . . . ⟩.
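The composite Effect described above can be sketched directly. The following function (an illustration, not the book's notation; the name `effect_alpha` is an assumption) applies the atomic sequence x := x + 1; y := 2x + 1; if x < 12 then z := (x − z)² · y to a valuation η given as a dict:

```python
def effect_alpha(eta):
    """Composite effect of the atomic statement sequence on valuation eta."""
    new = dict(eta)
    new["x"] = eta["x"] + 1        # x := x + 1
    new["y"] = 2 * new["x"] + 1    # y := 2x + 1 (refers to the updated x)
    if new["x"] < 12:              # the comparison also sees the updated x
        new["z"] = (new["x"] - eta["z"]) ** 2 * new["y"]
    return new

print(effect_alpha({"x": 3, "y": 0, "z": 1}))
```

For η(x) = 3 and η(z) = 1 this yields x = 4, y = 9, and z = (4 − 1)² · 9 = 81, agreeing term by term with the closed-form Effect(α, η) above; for η(x) = 11 the guard fails after the increment and z is left unchanged.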


[Figure: program graphs PG1 and PG2, each with locations noncriti, waiti, and criti, an edge from waiti to criti guarded by y > 0 with effect y := y − 1, and an edge from criti back to noncriti with effect y := y + 1.]

Figure 2.6: Individual program graphs for semaphore-based mutual exclusion.

Example 2.24. Mutual Exclusion with Semaphores

Consider two simplified processes Pi, i = 1, 2, of the form:

Pi  loop forever
      ...                  (* noncritical actions *)
      request
      critical section
      release
      ...                  (* noncritical actions *)
    end loop

Processes P1 and P2 are represented by the program graphs PG1 and PG2, respectively, that share the binary semaphore y. y=0 indicates that the semaphore—the lock to get access to the critical section—is currently possessed by one of the processes. When y=1, the semaphore is free. The program graphs PG1 and PG2 are depicted in Figure 2.6.

For the sake of simplicity, local variables and shared variables different from y are not considered. Also, the activities inside and outside the critical sections are omitted. The locations of PGi are noncriti (representing the noncritical actions), waiti (modeling the situation in which Pi waits to enter its critical section), and criti (modeling the critical section). The program graph PG1 ||| PG2 consists of nine locations, including the (undesired) location ⟨crit1, crit2⟩ that models the situation where both P1 and P2 are in their critical section; see Figure 2.7.

When unfolding PG1|||PG2 into the transition system TSSem = TS(PG1|||PG2) (see Figure 2.8 on page 45), it can be easily checked that from the 18 global states in TSSem

[Figure: the composed program graph PG1 ||| PG2 with the nine locations ⟨noncrit1, noncrit2⟩ through ⟨crit1, crit2⟩, and edges labeled y > 0 : y := y − 1 (entering a critical section) and y := y + 1 (leaving it).]

Figure 2.7: PG1 ||| PG2 for semaphore-based mutual exclusion.

only the following eight states are reachable:

⟨noncrit1, noncrit2, y = 1⟩    ⟨noncrit1, wait2, y = 1⟩
⟨wait1, noncrit2, y = 1⟩       ⟨wait1, wait2, y = 1⟩
⟨noncrit1, crit2, y = 0⟩       ⟨crit1, noncrit2, y = 0⟩
⟨wait1, crit2, y = 0⟩          ⟨crit1, wait2, y = 0⟩

States ⟨noncrit1, noncrit2, y = 1⟩ and ⟨noncrit1, crit2, y = 0⟩ are examples of situations where both P1 and P2 are able to concurrently execute actions. (Note that in Figure 2.8, n stands for noncrit, w for wait, and c for crit.) The nondeterminism in these states thus stands for interleaving of noncritical actions. State ⟨crit1, wait2, y = 0⟩, e.g., represents a situation where only PG1 is active, whereas PG2 is waiting.

From the fact that the global state ⟨crit1, crit2, y = . . .⟩ is unreachable in TSSem, it follows that processes P1 and P2 cannot be simultaneously in their critical section. The parallel system thus satisfies the so-called mutual exclusion property.
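This reachability argument can be checked mechanically. The following sketch (not from the book) assumes the edges of Figure 2.6: noncriti → waiti unconditionally, waiti → criti guarded by y > 0 with y := y − 1, and criti → noncriti with y := y + 1. A state is a triple (loc1, loc2, y) with y initially 1.

```python
def steps(loc, y):
    """Successor (loc', y') pairs of one process's program graph."""
    if loc == "noncrit":
        return [("wait", y)]
    if loc == "wait":
        return [("crit", y - 1)] if y > 0 else []
    return [("noncrit", y + 1)]          # loc == "crit": release

seen, todo = set(), [("noncrit", "noncrit", 1)]
while todo:
    s = todo.pop()
    if s in seen:
        continue
    seen.add(s)
    l1, l2, y = s
    # interleaving: exactly one component moves per step
    todo += [(n1, l2, ny) for (n1, ny) in steps(l1, y)]
    todo += [(l1, n2, ny) for (n2, ny) in steps(l2, y)]

print(len(seen))                                        # reachable states
print(any(l1 == l2 == "crit" for (l1, l2, y) in seen))  # mutual exclusion violated?
```

The exploration finds exactly the eight states listed above, and ⟨crit1, crit2, . . .⟩ never appears among them.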

In the previous example, the nondeterministic choice in state ⟨wait1, wait2, y = 1⟩ represents a contention between allowing either P1 or P2 to enter its critical section. The resolution of this scheduling problem—which process is allowed to enter its critical section next?—is left open, however. In fact, the parallel program of the previous example is

“abstract” and does not provide any details on how to resolve this contention. At later design stages, for example, when implementing the semaphore y by means of a queue of waiting processes (or the like), a decision has to be made on how to schedule the processes that are enqueued for acquiring the semaphore. At that stage, a last-in first-out (LIFO),


[Figure: the reachable fragment of TSSem, with states ⟨n1, n2, y=1⟩, ⟨w1, n2, y=1⟩, ⟨n1, w2, y=1⟩, ⟨w1, w2, y=1⟩, ⟨c1, n2, y=0⟩, ⟨n1, c2, y=0⟩, ⟨c1, w2, y=0⟩, and ⟨w1, c2, y=0⟩.]

Figure 2.8: Mutual exclusion with semaphore (transition system representation).

first-in first-out (FIFO), or some other scheduling discipline can be chosen. Alternatively, another (more concrete) mutual exclusion algorithm could be selected that resolves this scheduling issue explicitly. A prominent example of such an algorithm has been provided in 1981 by Peterson [332].

Example 2.25. Peterson’s Mutual Exclusion Algorithm

Consider the processes P1 and P2 with the shared variables b1, b2, and x. b1 and b2 are Boolean variables, while x can take either the value 1 or 2, i.e., dom(x) = {1, 2}. The scheduling strategy is realized using x as follows. If both processes want to enter the critical section (i.e., they are in location waiti), the value of variable x decides which of the two processes may enter its critical section: if x = i, then Pi may enter its critical section (for i = 1, 2). On entering location wait1, process P1 performs x := 2, thus giving privilege to process P2 to enter the critical section. The value of x thus indicates which process has its turn to enter the critical section. Symmetrically, P2 sets x to 1 when starting to wait. The variables bi provide information about the current location of Pi. More precisely,

bi = waiti ∨ criti .

bi is set when Pi starts to wait. In pseudocode, P1 performs as follows (the code for process P2 is similar):

[Figure: program graphs PG1 and PG2 with locations noncriti, waiti, and criti. In PG1, the edge from noncrit1 to wait1 is labeled b1 := true; x := 2, the edge from wait1 to crit1 is guarded by x = 1 ∨ ¬b2, and the edge from crit1 to noncrit1 is labeled b1 := false. PG2 is symmetric, with x := 1 and guard x = 2 ∨ ¬b1.]

Figure 2.9: Program graphs for Peterson's mutual exclusion algorithm.

P1  loop forever
      ...                          (* noncritical actions *)
      ⟨b1 := true; x := 2⟩;        (* request *)
      wait until (x = 1 ∨ ¬b2)
      do critical section od
      b1 := false                  (* release *)
      ...                          (* noncritical actions *)
    end loop

Process Pi is represented by program graph PGi over Var = {x, b1, b2} with locations noncriti, waiti, and criti; see Figure 2.9 above. The reachable part of the underlying transition system TSPet = TS(PG1 ||| PG2) has the form indicated in Figure 2.10 (page 47), where for convenience ni, wi, ci are used for noncriti, waiti, and criti, respectively. The last digit of the depicted states indicates the evaluation of variable x. For convenience, the values for bi are not indicated; their evaluation can directly be deduced from the location of PGi. Further, b1 = b2 = false is assumed as the initial condition.

Each state in TSPet has the form ⟨loc1, loc2, x, b1, b2⟩. As PGi has three possible locations and bi and x each can take two different values, the total number of states of TSPet is 72. Only ten of these states are reachable. Since there is no reachable state with P1 and P2 being in their critical section, it can be concluded that Peterson's algorithm satisfies the mutual exclusion property.
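The count of ten reachable states can be verified with a small exploration. The sketch below (an illustration, not the book's code) follows the program graphs of Figure 2.9 with the atomic request ⟨bi := true; x := j⟩. Since bi is true exactly when Pi is in waiti or criti, a state reduces to a triple (loc1, loc2, x); both initial values of x are taken, as b1 = b2 = false is the only assumed initial condition.

```python
def b(loc):
    """bi holds iff Pi is in location wait_i or crit_i."""
    return loc in ("wait", "crit")

def steps1(l1, l2, x):
    if l1 == "noncrit":
        return [("wait", 2)]                  # <b1 := true; x := 2>
    if l1 == "wait" and (x == 1 or not b(l2)):
        return [("crit", x)]                  # guard x = 1 or not b2
    if l1 == "crit":
        return [("noncrit", x)]               # b1 := false
    return []

def steps2(l2, l1, x):
    if l2 == "noncrit":
        return [("wait", 1)]                  # <b2 := true; x := 1>
    if l2 == "wait" and (x == 2 or not b(l1)):
        return [("crit", x)]                  # guard x = 2 or not b1
    if l2 == "crit":
        return [("noncrit", x)]               # b2 := false
    return []

seen = set()
todo = [("noncrit", "noncrit", 1), ("noncrit", "noncrit", 2)]
while todo:
    s = todo.pop()
    if s in seen:
        continue
    seen.add(s)
    l1, l2, x = s
    todo += [(n1, l2, nx) for (n1, nx) in steps1(l1, l2, x)]
    todo += [(l1, n2, nx) for (n2, nx) in steps2(l2, l1, x)]

print(len(seen))                                        # reachable states
print(any(l1 == l2 == "crit" for (l1, l2, x) in seen))  # mutual exclusion violated?
```

Exactly the ten states of Figure 2.10 are found, and no state has both processes in their critical section.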

In the above program, the multiple assignments b1 := true; x := 2 and b2 := true; x := 1 are considered as indivisible (i.e., atomic) actions. This is indicated by the brackets ⟨ and ⟩


[Figure: the ten reachable states of TSPet: ⟨n1, n2, x=1⟩, ⟨n1, n2, x=2⟩, ⟨w1, n2, x=2⟩, ⟨n1, w2, x=1⟩, ⟨w1, w2, x=1⟩, ⟨w1, w2, x=2⟩, ⟨c1, w2, x=1⟩, ⟨w1, c2, x=2⟩, ⟨c1, n2, x=2⟩, and ⟨n1, c2, x=1⟩.]

Figure 2.10: Transition system of Peterson's mutual exclusion algorithm.

in the program text, and is also indicated in the program graphs PG1 and PG2. We like to emphasize that this is not essential, and has only been done to simplify the transition system TSPet. Mutual exclusion is also ensured when both processes perform the assignments bi := true and x := . . . in this order, but in a nonatomic way. Note that, for instance, the order "first x := . . ., then bi := true" does not guarantee mutual exclusion.

This can be seen as follows. Assume that the location in between the assignments x := . . . and bi := true in program graph PGi is called reqi. The state sequence

⟨noncrit1, noncrit2, x = 1, b1 = false, b2 = false⟩
⟨noncrit1, req2,     x = 1, b1 = false, b2 = false⟩
⟨req1,     req2,     x = 2, b1 = false, b2 = false⟩
⟨wait1,    req2,     x = 2, b1 = true,  b2 = false⟩
⟨crit1,    req2,     x = 2, b1 = true,  b2 = false⟩
⟨crit1,    wait2,    x = 2, b1 = true,  b2 = true⟩
⟨crit1,    crit2,    x = 2, b1 = true,  b2 = true⟩

is an initial execution fragment where P1 enters its critical section first (as b2 = false) after which P2 enters its critical section (as x = 2). As a result, both processes are simultaneously in their critical section and mutual exclusion is violated.
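The violating scenario can also be reproduced mechanically. The following sketch (an illustration under the stated assumptions, not the book's code) models the nonatomic request in the wrong order, first x := j and then bi := true, via the intermediate location reqi, and checks whether ⟨crit1, crit2⟩ becomes reachable:

```python
def b(loc):
    """bi holds only once Pi has reached wait_i or crit_i."""
    return loc in ("wait", "crit")

def moves(loc, other, x, me, him):
    # me: the x-value that lets this process in; him: the other process's index
    if loc == "noncrit":
        return [("req", him)]                 # first step: x := (other's index)
    if loc == "req":
        return [("wait", x)]                  # second step: b_i := true
    if loc == "wait" and (x == me or not b(other)):
        return [("crit", x)]
    if loc == "crit":
        return [("noncrit", x)]
    return []

seen, todo = set(), [("noncrit", "noncrit", 1), ("noncrit", "noncrit", 2)]
while todo:
    s = todo.pop()
    if s in seen:
        continue
    seen.add(s)
    l1, l2, x = s
    todo += [(n1, l2, nx) for (n1, nx) in moves(l1, l2, x, 1, 2)]
    todo += [(l1, n2, nx) for (n2, nx) in moves(l2, l1, x, 2, 1)]

violated = any(l1 == l2 == "crit" for (l1, l2, x) in seen)
print(violated)
```

The search finds ⟨crit1, crit2⟩ reachable, confirming that this ordering of the two request substatements breaks mutual exclusion.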

2.2.3 Handshaking

So far, two mechanisms for parallel processes have been considered: interleaving and shared-variable programs. In interleaving, processes evolve completely autonomously

interleaving for α ∉ H:

      s1 −α→1 s1′                          s2 −α→2 s2′
  ────────────────────────            ────────────────────────
  ⟨s1, s2⟩ −α→ ⟨s1′, s2⟩              ⟨s1, s2⟩ −α→ ⟨s1, s2′⟩

handshaking for α ∈ H:

  s1 −α→1 s1′    s2 −α→2 s2′
  ──────────────────────────
   ⟨s1, s2⟩ −α→ ⟨s1′, s2′⟩

Figure 2.11: Rules for handshaking.

whereas according to the latter type processes “communicate” via shared variables. In this subsection, we consider a mechanism by which concurrent processes interact via hand- shaking. The term “handshaking” means that concurrent processes that want to interact have to do this in a synchronous fashion. That is to say, processes can interact only if they are both participating in this interaction at the same time—they “shake hands”.

Information that is exchanged during handshaking can be of various nature, ranging from the value of a simple integer, to complex data structures such as arrays or records. In the sequel, we do not dwell upon the content of the exchanged messages. Instead, an abstract view is adopted and only communication (also called synchronization) actions are considered that represent the occurrence of a handshake and not the content.

To do so, a set H of handshake actions is distinguished with τ ∉ H. Only if both participating processes are ready to execute the same handshake action can message passing take place. All actions outside H (i.e., actions in Act \ H) are independent and therefore can be executed autonomously in an interleaved fashion.
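The rules of Figure 2.11 can be sketched as a product construction on transition systems. In the following illustration (the representation and the name `handshake_product` are assumptions, not the book's code), a transition system is given by its states and a set of triples (s, α, s′); actions in H require both components to move together, all others interleave:

```python
def handshake_product(trans1, trans2, states1, states2, H):
    """Transitions of TS1 ||_H TS2 per the interleaving/handshaking rules."""
    result = set()
    for (s1, a, t1) in trans1:
        if a not in H:                       # interleave a step of TS1 alone
            for s2 in states2:
                result.add(((s1, s2), a, (t1, s2)))
    for (s2, a, t2) in trans2:
        if a not in H:                       # interleave a step of TS2 alone
            for s1 in states1:
                result.add(((s1, s2), a, (s1, t2)))
    for (s1, a, t1) in trans1:               # handshake: both move jointly
        for (s2, a2, t2) in trans2:
            if a == a2 and a in H:
                result.add(((s1, s2), a, (t1, t2)))
    return result

# Tiny example: "work" interleaves freely; "sync" must be taken jointly.
t1 = {("p0", "work", "p1"), ("p1", "sync", "p2")}
t2 = {("q0", "sync", "q1")}
prod = handshake_product(t1, t2, {"p0", "p1", "p2"}, {"q0", "q1"}, H={"sync"})
print((("p1", "q0"), "sync", ("p2", "q1")) in prod)
```

In the example, neither component can take "sync" on its own; the product contains the two interleaved "work" steps plus the single joint "sync" step.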
