

4.5 The Octagonal Mesh Network

4.5.3 Performance Assessment

Since, for uniformly random and independent traffic, $\Delta X$ and $\Delta Y$ are independent of each other, this can be written equivalently as follows:

$$D_{octa} = \sum_{i=1}^{k-1} i\,\bigl(P(\Delta X = i)\,P(\Delta Y \le i) + P(\Delta X \le i)\,P(\Delta Y = i) - P(\Delta X = i)\,P(\Delta Y = i)\bigr)$$

In order to evaluate $D_{octa}$, we need to evaluate the quantities $P(\Delta X = i) = P(\Delta Y = i)$ and $P(\Delta X \le i) = P(\Delta Y \le i)$.

Following the argument presented in [3], we have:

$$P(\Delta X = 0) = P(\Delta Y = 0) = \frac{1}{k}$$

$$P(\Delta X = i) = P(\Delta Y = i) = \frac{2(k-i)}{k^2}, \qquad 0 < i \le k-1$$

$$P(\Delta X \le i) = P(\Delta Y \le i) = \frac{1}{k} + \sum_{j=1}^{i} \frac{2(k-j)}{k^2} = \frac{k + (2k-1)i - i^2}{k^2}, \qquad 0 < i \le k-1$$

From these expressions, we obtain the following, for $0 < i \le k - 1$:

$$P(\max(\Delta X, \Delta Y) = i) = \frac{4\,i\,(k-i)(2k-i)}{k^4}$$
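This closed form can be checked by substituting the distributions above into the general term of $D_{octa}$ and using the fact that $\Delta X$ and $\Delta Y$ are identically distributed:

$$
\begin{aligned}
P(\max(\Delta X, \Delta Y) = i)
  &= 2\,P(\Delta X = i)\,P(\Delta X \le i) - P(\Delta X = i)^2 \\
  &= \frac{2(k-i)}{k^2}\left[\frac{2\bigl(k + (2k-1)i - i^2\bigr)}{k^2} - \frac{2(k-i)}{k^2}\right] \\
  &= \frac{4(k-i)\,(2ki - i^2)}{k^4}
   = \frac{4\,i\,(k-i)(2k-i)}{k^4}.
\end{aligned}
$$

Multiplying by $i$ and summing over $i$ then yields the expression for $D_{octa}$ below, since $i \cdot 4i(k-i)(2k-i) = 4\,(2k^2 i^2 - 3k i^3 + i^4)$.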

We are now ready to evaluate the average source-to-destination distance, for a given mesh size $k$, after simplifying and rearranging terms as follows:

$$D_{octa} = \sum_{i=1}^{k-1} \frac{4}{k^4}\left(2k^2 i^2 - 3k i^3 + i^4\right)$$

For a 32 x 32 octagonal mesh, the average message distance is ≈ 14.93. This result is very close to that obtained from Monte Carlo simulations. Notice that the asymptotic limit is very close to that obtained from a 2D rectilinear torus of the same size.
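As a quick numerical check, the closed-form sum above can be evaluated directly and compared against a small Monte Carlo estimate of E[max(ΔX, ΔY)] for uniformly random, independent traffic; the sketch below assumes a k x k grid of nodes with source and destination coordinates drawn independently and uniformly (the function names are illustrative). It also prints the large-k approximation of the sum, 7k/15, obtained by replacing the power sums with their leading terms; the value for k = 32 should come out near the ≈ 14.93 quoted above.

```python
import random


def d_octa_analytic(k: int) -> float:
    """Closed-form sum: D_octa = sum_{i=1}^{k-1} (4/k^4) * (2*k^2*i^2 - 3*k*i^3 + i^4)."""
    return sum(4.0 / k**4 * (2 * k**2 * i**2 - 3 * k * i**3 + i**4)
               for i in range(1, k))


def d_octa_monte_carlo(k: int, trials: int = 200_000, seed: int = 1) -> float:
    """Estimate E[max(|dx|, |dy|)] for independent, uniformly random source and
    destination coordinates on a k x k grid of nodes."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        dx = abs(rng.randrange(k) - rng.randrange(k))
        dy = abs(rng.randrange(k) - rng.randrange(k))
        total += max(dx, dy)
    return total / trials


if __name__ == "__main__":
    k = 32
    print(f"analytic sum, k={k}:  {d_octa_analytic(k):.3f}")
    print(f"Monte Carlo estimate: {d_octa_monte_carlo(k):.3f}")
    print(f"large-k limit 7k/15:  {7 * k / 15:.3f}")
```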

The Simulation Experiments and Results

In order to obtain insights on the effects of random faults upon the communication performance of the proposed octagonal mesh, three identical sets of simulation experiments were performed on three 16 x 16 octagonal meshes, each with a different degree of faults. The first network has no faults, i.e., all of its 256 nodes and 930 channels are operational. The second network has a total of 235 nodes and 891 operational channels, whereas the third has a total of 199 nodes and 836 operational channels (see Figures 4.23 and 4.24). We shall henceforth refer to these three networks as networks A, B, and C, respectively. The faults in networks B and C were generated independently and uniformly under channel-failure probabilities of 5% and 12%, respectively. Within each set of experiments, the network traffic was uniformly random and independent, with the applied load varied to cover the entire range. Except for the use of a different network topology, all the assumptions described in section 3.4 remain valid for the current simulations. Since our main focus here is the investigation of the inherent network performance figures, only experiments with artificially generated traffic were conducted.

Figures 4.25 and 4.27 plot the average normalized throughput versus normalized applied load for the three different networks. Consider, for example, network A (the non-faulty mesh): the normalized network throughput increases linearly with increasing applied load until it reaches ≈ 72%, after which the throughput remains stable in spite of increasingly heavy applied load. Similar behaviors are observed in networks B and C, with corresponding saturation throughput values of ≈ 0.58 and 0.38, respectively. The fact that the saturation throughput for network A occurs around 70% can be understood by looking at the very dispersive nature of the routes generated by the routing relations, R*. In particular, our previous bandwidth argument counted only messages that must cross the bisection from their sources on one side to their destinations on the other side. However, many more routes exist that have both their sources and destinations on the same side, but that nonetheless cross the bisection more than once. These routes were not taken into account and are responsible for the observed saturation throughput of less than 100%. In other words, a reduced maximum throughput is the price paid for the higher network reliability obtained by adopting a more dispersive routing strategy.

Similar throughput behaviors are observed in the reclaimed networks B and C, but with correspondingly lower saturation values. For network B, with 235 nodes, or a yield of ≈ 92%, the normalized saturation throughput is reduced from 0.70 to 0.58, or ≈ 80% that of the non-faulty network A. Similarly, for network C, with 199 nodes, or a yield of ≈ 78%, the saturation throughput is reduced to 0.38, or ≈ 53% that of network A. While these figures are specific to the fault configurations of the two simulated networks, they are suggestive of the extent of performance degradation induced by random faults in general. In particular, if we assume that both the node and channel resources are degraded to the same extent, we may expect that, on average, the relation between applied load and available bandwidth will remain unchanged. However, the average length of the surviving routes increases due to increased reliance on detouring and, hence, these routes consume additional bandwidth. As a result, we conjecture that the effective saturation throughput will degrade at a rate faster than that of the node and channel resources. The empirical figures obtained for networks B and C appear to be consistent with this conjecture.
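The conjecture can be made concrete by tabulating the figures quoted above; the sketch below uses 0.70 as network A's saturation throughput, as quoted earlier, and compares each network's resource yield with its retained saturation throughput.

```python
# Figures quoted in the text for the three 16 x 16 octagonal meshes
# (network A is the fault-free reference; 0.70 is its quoted saturation throughput).
networks = {
    "A": {"nodes": 256, "channels": 930, "saturation": 0.70},
    "B": {"nodes": 235, "channels": 891, "saturation": 0.58},
    "C": {"nodes": 199, "channels": 836, "saturation": 0.38},
}

base = networks["A"]
for name, net in networks.items():
    node_yield = net["nodes"] / base["nodes"]
    channel_yield = net["channels"] / base["channels"]
    retention = net["saturation"] / base["saturation"]
    print(f"network {name}: node yield {node_yield:.2f}, "
          f"channel yield {channel_yield:.2f}, "
          f"saturation retention {retention:.2f}")
# For B and C the saturation retention (0.83 and 0.54) falls below both the node
# and channel yields (0.92/0.96 and 0.78/0.90), as the conjecture predicts.
```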

The corresponding message latencies for these networks are shown in Figures 4.26 and 4.28. These curves also present no surprises. Again, we observe the familiar characteristic curve for latency behavior. Each curve starts at latency values very close to the theoretical lower bound under low to moderately heavy applied load, and increases rapidly as the load approaches the respective throughput limit. For example, in Figure 4.26, the transition-point throughputs for networks A, B, and C are ≈ 0.6, 0.45, and 0.3, respectively. One way to interpret these results is as follows: for computations that are primarily communication bound, network C, as a result of the ensuing faults, is reduced to ≈ 0.78 × (0.3/0.6) ≈ 0.39 of the raw computing speed of the original non-faulty network A. This superlinear degradation in the overall computing performance will in general be observed for traffic patterns generated under random placement strategies that are very effective in maintaining approximate load balance. Because of its simplicity, such a random placement strategy is particularly attractive in faulty networks, since these networks are substantially more irregular than the original non-faulty networks.

Figure 4.23: Reclaimed Convex Network B - 235 Nodes and 891 Channels

Figure 4.24: Reclaimed Convex Network C - 199 Nodes and 836 Channels

Figure 4.25: Normalized Throughput for Single-packet Message (16 x 16 octagonal mesh; normalized throughput versus normalized applied load for networks A, B, and C)

Figure 4.26: Average Latency for Single-packet Message (16 x 16 octagonal mesh; average latency versus normalized throughput for networks A, B, and C)

Figure 4.27: Normalized Throughput for Variable-length Message (16 x 16 octagonal mesh; normalized throughput versus normalized applied load for networks A, B, and C)

Figure 4.28: Average Latency for Variable-length Message (16 x 16 octagonal mesh; average message latency versus normalized throughput)