

In the document Fault-tolerant Distributed Data Centers (Pages 112-117)


5.2 Optimization Model

In this section, we formulate the green energy cost-aware, capacity provisioning problem (termed GACED) for fault-tolerant GDCs as a constrained optimization problem. Before that, we discuss the architecture of the data center powered by different renewable energy sources considered in our work, followed by the assumptions and input parameters used.

We consider a GDC powered by multiple renewable energy sources, as illustrated in Fig. 5.1. There are |S| data centers, with data center s housing m_s servers (we use the index s for data centers). To reduce both carbon footprint and energy cost, each data center is integrated with multiple green energy sources: an on-site renewable generator (wind and/or solar), an off-site renewable generator, a power purchase agreement (PPA), and utility power transmitted through the grid. An energy storage device (ESD) stores surplus green energy. Front-end proxy servers collocated with each client region map the requests to multiple data centers, as shown in Fig. 5.1. L_u^{ah} denotes the demand generated during hour h from client region u for application type a, and λ_su^{ah} denotes the number of requests mapped from client region u to data center s at hour h for application type a. In this model, m_s and λ_su^{ah} are the decision variables, while parameters such as L_u^{ah}, the brown electricity price, and the green energy availability are inputs.

5.2.1 System Architecture

The following assumptions are used in the model.

• Each data center consolidates the workload to keep the power consumption proportional to the workload served.

• Service time (including queuing delay) is the same for all the data centers, and the latency differences are due to the propagation delay.

• The requests are placed in a single queue to be served by any server.


Figure 5.1: Architecture of the GDC powered by multiple green energy sources

• Only one data center can completely or partially fail at any point in time [2]. Failure of more than one data center at the same time is avoided by the choice of locations.

5.2.2 System Model

Demand: Let S be the set of data centers, with data center s housing m_s servers. Let L_u^{ah} denote the demand from client region u during hour h for application type a. Let λ_su^{afh} denote the number of requests mapped from client region u to data center s (s ∈ S) at hour h for application type a. Here f = 0 indicates the case of no data center failure, and f ∈ {1, 2, . . . , |S|} indicates the failure of the f-th data center.

Heterogeneous workload: Since we assume that a data center can serve different types of workload, we explicitly consider this heterogeneity while calculating the server utilization. Let the processing rate of a server be B bps and the mean job size be J_a for application type a. The effective service rate is then B/J_a. We define the average utilization as

\gamma_s^{fh} = \frac{\sum_{u,a} \lambda_{su}^{afh} J_a}{m_s B} \qquad (5.1)

Failure model: Let p be the fraction of servers failing at any given site. The processing rate of a failed data center reduces to (1 − p) m_s B. The data center utilization after the failure can be expressed as

\gamma_s^{fh} =
\begin{cases}
\text{Eq. (5.1)} & f \neq s \\
\frac{\sum_{u,a} \lambda_{su}^{afh} J_a}{(1-p)\, m_s B} & f = s,\ p < 1 \\
0 & f = s,\ p = 1
\end{cases}
\qquad (5.2)
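As a concrete illustration, the utilization model of Eqs. (5.1) and (5.2) can be sketched in a few lines of Python; the function and variable names, and the sample numbers, are illustrative rather than taken from the chapter:

```python
def avg_utilization(requests, job_size, m_s, B, failed=False, p=0.0):
    """Average utilization gamma_s^{fh} per Eqs. (5.1)-(5.2).

    requests: dict (u, a) -> lambda_su^{afh}, requests routed to this site
    job_size: dict a -> mean job size J_a (bits)
    m_s, B:   server count and per-server processing rate (bps)
    failed:   True when this is the failed site (f = s)
    p:        fraction of servers lost at the failed site
    """
    work = sum(lam * job_size[a] for (u, a), lam in requests.items())
    if not failed:                     # f != s: Eq. (5.1)
        return work / (m_s * B)
    if p < 1.0:                        # f = s, partial failure
        return work / ((1.0 - p) * m_s * B)
    return 0.0                         # f = s, complete failure

requests = {(1, "web"): 4000.0, (2, "batch"): 500.0}
job_size = {"web": 2.0e6, "batch": 4.0e7}        # bits per job
print(avg_utilization(requests, job_size, m_s=100, B=1.0e9))   # 0.28
print(avg_utilization(requests, job_size, m_s=100, B=1.0e9,
                      failed=True, p=0.5))                     # 0.56
```

Note how losing half the servers (p = 0.5) doubles the utilization for the same load, which is exactly what the (1 − p) factor in Eq. (5.2) captures.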

Delay: Let D_su be the propagation delay between data center s and client region u. We define a target delay D_max for all types of workload (with their different processing rates) when no data center has failed, and D_max^f (D_max^f ≥ D_max) for the case of a data center failure. We use a binary variable y_su^f to indicate the ability of data center s to serve the requests from client region u when data center f has failed.

Power consumption: Let P_idle be the average power drawn by a server in the idle condition and P_peak the power consumed when a server runs at peak utilization. The total power consumed by data center s ∈ S at hour h ∈ H is modeled as [16]

P_s^{fh} = m_s (P_idle + (E_s - 1) P_peak) + m_s (P_peak - P_idle)\, \gamma_s^{fh} + \epsilon \qquad (5.3)

where E_s is the PUE of the data center at s and ε is an empirical constant.
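A minimal sketch of the power model in Eq. (5.3), with E_s passed as `pue` and illustrative idle and peak power values (all names and numbers are hypothetical):

```python
def site_power(m_s, gamma, pue, p_idle=150.0, p_peak=250.0, eps=0.0):
    """Total power P_s^{fh} per Eq. (5.3).

    m_s:    number of servers
    gamma:  average utilization gamma_s^{fh} from Eqs. (5.1)-(5.2)
    pue:    power usage effectiveness E_s (>= 1)
    p_idle, p_peak: per-server power draw in watts (illustrative defaults)
    eps:    empirical constant
    """
    return (m_s * (p_idle + (pue - 1.0) * p_peak)
            + m_s * (p_peak - p_idle) * gamma + eps)

# 100 servers at 30% utilization with a PUE of 1.2 -> about 23 kW:
print(site_power(100, 0.30, 1.2))
```

The first term is the utilization-independent part (idle power plus the facility overhead implied by the PUE); only the second term scales with the load, which is why workload consolidation matters in the model.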

Modeling brown energy usage: Let θ_s^h be the price of brown energy at data center s during hour h, and δ_si^h the price of green energy of type i, i ∈ {1, 2, 3, 4, 5}, corresponding to on-site wind, off-site wind, on-site solar, off-site solar and PPA, respectively. Let PB_s^{fh} denote the amount of brown energy drawn at hour h and PG_si^{fh} the amount of renewable energy drawn from source i. Since brown energy is used only after exhausting the available green energy, the brown energy drawn from the grid is given by

PB_s^{fh} = P_s^{fh} - \sum_i PG_{si}^{fh}, \quad \forall s, h, f \qquad (5.4)
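Eq. (5.4) then amounts to a subtraction over the green draws, which can be sketched as follows (names and figures are again illustrative):

```python
def brown_energy(total_power, green_draws):
    """PB_s^{fh} = P_s^{fh} - sum_i PG_si^{fh}, per Eq. (5.4).

    total_power: P_s^{fh} from the power model, Eq. (5.3)
    green_draws: energy drawn from each green source i; because green
                 energy is exhausted first (surplus goes to the ESD), the
                 draws are chosen so the brown remainder is never negative
    """
    pb = total_power - sum(green_draws)
    assert pb >= 0.0, "green draws must not exceed the demand"
    return pb

# A 23 kW demand covered partly by on-site wind, off-site solar and a PPA:
print(brown_energy(23000.0, [8000.0, 5000.0, 4000.0]))   # 6000.0
```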

5.2.3 Cost Model

Next, we define the cost components used in the objective function of the MILP formulation.

• Server cost: Let α be the cost of acquiring a server. The total cost of the servers in all the data centers is

\Phi = \alpha \sum_s m_s \qquad (5.5)

• Brown energy cost: The cost of the brown energy consumed across all the data centers is given by

\Theta = \sum_{s,h,f} \theta_s^h\, PB_s^{fh} \qquad (5.6)

• Renewable energy cost: The total cost incurred in using renewable energy across all the data centers is given by

R = \sum_{s,i,h,f} \delta_{si}^h\, PG_{si}^{fh} \qquad (5.7)
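The three cost components are straightforward sums and can be sketched over small illustrative data (all names, prices, and quantities below are hypothetical):

```python
def server_cost(alpha, m):
    """Phi = alpha * sum_s m_s; m maps site -> server count (Eq. 5.5)."""
    return alpha * sum(m.values())

def brown_cost(theta, pb):
    """Theta = sum_{s,h,f} theta_s^h * PB_s^{fh} (Eq. 5.6)."""
    return sum(theta[s, h] * e for (s, h, f), e in pb.items())

def green_cost(delta, pg):
    """R = sum_{s,i,h,f} delta_si^h * PG_si^{fh} (Eq. 5.7)."""
    return sum(delta[s, i, h] * e for (s, i, h, f), e in pg.items())

m = {1: 100, 2: 80}
theta = {(1, 0): 0.10, (2, 0): 0.12}           # brown price per site/hour
pb = {(1, 0, 0): 6000.0, (2, 0, 0): 4000.0}    # PB_s^{fh}
delta = {(1, 1, 0): 0.05}                      # on-site wind price
pg = {(1, 1, 0, 0): 8000.0}                    # PG_si^{fh}

z = server_cost(500.0, m) + brown_cost(theta, pb) + green_cost(delta, pg)
print(z)   # TCO z = Phi + Theta + R, Eq. (5.8)
```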

MILP model: The objective of capacity provisioning in a fault-tolerant green GDC is to minimize the TCO, denoted by z, which is simply the sum of all the aforementioned costs, while satisfying constraints on delay, green energy usage and availability. Formally, the problem is expressed as

minimize \quad z = \Phi + \Theta + R \qquad (5.8)

subject to

\sum_{s,i,h} PG_{si}^{fh} \ge \rho \sum_{s,h} P_s^{fh}, \quad \forall f \qquad (5.9)

\sum_s \lambda_{su}^{afh} = L_u^{ah}, \quad \forall u, a, h, f \qquad (5.10)

2 D_{su}\, y_{su}^f \le D_{max}, \quad \forall s, u, f = 0 \qquad (5.11)

2 D_{su}\, y_{su}^f \le D_{max}^f, \quad \forall s, u, f \ge 1 \qquad (5.12)

0 \le \lambda_{su}^{afh} \le y_{su}^f L_u^{ah}, \quad \forall s, u, a, h, f \qquad (5.13)

\gamma_s^{fh} \le \gamma_{max}, \quad \forall s, h, f \qquad (5.14)

M_{min} \le m_s \le M_{max}, \quad \forall s \qquad (5.15)

\lambda_{su}^{afh} = 0, \quad \forall u, a, h, s = f \qquad (5.16)

y_{su}^f \in \{0, 1\}, \quad \forall s, u, f \qquad (5.17)
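To make the formulation concrete, the following sketch solves a deliberately tiny GACED instance (two sites, one region, one application type, one hour) by exhaustive search rather than an MILP solver. It assumes the delay constraints (5.11)-(5.13) are satisfied for every site-region pair, takes PUE = 1 and ε = 0 in the power model, and uses illustrative data throughout:

```python
import itertools

S = (1, 2)                        # two data centers, one region/app/hour
L = 1000.0                        # demand L_u^{ah} (requests)
B = 10.0                          # effective service rate per server (req/h)
GAMMA_MAX = 0.8                   # utilization cap, Eq. (5.14)
ALPHA = 5.0                       # server acquisition cost, Eq. (5.5)
THETA = {1: 0.10, 2: 0.15}        # brown energy price theta_s^h
GREEN = {1: 60.0, 2: 20.0}        # green energy available per site (kW)
P_IDLE, P_PEAK = 0.15, 0.25       # per-server power (kW)

def scenario_cost(m, lam, failed=None):
    """Brown-energy cost of one failure scenario f, or None if infeasible."""
    cost = 0.0
    for s in S:
        if s == failed:           # complete failure (p = 1): no load, no power
            continue
        gamma = lam[s] / (m[s] * B)                  # Eq. (5.1)
        if gamma > GAMMA_MAX:                        # Eq. (5.14)
            return None
        p = m[s] * P_IDLE + m[s] * (P_PEAK - P_IDLE) * gamma   # Eq. (5.3)
        cost += THETA[s] * max(0.0, p - GREEN[s])    # green first, Eq. (5.4)
    return cost

splits = [{1: x, 2: L - x} for x in range(0, 1001, 50)]   # f = 0 routings
best = None
for m1, m2 in itertools.product(range(50, 201, 10), repeat=2):
    m = {1: m1, 2: m2}
    scen = []
    for failed, options in ((None, splits),
                            (1, [{1: 0.0, 2: L}]),    # Eq. (5.16)
                            (2, [{1: L, 2: 0.0}])):
        costs = [c for lam in options
                 if (c := scenario_cost(m, lam, failed)) is not None]
        if not costs:
            break                 # this scenario cannot absorb the demand
        scen.append(min(costs))
    else:
        z = ALPHA * (m1 + m2) + sum(scen)            # TCO, Eq. (5.8)
        if best is None or z < best[0]:
            best = (z, m1, m2)

print(best)
```

The search lands on a provisioning in which each site keeps enough spare servers to absorb the full demand when the other site fails, which is the essence of the fault-tolerance constraints: capacity must be sized per failure scenario f, not just for normal operation.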
