An Optimized System Dynamics Approach for Hotel Chain Management
3.2 State of the Art and Theoretical Background
To date, the analysis of hotel efficiency has been a neglected area. Only a small number of studies exist, among which the earliest ones use ratios to evaluate the performance of the lodging industry (Baker and Riley 1994), break-even analysis to assess the effectiveness of tourism management (Wijeysinghe 1993), or yield management (Brotherton and Mooney 1992; Donaghy et al. 1995) to enhance hotels' profitability. In the efficiency context, DEA techniques have been applied by Anderson et al. (1999), Hwang and Chang (2003), Barros (2004, 2005), and Barros and Mascarenhas (2005).
System dynamics methodology is also very new to this research field. The only recent example we are aware of is Georgantas (2003), who uses a system dynamics simulation model to examine Cyprus' hotel value chain and profitability. An example of the integration of system dynamics with an economic model is illustrated by Smith and van Ackere (2002).
Two papers, Png (1988) and Scott et al. (1995), share a common theme with our study: the optimal hotel pricing and capacity-utilization problem in an uncertain demand environment. Both conclude that the optimal strategy leaves some capacity unused.
3.2.1 Data Envelopment Analysis
Data envelopment analysis is a linear programming based technique for measuring the relative performance of organizational units where the presence of multiple inputs and outputs makes comparisons difficult. Indeed, when the activity of measuring and comparing the efficiency of relatively homogeneous units (authority departments, schools, hospitals, shops, banks, and so on) involves multiple inputs and outputs related to different resources, activities and environmental factors, the usual measure of efficiency,

$$ \text{efficiency} = \frac{\text{output}}{\text{input}}, $$

is often inadequate.
In fact, the initial assumption is that the measure of technical efficiency requires a common set of weights to be applied across all decision making units (DMUs). This immediately raises the problem of how such a set of weights can be obtained. The problem was first addressed by Farrell (1957) and developed by Farrell and Fieldhouse (1962), focusing on the construction of a hypothetical efficient unit, built as a weighted average of efficient units, to act as a comparator for an inefficient one. Later, Charnes et al. (1978) recognized the legitimacy of the proposal that each unit might value inputs and outputs differently and should therefore be allowed to adopt the set of weights which shows it in the most favourable light in comparison with the other units.
By generalizing Farrell's technical efficiency measure, Charnes, Cooper, and Rhodes first introduced data envelopment analysis to describe a mathematical programming approach to the construction of production frontiers and the measurement of efficiency relative to those frontiers. Their general-purpose DEA model assumes constant returns-to-scale (CRS) and an input orientation. A few years later Banker et al. (1984) developed a variable returns-to-scale (VRS) model. Other models are the additive model (Charnes et al. 1985), the multiplicative model (Charnes et al. 1982), the cone-ratio DEA model (Charnes et al. 1990), the assurance region DEA model (Thompson et al. 1986, 1990), and the super-efficiency model (Andersen and Petersen 1993).
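For reference, the ratio form of the original CCR model (stated here in the standard notation of Charnes et al. 1978, not in notation specific to this chapter) chooses the weights that show the evaluated DMU, indexed 0, in the most favourable light:

$$
\max_{u,v}\; h_0 = \frac{\sum_{r=1}^{s} u_r\, y_{r0}}{\sum_{i=1}^{m} v_i\, x_{i0}}
\quad \text{subject to} \quad
\frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \le 1, \;\; j = 1,\dots,n, \qquad u_r,\, v_i \ge \varepsilon > 0,
$$

where $x_{ij}$ and $y_{rj}$ are the inputs and outputs of DMU $j$. The fractional program is then converted into an equivalent linear program and solved once for each DMU.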
Since 1978 over 4000 articles, books and dissertations have been published, and DEA has been rapidly extended to dummy or categorical variables, discretionary and non-discretionary variables, incorporating value judgments, longitudinal analysis, weight restrictions, stochastic DEA, non-parametric Malmquist indices, technical change in DEA and many other topics.
Up to now the DEA measure has been used to evaluate and compare educational departments (schools, colleges and universities), health care (hospitals, clinics), prisons, agricultural production, banking, armed forces, sports, market research, transportation (highway maintenance), courts, benchmarking, index number construction and many other applications.
One of the main characteristics of DEA is its flexibility in the choice of weights for the different inputs and outputs. Therefore, DEA may be appropriate where units can properly value inputs or outputs differently, or where there is high uncertainty or disagreement over the value of some inputs or outputs. The heart of the analysis lies in finding the "best" virtual producer (a single peer or a peer group) for each real producer (virtual because this producer does not necessarily exist: two or more DMUs can be combined to form a composite producer). If the virtual producer is better than the original producer, either by making more output with the same input or by making the same output with less input, then the original producer is inefficient.
By identifying the efficiency score of each DMU in the sample, the input and output slacks of the inefficient DMUs, and the peer group of efficient ones, DEA is one of the most promising techniques for the improvement of efficiency.
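As an illustration of how these scores and peer groups are computed in practice, the following is a minimal sketch of the input-oriented, constant-returns-to-scale envelopment problem solved with SciPy's linear programming routine. The hotel data, the function name ccr_efficiency and the choice of solver are illustrative assumptions, not part of the chapter's model.

```python
# Minimal sketch: input-oriented CRS (CCR) DEA via linear programming.
# For each DMU o we solve
#   min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
# where positive entries of lam identify the peer group of DMU o.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs), o: index of the evaluated DMU."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vector z = [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [o]], X])            # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -sum_j lam_j y_rj <= -y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0], res.x[1:]                   # efficiency score theta, peer weights lam

# Hypothetical data: 4 hotels, 2 inputs (rooms, staff), 1 output (revenue)
X = np.array([[100., 120., 80., 150.],
              [ 30.,  50., 25.,  60.]])
Y = np.array([[ 90., 100., 85., 110.]])
for o in range(X.shape[1]):
    theta, lam = ccr_efficiency(X, Y, o)
    print(f"DMU {o}: efficiency = {theta:.3f}, peers = {np.nonzero(lam > 1e-6)[0]}")
```

Because one such program is solved per DMU, the computational burden grows with the number of units, which is the point made below about large problems.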
Yet, the same characteristics that make DEA a powerful tool can also create problems, and an analyst should keep these limitations in mind when deciding whether or not to use DEA. First, since DEA is an extreme-point technique, noise (even symmetrical noise with zero mean) such as measurement error can cause significant problems. Second, DEA is good at estimating the "relative" efficiency of a DMU but converges very slowly to "absolute" efficiency: it can tell you how well you are doing compared with your peers, but not compared with a "theoretical maximum". Third, DEA is a nonparametric technique and statistical hypothesis tests are therefore difficult. Finally, since a standard formulation of DEA creates a separate linear program for each DMU, large problems can be computationally intensive.
More detailed reviews of the methodology are presented by Seiford and Thrall (1990), Ali and Seiford (1993), Lovell (1994), Charnes et al. (1994), Seiford (1988, 1996), Thanassoulis and Dyson (1988), Dyson and Thanassoulis (1988), and Thanassoulis et al. (1987).
3.2.2 System Dynamics
System dynamics is a methodology for modelling, studying and managing complex feedback systems by means of a feedback-based, object-oriented approach. SD was initially developed by Jay W. Forrester, and his seminal book Industrial Dynamics (Forrester 1961) is still a significant statement of the philosophy and methodology of the field. Since its publication, the span of applications has grown extensively and now encompasses work in corporate planning and policy design, public management and policy, biological and medical modelling, energy and the environment, theory development in the natural and social sciences, dynamic decision making, and complex nonlinear dynamics. According to Simonovic and Fahmy (1999), system dynamics is based on a theory of system structure and a set of tools for representing complex systems and analyzing their dynamic behaviour.
The most important feature of system dynamics is that it helps to elucidate the endogenous structure of the system under consideration and demonstrates how the different elements of the system actually relate to one another. This facilitates experimentation, as relations within the system can be changed to reflect different decisions. Unlike other scientists, who study the world by breaking it up into smaller and smaller pieces, system dynamicists look at things as a whole: a system can be anything from a steam engine, to a bank account, to a basketball team.
What makes system dynamics different from other approaches used for studying complex systems is the use of feedback loops, where a change in one variable affects other variables over time, which in turn affects the original variable, and so on. A typical system dynamics model is composed of four basic building blocks: stock, flow, connector and converter. Stocks (levels) represent anything that accumulates, for instance water stored in reservoirs or dams. Flows (rates) represent activities that fill and drain stocks, such as inflows and releases. Connectors (arrows) establish the relationships among variables in the model, the direction of the arrow indicating the direction of the dependency; they carry information from one element of the model to another. Converters transform input into output. Stocks and flows help to describe how a system is connected by feedback loops, which create the nonlinearity found in real systems. Models are built with computer software that runs the simulations. As long as such a dynamic model is a good representation of the problem being studied, running "what if" simulations to test certain policies on the model can greatly aid understanding of how the system changes over time. Moreover, since the resulting structure of the model and the way it actually works reflect the hypotheses made in building the model, the inherent flexibility and transparency of the model are particularly helpful for understanding the dynamics underlying the observed behaviour of the system. Therefore, compared with conventional simulation approaches, system dynamics can better represent how changes in basic elements affect the dynamics of the system in the future.
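To make the four building blocks concrete, the sketch below simulates a single stock governed by a goal-seeking (balancing) feedback loop using simple Euler integration. The target level, adjustment time and the other parameters are purely hypothetical and are not taken from the chapter's model.

```python
# Minimal stock-and-flow sketch with one balancing feedback loop (hypothetical parameters).
DT = 0.25                  # simulation time step
HORIZON = 60.0             # simulated horizon
TARGET_LEVEL = 800.0       # converter: the desired value of the stock
ADJUSTMENT_TIME = 5.0      # converter: how quickly the gap is closed

stock = 200.0              # stock (level): anything that accumulates
t, history = 0.0, []
while t < HORIZON:
    gap = TARGET_LEVEL - stock                  # connector: carries information from the stock
    inflow = max(0.0, gap / ADJUSTMENT_TIME)    # flow filling the stock (balancing feedback)
    outflow = 0.05 * stock                      # flow draining the stock
    stock += (inflow - outflow) * DT            # Euler update of the level
    t += DT
    history.append((t, stock))

for t, s in history[::40]:                      # report roughly every 10 time units
    print(f"t = {t:5.1f}   stock = {s:7.1f}")
```

Changing TARGET_LEVEL or ADJUSTMENT_TIME and re-running the loop is exactly the kind of "what if" experiment described above; in practice such models are built and run in dedicated system dynamics packages rather than by hand.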