inefficiently operated company in the years 1991, 1992 and 1995 (black bars). The figure clearly indicates a very regular development, with efficiency gradually increasing between 1991 and 1993 and declining slowly from 1993 to 1995.
Note that hotel no. 14 crossed the ‘efficiency line’ twice, moving first from the inefficient to the efficient group of companies and then back again.
1996 was an extraordinary year for hotel no. 14, a fact which certainly has to be considered in an overall evaluation.
There are several new procedures suggested in the literature for the analysis of efficient companies using DEA results. However, all of these extensions have in common that they introduce a complex two-stage approach to model formulation and computation (e.g. the ‘slack adjusted DEA model’ suggested by Sueyoshi et al. (1999); the DR/DEA model by Sinuany-Stern and Friedman (1998); or the ‘single price system extension’ by Ballestero (1999), which classifies efficient, but not inefficient, companies).
The model selected here is an input-oriented model, which seeks to identify technical inefficiency as a proportional reduction in input usage. As discussed in Chapter 4, it is also possible to measure technical efficiency as a proportional increase in output production; however, the former is the more appropriate model for the manager. This is because hotel managers usually have objectives to fulfil, either set by corporate management goals or by self-defined business plans, and hence the input quantities appear to be the primary decision variables. In other applications managers may instead be given a fixed quantity of resources and asked to produce as much output as possible; in such cases an output orientation would be more appropriate.
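To make the input orientation concrete, the following sketch computes an input-oriented, constant-returns-to-scale (CCR) efficiency score as a linear programme. The SciPy-based formulation, the helper name and the five-hotel data are purely illustrative assumptions and are not taken from the study reported here.

```python
# Minimal sketch: input-oriented CCR efficiency via linear programming.
# theta is minimised subject to a composite of peer units using no more than
# theta times unit o's inputs while producing at least unit o's outputs.
import numpy as np
from scipy.optimize import linprog

def input_oriented_efficiency(X, Y, o):
    """Technical efficiency of unit o. X: (units, inputs), Y: (units, outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # variables [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[[o]].T, X.T])           # sum_j lambda_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # -sum_j lambda_j y_rj <= -y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.fun                               # theta* = 1.0 marks a frontier unit

# Hypothetical data: five hotels, inputs = (rooms, staff), output = revenue
X = np.array([[100, 40], [80, 35], [120, 60], [90, 30], [110, 55]], dtype=float)
Y = np.array([[900], [850], [950], [880], [820]], dtype=float)
print([round(input_oriented_efficiency(X, Y, o), 3) for o in range(len(X))])
```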
7.2 Strengths and Weaknesses of the DEA Approach

A major strength of DEA is that it identifies target inputs for less-efficient firms, and it enables tests with distribution-free or non-parametric procedures to be used to investigate the important factors contributing towards excellent company performance (Hawdon and Hodson, 1996). This acceptance of DEA is mainly because it focuses on observed operating practice, and circumvents the need to specify the complete functional form of the production function. It has greater practical appeal and higher perceived fairness than normative industrial engineering standards.
Moreover, recent extensions to DEA have offered substantive flexibility to incorporate realistic assumptions, such as variable returns-to-scale properties and non-discretionary input variables.
Although DEA has several advantages over other methodologies for performance evaluation, it nonetheless suffers from a variety of weaknesses which should be the subject of future research.
7.2.1 Stability of DEA results
DEA is not endowed with any formal system of hypothesis testing (Seiford and Thrall, 1990). This is because DEA is a non-statistical technique which makes no explicit assumptions about the distribution of the residuals. This, and other problems, have left DEA open to criticism. In recognition of these difficulties Charnes et al. (1985b) initiated work on DEA sensitivity analysis. Subsequently, research has begun to focus increasingly on sensitivity issues (Banker and Morey, 1989; Epstein and Henderson, 1989; Sengupta, 1990, 1992b,c; Banker, 1993; Banker et al., 1993, 1998; Retzlaff-Roberts and Morey, 1993; Hougaard, 1999; Maital and Vaninsky, 1999).
The initial work by Charnes et al. (1985b) only involved an examination of the effects on the efficiency scores of deleting variables. However, if noise is present in an observation located inside the efficiency frontier, the consequences are limited to the company under evaluation. The inefficiency score of that company will be biased towards or away from the frontier depending on the nature of the distortion; the inefficiency scores of the other companies in the data set will not be affected. Data errors in companies which make up the frontier are more serious, since these change the efficiency scores of all companies in the data set whose efficiency is defined by reference to the biased company.
An early suggestion to address this problem was made by Timmer (1971), who argued that a Farrell boundary can be constructed iteratively: outlying data points are successively eliminated and the frontier re-estimated until the resulting efficiency estimates stabilize. This is possible in relatively large data sets, although it means that excluded units will have no efficiency score in the final iteration. The Timmer adjustment is arbitrary to the extent that it is not clear, a priori, precisely when the efficiency scores have stabilized to a sufficient degree to accept that random outcomes have been eliminated. Later developments of the Timmer approach can be found in Sengupta (1987, 1988) and Sengupta and Sfeir (1988).
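As a rough sketch of the Timmer-style iteration, the code below reuses the illustrative input_oriented_efficiency() helper and hotel data from the earlier example: frontier units are peeled away and the frontier re-estimated until the mean score of the remaining units changes by less than a tolerance. The tolerance-based stopping rule is one arbitrary choice among many, which is exactly the objection raised above.

```python
# Sketch of iterative frontier peeling in the spirit of Timmer (1971); the
# stopping rule below is an assumption, not Timmer's own criterion.
import numpy as np

def timmer_peel(X, Y, tol=0.01, max_iter=10):
    keep = np.arange(len(X))                 # units still in the sample
    prev_mean, scores = None, None
    for _ in range(max_iter):
        scores = np.array([input_oriented_efficiency(X[keep], Y[keep], o)
                           for o in range(len(keep))])
        if prev_mean is not None and abs(scores.mean() - prev_mean) < tol:
            break                            # scores judged to have stabilized
        prev_mean = scores.mean()
        on_frontier = scores > 1 - 1e-6
        if on_frontier.all():
            break                            # every remaining unit defines the frontier
        keep = keep[~on_frontier]            # eliminate outlying (frontier) observations
    return keep, scores                      # excluded units receive no final score

remaining, final_scores = timmer_peel(X, Y)
print(remaining, np.round(final_scores, 3))
```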
More recently, Banker et al. (1993) performed extensive Monte Carlo simulations which suggest that the reliability of DEA results deteriorates considerably in comparison with an econometric approach when measurement errors become large. Retzlaff-Roberts and Morey (1993) introduce the concept of a minimum frontier in allocative DEA in order to identify significantly inefficient units.
In a study reported by Ganley and Cubbin (1992: 128), a data error in one variable at one company reduced the average efficiency of the whole cross-section by 12%. Seven companies, formerly efficient on the correct data, achieved non-unit efficiency scores in the error-ridden data set. They argue that noise in outcomes might be identified by unexpected or abrupt changes in the efficiency ranking of utilities from year to year. In their study they suggest using Spearman’s rank correlation coefficient to test whether the efficiency rankings change significantly when individual companies are excluded from the data set. A high correlation represents stable efficiency scores, which could then be the basis of acceptable targets.
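A leave-one-out version of this check can be sketched as follows, again reusing the illustrative helper and data from the earlier sketch: each company is dropped in turn, the remaining scores are recomputed and the new ranking is compared with the original one via Spearman’s rank correlation.

```python
# Sketch of a rank-stability check in the spirit of Ganley and Cubbin (1992).
import numpy as np
from scipy.stats import spearmanr

base = np.array([input_oriented_efficiency(X, Y, o) for o in range(len(X))])

for dropped in range(len(X)):
    keep = [j for j in range(len(X)) if j != dropped]
    sub = np.array([input_oriented_efficiency(X[keep], Y[keep], o)
                    for o in range(len(keep))])
    rho, _ = spearmanr(base[keep], sub)      # compare rankings with and without the unit
    print(f"excluding unit {dropped}: rank correlation = {rho:.2f}")
# Consistently high correlations indicate stable scores that could serve as targets.
```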
Another approach was taken by Färe et al. (1987), who found more stable estimates when performing separate DEAs on successive cross-sections and deriving mean efficiency scores for companies. Brockett and Golany (1996) also suggested that the analysis should be performed by group, rather than by individual units, which leads to stochastic extensions of DEA where random deviations from the group’s behaviour can be studied.13 A similar form of data pooling was suggested by Charnes et al. (1985a) as part of their ‘window-analysis’, which was discussed in detail on pp. 129–133. The resulting composite frontier derived by the window-analysis gives less weight to unusual observations and is therefore more robust to stochastic events.

13 Their statistical evaluation of observed group differences was recently improved by Sueyoshi (1999).
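A minimal sketch of the pooling idea, under the same illustrative assumptions as before: separate DEAs are run on each yearly cross-section and the scores averaged per company, so that an unusual year carries less weight in the composite result.

```python
# Sketch: separate DEAs per year, averaged per company (hypothetical panel data).
import numpy as np

def mean_scores(panel_X, panel_Y):
    yearly = np.array([[input_oriented_efficiency(Xt, Yt, o) for o in range(len(Xt))]
                       for Xt, Yt in zip(panel_X, panel_Y)])
    return yearly.mean(axis=0)               # one composite score per company

rng = np.random.default_rng(0)               # three perturbed 'years' of the hotel data
panel_X = [X * rng.uniform(0.95, 1.05, X.shape) for _ in range(3)]
panel_Y = [Y * rng.uniform(0.95, 1.05, Y.shape) for _ in range(3)]
print(np.round(mean_scores(panel_X, panel_Y), 3))
```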
Banker et al. (1998) present a stochastic data envelopment analysis (SDEA) model to estimate standards from comparative benchmarking data.
The authors argue that their model can produce mix and yield variance ratios when estimates of the substitutability or separability between factors are included.
They illustrate their approach with data on nursing services from 66 state hospitals in one US state. Banker et al. (1998) show how one hospital’s performance can be matched against the benchmark cost for the hospitals as a group, and conclude that SDEA sets more achievable standards than conventional DEA. Recently, however, it has been reported that stochastic DEA models can only outperform traditional DEA in some specific situations, and that on average they cannot compete with the older techniques (Resti, 2000: 559).
Some valuable contributions have been made by Sengupta (1990) and Banker et al. (1989), who have examined methods for identifying ‘gross data errors’ and regions of data stability. However, these developments are rather ad hoc. More promising extensions involve the incorporation of fuzzy mathematical programming, which has been demonstrated by Sengupta (1992b,c) and Hougaard (1999). The ultimate research objective of distinguishing accurately between measurement errors and inefficiencies probably lies in some form of marriage of parametric and non-parametric techniques. This would circumvent one of the principal difficulties currently inherent in a non-parametric frontier approach.
Another goal of DEA simulation is to impose control on allowable solutions, as in real-world applications not all factors can be controlled by the managers. The cone-ratio DEA model (Charnes et al., 1989, 1990) and the assurance-region approach (Thompson et al., 1986, 1990) are examples where upper and lower bounds are imposed on the weights to ensure that certain environmental considerations and expert opinion are incorporated into the evaluation. Extensions to these models have been introduced by Kao (1994) and Cooper et al. (1999).
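The weight-restriction idea can be illustrated with the multiplier form of the CCR model; the sketch below adds a simple assurance-region style bound on the ratio of the two input weights, reusing the illustrative hotel data from the earlier sketches. The bounds are assumptions for demonstration only, and the sketch does not reproduce the full cone-ratio formulation.

```python
# Sketch: multiplier-form CCR model with a bound L <= v1/v2 <= U on the input weights.
import numpy as np
from scipy.optimize import linprog

def restricted_efficiency(X, Y, o, v_ratio_bounds=(0.5, 2.0)):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[np.zeros(m), -Y[o]]                    # maximise u'y_o (variables [v, u])
    A_eq = np.r_[X[o], np.zeros(s)].reshape(1, -1)   # v'x_o = 1
    A_ub = np.hstack([-X, Y])                        # u'y_j - v'x_j <= 0 for every unit j
    L, U = v_ratio_bounds
    ar = np.zeros((2, m + s))
    ar[0, :2] = [-1.0, L]                            # L*v2 - v1 <= 0
    ar[1, :2] = [1.0, -U]                            # v1 - U*v2 <= 0
    res = linprog(c, A_ub=np.vstack([A_ub, ar]), b_ub=np.r_[np.zeros(n), 0.0, 0.0],
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun                                  # efficiency under the weight restriction

print([round(restricted_efficiency(X, Y, o), 3) for o in range(len(X))])
```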
7.2.2 Interactive DEA
Several advantages result when DEA is incorporated in an interactive environment. For example, the input and output variables must be carefully selected to make the analysis useful for the manager. Although DEA has fewer limitations than econometric approaches in the choice of input and output variables, formal selection criteria are unavailable; the input–output variables in a model are therefore usually selected on the basis of intuitive or pragmatic considerations (Haag et al., 1992). Roll et al. (1989) attempt to give some guidelines on selecting the appropriate variable set for a DEA.
The advantage of an interactive system is that the user can go back and forth and learn from the output. The manager can change the variables selected and is not bound to a strict classification, as is usual in ordinary printed publications of panel studies. Hence the user will soon realize that results may vary significantly, sometimes even through minor changes in the variables selected. The manager who manipulates these options can gain more insight and a better understanding of how to interpret benchmarking results and how to use them for managerial purposes.
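How strongly the scores can react to the variable set chosen is easy to demonstrate with the illustrative helper and hotel data from the earlier sketches: the same units are scored once with both inputs and once with the ‘rooms’ input alone.

```python
# Sketch: sensitivity of DEA scores to the selected input set (illustrative data).
both_inputs = [round(input_oriented_efficiency(X, Y, o), 3) for o in range(len(X))]
rooms_only  = [round(input_oriented_efficiency(X[:, [0]], Y, o), 3) for o in range(len(X))]
print("rooms + staff:", both_inputs)
print("rooms only:   ", rooms_only)
# Dropping a single input can change both the scores and the ranking, which is why
# an interactive system should let the manager experiment with the variable set.
```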
Furthermore, this simulation environment may be the ideal platform for the implementation of more dynamic DEA models, like the ‘inverse DEA model’ recently introduced by Wei et al. (2000). Their extension of the basic DEA model solves problems where, for instance, a particular company’s inputs are increased and, assuming that the company maintains its current efficiency level, the maximum achievable increase in outputs is calculated. The inverse DEA model proposed by Wei et al. is certainly a useful tool for performing what-if analyses, which are ideally suited to integration in an interactive decision support system.
An appropriate interactive decision support system is best implemented in a multi-user environment such as the Internet, so that the widest possible group of users can benefit. Extranet applications, like TourMIS, could offer online databases of financial and non-financial hotel data for DEA analysis, especially for SMEs, which are less organized in the exchange of business data than international hotel chains. Finally, the advantage of a real-time application is that additional insights can be gained by multi-period analysis and extrapolation of business data time series. An appropriate linkage of DEA to a database system can therefore easily convert the DEA model from an ex post evaluation instrument into a prospectively oriented instrument which might also support budgeting tasks for small and medium-sized enterprises.
However, there are severe problems which have to be addressed when developing a real-time interactive DEA system. For example, the standard DEA model is a static, one-period evaluation and is difficult to integrate in an interactive environment. When a new company’s data are added to a database, they become part of one or more subsets of the data in which their presence must be considered. For these subsets, current methodology means that all efficiency runs for all other firms have to be repeated to redefine the efficiency frontier.
It would clearly be appropriate to move towards more general dynamic DEA modelling in order to handle trend data in growing organizations and changing environments. A step in this direction would be to derive an explicit partial adjustment mechanism rather than replicating all linear programming runs.