CHAPTER 10 THE STRUCTURE OF BANKING
10.5 EMPIRICAL EVIDENCE
10.5.5 THE EFFICIENT FRONTIER APPROACH
The volume of studies using the efficient frontier methodology has expanded dramatically over recent years. The general flavour of Data Envelopment Analysis is illustrated in Box 10.1.
The efficiency of the units is therefore measured with the efficiency frontier as the benchmark. Units on the frontier attract a rating of 1 (or 100%) and the inefficient units a rating of less than 1 according to the distance they lie from the efficient frontier. Note that there is the potential problem that the 'benchmark firms', which lie on the efficiency frontier, may not be efficient in the absolute sense of being technically efficient. Selection of the frontier is via firms that are relatively more efficient than others in the sample. Extension to multiple inputs and outputs is easily achieved through utilization of programming methods.11 Efficiency frontier methods can also be subdivided into two broad categories; namely, nonparametric and parametric approaches.
The main nonparametric approach is Data Envelopment Analysis (DEA), and this imposes no structure on the production process, so that the frontier is determined purely by the data in the sample. Utilization of linear programming generates a series of points of best-practice observations, and the efficient frontier is derived as a series of piecewise linear combinations of these points.
11 Care must be taken to ensure that the number of observations is substantially greater than the number of inputs and outputs; otherwise, units will 'self-select' (or nearly self-select) because there are no other units against which to make a comparison; e.g., a single observation becomes the most efficient by definition.
BOX 10.1
Data Envelopment Analysis (DEA)
DEA was developed during the 1970s – a seminal article is Charnes et al. (1978). It has been applied to a wide range of activities involving multiple objectives and decision-making units. DEA methodology is based on mathematical programming, so it is useful to start with a simple illustrative example of a linear programming problem.
Assume:
1. A firm produces just two products (Y and X) utilizing two inputs (A and B) and, hence, two processes.
2. Process 1 uses 2 units of A and 1 unit of B to produce 1 unit of Y. Process 2 uses 1 unit of A and 2 units of B to produce 1 unit of X.
3. Capacities of A and B are 200 and 300, respectively.
4. Assume the profits per unit for Y and X are both 10.
This can be formulated as a linear programme as follows:
Maximize h = 10Y + 10X

subject to:

2Y + 1X ≤ 200
1Y + 2X ≤ 300
Y, X ≥ 0.
The advantage of this simple illustrative model is that it can be solved graphically:
The only region that satisfies both constraints is the area inside the frontier formed by the points 100, Q and 150 in the figure below. The dotted lines represent the profit available from the production process.
[Figure: the feasible region in X–Y space, bounded by the two constraint lines (intercepts Y = 100, X = 200 for the first constraint and Y = 300, X = 150 for the second); the dotted iso-profit lines show that the optimum lies at the corner point Q.]
The objective is to move the profit line as far outwards as possible, so that the most profitable point is given by Q.
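For readers who wish to check the graphical solution numerically, the same programme can be handed to an off-the-shelf solver. The sketch below is illustrative only: it assumes Python with SciPy (neither is mentioned in the text) and uses the constraints as reconstructed above; scipy.optimize.linprog minimizes, so the profit coefficients are negated:

# Minimal sketch of the Box 10.1 linear programme, assuming SciPy is available.
# Maximize h = 10Y + 10X subject to 2Y + X <= 200, Y + 2X <= 300, Y, X >= 0.
from scipy.optimize import linprog

c = [-10.0, -10.0]               # negate profits because linprog minimizes
A_ub = [[2.0, 1.0],              # capacity of input A: 2Y + 1X <= 200
        [1.0, 2.0]]              # capacity of input B: 1Y + 2X <= 300
b_ub = [200.0, 300.0]
bounds = [(0, None), (0, None)]  # Y, X >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
Y, X = res.x
print(f"Optimal point Q: Y = {Y:.1f}, X = {X:.1f}, profit = {-res.fun:.1f}")
# With the constraints as stated, this gives Y ≈ 33.3, X ≈ 133.3, profit ≈ 1666.7.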
DEA proceeds in a similar manner. Efficiency for the Jth firm can be defined as:
(U1Y1J + U2Y2J + ...) / (V1X1J + V2X2J + ...)
where U1 is the weight given to output 1, Y1J is the amount of output 1 from Decision-Making Unit (DMU) J, V1 is the weight given to input 1 and X1J is the amount of input 1 to DMU J.
Charnes, Cooper and Rhodes (CCR) formulate the above problem as a linear programming problem with each DMU representing a bank. The aim is to maximize the ratio of outputs to inputs for each DMU (i.e., bank) subject to the constraint that this ratio for every other DMU, computed using the same weights U and V, is not greater than unity.
The formulation is as follows (assume 3 outputs and 2 inputs). For firm 0:
Maximize: h0 = (U1Y10 + U2Y20 + U3Y30) / (V1X10 + V2X20)

Subject to:

(U1Y10 + U2Y20 + U3Y30) / (V1X10 + V2X20) ≤ 1 for firm 0
(U1Y11 + U2Y21 + U3Y31) / (V1X11 + V2X21) ≤ 1 for firm 1
(U1Y12 + U2Y22 + U3Y32) / (V1X12 + V2X22) ≤ 1 for firm 2

and similarly for the remaining firms, with U, V ≥ 0.
More generally, the programme can be formulated as:
Maximize: h0 = Σ(r = 1, ..., s) UrYr0 / Σ(i = 1, ..., m) ViXi0

where the subscript 0 indicates the 0th unit, subject to the constraints that:

Σ(r = 1, ..., s) UrYrJ / Σ(i = 1, ..., m) ViXiJ ≤ 1 for each unit J = 1, 2, ..., n

with Ur ≥ 0 for r = 1, 2, ..., s and Vi ≥ 0 for i = 1, 2, ..., m.
The resulting solution provides, among other information, the efficient frontier, each bank's position relative to the frontier, and its scale position (i.e., increasing, decreasing or constant returns to scale).
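In practice the ratio programme above is made linear by normalizing the weighted inputs of the unit under evaluation to one (the Charnes–Cooper transformation) and maximizing its weighted outputs, subject to the 'no unit exceeds unity' constraints. The sketch below is a minimal illustration of that idea; it assumes Python with NumPy and SciPy and uses invented data, none of which comes from the text:

# Hedged sketch of CCR efficiency scores via the Charnes–Cooper linearization.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: (n_dmus, m_inputs) input matrix; Y: (n_dmus, s_outputs) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for j0 in range(n):
        # Variables: output weights U (s of them) followed by input weights V (m of them).
        # Maximize U.Y[j0] subject to V.X[j0] = 1 and U.Y[j] - V.X[j] <= 0 for all j.
        c = np.concatenate([-Y[j0], np.zeros(m)])            # linprog minimizes
        A_ub = np.hstack([Y, -X])                             # U.Y_j - V.X_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[j0]]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores.append(-res.fun)                               # h0 = maximized weighted output
    return np.array(scores)

# Hypothetical data: 5 banks, 2 inputs (say labour and capital), 1 output (say loans).
X = np.array([[2., 3.], [4., 2.], [3., 5.], [6., 7.], [5., 4.]])
Y = np.array([[10.], [12.], [11.], [12.], [13.]])
print(ccr_efficiency(X, Y))   # units scoring 1.0 lie on the efficient frontier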
A simple diagrammatic illustration of a trivial production process involving one input and one output is shown in the diagram below. Units A, B, C, D and E are efficient in a technical sense as compared with units F and G. For each of the latter units:
(a) Output could be increased with no increase in input – G moving to the position of A.
(b) Input could be reduced with no reduction in output – F moving to E.
A simple illustration is shown below:
There are two useful features of DEA. First, each DMU is assigned a single efficiency score, hence allowing ranking of the DMUs in a sample.
Second, it highlights the areas of improvement for each individual DMU. For example, since a DMU is compared with a set of efficient DMUs with similar input–output configurations, the DMU in question is able to identify whether it has used inputs excessively or produced too little output.
The main weakness of DEA is that it assumes that the data are free from measurement errors (see Mester, 1996). Since efficiency is a relative measure, the ranking relates to the sample used. Thus, an efficient DMU found in the analysis cannot be compared with other DMUs outside of the sample. Each sample, separated by year, represents a single frontier which is constructed on the assumption of the same technology. Therefore, comparing the efficiency measures of a DMU across time cannot be interpreted as technical progress but rather has to be taken as changes in efficiency (Canhoto and Dermine, 2003).
[Figure: units A–G plotted in input–output space; A, B, C, D and E lie on the efficient frontier, while F and G lie inside it.]
Often, constant returns to scale are assumed and X-Inefficiency is measured as the gap12 between actual and best practice. The problem with this approach is that the total residual (i.e., the gap between best and the firm's actual practice) is assumed to be due to X-Inefficiency, whereas some of it may be attributable to good luck, especially advantageous circumstances and such factors as measurement errors. Hence, it would be expected that efficiency estimates by DEA would be lower than those obtained by the other methods, which try to segregate random error from X-Inefficiency.13 The efficiency of a merger can be assessed by noting changes in relative performance after the merger as compared with pre-merger. Sensitivity analysis can be carried out using a window of, say, 3 years. A good description of this method is contained in Yue (1992), including an application to 60 Missouri commercial banks.
Parametric approaches tend to overcome this problem (but not the problem of the measurement of the efficient frontier) through the allocation of the residual between random error and X-Inefficiency. The cost of this refinement is the imposition of the structure necessary to partition the residual. This leaves these approaches open to the same criticism as that applied to the production function approach; i.e., that this structure is inappropriate. Three separate types of parametric approach have mainly been used: the stochastic frontier approach (sometimes called the 'econometric frontier approach'), the distribution-free approach and the thick-frontier approach. A brief description of these measures now follows.
Stochastic Frontier Analysis (SFA)
This approach specifies a function for cost, profit or production so as to determine the frontier and treats the residual as a composite error comprising:
(a) Random error with a symmetric distribution – often normal.
(b) Inefficiency with an asymmetric distribution – often half-normal, on the grounds that inefficiency will never add to production or profit, nor reduce cost.
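To make the composite-error idea concrete, the following sketch simulates a simple log-cost frontier whose residual is the sum of symmetric normal noise and a half-normal inefficiency term, and then recovers a crude efficiency estimate by shifting the fitted intercept (a corrected-OLS shortcut used purely for illustration; a full stochastic frontier study would estimate the two variance components by maximum likelihood). It assumes Python with NumPy, and all names and numbers are invented:

# Minimal simulation of the SFA composite error: residual = noise + inefficiency.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_output = rng.uniform(1.0, 3.0, n)

noise = rng.normal(0.0, 0.1, n)              # symmetric random error
ineff = np.abs(rng.normal(0.0, 0.2, n))      # half-normal: only ever raises cost
log_cost = 0.5 + 0.8 * log_output + noise + ineff

# Corrected-OLS shortcut: fit by least squares, then shift the frontier so that it
# passes through the best-practice (lowest-residual) observation.
beta, alpha = np.polyfit(log_output, log_cost, 1)
resid = log_cost - (alpha + beta * log_output)
frontier_resid = resid - resid.min()         # distance above the shifted frontier
efficiency = np.exp(-frontier_resid)         # cost efficiency in (0, 1]

print(f"mean estimated efficiency: {efficiency.mean():.2f}")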
Most studies using DEA have focused on the USA, but Fukuyama (1993), Berg et al. (1993) and Favero and Papi (1995) have done country-specific studies outside the USA. Allen and Rai (1996) have examined banks in 15 countries. Berger et al. (1993) conducted a survey of comparative methods of efficiency estimation.
12 Given constant returns to scale, it does not matter whether output is maximized or input minimized.
13 The overall mean efficiency of US banks in the studies surveyed in Berger and Humphrey (1997) was 0.79. The mean for the nonparametric studies was 0.72 and that for the parametric studies 0.84.
Distribution Free Approach (DFA)
Again, a specific functional form is specified and no assumption is made about the distribution of errors. Random errors are assumed to be zero on average, whereas the efficiency for each firm is stable over time:
Inefficiency = (Average residual of the individual firm) / (Average residual for the firm on the frontier)
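The calculation implied by this definition is sketched below, assuming a panel of cost-function residuals (firms by years) estimated beforehand and taking the firm with the smallest average residual to define the frontier. The ratio follows the formula as printed above; practical DFA implementations differ in detail, and the residuals here are invented purely for illustration:

# Distribution-free approach: average each firm's residuals over time, then compare
# with the average residual of the firm on the (cost) frontier.
import numpy as np

# Hypothetical cost-function residuals: rows are firms, columns are years.
residuals = np.array([
    [0.05, 0.07, 0.06],
    [0.12, 0.10, 0.11],
    [0.02, 0.01, 0.03],
])

avg = residuals.mean(axis=1)        # random error is assumed to average out over time
frontier_avg = avg.min()            # average residual of the firm on the frontier
inefficiency = avg / frontier_avg   # ratio as in the formula; 1.0 marks the frontier firm
print(inefficiency)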
Thick Frontier Approach (TFA)
A functional form is specified to determine the frontier based on the performance of the best firms. Firms are ranked according to performance and it is assumed that:
(a) Deviations of firms' performance from predicted values within the highest and lowest quartiles represent random error.
(b) Deviations between the highest and lowest quartiles represent inefficiencies.
This method does not provide efficiency ratings for individual firms but rather for the industry as a whole.
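The quartile logic can be caricatured in a few lines of code. The sketch below, assuming Python with NumPy and using invented average-cost figures, splits banks into performance quartiles and treats the gap between the best and worst quartiles as industry-level inefficiency, with deviations inside each quartile treated as noise; a full thick-frontier study would estimate separate cost functions for each quartile rather than compare raw means:

# Thick frontier approach, simplified: compare best-quartile and worst-quartile banks.
import numpy as np

rng = np.random.default_rng(1)
avg_cost = rng.normal(0.70, 0.08, 100)   # hypothetical average-cost ratios for 100 banks

order = np.argsort(avg_cost)             # rank banks by performance (low cost = good)
best = avg_cost[order[:25]]              # best-performing (lowest-cost) quartile
worst = avg_cost[order[-25:]]            # worst-performing quartile

# Within-quartile deviations are attributed to random error; the gap between the
# quartile means is attributed to inefficiency (an industry-wide measure only).
industry_inefficiency = worst.mean() - best.mean()
print(f"estimated industry inefficiency: {industry_inefficiency:.3f}")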
It would be comforting to report that the various frontier efficiency methods provided results that were consistent with each other. Unfortunately, this is not the case. Bauer et al. (1998) applied the different approaches to a study of the efficiency of US banks over the period 1977 to 1988 using multiple techniques within the four main approaches discussed above. They found that the results derived from nonparametric methods were generally consistent with each other as far as identifying efficient and inefficient firms was concerned. Similarly, parametric methods showed consistent results. However, parametric and nonparametric measures were not consistent with each other.
A number of other studies have used this broad methodology to assess the efficacy of mergers. Avkiran (1999) applied the DEA approach to banking mergers in Australia. This study suggested that, as far as the Australian experience is concerned (albeit on a small sample of four mergers), (i) acquiring banks were more efficient than target banks and (ii) the acquiring bank did not always maintain its pre-merger efficiency.
As mentioned earlier, Vander Vennet (1996) also employed the efficient frontier methodology. The precise methodology used was the stochastic frontier; i.e., a parametric approach. These results mirror quite closely the results obtained through use of accounting measures and, therefore, reinforce the earlier conclusions.
De Young (1997) examined 348 bank mergers in the US during the period 1987–1988 using the thick cost frontier; i.e., a parametric approach. He found that post-merger efficiency improved in (i) about 75% of the banks engaged in multiple mergers, but (ii) only 50% of those engaged in a single merger. This led De Young to conclude that experience improved a bank's chances of securing the potential benefits of a merger. An international perspective was provided by Allen and Rai (1996), who used a global stochastic frontier for a sample of banks in 15 countries for the period 1988–1992 and found that X-Inefficiencies of the order of 15% existed in banks where there was no separation between commercial and investment banking. Where there was separation, X-Inefficiencies were higher, of the order of 27.5%.