Measurement of multiple sites in service firms with data envelopment analysis

Metters, Richard C


Data envelopment analysis (DEA) has become an increasingly popular method to measure performance for service firms with multiple sites. DEA is superior to many traditional methods for firms that have multiple goals. The promise of DEA is that the complex, multiobjective problem of performance measurement can be reduced to a single number. Unfortunately, the practice of DEA often belies the promise. Misconceptions concerning the purpose and implementation of DEA can cause DEA applications to be less than successful. Here, the technique is explained, and a guide to the implementation of DEA is proposed, utilizing DEA studies of retail bank branches.


1. Introduction

One distinguishing characteristic of service versus manufacturing firms is the number of physical sites that constitute a single firm. Multinational giants in the largest manufacturing industries, such as the auto, steel, chemical, or paper industries, may have scores of manufacturing/assembly plants, perhaps a hundred or so, in a single firm. In contrast, the leaders in many service industries routinely have multiple hundreds or thousands of brick and mortar sites where services are created. The largest restaurant chain has over 20,000 sites; the largest banks and retail stores have over 3,000. The leaders in lesser known industries, such as repair services, cleaning services, and personnel services, have multiple thousands of sites.

The sheer volume and associated geographic dispersion of sites creates managerial difficulties. Unlike many centralized back-offices or large manufacturing plants, it is no longer possible to “manage by walking around.” Managerial “gut feel” and subjective performance measures cease to be useful when the evaluator is rarely physically present at the unit being evaluated.

For these geographically dispersed units, many of the seemingly objective performance measures used by many firms also have severe drawbacks (Achabal, Heineke, and McIntyre 1984). Accounting profit, or the associated unit ROA or ROE, is a common output measure. However, individual unit profits can be highly dependent on decisions that are uncontrollable by the unit, such as pricing, product mix, and trade area competitive and economic factors (Kamakura, Lenartowicz, and Ratchford 1996). Further, other outputs are also typically important, such as market share, customer service, cost containment, or gross sales growth, among many others (Good 1984). Myopic focus on unit profitability can induce unwanted behavior. It can be a simple matter for a rarely seen, remote service unit to “brand shirk” by having fewer personnel relative to other units of the firm, which can increase the accounting profit of that unit but lower customer service levels, which could affect future system-wide sales.

In this paper we focus on the retail banking industry as an example service industry with such problems. There is considerable debate in the banking practitioner literature as to both how to construct bank branch profitability statements and their worth in evaluating performance, with many practitioners disregarding a branch ROE or ROA as meaningless (Schultz and Chelst 1994; Pihl and Whitmyer 1994; Thygerson 1991; Witzeling 1991). Even if accounting profit could be accurately measured, branches within a given banking system have differing missions that would preclude considering profit alone (Sherman and Ladino 1995; Oral, Kettani, and Yolalan 1992).

Even if all the differing performance measures accurately assess the performance of a unit, distortions can arise in implementing those measures. Outputs are often assessed in one of three ways: comparing to unit goals, comparing with results from a prior time period, or by a gross comparison of outputs between units. All of these methods have serious flaws. Unit goals are frequently set by negotiation. Comparing results to negotiated goals can reward the good goal negotiators, rather than the good performers. Basing performance evaluation on prior time period results, for example, a goal of “last year plus 10%,” encourages “sandbagging,” or purposely not overly exceeding a goal in the current time period so that the next time period goal will not be as strenuous (noted empirically by Lovell and Pastor 1997, p. 292). If comparisons between units are made merely on output levels alone, managers of units with superior locations will appear to be superior, regardless of actual ability.

Another problem can arise in combining many disparate measures of success into an overall assessment. How much market share growth should be traded-off for each point of customer satisfaction? Should accounting profit constitute 30% or 50% of the overall evaluation score?

Data envelopment analysis (DEA) is a technique that shows promise as a possible solution for many of the problems listed above. Formally, DEA is a linear programming technique for measuring the relative efficiency of decision making units (DMUs) where each DMU has a multitude of desired outputs or needed inputs. In practical terms, one use of DEA is as a measurement tool for multisite organizations when a single overall measure, such as accounting profit, is not sufficient. DEA combines numerous relevant outputs and inputs into a single number that represents productivity, or “efficiency.”

DEA is a well-known and established technique among some researchers in operations research. Between the inception of DEA (Charnes, Cooper, and Rhodes 1978) and 1992, over 470 articles were written concerning DEA (Seiford 1994), and the pace appears to have accelerated since that time. Yet, it is still sufficiently esoteric that it appears in few textbooks, and tutorials explaining the basics of the technique are still deemed necessary at academic conferences (Seiford and Cooper 1997).

Here, we do not attempt to extend the methodology of DEA. Rather, a guide for practice is intended. We contend that DEA can be a highly useful tool, but that it is context sensitive. That is, the rules for model choice, variable choice, and results interpretation change depending upon managerial purpose. The purpose here is to generate a number of rules for practice in areas that are often misunderstood or misapplied in applications found in the literature.

We proceed as follows. The next section contains an introduction to the idea of DEA. This is followed by discussions of appropriate DEA formulations with respect to multiple corporate strategic goals and dissimilar unit scale. Then, the relationship of the specific managerial goal of a DEA analysis to results analysis and variable choice is discussed.

2. The Concept of DEA

In general, the conditions required to use DEA are that a number of DMUs are attempting to accomplish roughly the same goals and that there is some “goal diversity.” That is, there is more than one desirable goal and the goals cannot be compared in a straightforward fashion. In a technical sense, DEA measures only efficiency or productivity. DEA was developed for use in the evaluation of nonprofit sector firms, but the use of DEA has been expanded in practice. DEA has been used in the for-profit sector to identify superior/inferior sites, to evaluate managerial performance, to allocate resources among sites, or to diagnose the determinants of successful/unsuccessful sites (Epstein and Henderson 1989).

Early applications of DEA centered on multiple-site nonprofit organizations because of the multiple goals and goal diversity that exist in such environments. For example, the goals of an elementary school may include disparate elements such as student self-esteem in addition to reading and arithmetic ability (Charnes, Cooper, and Rhodes 1981). As discussed earlier, goal diversity also applies in the for-profit sector. Accordingly, DEA has been applied to for-profit activities such as nursing homes (Fizel and Nunnikhoven 1993), restaurants (Banker and Morey 1993), and insurance agencies (Mahajan 1991).

Possibly the most prolific for-profit sector application of DEA has been in retail banking branch networks (hereafter referred to as “banking”). There have been at least 18 bank branch DEA studies in the academic literature (Table 1), and there is some evidence that banking practitioners have started utilizing DEA as a routine performance measurement tool (Iida 1991). These studies have used widely varying methods in their attempts to measure many different aspects of branch performance. Because of the variety of methods and number of studies concentrating on the same industry, subsequent examples will focus on banking.

At heart, DEA is about measuring the productivity of a unit, where productivity is defined as the ratio of outputs produced to inputs consumed. Measuring productivity is often a simple matter for many individual jobs, but can become complex for groups with multiple goals. For example, if bank teller A handles 200 transactions per day, whereas teller B handles 250 transactions, teller B is more productive, ceteris paribus. If those tellers handle different types of transactions, the raw data given is no longer sufficient. The transactions must be weighted according to the agreed standard time required per handling. If teller A’s 200 transactions were judged to require 9 standard hours, whereas teller B’s 250 transactions required only 7.5 standard hours, teller A would be considered more productive.
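The weighted comparison above can be sketched numerically. The 8-hour shift length below is an assumption for illustration, since the passage gives only transaction counts and standard hours:

```python
# Productivity as standard (weighted) output produced per actual hour worked.
SHIFT_HOURS = 8.0  # assumed shift length, for illustration only

def productivity(standard_hours_produced: float,
                 actual_hours: float = SHIFT_HOURS) -> float:
    """Ratio of standard-hour output credit to actual labor time consumed."""
    return standard_hours_produced / actual_hours

teller_a = productivity(9.0)   # 200 transactions judged worth 9 standard hours
teller_b = productivity(7.5)   # 250 transactions judged worth 7.5 standard hours
print(f"teller A: {teller_a:.4f}, teller B: {teller_b:.4f}")
assert teller_a > teller_b     # A is more productive once output is weighted
```

Raw transaction counts favor teller B, but the standard-hours weighting reverses the ranking, which is precisely the subtlety DEA generalizes to many outputs.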

On a Decision Making Unit (DMU) level (where a DMU is typically a distinct physical site), however, evaluating productivity is not always as simple because of the multiple strategic directives that cannot be combined in a single measure.

As an example of how DEA works, assume bank branches are to be evaluated on only two criteria: loans and deposits. Table 2 shows five potential levels of branch performance, which are reproduced graphically on Figure 1. Table 2 lists identical “inputs” of 100 for each branch and separate outputs of loans and deposits. Inputs can be construed, for example, as personnel or total expenses. Branches A, C, and E form the “efficient frontier” and have a corresponding efficiency rating of 1. That is, no other branches outperform them on both measures. Branch B is clearly not performing well, as it is dominated by branch C; that is, branch C performs better on both dimensions than branch B. The case of branch D, however, is less clear. No other branch dominates branch D on both dimensions. The strength of DEA lies in its ability to assess nondominated branches such as branch D.

For all DMUs not on the efficient frontier, DEA creates a hypothetical comparison unit (HCU) that is a linear combination of efficient units. In this case, the HCU^sub D^ is composed of branches C and E and would represent a point of (25, 25) on Figure 1. DEA attempts to consider each DMU in the most positive manner possible: branch D is compared to HCU^sub D^, which is highly similar to itself. The efficiency measure can be interpreted geometrically: the distance between the origin and branch D is 92% of the distance between the origin and HCU^sub D^. The HCU corresponding to branch B is (18.1, 30.2), leaving branch B with an efficiency of 83%.
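This comparison can be reproduced with a small linear program (the multiplier form of the CCR model, one LP per branch). The branch coordinates below are illustrative assumptions rather than the paper's Table 2; they are chosen so that D = (23, 23) reproduces the roughly 92% rating discussed above, while B's exact score differs from the 83% in the text.

```python
from scipy.optimize import linprog

# Branch data: identical input of 100, two outputs (loans, deposits).
# Coordinates are invented for illustration, not taken from the paper's Table 2.
branches = {
    "A": (4.0, 36.0),
    "B": (15.0, 25.0),
    "C": (20.0, 32.0),
    "D": (23.0, 23.0),
    "E": (36.0, 10.0),
}
INPUT = 100.0

def ccr_efficiency(target: str) -> float:
    """CCR multiplier form: max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0."""
    y_k = branches[target]
    c = [-y_k[0], -y_k[1], 0.0]                       # variables [u1, u2, v]; linprog minimizes
    A_ub = [[b[0], b[1], -INPUT] for b in branches.values()]
    b_ub = [0.0] * len(branches)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[0.0, 0.0, INPUT]], b_eq=[1.0],   # normalize v * x_k = 1
                  bounds=[(0, None)] * 3)
    return -res.fun

scores = {name: ccr_efficiency(name) for name in branches}
for name, s in sorted(scores.items()):
    print(f"Branch {name}: efficiency = {s:.3f}")
```

Branches A, C, and E score 1, branch D scores approximately 0.92, and the dominated branch B falls below both.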

Note that branch A is deemed efficient, even though it has far fewer loans than branch C and only minimally larger deposits. Even more extreme, a branch with $1 more in deposits than branch A and a total of $0 in loans would also be efficient. In more advanced formulations of DEA, the basic DEA formulation can be altered to exclude branches such as A from the efficient set (Charnes, Cooper, Lewin, and Seiford 1994).

As the number of dimensions increases, a dominant relationship, such as that between branches B and C, becomes less likely. Consequently, direct comparisons become less useful, and the need to use DEA to find HCUs increases. Further, once two dimensions are exceeded, the graphical approach is no longer practical and a mathematical formulation is required.
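The model the text cites as (1)-(4) and (5)-(9) is the standard Charnes, Cooper, and Rhodes (1978) formulation; a sketch of the ratio form and its linear programming equivalent follows, with reconstructed notation (u^sub r^ and v^sub i^ are the output and input weights for the DMU k being evaluated).

```latex
% Ratio (fractional) form, corresponding to (1)-(4):
\max_{u,v} \; E_k = \frac{\sum_r u_r y_{rk}}{\sum_i v_i x_{ik}}
\quad \text{s.t.} \quad
\frac{\sum_r u_r y_{rj}}{\sum_i v_i x_{ij}} \le 1 \;\; \forall j,
\qquad u_r, v_i \ge 0.

% Linearized (multiplier) form, corresponding to (5)-(9):
\max_{u,v} \; \sum_r u_r y_{rk}
\quad \text{s.t.} \quad
\sum_i v_i x_{ik} = 1,
\qquad
\sum_r u_r y_{rj} - \sum_i v_i x_{ij} \le 0 \;\; \forall j,
\qquad u_r, v_i \ge 0.
```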

Efficiency is defined as the ratio of outputs to inputs. The objective is to find a set of weights that will give the DMU evaluated the highest possible efficiency, subject to the constraint that no DMU efficiency can exceed one. These free ranging weights are the mathematical expression of goal diversity and are to be contrasted with performance criteria that express a rigid formula for evaluation. A potential criticism of DEA is that it allows weights on outputs or inputs that are too free ranging. That is, DEA will deem efficient any DMU that excels on only one of many outputs and does not have an appropriate balance of outputs. This potential problem can be corrected in many ways, such as the incorporation of maximum weights on outputs (Roll and Golany 1993) or by implementing upper and/or lower bounds for the ratio of the weights (Thompson, Singleton, Thrall, and Smith 1996).

The efficiencies listed on Table 2 are the result of 5 separate iterations of (5)-(9), one for each of the 5 units. Naturally, the relative weight assigned to loans is heaviest for branch A, while the relative weight assigned to deposits is heaviest for branch E.

It should be noted that (5)-(9) represent only the original DEA model. Numerous extensions of this basic model exist (Charnes, Cooper, Lewin, and Seiford 1994).

3. Selecting an Appropriate DEA Formulation

This section concerns issues in model selection, while the following section is dedicated to results analysis. There are a number of varieties of the basic DEA model, as well as many differing methods of employing DEA models. We first consider a problem with model structuring relating to the variety of strategic outputs a firm may have and then consider the problems that large variations in unit size can have regarding the type of DEA model used.

3.1. Multiple Strategic Outputs and Spurious Efficiency

Typically, a multisite service firm desires strategic consistency among DMUs. Customers may visit a DMU because they assume that each DMU will represent the same corporate philosophy. However, within an organization it is not unusual for DMUs to focus on different goals because of local demographic factors or their position within the firm. For example, an upscale department store chain may also have a chain of discount stores where off-season or unfashionable merchandise is sold. The discount stores have different standards for display, customer service, employee mix, etc., and one would expect them to have qualitatively different outputs and inputs than regular stores.

Retail banking is a prime example of an industry where DMUs have differing goals. Researchers agree that a difference in focus occurs in different branches of the same bank, though there is disagreement on the nature of the conflict. Chase, Northcraft, and Wolf (1984) indicate that two conflicting missions exist: “fast service” and “relationship banking.” Ryan (1993) delineates some specific branch missions, such as attracting merchant deposits versus consumers. Zenios, Zenios, Agathocleous, and Soteriou (1995) characterize three strategic emphases of Bank of Cyprus branches based on location: coastal branches focus on the cash transactions demanded by the tourist trade, urban branches focus on commercial accounts, and rural branches are presumably for personal accounts. The relationship between branch mission and location is also noted by Sherman and Ladino (1995), where sample branch types are said to include urban, suburban, and shopping mall, each with a differing mix of services provided. Kamakura, Lenartowicz, and Ratchford (1996) used a clusterwise translog cost function to determine five different general branch types, which agreed with the managerial perception of five such types.

The differences in strategic goals are important since branches often cannot pursue these goals simultaneously; they generally have to focus to be successful. For example, the relationship between the loan generation and transaction processing abilities of bank branches has been found to be “not highly correlated” (Oral, Kettani, and Yolalan 1992, p. 173). Service quality and productivity have been found to be negatively correlated in retail banking (Roth and Jackson 1995).

A modeling challenge is generated by the existence of differing strategic foci. In addition to the traditional model (1)-(4), consider the following,

A justification of maximization, rather than minimization or averaging, in (10) involves the basic tenets of DEA: a DMU should be judged in its best possible light. In this case, the maximum E^sub k^′ should be associated with the y^sub j^ ∈ s that form the strategic focus of the branch. Note that instead of the traditional one LP needed to determine efficiency for each DMU, this approach requires S LPs.
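Based on the description above, expression (10) amounts to rating each branch by its best score across the S strategically focused models; a reconstructed sketch, with assumed notation:

```latex
E'_k = \max_{s \in S} E_k(s),
\qquad \text{where } E_k(s) \text{ is the DEA efficiency of DMU } k
\text{ under the model restricted to strategic output set } s.
```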

In plain language, units should be segregated according to strategic direction and a different model constructed for each group. This is not a point that is new in a theoretical sense. From the inception of DEA it has been understood that it should be applied only to like units. However, the practical application of this tenet has fallen short of the theoretical goal. None of the 18 studies listed in Table 1 adopt this approach strictly. The spirit of many DEA applications may be best exemplified by Schaffnit, Rosen, and Paradi (1997, p. 278), who noted four basic strategic groups of branches in their study, but combined all of them in the same model, stating that “an intrinsic feature of DEA is that . . . each relatively inefficient branch will be automatically compared to its peer group: it is the set of efficient branches with the most similar mixes of inputs and outputs.” This is true for strategic outputs, but it ignores the units for whom some of the outputs or inputs are not strategic. For example, all the branches that have a strategic imperative to sell loans will have higher loans sold/loan officer ratios than other branches, and those branches that are meant to service deposit transactions will have higher deposit balances/teller ratios than other branches. But a DEA model that combines all branches has the ability to “mix and match” these strategic inputs and outputs, such as basing a portion of efficiency on the loans sold/teller ratio.
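A minimal sketch of scoring branches under separate, strategically focused models and keeping each branch's best rating, in the spirit of the maximization discussed above. With a single shared input, each single-output CCR score reduces to a simple ratio comparison; all figures and group labels below are invented.

```python
# Each strategic model uses only the output in that focus; with one input,
# single-output CCR efficiency reduces to (y/x) / max_j (y_j/x_j).
branches = {               # (input, loans, deposits) -- illustrative figures
    "A": (100.0, 4.0, 36.0),
    "C": (100.0, 20.0, 32.0),
    "E": (100.0, 36.0, 10.0),
}
strategic_models = {"loan focus": 1, "deposit focus": 2}   # output column per model

def focused_efficiency(name: str, col: int) -> float:
    """Single-output CCR score of one branch under one focused model."""
    ratio = branches[name][col] / branches[name][0]
    best = max(b[col] / b[0] for b in branches.values())
    return ratio / best

scores = {
    name: max(focused_efficiency(name, col) for col in strategic_models.values())
    for name in branches
}
for name, s in sorted(scores.items()):
    print(f"{name}: best score across focused models = {s:.3f}")
```

In a full application, each focused model would also be restricted to the branches in that strategic group, rather than scored over all branches as in this sketch.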

A few studies have partially implemented this idea. Athanassopoulos (1997), Oral and Yolalan (1990), Oral, Kettani, and Yolalan (1992), and Schaffnit, Rosen, and Paradi (1997) construct two different models, but apply them to all branches regardless of branch specialty. Alternatively, Kamakura, Lenartowicz, and Ratchford (1996) and Zenios, Zenios, Agathocleous, and Soteriou (1995) segregate branches into five and seven differing clusters, respectively, but still use only one model.

Some researchers have addressed this problem by selecting a homogeneous group of branches for study. Soteriou and Stavinides (1997) and Giokas (1991) both studied a small subset of branches of a bank that operate in the same markets and have the same strategic direction.

3.2. Variation in Unit Size

A strategic imperative in most multisite industries is to maintain a uniform DMU size. The uniform size facilitates cost reduction through, for example, reduced architectural design and building costs and reduced variation in equipment sizes. Uniform “cookie cutter” DMUs allow for easier personnel training and movement. Further, uniform DMUs may be desired in order to present a consistent appearance to the customer.

For many services, however, there exist extreme differences in unit size. In retailing, some firms have both regular stores and “superstores” that are vastly different in size. In the supermarket industry there is a move toward larger stores. The average new supermarket built by Safeway is twice the size of the stores built a decade ago (Safeway 1996). This is also true in banking. Branches within the same bank can differ by one or two orders of magnitude in assets managed or employees (see data in Al-Faraj, Alidi, and Bu-Bshait 1993; Sherman and Ladino 1995).

This size difference has an important effect on model choice. The original CCR model assumes constant returns-to-scale, but it has been shown that there are generally increasing returns-to-scale for bank branches (e.g., Zardkoohi and Kolari 1994; Drake and Howcroft 1994), though decreasing returns have also been found for very large branches (e.g., Giokas 1991). Consequently, the smallest branches would be deemed inefficient in a traditional DEA formulation even if they were performing at an efficient level for their size. For example, Al-Faraj, Alidi, and Bu-Bshait (1993) found that the only inefficient branches in the system studied were the three smallest, based on the number of accounts. Three other studies cited also use the CCR model. Taken literally, this signals management that only large branches are efficient and small branches should be enlarged. Of course, enlarging a small branch serving a small population base would be imprudent.

Variations on the CCR model allow for different returns-to-scale characterizations (such as Banker, Charnes, and Cooper 1984; Seiford and Thrall 1990) and should be used in any practical DEA study where scale economies are an issue. Despite the existence of these models, many researchers are unaware of them. As stated by Vassiloglou and Giokas (1990, p. 594), “(b)y construction, DEA assumes that [DMUs] do not benefit from economies of scale.”
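In the envelopment (input-oriented) form of the model, the returns-to-scale choice is a one-line difference: adding the convexity constraint Σλ^sub j^ = 1 converts the constant returns-to-scale (CCR) model into a variable returns-to-scale one, in the spirit of Banker, Charnes, and Cooper (1984). The three single-input, single-output branches below are invented so that productivity rises with size, mimicking the increasing returns discussed above.

```python
from scipy.optimize import linprog

# (input, output) for three branches of increasing size; invented figures with
# increasing returns-to-scale (output/input ratio rises with size).
units = [(1.0, 1.0), (2.0, 3.0), (4.0, 7.0)]

def efficiency(k: int, variable_returns: bool) -> float:
    """Input-oriented envelopment form: min theta s.t. sum(lam*x) <= theta*x_k,
    sum(lam*y) >= y_k, lam >= 0; VRS adds the convexity row sum(lam) == 1."""
    n = len(units)
    x_k, y_k = units[k]
    c = [1.0] + [0.0] * n                       # variables [theta, lam_1..lam_n]
    A_ub = [
        [-x_k] + [x for x, _ in units],         # sum(lam*x) - theta*x_k <= 0
        [0.0] + [-y for _, y in units],         # -sum(lam*y) <= -y_k
    ]
    b_ub = [0.0, -y_k]
    A_eq, b_eq = ([[0.0] + [1.0] * n], [1.0]) if variable_returns else (None, None)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

small_ccr = efficiency(0, variable_returns=False)   # smallest branch under CCR
small_vrs = efficiency(0, variable_returns=True)    # same branch under VRS
print(f"smallest branch: CCR = {small_ccr:.3f}, VRS = {small_vrs:.3f}")
```

The smallest branch is rated well below 1 under constant returns-to-scale but fully efficient once variable returns are allowed, which is exactly the distortion the Al-Faraj, Alidi, and Bu-Bshait result illustrates.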

Although it is best to use a variable returns-to-scale model when mixing unit sizes, some researchers have attacked this problem in a different manner. Zenios, Zenios, Agathocleous, and Soteriou (1995), Oral and Yolalan (1990), and Oral, Kettani, and Yolalan (1992) use a constant returns-to-scale model, but segregate the branches by size first and perform their analyses only on groups of branches of similar size.

4. Managerial Goals

Although DEA technically measures only productivity, DEA can and has been used for many different managerial objectives (Epstein and Henderson 1989). In the banking environment, DEA has been applied to the general managerial problems of evaluation, resource allocation, and classification. Evaluation includes the performance evaluation of managerial personnel and front-line workers. We also include the evaluation of potential sites for new branches and branch closures in this category. Resource allocation decisions include decisions about personnel additions/deletions and budgeted expenditures on other noninterest expenses, such as supplies and maintenance. Classification entails the identification of branches for nonfinancial managerial recognition, use as training facilities, and the establishment of personnel policies, such as pay grades or designation for further study. We argue that the form a DEA model should take depends upon the managerial goal of the study.

Table 1 lists banking DEA studies in conjunction with the specific managerial objectives pursued. The remainder of this section analyzes the relative merits of DEA versus the standard techniques and discusses the alterations of the basic DEA model that must be accomplished for each type of use.

4.1. Evaluation

Managerial evaluation for promotion and compensation and evaluation of physical sites for expansion have two important commonalities from a modeling perspective. Both forms of evaluation require explicit consideration of uncontrollable variables and an assessment as to the degree of inefficiency.

4.1.1. UNCONTROLLABLE VARIABLES. Variables are deemed “uncontrollable” in the sense that branch management cannot affect their level, e.g., population located within a two-mile radius of the branch, average age, income, or other demographic characteristic of this population, distance to a major highway, and competitive density. Consider the role of uncontrollable variables in the three distinct tasks of evaluation of the strength of branch management, site selection, and the identification of branches as candidates for closure. In evaluating the strength of management at a particular site, including uncontrollable variables in the DEA appropriately adjusts for their impact on the production of outputs. In the case of site selection, management is concerned with determining the relative impact of uncontrollable variables so that the opportunity for unit effectiveness is maximized even if unit management is only average. Conversely, for unit closure, uncontrollable variables should not be considered, just results. We argue that while it may be possible to use DEA for these tasks, the basic DEA formulation must be altered and that the results analysis is more complex than is typically assumed.

The relative advantages Of DEA versus regression for discerning the effects of an uncontrollable variable have been known since an early time in the DEA literature. In the first application Of DEA, schools in the Texas, U.S.A., school system were divided into those who instituted a management program known as “Program Follow Through” versus a control group, and DEA was used to determine program effectiveness by comparing the best practice frontier, rather than comparing average performance through regression (Charnes, Cooper, and Rhodes 1981). DEA has the capability to determine what can be done given good management and a set of resources.

The predominant method used for discerning the effects of uncontrollable variables is regression. Regression has been used in site selection for bank branches (Doyle, Fenwick, and Savage 1979), as well as the related service sector site selection problem for hotels (Kimes and Fitzsimmons 1990). Although DEA has been used for site selection in some environments (Thompson, Singleton, Thrall, and Smith 1986), this approach would be problematic for the multiple uncontrollable variables involved in site and personnel evaluation in a large branch network.

In the case of selecting candidate DMUs for closure, imagine an extremely well-managed bank branch with a poor set of uncontrollable variables. Would such a branch be passed over for closure on the grounds that “they do so well, considering they are robbed weekly”? Similar arguments apply to personnel evaluation: the questions of “how good is this person?” and “do we need a person in this position?” require different methodologies.

4.1.2. EFFICIENCY RATINGS AS SUCCESS MEASURES. One approach for dealing with uncontrollable variables is problematic and speaks to the heart of the difficulties in using DEA for evaluative purposes. Epstein and Henderson (1989) suggest a two-stage approach where the first stage consists of uncontrollable variables used as inputs. In the second stage, regression analysis is undertaken with the first stage efficiency scores as the dependent variable and the uncontrollable variables as the independent variables.

This technique makes the implicit assumption that DEA efficiency scores can somehow provide a ratio ordering of DMUs. Unfortunately, the relative value of efficiency scores cannot be used reliably for such purposes.

It has long been noted that DEA efficiency scores do not provide a rank ordering of DMUs when the efficiency scores relate to different reference sets, which is nearly always the case for large systems in practice. Comparing the efficiency scores of DMUs that relate to different reference sets essentially compares apples to oranges. Further, it is often incorrectly assumed that efficiency scores can be directly interpreted as the reduction in inputs or increase in outputs possible for an inefficient DMU. This stems from the idea of DEA providing “radial” efficiency; that is, all inefficiencies are forced to be an equiproportionate overuse/underachievement of inputs/outputs. One manifestation of this, seen in three papers cited in Table 1, is to assume that an efficiency score of α implies that all inputs can be proportionately reduced (or all outputs proportionately increased) by the fraction 1 − α to reach the efficient frontier.

Unfortunately, none of the studies listed in Table 1 use a method that allows efficiency scores to accurately assess the distance from a DMU to the efficient frontier. Consequently, both of the relationships between the efficiency score and the distance to the efficient frontier discussed in the previous paragraph can be improved upon significantly. This is because the traditional DEA model often chooses a reference set that is not the closest point on the efficient frontier to the DMU, just one which maximizes the radial efficiency (Parkan 1994, p. 206); that is, the point where a ray from the origin through the DMU data point intersects the efficient frontier. In Figure 1, DMU B [15, 25] is judged to have an efficiency of 0.83 because it is compared to point HCU^sub B^ [18.1, 30.2]. The closest point on the efficient frontier to DMU B is actually [15, 30.5]. These differences can be more extreme for efficient frontiers of different shapes.

Further, by nature of the LP conversion, traditional DEA models must be either output or input oriented. This causes DEA models to choose a DMU reference set that is most similar to the DMU on the orientation chosen, rather than similar to the combined DMU inputs and outputs (Parkan 1994, p. 213). In fact, this choice forces the projection onto the frontier to move along either the input or the output dimensions, but not both (Frei and Harker 1995). Consequently, the reference set given by the model output may not represent the shortest distance to the efficient frontier. Haag and Jaska (1995) attempted to ameliorate this shortcoming by using a shortest projection technique that moves to the efficient frontier in both the input and output space. Unfortunately, their projection is the shortest projection to a single facet on the frontier and not necessarily the shortest projection to the entire frontier.
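The gap between the radial reference point and the truly closest frontier point can be checked directly in two dimensions. The frontier vertices below are assumptions (Figure 1's coordinates are not fully reproduced here), so the numbers differ from the [18.1, 30.2] and [15, 30.5] cited above, but the qualitative relationship holds: the perpendicular projection is nearer to branch B than the radial one.

```python
import math

# Assumed efficient-frontier vertices (loans, deposits), ordered along the
# frontier, and the inefficient branch B; all coordinates are illustrative.
frontier = [(4.0, 36.0), (20.0, 32.0), (36.0, 10.0)]
b = (15.0, 25.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_on_segment(p, a, c):
    """Perpendicular (Euclidean) projection of p onto segment a-c, clamped."""
    dx, dy = c[0] - a[0], c[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * dx, a[1] + t * dy)

def radial_point(p, a, c):
    """Intersection of the ray from the origin through p with segment a-c, if any."""
    dx, dy = c[0] - a[0], c[1] - a[1]
    denom = dx * p[1] - dy * p[0]
    if abs(denom) < 1e-12:
        return None
    t = (a[1] * p[0] - a[0] * p[1]) / denom
    return (a[0] + t * dx, a[1] + t * dy) if 0.0 <= t <= 1.0 else None

facets = list(zip(frontier, frontier[1:]))
radial = next(pt for a, c in facets if (pt := radial_point(b, a, c)) is not None)
closest = min((closest_on_segment(b, a, c) for a, c in facets),
              key=lambda q: dist(b, q))
print(f"radial HCU {radial}, distance {dist(b, radial):.2f}")
print(f"closest frontier point {closest}, distance {dist(b, closest):.2f}")
assert dist(b, closest) < dist(b, radial)
```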

To illustrate the vastly different implications that can come from the various DEA-based models, we compare the methods of Sherman and Gold (1985), Haag and Jaska (1995), and Frei and Harker (1995) (Table 3), although it should be noted that other methods exist. Sherman and Gold used the additive, input-oriented, constant returns-to-scale DEA model to determine the efficiency of 14 bank branches. Three inputs and four outputs were used. The reference set for each DMU is determined by each DMU constraint with a non-zero dual price. For branch 7, Sherman and Gold determined that specific input decreases of 22%, 22%, and 35% were required to the three inputs (while holding outputs constant) to make the branch efficient. Haag and Jaska used an additive, variable returns-to-scale model that is neither input nor output oriented to show that decreases of 3% to each input and increases of 9%, 5%, 5%, and 3% to each output would make the branch efficient. These are obviously vastly different conclusions and it is important to understand the cause of these differences. Some of the difference can be explained by the returns-to-scale assumptions. Using Sherman and Gold’s methodology under a variable returns-to-scale assumption shows that a reduction of inputs of 7%, 7%, and 7% is required to make this branch efficient. (The identical input reduction percentages are not coincidental. The variable returns-to-scale model used reveals a uniform percentage differential to the closest facet on the frontier.) This is much closer to Haag and Jaska’s recommendation, but is still significantly different because of the ability of the Haag and Jaska method to simultaneously move inputs and outputs. Unless there is an explicit restriction to hold outputs or inputs constant, the ability to adjust both simultaneously is superior in terms of determining what a branch would look like if efficient.

However, the Haag and Jaska method determines the shortest distance from a given DMU to its associated facet. Although it accurately determines the shortest projection onto a facet of the frontier, there is no assurance that this is the shortest projection to the entire frontier. In fact, in the case of the Sherman and Gold data, this projection is not the closest projection. If the shortest projection to the entire frontier is computed using the methodology of Frei and Harker, then branch 7 would need reductions of the inputs of 1.41%, 0.59%, and 17.06%, while simultaneously increasing the outputs by 0.27%, 0.14%, 8.00%, and 0.09%.

This example shows that the choice of DEA model, and the implications of that choice, are serious in managerial terms. Rather than requiring a slight decrease (3%) in rent, FTEs, and supplies while simultaneously requiring outputs to increase between 3% and 9%, the shortest projection onto the frontier shows that by concentrating on reducing supplies significantly, efficiency can be achieved without altering the other inputs and only moderately altering Output 3. The different recommendations that arise here highlight the importance of the appropriate reference set for an inefficient DMU. If a reference set is meant to compare against those DMUs on the frontier that most closely resemble the inefficient DMU, then, without compelling reasons to restrict any of the dimensions, the reference point should be the shortest projection.

4.1.3. EFFICIENCY RATINGS AND UNIT PROFITABILITY. A separate problem in the interpretation of efficiency scores can occur when there is a profit function relating the outputs that exists but is unknown. For example, it may be argued that there is a quantifiable dollar value to the bank for each point on a customer satisfaction scale and a quantifiable dollar value for each loan originated. These values exist, but are unknown. If these dollar values were known, branch profitability statements would be superior to DEA. Since the values are not known, DEA is useful. In such a case of an existing but unknown profit function, however, the efficiency scores can provide misleading information.

While it is always desirable to become more efficient, i.e., to produce more of any output given the same level of inputs or to produce the same level of output using fewer inputs, it is not true that every efficient DMU is per se better than every inefficient DMU. In fact, it is entirely possible for an inefficient DMU to be superior to an efficient DMU. Consider the hypothetical production function illustrated in Figure 2, where one input, x, is used in the production of a single output, y, and profit, z, is the overriding objective. Point P0 is the only efficient point under the CCR model (it has the highest output-to-input ratio). When variable returns to scale are considered, both P2 and P3 are also efficient.

The DEA technique measures efficiency, defined as producing the greatest ratio of outputs to inputs under some most favorable weighting, but firms are concerned with profit maximization. In classic microeconomic theory, this means producing up to the point at which marginal cost equals marginal return (marginal profit is zero), not at the point of maximum productivity. Suppose that the input cost and output price are given by c1, c2 > 0. The dashed lines in Figure 2 indicate lines of equal profit.
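The gap between efficiency and profit can be made concrete with assumed numbers; the prices c1 and c2 and both points below are hypothetical illustrations in the spirit of Figure 2.

```python
# hypothetical (input, output) points: P0 has the best output/input ratio
# (CCR-efficient); Q lies below the CRS ray through P0 (CCR-inefficient)
c1, c2 = 1.0, 2.0                  # assumed input cost and output price
P0 = (1.0, 2.0)                    # ratio y/x = 2.0
Q = (4.0, 5.0)                     # ratio y/x = 1.25

def profit(x, y):
    return c2 * y - c1 * x         # z = c2*y - c1*x

# profit(*P0) = 3.0 while profit(*Q) = 6.0: the inefficient unit is more profitable
```

Because P0 operates far below the profit-maximizing scale, the CCR-efficient point earns half the profit of the inefficient one, which is precisely the danger of reading efficiency scores as a ranking of unit worth.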

Considered in isolation, the basic DEA model requires extensive modification to provide accurate data and would seem a poor choice for evaluative purposes. However, to see the benefits of DEA in this area, it must be viewed in comparison to the alternatives discussed earlier. DEA does not encourage the dilution of current results to get an easier goal in the next accounting period, so it does not unduly reward poor performers and penalize better performers. DEA reduces a potentially large amount of data to a single number. To get such a single number without using DEA, performance on multiple goals is often combined into a single metric through a set weighting scheme across all units. This highlights a classic benefit of DEA: any arbitrary set weighting scheme may punish those units that excel in a particular strategic area and provide value to the firm. DEA allows each unit to choose the weights that make it perform most favorably. Thus, no predetermined weighting scheme is necessary, only the identification of the evaluation criteria.

4.2. Resource Allocation

When the issue is where to direct marginal increases of resources, DEA can provide some guidance, but the theoretical underpinnings of DEA must be considered. There are two cases, depending on the characterization of returns to scale. Further, DEA provides no guidance as to the suitability of deploying incremental resources to inefficient units, an idea that may seem counterintuitive and suboptimal, but one that is not precluded by theory.

In determining where to deploy additional resources, management seeks to maximize the incremental return of outputs for the given incremental increase in inputs. In economic terms, managers seek to deploy incremental resources at DMUs with the highest levels of marginal productivity. DEA assesses the relative position of a given DMU with respect to the empirically derived efficient production frontier. DMUs located on the production frontier are rated efficient. Being rated efficient does not ensure a high rate of marginal productivity. It is entirely possible that a unit operating under an inferior production function may yet have the highest marginal productivity.

In the case where constant returns to scale prevail (as is the case with the CCR model), the situation is clearer. Here, average productivity is the same as marginal productivity. DEA identifies DMUs with the highest levels of average productivity as efficient. Deployment of additional resources should be considered only for efficient DMUs.

The most common use of the DEA banking studies cited is for resource allocation. Empirical evidence suggests that efficient branches should be considered for resource additions. Indeed, the inefficient branches may have appropriate levels of personnel, while efficient DMUs are most in need of increased resources. Consider the area of personnel resources. DMUs with appropriate staffing levels may be deemed inefficient, while those DMUs that are understaffed will be deemed efficient. This occurs due in part to the nature of queuing systems and in part to the nature of DEA.

Queuing theory reveals an inherently nonlinear relationship between customer service (e.g., waiting times) and personnel utilization. In order to achieve reasonable levels of customer service, some idle time must be tolerated. Or put another way, the most efficient queuing systems (in terms of both DEA and server utilization) offer poor service. Empirical banking studies have found this negative correlation between productivity and service quality in practice (Roth and Jackson 1995). Understaffed branches (those with very high personnel utilization) will be deemed DEA efficient due to the high number of transactions processed per worker. This will be the case even if measures of customer service are included as an output. The DEA will simply place more weight on the output dimension of transactions processed and relatively less on the dimension of customer service.
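The nonlinearity can be seen directly from the standard Erlang-C (M/M/c) waiting-time formula. The single-teller branch, service rate, and arrival rates below are hypothetical.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Expected time in queue (Wq, in hours) for an M/M/c system (Erlang C)."""
    a = lam / mu                              # offered load
    rho = a / c                               # server utilization
    assert rho < 1, "system must be stable"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0   # prob. an arrival waits
    return p_wait / (c * mu - lam)

mu, c = 12.0, 1                               # one teller serving 12 customers/hour
waits = [60 * erlang_c_wait(lam, mu, c) for lam in (6.0, 9.0, 11.0)]
# utilization of 50%, 75%, and ~92% yields mean queue waits of 5, 15, and 55
# minutes: the last few points of utilization are bought at enormous service cost
```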

Oral, Kettani, and Yolalan (1992) found that efficient branches required additional resources. They conducted their DEA study along with a time-and-motion study. Of the 11 branches deemed most efficient by DEA (10 with efficiency = 1), nine were found to have a shortage of personnel according to the results of the time-and-motion study.

An appropriate method for determining personnel needs in a service firm consists of a queuing study supplemented by a tour scheduling algorithm similar to Buffa, Cosgrove, and Luce (1976). This involves determining the customer arrival pattern by time of day, day of week, etc., and calculating the personnel requirements by multiplying the transaction mix by the standard transaction times derived from a time-and-motion study. After personnel requirements are established in accordance with service standards through queuing analysis, personnel are assigned to shifts by tour scheduling. For large operations such as telephone call centers or bank check processing centers, this is cost effective, but it is prohibitively expensive to undertake for typical bank branches.
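The requirements step of such a study (transaction mix multiplied by standard times) can be sketched as follows. The standard minutes, forecast, and the 80% target utilization, which builds in the idle time needed for service, are all assumptions for illustration.

```python
import math

# hypothetical standard minutes per transaction from a time-and-motion study
standard_minutes = {"deposit": 1.5, "withdrawal": 1.0, "new_account": 12.0}

# hypothetical forecast: transactions expected in each 30-minute period
forecast = [
    {"deposit": 40, "withdrawal": 30, "new_account": 1},
    {"deposit": 90, "withdrawal": 60, "new_account": 2},
]

def tellers_required(period, target_utilization=0.8):
    """Workload minutes divided by productive minutes per teller per period."""
    workload = sum(standard_minutes[t] * n for t, n in period.items())
    return math.ceil(workload / (30 * target_utilization))

needs = [tellers_required(p) for p in forecast]   # [5, 10] for the data above
```

These per-period requirements would then feed a tour-scheduling model of the Buffa, Cosgrove, and Luce type to assign actual shifts.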

An inefficient DEA rating is not necessarily an indication of over-capacity, as DEA inputs cannot encompass the typical reason for an apparent employee excess: nonuniform customer arrival rates during the day and the need to build in idle time to ensure an appropriate level of customer service. Sherman and Gold (1985, p. 309) note that a discussion with management resolved that “inefficiency [ratings are] due to clustering of customer transactions.” Banks that have greater variation in their customer arrival profiles require a greater ratio of employees to transactions. However, this should not be interpreted as license to ignore DEA results for inefficient banks. Rather, the point is that DEA, although useful, should not necessarily be used as a prescription for action but rather a component of the analysis.

4.3. Classification

Perhaps the most appropriate use of DEA is merely establishing the efficient set versus the inefficient set. Separating the efficient units from the inefficient flags them for further study. Specifically, the efficient units may have a use in employee training (Sherman and Ladino 1995), which is especially useful for the many service firms that have large franchised networks. It is useful and common for the liaisons between franchise owners and management to have specific experience in the operations of the firm. This experience is gained at the company-owned units that are deemed to be good exemplars of practice, which can be determined by DEA.

Merely identifying good units can aid in corporate-wide continuous improvement. With the limited resources available for extensive operational audits, the small set of efficient branches can be examined more closely for the staffing, layout, management style, competitive conditions, and demographics that form the underlying basis for their success. In this manner DEA becomes "part of a continuous process of information generation and understanding" (Vassiloglou and Giokas 1990).

A caveat is necessary when using DEA for classification. Because DEA determines efficiency scores by comparing DMUs to extreme points, it is more susceptible to errors in the data than more traditional data analysis techniques such as regression. For example, in analyzing data with regression, an error by an order of magnitude in a single data point could potentially skew the results but will not likely bias the entire analysis. The check for robustness is the removal of the errant data point and a comparison of the results. Certainly the results will be different but, especially for larger data sets, the tendencies will likely be unchanged. However, in DEA an error in a single DMU, introduced by either mismeasurement or data entry, can quite conceivably bias the entire analysis by causing that DMU to be efficient and dominate most other DMUs. Removing this one DMU, especially one that was mistakenly in most reference sets, could drastically change the results.

The composition of the set of efficient DMUs can be sensitive to small errors or randomness in the data (Metters, Vargas, and Whybark 1995). In general, DEA assumes that there are no random fluctuations, so that any deviation from the efficient frontier is inefficiency (Berger, Hancock, and Humphrey 1993). Charnes, Cooper, Lewin, Morey, and Rousseau (1985) allow for variation in a single variable. Charnes and Neralic (1990) allow for variation in multiple variables for a single DMU. Traditional LP sensitivity analysis is of limited use for DEA sensitivity analysis even in these simple cases, which correspond to allowing variation in both the coefficient matrix and the vector of right-hand-side coefficients. The method of Thompson, Dharmapala, and Thrall (1994) allows for simultaneous variation of all variables for the CCR model. This method is conservative in its estimation of the range of variation for which a DMU will remain efficient; the actual range may be much larger. It is also limited to the CCR model, which assumes constant returns to scale. Otherwise, chance-constrained DEA (Land, Lovell, and Thore 1993) or general global sensitivity analysis (Wagner 1995) represent additional alternatives. These methods, however, require specification of the probability distributions of the input and output measures. In the case of chance-constrained DEA, the mathematical programming model is no longer solvable as an LP. In the case of global sensitivity analysis, an extensive simulation analysis is required.

Metters, Vargas, and Whybark (1995) address the robustness of DEA classification in an ad hoc but intuitively appealing fashion by strategically perturbing the data and performing additional DEA runs. For efficient DMUs, these perturbations involve slightly decreasing outputs and increasing inputs within the range of measurement error, and vice versa for the inefficient DMUs. DEA is then performed on the perturbed data. Results are assessed as robust if the composition of the efficient and inefficient sets remains relatively unchanged. This approach has the advantage of being easy to implement and interpret.
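For the simplest one-input, one-output case, where CCR efficiency reduces to each unit's output/input ratio relative to the best ratio, one pass of this perturbation check can be sketched as below. The branch data and the 3% error band are assumptions; the sign convention handicaps the efficient units and helps the inefficient ones.

```python
import numpy as np

def crs_scores(x, y):
    """Single-input, single-output CCR scores: each ratio vs. the best ratio."""
    r = y / x
    return r / r.max()

def perturbed_classification(x, y, delta=0.03):
    """One perturbation pass: inputs up and outputs down by delta for efficient
    units, the reverse for inefficient units, then re-classify."""
    eff = crs_scores(x, y) >= 1 - 1e-9
    sign = np.where(eff, 1.0, -1.0)
    new_eff = crs_scores(x * (1 + delta * sign),
                         y * (1 - delta * sign)) >= 1 - 1e-9
    return eff, new_eff

# hypothetical branches; branch 3's ratio (1.97) sits just under branch 0's (2.0)
x = np.array([2.0, 4.0, 5.0, 3.0])
y = np.array([4.0, 4.0, 8.0, 5.9])
before, after = perturbed_classification(x, y)
# the efficient set flips from {branch 0} to {branch 3}, so the original
# classification is not robust to 3% measurement error
```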

5. Summary

The purpose of this manuscript is to provide some practical guidelines for the application of DEA in assessing performance within a firm. The promise of DEA is that the difficult, cumbersome task of performance evaluation of geographically dispersed units can be reduced to a single number. A number of specific rules are suggested here that appear to be violated in practice:

1. Model selection

1a. Avoid spurious efficiency. Units with different strategic emphases should be in separate models.

1b. Use variable returns-to-scale models when modeling units of largely varying size.

2. Results analysis

2a. DEA is more effectively used in efficient unit identification than in performance evaluation and resource allocation.

2b. When used in personnel evaluation, uncontrollable variables should be considered and specific efficiency values do not constitute a ranking.

2c. When used for unit closure, uncontrollable variables should not be considered.

2d. When used for resource allocation, the efficient units require scrutiny, rather than the inefficient units.

2e. Efficient does not necessarily mean effective. If true goal diversity does not exist, inefficient units may be better than efficient units (Figure 2).

2f. DEA is the start of analysis, not the end.

Violating each of these rules carries a potential "penalty." If care is not taken to avoid spurious efficiency (rule 1a), then potentially inefficient DMUs will be ranked by a DEA model as efficient, or the degree of inefficiency of a DMU can be understated and it may be compared to an improper reference set of DMUs not sharing the same strategic goals. If a constant returns-to-scale model is used when a variable returns model is needed (rule 1b), then there will be distortions of the efficiency ratings for the largest and smallest units. For example, all large units may be classified as efficient, even though they are poorly run.

Applying DEA results to effect pay raises or allocate resources is far riskier than using it merely to identify efficient units (rule 2a). At the extreme, managers can easily manipulate DEA results by focusing on only one output to the exclusion of others, or by intentionally using less of a single resource, thereby artificially pushing themselves to the efficient frontier. If DEA is used in personnel evaluation, uncontrollable variables must be considered, or else a manager might continue to get substantial (poor) pay increases just for being a branch manager in a desirable (undesirable) location (rule 2b). Alternatively, when evaluating the worth of units, rather than managers, the reverse is true: a poor unit should not be kept open and a good unit closed just because the manager of the poor unit is heroic and causes a smaller loss than forecast (rule 2c).

When applying DEA to allocate resources, the reverse of the traditional logic holds: the efficient DMUs are those that require attention, as they may be efficient because they are stretching resources beyond the limit and may be able to use more (rule 2d).

DEA should only be used under conditions of true ambivalence among outputs. If this is not true, inefficient DMUs may actually be the best performers (rule 2e). Finally, DEA is a useful tool, but not one that should ordinarily be used in isolation (rule 2f). It is typically necessary to augment DEA with queuing or industrial engineering studies, cost accounting analyses, etc.

Some of these rules have been noted in previous DEA studies. But, the rules for applying DEA have been fragmented throughout the literature. Consequently, applications continue to lag behind theory. Given a set of rules to apply, it is hoped that applications can converge on a consistent and correct implementation.


ACHABAL, D., J. HEINEKE, AND S. MCINTYRE (1984), "Issues and Perspectives on Retail Productivity," Journal of Retailing, 60, 3, 107-127.

AL-FARAj, T., A. ALIDI, AND K. Bu-BSHAIT (1993), “Evaluation of Bank Branches by Means of Data Envelopment Analysis,” International Journal of Operations and Production Management, 13, 9, 45-52.

ATHANASSOPOULOS, A. (1997), "Service Quality and Operating Efficiency Synergies for Management Control in the Provision of Financial Services: Evidence from Greek Bank Branches," European Journal of Operational Research, 99, 2, 300-313.

BANKER, R., A. CHARNES, AND W. COOPER (1984), “Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis,” Management Science, 30, 9, 1078-1092.

- AND R. MOREY (1993), "Integrated System Design and Operational Decision Making for Service Sector Outlets," Journal of Operations Management, 11, 1, 81-98.

BERGER, A., D. HANCOCK, AND D. HUMPHREY (1993), "Bank Efficiency Derived from the Profit Function," Journal of Banking and Finance, 17, 2/3, 317-347.

BUFFA, E., M. COSGROVE, AND B. LUCE (1976), "An Integrated Work Shift Scheduling System," Decision Sciences, 7, 4, 620-630.

CHARNES, A., W. COOPER, A. LEWIN, R. MOREY, AND J. ROUSSEAU (1985), "Sensitivity and Stability Analysis in DEA," Annals of Operations Research, 2, 139-156.

-, -, -, AND L. SEIFORD (1994), "Basic DEA Models," chapter 1 in Data Envelopment Analysis: Theory, Methodology and Applications, A. Charnes, W. Cooper, A. Lewin, and L. Seiford (eds.), Kluwer Academic Publishers, Boston.

-, -, AND E. RHODES (1978), "Measuring Efficiency of Decision-making Units," European Journal of Operational Research, 2, 6, 428-449.

-, -, AND – (1981), “Evaluating Program and Managerial Efficiency: An Application of Data Envelopment Analysis to Program Follow Through,” Management Science, 27, 6, 668-687.

-, AND L. NERALIC (1990), “Sensitivity Analysis of the Additive Model in Data Envelopment Analysis,” European Journal of Operational Research, 48, 3, 332-341.

CHASE, R., G. NORTHCRAFT, AND G. WOLF (1984), "Designing High-Contact Service Systems: Application to Branches of a Savings and Loan," Decision Sciences, 15, 4, 542-555.

DOYLE, P., I. FENWICK, AND P. SAVAGE (1979), "Management Planning and Control in Multi-branch Banking," Journal of the Operational Research Society, 30, 1, 105-111.

DRAKE, L. AND B. HOWCROFT (1994), "Relative Efficiency in the Branch Network of a UK Bank: An Empirical Study," Omega, 22, 1, 83-90.

EPSTEIN, M. AND J. HENDERSON (1989), “Data Envelopment Analysis for Managerial Control and Diagnosis,” Decision Sciences, 20, 1, 90-119.

FIZEL, J. AND T. NUNNIKHOVEN (1993), "The Efficiency of Nursing Home Chains," Applied Economics, 25, 1, 49-55.

FREI, F. AND P. HARKER (1995), Projections onto Efficient Frontiers: Theoretical and Computational Extensions of DEA, Working Paper, Wharton Financial Institutions Center, University of Pennsylvania, Philadelphia, PA.

GIOKAS, D. (1991), "Bank Branch Operating Efficiency: A Comparative Application of DEA and the Loglinear Model," Omega, 19, 6, 549-557.

GOOD, W. (1984), "Productivity in the Retail Grocery Trade," Journal of Retailing, 60, 3, 81-97.

HAAG, S. AND P. JASKA (1995), “Interpreting Inefficiency Ratings: An Application of Bank Branch Operating Efficiencies,” Managerial and Decision Economics, 16, 1, 7-14.

IIDA, J. (1991), "US Bancorp Seeks Savings via Branch-Analysis System," American Banker, 156, 195, 3.

KAMAKURA, W., T. LENARTOWICZ, AND B. RATCHFORD (1996), "Productivity Assessment of Multiple Retail Outlets," Journal of Retailing, 72, 4, 333-356.

KIMES, S. AND J. FITZSIMMONS (1990), "Selecting Profitable Hotel Sites at La Quinta Motor Inns," Interfaces, 20, 2, 12-20.

LAND, K., C. LOVELL, AND S. THORE (1993), "Chance-Constrained Data Envelopment Analysis," Managerial and Decision Economics, 14, 6, 541-554.

LOVELL, C. AND J. PASTOR (1997), “Target Setting: An Application to a Bank Branch Network,” European Journal of Operational Research, 98, 2, 290-299.

MAHAJAN, J. (1991), "A Data Envelopment Analytic Model for Assessing the Relative Efficiency of the Selling Function," European Journal of Operational Research, 53, 2, 189-205.

METTERS, R., V. VARGAS, AND D. WHYBARK (1995), Analysis of the Sensitivity of DEA to Data Errors, Working Paper, Goizueta Business School, Emory University, Atlanta, GA.

ORAL, M., O. KETTANI, AND R. YOLALAN (1992), "An Empirical Study on Analyzing the Productivity of Bank Branches," IIE Transactions, 24, 5, 166-176.

– AND R. YOLALAN (1990), “An Empirical Study on Measuring Operating Efficiency and Profitability of Bank Branches,” European Journal of Operational Research, 46, 3, 282-294.

PARKAN, C. (1987), “Measuring the Efficiency of Service Operations: An Application to Bank Branches,” Engineering Costs and Production Economics, 12, 1-4, 237-242.

– (1994), “Operational Competitiveness Ratings of Production Units,” Managerial and Decision Economics, 15, 3, 201-221.

PASTOR, J. (1994), How to Discount Environmental Effects in DEA: An Application to Bank Branches, Working Paper, Universidad de Alicante, Alicante, Spain.

PIHL, W. AND L. WHITMYER (1994), "Making Branch Performance Relevant," Bank Management, 70, 4, 58-63.

ROLL, Y. AND B. GOLANY (1993), "Alternative Methods of Treating Factor Weights in DEA," Omega, 21, 1, 99-109.

ROTH, A. AND W. JACKSON (1995), "Strategic Determinants of Service Quality and Performance: Evidence from the Banking Industry," Management Science, 41, 11, 1720-1733.

RYAN, T. (1993), "Who Really Pays for Branches," Journal of Retail Banking, 15, 2, 15-20.

SAFEWAY (1996), Safeway Annual Report, Pleasanton, CA.

SCHAFFNIT, C., D. ROSEN, AND J. PARADI (1997), "Best Practice Analysis of Bank Branches: An Application of DEA in a Large Canadian Bank," European Journal of Operational Research, 98, 2, 269-289.

SCHULTZ, J. AND K. CHELST (1994), “Getting Help for Underperforming Branches,” Journal of Retail Banking, 16, 4, 27-35.

SEIFORD, L. (1994), "A Bibliography of Data Envelopment Analysis (1978-1992)," in Data Envelopment Analysis: Theory, Methodology and Applications, A. Charnes, W. Cooper, A. Lewin, and L. Seiford (eds.), Kluwer Academic Publishers, Boston.

– AND W. COOPER (1997), “Tutorial: Data Envelopment Analysis-Theory, Methodology and Application,” presentation at INFORMS national meeting, October 1997.

- AND R. THRALL (1990), "Recent Developments in DEA," Journal of Econometrics, 46, 1/2, 7-38.

SHERMAN, H. (1984), "Improving the Productivity of Service Businesses," Sloan Management Review, 25, 3, 11-22.

- AND F. GOLD (1985), "Bank Branch Operating Efficiency," Journal of Banking and Finance, 9, 2, 297-315.

- AND G. LADINO (1995), "Managing Bank Productivity Using Data Envelopment Analysis (DEA)," Interfaces, 25, 2, 60-73.

SOTERIOU, A. AND Y. STAVRIDES (1997), "An Internal Customer Service Quality Data Envelopment Analysis Model for Bank Branches," International Journal of Operations and Production Management, 17, 8, 780-789.

THOMPSON, R., P. DHARMAPALA, AND R. THRALL (1994), "Sensitivity Analysis of Efficiency Measures with Applications to Kansas Farming and Illinois Coal Mining," in Data Envelopment Analysis: Theory, Methodology and Applications, A. Charnes, W. Cooper, A. Lewin, and L. Seiford (eds.), Kluwer Academic Publishers, Boston.

-, F. SINGLETON, R. THRALL, AND B. SMITH (1986), “Comparative Site Evaluations for Locating a High Energy Physics Lab in Texas.” Interfaces, 16, 6, 35-49.

THYGERSON, K. (1991), “Modeling Branch Profitability,” Journal of Retail Banking, 13, 3, 19-24.

VASSILOGLOU, M. AND D. GioKAs (1990), “A Study of the Relative Efficiency of Bank Branches: An Application of Data Envelopment Analysis,” Journal of the Operational Research Society, 41, 7, 591-597.

WAGNER, H. (1995), “Global Sensitivity Analysis,” Operations Research, 43, 6, 948-969.

WITZELING, R. (1991), “Branch Performance: Putting the Numbers in Perspective,” Credit Union Management, 14, 6, 18-20.

ZARDKOOHI, A. AND J. KOLARI (1994), "Branch Office Economies of Scale and Scope: Evidence from Savings Banks in Finland," Journal of Banking and Finance, 18, 3, 421-432.

ZENIOS, C., S. ZENIOS, K. AGATHOCLEOUS, AND A. SOTERIOU (1995), Benchmarks of the Efficiency of Bank Branches, Report 95-10, Department of Public and Business Administration, University of Cyprus, Nicosia, Cyprus.


Cox School of Business, Southern Methodist University, Dallas, Texas 75275-0333, U.S.A.

Harvard Business School, Harvard University, Boston, Massachusetts 02163, U.S.A.

Goizueta Business School, Emory University, Atlanta, Georgia 30322-2710, U.S.A.

Richard Metters is Assistant Professor at the Cox School of Business, Southern Methodist University. He holds a Ph.D. from the Kenan-Flagler Business School, University of North Carolina, an MBA from Duke University and a BA from Stanford University. His research interests concentrate in both manufacturing and service sector applications of stochastic inventory theory. He has previously published in Journal of Service Research, Production and Operations Management, IIE Transactions, Journal of Operations Management, International Journal of Production Research, European Journal of Operational Research, Journal of the Operational Research Society, and Production and Inventory Management Journal.

Frances Frei is Assistant Professor at the Harvard Business School. Her current research focuses on the financial services industry, which combines extensive empirical analysis with a novel process-modeling framework. Areas of particular interest include examining drivers of performance and efficiency, process analysis, electronic payment systems, online financial services, and supply chain management. Professor Frei has also taught at the University of Rochester and the University of Pennsylvania's Wharton School, where she received seven excellence in teaching awards. She received her Ph.D. in Operations Management from the Wharton School, M.E. in Industrial Engineering from the Pennsylvania State University, and B.S. in Mathematics from the University of Pennsylvania.

Vicente Vargas is Assistant Professor of Decision and Information Analysis at the Goizueta Business School, Emory University. He holds a Ph.D. from the Kenan-Flagler Business School, University of North Carolina. His current research interests include the application of Data Envelopment Analysis (DEA) in multicriteria optimization, yield management, master production schedule instability in make-to-stock and assemble-to-order production environments, and mixed model assembly scheduling under JIT production. He has published previously in Journal of Service Research, Production and Operations Management, IIE Transactions, and International Journal of Production Research.

Copyright Production and Operations Management Society Fall 1999

