McNamara, Christopher

This paper offers insights into developments in the practice of performance measurement and management through examination of three case studies. Analysis of the cases suggests a clear distinction between performance measurement and management. It clarifies the usefulness of implementing broadbased performance measurement frameworks, and the criticality of organisational culture to effective performance management.

Corporate performance measurement/management has attracted much attention in academic as well as professional circles. At the beginning of 2003, more than 12 million performance management websites were in existence (Marr and Schiuma 2003, De Waal 2003). The topic has spawned research from diverse areas, spanning accounting, economics, human resource management, marketing, psychology and sociology (Marr and Schiuma 2003). The diverse epistemology underlying the literature has led to a multiplicity of confusing and not always synonymous terms (eg, “business performance measurement” [Marr and Schiuma 2003], “strategic performance measurement” [Ittner et al 2003], “corporate performance management” [Bourne et al 2003] and “strategic enterprise management” [Brignall and Ballantine 2004]). In this paper we use the term “performance management” to encompass matters that extend beyond the measurement of performance to issues of how measurement may facilitate organisational functioning in “real decision-making settings” (Shutler and Storbeck 2002).

The paper presents an investigation into the development of, and interrelationship between, performance measurement and management in practice. From an academic standpoint, this offers insight into the alignment of performance measurement/management in practice with the principles enunciated in the literature. From a practical perspective, we aim to explore the factors associated with successful performance measurement/management systems.

The performance management literature canvasses a number of issues relating to successful performance management. From a design perspective, much has been made of the benefits of incorporating a mixture of financial and non-financial measures. In particular, the balanced scorecard (BSC) concept (Kaplan and Norton 1996) has received considerable attention. However, mere adoption of a particular performance measurement system does not automatically translate into successful performance management. Successful performance management systems require customisation and refinement to meet the specific circumstances of an organisation, aligning employees with the system and creating the “right” organisational culture. Also, performance management is becoming increasingly integrated with other organisational functions, such as risk management.


Financial and non-financial performance metrics and the BSC

Since Kaplan and Norton’s 1992 Harvard Business Review article and 1996 book on the BSC, there has been a general acceptance in practice that a mixture of financial and non-financial measures in a performance measurement system is beneficial, for both profit and non-profit organisations (Sinclair and Zairi 2001, Ballou et al 2003)1. The benefits of employing a balanced performance measurement system are typically articulated in terms of the limitations of traditional financial measures: short-term focus, emphasis on narrow groups of stakeholders, and limited guidance for future actions (eg, Langfield-Smith 2003). However, practice has yet to perfect the design of non-financial performance measures.

Discussion of the use of non-financial measures has been prevalent in academic and practitioner literature (Marr and Schiuma 2003). Ittner and Larcker (2003) criticise the way non-financial performance measures are designed in practice. They state that firms tend to apply a “boilerplate” model of the BSC, with no explicit or implicit link to strategy in many cases. Ittner and Larcker also observe that (1) firms have fundamental difficulties in communicating strategies; (2) firms have difficulty in establishing the real drivers of organisational functioning and cannot distinguish between “noise” and cause-and-effect results; (3) performance targets are often inappropriate; and (4) measures employed are not always “valid” or “reliable”. They argue that firms need to spend more time understanding the linkages between strategy and measures to improve their performance management systems.

Customising performance management – an iterative process

A theme in the practitioner and academic literature is the notion that no single performance management system (PMS) fits all companies; the key is to customise and choose an appropriate system and appropriate measures. Companies may adopt a number of frameworks as a basis for their PMS, for example, the BSC, activity-based management (ABM), the six sigma model (see Chartered Institute of Management Accountants 2002) and the European Foundation for Quality Management (EFQM) excellence model (Wongrassamee et al 2003). Firms can also employ a number of financial measures (eg, economic value added or tracking stock) or even a mixture of frameworks (Jalbert and Landry 2003).

However, it is argued that if a firm elects to adopt a system of non-financial and financial measures, the measures cannot be ad hoc in nature. Additionally, the “right” number of measures, how to measure, how to report, and how to weight measures in a particular PMS can only be perfected over time and with experience – through an iterative process. A customised PMS that links an organisation’s objectives with underlying changes in the metrics is argued to outdo generic or less customised systems.

An interesting illustration of these concepts is the study of Nike’s European operations by Lohman et al (2004). One of the key aspects of Nike’s PMS success was the ability to customise and refine its system. While it was designed around the BSC concept, measures were designed around six perspectives, rather than the traditional four, to meet the needs of the business. In addition, the PMS had the capacity to centralise performance measurement throughout the operation with consistent and standardised measures. To keep the system up-to-date and relevant for decision-making, it was reviewed monthly and yearly redesigns were also carried out.

A successful PMS is also said to depend on how employees interact with the system. Indeed, De Waal (2004, p. 88) argues that “performance can be considered an outcome of both organizational and human activities.” Consequently, organisational culture and behavioural factors play an important role in the ultimate success of a PMS. Nohria et al (2003) propose that creating a success culture, where employees are empowered and involved in decision-making, is one of the four primary management practices “that can significantly affect a company’s performance” (p. 8). Their conclusion was based on a comprehensive longitudinal study of more than 160 US companies from various industries. Malina and Selto (2001) report that positive outcomes are generated by better strategic alignment of employees and better motivation. Similarly, De Waal (2004) finds that an organisational culture focused on using performance measures for improvement is an important behavioural factor in PMS success. Given the importance of employee alignment to PMS success, the issue for practitioners is the accomplishment of cultural reform.

Ability to assess risks in performance management

There has been a growing awareness in some professional spheres of the relationship between performance management and other organisational functions such as risk management and organisational infrastructure (Miller and Israel 2002). It has been argued that successful implementation of such a multi-faceted system can be a source of competitive advantage, particularly in the highly competitive and intense professional service industries such as investment banking (Rutter 2002). However, while managers have acknowledged the potential for this broader role for a PMS, practical considerations remain as to how to integrate risk management and other functions into the performance management framework.2


To illustrate some of the contemporary issues in performance measurement/management, the following discussion presents three case studies: Measuring and Reporting Project Effectiveness at Oz Bank, Defining Corporate Performance at AsTelco, and Building a Performance Management Architecture at SEA Bank.3 The data for each case were collected through a series of interviews with project staff from Booz Allen Hamilton and their client counterparts, combined with a detailed review of project reports and supporting materials. Each case study is structured to present the background to the organisation and case in question, description of the performance measurement/management challenge, discussion of the solution developed (subdivided into design and implementation, impact, and limitations), and a synopsis of the key performance measurement/management lessons for managers.


Oz Bank is a large financial services organisation with operations throughout Australia. Product offerings span retail and business banking, superannuation, insurance and investments. Oz Bank is publicly listed on the Australian Stock Exchange, and has traded profitably for a number of years. Data were collected from Oz Bank’s head office between May and August 2004.

Situation and challenge

Between 1996 and 2001 Oz Bank undertook two major organisational change programs. Both programs focused on consolidating the operational core of the business, driving efficiency through organisational processes and the corporate structure. In sum, the two change programs generated over 1,000 efficiency initiatives to be implemented. Both programs involved large, cross-functional project teams charged with implementing and tracking program initiatives. The key processes undertaken by these cross-functional project teams are depicted in Figure 1.

When the first major change program started, Oz Bank did not have a standardised approach to measuring, managing and tracking the performance of program initiatives. The magnitude of the two change programs, in terms of complexity and significance for Oz Bank, created a need for a systematic approach to codifying, evaluating, communicating and tracking program initiatives. In response, Oz Bank developed “Fastrack”, a proprietary system for measuring and managing the performance of its major change programs.

The solution

Design and implementation

The earliest iteration of Fastrack emerged from the first of Oz Bank’s two major change programs. Like many of its tracking system predecessors, Fastrack began as a static database of program initiatives. The database captured basic information about each program initiative: name, description, cost centre affected, and the magnitude and type of benefits.

Over time, Fastrack began to incorporate a broader range of metrics, extending measurement to include dimensions of risk. For example, the potential risks of initiatives to staff and customers were evaluated as low, medium or high, supported by a descriptive field on the nature of the risk.

“The GMs [general managers] were very clear about the need to explicitly consider the staff and customer risks of each initiative. Including these as fields [in Fastrack] helped make these risks visible” – Fastrack system designer.

A team of six people was established to track initiative performance. The program team took responsibility for assessing implementation progress and sourcing data to evaluate initiative success. Program results were collated and periodically reported to a central steering committee comprising GMs from each of the Oz Bank businesses.

Oz Bank’s second major change program precipitated further evolution of Fastrack. The Fastrack team sought to bring greater clarity and specificity to the assessment of initiative performance. With respect to initiative benefits, Fastrack began recording the start point (referred to as the “baseline” value) and target end point of both the underlying metric and financial outcomes associated with each initiative. For example, for an initiative targeting reduced error rates, Fastrack captures the starting number of errors and the associated costs, the target number of errors, and the associated cost savings.
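The kind of initiative record described above can be sketched as a simple data structure. This is an illustrative sketch only: the field names, the risk rating scheme and the saving calculation are assumptions for exposition, not Fastrack’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class InitiativeRecord:
    """Illustrative sketch of an initiative record of the kind Fastrack
    is described as capturing. Field names are assumptions, not the
    system's actual schema."""
    name: str
    owner: str                # general manager accountable for benefit realisation
    baseline_errors: int      # starting ("baseline") value of the underlying metric
    target_errors: int        # target end point of the metric
    cost_per_error: float     # financial impact associated with each error
    staff_risk: str           # assessed as "low", "medium" or "high"
    customer_risk: str        # assessed as "low", "medium" or "high"

    def target_saving(self) -> float:
        """Cost saving implied by moving from the baseline to the target."""
        return (self.baseline_errors - self.target_errors) * self.cost_per_error


# Hypothetical error-reduction initiative of the type described in the text
rec = InitiativeRecord(
    name="Reduce processing errors",
    owner="GM Retail",
    baseline_errors=500,
    target_errors=200,
    cost_per_error=40.0,
    staff_risk="low",
    customer_risk="medium",
)
print(rec.target_saving())  # 12000.0
```

Recording the baseline and target of both the metric and its financial outcome, as in this sketch, is what allows later audit of whether a claimed benefit was actually realised.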

“In the second iteration of Fastrack we tried harder to capture the intent of initiatives. Rather than simply recording a dollar value for initiative benefits we wanted to try and paint a picture of how the business processes should actually change once the initiative was implemented . . . We introduced a measurement framework called CSTML – Fastrack initiatives needed to have Clear, Specific, Time-bound, Measurable and Lasting measures attached” – Fastrack system designer.

In addition to greater specificity of initiative KPIs, the second evolution of Fastrack established clear links to initiative owners. Each initiative was attributed to a general manager responsible for benefit realisation.

“We asked GMs to sign up for realising the initiative benefits. If they couldn’t deliver the benefits on the original initiative, they were expected to find a new initiative to fill the gap” – Oz Bank line manager (retail bank).

The activities of the Fastrack team also changed with the evolution of Fastrack’s capability. In addition to maintaining the Fastrack repository of initiatives and their success, the team began to play a greater role in the implementation of initiatives. Members of the Fastrack team began working with implementation managers to assist with the creation of implementation plans.

Today, the Fastrack system assists performance measurement and management of large-scale corporate projects as well as “business as usual” initiatives. A Web-based front end has been added to enhance the usability of the system. Tracking and reporting on initiative effectiveness is now undertaken by a team of 10 people.


The introduction of Fastrack has significantly increased the breadth of initiative information available to Oz Bank managers.

“The design of the system is based around capturing all the aspects which need to be considered when evaluating initiative success. In doing this, we were committed to understanding the impact for all stakeholders” – Oz Bank line manager (retail bank).

One of the key effects of the Fastrack system has been the ability to standardise information available for Oz Bank’s GMs, enhancing project comparability and improving decision-making.

Fastrack has also helped improve visibility of the effectiveness and success of Oz Bank’s major strategic and operational initiatives. By strengthening and formalising the feedback loop between strategic vision and implementation, Fastrack has enhanced Oz Bank’s strategic planning process, and supported communication with the market:

“Fastrack has helped ensure that the benefits of our major programs are real and lasting. It’s brought consistency to our decision-making. It’s one of the main reasons why those programs have been successful. . . It’s also allowed us to be very confident in communicating the impact of our programs to the market” – Oz Bank line manager (group).

By supporting the feedback loop, Fastrack has also bolstered Oz Bank’s accountability and control, and strengthened the firm’s performance appraisal system:

“It’s significantly strengthened our systems of accountability. There is a clear benefit that needs to be achieved, tagged to a specific individual and a particular time frame. These are coupled with the implementation support and a strong audit/review process” – Fastrack system designer.

Refinements to the Fastrack system are continuing. The input process was recently automated. In addition to reducing processing time, the automated template is expected to further increase the comparability of projects. The Fastrack team also continues to develop standard reports to assist project managers in monitoring the progress and success of initiatives.


While Fastrack has been particularly successful for measuring and managing the success of large-scale corporate projects, its application to “business as usual” initiatives has been more challenging.

“We’re still grappling with how to fully leverage it [Fastrack] into BAU [business as usual] . . . The biggest challenges in that context have been getting clarity and consistency in what an idea is and what the appropriate KPIs are. The outputs from the business can vary quite a lot and sometimes not enough thought goes into this” – Fastrack system designer.

A second limitation is that Fastrack, designed originally as a system to support a single transformation project, lacks strong capabilities to support concurrent examination of the full portfolio of organisational projects. This limitation is most pronounced with respect to risk assessment. As an organisation takes on multiple projects, the risks of project ineffectiveness increase. The heightened risk of project failure is associated with the challenges of managing complexity and increased stress on both organisational resources and stakeholders. Fastrack is unable to provide sufficient depth of analysis around the ways in which such project risks aggregate. Consequently, Oz Bank is required to utilise additional analytical tools and processes when evaluating its organisational portfolio, and developing overarching strategic plans.


Consistent with the findings of Lohman et al (2004) the Oz Bank case suggests there are significant advantages to developing a standardised and centralised approach to performance measurement. In addition to duplicating effort, multiple project tracking frameworks are likely to result in sub-optimal decision-making. Lohman et al (2004, p. 284) observe: “The need for coordination creates a central role for a set of shared and clearly defined performance metrics.” A consistent base upon which multiple projects can be compared is key to management’s ability to assess the return on project investment.

Further, a consistent performance measurement base informs a portfolio view of organisational projects, allowing an assessment of aggregated risks and benefits. The criticality of the risk component is worth particular mention. Risk management systems are becoming increasingly important as a lever of organisational control (Simons 2000). The prevalence of risk management disclosure in public companies’ annual reports is reflective of increasing demand for risk measurement and management. Consequently, corporations are explicitly integrating elements of risk management in their performance measurement framework (Rutter 2002).

The consistent base on which a performance measurement system is founded must be broad enough to capture the risk/reward profile of any project, and should be reviewed periodically as the dimensions and priorities of performance measurement/management shift. As the Oz Bank case study shows, designing an effective performance management system requires a number of iterative steps and considerable refinement. Consistent with this case, Ittner and Larcker (2003, p. 95) state: “Reassessment of measures should be ongoing and regular . . . Even in stable environments, ongoing analysis allows companies to continually refine their performance measures and deepen their understanding of the underlying drivers of economic performance.”


AsTelco is a leading provider of mobile telecommunications to more than 10 million customers in the Asia-Pacific region. Product offerings include pre-paid and standard billing products. AsTelco has grown strongly in recent years, with an expanding customer base, increased network coverage and growing demand for its products. The following case study outlines the design and implementation of a balanced scorecard directed at helping the company consolidate its strong growth.

Situation and challenge

The growth in the regional economy over the past two years has proved both an opportunity and a challenge for AsTelco. As one of the market leaders in the mobile telecommunications space, AsTelco was well positioned to reap the benefits of the regional “upswing”. However, outdated performance measurement systems threatened the company’s ability to manage the emerging growth through effective execution of its operations. Within a broader strategy-based transformation, AsTelco undertook a major project to update its performance measurement system in a balanced scorecard framework.

The diagnostic phase of the project revealed two key issues constraining organisational performance: corporate strategy was not well implemented and there was sub-optimal decision-making at multiple levels of the hierarchy. Three key hypotheses emerged as possible drivers of the ineffective execution of the corporate strategy: corporate strategy was not clearly defined; the corporate strategy was not clearly communicated; and the corporate strategy was static rather than adaptive or dynamic. Two key hypotheses emerged as potential drivers of sub-optimal management decision-making: management were not supplied with timely information and management were not supplied with appropriate information. Figure 2 illustrates the key issues and hypotheses for AsTelco.

Interviews were conducted with managers throughout the business to test the hypotheses. The interview data revealed a number of factors impeding the effective execution of the corporate strategy. First, many operations managers indicated that the corporate strategy had not been translated into specific objectives at the operational level. Operations managers were left to translate the perceived corporate strategy into their day-to-day activities. Second, while operational managers had aligned their activities to the perceived strategic imperatives for their area, this perception was not always aligned to the actual strategic objectives of the corporate head office. Third, head office held operations managers fully accountable for the performance of their area, leaving them to assume all the risks associated with the strategies they had deployed. Fourth, many operations managers felt the strategic imperatives against which their performance was evaluated were outdated, and that shifts in the marketplace were not reflected in those objectives. Operations managers indicated that there was no formal process or channel for feeding such information back to the head office – insights from the frontline were communicated back through person-to-person relationships. These findings, summarised in Figure 3, supported the team’s hypotheses that AsTelco’s corporate strategy lacked clarity, was not clearly communicated, and was static rather than dynamic.

Combining the interview data with an independent review of management decision-making over the previous 12 months indicated three key factors contributing to sub-optimal management decision-making. First, all managers felt that performance information was not available quickly enough. Operational managers highlighted a need for ad hoc data requests to track the effectiveness of tactical decisions. Second, many managers identified an over-reliance on traditional financial-accounting-based performance measures. Third, AsTelco’s managers were critical of the “backward-facing” nature of most performance measures. For example, managers at AsTelco’s head office indicated that monthly performance reports were useful for tracking financials, but “lacked deep insight into the economics of the underlying business”. Managers also highlighted a lack of “outside-in” information – metrics capturing customer and investor perceptions of AsTelco. To some extent, operations managers overcame the limitations of AsTelco’s standard performance metrics through the use of ad hoc data requests. These findings, summarised in Figure 4, were consistent with the team’s broad hypothesis that AsTelco’s performance measurement system failed to supply managers with timely and appropriate information for decision-making.

To address these limitations of AsTelco’s performance measurement system, a balanced scorecard was designed and implemented.

The solution

Design and implementation

The first, and arguably most important, step in enhancing AsTelco’s performance measurement system was to clarify corporate strategy. The lack of a strong, coherent strategic message from head office constrained the functioning of operations managers.

“Lack of clarity in the strategy was a key issue for AsTelco. Our first objective was to get agreement on what the corporate message should be. If you don’t know where you’re going, it doesn’t matter which route you take” – Booz Allen project manager.

Consequently, a series of workshops were conducted with AsTelco’s executives. Within this forum, AsTelco management reviewed the corporate strategic agenda, revisiting the company’s vision and mission, medium to long-term direction, and definition of corporate strategic success.

“One of the key outputs of the executive workshops was to establish a single definition of strategic success which everyone in the organisation could understand intuitively. We then used that definition at the heart of the balanced scorecard” – Booz Allen project manager.

The fundamental test of corporate strategic success became the sustained growth of the firm’s “intrinsic shareholder value”. Workshops conducted across the business identified the key drivers of intrinsic shareholder value.4

Having established a clear definition of strategic success and identified the drivers of shareholder value, the next step was to translate this knowledge into performance objectives and metrics for each level of the organisation. The corporate scorecard was constructed first, with objectives for each scorecard dimension agreed by the AsTelco management team. Particular effort was made to incorporate metrics into the financial and customer dimensions reflecting “outside in” perspectives of investors and customers. Figure 5 illustrates AsTelco’s corporate scorecard.5

Following completion of the corporate scorecard, scorecards were designed for each of AsTelco’s key divisions and regions with finer levels of granularity in the measures. Consequently, each division’s scorecard represented the aggregation of the respective regions’ scorecard results within that division. Divisional scorecards then tied back to the overall corporate scorecard result. Figure 6 illustrates the relationship between the regional, divisional and corporate scorecards. Figure 7 summarises the design of the balanced scorecard system.
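The roll-up described above – regional scorecard results aggregating into divisional results, which in turn tie back to the corporate scorecard – can be illustrated with a minimal sketch. The metric names, scores and the simple averaging rule are assumptions for exposition; the case does not specify AsTelco’s actual aggregation formula or weightings.

```python
def aggregate(scorecards):
    """Roll up a list of scorecards (dicts of metric -> score) into one
    scorecard by averaging each metric. A simple unweighted average is
    assumed here purely for illustration."""
    metrics = scorecards[0].keys()
    return {m: sum(sc[m] for sc in scorecards) / len(scorecards) for m in metrics}


# Hypothetical regional scorecards within one division (scores out of 100),
# using the four traditional BSC dimensions as stand-in metric names
region_a = {"financial": 80, "customer": 70, "process": 90, "learning": 60}
region_b = {"financial": 60, "customer": 90, "process": 70, "learning": 80}

# Divisional scorecard = aggregation of its regions' results
divisional = aggregate([region_a, region_b])
print(divisional)  # {'financial': 70.0, 'customer': 80.0, 'process': 80.0, 'learning': 70.0}

# Divisional results would then aggregate once more into the corporate result
corporate = aggregate([divisional])
```

Whatever the actual weighting scheme, the essential design point is the same: each level’s scorecard is a deterministic function of the finer-grained scorecards beneath it, so regional performance reconciles to the corporate result.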


Today, the balanced scorecard remains the cornerstone of AsTelco’s performance measurement system, having passed through a number of iterations since its implementation. Scorecard objectives and the associated metrics are periodically reviewed and updated. Individual and team performance appraisals are linked to their respective scorecard contribution.

AsTelco management has noted a number of impacts of the balanced scorecard implementation. First, divisional and regional managers are more “bought in” to corporate strategy. Establishing objectives at the divisional and regional levels has created a clearer framework of accountability. The involvement of divisional and regional managers in setting the metrics has created a closer working relationship between operations and head office. Second, AsTelco managers feel more informed in making strategic and tactical decisions. The balanced scorecard has significantly increased the breadth of information available to the managers for decision-making. In particular, the inclusion of forward-looking “outside-in” metrics is viewed as particularly useful. Third, AsTelco management feel that there has been an improvement in the execution of corporate strategy. It is believed that the combination of increased strategic clarity, greater commitment and improved decision-making has underscored this result.


While the introduction of the balanced scorecard at AsTelco has generally been perceived as a “success”, a number of limitations are apparent. While the scorecard has enhanced the breadth and depth of information captured and disseminated at AsTelco, it has stopped short of articulating strategic and operational responses – the ways in which performance should be managed. The feedback loop between the periphery and the centre remains largely informal. Further, recent iterations of the AsTelco balanced scorecard have struggled to codify strong causal links between metrics perceived by operational managers as useful or important and the corporate strategic objectives (eg, total shareholder returns of competitors, network performance of competitors).


The balanced scorecard can provide a useful starting point to build a performance management system (Kaplan and Norton 1996). At AsTelco, implementation of a BSC-style performance measurement system helped to clarify the corporate strategy, engage internal stakeholders and bring greater breadth and depth to corporate performance measurement.

Like Fastrack, the AsTelco scorecard evolved over time, aligning the scope of the PMS to the shifting information requirements of AsTelco management. Fonvielle and Carr (2001) observe that achieving such alignment and buy-in is key to any performance measurement system: “In the absence of broad commitment, the performance measurement system is likely to . . . be widely ignored or abandoned” (p. 9).

The AsTelco scorecard was not, however, without its flaws. The scorecard did not inform the development of corporate mission and specific corporate goals. Also, the scorecard did not articulate a clear strategy for delineating organisational decision rights, managing performance outcomes or mitigating key operational and strategic risks. Similar criticisms have been expressed by Krause (2003, p. 5): “The empirical evidence suggests that the methodological toolset for the implementation of the BSC lacks operational qualities.”


SEA Bank is among the largest banks in South-East Asia. With more than 600 branches globally, SEA Bank services about 10 million retail and business banking customers. Between 1999 and 2002 SEA Bank undertook a major program of strategy-based transformation, repositioning the firm for its next phase of growth. Within the context of this broader project, SEA Bank redesigned its organisational performance management architecture and took the first steps towards establishing a performance-based culture.

Situation and challenge

Before the Asian financial crisis of 1997, SEA Bank had few reasons to re-assess its performance management architecture. Like many of its regional peers, SEA Bank grew from a family business, servicing retail and small-business customers. Relative scarcity of local competition enabled SEA Bank to expand rapidly from a small operation to become the country’s largest bank by the mid-1990s. In contrast to its rapid financial development, SEA Bank’s operational practices were slow to evolve. When the financial crisis struck in 1997, SEA Bank’s outdated strategic and operational processes exacerbated its organisational distress. Fortunate to have survived the financial haemorrhage, SEA Bank embarked on a major organisational change program in 1999, a key component of which was the redesign of its performance management architecture.

The diagnostic phase of the performance management project revealed inefficiency in corporate decision-making, as well as sub-optimal employee performance. Three key hypotheses emerged for the inefficiency underlying corporate decision-making: ineffective delegation of decision-making authority, ambiguous processes for managing performance outcomes, and decision-making based on short-term optimisation of business unit performance, rather than corporate performance. One central reason for sub-optimal employee performance was identified as lack of a strong “performance culture”. Figure 8 illustrates this.

To examine these problems, five elements of performance management were investigated: strategic planning (long-term), annual target-setting, decision rights, performance measurement and consequence management (see Figure 9).

The investigation suggested a number of factors constraining efficient corporate decision-making. First, decision-making authority was heavily centralised, tying up senior management time with decisions on day-to-day issues. In addition to slowing the execution of operational decisions, this also contributed to SEA Bank’s short-term, largely tactical corporate focus. Further, it removed decision-making responsibility from the individuals in the best position to execute such decisions quickly – the operational line managers. Second, the boundaries of decision-making responsibility were highly ambiguous among stakeholders at SEA Bank. SEA Bank’s organisational accountability framework lacked clarity; decision rights were not defined clearly and communication channels were typically informal and interpersonal. The absence of clear decision rights for the SEA Bank board, executive management team and operational line managers served to reinforce centralised decision-making. Third, SEA Bank’s approach to performance management was disorganised and reactive in nature, focused on short-term optimisation at the business-unit level. A series of management workshops revealed significant differences in operational managers’ understanding of the drivers of corporate value. While the fundamental drivers of business-unit value were well understood, the interrelationship of drivers within and between business units was unclear to most managers. A review of the annual financial planning process revealed that business-unit targets were established independently of one another and frequently conflicted. Consequently, plans were iterated multiple times as interdependencies among business units were discovered (and re-discovered) and their budget targets adjusted accordingly.
These findings, summarised in Figure 10, were broadly consistent with the hypotheses that SEA Bank’s inefficient corporate decision-making was associated with ineffective delegation of authority, ambiguous processes for managing performance outcomes, and an emphasis on business-unit rather than group performance.

Investigating SEA Bank’s performance-management architecture also uncovered a number of issues relating to the management of employee performance. First, SEA Bank lacked a strong framework of accountability. Measurement systems at the business-unit and employee levels failed to capture sufficiently detailed data to establish strong performance contracts. This exacerbated a general reluctance of SEA Bank to manage underperforming individuals. Second, promotion and career growth at SEA Bank were closely linked to tenure and relationships. As a result, high-performing employees with relatively short tenure were often dissatisfied with their slow career trajectory at SEA Bank, some leaving the firm for this reason. These findings, summarised in Figure 11, were consistent with the hypothesis that SEA Bank lacked a strong “performance culture”.

The solution

Design and implementation

Over a period of three years, a wide range of performance architecture initiatives were designed and implemented as part of the broader strategic transformation at SEA Bank. Figure 12 summarises these performance architecture initiatives by area. The following discussion focuses on the design and implementation of target-setting, decision rights and consequence management initiatives.

Like AsTelco, SEA Bank lacked a clear, coherent strategic vision. One of the first initiatives was to clarify, codify and communicate the strategic vision throughout the organisation. Extending the planning horizon to incorporate a longer-term view (in this instance five years) was central to the redesign initiatives, shifting the mindset of the management team away from short-term optimisation towards long-term value creation:

“An institutional capability was designed and implemented to enable a well-considered long-term strategic plan to be created. The scope of the planning process was extended to five years, and the planning unit was reconfigured to accommodate this capability” – BAH analyst.

Next, divisional target-setting was aligned to the strategic planning process. The five-year plan was embedded in the divisional target-setting process, providing business units with clear guidance on how their individual budgets should be set to achieve this overarching plan. In addition, a formalised process of divisional budget sign-off was established. Introduction of the sign-off process also contributed to shifting the SEA Bank culture, formalising the right of operational managers to question the executive team:

“Historically, the reach of centralised control at SEA Bank extended to the point where many individuals felt unable to ‘push back’ or ask questions of superiors” – BAH analyst.

The sign-off process also codified a performance contract between each division and the executive team. In this way, SEA Bank’s framework of divisional accountability was significantly strengthened – the executive team felt greater confidence that divisional targets were aligned to the corporate plan, and operational managers felt more comfortable about their involvement in the planning process.

Establishing clear and appropriate decision rights was the next step towards enabling a “performance culture” at SEA Bank. Workshops were held with the SEA Bank executive team to discuss the value of greater delegation of responsibility. A conceptual model of decision rights was developed and ultimately accepted by the SEA Bank executive (Figure 13).

Ultimately, the SEA Bank executive team recognised that holding operational managers accountable for divisional performance required those managers to control the results of their divisions. Many decisions historically made by the executive team were consequently delegated, consistent with the “performance culture” goal.

Finally, initiatives to strengthen consequence management at SEA Bank further supported a “performance culture” and more efficient decision-making. Balanced scorecards were rolled out to each division of SEA Bank. Each scorecard served to clarify the accountability of the division, codifying the dimensions of performance on which the division would be evaluated. To support the introduction of the balanced scorecard a series of consequence management workshops was conducted with representatives from each business unit. Coupled with the clarification of decision rights, the workshops served to communicate the responsibilities of divisional management, and articulate the link between the performance measurement system and expected divisional actions.


The SEA Bank strategic transformation is considered a major success by both internal and external stakeholders. From an external perspective, SEA Bank’s stock price has grown significantly. From an internal perspective, the change program has helped to shift the executive focus toward a longer-term horizon, empowering individuals involved in operations and aligning divisional activities with corporate strategy.


Although there has been significant progress in developing a “performance culture”, SEA Bank continues to grapple with residual cultural issues. Many organisational decisions remain tightly held, despite the creation of a process to foster delegation. To a large extent, distributing responsibility is still perceived by SEA Bank executives as releasing control. Cultural attitudes of conservatism associate releasing control with risk. While the clarification of decision rights, enhancement of consequence management and introduction of the BSC have introduced greater transparency and strengthened accountability, cultural attitudes towards risk and control still constrain the efficiency and effectiveness of performance management at SEA Bank.


In the case of SEA Bank, enhancing the performance management system was inextricably linked to transforming organisational culture. Releasing control to the periphery was central to the new framework of accountability. This finding is consistent with the research of Nohria et al (2003, p. 47): “. . . building the right culture is imperative . . . promoting one that champions high level performance and ethical behaviour . . . Winning companies . . . design and support a culture that encourages outstanding individual and team contributions, one that holds employees – not just managers – responsible for success.”

The case of SEA Bank also illustrates that effective performance management requires more than just the implementation of a performance measurement system. The axiom that “what gets measured gets managed” is often accepted without question in the context of organisational performance. This view has galvanised a measurement paradigm among managers – a proliferation of performance metrics captured by systems but rarely used for management (Brignall and Ballantine 2004, Neely 2003). The question of what needs to be managed should drive the question of what needs to be measured, not vice versa. Consistent with this view, Shutler and Storbeck (2002, p. 245) suggest: “Concentration on performance measurement to the exclusion of management can result in the failure to achieve the overall objectives of an organisation.”

The SEA Bank case illustrates that measurement and management, while intertwined, are distinct issues. Setting appropriate targets, articulating clear decision rights and building competence are key steps in supporting performance management.


The Oz Bank, AsTelco and SEA Bank case studies support the delineation of the concepts of “performance measurement” and “performance management” (Kloot and Martin 2000). At the highest level, we find that performance measurement is a necessary but insufficient condition for effective performance management. Second, the case studies support the usefulness of broad-based performance measurement frameworks capturing financial and non-financial performance metrics (Ittner et al 2003). Managers from Oz Bank, AsTelco and SEA Bank all indicated improved decision-making capabilities associated with the introduction of greater measurement diversity. Finally, the cases, particularly SEA Bank, are broadly consistent with the notion that effective performance management is crucially supported by organisational culture (Nohria et al 2003).

Review of the Oz Bank, AsTelco and SEA Bank cases also suggests a number of practical lessons for managers confronting the issues of performance measurement/management. At Oz Bank, developing a consistent approach to measuring project effectiveness has enhanced the ability of the organisation to allocate scarce resources, track progress against targets and communicate successes to the market. Underpinning the success of the Oz Bank performance measurement/management system was an iterative approach, periodically realigning the system with the information needs of the business. Similarly, at AsTelco, managers were placed at the centre of the design process for the introduction of a new balanced-scorecard-based management system.

Implementation of the scorecard helped AsTelco clarify and quantify corporate strategy, aligning decisions made at the operational periphery with the goals formulated at the corporate centre. At SEA Bank, particular focus on the complements of performance measurement underpinned the construction of a performance management architecture. Clearly articulating divisional targets, delineating decision rights and strengthening the framework of consequence management have empowered employees, fostering a “performance culture” within the bank.

This paper indicates the importance of understanding the interrelationship between organisational context and performance measurement/management processes in practice. The Oz Bank, AsTelco and SEA Bank cases indicate that while performance measurement/management in practice is reasonably aligned with the guidance of the extant literature, success is largely born of moulding the performance measurement/management system to fit the organisational context. The benefits of measurement diversity appear limited in the absence of customisation. We also observe that successful implementation of performance measurement/management is contingent on a deep understanding of operational, behavioural and cultural issues within the individual organisation. Further research and practitioner reflections are required to better understand performance management and its behavioural and organisational antecedents and consequences.


1 It should be noted, however, that the BSC is not the only extant measurement system that comprises both financial and non-financial measures (eg, the French Tableau de Bord [Lebas 1994] and the integrated performance measurement system [Laitinen 2002]).

2 This is one key issue considered in our Oz Bank case study.

3 Pseudonyms are used to protect the identity of the client organisations under examination.

4 Workshops with key AsTelco executives identified a number of drivers of intrinsic shareholder value. From a financial perspective, earnings before interest, tax, depreciation and amortisation (EBITDA) was selected as the appropriate measure. Specific performance levers of EBITDA were subsequently identified (eg, operating revenue, operating costs, capital investment).

5 AsTelco’s actual corporate scorecard also stipulated target values for each metric. The target values are not illustrated to protect the interests of AsTelco. The objectives for each dimension of the scorecard are italicised, with the metrics for each objective indicated as bullet points underneath.


Ballou, B., D.L. Heitger and R. Tabor, 2003, “Nonfinancial Performance Measures in the Healthcare Industry”, Management Accounting Quarterly 5, 1: 11-16.

Bourne, M., M. Franco and J. Wilkes, 2003, “Corporate Performance Management”, Measuring Business Excellence 7, 3: 15-21.

Brignall, S., and J. Ballantine, 2004, “Strategic Enterprise Management Systems: New Directions for Research”, Management Accounting Research 15, 2: 225-40.

Chartered Institute of Management Accountants, 2002, “Corporate Performance Measurement”, New Straits Times, 26 October.

De Waal, A., 2003, “The Future of the Balanced Scorecard: An Interview with Professor Dr Robert S. Kaplan”, Measuring Business Excellence 7, 1: 30-5.

De Waal, A., 2004, “Behavioral Factors Important for the Successful Implementation and Use of Performance Management Systems”, Management Decision 41, 8: 688-97.

Fonvielle, W., and L.P. Carr, 2001, “Gaining Strategic Alignment: Making Scorecards Work”, Management Accounting Quarterly 3, 1: 4-14.

Ittner, C.D., and D.F. Larcker, 2003, “Coming Up Short on Nonfinancial Performance Measurement”, Harvard Business Review 81, 11: 88-95.

Ittner, C.D., D.F. Larcker and T. Randall, 2003, “Performance Implications of Strategic Performance Measurement in Financial Services Firms”, Accounting, Organizations and Society 28, 7-8: 715-41.

Jalbert, T., and S.P. Landry, “Which Performance Measurement is Best for your Company?”, Management Accounting Quarterly 4, 3: 32-41.

Kaplan, R.S., and D.P. Norton, 1992, “The Balanced Scorecard – Measures That Drive Performance”, Harvard Business Review 70, 1: 71-80.

Kaplan, R.S., and D.P. Norton, 1996, The Balanced Scorecard: Translating Strategy Into Action, Harvard Business School Press.

Kloot, L., and J. Martin, 2000, “Strategic Performance Management: A Balanced Approach to Performance Management Issues in Local Government”, Management Accounting Research 11, 2: 231-51.

Krause, O., 2003, “Beyond BSC: A Process Based Approach to Performance Management”, Measuring Business Excellence 7, 3: 4-14.

Laitinen, E.K., 2002, “A Dynamic Performance Measurement System: Evidence From Small Finnish Technology Companies”, Scandinavian Journal of Management 18, 1: 65-99.

Langfield-Smith, K., 2003, Management Accounting, 3rd edn, McGraw-Hill.

Lebas, M., 1994, “Managerial Accounting in France: Overview of Past Tradition and Current Practice”, European Accounting Review 3, 3: 471-87.

Lohman, C., L. Fortuin and M. Wouters, 2004, “Designing a Performance Measurement System: A Case Study”, European Journal of Operational Research 156, 2: 267-86.

Malina, M.A., and F.H. Selto, 2001, “Communicating and Controlling Strategy: An Empirical Study of the Effectiveness of the Balanced Scorecard”, Journal of Management Accounting Research 13: 47-90.

Marr, B., and G. Schiuma, 2003, “Business Performance Measurement – Past, Present and Future”, Management Decision 41, 8: 680-7.

Miller, J., and E. Israel, 2002, “Improving Corporate Performance Measures to Drive Results”, Financial Executive 18, 5: 51-2.

Neely, A., 2003, “Performance Measurement”, New Straits Times, 30 August.

Nohria, N., W. Joyce and B. Roberson, 2003, “What Really Works”, Harvard Business Review 81, 7: 43-52.

Rutter, J., 2002, “Revealing a New Competitive Advantage”, Global Investor 150 (March): 61-6.

Shutler, M., and J. Storbeck, 2002, “Performance Management – Part Special Issue Editorial”, Journal of the Operational Research Society 52, 3: 245-6.

Silk, S., 1998, “Automating the Balanced Scorecard”, Management Accounting 11, 17: 38-44.

Simons, R., 2000, Performance Measurement & Control Systems For Implementing Strategy: Text & Cases, Prentice Hall.

Sinclair, D., and M. Zairi, 2001, “An Empirical Study of Key Elements of Total Quality-Based Performance Measurement Systems: A Case Study Approach in the Service Industry Sector”, Total Quality Management 12, 4: 535-50.

Speckbacher, G., J. Bischof and T. Pfeiffer, 2003, “A Descriptive Analysis on the Implementation of Balanced Scorecards in German-Speaking Countries”, Management Accounting Research 14, 4: 361-87.

Wongrassamee, S., P.D. Gardiner and J.E.L. Simons, 2003, “Performance Measurement Tools: The Balanced Scorecard and the EFQM Excellence Model”, Measuring Business Excellence 7, 1: 14-29.

Christopher McNamara is with Booz Allen Hamilton, Sydney. Steven Mong is in the School of Accounting, University of New South Wales. The authors appreciate the assistance of Jane Baxter, Vanessa Wallace, Tony Wessling, Maree Zammit and the Oz Bank Fastrack team.

Copyright Australian Society of Certified Practising Accountants Mar 2005