Performance Measurement: It’s A Benefit!

Jo An Zimmermann

Remember the last time your boss returned from a conference or workshop with the hottest idea on the planet? Seven or eight years ago it probably related to the development of performance measures for parks and recreation. You dismissed it as another fad or trend and did not give it much thought because you knew it would go away in a year or two. Well, it is still here, and it is gaining momentum. More and more parks and recreation agencies are attempting to develop performance measurement plans for all of their divisions or units. Although some agencies have been very successful in developing these plans, performance measurement concepts remain relatively new to the profession. This article provides a brief historical perspective on performance measurement and a description of performance measures and their present use in municipal parks and recreation agencies. It also demonstrates how the Benefits Movement and Benefits Based Programming can be used to establish program effectiveness measures that are an integral element of an agency's performance measurement plan.

Historical Overview of Performance Measurement

Performance measurement “dates back to the ’50s and ’60s when the RAND Corporation of Santa Monica, California, introduced what it called systems analysis into its work for the Department of Defense” (Hatry, 1999, p. xiii). This early work led to the development of the Planning-Programming-Budgeting Systems (PPBS) first used in the military and later “introduced into non-defense federal agency planning by President Lyndon Johnson in the late 1960s” (Hatry, 1999, p. xiii). What made these analytical tools unique is that rather than looking only at outputs such as number of arrests or number of miles paved, they looked at outcomes such as quality and the differences that had occurred due to the specific program being evaluated.

At about the same time, what we know as program evaluation became very popular in non-defense governmental agencies. Program evaluation focused on outputs; while this gave information regarding what had happened in the past, it gave no indication of quality or efficiency and no way of projecting what might happen in the future.

The 1980s brought the publication of In Search of Excellence, and terms such as “managing for results” became the buzzwords of businesses around the country. This movement brought greater attention to customers and their satisfaction.

In 1992, Osborne and Gaebler published the often-quoted book Reinventing Government, in which they asserted that “we must change the basic incentives that drive our governments” and offered a challenge to create something called entrepreneurial government. They envisioned a government that was catalytic, community-owned, competitive, mission-driven, results-oriented, customer-driven, enterprising, anticipatory, decentralized, and market-oriented. Thus, the concept of reinventing government was born. One way to achieve this was through the use of performance measurement in government operations.

The concern for public accountability became so strong in the 1980s and 1990s that the federal government passed the Government Performance and Results Act in 1993. This Act requires all federal agencies to document the outcomes and benefits of their services to the public. It has spawned considerable efforts to make government more accountable. The Maxwell School of Citizenship and Public Affairs, with funding from The Pew Charitable Trusts, has developed a report card for governmental agencies (http://www.maxwell.syr.edu/gpp/).

The examination of federal agencies showed that many of them are at least trying to implement performance measures, and in fact several agencies received fairly high grades. The results for state agencies, on the other hand, were more of a mixed bag. However, the pressure to apply performance measurement has created a trend toward performance partnerships and performance contracting as various agencies try to document outcomes.

This effort in the public sector also crossed over into the non-profit sector. Many national organizations established elaborate plans for documenting the impact of their services. Most notable are the efforts of Big Brothers Big Sisters of America, Girl Scouts of America, the Child Welfare League of America, and United Way of America (Plantz, Greenway, and Hendricks, 1997). These organizations have well-documented program effectiveness measures that earn them strong support and endorsement from their constituents.

Use of Performance Measures

Performance measurement is sweeping across the nation with amazing speed. Unlike its predecessors [planning/programming/budgeting (PPB), management by objectives (MBO), and zero-based budgeting (ZBB)], “it is clear that performance measurement is not going away” (Theurer, 1998, p. 21). What started with Vice-President Al Gore and the National Performance Review (NPR) is now taking place in state and local government agencies as well. “Mayors, council members, general citizens, and municipal administrators, too, want to know how to judge the service delivery performance of their local government” (Ammons, 1996, p. ix). Cities cited as models for this process include Charlotte, North Carolina; Sunnyvale and Palo Alto, California; Phoenix, Arizona; Dayton, Ohio; and Dallas, Texas.

Performance measurement “involves the selection, definition, and application of performance indicators, which quantify the efficiency and effectiveness of service-delivery methods” (Fine & Snyder, 1999, p. 24). It is “measurement on a regular basis of the results (outcomes) and efficiency of services or programs” (Hatry, 1999, p. 3). Outcomes are defined as “the events, occurrences, or changes in conditions, behavior, or attitudes that indicate progress toward achievement of the mission and objectives of the program” (Hatry, 1999, p. 15). Generally, in parks and recreation agencies, the focus has been on documenting outputs as opposed to outcomes. Outputs are a record of the number of services offered or customers served.

Now more than ever, local parks and recreation agencies are seeking to explain what effect their parks, facilities, and programs have on the lives of citizens rather than just documenting what they did. Performance measurement helps guide decision-making because it is grounded in the mission, goals, and objectives of the organization.

Types of Performance Measures

As previously stated, performance measurement is the application of indicators that measure outcomes. Ammons (1996) further defines performance measures through the development of measurement categories: workload, efficiency, effectiveness, and productivity.

Workload measures define the amount of work performed or services received (Ammons, 1996). Efficiency measures “reflect the relationship between work performed and the resources required to perform it” (Ammons, 1996, p. 12). Workload and efficiency measures are currently used in municipal parks and recreation agencies. Much of the information for these measures is gathered from outputs, such as the number of programs offered or citizens served, and from expenditure data (Hatry, 1999). Maintaining workload measures allows administrators to visualize workload volume. Examples include routine maintenance of facilities (acres of grass mowed, number of swimming pool inspections, or number of sport fields prepared) and records of services and consumers (number of classes offered or number of participants). Efficiency measures are often used to reflect the resources required to offer a program. For instance, agencies currently may use this measure to show the unit cost per participant ($1.25/softball participant) or per program ($137.00/arts and crafts class). Other forms of efficiency measures may reflect work performed relative to assumed standards (acres of grass mowed/hour or number of fields lined/hour). Workload and efficiency measures have historically served municipal recreation administrators well when it comes to tracking and utilizing outputs and creating standards of operation. Other examples of current use of these two measures can be found in Figure 1.
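The arithmetic behind such efficiency measures is simply expenditure data divided by workload counts. A minimal sketch follows; the program names, dollar amounts, and mowing figures are invented illustrations, not data from any agency cited in this article:

```python
# Efficiency measures: cost per unit of service, computed from
# expenditure data (inputs) and workload counts (outputs).
# All program names and figures below are hypothetical examples.

programs = {
    # program name: (total direct cost in dollars, participants served)
    "softball league":       (1250.00, 1000),
    "arts and crafts class": (137.00,    20),
}

for name, (cost, participants) in programs.items():
    unit_cost = cost / participants
    print(f"{name}: ${unit_cost:.2f} per participant")

# A work-rate efficiency measure divides work performed by hours spent,
# e.g., acres of grass mowed per crew hour.
acres_mowed, mower_hours = 96, 12
print(f"mowing rate: {acres_mowed / mower_hours:.1f} acres/hour")
```

Keeping such calculations in one place, fed by routinely collected workload and expenditure records, is what allows an agency to report unit costs consistently from year to year.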

FIGURE 1. WHO’S MEASURING PERFORMANCE?

(Examples of existing performance measures)

City of Indianapolis, Department of Parks and Recreation(*)

* Workload: Number of miles of new greenway trail developed; number of greenway project reviews and public meetings
* Efficiency: Percent of greenway projects on schedule; dollar value of greenway partnerships

City of Rock Hill, SC, Parks, Recreation, and Tourism(**)

* Workload: Number of therapeutic recreation educational sessions offered; number of therapeutic program volunteer opportunities offered
* Efficiency: Percent of facility safety inspections conducted twice each month; percent of accident reports processed within 48 hours

Savannah, GA, Parks and Recreation(***)

* Workload: Number of participants in athletic program
* Efficiency: Percent of all local youth participating in athletic programs; percent of all local youth participating in all recreation programs

Effectiveness (all agencies): none listed; the use of Benefits Based Programming can directly produce effectiveness measures.

Productivity (all agencies): none listed.

SOURCES:

(*) City of Indianapolis (1998). 1998 Annual Budget: Department of Parks and Recreation. Retrieved September 24, 2000 from the World Wide Web: http://www.IndyGov.org/controller/budget/d7_d720.htm#P126_6516

(**) Excerpted from the City of Rock Hill, SC Parks, Recreation, and Tourism Mid-Year Performance Budget Report

(***) Ammons, D. N. (1996). Municipal Benchmarks: Assessing local performance and establishing community standards. Thousand Oaks, CA: Sage Publications.

Effectiveness measures describe the quality of the performance of the service (Ammons, 1996). Participant satisfaction has been regularly used as an effectiveness measure in parks and recreation. However, other measures of specific outcomes relating to attitudinal or behavioral change in participants, as suggested by Hatry (1999) and others, have not been readily utilized. Types of effectiveness measures that could be reported are:

* 75% of participants in the softball league will be very satisfied with the league

* 65% of participants in the after-school program will increase their self esteem

* 50% of the participants attending the community festival will indicate that they feel more committed to bettering their community

* 75% of citizens will rate park maintenance as above average.

The percentages are based upon an estimate of the number of consumers who could realistically meet the intended objective. Very rarely would 100% of the participants achieve a stated measure.

And finally, productivity measures “combine the dimensions of efficiency and effectiveness in a single indicator” of performance (Ammons, 1996, p. 12). Examples of productivity measures that could be reported are:

* At a cost of $3.45/participant, 75% of the participants will be very satisfied with the softball league

* 65% of the participants in the after-school program will increase their level of self-esteem while maintaining a cost of $36/participant

* 75% of the local residents will have above average satisfaction with the parks while maintaining a park maintenance cost of $11/acre of developed parkland.
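Targets like those above can be checked against collected data once a program ends. The following sketch compares a hypothetical satisfaction target with hypothetical survey and budget figures (none of these numbers come from the agencies cited in this article):

```python
# Effectiveness compares an outcome target against observed results;
# productivity pairs that outcome with a unit cost.
# All survey and budget figures below are hypothetical illustrations.

def effectiveness(met_target: int, total: int) -> float:
    """Percent of respondents who achieved the stated outcome."""
    return 100.0 * met_target / total

# Target: 75% of softball participants very satisfied with the league.
target_pct = 75.0
very_satisfied, respondents = 160, 200   # hypothetical survey results
observed_pct = effectiveness(very_satisfied, respondents)
print(f"observed: {observed_pct:.0f}% (target {target_pct:.0f}%)")
print("target met" if observed_pct >= target_pct else "target not met")

# Productivity: the same outcome reported alongside cost per participant.
league_cost, participants = 690.00, 200  # hypothetical budget figures
print(f"${league_cost / participants:.2f}/participant, "
      f"{observed_pct:.0f}% very satisfied")
```

The last line illustrates why productivity measures are useful: a single indicator tells a decision-maker both what the service achieved and what it cost to achieve it.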

Whereas there are many examples of workload and efficiency measures in the profession, the use of effectiveness and productivity measures is not as prevalent at this time. Effectiveness and productivity measures require the observation and recording of outcomes. The current challenge for municipal parks and recreation agencies is implementing these measures by establishing a system for identifying and documenting outcomes. One approach that offers promise to the profession is the Benefits Movement, which is supported and endorsed by the National Recreation and Park Association (NRPA).

Performance Measurement and Benefits Based Programming

Although community recreation practitioners historically may not be accustomed to utilizing effectiveness performance measures, new efforts associated with the Benefits Movement provide practitioners with mechanisms for establishing effectiveness measures and, eventually, productivity measures. More specifically, the Benefits Based Programming approach has great potential for identifying and documenting the outcomes of the services and programs offered through municipal parks and recreation agencies. Utilizing the Benefits Based Programming (BBP) model, the recreation programmer can identify outcome objectives prior to program implementation and track program outcome results (Allen, 1996).

With appropriate documentation of the outcomes at the conclusion of the program, the outcomes or impact of the program can be determined. As practitioners maintain documentation for extended periods of time, the record of outcomes becomes more significant, resulting in a well-documented record of long-term program effectiveness. A close examination of BBP reveals that it provides a clear and systematic means of establishing program effectiveness measures for the profession. As illustrated in Figure 2, the procedures for implementing BBP are quite consistent with many of the procedures suggested in developing a performance measurement (PM) plan.

FIGURE 2. PERFORMANCE MEASUREMENT VS. BENEFITS BASED PROGRAMMING

Performance Measurement Procedures (Hatry, H.P. (1999). Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press):

* Establish the purpose and scope of the working group
* Identify the mission, objectives, and clients of the program
* Identify the results (outcomes) that the program seeks
* Hold meetings with interest groups, such as customer groups, in order to identify outcomes desired from a variety of viewpoints
* Select specific indicators for measuring each outcome and efficiency indicator
* Identify appropriate data sources for each indicator and the specific data collection procedures needed to obtain the data; develop data collection instruments such as survey questionnaires
* Identify the specific breakouts needed for each indicator, such as breakouts by customer demographic characteristics, organizational unit, geographical location, type of approach used, etc. Breakout information is extremely useful in determining the conditions under which successful outcomes are occurring
* Identify appropriate benchmarks against which to compare program results
* Develop an analysis plan: ways that the performance data will be examined to make the findings useful for program officials and others
* Select formats for presenting the performance information that are informative and user-friendly
* Determine the roles that any program partners (such as grantees and contractors) with substantial responsibility for service delivery should play in developing and implementing the performance measurement process
* Establish a schedule for undertaking the above steps, for pilot-testing the procedures, and for making subsequent modifications based on pilot results
* Plan, undertake, and review a pilot test of any new or substantially modified data collection procedures
* Prepare a long-term schedule for implementation, indicating the timing of data collection and analysis relevant to each year’s budgeting cycle and the persons responsible for each step in the process
* Identify the uses of the performance information by agency personnel

Benefits Based Programming Procedures (Allen, L. (1996). A primer: Benefits-based management of recreation services. Parks and Recreation, March, 64-76):

* Analyze agency mission, goals, and management plan
* Identify potential benefits sought by users and other stakeholder groups
* Determine core group of benefits that users seek and management can realistically provide
* Modify mission and goals for administrative units, if necessary, to reflect target agency benefits
* Develop linkage between identified benefits and potential activity opportunities offered by agency
* Identify structural elements for each recreation opportunity which are essential to benefits achievement
* Modify recreation sites, areas, or services to meet essential structural requirements for target benefits
* Select control sites, where feasible, which match modified sites and/or services
* Develop instrumentation and procedures for monitoring benefits achievement
* Provide orientation and training for all project staff, including part-time and volunteer staff
* Implement services
* Monitor participation and conduct assessment of users over an extended period of time using behaviorally based measures
* Review ongoing formative evaluations for content or structural changes
* Analyze monitoring data to determine effects of recreation participation on benefit achievement
* Determine if untargeted benefits were achieved
* Develop final reports documenting benefit achievement and implementation process
* Disseminate findings to appropriate local, state, and national audiences

Briefly, both BBP and PM start with an analysis or identification of the mission, goals, objectives, and who will be served. Next, both require the identification of outcomes (PM) or potential benefits (BBP). Both processes require the development of specific indicators for measuring the outcomes or benefits being sought, as well as the development of procedures for measuring and monitoring those indicators. Once the procedures have been developed, both BBP and PM call for the training and orientation of those who will be involved in service delivery, including part-time and seasonal staff. Finally, both processes call for continuous monitoring of the indicators, modifications in service delivery to improve performance, and the establishment of documenting and reporting procedures.

One area of difference between BBP and PM comes after the development of performance indicators. At this point, a PM plan would suggest that an agency establish targets or benchmarks for each indicator. A benchmark “is a targeted level of service” (Fine & Snyder, 1999, p. 13). In a public agency, benchmarks are based upon “service levels of top performers in their chosen area of study … but also on professional standards and directions from policymakers and other public officials” (Fine & Snyder, 1999, p. 13). By establishing a measure of comparison, an administrator is able to more accurately determine how well the agency is doing. Benchmarks in terms of maintenance standards and facility development standards are prevalent in municipal parks and recreation, but benchmarks relating to program effectiveness are not as well developed. As BBP evolves within the parks and recreation profession, program effectiveness standards could be established.

Summary

The development of workload and efficiency measures has been beneficial to the parks and recreation field. Improved operating procedures and administrative performance have resulted from the profession’s efforts to embrace this management strategy. Further efforts, however, need to be undertaken to establish measures relating to program effectiveness.

The Benefits Movement and specifically, Benefits Based Programming, offer the recreation professional a framework to develop an assessment plan for measuring program effectiveness. A plan that incorporates both BBP and PM creates a systematic process for evaluating program outcomes that other public and human service agencies have been attempting to establish. Further, this assessment process enhances the profession’s ability to document its impacts on the public and thus, provides decision-makers with substantive information from which to make resource allocations.

References

Allen, L.R. (1996). A Primer: Benefits-based management of recreation services. Parks and Recreation, March, 64-76.

Ammons, D. N. (1996). Municipal Benchmarks: Assessing local performance and establishing community standards. Thousand Oaks, CA: Sage Publications.

Fine, T., & Snyder, L. (1999). What is the difference between performance measurement and benchmarking? Public Management, 81(1), 24-25.

Hatry, H.P. (1999). Performance measurement: Getting results. Washington, DC: The Urban Institute Press.

Osborne, D., & Gaebler, T. (1999). Reinventing government. In F.S. Lane (Ed.), Current issues in public administration (6th ed., pp. 350-360). New York: St. Martin’s Press.

Plantz, M.C., Greenway, M.T., & Hendricks, M. (1997). Outcome measurement: Showing results in the nonprofit sector. In K.E. Newcomer (Ed.), Using performance measurement to improve public and nonprofit programs (pp. 15-30). San Francisco, CA: Jossey-Bass.

Theurer, J. (1998). Seven pitfalls to avoid when establishing performance measures. Public Management, 80(7), 21-24.

Jo An Zimmerman, CPRP, is a Ph.D. student at Clemson University in Parks, Recreation, and Tourism Management. Her emphasis is Community Leisure Services, with research interests in service delivery issues. She has a B.S. from Western Illinois University and an MBA from Olivet Nazarene University. Prior to attending Clemson, Jo An worked for the Park District of Oak Park in Illinois as recreation program manager. While at Oak Park, she was responsible for managing a gymnastics center and district-wide children’s programming, including early childhood programs, day camps, dance, and special events.

Nelson Cooper is a first year Ph.D. graduate student at Clemson University in Parks, Recreation, and Tourism Management. His study emphasis is in Community Leisure Services. Prior to attending Clemson, he served as a lecturer in the Department of Recreation and Leisure Studies at East Carolina University. While at ECU, he taught courses in recreation programming and in administration and also directed service projects in youth after-school programming. He also has previous work experience in municipal recreation and campus recreation services. His continuing research interests include resiliency development through recreation participation.

Lawrence R. Allen, Ph.D., associate dean, College of Health, Education, and Human Development at Clemson University is an active member of NRPA and the American Association for Leisure and Recreation, serving on several committees in these organizations. He has been a member of the Society of Park and Recreation Educators since 1974 and he was elected to the SPRE Board of Directors in 1991. In 1995, he served as the president of the Academy of Park and Recreation Administration. He has a very strong commitment to professional practice in leisure and has served on the board of directors for two state professional associations.

COPYRIGHT 2001 National Recreation and Park Association
