Excellence in Curriculum Development and Assessment

Abate, Marie A

Pharmacy and other health sciences educators have often faced curriculum and assessment issues. However, as expectations for accountability and assessment have continued to grow for higher education institutions, there is increasing concern about the development of assessment plans and the appropriate use of assessment data. A variety of approaches have been used for development of both curriculum and program assessment plans. Although there is no single correct method for accomplishing either task, there are important principles, concepts, characteristics, and approaches that should be considered, such as beginning with well-defined student learning outcomes, using educational approaches designed to facilitate student achievement of those outcomes, and designing assessment strategies that target the specific outcomes. Faculty at schools and colleges of pharmacy need to understand educational concepts and theories, the principles/characteristics of effective assessment programs, obstacles to assessment plan development and ways to minimize them, and methods to create an environment conducive to curriculum and assessment efforts. They should also consider their own unique circumstances when undertaking curriculum modifications and preparing/implementing a comprehensive assessment plan. Professional associations and accrediting agencies can also fill an important role by assisting schools and colleges in their efforts to improve student learning.

Keywords: curriculum, assessment, excellence

INTRODUCTION

Institutions of higher education strive to be recognized for their commitment to providing effective, high quality educational programs, thus fostering academic excellence in both faculty and students. Students and their parents demand high quality programs and use “quality” as a metric in deciding which college to attend. Faculty want to be part of a program with established excellence, knowing that this will enhance their reputation and career development. The public also seeks measures of quality, whether real or imagined, and expects academic institutions to be of high quality. Pharmacy education has undergone major change over the past decade with the approval of the entry-level Doctor of Pharmacy (PharmD) degree program. The American Council on Pharmaceutical Education (ACPE) developed new standards for the professional PharmD degree program that were adopted in June 1997 and became effective July 1, 2000 (ie, Standards 2000). These standards and their associated guidelines are designed to assist pharmacy education institutions in developing and maintaining academically strong, effective programs that are responsive to changing health care needs.

Since quality and excellence in education are important to all aspects of society, focus has been placed upon curricula and assessment strategies to assure that programs are accomplishing their missions. An overview is provided of selected aspects of these topics that are of particular interest and concern to pharmacy education, along with additional recommended readings. Areas discussed include higher education and pharmacy education effectiveness, curriculum development (student learning outcomes, instructional methods – concepts, approaches, student learning evaluation, curriculum mapping), and program assessment (principles and characteristics, barriers and challenges, approaches and methods). Recommendations to enhance progress in these areas are provided for consideration by schools/colleges, the American Association of Colleges of Pharmacy (AACP), and ACPE.

A lack of literature consensus, as well as considerable confusion, exists for definitions of many assessment-related terms.1 A glossary is provided (Appendix 1) for several of these terms and includes other terms used similarly in the literature. The definitions were selected based upon whether they reflected those used in the majority of published literature or concepts that most literature sources appeared to agree upon. It should be kept in mind that much of the published program assessment information comes from the field of medicine. Since pharmacy and medicine education share many similarities with each other and with other health-related fields, the health care disciplines are urged to work together to adopt and use common terminology for the same assessment-related descriptions.

HIGHER EDUCATION EFFECTIVENESS

Mechanisms are in place to judge and certify the quality and effectiveness of higher education institutions. Most academic institutions are accredited by organizations called regional accreditation agencies, such as the Higher Learning Commission (HLC), a commission of the North Central Association of Colleges and Schools. Regional accreditation qualifies an institution to receive federal financial aid, and it is a prerequisite for a given degree program at an institution to be accredited by a professional organization. In the case of pharmacy, that professional organization is ACPE. Further, a variety of national rankings purport to identify the highest quality institutions for a given discipline. While such rankings have potential problems and limitations, these “stamps of approval” are often very important in attracting students to enroll and in their job placement after graduation. Accountability, institutional program review, and higher education accreditation all play a role in determining higher education effectiveness.

Higher education generally receives broad public support. The National Center for Postsecondary Improvement (NCPI) reported that in a random sample of 1,000 adults, 79% rated higher education’s performance as good or excellent.2 On the other hand, policy makers generally believe that colleges and universities are not as effective as they could be. Institutional effectiveness examines the extent to which institutions meet their stated mission, goals and objectives. It is in this context that the issue of “accountability” is raised.

For example, federal legislators are likely to demand more “accountability” from colleges and universities as part of the process of reauthorizing the Higher Education Act. Republicans in Congress have proposed that federal financial aid would be denied to colleges whose completion rates do not measure up to a certain standard. Similarly, the major lobbying group representing for-profit colleges, the Career College Association (CCA), is asking Congress to oblige colleges to publish annual “report cards” that would measure success in retaining and graduating students and in preparing students for life after college.3 Items on the report card would include: success of graduates in obtaining jobs, performance of graduates on licensing or certification exams, and alumni and employers’ satisfaction. Interestingly, student learning per se is not a focus of this proposal.

Colleges are already accountable to a number of entities: accreditation authorities, state governments, and the Department of Education. For some, the real issue is not about being accountable but about the performance of graduates in the state and national economies. Some see accountability as a way to leverage institutional change. The question really being asked is, “What value did the students receive for the education they just paid for?”

At many institutions, academic programs are reviewed on a regular basis, either through an internal process or by an external advisory or governing board. The effectiveness of academic programs has traditionally been judged on the basis of “inputs,” eg, number of students, faculty, physical and financial resources, viability, necessity, and consistency.

The adequacy and quality of an academic program has historically been measured by the preparation and performance of its faculty and students and the adequacy of the physical facilities. Issues that supposedly speak to program adequacy include: the degree requirements and significant features of the curriculum, the percentage of faculty holding tenure, the extent to which part-time faculty are used, the level of academic preparation of the faculty, admission standards and entrance abilities of students as judged by results on standardized tests (ACT, SAT, TOEFL, GRE, etc) and high school or baccalaureate grade point average (GPA), and physical and financial resources.

Programs are also evaluated for their viability. Viability can be defined as a program’s past ability and future prospects to attract students and sustain a workable, cost effective program. Viability is tested by an analysis of the unit cost factors, the ability to sustain a critical mass, and the relative productivity of the program. Evidence of viability is also based upon past trends in enrollment and patterns of graduates.

Another evaluative issue that can be addressed is a program’s necessity. Is the program necessary for the institution’s service region? Is the program needed by society, as judged by current employment opportunities, evidence of future need, and rate of placement of the program’s graduates?

Consistency of a program with the institution’s mission is another factor for consideration. A program needs to be a component of and appropriately contributing to the fulfillment of the institution’s mission. This involves determining the centrality of the program to the institution, or how well the program complements or draws from the institution’s other programs.

While all of the above factors have been used to judge program quality, the evaluation of a program’s quality and effectiveness has moved more recently from “input-based” to more “outcome-based” evaluation. Outcomes are sometimes thought to be items such as the number of graduates of a program. However, when the term outcome-based is used today, it generally refers to the assessment of student learning outcomes. This approach places student learning at the center of assuring and advancing quality of higher education.

In the 1980s, A Nation at Risk focused primarily on the declining quality of primary and secondary schools but helped establish the context for a similar analysis of postsecondary education. It engendered the report, Involvement in Learning: Realizing the Potential of American Higher Education, that identified the need for enhanced student involvement, for higher expectations, and for the assessment of student learning.5

Higher education faculty members are quite good at collecting data but less proficient at analyzing those data, especially as they pertain to learning. This is the step where evaluation, the process of reflecting upon and interpreting the collected data to determine what represents new knowledge, needs to occur. It is only through this evaluation that the final, critical step in the assessment process can be undertaken. The phrase “closing the assessment feedback loop” is used to describe changes made to a curriculum based on what the faculty have concluded and learned from the assessment data and its evaluation. Changes in the curriculum and its delivery, as well as new faculty development programs, should arise from an analysis of assessment data. Institutions should work toward a culture of assessment in which there is a willingness to not only create measures and collect data about outcomes, but to also use this information to make changes that will improve student learning.

Recently, the HLC, the regional accreditation group for higher education institutions in the 19-state North Central region, announced a new set of criteria for accreditation that will go into effect in 2005. Other regional accreditation groups such as the Middle States Association of Colleges and Schools (MSA), the New England Association of Schools and Colleges (NEASC), the Northwest Association of Schools, Colleges and Universities (NWA), and the Southern Association of Colleges and Schools (SACS) have undergone or are undergoing similar transformations. The new HLC criteria for accreditation are characteristic of the changes occurring in higher education. The focus shifts from what the institution has done in the past to what it is prepared to do in the future. Emphasis is placed on learning rather than on teaching, reminiscent of the recommendations of the Kellogg Commission in their report, Returning to our Roots: Toward a Coherent Campus Culture.6 The new criteria for accreditation move from inputs, resources, and structures to outcomes. For example, the HLC New Criterion Three: Student Learning and Effective Teaching asks the institution to provide evidence that supports the following: the organization’s goals for student learning outcomes are clearly stated for each educational program and make effective assessment possible; the organization values and supports effective teaching; the organization creates effective learning environments; and the organization’s learning resources support student learning and effective teaching.

Institutions of higher education struggle to portray the qualities of a learning organization, including the readiness to define priorities, measure progress, create feedback loops, and apply what is learned to improve performance. Despite many years of the assessment movement, few institutions systematically use assessment results to improve the curriculum and student learning.7 Unlike other “movements,” the assessment movement is not going to go away. As long as there are external forces calling for accountability, assessment will be an expectation. The good news is that faculty have the opportunity to “own the process” – it does not have to be “done to them.” Assessment should be used to transform the enterprise from teaching-centered and rooted in the past to one that is learning-centered with an eye to the future.

PHARMACY EDUCATION EFFECTIVENESS

Overview

Pharmacy education has made substantial strides forward in recent years in curricular development and refinement based upon the expected abilities of graduates, as well as in the development and initiation of assessment plans. AACP has facilitated these efforts by sponsoring curriculum- and assessment-related institutes and workshops, establishing commissions and focus groups to explore key topics, and establishing the Center for the Advancement of Pharmaceutical Education (CAPE) Advisory Panel on Educational Outcomes. ACPE’s Standards 2000 includes several curriculum/assessment-related standards that pharmacy schools/colleges should meet for accreditation purposes.8 Key aspects of these standards include the following:

Standard No. 3: establishment and maintenance of a system to assess the extent to which the educational mission, goals, and objectives are being achieved, including use of formative and summative indicators, evaluation of knowledge/skills application to patient care, and analysis of outcomes measures for purposes of continuing development and improvement.

Standard No. 11: delineation of the professional competencies that should be achieved, development of outcome expectations for student performance in those competencies, and inclusion of student self-assessment of performance.

Standard No. 12: description of the ways in which curricular content is taught and learned, including teaching efficiency and effectiveness, innovation, curricular delivery, integration of educational techniques/technologies, fostering of critical thinking/problem-solving skills, meeting diverse learner needs, involving students as active, self-directed learners, and transitioning students from dependent to independent learning.

Standard No. 13: establishment of principles and methods for formative and summative evaluation of achievement using a variety of measures throughout the program, including assessments that measure cognitive learning, skills mastery, communication ability, and use of data in critical thinking/problem-solving, and measurement of student performance in all professional competencies.

Standard No. 14: use of systematic and sequential evaluation measures throughout the curriculum, focusing on all of its aspects, and application of outcomes and achievement data to modify/revise the professional program.

An increasing number of pharmacy literature reports describing curriculum revision, mapping of course content and objectives to program learning outcomes, and assessment efforts at various schools and colleges attests to the progress made. Higher education and health professions accreditation organizations have stated the need for assessment data to document educational effectiveness and make ongoing curricular changes to enhance learning. The logical questions then become: How well has pharmacy education performed with regard to developing comprehensive student learning outcomes assessment plans? What are their assessment findings?

Two surveys examined educational outcomes assessment efforts at US schools/colleges of pharmacy. A 1998 survey (64% response rate) gathered data about tools used to assess/measure student abilities and competencies.9 The responses were categorized into five areas: an assessment center approach, use of an objective structured clinical examination (OSCE), educational outcomes assessment surveys, clerkship outcomes assessment surveys, and a combination approach (eg, surveys, NAPLEX results, experiential performance). The most commonly used tool was the educational outcomes survey approach, followed by a combination approach, clerkship outcomes assessment, and OSCE use. It was concluded that most schools/colleges were at only the beginning stage of outcomes assessment and lacked data from use of their tools. Limitations of this survey include a lack of response from about one third of schools/colleges, no quantitative data for the number of schools using each type of tool, and the fairly narrow survey focus.

A 2000 survey (69% response rate) obtained data from pharmacy schools/colleges regarding the persons involved with outcomes assessment, the factors that drive the process, the prevalence of formalized outcomes and assessment plans, and the instruments being used.10 Twenty-nine percent of respondents had undergone an ACPE accreditation visit since 1998. Only 49% of respondents had an assessment committee, although the curriculum committees at some schools/colleges might have similar responsibilities. Assessment committees were less likely than curriculum committees to involve students or practitioners. Only 11% had the equivalent of a full-time professional position assigned to an assessment role. While 71% of respondents had an approved list of general education abilities for their program, only 44% of respondents had a written outcomes assessment plan and, of these, only 65% (about 29% of respondents overall) indicated their plan was formally adopted. The extent to which assessment data were obtained and actions taken were not described. The dean, another administrative officer, and faculty were indicated as drivers (multiple drivers could be selected) of the assessment process at 71%, 63%, and 54%, respectively, of those schools/colleges with a written plan. The most frequent instrument used for outcomes assessment was NAPLEX, with small numbers (

A number of literature reports since 2000 describe learning outcomes assessment efforts or plans at various pharmacy schools/colleges. Some focus on the assessment of specific skills or abilities such as literature evaluation, critical thinking, problem solving, or writing, while others describe models for exploring learning outcomes assessment across the curriculum. However, only limited data from these assessments are reported. In conclusion, the majority of schools/colleges of pharmacy appear, at best, to be in only the early stages of establishing an institutional culture of assessment and comprehensive outcomes assessment plans, with relatively few findings available to date.

Curriculum Development

Although the term “curriculum” is used frequently in pharmacy education, it is often defined narrowly. Webster defines curriculum as “the courses offered by an educational institution or one of its branches” or “a set of courses constituting an area of specialization.” However, a curriculum also encompasses the learning experiences set forth by a program or school and should include all aspects of those experiences. In addition to content, curriculum considers student learning outcomes, teaching and learning processes, student evaluation, and program (student learning outcomes) assessment. Assessment is a critical step in curriculum development, not only determining whether the learning outcomes of a course or program were met but also directly influencing student learning. The Report of the Focus Group on Liberalization of the Professional Curriculum appointed by AACP defined curriculum as “an educational plan which is designed to assure that each student achieves well-defined performance-based abilities.”12 Curriculum development should be an ongoing process that is responsive to changes in pharmacy practice and society and that incorporates new scientific discovery.

Student learning outcomes. Development of student learning outcomes is the foundation for building curricula because learning outcomes must guide content development and selection of instructional methodologies. Further, learning outcomes should be derived from the educational mission of the institution and, in the case of pharmacy education, should be congruent with clinical practice. The CAPE Advisory Panel on Educational Outcomes provides an excellent starting place for development of a professional pharmacy program’s learning outcomes.13 Student learning outcomes provide students with the institution’s expectations of them upon completion of the program of study. Using an outcomes-based approach, the focus of curriculum development is on what students will be able to do rather than what faculty will do. Thus, the curriculum should be planned around student learning outcomes that link knowledge, skills, and behavior/attitudes/values, rather than simply using content or subject areas as a road map for curricular development.14 Once outcomes are set forth, teaching and learning strategies are then developed to support their achievement. The educational environment is thereby created as a product of an outcomes-based curriculum.

Student learning outcomes should be explicit and measurable, enabling the institution to assess the effectiveness of the curriculum and to describe to stakeholders (eg, students, faculty, administrators, pharmacy practitioners, accreditors) what the curriculum hopes to achieve. Good educational outcomes should specify five essential components: “Who/will do/how much/of what/by when?”14 For example, asking a first-year PharmD student to “understand the components of a patient’s medical record” upon completion of a course may describe “who” and “by when,” but is ambiguous about what specifically and how much students should achieve. The outcome would be better understood if a first-year PharmD student were asked to “collect relevant information from the patient’s medical record to create an accurate and complete patient profile.” Use of an action verb (eg, classify, evaluate) that describes the outcome expectation assists the student in understanding what should be accomplished. Bloom’s taxonomy categorizes cognitive levels of learning on a continuum from simple to complex (ie, knowledge, comprehension, application, analysis, synthesis, evaluation).15 Thus, student learning outcomes should be developed for the desired cognitive level of learning. In addition to professional learning outcomes, the AACP Commission to Implement Change in Pharmaceutical Education recommends development of outcomes that describe general abilities (eg, communication, ethics).16 A program should also have a process in place for continuously reevaluating its student learning outcomes and modifying them as indicated. The CAPE Educational Outcomes document will be undergoing review and revision in 2004.
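Where it helps, the leading action verb of an outcome statement can even be checked mechanically. The short Python sketch below is offered only as an illustration: the verb lists are abbreviated and the verb-to-level assignments are assumptions for demonstration, not an authoritative rendering of Bloom’s taxonomy.

```python
from typing import Optional

# Illustrative (abbreviated, hypothetical) verb lists for Bloom's cognitive levels.
BLOOM_VERBS = {
    "knowledge": {"define", "list", "recall", "state"},
    "comprehension": {"describe", "explain", "classify"},
    "application": {"apply", "use", "collect", "demonstrate"},
    "analysis": {"analyze", "differentiate", "compare"},
    "synthesis": {"create", "design", "formulate"},
    "evaluation": {"evaluate", "critique", "judge"},
}

def bloom_level(outcome: str) -> Optional[str]:
    """Return the Bloom level matching the outcome's leading action verb, if any."""
    verb = outcome.split()[0].lower()
    for level, verbs in BLOOM_VERBS.items():
        if verb in verbs:
            return level
    return None  # vague verbs such as "understand" match no measurable level

print(bloom_level("Collect relevant information from the patient's medical record"))
# -> application
print(bloom_level("Understand the components of a patient's medical record"))
# -> None
```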

Not only should learning outcomes be developed for a program, but they should also be developed and assessed for individual courses and lesson plans. As the first step in course development, instructors should prepare outcomes for individual courses that are in accord with the program’s student learning outcomes. Outcomes should complement or build upon those in other related or previous courses, be appropriate for the level of the student, and be action-oriented. For each student learning outcome developed for a course, specific performance criteria (ie, criteria that the instructor will use to evaluate student performance) should be developed and communicated to students.17 The criteria should describe clearly what students need to do to achieve the outcome, in sufficient detail that another instructor could use the criteria and arrive at the same conclusion about a student’s performance. The performance criteria help to describe the level of expertise required for students to achieve a given outcome, and they should be used to determine the content and instructional methods necessary to achieve the outcome. Student learning outcomes that require higher cognitive levels of learning (eg, evaluation) will require different types of educational and evaluative methods compared to those that require lower cognitive levels of learning (eg, knowledge).18

Instructional Methods. Concepts. In order to select and develop appropriate instructional methods, some basic conceptual knowledge is needed. What are some important basic points concerning teaching and learning? A key point is that teaching is not equal to learning. Talking to a student audience, ie, passive learning, does not guarantee that they will understand, process, synthesize, apply and retain what is heard. Semantic networks consisting of a number of related concepts must be built in order to learn, and these knowledge networks change when new learning is experienced. The learners themselves are the center of the learning process (ie, student-centered learning or constructivism); they structure, organize, and use new information gained through interactions with their environment and need to have adequate self-study time to accomplish this. The recall and use of information is also affected by the situation or context in which learning occurs. For example, it can be difficult to quickly identify a person one usually sees only at work when that individual is encountered in a non-work environment.19

Learning style, the manner(s) in which an individual prefers to learn, can also affect teaching and learning effectiveness. Although learning has been said to be more effective when the teaching and learning environment match the learner’s style,20,21 providing “creative teaching/learning style mismatches” might actually help stimulate optimal learning.22 Some pharmacy schools/colleges reported that students preferred more than one type of learning modality or activity,23-25 and that incorporation of diverse learning activities in one course overcame individual differences in learning style.26 However, a problem with the learning style literature is that many different versions of cognitive or learning style measures exist, making the individual results from studies difficult to interpret and compare.21 How should pharmacy faculty address the issue of teaching and learning styles? As a first step, use of the same instrument(s) by schools/colleges to determine student learning styles would allow for better comparison of findings across campuses. More work is needed to address questions such as the role of diversity in affecting pharmacy student learning and styles, whether learning style data can be used to predict student success and how this could best be accomplished, and whether (and how) to adapt teaching styles to accommodate learning styles. Based on currently available data, a variety of learning approaches should be considered for use in courses.2,27,28 Faculty members should develop strategies for helping students adjust their learning approaches as appropriate for the specific task or situation.22,27 Faculty members should also recognize that since some pharmacy students may prefer passive learning methods,28,29 acclimating them to active learning approaches might take some effort.

In summary, to enhance the educational process and employ appropriate instructional methods, teachers need to apply learning concepts. They should guide student learning and draw upon prior learning as well as expose students’ inconsistencies between current understanding and new learning experiences. They should engage students in active learning that allows for “construction” of their own knowledge, provide students with sufficient time in the curriculum to reflect upon and learn from new experiences, integrate knowledge and concepts rather than teach them in isolation, and use a variety of learning approaches in their courses. They also need to provide knowledge in a professionally meaningful manner, include different contexts and scenarios as well as work with authentic problems, and use assessment to drive and improve learning.11,19,30,31

Approaches. A variety of instructional approaches are needed to meet all the learning outcomes of a program. Since outcomes serve as a guide to students by describing what they should be able to do as they progress through a course or a program, strategies to provide students with sufficient opportunities to practice achievement of the outcomes are needed. Opportunity to practice skills should not be limited to the experiential year of the curriculum, but rather provided as a continuum throughout the curriculum. In addition, students should be given the opportunity to develop their problem-solving skills, integrate information from one discipline to another, and conceptualize how each piece of information relates to other materials learned previously and to pharmacy practice as a whole. Integrative teaching most likely will be necessary to achieve learning outcomes. However, integrated teaching does not necessarily mean the creation of formal integrated courses and curricula.

A trend occurring in medical schools is the implementation of integrated curricula.32 To a somewhat lesser extent, schools/colleges of pharmacy are also implementing integrative curricula that consist of multidisciplinary blocks of material. Examples from the pharmacy literature have described the integration of medicinal chemistry, pharmacology, pathophysiology, therapeutics, patient assessment, drug literature evaluation, and/or pharmacokinetics.33-37 Sprague et al. described their experiences with a five-and-a-half-week cardiovascular module that was taught on a full-time basis and that integrated pathophysiology, pharmacology, medicinal chemistry, pharmacokinetics, therapeutics, and drug information for students in the fourth year of a six-year PharmD program.33 Students agreed that the course objectives were met and that the flow of the materials was appropriate. Most of the citations in the pharmacy literature are available in abstract form only. In addition, most reports do not contain evaluative data and do not compare learning to that achieved in more traditional classes. As integrative curricula become more widespread in schools/colleges of pharmacy, faculty members are encouraged to share their experiences and student outcomes by publishing their findings.

Ultimately, it is the faculty’s responsibility to determine the best educational strategies and methods of instruction to employ to achieve course and program outcomes. A brief synopsis of different instructional methods follows. Distance education and experiential education, including service learning, are not included in the discussion and the reader is referred to separate papers by AACP.

Lecture is a common instructional method used in higher education and may be particularly beneficial for topics requiring lower cognitive levels of learning for which students are primarily recalling information or describing/explaining concepts. Advocates of the lecture point to its relatively low costs since the faculty:student ratio is low. In addition, course development costs are lower than for other methods of instruction. However, if achievement of outcomes requires higher levels of cognitive learning (eg, application, analysis, synthesis), lectures alone will likely be inadequate to meet course or program outcomes since lectures place students in a passive rather than active role.

Active-learning strategies have been introduced into large-group classrooms to increase students’ problem-solving and critical thinking skills by placing them in a more student-centered environment.38 In addition to serving as a source of information, faculty become facilitators of learning. Examples of active-learning instructional strategies include evaluating case studies, solving authentic patient problems, peer group teaching, role-playing, writing, and building concept maps. Difficulties in transitioning to an active-learning environment can be minimized or avoided by setting faculty and student expectations at the start and by providing students with many opportunities to practice and learn problem-solving techniques.

Small-group teaching, recitation sessions, and pharmacy skills laboratories have the advantage of promoting problem-solving skills, facilitating teamwork, and enhancing the acquisition of skills. They are particularly useful for students who learn by “doing” and who may not otherwise participate in the large-group teaching environment. In addition, small-group discussions can be multidisciplinary in nature to increase collaboration among different health care professionals (eg, pharmacy, nursing, medicine, dentistry). The primary disadvantages of small-group teaching are that additional faculty resources (eg, facilitators, moderators) are needed, differences in learning may occur depending upon differences in the facilitators’ knowledge and participation styles, and individual student evaluations can be unreliable when multiple evaluators are involved.

Problem-based learning (PBL) is an instructional technique used to promote meaningful learning and problem-solving skills. Although the definition of PBL has varied, it is an educational method focusing on the acquisition, synthesis, and appraisal of knowledge by actively working through problems in small groups using facilitated and self-directed learning.39 Cisneros et al. provide an excellent overview of problem-based learning research in pharmacy and medical education.40 Some studies show that student learning is at least equivalent to learning by traditional instructional methods, and student participants report an improvement in their problem-solving skills, use of information resources, and communication/interaction skills.41 In addition, PBL may facilitate opportunities for interdisciplinary learning.40 However, although there are numerous reports of incorporation of PBL into the pharmacy curriculum, most of the reports are descriptive in nature and few document the impact of PBL on student learning and problem-solving skills. Cisneros et al. note the need for “more long term assessments of the effects of PBL on student learning.”40 A primary disadvantage of PBL is that additional resources (eg, facilitator time) are needed. Other disadvantages include inconsistencies between facilitators and less student exposure to a broad range of content areas.

A variety of technologic tools are being used in pharmacy education, including computer-assisted instruction, web-based course development/management software, audio/video tapes, and personal digital assistants (PDAs). Although little evaluative information is available in the literature about the use of PDAs in health sciences education, they are being used to collect patient information, access the medical literature, document clinical interventions, complete quizzes, and manage lecture and course material.42,43 Zarotsky and Jaresko provided an excellent review of the use of technology in pharmacy education and highlighted the limited data that support improved learning with the use of computer-assisted instruction.44 Studies should continue to be undertaken to determine how and when to optimally incorporate technology into educational experiences and whether learning is improved with its use.

In conclusion, selecting appropriate instructional methods requires skill and expertise in the area of education. Thus, faculty should approach their role as educators in the same manner as they do their roles as clinicians or researchers.19,45 As clinicians or researchers, faculty generally strive to remain current with the literature in their field, consider new approaches to enhancing and improving their work, seek peer review or feedback about the quality of their work, and take care that new graduates are sufficiently prepared to enter a career as a researcher or clinician. Often, schools/colleges provide “release time” from teaching or committee responsibilities to allow new faculty members to start their research laboratory or to develop their practice site. In contrast, as educators, faculty in the health sciences disciplines often do not receive specific education or training concerning instructional approaches, learning theories, or how to best facilitate student learning. They may not have sufficient time, given other demands, to explore and learn relevant educational theories, concepts, and the advantages/disadvantages of various instructional methods on their own. As a result, they may negatively view critiques of their teaching or suggestions for change as an attack on their individual status as a faculty member. As educators, faculty must not be satisfied with simply the use of an “acceptable educational approach” but rather should continually ask whether their teaching is effective and if their teaching/educational strategies can be improved.19 Drawing on an evidence-based approach to pharmacy education is one method that might allow for enhancement of teaching and learning,46 although more work is needed to determine the extent to which various educational techniques and methods can be transferred reliably from one environment to another. Faculty should take advantage of available development opportunities in the area of education through attendance at relevant national meetings and workshops, or perhaps through use of professional development or leave time that colleges, schools, or universities might offer. For example, the Education Scholar modules, available through AACP, describe educational approaches and assessment and could potentially be completed as a professional development activity using leave time.

Student learning evaluation. Evaluation of student learning should include formative individual feedback that provides students with opportunities to practice attaining outcomes and to learn from mistakes. Summative individual feedback is intended to judge or verify performance. Formative evaluations may be perceived by faculty members as time-consuming and unnecessary because they are not part of an overall course grade. However, formative feedback provides valuable information to students in terms of their learning, their areas of weakness, and ultimately their ability to successfully achieve the course’s learning outcomes. Self-evaluations can also provide students with an opportunity to monitor their own progress and can provide faculty with valuable insight into student strengths and weaknesses.

Evaluative data can be obtained through a variety of means (described in more detail later), including rating forms, self-evaluation forms, oral or written examinations, assignments, papers, questionnaires, interviews, role-playing exercises, simulations, OSCEs, portfolios, and direct observation in practice. Student evaluation data can be used in assessment plans to modify course content and educational methods employed, based on what the techniques reveal that students have learned compared to what they should have learned. Student evaluations should serve as the link between a course’s learning outcomes and instructional methods. When assessment strategies are employed to determine students’ achievement of learning outcomes across courses and the entire program, the resulting data should be used to make changes within and across courses and the curriculum.

Curriculum mapping. Due to the scope and breadth of a curriculum, particularly as revisions and modifications are made over time to individual courses and sections, it can be easy to lose sight of its structure as a whole. Curriculum mapping is one technique used to diagrammatically demonstrate the relationships or links between different aspects of the curriculum: content, learning outcomes, learning resources, educational strategies, student assessment, etc.47 Curriculum mapping can help ensure that there are no gaps or unnecessary redundancies in content, promote an integrated curriculum by showing the relationship between different content areas, and identify the types and range of assessment methods being used.48-50 Curriculum mapping can also be used to identify additional assessment opportunities that can be incorporated into a program assessment plan.5
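As a minimal illustration of the underlying data structure, the Python sketch below inverts a course-to-outcomes map to reveal coverage gaps and potential redundancies; the course names and outcome labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical curriculum map: each course linked to the program outcomes it addresses.
curriculum_map = {
    "PHAR 501 Pharmaceutics I": {"drug-product knowledge"},
    "PHAR 612 Therapeutics II": {"patient assessment", "problem solving"},
    "PHAR 655 Drug Literature Evaluation": {"literature evaluation", "problem solving"},
    "PHAR 720 Ambulatory Care Rotation": {"patient assessment", "communication"},
}

program_outcomes = {
    "drug-product knowledge", "patient assessment", "problem solving",
    "literature evaluation", "communication", "ethics",
}

# Invert the map: for each outcome, where in the curriculum is it addressed?
coverage = defaultdict(list)
for course, outcomes in curriculum_map.items():
    for outcome in outcomes:
        coverage[outcome].append(course)

# Gaps: program outcomes that no course claims to address.
gaps = program_outcomes - set(coverage)
print("Outcomes with no curricular coverage:", sorted(gaps))

# Coverage counts can also flag possible unnecessary redundancy.
for outcome in sorted(coverage):
    print(f"{outcome}: addressed in {len(coverage[outcome])} course(s)")
```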

Program Assessment

Principles and characteristics. A program (student learning outcomes) assessment plan should ultimately allow faculty to make informed changes in the curriculum to improve student performance. Assessment data collected should be sufficiently precise to not only provide evidence of a need for improvement but also to indicate the specific steps that can be taken to make the improvement.

Surveys are often used as an indicator of student learning. Students might be asked to indicate the extent to which a program actually addressed its learning outcomes and their perception of the degree to which they have achieved or can perform each of the outcomes. Graduate surveys are used to obtain the opinions of graduates about a program’s learning outcomes and areas of strength and weakness in their learning. Employer surveys are used to gather information about employers’ satisfaction with the performance of program graduates in the work place and ways in which graduates’ education might be improved. These are indirect measures of student learning in that they ascertain only the perceived extent or value of the learning experiences and outcomes. Other examples of indirect measures include focus group learning-related discussions and job placement statistics.52

While indirect measures of student learning provide useful information, most accreditation agencies expect that at least some of the assessment methods will use direct measures of students’ knowledge or skills based upon comparisons to measurable objectives or outcomes. Examples of direct measures include: examinations, presentations, performance appraisals, and portfolios, among several others.52

While some may refer to high course grades or low student dropout rates as an indication of program success, neither student course grades nor GPAs are reliable or adequate measures of student learning across a program of study. Grades or GPAs “tell us little about what a student has actually learned in the course” and “very little about what a student actually knows or what that student’s competencies or talents really are.”53 Grades are influenced by many factors including course grading policies, instructors’ experience and academic rank, and predominant modes of instruction. They usually encompass a variety of types of knowledge and skills, some of which students might master and others which they might not. Thus, it is hard to make comparisons between student grades in terms of what students actually know or can do.54

The literature clearly indicates that assessment drives learning. Student evaluation and assessment methods employed will determine how and what students learn since they will focus attention on those aspects. For example, if acquisition of factual knowledge is predominantly assessed, students will primarily study to acquire and memorize facts. This has also been referred to as a “steering effect.”55 Thus, program assessment is ultimately critical to educational effectiveness. What are the principles and characteristics of effective program assessment practices? Several are identified (Table 1), including:

1. Assessment must be integrated into an institution’s culture, at the highest level.56 Administrators must be “on-board,” meaning that they should not only advocate and promote assessment practices but should be knowledgeable about assessment-related issues and the value of assessment. The likelihood that faculty will eagerly participate in assessment processes is slim if the administration is not an active facilitator and supporter.

2. Assessment should be ongoing with sustained commitment by all departments and faculty. This is more likely to occur when it clarifies questions or concerns that people care about and when it provides evidence in areas important for decisions.56

3. Assessment should be based upon clear, explicit, focused, and measurable student learning outcomes for a program, which in turn should reflect the educational mission and goals.56,57

4. Assessment should reflect learning as multidimensional and integrated and should reveal performance over time. Thus, multiple methods that are carefully selected, with consideration given to their reliability and validity, should be used to assess the program’s student learning outcomes.56,57

5. Attention must be given not only to the outcomes from assessment methods but also to the experiences leading to those outcomes. The processes of teaching and curriculum development used to enhance student learning define successful outcomes assessment.56

6. Assessment should involve representatives from across the educational community, including faculty, staff, students, and external stakeholders such as employers.56 They should be included throughout the assessment process, including the planning, implementation, review of drafts of assessment instruments, review and analysis of data, etc.

7. Assessment should be a part of a larger set of practices to promote change, such as holding assessment-related faculty development sessions, having ongoing faculty discussions related to assessment and learning, and using assessment data to make curricular changes.56

8. Assessment data should be used in reports to external stakeholders to show that a program’s goals are being met.56

9. Assessment is most effective when undertaken in an environment that is receptive, supportive and enabling. This includes having strong support from central administrators, adequate resources for implementation, creation of an atmosphere of trust that data will be used for improvement and not for punitive measures, and the establishment of avenues for communicating results to a variety of audiences.56

10. Assessment findings should be used as a basis for funding decisions or reallocation of resources as indicated.57

11. Assessment efforts should be directed by persons who are competent, motivated, and trustworthy to enhance the credibility and acceptance of the findings,57,58 but all faculty must assume responsibility for assessment quality.58

12. Assessment plans should themselves be reevaluated on a regular basis.55,59

As schools/colleges develop, implement, review, and refine their learning outcomes assessment plans, they should continually strive for incorporation of the principles and characteristics of successful assessment programs.

Regardless of whether a program has just initiated an assessment plan or has an established one, it would be useful to be able to compare a program’s progress against a “standard” of some type. The Higher Learning Commission of the North Central Association (NCA) of Colleges and Schools has developed a tool, the Levels of Implementation, that has several potential assessment-related uses.60 The Levels of Implementation is a matrix consisting of three levels (Level One = “Beginning Implementation of Assessment Programs;” Level Two = “Making Progress in Implementing Assessment Programs;” Level Three = “Maturing Stages of Continuous Improvement”) and four patterns of descriptions or characteristics associated with each level. The information in the tool was prepared based upon the observations and comments obtained during numerous institutional site visits conducted by NCA. The descriptors/characteristics are divided into four main areas: Institutional Culture, Shared Responsibility, Institutional Support, and Efficacy of Assessment. A worksheet accompanying the tool can be used for rating. For example, for the first pattern one can review the various descriptors/characteristics related to Institutional Culture – Collective/Shared Values that describe Level One, Level Two, and Level Three, and use the worksheet to rate the extent to which the characteristics describe one’s own department, unit, division, or school. The Levels of Implementation tool can be used by programs to establish their baseline assessment characteristics and measure progress, as a guide in developing assessment plans, to determine whether all their faculty members and administrators share the same impressions about their assessment efforts, and to identify and initiate specific changes needed to advance their assessment plans. Accreditation evaluation teams could also use the Levels of Implementation tool to help identify important questions about assessment to ask programs in order to determine a program’s progress and efforts to improve student learning. The tool might further assist evaluation teams in providing consistent advice to programs about assessment.
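To make the worksheet idea concrete, a self-rating can be represented very simply in code. In the Python sketch below, the four descriptor areas come from the tool itself, while the level ratings shown are hypothetical.

```python
# Hypothetical self-rating worksheet: each of the four descriptor areas from the
# HLC Levels of Implementation matrix is rated 1-3 (Level One through Level Three).
ratings = {
    "Institutional Culture": 2,
    "Shared Responsibility": 1,
    "Institutional Support": 2,
    "Efficacy of Assessment": 1,
}

labels = {
    1: "Beginning Implementation of Assessment Programs",
    2: "Making Progress in Implementing Assessment Programs",
    3: "Maturing Stages of Continuous Improvement",
}

for area, level in ratings.items():
    print(f"{area}: Level {level} ({labels[level]})")

# Areas still rated at Level One suggest where to focus the next round
# of assessment-plan development.
focus = [area for area, level in ratings.items() if level == 1]
print("Priority areas:", ", ".join(focus))
```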

Barriers and challenges. Implementing and establishing an assessment process is not necessarily an easy task. What are some of the key barriers and challenges to assessment success that schools/colleges must continually work to overcome? These can be characterized as faculty-related, resource-related, and student-learning related barriers and challenges (Table 2).

A major barrier to successful assessment implementation is lack of faculty support,56 for which there are several contributing factors. The lack of literature consensus about assessment terminology and the use of different terms for the same meanings are confusing at best. A shared language and concepts are essential to ensure a clear understanding of the meaning of “assessment” and to identify appropriate methods for assessing student learning outcomes. Faculty support can also be enhanced by creating the proper culture and environment as described previously. The availability of a sufficient number of faculty development opportunities and regularly scheduled discussions related to assessment and its importance are important for enlisting and retaining faculty commitment.

Proceeding slowly can help establish a shared trust that negative findings will not affect a faculty member’s status or be used punitively. Beginning slowly and keeping the assessment process manageable in size will decrease the likelihood of overwhelming already overburdened faculty. However, results obtained should be shared with faculty as rapidly as possible so they can observe the progress being made.56 The use of course-embedded assessment methods should also be considered when feasible to increase the efficiency of the assessment process. Course-embedded assessment uses existing course tools, instruments, or measures to generate program assessment data.61 This does not mean using course grades or global class examination scores for assessment purposes. Rather, a faculty member might have developed a method to assess a specific ability in his or her course. Data from this measure could be incorporated into the assessment plan, and they could also be combined with data from other program areas that assess the same ability. For example, a medical literature evaluation course might use a rubric to provide a more objective assessment of students’ abilities to appropriately critique a journal article. Experiential rotations might also assess students’ literature evaluation skills. Findings from the rubric and the experiential rotations can be used together to help identify specific weaknesses in these abilities. Course-embedded methods can be used to facilitate the collection of assessment data for specific outcomes, provided that appropriate tools, instruments, or other measures exist that are valid, reliable, and allow for meaningful interpretation. If such measures do not exist, time and effort must be expended to create them.61
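The pooling of course-embedded data described above might look something like the following Python sketch; the rubric criteria, the 1-4 rating scale, and the 2.5 target threshold are all hypothetical.

```python
from statistics import mean

# Course-embedded rubric scores (1-4 scale), one entry per student per criterion.
rubric_scores = {
    "identifies study design": [4, 3, 4, 2, 3],
    "critiques statistical methods": [2, 2, 3, 1, 2],
    "draws appropriate conclusions": [3, 3, 4, 3, 2],
}

# Preceptor ratings of the same literature evaluation skills on experiential rotations.
rotation_ratings = {
    "critiques statistical methods": [2, 3, 2, 2],
    "draws appropriate conclusions": [3, 4, 3, 3],
}

# Pool both sources per criterion to locate specific weaknesses in the outcome.
for criterion in sorted(set(rubric_scores) | set(rotation_ratings)):
    pooled = rubric_scores.get(criterion, []) + rotation_ratings.get(criterion, [])
    flag = "  <-- below target" if mean(pooled) < 2.5 else ""
    print(f"{criterion}: mean {mean(pooled):.2f} (n={len(pooled)}){flag}")
```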

Another barrier is that pharmacy faculty might inappropriately feel that as long as students pass state licensure examinations, their program is not in need of change and additional assessment methods are unnecessary. Licensure examinations have very structured and prescribed content and context, and as such might determine at best that a graduate knows how. They do not adequately measure desired attributes such as professional values, ethics, judgment processes, or skills requiring interpersonal interactions.11

An additional faculty-related assessment barrier is that faculty members may prefer structured responses to unstructured ones, and objectivity rather than judgment. Since authentic (ie, context that reflects actual practice) evaluations tend to be less structured, with more emphasis placed on observation and judgment, their use for assessment purposes might make some faculty members uncomfortable. Faculty members might also incorrectly believe that objectivity is legally defensible and judgments are not; however, both objectivity and judgments are defensible in court for determining student competency as long as the judgments are not arbitrary or capricious. Faculty “buy-in” will probably be an ongoing issue with regard to assessment due to periodic personnel turnover, which can be compounded further by any changes in the program’s leadership. To help minimize “buy-in” problems, efforts should be made to determine faculty members’ goals with respect to student learning and incorporate them into the school’s culture.

There are several resource-related barriers to establishing an assessment program. Even with the best of intentions, the process of defining clear, specific criteria for the cognition, skills, and behaviors for each desired learning outcome can be time-consuming and difficult.17,55 Identifying the most appropriate assessment approaches to use for each type of outcome, the best analyses to use for the data, and how to assess and interpret the results can be a daunting task, especially for the many pharmacy faculty members who lack background and education/training in these areas. Thus, adequate resources are needed to develop, implement, and maintain a sound program assessment plan. Schools/colleges located at institutions that have a school/college of education might call upon their education colleagues for assessment-related assistance and advice as needed.

A variety of other factors beyond learning outcomes and the curriculum per se also contribute to educational effectiveness. The interrelationships among input (eg, selection of students, budget, quality of faculty and graduate students who teach, physical resources), process (eg, goals/objectives, educational approaches, curriculum organization, course content, counseling), and output (eg, drop-out rate, employment statistics, actual graduates’ abilities and values) factors are complex, and one factor is unlikely to completely explain another.55 They still need to be considered, though, when assessing program effectiveness. Collecting and interpreting data related to these factors involves resource considerations (eg, time, personnel). Further, an institution’s need to implement budget cuts might result in assessment efforts or their supporting personnel being targeted for elimination, reduction, or reassignment if assessment is not sufficiently valued. Budgetary shortfalls reinforce the need to use sound assessment data to make curriculum changes.

Students might also represent a barrier to assessment activities. They can be resistant to or suspicious of assessment efforts if they do not understand their purpose, ie, to ultimately improve the educational process and their learning, if they are not sufficiently involved in assessment implementation and result interpretation,56 if the process is inefficient and requires a great deal of time, if it is unfair, or if it is unrealistic.62 Assessment processes should allow students to receive feedback about their performance to not only increase their appreciation of the role of assessment but to enhance their learning.31,59

Assessment-related barriers and challenges are not unique to health sciences education. Many academic disciplines struggle to develop learning outcomes and assessment plans, for reasons similar to, and in some instances different from, those in the health sciences. For example, creative arts faculty can have difficulty developing learning outcomes and performance criteria because of their desire to foster individual student expression. Faculty members in the liberal arts, where not all majors complete the same coursework for graduation and where majors and non-majors are often enrolled in the same courses, can find it difficult to develop learning outcomes common to all majors and assessment strategies geared only to majors. Faculty members in some disciplines (including pharmacy) in which graduates are readily employed may not be easily convinced that assessment is necessary in their program (the “if it’s not broke, don’t fix it” philosophy). Despite the barriers and challenges, assessment efforts can succeed with careful planning, enthusiastic and accepting faculty, strong and dedicated leaders, and appropriate support and resources.

Approaches and methods. Questions to address. How should a comprehensive program assessment plan be prepared? Consulting a variety of existing resources can be a useful first step. Boyce developed a comprehensive document that contains much information about assessment plan development and operation that individual institutions can use.63 Trent recently published a useful overview of learning outcomes assessment planning as applied to the health sciences.55 In addition, Winslade published a summary of a system intended to be used as part of an institution’s assessment plan, based upon a comprehensive analysis of the health sciences assessment literature with an emphasis on medicine.62

Following completion of the necessary first step, development of student learning outcomes, the next step in preparing an assessment plan involves considering the answers to several questions.55,62 These questions are summarized in Table 3. When answering these questions and developing a plan, consider the principles and characteristics of effective assessment strategies, as well as their barriers/challenges and techniques to minimize them.

Questions one through five in Table 3 are very important considerations; remember to start slowly and proceed deliberately to keep the process manageable. Question six in Table 3, selection of appropriate assessment formats and methods, represents a key, but often difficult, aspect of the program assessment plan. Addressing this question requires identifying the specific learning outcome(s) to be assessed, linking or matching one or more assessment formats (eg, written, verbal, simulations, authentic or performance assessments, projects) to the abilities and tasks represented by each outcome, and considering the method(s) for accomplishing this. Format selection should include consideration of validity (particularly the extent to which the format can predict future real-life practice), reliability (not only objectivity, ie, inter-rater reliability, but also generalizability or global reliability), educational impact (ie, how the assessment method will influence learning), feasibility (eg, efficiency, cost, resources, etc), acceptability (to students, faculty, and external stakeholders), and applicability across programs (eg, benchmarking). It is important to realize that it is not the format per se that determines the level of competency being assessed (ie, knows, knows how, shows how, does), but rather the nature of the specific questions included within that format.62
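To make the inter-rater reliability consideration above concrete, the short sketch below computes Cohen’s kappa, a standard chance-corrected agreement statistic, for two preceptors rating the same set of student encounters. This is a minimal illustration in Python with invented ratings; the paper itself prescribes no particular statistic or tooling, so treat the approach, the 3-point scale, and all data as assumptions.

    # Minimal sketch: Cohen's kappa for two raters (all data invented).
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        # Observed agreement: proportion of items the two raters scored identically.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement, from each rater's category frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical ratings of 10 encounters (1 = below, 2 = meets, 3 = exceeds expectations).
    preceptor_1 = [2, 3, 2, 1, 2, 3, 3, 2, 1, 2]
    preceptor_2 = [2, 3, 2, 2, 2, 3, 2, 2, 1, 2]
    print(f"kappa = {cohens_kappa(preceptor_1, preceptor_2):.2f}")  # approximately 0.65

Values near 1 indicate strong agreement beyond chance, while values near 0 suggest the raters agree no more often than chance would predict, signaling a need for rater training or clearer performance criteria.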

Assessment formats. Written assessment formats include essays, true/false questions, multiple-choice questions (of varying forms), short answer and modified essay questions, extended matching items questions (ie, a list of up to 26 options with a short lead-in question or statement followed by a series of stems consisting of short case-based scenarios or vignettes for which the correct option is chosen), and key features questions (ie, a short realistic case description followed by questions, multiple choice or open-ended, that focus on only essential or critical decisions). Extended matching items and key features questions can have advantages related to validity, reliability, and the types of abilities assessed as compared to standard multiple-choice questions.62,64,65

Other assessment formats include simulations (eg, papers, case-based tests, computer-based simulations, some PBL activities, models, simulated patients, etc), portfolios, observational ratings, and exemplary products (eg, documented cases, research projects or papers, or other tangible evidence of students’ work used to infer their ability).11,62 Techniques being explored for assessing professional competence in medicine, which could be applicable to pharmacy, include patient-conducted evaluations of students, portfolios containing videotapes of patient encounters, unannounced standardized patients in the clinical setting, and peer assessment of professionalism.66 The reader is encouraged to consult the recommended readings and references for more detailed information about the various assessment formats and their potential advantages/disadvantages.

Since observational ratings are used extensively during experiential rotations, pharmacy faculty members should be aware of the potential problems with these assessments, even when student learning outcomes are well-defined. The problems include validity and reliability concerns; potentially limited direct observation of certain skills during relatively short rotations; desired outcomes that might not be adequately discriminated by the ratings; susceptibility to a “halo effect,” in which subsequent ratings are skewed by either a very good or a very bad performance on a previous item or task; ratings that are unduly influenced by students’ communication skills and interpersonal relationships; and a possible time lag between when observations occur and when feedback is provided.11,17,55,62,64,67 Approaches to improve observational ratings as an assessment format include providing rater training and sufficient feedback to preceptors about their ratings,17 ensuring that rating forms are concise,62 requiring a minimum number of documented preceptor observations of certain activities,62 using encounter cards (which take only a minute or so to complete) designed to provide feedback to students each time a patient encounter is observed,67 and supplementing observational ratings with more objective exercises, tests, etc.68

The OSCE, progress testing (ie, students in each year of the program take, multiple times per year, the same test reflecting the knowledge and applications expected upon graduation, or a mix of early-stage and later-stage items), and use of an assessment center represent administration methods, since each can incorporate a mix of assessment formats (eg, written or verbal, simulations, etc). Progress testing has generally used true/false or multiple-choice formats, although other formats are possible. Any of these administration methods can have advantages or disadvantages, depending on the specific tasks and outcomes targeted and the formats selected.55,62,64,69

In summary, it is impossible to maximize every ideal characteristic of the formats and methods used in an assessment plan. Thus, trade-offs exist; eg, an employed method might be valid and reliable but not very feasible, or valid and feasible but not as reliable.11 As students advance through the program, multiple aspects of the profession are introduced into the curriculum, which increases the complexity of both the tasks and their assessment. In addition, since content influences student performance, good performance on one particular case study does not reliably predict performance on other cases involving different topics.19 Thus, the assessment plan should measure all applicable dimensions, ideally progressing from a variety of discrete, non-authentic assessments to integrated, authentic, and performance assessments.31 Finally, assessment plans should not rely primarily on surveys but should include direct as well as indirect measures of learning.

SUMMARY

Student learning lies at the heart of the existence of schools and colleges of pharmacy. Outcomes for student learning must therefore be the guiding force behind the content and format of pharmacy curricula. The methods employed to assess the degree of student learning can, in turn, influence learning itself. Thus, student learning outcomes, the curriculum, and program assessment are not only critical but also interrelated components of pharmacy schools/colleges. Schools and colleges must continually refine and update their student learning outcomes as appropriate to reflect state-of-the-art practice, develop a variety of educational experiences to assist students in maximally achieving these outcomes, and obtain and use valid, reliable assessment data to make changes in the curriculum. Many published and other resources are available to assist with the development and implementation of comprehensive program assessment plans; schools and colleges are urged to consult these resources for additional detailed information as needed.

RECOMMENDATIONS

Several recommendations are provided to facilitate the ongoing processes of curriculum development and learning outcomes assessment. These recommendations are targeted to schools/colleges and their faculty/administration, AACP, and ACPE.

Schools and Colleges

1. Ensure, to the extent possible, that administrators at all levels (provost, dean, assistant/associate deans, chairs) understand and communicate to others the value of assessment and the important relationship between assessment and learning. They should actively advocate, encourage, and support solid curriculum development and assessment practices in their programs.

2. Create an environment that motivates, develops, and supports teaching/learning advances, curricular change, and sound assessment practices. An institutional commitment to faculty development is necessary for advancing knowledge of educational theories, learning styles, outcomes assessment, and curricular enhancement. Schools/colleges should send faculty members to AACP Institutes as appropriate and encourage faculty to attend national meetings involving teaching/learning, curriculum, and assessment (eg, AACP and American Association for Higher Education [AAHE] meetings).

3. Institute a reward system for faculty members who develop innovative teaching practices to help students achieve learning outcomes. Similarly, appropriately reward or recognize (eg, teaching awards, consideration as part of promotion/tenure decisions, etc) faculty members who provide evidence that their course assessment methods appropriately link with desired learning outcomes and who provide students with formative feedback and the opportunity to practice desired outcomes and skills.

4. Support the scholarship of teaching/learning, and add well-designed, documented, and evaluable assessment-related practices to the definition of the scholarship of teaching/learning.

5. Schedule regular meetings/discussions pertaining to teaching/learning and assessment at both the departmental and school/college levels so they become part of the institutional culture.

6. Determine prospective faculty candidates’ teaching philosophy. Ask them to provide examples of their teaching and any innovative teaching or evaluation methods employed. To the extent feasible, hire candidates who express enthusiasm for teaching and interest in enhancing student learning.

7. Encourage new faculty members and allow them sufficient time to develop their teaching skills, to learn about key educational concepts and the education literature, and to develop desired learning outcomes and performance criteria for their teaching that are consistent with the school/college or department learning outcomes. If new faculty members are provided “release” time to develop their research or service areas, consider asking these faculty members to also use part of this time to formulate learning outcomes and educational approaches for the material they will teach.

8. Develop student learning outcomes for the program, identify the course(s) that will address each learning outcome, develop course instructional strategies consistent with the outcomes, and ensure that course assessment measures are consistent with the learning outcomes.

9. Consider use of multiple instructional activities and strategies as appropriate for specific outcomes to help accommodate and develop different student learning styles.

10. Determine students’ baseline knowledge and skills (eg, through surveys, questionnaires, pre-tests, etc) at the beginning of individual courses and experiential rotations. This will assist faculty members in building appropriately on prior knowledge and facilitate the development of students’ individual knowledge (semantic) networks.

11. Invest necessary resources into developing solid assessment plans and measures. This should include assuring that faculty members assuming a leadership role in the program’s assessment plan have sufficient time to devote to these activities. Allow for a period of up to several years for the assessment plan to become fully developed and established, during which time resource needs are likely to be greater.

12. Seek assistance as needed from those with expertise in education and assessment (eg, faculty at a school/college of education if present at an institution) during assessment plan development. It is unrealistic to expect that faculty members with little to no background or training in this area and with little knowledge of psychometric measures will be able to develop and implement valid, comprehensive learning outcomes assessment plans without significant time, training/development, support, and guidance.

13. Enlist/hire other individuals as appropriate (eg, senior students to help with grading, practitioners, residents, graduate students) to assist with assessment efforts and minimize costs (eg, faculty resources, time constraints).

14. Ask faculty members, for each of their courses or areas of teaching responsibility, to identify possible course-embedded assessment strategies that could provide data useful to the program’s overall assessment plan.

15. Include information/data about inputs, processes, and outputs as part of the assessment plan.

16. Include both direct and indirect measures in the assessment plan.

17. Use multiple measures to assess the achievement of learning outcomes. Include the assessment of attitudes/values, using a mix of measures as appropriate.

18. Select assessment methods appropriate for the school’s/college’s circumstances/needs; there is no one right method for obtaining student learning outcomes data. Where possible, strive to use existing methods with demonstrated validity and reliability. Use external judgment, an assessment committee, and/or another internal review group to help establish the reliability/validity of institutionally developed tools and instruments. Supplement assessment methods that have lower reliability/validity with those of higher reliability/validity.

19. Examine overall and individual raters’ consistency of ratings during experiential rotations and examine ratings data for predictive value. Ensure that preceptors receive adequate training and appropriate feedback as raters. Consider incorporating additional assessment methods, eg, encounter cards, portfolios, specific exercises or assignments, etc, within rotations to assist with formative as well as summative assessments.

20. Form focus groups to examine and provide input into any or all of the assessment-related areas.

21. Establish a standing external quality assurance group composed of members from academia and practice, with specific charges, eg, to review/comment upon the program’s assessment plan and processes, identify additional types of assessment data needed by the program, and review and comment on assessment findings/conclusions and any actions taken by the program based upon the data.

22. Ensure that the results from the assessment plan are shared with all stakeholders as appropriate and are used as indicated to make curricular changes, ie, closing the loop. Expend or reallocate resources as needed to remedy any significant student learning problems identified.

AACP

1. Work with the Association of American Medical Colleges (AAMC), the American Association of Colleges of Nursing (AACN), and the American Dental Education Association (ADEA) to help develop a common assessment language and terminology across health care disciplines.

2. Provide and support mechanisms by which schools/colleges can easily share with others their student learning outcomes and instructional strategies, including successes and failures. To help accomplish this, consider the development and maintenance of an “Outcomes, Teaching/Learning, Assessment Resources” section on the AACP web site. Resources provided in this section could include copies of student learning outcomes documents developed by schools/colleges; brief descriptions of instructional strategies or learning experiences used (both successfully and unsuccessfully) by schools/colleges, along with contact information for faculty willing to serve as advisors/consultants to others; proceedings from the annual AACP Institutes; and links to other relevant web sites.

3. Facilitate the full publication of curriculum, teaching/learning, and assessment-related abstracts presented at the annual meetings. Alternatively, recommend or require the submission of a paper instead of an abstract for meeting presentation and publish the papers in a meeting program book.

4. Hold “assessment fairs” at the annual meetings, at which each school/college could be assigned a table/booth for sharing their assessment-related strategies. Interested faculty can then easily visit several schools/colleges to ask questions and gather information.

5. Facilitate and assist with the development, testing, validation, and refinement of assessment tools, instruments, surveys, and rubrics that can be used by multiple schools and colleges. Assist interested individual schools/colleges in learning how to validate internally developed tools, instruments, etc. Serve as a clearinghouse for the distribution of these assessment-related materials along with guidelines/recommendations for their appropriate use.

6. Work with the National Association of Boards of Pharmacy to provide schools/colleges with more detailed feedback about their graduates’ performance on the licensing examination that could better assist with program assessment efforts. Examination performance data could be subdivided into additional categories corresponding to specific, individual outcomes/objectives, eg, literature evaluation skills, detection and evaluation of drug interactions, ability to select appropriate therapeutic agents, etc.

7. Facilitate and assist with the evaluation of various instruments used to measure learning styles and with determining if, when, and how they should be used. Recommend and distribute specific ones for use by interested schools/colleges.

8. Facilitate and assist with the development of progress testing examinations that can be used early in the curriculum, late in the curriculum, and for various disciplines. Make these examinations available to schools/colleges for use as desired within their curricula or assessment plans. Consider establishing a center to which schools/colleges could confidentially or anonymously submit examination results, with compiled and analyzed data provided to those interested.

9. Obtain and distribute data on the uses and effectiveness of various technologies in pharmacy education.

10. Assist schools/colleges with the development of guidelines for incorporation of the scholarship of assessment into the scholarship of teaching/learning.

11. Modify the current CAPE Educational Outcomes and establish a process for ensuring their revision/updating on a regular basis.

ACPE

Since ACPE has standards in place that address the following, it is recommended that focus continue to be placed on these areas:

1. Ensure that appropriate curriculum development and assessment cultures exist at schools/colleges. The administration at all levels should be both supportive of and knowledgeable about assessment.

2. Ensure that there is an institutional commitment to faculty development, with opportunities provided related to teaching/learning, curriculum, and assessment. Regularly scheduled department and school meetings/discussions/ workshops should be held related to curriculum, assessment, and discussion of the program’s assessment plan.

3. Ensure that students’ and appropriate external stakeholders’ input are included throughout the various steps of a school’s/college’s assessment process.

4. Ensure that evidence of strong leadership and adequate resources/support exists for the development and implementation of the assessment plan. All faculty should also be knowledgeable about and involved in the assessment process.

5. Ensure that schools/colleges have well-defined, appropriate student learning outcomes as well as course assessment methods that are consistent with the learning outcomes. Each course/experience should be able to indicate the learning outcomes it addresses, as well as how students’ mastery of those outcomes is assessed.

6. Ensure that the curriculum contains sufficient room to allow time for students to engage in reflection and to practice needed skills/abilities.

7. Ensure that schools/colleges have a solid, well-developed, assessment plan that is reexamined and modified as needed on a continuing basis.

8. Ensure that evidence exists that schools/colleges have initiated appropriate changes to improve student learning when assessment data indicate deficiencies or weaknesses. Schools/colleges should also have a plan in place to assess the effectiveness of any changes made.

9. Consider the use of a tool such as “Levels of Implementation,” available from the Higher Learning Commission of the NCA, to assist site evaluation teams in asking appropriate assessment-related questions of schools/colleges.

RECOMMENDED READINGS

Assessment – Methods

Beck DE. Performance based assessment: using preestablished criteria and continuous feedback to enhance a student’s ability to perform practice tasks. J Pharm Pract. 2000;13:347-64.

Chambers DW, Glassman P. A primer on competency-based evaluation. J Dent Educ. 1997;61:651-66.

Fowell SL, Bligh JG. Recent developments in assessing medical students. Postgrad Med J. 1998;74:18-24.

Friedman Ben-David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE medical education guide no. 24: portfolios as a method of student assessment. Med Teach. 2001;23:535-51.

McMullan M, Endacott R, Gray MA, et al. Portfolios and assessment of competence: a review of the literature. J Adv Nurs. 2003;41:283-94.

Schuwirth LWT, van der Vleuten CPM. Written assessment. BMJ. 2003;326:643-5.

Selby C, Osman L, Davis M, Lee M. Set up and run an objective structured clinical exam. BMJ. 1995;310:187-90.

Smee S. Skill based assessment. BMJ. 2003;326:703-6.

Assessment – Reviews

Palomba CA, Banta TW. Assessment Essentials. San Francisco: Jossey-Bass Inc, Publishers; 1999.

Maddux MS. Institutionalizing assessment as learning within an ability based program. J Pharm Teach. 2000;7:141-60.

Trent AM. Outcomes assessment planning: an overview with applications in health sciences. J Vet Med Educ. 2002;29:9-19.

van der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Med Teach. 2000;22:246-50.

Winslade N. A system to assess the achievement of Doctor of Pharmacy students. Am J Pharm Educ. 2001;65:363-92.

Curriculum Development

Chalmers RK, Grotpeter JJ, Hollenbeck RG, et al. Ability-based outcome goals for the professional curriculum: a report of the focus group on liberalization of the professional curriculum. Am J Pharm Educ. 1992;56:304-9.

Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum Development for Medical Education: A Six Step Approach. Baltimore, MD: The Johns Hopkins University Press; 1998.

Prideaux D. Curriculum design. BMJ. 2003;326:268-70.

Zlatic TD. Abilities-based assessment within pharmacy education: preparing students for practice of pharmaceutical care. J Pharm Teach. 2000;7:5-27.

Educational Strategies

Barrows HS. A taxonomy of problem-based learning methods. Med Educ. 1986;20:481-6.

Cantillon P. Teaching large groups. BMJ. 2003;326:437-40.

Hurd PD. Active learning. J Pharm Teach. 2000;7:29-47.

Jaques D. Teaching small groups. BMJ. 2003;326:492-4.

Shatzer JH. Instructional methods. Acad Med. 1998;73:S38-S45.

Wood DF. Problem based learning. BMJ. 2003;326:328-30.

Learning Styles

Curry L. Cognitive and learning styles in medical education. Acad Med. 1999;74:409-13.

Vaughn L. Teaching in the medical setting: balancing teaching styles, learning styles and teaching methods. Med Teach. 2001;23:610-2. (Tables at: http://www.medicalteacher.org).

Regional accreditation organizations, institutional expectations

Middle States Association of Colleges and Schools (MSA). Available at: www.msache.org. Accessed August 22, 2003.

New England Association of Schools and Colleges (NEASC-CIHE). Available at: www.neasc.org. Accessed August 22, 2003.

North Central Association of Colleges and Schools (NCA-HLC). Available at: www.ncahigherlearningcommission.org. Accessed August 22, 2003.

Northwest Association of Schools, Colleges and Universities (NWA). Available at: www.nwccu.org. Accessed August 22, 2003.

Southern Association of Colleges and Schools (SACS). Available at: www.sacscoc.org. Accessed August 22, 2003.

Western Association of Schools and Colleges (WASC-ACSCU). Available at: www.wascweb.org. Accessed August 22, 2003.

REFERENCES

1. Prideaux D. The emperor’s new clothes: from objectives to outcomes. Med Educ. 2000;34:168-9.

2. National Center for Postsecondary Improvement (NCPI). A report to stakeholders on the condition and effectiveness of postsecondary education, part two: a respectable “B.” Change. 2001;33:23-38.

3. Burd S. Will Congress require colleges to grade themselves? Chron Higher Educ. 2003;XLIX:A27.

4. Gardner DP, et al. A nation at risk: the imperative for educational reform. Washington DC: National Commission on Excellence in Education (US), 1983. Stock No. 065-000-00177-2. Superintendent of Documents. Washington DC: US Government Printing Office.

5. Involvement in learning: realizing the potential of American higher education. Final report of the Study Group on the Conditions of Excellence in American Higher Education. Washington DC: National Institute of Education (US), 1984. Stock No. 065-000-00213-2. Superintendent of Documents. Washington DC: US Government Printing Office.

6. Kellogg Commission on the Future of State and Land-Grant Universities. Returning to our roots: toward a coherent campus culture. Fifth report. An open letter to the presidents and chancellors of state universities and land-grant colleges, 2000. National Association of State Universities and Land-Grant Colleges. Available at: www.nasulgc.org/publications/Kellogg/Kellogg2000_Culture.pdf. Accessed August 25, 2003.

7. Wegner GR. Beyond Dead Reckoning: Research Priorities for Redirecting American Higher Education, 2002. National Center for Postsecondary Improvement. Available at: www.stanford.edu/group/ncpi/documents/pdfs/beyond_dead_reckoning.pdf. Accessed August 25, 2003.

8. American Council on Pharmaceutical Education. Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree, Adopted June 14, 1997. Available at: www.acpe-accredit.org/frameset_Pubs.htm. Accessed June 15, 2003.

9. Scott DM, Robinson DH, Augustine SC, Roche EB, Ueda CT. Development of a professional pharmacy outcomes assessment plan based on student abilities and competencies. Am J Pharm Educ. 2002;66:357-61.

10. Bouldin AS, Wilkin NE. Programmatic assessment in U.S. schools and colleges of pharmacy: a snapshot. Am J Pharm Educ. 2000;64:380-7.

11. Chambers DW, Glassman P. A primer on competency-based evaluation. J Dent Educ. 1997;61:651-66.

12. Chalmers RK, Grotpeter JJ, Hollenbeck RG, et al. Ability-based outcome goals for the professional curriculum: a report of the focus group on liberalization of the professional curriculum. Am J Pharm Educ. 1992;56:304-9.

13. CAPE educational outcomes. Alexandria: American Association of Colleges of Pharmacy, 1998.

14. Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum Development for Medical Education: A Six Step Approach. Baltimore, MD: The Johns Hopkins University Press; 1998.

15. Bloom BS (ed). Taxonomy of Educational Objectives, Handbook I: Cognitive Domain. New York: Longmans, Green; 1956.

16. Commission to Implement Change in Pharmaceutical Education, Background Paper II. Alexandria: American Association of Colleges of Pharmacy, 1991.

17. Beck DE. Performance based assessment: using preestablished criteria and continuous feedback to enhance a student’s ability to perform practice tasks. J Pharm Pract. 2000;13:347-64.

18. National Institute for Science Education. Field-tested Assessment Learning Guide. Madison, WI: University of Wisconsin-Madison; 2003. Available at: www.flaguide.org. Accessed June 10, 2003.

19. van der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Med Teach. 2000;22:246-50.

20. Grace M. Learning styles. Br Dent J. 2001; 191:125-8.

21. Curry L. Cognitive and learning styles in medical education. Acad Med. 1999;74:409-13.

22. Vaughn L. Teaching in the medical setting: balancing teaching styles, learning styles and teaching methods. Med Teach. 2001;23:610-2. (Tables accessed at http://www.medicalteacher.org).

23. Hurd PD, Hobson EH. Pharmacy students’ learning style profile: course and curricular implications [abstract]. Am J Pharm Educ. 1999;63(suppl):XXS.

24. Bouldin AS. Determination of learning style preferences at the University of Mississippi School of Pharmacy [abstract]. Am J Pharm Educ. 2002;66(suppl):XXS.

25. Cobb HH, Thomas PC, Schramm LC, Chisholm MA, Francisco GE. Learning style assessment of a second year pharmacy class at the University of Georgia College of Pharmacy [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.

26. Fox LM. Influence of student learning style on learning modality preference [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.

27. Anderson J. Tailoring assessment to student learning styles. AAHE Bulletin. 2001 (March). Available at: aahebulletin.com/public/archive/archive.asptfassessment. Accessed August 25, 2003.

28. Shuck AA, Phillips CR. Assessing pharmacy students’ learning styles and personality types: a ten-year analysis. Am J Pharm Educ. 1999;63:27-33.

29. Davis LE, Boyce EG, Blumberg P. Inventory of student approaches to learning and studying during an entry-level Doctor of Pharmacy program [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.

30. Kaufman DM. Applying educational theory in practice. BMJ. 2003;326:213-6.

31. Friedman Ben-David M. The role of assessment in expanding professional horizons. Med Teach. 2000;22:472-7.

32. Irby DM, Wilkerson L. Educational innovations in academic medicine and environmental trends. J Gen Intern Med. 2003;18:370-6.

33. Sprague JE, Christoff J, Allison JC, Kisor DF, Sullivan DL. Development and implementation of an integrated cardiovascular module in a PharmD curriculum. Am J Pharm Educ. 2000;64:20-6.

34. Chan ES, Monk-Tutor MR, Sims PJ. Progressive model for integrating learning concepts across pharmacy practice courses to avoid the learn-and-dump phenomenon [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.

35. Deloatch KH, Joyner PU, Raasch RH. Integration of general and professional abilities across the Doctor of Pharmacy curriculum at the University of North Carolina [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.

36. Wolf W, Besinque KH, Wincor MZ. Flexible and integrated approach to professional teaching: therapeutics module [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.

37. Stull R, Billeter M, Carter R, et al. Integrating the curriculum at Shenandoah University School of Pharmacy [abstract]. Am J Pharm Educ. 1998;62(suppl):XXS.

38. Modell HI. Preparing students to participate in an active learning environment. Am J Physiol. 1996;270:S69-S77.

39. Maudsley G. Do we all mean the same thing by ‘problem-based learning’? A review of the concepts and a formulation of the ground rules. Acad Med. 1999;74:178-85.

40. Cisneros RM, Salisbury-Glennon JD, Anderson-Harper HM. Status of problem-based learning research in pharmacy education: a call for future research. Am J Pharm Educ. 2002;66:19-26.

41. Abate MA, Meyer-Stout PJ, Stamatakis MK, et al. Development and evaluation of computerized problem-based learning cases emphasizing basic sciences concepts. Am J Pharm Educ. 2000;64:74-82.

42. Carlson S. Are personal digital assistants the next must-have tool? Chronicle of Higher Education. 2002;49:A33-36.

43. Bertling CJ, Simpson DE, Hayes AM, et al. Personal digital assistants herald new approaches to teaching and evaluation in medical education [abstract]. Am J Pharm Educ. 2003;67(suppl):XXS.

44. Zarotsky V, Jaresko GS. Technology in education – Where do we go from here? J Pharm Pract. 2000;13:373-81.

45. Piascik P. What if we approached our teaching like we approach our research? Am J Pharm Educ. 2002; 66:461-2.

46. Beck DE. Pharmacy educators: can an evidence-based approach make your instruction better tomorrow than today? Am J Pharm Educ. 2002;66:87-8.

47. Harden RM. Curriculum mapping: a tool for transparent and authentic teaching and learning. Med Teach. 2001; 23:123-37.

48. Draugalis JR, Slack MK, Sauer KA, Haber SE, Vaillancourt RR. Creation and implementation of a learning outcomes document for a Doctor of Pharmacy curriculum. Am J Pharm Educ. 2002;66:253-60.

49. Scott DM, Roche EB, Augustine SC, Robinson DH, Ueda CT. Assessment of students’ abilities and competencies using a curriculum mapping procedure. Am J Pharm Educ. 2001;65(suppl):XXS.

50. Zavod RM, Zgarrick DP. Appraising general and professional ability based outcomes: curriculum mapping project. Am J Pharm Educ. 2001;65(suppl):XXS.

51. Bouldin AS, Wilkin NE, Wyandt CM, Wilson MC. General and professional education abilities: identifying opportunities for development and assessment across the curriculum. Am J Pharm Educ. 2001;65(suppl):XXS.

52. Maki P. Using multiple assessment methods to explore student learning and development inside and outside of the classroom. Washington DC: National Association of Student Personnel Administrators, 2002. Available at: http://www.naspa.org/NetResults/article.cfm?ID=558. Accessed June 29, 2003.

53. Astin AW. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: Macmillan/American Council on Education; 1991.

54. Pascarella ET, Terenzini PT. How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass Inc, Publishers; 1991.

55. Trent AM. Outcomes assessment planning: an overview with applications in health sciences. J Vet Med Educ. 2002;29:9-19.

56. Banta TW. Moving assessment forward: enabling conditions and stumbling blocks. New Directions for Higher Educ. 1997;25:79-91.

57. James Madison University. Characteristics of an effective assessment program. Harrisonburg, VA: Center for Assessment and Research Studies; 2003. Available at: http://www.jmu.edu/assessment/effect.shtml. Accessed June 10, 2003.

58. Vroeijenstijn MA. Quality assurance in medical education. Acad Med. 1995;70(suppl):S59-S67.

59. Fowell SL, Southgate LJ, Bligh JG. Evaluating assessment: the missing link? Med Educ. 1999;33:276-81.

60. NCA/The Higher Learning Commission. Assessment of student academic achievement: levels of implementation. Chicago: NCA/The Higher Learning Commission; 2002. Available at: www.higherlearningcommission.org/resources/assessment/index.html. Accessed June 20, 2003.

61. Palomba CA, Banta TW. Assessment Essentials. San Francisco: Jossey-Bass Inc, Publishers; 1999.

62. Winslade N. A system to assess the achievement of Doctor of Pharmacy students. Am J Pharm Educ. 2001;65:363-92.

63. Boyce EG. A guide for doctor of pharmacy program assessment. Alexandria: American Association of Colleges of Pharmacy, 2000. Available at: http://www.aacp.org/Docs/MainNavigation/Resources/5416_pharmacyprogramassessment_forweb.pdf. Accessed August 31, 2003.

64. Fowell SL, Bligh JG. Recent developments in assessing medical students. Postgrad Med J. 1998;74:18-24.

65. Schuwirth LWT, van der Vleuten CPM. Written assessment. BMJ. 2003;326:643-5.

66. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226-35.

67. Hatala R, Norman GR. In-training evaluation during an internal medicine clerkship. Acad Med. 1999;74:S118-S20.

68. Cunnington J. Evolution of student assessment in McMaster University’s MD programme. Med Teach. 2002;24:254-60.

69. Purkerson DL, Mason HL, Chalmers RK, Popovich NG, Scott SA. Expansion of ability-based education using an assessment center approach with pharmacists as assessors. Am J Pharm Educ. 1997;61:241-8.

70. James Madison University. Dictionary of Student Outcome Assessment. Harrisonburg, VA: Center for Assessment and Research Studies; 2003. Available at: http://www.jmu.edu/assessment/aresource.shtml. Accessed June 30, 2003.

71. Wilson M, Sloane K. From principles to practice: an embedded assessment system. Appl Meas Educ. 2000;13:181-208.

72. Maddux MS. Institutionalizing assessment as learning within an ability based program. J Pharm Teach. 2000;7:141-60.

Marie A. Abate, PharmD,a Mary K. Stamatakis, PharmD,a and Rosemary R. Haggett, PhDb

a School of Pharmacy, West Virginia University

b Office of the Provost, West Virginia University

Corresponding Author: Marie A. Abate, PharmD. Mailing Address: School of Pharmacy, West Virginia University, 1124 Health Sciences North, Morgantown, WV 26506-9520. Tel: (304) 293-1463. Fax: (304) 293-7672. E-mail: mabate@hsc.wvu.edu.

Copyright American Association of Colleges of Pharmacy 2003

