
Schools as host environments: Toward a schoolwide reading improvement model

Kame’enui, Edward J

Despite vast differences among school districts across the country, all students must learn how to read in a complex “host environment” called a school. A challenge in beginning reading, therefore, is to transcend these differences and focus, instead, on the essential task of teaching reading in schools. Teaching reading involves attending to what we know about beginning reading and the alphabetic writing system, the difficulties of reading, and the challenges associated with dyslexia. Teaching reading in a school requires that interventions be tailored to the unique needs of an individual school and implemented and sustained at the school building level. In this article, we outline the Schoolwide Reading Improvement Model (SRIM). This model is characterized by the strategic integration of research-based practices in assessment, instructional design, and beginning reading instruction. Additionally, the SRIM acknowledges the specific needs of individual schools and is customized to provide the best fit with each unique “host environment.” First, we provide a description of each major stage of the SRIM and then an example of its application in a school district in western Oregon.

Schools are inherently complex environments that are made even more complex by significant social, political, economic, pedagogical, legal, cultural, demographic, and historical forces. Although some of these forces are whimsical and others are coercive, they unwittingly shape the very nature and function of schools. As a result, the more than 85,000 public elementary and secondary schools in the United States vary in any number of ways. To give just one example, schools come in all sizes, and size matters. Urban schools tend to be larger than rural schools. Moreover, urban school districts usually consist of a vast number of schools. Not surprisingly, bigger, urban schools are likely to be more complex fiscally and administratively than smaller, rural schools. By way of illustration, the Los Angeles Unified School District (LAUSD), the second largest school district in the country, has 420 elementary schools, 72 middle schools, and 49 senior high schools; an enrollment of 697,143 students who speak more than 80 different languages and dialects; a certified staff of more than 41,000; and a total district budget of $6.5 billion. In fact, the budget for the LAUSD is bigger than the state budgets, for example, of Alaska, Colorado, Delaware, Hawaii, New Hampshire, and Wyoming. In contrast to the LAUSD, the Bethel School District (BSD) in Eugene, Oregon, has six elementary schools, two middle schools, and one high school, with a total enrollment of 5,246, a certified staff of 272, and an entire district budget of $30 million. The numerical differences between these disparate districts are staggering and provocative. Specifically, the LAUSD has 70 times more elementary schools, 133 times more students, roughly 150 times more certified staff, and a budget that is nearly 220 times greater than the Bethel School District. Size matters, and sometimes, in a very big way.
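The district comparisons above are simple ratios of the figures quoted in the text. As a minimal sketch (the dictionary names are purely illustrative), the arithmetic can be checked as follows:

```python
# Check the LAUSD vs. Bethel size ratios quoted above.
# All figures come from the text; the dict structure is illustrative only.
lausd = {"elementary_schools": 420, "students": 697_143,
         "certified_staff": 41_000, "budget_usd": 6_500_000_000}
bethel = {"elementary_schools": 6, "students": 5_246,
          "certified_staff": 272, "budget_usd": 30_000_000}

# Ratio of each LAUSD figure to the corresponding Bethel figure.
ratios = {key: lausd[key] / bethel[key] for key in lausd}
# Roughly matches the 70x, 133x, 150x, and 220x figures cited in the text.
```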
Students in both Los Angeles and Bethel, however, share a fundamental similarity despite the differences in the size of their schools and districts. They all have to learn to read. Furthermore, they all have to learn to read in a complex environment of people, pedagogy, policies, and programs called a school. One challenge in beginning reading, therefore, is to get beyond the often overwhelming differences among school districts and focus, instead, on the essential task of teaching reading in schools. Teaching reading involves attending to what we know about beginning reading and the alphabetic writing system, the difficulties of reading, and the challenges associated with dyslexia. Teaching reading in a school requires that interventions be tailored to the unique needs of an individual school and implemented and sustained at the school building level. These are the essential components necessary to improve the reading outcomes of students, irrespective of the school or district they attend.

Domain-specific knowledge should inform our efforts to teach reading and prevent reading failure. Fortunately, we know more about reading disability than about all the other learning disabilities put together. This assertion, recently expressed by Stanovich (1999), underscores the substantial knowledge base that exists in our field. This knowledge base comes from the sizable body of converging research evidence accumulated over the past thirty years and reflects a significant advancement in our understanding of both the nature of dyslexia and the ways in which we as educators and parents can work to ensure that children become successful readers (National Research Council 1998; Adams 1990). We know that reading failure is a serious and pervasive concern in our society (U.S. Department of Education 1999). We know that a clear and unforgiving “line in the sand” exists at the end of grade 3 after which students who are poor readers almost never catch up to their peers who are good readers (Felton and Pepper 1995; Juel 1988). We know that we must identify students at risk of reading failure early, that systematic and strategic intervention should begin no later than kindergarten, and that monitoring of student progress should be formative and ongoing (Kame’enui, Simmons, and Coyne 1999). We know that interventions targeted toward students at risk of reading failure should focus on at least the three “big ideas” of beginning reading: phonological awareness, alphabetic understanding, and automaticity with the code (Simmons and Kame’enui 1998; Kame’enui and Carnine 1998).

The singular act of teaching reading, however, cannot be separated from the contexts in which it takes place. Too often, otherwise effective reading programs fall short because educators consider only one “context” (i.e., the learner context) of the learning, teaching, and schooling process (Carroll 1963, 1989; Mosenthal 1984). In addition, such interventions fail to reflect and accommodate the vagaries of changing student performance in the shifting milieu of classrooms and schools (Hedges and Waddington 1993). As Mosenthal (1984) noted more than a decade ago, there is not “one ideal and absolute geometry,” but multiple geometries or contexts that influence the teaching and learning space (p. 206). Teaching reading takes place in a complex “host environment” (Zins and Ponti 1990; Sugai and Horner 1999), and the host environment that holds constant across all of public education is the school. Therefore, it is necessary to take into account the distinctive combination of multiple contexts that exist within an individual school and customize interventions to provide the best fit with each unique school environment.

What we now know about reading disability and the intricacies of schools compels us to intervene in more complex, comprehensive, and coordinated ways. It is not enough to teach reading by relying on vague and general guidelines that attempt to span all disciplines. Teaching reading is, indeed, rocket science (Moats 1999); therefore, we must focus our efforts on the specifics of dyslexia and beginning reading in an alphabetic writing system (Stanovich 1999). Neither is it enough to assume that individual teachers, working independently, can implement and sustain the host of research-based practices that we know are necessary to prevent reading failure. Rather, our scope should extend beyond individual teachers to mirror and capture the genuine complexities in “real world” schools (Kame’enui and Simmons 1998). We are at a point in our field where we have the knowledge to effect schoolwide, coordinated efforts to ensure that every student learns to read, from the rural areas of Eugene, Oregon, to the sprawling urban neighborhoods of Los Angeles. What we know requires that we do no less.

Below, we outline the Schoolwide Reading Improvement Model (SRIM). This model attempts to integrate what we know about the specifics of dyslexia and beginning reading with what we know of the realities of implementing and sustaining effective practices in complex host environments called schools. In the sections that follow, we provide a description of each major stage of the SRIM and then an example of its application in a school district in western Oregon.

A SCHOOLWIDE READING IMPROVEMENT MODEL (SRIM)

The SRIM consists of five stages (see figure 1) and combines four primary components:

1. dynamic assessment of big ideas or target performance indicators

2. research-based practices and procedures in beginning reading

3. validated principles of effective curriculum and instruction

4. customized interventions in integrated contexts as the basis for reading improvement models that fit the host environment.

A key feature of this model is the essential linkage of assessment and instruction. Though integrating assessment and intervention is not a novel concept and is indeed a signature of effective special education (Deno 1992; Fuchs and Fuchs 1994), what is innovative and effective about this process is the timely, strategic fit of the measures (what to assess), the targets of reading improvement (what to teach), and interventions that have a high probability of improving reading (how to teach). This confluence of performance indicators and instructional intervention positions a school to identify children early who are at serious risk of reading failure, intervene strategically, and modify instruction responsively in accord with learner performance.

The model and its decision-making processes draw extensively on the work in reading assessment of Shinn (1997) and Kaminski and Good (1996) and combine their procedures for identifying, grouping, problem solving, and performance monitoring with Kame’enui and Simmons’ (1998) components of contextual interventions to reflect an integrated and comprehensive intervention model anchored to the distinguishing characteristics of individual schools.

STAGE I: ANALYZE CONTEXTS AND ASSESS STUDENT PERFORMANCE USING DYNAMIC INDICATORS OF BIG IDEAS

Description. The goals of Stage I are twofold and operate concurrently. The first goal is to determine what is currently in place in the school with respect to instructional priorities, time allocation to reading instruction, instructional materials and programs, organizational strategies, and overall student performance. Schools conduct an internal audit guided by a “Planning and Evaluation Tool for Effective Schoolwide Reading Programs” (Kame’enui and Simmons 1999) that examines school goals, instructional priorities, teacher philosophy, and current practices. Results of the inventory illuminate the unique interactions of the multiple contexts existing within a school and provide a framework for anchoring decisions made during subsequent stages of the SRIM to the particular realities of the host environment or school. The second goal of Stage I is to identify children who are at risk of reading disabilities or delay. Kaminski and Good (1996) describe this process as “problem identification.” At the beginning of the school year, all children, kindergarten through Grade 3, are screened with measures that correspond to the big ideas in beginning reading: phonological awareness, alphabetic understanding, and automaticity with the code. The premise behind big idea indicators is that while these screening measures do not tell us everything about reading achievement, they serve as valid and reliable predictors of skills highly associated with later reading achievement. Deno (1992) describes such measures as indicators or “vital signs of growth in basic skills comparable to the vital signs of health used by physicians” (Deno 1992, p. 6). Performance indicators provide a fast and efficient indication of the reading well-being of students with respect to reading skills essential to successful performance in the general education curriculum (Kaminski and Good 1998).

Screening measures differ according to grade and learner performance. For example, in kindergarten and first grade, Dynamic Indicators of Basic Early Literacy Skills (DIBELS) (Kaminski and Good 1998), which include onset recognition, phonemic segmentation, letter naming, and nonsense word reading, are used to identify and monitor children whose performance differs significantly from their same-age peers. Once students are able to read words in connected text (approximately mid-first grade through third grade), measures of oral reading fluency from curriculum-based passages are used as indicators of reading achievement (Shinn 1997). Students’ performance on these indicators is then compared to performance expectations, or “where we would expect children to perform,” to identify children at risk of reading disability or delay. Performance expectations may be derived from two sources: local normative data or performance associated with early reading success (Kaminski and Good 1996).

This stage integrates several contexts including setting (school), task (specific reading measures), and learner (performance on critical indicators). This integrative model allows schools to examine learner performance not only at the individual level, but also at the school level, to determine the magnitude of the problem. From this big-picture analysis, the scope and intensity of the intervention can be assessed. Furthermore, schools are better able to respond to children’s needs proactively through early screening and identification. Stage I involves initiating and maintaining a centralized system for managing student-performance data at the school level to enable timely and informed decisions. This dynamic database and record-keeping system is a common feature of effective schools and is an essential feature of SRIM.

Application. A small, suburban school district in western Oregon with an enrollment of approximately 5,000 students has recently embarked on the process of implementing the SRIM in six elementary schools. The school district had been experiencing a significant and recurring increase in the percentage of students identified with special needs (e.g., approximately 20 percent identified for special education). After examining the results of the six schoolwide inventories (Kame’enui and Simmons 1999), the district collectively identified beginning reading as a top instructional priority and made a commitment to improve the reading outcomes of all students in kindergarten through Grade 3. In addition, the district pledged to provide the administrative support necessary to build the capacity to implement and sustain a comprehensive districtwide reading intervention. The first-year focus was on implementing the model in kindergarten. In subsequent years, the implementation will expand to encompass the other primary grades (Grades 1 to 3).

During sessions in the fall, administrators and kindergarten teachers, as well as Title I teachers, special education teachers, and instructional assistants who work with kindergarten students, evaluated their current assessment and instructional practices and became familiar with the goals and components of the SRIM. The participants also received training in administering and interpreting the DIBELS measures that assess the three big ideas of phonological awareness, alphabetic understanding, and automaticity with the code. Two types of measures have been particularly effective for early identification: (1) a test of letter names or sounds, and (2) a measure of phonemic awareness (Torgesen 1998). Measures of letter knowledge are strong predictors of reading difficulties, and measures of phonemic awareness enhance the accuracy of the prediction. In the fall of kindergarten, a letter naming fluency measure was used to assess students’ familiarity with the letters of the alphabet, and a phonemic awareness measure was used to assess students’ ability to recognize the first sounds in words. In winter and spring, a measure of letter-sound knowledge (i.e., nonsense word fluency) and a more sophisticated measure of phonemic awareness (i.e., phonemic segmentation fluency) were used to assess students’ developing skill with the individual sound units in words.

In November 1998, all kindergarten students received assessments on the following performance indicators:

Letter Naming Fluency: ability to name letters accurately and rapidly.

Onset Recognition Fluency: ability to recognize the first sounds in words.

In February and May 1999, all students received assessments on the previous measures and the following indicators:

Phonemic Segmentation Fluency: ability to produce phonemes in words (auditory).

Nonsense Word Fluency: ability to produce letter-sound correspondences and use them to read words.

The school district established a computer-based data management system that allowed teachers to enter student data and that then analyzed and graphed the results. The district reported results of the November, February, and May assessments for the entire district, for each of the six schools, and for all 20 of the individual kindergarten sessions (morning and afternoon). Figure 2 presents an example of the districtwide results on Phonemic Segmentation Fluency. These results indicated the magnitude of the problem in kindergarten. The benchmark goal for the Phonemic Segmentation Fluency measure is a score of 35 to 45 phoneme segments per minute by spring of kindergarten (Kaminski and Good 1998). According to these data, only 20 percent of students in the district demonstrated established segmentation skills on the February assessment. More worrisome, 32 percent of the students (N = 127) scored below 10 segments and were at serious risk of not meeting the spring benchmark (Kaminski and Good 1998). By examining these results, along with those of the other measures, the district was able to confirm that there was a significant reading problem in kindergarten. Moreover, these performance data served as an anchor for guiding decisions related to the development of the components of the SRIM outlined in Stages II-V.

STAGE II: ANALYZE INDIVIDUAL PERFORMANCE AND PLAN INSTRUCTIONAL GROUPS

Description. Using normative information from performance indicators of big ideas, an analysis of individual student results determines each child’s current level of performance and identifies other children who have similar performance profiles.
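The school-level roll-up described above, counting how many students fall below each cut point, can be sketched as follows. This is an illustrative implementation only; the function name, category labels, and scores are hypothetical, using the 10-segment risk threshold and 35-segment benchmark quoted in the text:

```python
def summarize(scores, at_risk_below=10, established_at=35):
    """Roll up individual Phonemic Segmentation Fluency scores into
    school-level percentages: established, emerging, and seriously at risk.
    Cut points follow the thresholds quoted in the surrounding text."""
    n = len(scores)
    established = sum(s >= established_at for s in scores)
    at_risk = sum(s < at_risk_below for s in scores)
    emerging = n - established - at_risk
    return {
        "established_pct": round(100 * established / n),
        "emerging_pct": round(100 * emerging / n),
        "at_risk_pct": round(100 * at_risk / n),
    }

# Hypothetical February scores (segments per minute) for ten students:
result = summarize([40, 38, 22, 18, 15, 14, 12, 8, 6, 3])
# -> {'established_pct': 20, 'emerging_pct': 50, 'at_risk_pct': 30}
```

A roll-up of this kind is what lets a school gauge the magnitude of the problem before planning the scope and intensity of intervention.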
Using a process developed by Shinn (1997), children’s results on big-idea indicators and other information from teachers are used to perform “instructional triage”; that is, children who are at greatest risk are distinguished from those at less risk. To make this process operational, we use the following criteria:

Intensive students are those who are seriously at risk based on extremely low performance on one or more performance indicators. The greater the number of measures and the lower the performance, the greater the risk. In general, these children are performing more than two standard deviations below the mean on local norms or expected levels of performance. Similar to children with serious medical conditions, children in need of intensive care in reading are in acute need of the most effective interventions available and require frequent monitoring to ensure that their reading performance does not remain seriously low. Educators must intervene with a sense of urgency.

Strategic students need systematic, strategic intervention and monitoring because of increased risk factors and low performance. Their condition, however, is less acute than students in the intensive group. In general, the performance of these children falls more than one standard deviation below the mean. Nonetheless, strategic students require more carefully designed and delivered instruction than is typical of most classrooms. Shinn (1997) recommends monthly monitoring on critical reading indicators to evaluate these students’ performance.

Benchmark students’ performance seems to be on target on critical literacy skills, and these students are not at risk of reading delay, based on current performance. We monitor benchmark students three times a year: in the fall, winter, and spring.

Once children’s performance profiles are analyzed, we group children according to reading performance in small homogeneous groups designed for purposeful intervention for children with intensive and strategic needs. As a rule, intensive groups should be smaller than either the strategic or benchmark groups and comprise no more than five students. A word of caution is warranted regarding grouping. The purpose of grouping is to enable children to receive instruction (e.g., increased opportunities to respond) that is appropriate to the needs of the learner. Groups should remain dynamic: strategic and frequent monitoring of performance provides a mechanism for adjusting groups in response to instruction and assessment.
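The triage criteria above (more than two standard deviations below the mean for intensive, more than one for strategic) can be sketched as a simple classifier. This is an illustrative sketch, not part of the published model; the function name, local norms, and student scores are all hypothetical:

```python
def classify_student(score: float, mean: float, sd: float) -> str:
    """Assign a support tier from a student's score on one performance
    indicator, using the standard-deviation cut points described above."""
    if score < mean - 2 * sd:
        return "intensive"   # more than 2 SD below the mean: urgent need
    if score < mean - 1 * sd:
        return "strategic"   # more than 1 SD below the mean: systematic support
    return "benchmark"       # on target based on current performance

# Hypothetical local norms for a phonemic segmentation measure.
mean, sd = 30.0, 10.0
scores = {"Ana": 34, "Ben": 18, "Cal": 7}
tiers = {name: classify_student(s, mean, sd) for name, s in scores.items()}
# -> Ana: benchmark, Ben: strategic, Cal: intensive
```

In practice, as the text notes, teachers would weigh several measures together with their own observations rather than a single score, and group membership would be revisited as monitoring data come in.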

Application. The districtwide results from the western Oregon school district’s February assessment indicated the scope of the problem in kindergarten and emphasized the need for developing comprehensive reading interventions at each school. School teams then examined the results of individual student performance and determined instructional groupings. By comparing students’ results on the different kindergarten measures to performance expectations (i.e., benchmarks) that were known to reliably predict future reading success, teachers were able to identify students as benchmark, strategic, or intensive (Kaminski and Good 1998). For example, individual student performance on the Phonemic Segmentation Fluency measure (see figure 2) indicated that in February, 20 percent of kindergarten students should be considered benchmark students, 48 percent strategic students, and 32 percent intensive students. Identifying instructional groups based on student performance set the stage for school teams to plan interventions that would address the needs of all students.

STAGE III: DESIGN INSTRUCTIONAL INTERVENTIONS FOR THE FULL RANGE OF LEARNERS

Description. In Stages I-II, the context is set for what is arguably the most critical and complex component of the SRIM process: intervention. Stage III focuses on the multiple contexts that must be considered when designing interventions and the importance of instructional fit with the host environment. Too often, interventions fail because Intervention A has been implemented in School B with Teachers C and D without really understanding the fit among factors A, B, C, and D. A key way in which the SRIM differs from other models is that the focus of intervention moves beyond the learner to the school, classroom, teacher, curriculum, materials, and tasks (Kame’enui and Simmons 1998).
Site-based coordinators (e.g., a teacher or administrator serving as a building coordinator for SRIM) facilitate the analysis of contexts and the development of intervention elements with collaborative grade-level intervention teams. In this process, grade-level teams work from a framework of research-based practices (e.g., specific curriculum, supplemental practices) and alterable variables (e.g., instructional time, groupings, concentration of low performers, delivery of instruction) to customize intervention models.

In this model, there are standard intervention dimensions across all grades and classrooms within the school, and there are dimensions that are discretionary. At a minimum, we recommend considering a “core” set of features to address at a schoolwide level. These core features include:

setting reading goals based on research-based targets;

adopting and implementing core curriculum programs of documented efficacy;

scheduling fixed and protected times for teacher-directed daily reading instruction;

differentiating instruction based on learners’ current level of reading performance;

instituting a centralized system of student achievement data collection;

coordinating the delivery of instruction across school personnel (e.g., general education, special education, Title I); and

establishing and supporting grade-level teams who study, analyze, and respond to students who fail to make adequate progress (Simmons et al. in press).

At every stage of the intervention definition process, collaborative intervention teams construct or customize the intervention from a menu of validated options. For example, in selecting a core reading program, teams review programs of documented efficacy (American Federation of Teachers 1999) such as SRA Open Court Reading (SRA 2000), Success for All (Slavin et al. 1996), and Reading Mastery (Engelmann and Bruner 1998) to determine the fit of those research-based programs with the philosophy, needs, and resources of the school. It is this customization or “fit” within the school that further distinguishes the SRIM from more traditional translations of research into practice.

Application. In Stage III, kindergarten teachers and administrators from the school district in western Oregon worked together to customize instructional interventions that targeted the full range of learners and acknowledged the unique host environment of each school. First, school teams reviewed several phonological awareness/reading programs that would serve as the core curriculum for all students and supplemental programs that would augment instruction for strategic and intensive students. All kindergarten teachers received training and guidance on the review and selection of core and supplemental reading programs. Kindergarten programs reviewed included Open Court, Reading Mastery, Phonemic Awareness in Young Children (Adams et al. 1997), Ladders to Literacy (Notari-Syverson et al. 1998), and Phonological Awareness Training for Reading (Torgesen and Bryant 1994). School teams selected core and supplemental programs on the basis of strong research support and the contextual fit with the needs of each school.

Next, the schools determined the minimum amount of time that would be set aside for teacher-directed reading instruction during the half-day kindergarten sessions. School teams decided that 30 to 45 minutes of direct reading instruction each day was essential to meet the needs of all kindergarten students within the district. Furthermore, some schools concluded that intensive students would require an additional period (i.e., “double dose”) of reading instruction daily. Teachers and administrators also discussed options for the grouping of students and the scheduling of reading instruction. Depending on the instructional preferences of teachers and the availability of additional staff support at each school, teams considered grouping possibilities (e.g., within class, across class, and across grade) and discussed options for the delivery of instruction to intensive and strategic students utilizing classroom, Title I, and special education teachers, and instructional assistants. In general, intensive intervention groups were no larger than five students. The inclusion of administrators on each school team permitted conversations about ways in which schoolwide scheduling could help facilitate these various grouping and service delivery alternatives. Finally, individual teachers made decisions about additional curricular materials and instructional practices that they would use to enhance the reading instruction in their classrooms.

STAGE IV: SET REASONABLE BUT AMBITIOUS INSTRUCTIONAL GOALS AND MONITOR FORMATIVELY

Description. The next stage of the Schoolwide Reading Improvement Model involves using individual student performance to set four-week and long-term instructional goals. In early literacy, we have a reliable knowledge base to determine expected performance for early literacy success (Fuchs and Fuchs 1994; Kaminski and Good 1996; Hasbrouck and Tindal 1992; Markell and Deno 1997).
For example, in second grade, children gain approximately 1.46 words correct per minute per week in oral reading fluency (Fuchs et al. 1993), and students at the 50th percentile exit second grade reading approximately 90 correct words per minute (Hasbrouck and Tindal 1992). Children who are successful early readers orally segment words into phonemes at a rate of approximately 35 to 45 phonemes per minute by spring of kindergarten (Kaminski and Good 1998). These levels of expected performance are critical as we develop goals for children whose early reading trajectories are less than adequate, and they serve an important function in the SRIM process.

It is sometimes necessary to establish goals for multiple measures and monitor progress formatively. Shinn (1997) recommends weekly monitoring for intensive students and monthly monitoring for strategic students. All students are measured quarterly on critical performance indicators to determine their progress toward long-term goals. Teachers also can calculate four-week and even weekly instructional goals for intensive students by using current student achievement on dynamic measures, expected performance (i.e., benchmark goals), and the time remaining until the next measurement point to ascertain the rate of learning necessary to reach the benchmarks. Teachers can use this same process for each target measure of reading.

Application. The western Oregon school district used research-based performance objectives to establish benchmark goals for the phonological awareness measures administered in kindergarten. The goals for kindergarten students were to score between 25 and 35 onsets per minute on the Onset Recognition Fluency measure by winter and between 35 and 45 segments per minute on the Phonemic Segmentation Fluency measure by spring. Because phonological segmentation is such a strong predictor of reading success in first grade, the goal was for all students to have established phonological segmentation skills by the end of kindergarten. Additionally, the Nonsense Word Fluency measure, which assesses alphabetic understanding, also was administered throughout kindergarten. By using these results, teachers could set short-term instructional goals to ensure that students would meet the first-grade benchmark of between 40 and 50 letter-sound correspondences per minute on this measure by the following winter.
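The Stage IV goal-setting computation described above (current achievement, the benchmark goal, and the time remaining together determine the needed rate of learning) can be sketched as follows. The function and the student numbers are hypothetical, using the 35-segment spring benchmark quoted in the text:

```python
def required_weekly_gain(current: float, benchmark: float,
                         weeks_left: float) -> float:
    """Rate of learning (units per week) a student needs in order to
    reach the benchmark by the next measurement point, as in Stage IV."""
    if weeks_left <= 0:
        raise ValueError("weeks_left must be positive")
    # A student already at or above the benchmark needs no catch-up gain.
    return max(0.0, (benchmark - current) / weeks_left)

# Hypothetical case: a student scoring 11 segments per minute in February,
# aiming for the spring benchmark of 35, with 12 weeks remaining.
gain = required_weekly_gain(current=11, benchmark=35, weeks_left=12)
# -> 2.0 segments per minute of growth needed per week
```

Comparing this required rate with the student's observed rate from weekly or monthly monitoring is what tells a grade-level team whether the current intervention is sufficient or needs adjustment.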

School teams decided to assess benchmark students in the fall, winter, and spring and to monitor strategic students monthly. Some schools planned to monitor intensive students every week, while other schools decided to assess these students every other week. By establishing clear goals of expected performance and instituting an assessment schedule based on degree of student risk, schools created a feedback loop that allowed for formative evaluation of instruction.

STAGE V: EVALUATE EFFECTIVENESS OF INTERVENTIONS FORMATIVELY AND MAKE INSTRUCTIONAL ADJUSTMENTS

Description. In this final stage of the SRIM, we illustrate the critical linkage between assessment and instruction. Using students’ performance on big-idea indicators, collected weekly for intensive students and monthly for strategic students, teachers evaluate progress toward goals to determine whether the rate of progress is adequate to achieve performance benchmarks and, therefore, eliminate the risk of long-term reading difficulty. In essence, we address the questions: Is the student’s current rate of progress sufficient to close the gap, and is the rate sufficient so that the student will learn enough (Carnine 1997) to be on a positive trajectory toward reading success? Grade-level teams meet frequently (e.g., every two weeks) to monitor the effectiveness of interventions and make instructional adjustments. Teams work collaboratively to alter instructional variables based on student data. At meetings, teachers make decisions about the allocation of instructional time, ways to regroup students, the use of supplemental materials, assessment schedules, short-term objectives, and instructional focus. In this way, teams are able to customize interventions for intensive and strategic students in a way that is dynamic and integrally linked to student performance.

Application. School teams from the western Oregon school district made a commitment to meet frequently to monitor the effectiveness of their kindergarten interventions and to make instructional adjustments. The scheduling and frequency of meetings varied by school and were guided by time and staffing considerations. All teams met every week or every other week. Teams worked collaboratively to alter instructional variables based on student data. At meetings, decisions were made about the allocation of instructional time, ways to regroup students, the use of supplemental materials, assessment schedules, short-term objectives, and instructional focus. In this way, teams were able to customize interventions for intensive and strategic students in a way that was dynamic and integrally linked to student performance.

Results of the districtwide SRIM implementation in kindergarten indicate that students’ phonemic awareness skills increased substantially from February to May (see figure 3). In May, only 7 percent of kindergarten students had phonemic awareness skills at a level that would require intensive intervention, as opposed to 32 percent in February. In the absence of a control group, we cannot draw conclusions about relative growth; however, we can conclude that in an absolute sense, students made notable growth in phonemic segmentation, which is a reliable predictor of word reading in Grade 1.

CONCLUSION

The SRIM is an integrated, data-based intervention model for teaching reading in schools. This model is based on the methodological integration of (a) general and special education research in assessment (e.g., Good, Simmons, and Smith 1998), (b) effective instructional design principles (Kame’enui and Carnine 1998), (c) validated methods of early reading instruction (Simmons and Kame’enui 1998), and (d) interventions that fit the school as the host environment (Sugai and Horner 1999). The Schoolwide Reading Improvement Model can intercept and prevent early reading risk from becoming long-term, intractable difficulties.

If we take seriously the widespread call to educate all children, and not view it as just another slogan in which “the rhetoric” of educating all is in effect the reality of educating some or even most (Kame’enui 1998), then we face enormous challenges. Perhaps the most important challenge is that of designating beginning reading as the top instructional priority for elementary schools in kindergarten through Grade 3, making a schoolwide commitment to focus relentlessly and strategically on this priority, and implementing a data-based intervention model that provides a formative and continuous feedback loop about student reading performance. Finally, if we embrace an intervention model that acknowledges and honors the differences among individual schools, wherever they may be located, we can truly say, yes, size matters, but teaching reading matters more.

ACKNOWLEDGMENTS

The contents of this document were developed in part under Grant Number H324M980127 from the Office of Special Education Programs, U.S. Department of Education. This material does not necessarily represent the policy of the U.S. Department of Education, nor is the material necessarily endorsed by the Federal Government.

We gratefully acknowledge and warmly thank the dedicated, hard-working, and enthusiastic Bethel District elementary administrators, teachers, and educational assistants who so expertly implemented the Schoolwide Reading Model. We extend a special thanks to the reading coordinators for their leadership and perseverance.

Address correspondence to: Edward J. Kame’enui, Institute for the Development of Educational Achievement, Education Annex, 1211 University of Oregon, Eugene, OR 97403-1211


Adams, M. J. 1990. Beginning To Read: Thinking and Learning About Print. Cambridge, MA: The MIT Press.

Adams, M. J., Foorman, B. R., Lundberg, I., and Beeler, T. D. 1997. Phonemic Awareness in Young Children: A Classroom Curriculum. Baltimore, MD: Paul H. Brookes Publishing Co.

American Federation of Teachers. 1999. Building on the Best, Learning From What Works: Seven Promising Reading and English Language Arts Programs. Washington, DC.

Carnine, D. 1997. Instructional design in mathematics for students with learning disabilities. Journal of Learning Disabilities 30:130-31.

Carroll, J. B. 1963. A model of school learning. Teachers College Record 64:723-33.

Carroll, J. B. 1989. The Carroll model: A 25-year retrospective and prospective view. Educational Researcher 18:26-31.

Deno, S. L. 1992. The nature and development of curriculum-based measurement. Preventing School Failure 36:5-10.

Engelmann, S., and Bruner, E. 1998. Reading Mastery I: Distar Reading. Chicago, IL: Science Research Associates, Inc.

Felton, R. H., and Pepper, P. P. 1995. Early identification and intervention of phonological deficits in kindergarten and early elementary children at risk for reading disability. School Psychology Review 24:405-14.

Fuchs, D., and Fuchs, L. 1994. Classwide curriculum-based measurement: Helping general educators meet the challenge of student diversity. Exceptional Children 60:518-37.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., and Germann, G. 1993. Formative evaluation of academic progress: How much growth can we expect? School Psychology Review 22:27-48.

Good, R., III, Simmons, D. C., and Smith, S. 1998. Effective academic interventions in the United States: Evaluating and enhancing the acquisition of early reading skills. School Psychology Review 27:740-53.

Hasbrouck, J. E., and Tindal, G. 1992. Curriculum-based oral reading fluency norms for students in grades 2 through 5. Teaching Exceptional Children 24:41-44.

Hedges, L. V., and Waddington, T. 1993. From evidence to knowledge to policy: Research synthesis for policy formation. Review of Educational Research 63:345-52.

Juel, C. 1988. Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology 80:437-47.

Kame’enui, E. J. 1998. The rhetoric of all, the reality of some, and the unmistakable smell of mortality. In Literacy for All: Issues in Teaching and Learning, eds. J. Osborn and F. Lehr. New York: Guilford.

Kame’enui, E. J., and Carnine, D. W. 1998. Effective Teaching Strategies That Accommodate Diverse Learners. Columbus, OH: Merrill, Prentice Hall.

Kame’enui, E. J., and Simmons, D. C. 1998. Beyond effective practice to schools as host environments: Building and sustaining a school-wide intervention model in reading. Oregon School Study Council Bulletin 41:3-24.

Kame’enui, E. J., and Simmons, D. C. 1999. Planning and Evaluation Tool for Effective Schoolwide Reading Programs. Unpublished document.

Kame’enui, E. J., Simmons, D. C., and Coyne, M. D. 1999. Kindergarten Reading Instruction and the Tyranny Of Time: Toward a Schoolwide Reading Improvement Model. Manuscript submitted for publication.

Kaminski, R. A., and Good, R. H., III. 1996. Toward a technology for assessing basic early literacy skills. School Psychology Review 25:215-27.

Kaminski, R. A., and Good, R. H., III. 1998. Assessing early literacy skills in a problem-solving model: Dynamic indicators of basic early literacy skills. In Advanced Applications of Curriculum-Based Measurement, ed. M. R. Shinn. New York: Guilford.

Markell, M. A., and Deno, S. L. 1997. Effects of increasing oral reading: Generalization across reading tasks. The Journal of Special Education 31:233-50.

Moats, L. C. 1999. Teaching Reading Is Rocket Science: What Expert Teachers of Reading Should Know and Be Able To Do. Washington, DC: American Federation of Teachers.

Mosenthal, P. 1984. The problem of partial specification in translating reading research into practice. The Elementary School Journal 85:199-227.

National Research Council. 1998. Preventing Reading Difficulties in Young Children. Washington, DC: National Academy Press.

Notari-Syverson, A., O’Connor, R. E., and Vadasy, P. F. 1998. Ladders to Literacy: A Kindergarten Activity Book. Baltimore: Paul H. Brookes Publishing Co.

SRA Open Court Reading. 2000. Columbus, OH: SRA/McGraw-Hill.

Shinn, M. 1997. Instructional Decision Making Using Curriculum-Based Measurement. Unpublished workshop materials.

Simmons, D. C., and Kame’enui, E. J. 1998. What Reading Research Tells Us About Children With Diverse Learning Needs: Bases and Basics. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Simmons, D., King, K., Kuykendall, K., Cornachione, C., and Kame’enui, E. J. 2000. Implementation of a school-wide reading improvement model: No one ever told us it would be this hard. Learning Disabilities Research and Practice 15 (2):92-100.

Slavin, R. E., Madden, N. A., Dolan, L. J., and Wasik, B. A. 1996. Every Child, Every School: Success for All. Thousand Oaks, CA: Corwin.

Stanovich, K. E. 1999. The sociopsychometrics of learning disabilities. Journal of Learning Disabilities 32:350-61.

Sugai, G., and Horner, R. H. 1999. Discipline and behavioral support: Practices, pitfalls, and promises. Effective School Practices 17:10-22.

Torgesen, J. K. 1998. Catch them before they fall: Identification and assessment to prevent reading failure in young children. American Educator 22:32-39.

Torgesen, J. K., and Bryant, B. T. 1994. Phonological Awareness Training for Reading. Austin, TX: PRO-ED.

U.S. Department of Education. 1999. Start Early, Finish Strong: How To Help Every Child Become a Reader. Washington, DC: U.S. Department of Education, America Reads Challenge.

Zins, J. E., and Ponti, C. R. 1990. Best practices in school-based consultation. In Best Practices in School Psychology II, eds. A. Thomas and J. Grimes. Washington, DC: National Association of School Psychologists.

Edward J. Kame’enui

Deborah C. Simmons

Michael D. Coyne

University of Oregon, Eugene, Oregon

Copyright International Dyslexia Association 2000
