The Promise of Meaningful Eligibility Determination: Functional Intervention-Based Multifactored Preschool Evaluation
David W. Barnett
Despite considerable controversy, standardized norm-referenced assessment procedures are widely used to determine eligibility for special education services. Under Public Law 105-17, children ages 3 through 9 may receive special services under the broad category of child with a disability–rather than traditional diagnostic labels–based on a delay in one or more developmental domains. Due to the limitations of traditional ability and developmental norm-referenced measures, the determination of developmental delays may be highly error prone and unrelated to intervention decisions. As an alternative, we describe minimal requirements for functional intervention-based assessment and suggest strategies for using these methods to analyze developmental delays and make special service eligibility decisions for preschool children (intervention-based multifactored evaluation or IBMFE). The IBMFE model provides a basis for deriving logical, natural, and meaningful discrepancies in behavior or performance through the contextual analysis of child-related, environmental, and instructional variables.
Parents and professionals have lived with unanticipated and unwanted side-effects of guidelines for disability evaluation stemming from federal legislation since 1975 (Hardman, McDonnell, & Welch, 1997; Reschly & Ysseldyke, 1995). Questions have been raised about the benefits of categorical eligibility, high error rates associated with developmental test use, and intervention efforts that are not well informed by assessment data. Some problems associated with measuring developmental delay may be intractable (Macmann & Barnett, 1999; Macmann, Barnett, Lombard, Belton-Kocher, & Sharpe, 1989).
The Individuals with Disabilities Education Act (IDEA) was reauthorized and amended in 1997 (P.L. 105-17). However, the law implies a mixed model by defining the need for special services based on developmental delays (a labeling and deficit-focused approach), while stressing the importance of functional assessment information for educational programming within natural environments (which is intervention- and needs-driven).
Intervention-based assessment (IBA), a term currently used in an Ohio statewide project, may be defined as the use of high-quality, low-inference data obtained by direct assessment in natural settings for the design and evaluation of interventions to meet referral concerns (e.g., Deno & Mirkin, 1977). Many features of IBA fit the stipulations of P.L. 105-17. IBA includes well-developed methods for functional, direct assessment of behavior in actual environments and methods for acquiring functional information from parents (Bell, 1997). Likewise, racial and cultural fairness, key features of the law, have been addressed in IBA (Barnett et al., 1995). Last, many have argued that intervention-based measures can be used to determine eligibility for special services (Reschly & Ysseldyke, 1995). This paper describes IBAs and IBMFEs that satisfy requirements for determining the eligibility of preschool children.
Intervention-Based Assessment
The underlying framework for IBA is a problem-solving process (e.g., Allen & Graden, 1995). The model that we apply to IBA emphasizes collaboration among key participants, notably parents and classroom teachers, including mutual goal setting, shared ownership, and shared decision making. Problem solving generally follows a sequence of stages: (a) problem identification and clarification, (b) problem analysis (including examination of key ecological variables), (c) goal setting, (d) plan development and implementation, and (e) evaluation of outcome data from interventions and consideration of need to modify plans. Assessment strategies are used sequentially and interactively based on the problem-solving model.
An intervention-based model is characterized by the following (Baer, Wolf, & Risley, 1968; Bijou, Peterson, Harris, Allen, & Johnston, 1969; Flugum & Reschly, 1994): (a) behavioral definition of the problem; (b) direct measures of targeted variables or other outcomes associated with needed changes in behavior, school (or other setting) performance, or caregiver expectations; (c) a well-designed sampling plan for ongoing data collection (Barnett, Lentz, Bauer, Macmann, Stollar, & Ehrhardt, 1997); (d) measurement of the targeted variables in the natural setting during baseline and through intervention phases; (e) a strong (defined as the “weakest that works”; Yeaton & Sechrest, 1981) and detailed intervention plan built on problem solving with care providers, empirical data, and intervention research; (f) measurement of intervention integrity; (g) graphing of intervention results and comparison of postintervention performance with baseline data; and (h) evidence of reliability and validity of key measured variables used for decision purposes (Macmann, Barnett, Allen, Bramlett, Hall, & Ehrhardt, 1996).
A unifying construct for IBA is functional assessment, which focuses on skills that improve a child’s ability to interact with present and future environments in a way that increases independence. Closely related, functional analysis is a general strategy of altering environmental variables to determine their function as the basis for intervention selection (O’Neill et al., 1997). Additional concerns of relevance to early childhood include: (a) parent participation as a major theme, (b) the ambiguities of target variable selection for young children as compared to school-age children, and (c) intervention design that fits into the routines of natural settings. To deal with these issues, a preschool intervention-based model has been developed (Barnett, Ehrhardt, Stollar, & Bauer, 1994). The model, referred to as PASSKey, takes its name from three elements of a functional approach to early intervention: Planned Activities, Strategic Sampling, and Keystone behaviors or variables. The PASSKey model has three purposes: (a) to identify significant problem situations; (b) to plan interventions that are based on natural interactions, activities, and the realities of settings; and (c) to provide sufficient information for determining eligibility for special services.
In PASSKey, interventions are built around planned activities or routines that take place during a child’s day. Strategic sampling refers to selecting times for observing planned activities, target behaviors, interventions, and outcomes (Barnett, Lentz, et al., 1997). Keystone variable identification is a target “behavior” selection strategy (Barnett, Bauer, Ehrhardt, Lentz, & Stollar, 1996; Evans & Meyer, 1985) emphasizing relatively narrow targets for change that have the best potential for widespread benefits. Intervention scripts are developed collaboratively to increase acceptance and facilitate the self-regulation of interventions and monitoring by consultants (Ehrhardt, Barnett, Lentz, Stollar, & Reifin, 1996). Progress monitoring and data analysis follow accountability methods derived from single-case designs.
Intervention-Based Multifactored Evaluation
Intervention-based multifactored evaluation (IBMFE) adds a set of components for disability evaluation to the basic IBA model. From P.L. 105-17, teams of parents and professionals determine (a) a nontraditional category (child with a disability) based on one or more developmental delays in physical, cognitive, communication, social or emotional, or adaptive development (or a traditional category at state/local agency discretion); (b) present levels of performance and educational needs; (c) special education and related services needed to support development in the least restrictive environment; and (d) modifications of special education and related services needed to help a child meet annual IEP goals and to participate in typical classroom activities.
Inclusionary Programming
There are significant contextual features of P.L. 105-17 for young children. IDEA, both philosophically and programmatically, is committed to inclusionary settings and developmentally appropriate practice (Bredekamp & Copple, 1997; Carta, 1994; Wolery, 1994a; Wolery & Bredekamp, 1994). In this sense, assessment and interventions must (a) enhance progress toward important learning and developmental goals, (b) be ongoing and tailored to the age and experiences of young children, and (c) be carried out within authentic and typical learning experiences.
Designing Appropriate Evaluations
As a requirement, the IEP team shall “review existing evaluation data … including information provided by the parents … current classroom-based assessments and observations, and teacher and related services providers observations; and identify what additional data, if any, are needed [to meet the full requirements of disability evaluation, service eligibility, and placement]” (SEC.614.[c] p. 82, italics added). Thus, technically sound evaluation methods for identifying individual children’s needs and clarifying problem situations must be designed. Evaluation methods must be able both to demonstrate that discrepancies from expected student performance exist and to yield data that support the need for interventions or services. Among the other stipulations are that multiple assessment methods must be used and that decisions must not be based on a single procedure. Assessment procedures should yield functional, developmental, and educational information that is directly relevant to planning the IEP and participation in activities.
Paraphrasing Yeaton and Sechrest (1981), a strong evaluation design yields the minimal amount of information needed for ethical and valid decision making. The minimal criterion is based on expected ecological and economic benefits of evaluation plans. An evaluation design based on the least amount of information needed to successfully resolve a problem situation is likely to be the most natural and acceptable, least disruptive, least error prone, and most cost-effective (Macmann & Barnett, 1999; Macmann et al., 1996). This idea has important precursors. From Sechrest (1963; also Cronbach & Gleser, 1965; Meehl & Rosen, 1955; and others), “validity must be claimed for a test in terms of some increment in predictive efficiency over the information easily and cheaply available” (p. 154). Most critically, beyond the information gained from the multiple assessment methods provided by functional developmental histories (including health-related information), curriculum-based assessment, structured problem solving with parents and teachers, naturalistic observations of problem settings, and the actual results of interventions, standardized tests have little (if any) incremental value.
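The incremental validity point rests on Meehl and Rosen’s (1955) base-rate logic, which a small sketch can make concrete. The numbers below are hypothetical and are not from the article; they show how a test with apparently respectable sensitivity and specificity can classify less accurately than the base rate alone when delays are infrequent:

```python
# Hypothetical illustration of the Meehl and Rosen (1955) base-rate
# argument: a test adds predictive value only if its overall hit rate
# exceeds the accuracy of classifying every child from the base rate alone.

def hit_rate(base_rate, sensitivity, specificity):
    """Overall proportion of correct classifications when using the test."""
    return base_rate * sensitivity + (1 - base_rate) * specificity

base_rate = 0.10                          # assumed prevalence of delay
no_test = max(base_rate, 1 - base_rate)   # classify everyone "not delayed": .90 correct
with_test = hit_rate(base_rate, sensitivity=0.80, specificity=0.85)

print(f"Accuracy from base rate alone: {no_test:.3f}")    # 0.900
print(f"Accuracy with the test:        {with_test:.3f}")  # 0.845
print("Incremental value over base rate:", with_test > no_test)  # False
```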
IBMFE Decision Making
Consideration of eligibility within an intervention-based approach focuses on examining, over time, data on the nature of the supports and services that are required for students to be successful. Within this approach, variables such as the intensity of interventions, students’ reactions to them, and differences from typical peers are examined to make eligibility decisions.
Eligibility Linked to Intervention Efforts. To document intervention efforts for accountability purposes, interventions must be carefully planned and clearly defined, the length and intensity of the intervention must be determined, change agents with appropriate expertise must be identified, and intervention outcomes must be evaluated. Effective interventions would by definition have sufficient “strength” to change problem behavior (Yeaton & Sechrest, 1981). Judgments that interventions have insufficient strength require a functional analysis and replanning with respect to target variables, implementation of interventions, and appropriate services and support for caregivers.
The classification of a behavior as resistant to well-conceptualized interventions using natural setting resources leads to necessarily stronger interventions and greater allocation of resources in the form of special services (e.g., Gresham, 1991; see Houlihan & Brandon, 1996; Nevin, 1996). Classifications of disability status (e.g., child with a disability) are based on ongoing documentation of the outcomes of interventions that can be supported through the natural resources of an educational setting (without special education assistance) and hypotheses about the actual supports and services required to meet identified needs (Hardman et al., 1997).
Logical, Natural, and Meaningful Discrepancies. The concept of eligibility linked to the degree of intervention effort leads to the need for data about actual intervention requirements in specific situations. The behaviors of the targeted child, and the behaviors of teachers interacting with the child (or modification of routines or curriculum), may be compared to the behaviors of typical peers (or routines, etc.) and the behaviors of teachers interacting with typical peers. Empirical clarification of differences in behaviors, interactions, and modifications in routines or curriculum leads to logical, natural, and meaningful discrepancies that are developmental or age-related and, in appropriate combinations, are directly indicative of the need for special services at specific points in time. We define a logical discrepancy as a pattern (over time) of differences in behavior between a child and typical peers in specific settings, and the intervention-related tactics necessary for inclusion. Natural means that the discrepancies are found in real-world interactions and they are not contrived. Meaningful (from Hart & Risley, 1995) implies that individual differences matter greatly in the lives of children, parents, and teachers and focuses on features of the child’s behavior or supporting context that are modifiable. Child variables sometimes are not modifiable (e.g., blindness, chronic illness), but the focus remains on what is modifiable in the context to support the child’s competence.
Operationalizing Discrepancies. The components added for eligibility determination are the specific criteria and procedures for a functional discrepancy analysis and establishment of the necessary interventions or services. In IBMFE, discrepancies may be operationalized by using peer micronorms (or local norms) or by examining differences in the instructional or related classroom strategies that are required to meet a particular child’s needs.
Micronorms refer to an empirical expectancy in student performance for a particular teacher, class, and activity (Alessi, 1988; Bell & Barnett, in press; Walker & Hops, 1976). Local norms are developed for a specific setting at the district or school level (Marston & Magnusson, 1988). Thus norms within the IBMFE process are developed for measures of direct performance in significant tasks. They are used to define how different the target child’s behavior is from some central classroom level. This allows clarity in setting goals and describing the severity of discrepancies. For individual intervention decisions, micronorms (or local norms) are preferred because of the need for contextual information for individual decisions (Anastasi, 1988).
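As a minimal sketch of the computation (all values hypothetical), a micronorm can be summarized as the median of direct observations of randomly selected typical peers in the same activity, against which the target child’s performance is expressed:

```python
from statistics import median

# Hypothetical direct-observation data: percentage of intervals engaged
# during one free-play activity for the target child and typical peers.
peer_scores = [78, 85, 80, 90, 74]
target_score = 32

micronorm = median(peer_scores)        # empirical expectancy for this class/activity
discrepancy = micronorm - target_score

print(f"Peer micronorm (median): {micronorm}")          # 80
print(f"Discrepancy: {discrepancy} percentage points")  # 48
print(f"Target performs at {target_score / micronorm:.0%} of the peer level")  # 40%
```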
Other IBA tactics are used to identify severe discrepancies between the instructional strategies and curricular content necessary to facilitate the referred child’s developmental progress in comparison to peers. Typical instructional strategies may need to be modified to incorporate necessary intervention strategies (e.g., milieu language teaching), creating a discrepancy defined by the need to individualize instruction. Changes may include strategies that are developed and maintained at least partially by specialized personnel such as speech and language therapists or special education teachers. Special education for preschool children implies the need to incorporate unique curricular content (along with progress monitoring), special instruction, and/or specialized equipment in order for the child to meet either established curricular and developmental goals or individual goals in environments with typically developing children. For eligibility decisions to be based on instructional strategies and curricular content, empirical evidence is needed about a significant discrepancy between regular classroom procedures and those required for a child.
Relative Contributions of Cognitive, Behavioral, Physical, or Developmental Factors. P.L. 105-17 stipulates that technically sound instruments should be used to assess the relative contributions of these factors. Questions about technical soundness ultimately relate to validity, including consideration of (a) the utility of any inferences made from some procedure, measure, or instrument to meet a clearly defined purpose, and (b) the value implications and social consequences stemming from those inferences (Messick, 1995). Given that judgments of validity hinge on the analysis of these issues and the development of necessary and efficient services, IBA procedures currently may be the most technically sound method for answering questions on these eligibility factors.
We offer, as examples, situations that may lead to erroneous inferences about both the presence of disabilities and the design of effective interventions in line with the intent of P.L. 105-17. In these situations IBA would help to clarify the relevant issues in the context of deciding about the need for special services. For example, if a child was referred because of noncompliance with adult requests and disruptive behaviors, noncompliance might indicate hearing problems, a lack of cognitive understanding of the request, a learned pattern of reinforced responding typical of conduct or behavioral problems, or ineffective or developmentally inappropriate requests. IBA procedures have been used to analyze problem situations related to noncompliance and disruptive behaviors leading to intervention design (Ducharme, 1996; Forehand & McMahon, 1981). Other examples reflecting differential relations between similar presenting problems and effective interventions include selective mutism (e.g., physical, developmental, behavioral; Schill, Kratochwill, & Gardner, 1996), disruptive behaviors in instructional contexts (e.g., behavioral, cognitive, or developmental; Durand, 1990), and toileting skills (e.g., physical, developmental, or behavioral factors; Snell, 1995). In each case, through established IBA procedures, teams can consider relative contributions of the aforementioned factors and help with educationally relevant decisions.
Eligibility Determination
We have identified examples of tactics from research (e.g., Barnett, Bell, et al., 1997) and literature suitable for analyzing aspects of the differences discussed above (child and environment) that may exist between a referred child and typical peers. They are elaborations of the idea of analyzing discrepancies between a child’s behavior and environmental expectancies as fundamental to educational programming (Evans & Meyer, 1985; Wolery, Bailey, & Sugai, 1988). The tactics may be directly linked to intervention decisions and service delivery costs (Noell & Gresham, 1993). The result is the ongoing measurement of a problem or instructional situation to distinguish between those that are resistant to intervention efforts and those that can be logistically supported without special services.
The tactics described in this section are based on established procedures, including measures of behavioral states or events, task analysis, and curriculum-based assessment. Another set of tactics arises when the opportunity for responses during educational activities is controlled by parents or teachers (e.g., trials to criteria, discriminated [also controlled or restricted] operants such as parent or teacher requests or prompts). One additional type of measurement is based on a teacher’s planned changes in managing or instructing a child. Here the basic measurement procedures are assessment of treatment integrity (Ehrhardt et al., 1996) along with the child’s performance. Last, a specialized curriculum may be used for specific skill sequences (e.g., Johnson-Martin, Attermeier, & Hacker, 1990), and progress through the sequence can be used as a measure. The tactics are not necessarily discrete and may be used in combinations (e.g., rate of learning and curricular adaptations). The tactics are summarized in Table 1 and illustrated in Figures 1 through 7 for hypothetical cases based on experiences with actual implementation.
TABLE 1. Tactics for IBMFE Eligibility Determination

Caregiver monitoring. The caregiver (e.g., parent, teacher, related service personnel) may spend a great deal of time watching the child because of potential concerns, measured as a rate (or state if highly prevalent) and compared to peer micronorms.

Activity engagement. The child may spend less time participating in classroom routines or activities. Activity engagement may be measured as a state or event (rate of activity changes), depending on preliminary observations, and the data may be compared to peer micronorms.

Levels of assistance. The caregiver may spend increased amounts of time interacting with the child while facilitating learning, play, or adaptation to routines through assistance or prompts. Levels of assistance is measured through specific prompting strategies and can be compared to peer micronorms. The comparison may be unnecessary if peers require minimal or no assistance.

Rate of learning (trials to criterion). The caregiver may interact with the child in a manner that requires more direct instructional time through the use of repeated structured trials (or practice). The number of learning trials for specific tasks may be compared to data from peer micronorms. Other similar measures include rate of acquisition, mastery, retention, maintenance, or generalization.

Behavior fluency. There may be pronounced differences in the fluency (accuracy and rate) of the child’s performance of specific skills or behaviors (e.g., language or play entry skills). The fluency of performance may be compared to data from micronorms.

Modifying activities. Activities or instruction for the referred child may involve unique or more complex interventions than those for other children. The tactics are compared to typical activities or instruction.

Curricular adaptations. Domains, skill sequences, or instructional techniques may need to be modified (e.g., IEP). Peer micronorms help establish the need for modifications in curricular content or presentation. Curricular adaptations are measured by the integrity of implementation and continuous progress monitoring.
Tactic 1: Caregiver Monitoring
Rationale. There are major reasons for measuring the natural rates and effectiveness of caregiver monitoring. (a) Monitoring is a foundation for interventions that require responsivity to children’s behavior (i.e., Hart & Risley, 1995). (b) When the intervention demands are too great, indicated by high levels of required monitoring, the result may be fatigue and loss of caregiver motivation or the unintended consequence of learned helplessness on the part of a child. (c) Special settings, in contrast to typical classrooms, are characterized by relatively greater teacher effort in supervising and monitoring behavior for individual children with special needs. Reducing the need for caregiver monitoring may lead to placement in more typical settings. (d) The type of monitoring may be inappropriate or ineffective (e.g., yelling, reprimands, repeat commands), inconsistent, and may be contributing to the problem. Thus, important outcomes may focus on increasing, reducing, or improving required caregiver monitoring of student behavior.
Definition, Measurement, and Example. The number of caregiver contacts with a child can be counted as events (Saudargas & Lentz, 1986). Bramlett and Barnett (1993) measured caregiver monitoring as a state scored when the teacher is in close proximity (6 feet) to the target child and is looking at the child or the child’s activities.
Figure 1 illustrates logical discrepancies in rates of caregiver contacts between the target child and typically developing peers within the same caregiving environment. Initial observations indicated baseline rates of caregiver monitoring for the target child to be five to seven times higher than the median peer rates over three observation sessions. Following implementation of an intervention targeting the child’s play skills, the need for caregiver monitoring, although decreasing, continued to be four to five times that of typically developing peers. This discrepancy with regard to monitoring could be among the considerations for demonstrating a need for services or supports.
[Figure 1 ILLUSTRATION OMITTED]
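A comparison like that in Figure 1 could be computed as sketched below (session counts are hypothetical, not the article’s data): caregiver contacts are tallied as events per observation session, and each session’s target rate is expressed as a multiple of the peer median:

```python
from statistics import median

# Hypothetical event counts: caregiver contacts per 30-minute observation.
sessions = [
    (21, [3, 4, 3]),   # baseline sessions: (target contacts, peer contacts)
    (24, [4, 3, 5]),
    (18, [3, 3, 4]),
    (15, [3, 4, 3]),   # intervention sessions
    (13, [3, 3, 3]),
]

for i, (target, peers) in enumerate(sessions, start=1):
    multiple = target / median(peers)
    print(f"Session {i}: target monitored {multiple:.1f}x the peer median")
```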
Tactic 2: Activity Engagement
Rationale. Child engagement in meaningful activities is generally considered an important indicator of the quality of the environment and of instructional effectiveness (Greenwood, 1996; McWilliams & Bailey, 1995; McWilliams, Trivette, & Dunst, 1985; Risley, 1972). Activity (task or play) engagement promotes learning, reduces the opportunity for disruptive behaviors, and generally contributes to the impact of the preschool experience. Although engagement is partially a function of child characteristics (e.g., interests), instructional strategies (e.g., incidental teaching, praise) and environmental variables (e.g., accessible and interesting play areas) also contribute to a child’s engagement.
Definition, Measurement, and Example. Activity engagement may be defined as the time a child spends interacting with the environment in a developmentally and contextually appropriate manner. Play engagement and preacademic engagement may be measured as states using time sampling (Bramlett & Barnett, 1993; see also Saudargas & Lentz, 1986). A target child’s activity engagement is compared to that of typical peers in terms of patterns of duration and appropriateness of the engagement. A discrepancy is determined by comparing the target child’s engagement in play or learning activities with those of typical peers in the same setting.
Figure 2 compares percentage of activity engagement for the target child and typical peers during baseline and intervention phases. Although there were large discrepancies in percentage of activity engagement during baseline observations (e.g., 30% for the targeted child, 82% for the typical peers), the gap initially narrowed with the introduction of an intervention incorporating curricular adaptations and teacher facilitation strategies. However, subsequent observations found the target child’s activity engagement stabilizing at half the level of the typical peers.
[Figure 2 ILLUSTRATION OMITTED]
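The momentary time-sampling procedure behind such engagement percentages can be sketched as follows (the samples are hypothetical; the published observation code is more detailed):

```python
# Momentary time-sampling sketch: at each interval cue (e.g., every 15 s)
# the observer scores whether the child is engaged (1) or not (0);
# percentage engagement is the mean of the momentary samples.

def percent_engaged(samples):
    return 100 * sum(samples) / len(samples)

target_samples = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1]   # 12 momentary checks
peer_samples   = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]   # composite typical peer

print(f"Target engagement: {percent_engaged(target_samples):.0f}%")  # 33%
print(f"Peer engagement:   {percent_engaged(peer_samples):.0f}%")    # 83%
```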
Tactic 3: Levels of Assistance
Rationale. Levels of assistance refers to the support needed for participation in activities or, alternatively, the support that must be ultimately removed before the child can function independently (Wolery, 1996). There are many ways to organize and measure levels of assistance, the most common being response-prompting strategies. There are three common features of prompting strategies (Snell, 1995; Wolery, Ault, & Doyle, 1992): the prompts help teach skills or initiate performance, they are removed as soon as possible, and they are combined with differential reinforcement. Prompts must be matched to the individual activity or task and the child’s learning characteristics.
Definition, Measurement, and Example. Levels of assistance can be operationalized in terms of instructional prompts. We use as an example the system of least prompts, which involves the presentation of a target/task stimulus and the introduction of a series of least-to-most intrusive prompts necessary for the production of a correct response (Doyle, Wolery, Ault, & Gast, 1988). Prompts include verbal instructions, gestures such as pointing, models or graphic illustrations, and physical prompts, such as hand-over-hand assistance (Snell, 1995). The occurrence or nonoccurrence of the child’s behavior is recorded according to the level of assistance provided. A discrepancy is determined by identifying developmentally appropriate levels of assistance for peers in the same preschool setting and comparing that level of performance with the prompts required for successful performance of a desired skill or behavior by the target child.
Figure 3 compares the levels of assistance, defined by the type of prompt (group direction, individual verbal prompt, physical guidance by taking the child by the hand for a few steps), that the target child and typical peers required in order to transition from lunch to group time. (The child must have the skills necessary to comply with the request, or the focus of the intervention would shift to teaching the specific routine.)
[Figure 3 ILLUSTRATION OMITTED]
Classroom data (interviews and observations) indicated that the target child knew the transition routine but still required the teacher’s physical prompt during the transition, whereas typical peers complied with the group direction. The intervention consisted of a least-to-most prompt hierarchy for the transition period (group direction [1], to individual verbal prompt [2], to physical guidance [3]). Data represent compliance with requests based on the type of prompt. After the intervention was introduced, the target child began to transition following an individual verbal prompt, and the typical peers transitioned following the group direction. However, other classroom activities also required intensive levels of assistance, and these data then become useful in decisions about the necessity for support services.
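One way the prompt-level data could be recorded and summarized is sketched below; the numeric codes and the modal-level summary are hypothetical illustrations, not the authors’ protocol:

```python
# Least-to-most prompt hierarchy, coded per transition opportunity:
# 1 = responded to group direction, 2 = needed an individual verbal
# prompt, 3 = needed physical guidance.
PROMPT_LEVELS = {1: "group direction", 2: "individual verbal prompt", 3: "physical guidance"}

target_codes = [3, 3, 2, 2, 2]   # one code per day's lunch-to-group transition
peer_codes   = [1, 1, 1, 1, 1]

def typical_level(codes):
    """Modal prompt level across observed transitions."""
    return max(set(codes), key=codes.count)

print("Target typically requires:", PROMPT_LEVELS[typical_level(target_codes)])
print("Peers typically require:  ", PROMPT_LEVELS[typical_level(peer_codes)])
```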
Tactic 4: Rate of Learning (Trials to Criterion)
Rationale. Recording the number of learning trials required for acquisition of a new skill is a direct measure of learning rate and the effort involved in teaching a child through direct instruction. The need for repeated trials in comparison to peers may indicate attention to both the quality and quantity of the learning trials (Skinner, Fletcher, & Henington, 1996). This measure also is helpful for comparing time allocated for learning or alternative teaching techniques for the same skills.
Definition, Measurement, and Example. “Trials to criteria is the report of the number of times response opportunities are presented before an individual achieves a preestablished level of accuracy or proficiency” (Cooper, Heron, & Heward, 1987, p. 74). An example of measurement would be that a particular child needed 10 trials to independently perform a needed task at 100% accuracy (Cooper et al.). This would be compared to the number of learning trials from peer performances.
Figure 4 illustrates both the use of peer comparisons to establish expectancies for the rate of learning a set of discrete skills (counting) and a resource-intensive intervention (direct instruction). During baseline, educational games were played in a large-group format in which children were incidentally taught to count objects. Counting was considered mastered when children counted objects on 3 consecutive days without teacher prompts or assistance. During the incidental teaching in a large group, typical peers took a median of 8 trials to master counting 3 objects. In comparison, the target child required 19 trials to reach the mastery criterion. Brief direct instruction (incorporating increased feedback and direct learning opportunities) was added to the classroom routine for the targeted child. Subsequently, there was a slight decrease in the number of instructional trials the targeted child required to reach mastery of counting increasingly larger sets of objects (5, 7, 10), but large discrepancies from peer performance continued (7 to 10 additional trials). This is an example of documentation of the need for special instruction and additional resources while using peer micronorms to assess performance discrepancies. Likewise, it provides evidence that the child can learn and move toward curriculum mastery with special assistance.
[Figure 4 ILLUSTRATION OMITTED]
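The trials-to-criterion count itself is straightforward to compute, as in the sketch below (the trial records are hypothetical, and the classroom criterion of 3 consecutive days is simplified here to 3 consecutive correct, unprompted responses):

```python
# Count response opportunities until a preset mastery criterion is first
# met; here, 3 consecutive correct, unprompted responses (hypothetical).

def trials_to_criterion(trials, run_length=3):
    streak = 0
    for n, correct in enumerate(trials, start=1):
        streak = streak + 1 if correct else 0
        if streak == run_length:
            return n
    return None   # criterion not yet met

peer_trials   = [1, 0, 1, 1, 0, 1, 1, 1]
target_trials = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]

print("Peer trials to criterion:  ", trials_to_criterion(peer_trials))    # 8
print("Target trials to criterion:", trials_to_criterion(target_trials))  # 19
```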
Tactic 5: Behavioral Fluency
Rationale. A skill may be performed but not adeptly, thus indicating the need for fluency building. Lack of fluency is observed in performances that are too slow or not natural or smooth (Wolery et al., 1988). Fluency in performance increases the potential for maintenance, transfer of a skill to new situations, and positive affect related to mastery of a skill (Binder, 1996). A number of researchers have described a practical learning hierarchy (e.g., Haring, Lovitt, Eaton, & Hanson, 1978; Snell, 1995). This hierarchy emphasizes the critical nature of fluency development and its importance for skills eventually needed for more complex tasks and for improving maintenance across time.
Definition, Measurement, and Example. Fluency refers to the accuracy and rate of response that characterize relatively effortless, flowing, or automatic competent performance in a natural setting. Fluency may be measured directly by comparing the time required by the target child to complete some task to criterion performances of selected peers, or indirectly by social validity judgments through interviews (Wolery et al., 1988).
Figure 5 demonstrates the use of continuous peer comparisons for goal development and intervention effectiveness for behavioral fluency in a cleanup routine, as measured by minutes until completion. During baseline peer comparisons, the target child took 5 to 6 minutes longer to initiate and complete cleanup responsibilities and begin the transition to the next activity. Following the intervention, consisting of a teacher prompt combined with a peer-mediated “buddy system,” minutes to completion steadily approached peer levels. However, from collaboration with the teacher and observations, other activities also were identified that needed similar intensive intervention, and data were collected that would be useful in subsequent interventions or special services decisions.
[Figure 5 ILLUSTRATION OMITTED]
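The continuous peer comparison for fluency reduces to tracking a time-based measure against the peer level; below is a sketch with hypothetical minutes-to-completion values in the spirit of Figure 5:

```python
# Hypothetical fluency data: minutes from the cleanup signal to completed
# cleanup, for the target child and a peer criterion, across sessions.

target_minutes = [9.0, 8.5, 9.5, 6.0, 5.0, 4.0, 3.5]
peer_minutes   = [3.0, 3.5, 3.0, 3.0, 3.5, 3.0, 3.0]
intervention_starts = 3   # index of the first intervention session

for i, (t, p) in enumerate(zip(target_minutes, peer_minutes)):
    phase = "baseline" if i < intervention_starts else "intervention"
    print(f"Session {i + 1} ({phase}): target {t - p:+.1f} min relative to peers")
```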
Tactic 6: Modifying Activities
Rationale. Wolery (1994a) stated: “Most interventions needed by young children with special needs should be implemented within the context of ongoing activities in high-quality early childhood programs” (p. 102). Environmental intervention to support children with disabilities is one of the most important historical and philosophical ideas in inclusionary educational programming and IBA/IBMFE. It has impressive empirical support over many behaviors of concern (Broussard & Northup, 1995; Dunlap & Kern, 1996; Greenwood, 1996; Schwartz, Carta, & Grant, 1996; Umbreit, 1996) and is limited only by a team’s creativity (McGee, Daly, Izeman, Mann, & Risley, 1991).
Definition, Measurement, and Example. Including children with disabilities in typical settings may be based on making changes in activities, routines, or classroom management. These required changes or differences in activities may be used as natural discrepancies for special services consideration.
There are well-developed steps for defining interventions in inclusionary settings (Barnett et al., 1994; Ehrhardt et al., 1996; Gresham, 1989; LeLaurin & Wolery, 1992; Lentz, Allen, & Ehrhardt, 1996) and their outcomes (Noell & Gresham, 1993; Martens & Witt, 1985). Two critical steps are identifying and quantifying the events and behaviors comprising the intervention (adapted from LeLaurin & Wolery, p. 281). The end results are estimates of the amount of time and resources a teacher would spend planning and carrying out an intervention, and the impact of the planned changes on the overall instructional ecology.
Figure 6 illustrates discrepancies in instructional strategies necessary for successful child performance. In this example, the missing item format (Tirapelle & Cipani, 1992) intervention was used to increase functional requesting by a target child. Measures of functional requests during other times of the day were recorded for potential generalization. During baseline, the target child was not heard to make functional requests. As functional requests increased during lunch, other instructional periods were sequentially added to increase the number of trials using the technique in a natural way (e.g., the missing item format was carried out during lunch in the second phase, during lunch and toothbrushing in the third phase, and art was added in the fourth phase).
[Figure 6 ILLUSTRATION OMITTED]
Intervention efforts were successful in increasing functional requesting by the target child during instructional sessions and nonintervention periods. In this case, a discrepancy is indicated by the need to modify activities and, relatedly, by the nature of the teacher’s sustained effort. Future goals also will involve language competence.
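Because the modified activity is itself part of what is measured, its implementation must be documented. A minimal treatment integrity sketch follows; the script steps are hypothetical, loosely modeled on the missing item format:

```python
# Integrity is scored as the percentage of scripted intervention steps the
# teacher completed in a session (the steps below are hypothetical).

script = [
    "withhold one needed item (e.g., spoon) at the start of lunch",
    "wait 5 seconds for a request",
    "prompt a request if none occurs",
    "give the item immediately after a request",
    "record whether the request was independent",
]
steps_completed = [True, True, False, True, True]   # observer checklist

integrity = 100 * sum(steps_completed) / len(script)
print(f"Treatment integrity this session: {integrity:.0f}%")   # 80%
```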
Tactic 7: Curricular Adaptations
Rationale. One of the most useful approaches for making educational decisions involves the systematic and ongoing assessment of children within the contexts of a well-constructed curriculum, instructional techniques, and environmental conditions necessary for competent performance (LeBlanc, Etzel, & Domash, 1978). Curriculum-based assessment circumvents many problems because developmental sequences are tied to ongoing measurement related to instructional efforts and skill progression rather than to a “profile” of skills at one point in time, and the results are used in planning for intervention decisions (Fuchs, Fuchs, Hamlett, Phillips, & Karns, 1995; Munk & Repp, 1994; Shinn, 1989).
Definition, Measurement, and Example. To enable decisions about interventions, a curriculum must have (a) a wide range of functional, developmentally sequenced tasks; (b) ongoing measurement of progress through the tasks; and (c) a variety of teaching and learning strategies. Children who remain at a particular level of a skill sequence, or who are unable to complete a task (as determined by the above tactics), may require a change in instructional strategies in order to progress to the next step (Wolery et al., 1988).
The developmental domains may be assessed by ongoing observations of the child’s performance in preacademic, social, and other developmental areas through curriculum-referenced measurement if the curriculum includes the developmental sequences. If not, sequences of developmental skills from a specialized curriculum, along with progress monitoring, may be added for children. Judgments about developmental delay may be made through peer micronorms, taking into account the appropriateness of the instructional strategies that are being used.
The most basic measurement strategy is monitoring of a child’s progress through a sequenced curriculum. Progress is reported as objectives that are mastered. However, the information gained from a curriculum-based assessment is much broader (LeBlanc, Etzel, & Domash, 1978; Lentz & Shapiro, 1986) and may include (a) current level of performance or functioning; (b) rate of learning new skills; (c) strategies necessary to learn new skills; (d) length of time the new skill is retained; (e) generalization of previously taught skills to a new task; (f) observed behaviors that deter learning; (g) environmental conditions needed to learn skills (individual, group, peer instruction); (h) motivational techniques used to acquire skills; and (i) skill acquisition in relationship to peers.
Figure 7 compares the number of curricular objectives mastered for the target child and randomly selected typical peers. From October to January the target child consistently mastered approximately 5 curricular objectives. The peers steadily increased the number of objectives mastered from 20 to 27 during this same time period. An intervention was implemented in order to help the target child increase the number of objectives mastered. The intervention was successful in increasing the number of curricular objectives mastered to 17 by the end of the school year. The peer data continued to steadily increase to 33 objectives mastered. The discrepancies demonstrate the need for sustained intensive instructional efforts, including the possible adaptation of a specialized curriculum.
[Figure 7 ILLUSTRATION OMITTED]
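The Figure 7 comparison reduces to tracking cumulative objectives mastered over time for the target child and peers; the sketch below uses hypothetical monthly values consistent with the narrative:

```python
# Cumulative curricular objectives mastered, by month (hypothetical values).
months = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr", "May"]
target = [5, 5, 5, 5, 8, 11, 14, 17]      # intervention introduced midyear
peers  = [20, 22, 25, 27, 29, 30, 32, 33]

for m, t, p in zip(months, target, peers):
    print(f"{m}: target {t:2d} vs. peer median {p:2d} (gap {p - t})")
```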
Using IBMFE Data for Decision Making
The tactics are used to provide data for special services eligibility considerations. To study discrepancy patterns and their impact on decision making, we recommend the use of an IBMFE Decision Summary (adapted from Helmstetter & Guess, 1987, pp. 262-263; Wolery, 1994b, 1996). Essentially, this is a format to organize and use assessment information provided by the functional discrepancy analysis to plan intervention programs. Other formats could be adapted. In the PASSKey model, settings and activities are listed, followed by domains linked to keystone targets for change (e.g., communication, adaptive). Next, intervention plans are summarized, as are the tactics (or combinations) used for discrepancy analysis. Other pertinent information about the child or setting also is communicated. The last entry includes team decisions about intervention needs and service delivery allocations. The end result is an activity-by-domain/discrepancy-by-intervention services plan.
Eligibility decision making is based on both (a) a discrepancy between educational performance in a critical or significant curriculum area of the referred child in comparison to expectancies for the performance of a typical child (Wolery, Bailey, & Sugai, 1988, p. 287) and (b) a desired change in performance that is resistant to planned intervention efforts that are naturally sustainable within the educational service unit. Together, these factors are direct measures of “intensity of the need for special support” (Hardman et al., 1997, p. 64). The patterns represent both meaningful differences in children’s performance and extraordinary or extensive effort by team members or other resources that are required in order to reduce performance discrepancies. During the intervention period, documentation of failure of normal classroom modifications and the necessity of special effort is collected. Together, discrepancy and resistance analysis clarify the unique and special curriculum that is special education.
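This two-part logic can be expressed as a simple decision sketch. The cutoff below is a hypothetical placeholder; in practice, both criteria come from the team’s discrepancy and resistance data, not from a fixed formula:

```python
# Hedged sketch of the two-part IBMFE eligibility logic; the 0.5 cutoff is
# a hypothetical placeholder for team-set criteria, not a fixed rule.

def consider_eligibility(discrepancy_ratio, sustainable_without_services):
    """discrepancy_ratio: post-intervention target performance as a fraction
    of the peer micronorm; sustainable_without_services: team judgment that
    the effective intervention can run on classroom resources alone."""
    meaningful_discrepancy = discrepancy_ratio < 0.5
    return meaningful_discrepancy and not sustainable_without_services

print(consider_eligibility(0.4, sustainable_without_services=False))  # True: consider services
print(consider_eligibility(0.9, sustainable_without_services=True))   # False
```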
Problem Areas
Discrepancy Analysis
Discrepancy analysis is a thorny measurement topic (Macmann & Barnett, 1999), and the use of IBMFE does not resolve the fact that decision errors and risks are involved. However, in moving from traditional to IBMFE tactics, the potential benefits of discrepancy analysis may outweigh the costs. Logical discrepancies derived from contextualized data are directly linked to needed interventions and service delivery features such as teacher effort or specialized interventions. Moreover, the discrepancies of interest for IBMFE are relevant to ongoing progress monitoring and modifications of intervention plans.
Reliability and Validity
IBMFE does not yet have an extensive empirical basis of support. Problem solving provides a significant structure and process to IBMFE, but when a problem-solving model is used, there may be considerable divergence in steps, data, judgments, and outcomes across problem solvers, and there is no inherent a priori guarantee that outcomes will be beneficial. We have examined these potential problem areas and have made recommendations in other work (Macmann & Barnett, 1999; Macmann et al., 1996). The basics have to do with (a) reasonableness in target variable selection and measurement, (b) defendable intervention selection (the intervention was functionally relevant and/or empirically supported), (c) intervention integrity, (d) carefully designed intervention outcome measurement and follow-up, and (e) at the least, implied cost appraisals (that the most efficient and effective strategies will be used).
Technical Adequacy of Micronorms
The use of micronorms presents challenges. Perhaps the most fundamental is the need for well-functioning classrooms in order to interpret findings meaningfully. Questions about selecting comparison children, sampling behaviors and times to observe, effects of class or school base rates, reliability, and validity may be raised. We argue that the use of peer micronorms remains an effective technique for documenting discrepancies in child behaviors and instructional strategies within the context of the child’s classroom routines and activities if technical adequacy criteria can be satisfied for the individual case (Bell & Barnett, in press; Macmann et al., 1996).
Children at Home and in Noninclusionary Settings
The proposed tactics present challenges for disability evaluations in home settings and noninclusionary special classrooms. However, the basic procedures remain the same. The heart of the procedures would be curriculum-referenced measurement, with discrepancies derived from local expectancies.
Conclusions
The crux of our discussion has been to shift the focus of discrepancy analysis in special education evaluation from test scores (psychometric criteria) to behaviors in natural settings as a basis for intervention planning and service allocation (contextual criteria). Within IBMFE, the collaboration of stakeholders yields demonstrations of the services needed to meet the needs of a child within inclusive settings. Moreover, there is no empirical reason to believe that traditional methods of disability determination have a stronger research base than IBMFE, especially with respect to the validity criterion of positive outcomes for children.
There obviously would be many problems in changing the established system of special education placement–where special education has been a place, not a service. If IBMFE were to become the standard, there would need to be large changes in professional preparation. The actual culture in most educational settings would have to change, and that is never an easy process. Changes also would have to occur in terms of systemic contingencies that would support an intervention-oriented process. Yet, if educators are to comply with regulations, in spirit and in fact, then change will have to occur. IBMFE offers a coherent, conceptually consistent alternative and deserves serious consideration by both practitioners and researchers. We have discussed the technical adequacy of IBMFE; additional discussion and data are critical.
AUTHORS’ NOTES
(1.) This article is based in part on a paper presented at the Annual Convention of the American Psychological Association, Chicago, Illinois, August 1997. Parts of the article also appeared in the PASSKey Training Guide and Manual funded by the Ohio Department of Education, Division of Early Childhood Education. Opinions do not necessarily reflect the position or policy of the Division.
(2.) Appreciation is extended to Annie Bauer, Ed Daly, Kelly Maples, Karin Nelson, and Amy Van Buren for their thoughtful reviews.
REFERENCES
Alessi, G. J. (1988). Direct observation methods for emotional/behavior problems. In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools: Conceptual foundations and practical applications (pp. 14-75). New York: Guilford Press.
Allen, S. J., & Graden, J. L. (1995). Best practices in collaborative problem solving for intervention design. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology III (pp. 667-678). Washington, DC: NASP.
Anastasi, A. (1988). Psychological testing (6th ed.). New York: Macmillan.
Baer, D. M., Wolf, M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
Barnett, D. W., Bauer, A. M., Ehrhardt, K. E., Lentz, F. E., & Stollar, S. A. (1996). Keystone targets for change: Planning for widespread positive consequences. School Psychology Quarterly, 11, 95-117.
Barnett, D. W., Bell, S. H., Bauer, A., Lentz, F. E., Jr., Petrelli, S., Air, A., Hannum, L., Ehrhardt, K. E., Peters, C. A., Barnhouse, L., Reifin, L. H., & Stollar, S. (1997). The Early Childhood Intervention Project: Building capacity for service delivery. School Psychology Quarterly, 12, 293-315.
Barnett, D. W., Collins, R., Coulter, C., Curtis, M. J., Ehrhardt, K., Glaser, A., Reyes, C., Stollar, S., & Winston, M. (1995). Ethnic validity and school psychology: Concepts and practices associated with cross-cultural professional competence. Journal of School Psychology, 33, 219-234.
Barnett, D. W., Ehrhardt, K. E., Stollar, S. A., & Bauer, A. M. (1994). PASSKey: A model for naturalistic assessment and intervention design. Topics in Early Childhood Special Education, 14, 350-373.
Barnett, D. W., Lentz, F. E., Bauer, A. M., Macmann, G., Stollar, S., & Ehrhardt, K. E. (1997). Ecological foundations of early intervention: Planned activities and strategic sampling. The Journal of Special Education, 30, 471-490.
Bell, S. H. (1997). Parent preferences for involvement in assessment and intervention design. Unpublished doctoral dissertation, University of Cincinnati.
Bell, S. H., & Barnett, D. W. (in press). Peer micronorms in the assessment of young children: Methodological review and examples. Topics in Early Childhood Special Education.
Bijou, S. W., Peterson, R. F., Harris, F. R., Allen, K. E., & Johnston, M. S. (1969). Methodology for experimental studies of young children in natural settings. The Psychological Record, 19, 177-210.
Binder, C. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19, 163-197.
Bramlett, R., & Barnett, D. W. (1993). The development of a direct observation code for use in preschool settings. School Psychology Review, 22, 49-62.
Bredekamp, S., & Copple, C. (Eds.). (1997). Developmentally appropriate practice in early childhood programs (rev. ed.). Washington, DC: National Association for the Education of Young Children.
Broussard, C. D., & Northup, J. (1995). An approach to functional assessment and analysis of disruptive behavior in regular education classrooms. School Psychology Quarterly, 10, 151-164.
Carta, J. J. (1994). Developmentally appropriate practices: Shifting the emphasis to individual appropriateness. Journal of Early Intervention, 18, 342-348.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus, OH: Merrill.
Cronbach, L. J., & Gleser, G. C. (1965). Psychological tests and personnel decisions. Urbana: University of Illinois Press.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional Children.
Doyle, P. M., Wolery, M., Ault, M. J., & Gast, D. L. (1988). System of least prompts: A literature review of procedural parameters. Journal of the Association for Persons with Severe Handicaps, 13, 28-40.
Ducharme, J. M. (1996). Errorless compliance training: Optimizing clinical efficiency. Behavior Modification, 20, 259-280.
Dunlap, G., & Kern, L. (1996). Modifying instructional activities to promote desirable behavior: A conceptual and practical framework. School Psychology Quarterly, 11, 297-312.
Durand, V. M. (1990). Severe behavior problems: A functional communication training approach. New York: Guilford Press.
Ehrhardt, K. E., Barnett, D. W., Lentz, F. E., Stollar, S. M., & Reifin, L. (1996). Innovative methodology in ecological consultation: Use of scripts to promote treatment acceptability and integrity. School Psychology Quarterly, 11, 149-168.
Evans, I. M., & Meyer, L. H. (1985). An educative approach to behavior problems: A practical decision model for interventions with severely handicapped learners. Baltimore: Brookes.
Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1-14.
Forehand, R. L., & McMahon, R. J. (1981). Helping the noncompliant child: A clinician’s guide to parent training. New York: Guilford Press.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Karns, K. (1995). General educators’ specialized adaptation for students with learning disabilities. Exceptional Children, 61, 440-459.
Greenwood, C. R. (1996). The case for performance-based instructional models. School Psychology Quarterly, 11, 283-296.
Gresham, F. M. (1989). Assessment of treatment integrity in school consultation/ prereferral intervention. School Psychology Review, 18, 37-50.
Gresham, F. M. (1991). Conceptualizing behavior disorders in terms of resistance to intervention. School Psychology Review, 20, 23-36.
Hardman, M. L., McDonnell, J., & Welch, M. (1997). Perspectives on the future of IDEA. Journal of the Association for Persons with Severe Handicaps, 22, 61-77.
Haring, N. G., Lovitt, T., Eaton, M., & Hanson, C. (1978). The fourth R: Research in the classroom. Columbus, OH: Merrill.
Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Brookes.
Helmstetter, E., & Guess, D. (1987). Applications of the individualized curriculum sequencing model to learners with severe sensory impairments. In L. Goetz. D. Guess, & K. Stremel-Campbell (Eds.), Innovative program design for individuals with dual sensory impairments (pp. 255-282). Baltimore: Brookes.
Houlihan, D., & Brandon, P. K. (1996). Compliant in a moment: A commentary on Nevin. Journal of Applied Behavior Analysis, 29, 549-555.
Individuals with Disabilities Education Act Amendments of 1997, P.L. 105-17, 20 U.S.C. [sections] 1400 et seq.
Johnson-Martin, N. C., Attermeier, S. M., & Hacker, B. (1990). The Carolina curriculum for children with special needs. Baltimore: Brookes.
LeBlanc, J. M., Etzel, B. C., & Domash, M. A. (1978). A functional curriculum for early intervention. In K. E. Allen, V. A. Holm, & R. L. Schiefelbusch (Eds.), Early intervention–A team approach (pp. 331-381). Baltimore: University Park Press.
LeLaurin, K., & Wolery, M. (1992). Research standards in early intervention: Defining, describing, and measuring the independent variable. Journal of Early Intervention, 16, 275-287.
Lentz, F. E. Jr., Allen, S. J., & Ehrhardt, K. E. (1996). The conceptual elements of strong interventions in school settings. School Psychology Quarterly, 11, 118-136.
Lentz, F. E. Jr., & Shapiro, E. S. (1986). Functional assessment of the academic environment. School Psychology Review, 15, 346-357.
Macmann, G. M., & Barnett, D. W. (1999). Diagnostic decision making in school psychology: Understanding and coping with uncertainty. In C. R. Reynolds & T. Gutkin (Eds.), Handbook of School Psychology (3rd ed., pp. 519-548). New York: Wiley.
Macmann, G. M., Barnett, D. W., Allen, S. J., Bramlett, R. K., Hall, J. D., & Ehrhardt, K. E. (1996). Problem solving and intervention design: Guidelines for the evaluation of technical adequacy. School Psychology Quarterly, 11, 137-148.
Macmann, G. M., Barnett, D. W., Lombard, T. J., Belton-Kocher, E., & Sharpe, M. N. (1989). On the actuarial classification of children: Fundamental studies of classification agreement. The Journal of Special Education, 23, 127-149.
Marston, D., & Magnusson, D. (1988). Curriculum-based measurement: District level implementation. In J. L. Graden, J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instructional options for all students (pp. 137-172). Washington, DC: NASP.
Martens, B. K., & Witt, J. C. (1985). On the ecological validity of behavior modification. In J. C. Witt, S. N. Elliott, & F. M. Gresham (Eds.), Handbook of behavior therapy in education (pp. 325-340). New York: Plenum Press.
McGee, G., Daly, T., Izeman, S. G., Mann, L. H., & Risley, T. R. (1991, Summer). Use of classroom materials to promote preschool engagement. Teaching Exceptional Children, 44-47.
McWilliams, R. A., & Bailey, D. B. (1995). Effects of classroom social structure and disability on engagement. Topics in Early Childhood Special Education, 15, 123-147.
McWilliams, R. A., Trivette, C. M., & Dunst, C. J. (1985). Behavior engagement as a measure of the efficacy of early intervention. Analysis and Intervention in Developmental Disabilities, 5, 33-45.
Meehl, P. E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52, 194-216.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
Munk, D. D., & Repp, A. C. (1994). The relationship between instructional variables and problem behavior: A review. Exceptional Children, 60, 390-401.
Nevin, J. A. (1996). The momentum of compliance. Journal of Applied Behavior Analysis, 29, 535-547.
Noell, G. H., & Gresham, F. M. (1993). Functional outcome analysis: Do the benefits of consultation and prereferral intervention justify the costs? School Psychology Quarterly, 8, 200-226.
O’Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behavior: A practical handbook. Pacific Grove, CA: Brooks/Cole.
Reschly, D. J., & Ysseldyke, J. E. (1995). School psychology paradigm shift. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology III (pp. 17-31). Washington, DC: NASP.
Risley, T. (1972). Spontaneous language and the preschool environment. In J. C. Stanley (Ed.), Preschool programs for the disadvantaged: Five experimental approaches to early childhood education (pp. 92-110). Baltimore: Johns Hopkins University Press.
Saudargas, R. A., & Lentz, F. E., Jr. (1986). Estimating percent of time and rate via direct observation: A suggested observation procedure. School Psychology Review, 15, 36-48.
Sechrest, L. (1963). Incremental validity: A recommendation. Educational and Psychological Measurement, 23, 153-158.
Schill, M. T., Kratochwill, T. R., & Gardner, W. I. (1996). An assessment protocol for selective mutism: Analogue assessment using parents as facilitators. Journal of School Psychology, 34, 1-21.
Schwartz, I. S., Carta, J. J., & Grant, S. (1996). Examining the use of recommended language intervention practices in early childhood special education classrooms. Topics in Early Childhood Special Education, 16, 251-272.
Shinn, M. R. (1989). Identifying and defining academic problems: CBM screening and eligibility procedures. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 90-129). New York: Guilford Press.
Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates by increasing student response rates: A summary of research. School Psychology Quarterly, 11, 313-325.
Snell, M. E. (1995). Instruction of students with severe disabilities (4th ed.). New York: Merrill.
Tirapelle, L., & Cipani, E. (1992). Developing functional requesting: Acquisition, durability, and generalization effects. Exceptional Children, 58, 260-269.
Umbreit, J. (1996). Functional analysis of disruptive behavior in an inclusive classroom. Journal of Early Intervention, 20, 18-29.
Walker, H. M., & Hops, H. (1976). Use of normative peer data as a standard for evaluating classroom treatment effects. Journal of Applied Behavior Analysis, 9, 159-168.
Wolery, M. (1994a). Designing inclusive environments for young children with special needs. In M. Wolery & J. S. Wilbers (Eds.), Including children with special needs in early childhood programs (pp. 97-118). Washington, DC: NAEYC.
Wolery, M. (1994b). Implementing instruction for young children with special needs in early childhood classrooms. In M. Wolery & J. S. Wilbers (Eds.), Including children with special needs in early childhood programs (pp. 151-166). Washington, DC: NAEYC.
Wolery, M. (1996). Monitoring child progress. In M. McLean, D. B. Bailey, Jr., & M. Wolery (Eds.), Assessing infants and preschoolers with special needs (2nd ed., pp. 519-560). Columbus, OH: Merrill.
Wolery, M., Ault, M. J., & Doyle, P. M. (1992). Teaching students with moderate and severe disabilities: Use of response prompting strategies. White Plains, NY: Longman.
Wolery, M., Bailey, D. B., & Sugai, G. M. (1988). Effective teaching: Principles and procedures of applied behavior analysis with exceptional students. Needham Heights, MA: Allyn & Bacon.
Wolery, M., & Bredekamp, S. (1994). Developmentally appropriate practices and young children with disabilities: Contextual issues in the discussion. Journal of Early Intervention, 18, 331-341.
Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156-167.