Assessment in Early Intervention and Early Childhood Special Education: Building on the Past to Project Into Our Future
Scott R. McConnell
Unlike other disciplines or areas within the broad educational enterprise, throughout its history early childhood special education has had a central focus on the role of high-quality assessment practices in services to young children with disabilities and their families. Indeed, some of the first examples of early intervention or early childhood special education were occasioned by assessment studies, tools, and systems that allowed pediatricians and other general early childhood providers to identify young children who not only experienced developmental delays, but also were at accelerated risk for learning and behavior problems later in life (Shonkoff & Meisels, 1990).
Assessment practices have also long been linked directly to intervention approaches used in early childhood special education. When early intervention programs first appeared throughout the United States in the 1960s and 1970s, many practitioners used task-analytic or other developmental checklists as a basis for their curriculum of intervention and frequent close-point evaluation of child performance throughout instructional activities. Thus, for years, many early childhood special educators have accepted as a standard of practice the frequent collection of child performance data to assess the effects of intervention (McLean, Bailey, & Wolery, 1996).
Finally, unlike our colleagues who work with older children, early childhood special educators have little tradition of assessment for nosological or diagnostic purposes. Since the U.S. government first prompted widespread adoption of early childhood special education programs in the 1970s, states and local educational agencies have been able to identify young children receiving special education as “developmentally delayed,” rather than having to conduct testing to sort each child into one of several putatively discrete categories (e.g., Mild Mental Retardation, Learning Disability, Emotional and Behavioral Disorder). There is little reason to believe that these categorical assignments (and related nosological research) offer any information of substance to special educators working in K-12 education; there is even less reason to believe that such an approach would be useful in identifying young children who might benefit from early intervention (Bricker, 1993b, 1996).
SOME CONTEMPORARY CHALLENGES: A VISION OF THE SHORT-TERM FUTURE
So, researchers and practitioners in early childhood special education have a long history of intelligently applying assessment practices to describe children’s levels of development and need and to plan and evaluate individual programs. Everything is perfect … right? Unfortunately, several significant challenges confront contemporary assessment practices for young children with disabilities and their families; each deserves some attention from caring practitioners interested in sorting strengths and promises from weaknesses and wrong turns over the coming few years. Attention to these issues over the short term also will help our field be prepared to realize some of the more far-reaching opportunities described later.
Embrace the promise of assessment practices. In some quarters within the broad early childhood education community, there is reluctance or active resistance to formal assessment practices for individual children. This reluctance or resistance typically appears to exist among small (but sometimes vocal) segments of practitioners and commentators and seems to be based on some atypical interpretations of the principles of developmentally appropriate practice (cf. Carta, Atwater, Schwartz, & McConnell, 1993; Wolery, 1994). Although I fully acknowledge the potential risk in assessment practices inappropriately applied and assessment information inappropriately used (as will be discussed later in this section), I also believe that skilled and respectful practitioners can and must, in many conditions, use a variety of assessment practices and the resulting information to produce the most positive possible outcomes for young children–particularly children with more pressing needs or greater challenges, like those served in early childhood special education.
Remember that “assessment” is more than testing. There is an ongoing press from some quarters to expand “testing” in early childhood special education, often at the expense of high-quality assessment. An essential distinction is important here: Testing can be assessment but it need not be, and assessment can include testing but it need not. Assessment can be best described as the systematic collection and analysis of information to make a decision (McLean et al., 1996). The quality of information available for analysis and decision making is important, and sometimes (but certainly not always–see Salvia & Ysseldyke, 1996) we can assume that tests will produce data of known psychometric characteristics. But sometimes tests produce reliable but not needed information. And sometimes, teachers and their colleagues can select or construct data-gathering procedures that produce high-quality data that are well suited to the specific questions at hand (Bricker, Bailey, & Slentz, 1990; Schwartz & Olswang, 1996). So, although elementary and secondary models of special education may rely on testing for diagnosis and categorization and although test publishers may proclaim procedural innovations of the most recent edition of a test, early childhood special educators must still focus on the bottom line: What do we need to know to answer the question(s) that we currently need to answer, and is this test the best way to gather that information?
Conduct assessment that informs, but only as needed. Closely related to the challenges of reluctance to engage assessment practices and the ongoing pressure to test, I worry that early childhood special educators sometimes spend too much time collecting data that have no apparent or functional role in monitoring children’s progress, evaluating intervention effectiveness, or planning new services. Data collection “just because,” whether it is to fill some programmatic or administrative requirement (e.g., “we observe children three times a year”) or is based on some broadly adopted but poorly understood professional tradition (e.g., “we always ask about family composition”), is at best inefficient and at worst unethical. Engaging teachers, children, family members, or others in the compilation of information that has no direct and explicit function wastes time and keeps all participants from engaging in other, perhaps more beneficial, activities. This type of data gathering is also potentially harmful: When we collect information that we do not need and then save it, we create a risk that this information will be used for some other, as yet unknown and potentially undesirable, purpose in the future. I believe that, as professionals who work in service to children and families, we have a real and affirmative obligation to collect and use information that helps us serve children and families and improve outcomes, and we have a similar obligation to keep our attention focused and the information that we collect closely held.
BOLDLY GOING FORWARD: VISIONS FOR THE FUTURE
As early childhood special educators, we have strong traditions and practices on which to build and contemporary challenges we must face. All things considered, I believe that assessment practices in the coming century will move in at least three relatively new and very exciting directions. Each of these directions stands to advance our understanding of young children’s development and, more importantly, to provide new resources for improving the quality of early childhood special education services for individuals and groups of young children with disabilities and their families.
Assessing Child Growth and Development
The major purpose of early intervention and education is improving skills, competencies, or adjustment for individual children and their families. Consequently, “change” or “progress” are the major organizing metaphors for many discussions about the effects of education. Teachers want to know if their intervention services and supports are helping children change. Parents want to know if their children are progressing. Administrators, evaluators, funders, and policymakers want to know if programs produce change. When we think about the products of educational programs, we often think first about child change or progress.
Describing child change from one time to another is a hallmark strength of special education. Our professional practices, now codified in federal and state laws and regulations, require that children’s individual intervention programs focus on specific and measurable goals and objectives; that the child’s progress toward these goals and objectives be measured frequently and reviewed at least annually; and that teachers, parents, and others use the information from these assessments to evaluate the quality of a child’s program and, where desired, plan for changes in either the goals or procedures of intervention.
Formal systems for monitoring child progress have long been available for children in early childhood special education, and recent developments have produced new and improved options. Historically, teachers and others have had access to curriculum guides that explicitly describe skill or developmental hierarchies, such that elements of these hierarchies can be used for both assessment and intervention (Johnson-Martin, Jens, Attermeier, & Hacker, 1991). This developmental approach represents one of the major thrusts in progress monitoring.
Two recently published systems are good examples of this approach. The Assessment, Evaluation, and Programming System for Infants and Children (or AEPS), in its respective volumes for infants and toddlers (Bricker, 1993a) and preschoolers (Bricker & Pretti-Frontczak, 1996), describes a comprehensive system of test items, procedures, IFSP/IEP goals, and instructional recommendations for work with children with disabilities and children at risk for developmental delays. The AEPS offers formal procedures for assessing children in various ways and for relating this assessment directly to intervention through related curricula (Bricker, Pretti-Frontczak, & McComas, 1998; Cripe, Slentz, & Bricker, 1993). The Work Sampling System (WSS; Meisels, Liaw, Dorfman, & Nelson, 1995) is part of a more comprehensive assessment and progress monitoring system, designed currently for children from 2 to 12 years of age. The WSS was designed specifically to be used in the context of “performance-based assessment” (or assessment of child performance in naturalistic settings). Like the AEPS, the WSS is organized as a set of developmentally related skills; unlike the AEPS, the WSS system is organized by age, describing the developmental expectations of that age and adjacent (younger and older) ages so that teachers and parents can place a child’s current performance in the context of both previous and future development and within age-based developmental expectations.
Existing approaches to monitoring child progress or change have many distinctive strengths; they are explicit in their description of child skills and competencies, developmental in their description of the typical sequence or a hierarchical pattern of emergence of these skills and competencies, and psychometrically sound with careful evaluation and reporting of reliability and validity dimensions. Further, several of these available tools (most notably the AEPS) are linked closely to treatment programs and procedures, allowing teachers and parents to negotiate a tight and desirable relation between the ways in which child progress is assessed and intervention is provided.
Yet, while many available tools mark progress or change for individual children, they are not well designed for monitoring the rate of progress or change (a dimension that might be called “growth”). There is little reason to believe that the successive items in a developmental sequence like the AEPS or the WSS have a known scale; that is, the “amount” of development required to move from mastery of one skill to mastery of the next more demanding skill is not consistent across the range. Available tools based on developmental skills help answer such questions as, “Has the child acquired skills or competencies that we judged, last time we met, to be important?” or “Is the child’s skill or competency different than it was on the previous assessment?” However, these tools provide little support for answering questions about rate of change, such as, “Is the child developing or acquiring skills more quickly now than she was before?” or “Is the child developing or acquiring skills quickly enough that he will be likely to meet the academic demands of a later classroom environment?”
In coming years, I expect progress monitoring tools to increasingly reflect two different paradigms of assessment (Deno, 1997; Fuchs & Deno, 1991)–a “critical skills mastery approach” represented by many existing measures, and an emerging “general outcome measures” approach, which will be represented by new sets of measures. This newer general outcomes measurement approach will offer substantial advances for assessment practices and intervention outcomes in early childhood special education.
General outcome measures can be distinguished from critical skills mastery approaches by several features. First, all general outcome measures within a particular domain or area are indexed against a common long-term goal. For instance, rather than assessing acquisition of specific semantic and syntactic skills we might assign to the broad domain of expressive language, a general outcome measure might assess child progress toward a goal, such as “The child uses gestures, sounds, and words to express wants and needs and convey meaning to others” (Priest et al., 1998).
Second, general outcome measures typically incorporate common measurement procedures and metrics across an extended period of time and development. To continue our example of monitoring expressive language development, performance and development might be assessed by measuring the number of gestural, vocal, and verbal acts a child produces in social situations with adults or peers at different points in time from 6 to 36 months of age. In this way, general outcome measurement approaches produce one indicator of long-term development (e.g., the number of communicative acts); as a result, general outcome measures can estimate rate of change, or growth, across time or intervention conditions (Shin, Deno, Espin, & McConnell, 1999).
General outcome measures can be incorporated easily into continuous progress monitoring systems (Early Childhood Research Institute on Measuring Growth and Development, 1998a). Such a system might offer frequent and repeatable measurement, sensitivity to small changes due to development or intervention, and evaluation of child change at the level of individuals or groups.
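To make this idea of measuring growth concrete, the sketch below shows one way that repeated general outcome measurements might be summarized as a rate of change. It is a minimal illustration only: the child data, ages, and function name are hypothetical and are not drawn from any of the measurement systems cited here.

```python
# Minimal sketch: summarizing repeated general outcome measurements as a growth rate.
# The data below are hypothetical; ages are in months, and scores are counts of
# communicative acts observed in a brief, standardized play-based sample.

def growth_rate(ages_months, scores):
    """Return the ordinary least-squares slope (change in score per month)."""
    n = len(ages_months)
    mean_age = sum(ages_months) / n
    mean_score = sum(scores) / n
    numerator = sum((a - mean_age) * (s - mean_score)
                    for a, s in zip(ages_months, scores))
    denominator = sum((a - mean_age) ** 2 for a in ages_months)
    return numerator / denominator

# Hypothetical monthly observations for one child, 12 to 18 months of age.
ages = [12, 13, 14, 15, 16, 17, 18]
acts = [8, 9, 11, 12, 15, 16, 19]

print(f"Estimated growth: {growth_rate(ages, acts):.2f} communicative acts per month")
```

Because the same indicator is collected repeatedly on a common metric, the slope itself can be compared across time or across intervention conditions, which is precisely what critical skills mastery tools cannot readily support.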
Development and evaluation of general outcome measures will add a powerful set of tools to the assessment portfolio in early childhood special education. These measures provide more direct attention to the rate of growth for individual children and provide explicit means for linking assessment and intervention across developmental or service delivery boundaries. General outcome measures will contribute to more frequent and systematic evaluation of the overall effectiveness of intervention programs; where critical skills mastery approaches isolate and describe the proximal effects of instruction or intervention and thus are useful for building and evaluating individual intervention plans, general outcome measures more directly measure child progress in a broader developmental context, allowing interventionists and others to determine whether current situations are leading to desired rates of development toward long-term goals. Critical skills mastery and general outcome approaches can complement one another well; as more general outcome measurement approaches become available, I am confident a better approach to monitoring intervention effectiveness in the short- and long-term will emerge.
Assessing Factors That Contribute to Child Growth and Development
In the past two decades, research in child development and early intervention has increasingly adopted approaches that view child behavior change in context, with an implicit or explicit assumption that some degree of child change is due to the arrangement of proximal and distal variables influencing the child in that context (Bronfenbrenner, 1979; Schroeder, 1990; Strain et al., 1992). Ecobehavioral analysis, a special case of this ecological perspective, has made particular inroads in studying the development of young children with disabilities and those considered at risk (Carta, 1986; Carta, Atwater, Schwartz, & Miller, 1990; Carta et al., 1997; Greenwood, Carta, Kamps, & Arreaga-Mayer, 1990; Hart & Risley, 1996; Odom, Peterson, McConnell, & Ostrosky, 1990; Rush, 1999).
This ecobehavioral research has made special contributions to assessment practice. Ecobehavioral assessment improves our ability to identify conditions associated with desired or undesired developmental outcomes, identifying potentially effective treatment conditions or components and extending our definition of “treatment settings” to include a wider variety of naturalistic settings. At a molar level, ecobehavioral assessment has identified activities or other broad environmental variables that are associated with particular child behaviors, levels of engagement, or development (Odom et al., 1990; Rush, 1999). The same logic has been extended to more proximal variables as well, identifying child–child and child–adult interactions that appear to be more directly associated with these outcomes (Carta et al., 1997; Hart & Risley, 1996). What emerges from this research is a fuller, more contextual description of the processes of development (including both child behaviors and the conditions associated with their acquisition or elaboration) that provides a rich metaphor for planning and evaluating both natural development and purposeful intervention.
This approach is being extended, with great promise, into the world of professional practice. Early examples specifically demonstrated the ties between ecobehavioral assessment and intervention planning (Ager & Shapiro, 1995; Hoier, McConnell, & Pallay, 1987). These studies illustrated a “template matching” approach, in which observational assessments compared child behaviors and the conditions associated with these behaviors in two different settings and then used this information to plan transition services to facilitate transfer of child competence from one setting to the next. More recently, observational and other tools for ecobehavioral assessment and analysis have become available for practitioners, including both general models for assessment and treatment planning (Barnett, Ehrhardt, Stollar, & Bauer, 1994; Barnett et al., 1997) and specific tools adapted from effective research procedures (Greenwood, Carta, Kamps, Terry, & Delquadri, 1994). These tools can be, and are being, used by practitioners to conduct assessments that sample child behaviors and important environmental variables in one or more settings and produce results that assist teachers, parents, and related service professionals in evaluating existing intervention services and, where needed, planning new ones.
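As a rough illustration of the template-matching logic just described, the sketch below compares a simple list of competencies observed in a child’s current setting with a “template” of expectations for the receiving setting. The settings, skills, and the use of bare skill lists are hypothetical simplifications; published applications rely on structured observational data rather than simple sets.

```python
# Hypothetical illustration of template matching: competencies expected in the
# receiving (next) setting but not yet observed in the current setting become
# candidate targets for transition planning.

current_setting = {
    "follows one-step directions",
    "requests help with words",
    "plays alongside peers",
}

next_setting_template = {
    "follows one-step directions",
    "follows two-step directions",
    "requests help with words",
    "takes turns in small groups",
}

transition_targets = next_setting_template - current_setting
print("Candidate transition targets:", sorted(transition_targets))
```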
Ecobehavioral assessments, and other approaches that “proceduralize” a contextual view of child behavior and competence, promise to expand our notion of assessment, providing a richer and more detailed view of the process and product of children’s development. These approaches help us conceptualize and identify the developmental “resources and liabilities” present for individual children in different settings and provide a strong basis for expanding our notion of treatment to include both formal services and informal supports in both structured settings (like classrooms) and more naturalistic ones (like homes) where children spend their time.
Assessing to Plan for Improved Services and Supports
In coming years, formal systems for assessing child growth and development and using ecobehavioral data to plan intervention programs will merge into well-articulated decision-making models for monitoring child progress and planning revised intervention. These new decision-making models will blend existing and emerging measures of child progress, assessment data that describe relations between child behavior and ecobehavioral variables, and specific and explicit procedures for analyzing and interpreting data to generate a range of treatment options and select intervention plans. Part and parcel of a more comprehensive approach to assessment in early childhood special education, these decision-making models will contribute to improved child and family outcomes by reducing the uncertainty in selecting or planning intervention options.
Models that approximate or demonstrate this comprehensive approach have already begun to emerge in early childhood special education (Bricker, 1993a; Bricker & Pretti-Frontczak, 1996; Good & Kaminski, 1996; Notari & Bricker, 1990), as well as in K–12 educational programs (Deno & Mirkin, 1977; Shinn, 1998; Shinn, Habedank, & Baker, 1993). The general features of such a decision-making model–frequent monitoring of child progress and growth, formal decision rules for identifying desirable and undesirable rates of progress and growth, explicit procedures for producing data that help evaluate the likely merit of different intervention options, and a formal and explicit role for parents and families in evaluating all data and informing all decisions–have been described (Early Childhood Research Institute on Measuring Growth and Development, 1998b). The logic of such a model seems compelling, and the empirical support for many pieces seems strong. What is missing, however, are clear demonstrations that such decision-making models do indeed contribute to better outcomes for young children with special needs and their families. Our field is already profiting from the research, design, and dissemination of essential ingredients of a more comprehensive decision-making model and will benefit even more if, or when, the “value added” by these models is demonstrated empirically.
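To suggest what a formal decision rule in such a model might look like, the following sketch compares a child’s observed rate of growth (e.g., from a general outcome measure) with the rate needed to reach a long-term goal and flags the intervention plan for team review when growth falls short. The functions, threshold, and numbers are hypothetical and are not taken from any of the systems cited above.

```python
# Hypothetical decision rule: compare observed growth with the rate needed to
# reach a long-term goal, and flag the intervention plan for team review when
# observed growth falls below a chosen fraction of that needed rate.

def needed_rate(current_score, goal_score, months_to_goal):
    """Rate of change (per month) needed to reach the goal on time."""
    return (goal_score - current_score) / months_to_goal

def review_needed(observed_rate, required_rate, tolerance=0.8):
    """Flag review when observed growth is below `tolerance` of the required rate."""
    return observed_rate < tolerance * required_rate

required = needed_rate(current_score=12, goal_score=36, months_to_goal=12)  # 2.0 per month
print("Review intervention plan:", review_needed(observed_rate=1.3, required_rate=required))
```

In a complete model, any such numeric rule would be one input among several, with parents, teachers, and related service professionals interpreting the flag in light of other assessment data before changing a child’s program.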
CONCLUSION
Assessment practices have long been seen as essential in early intervention and early childhood special education. Our field has benefited from this strong commitment: We have a long history and substantial information from empirical research in this area, we have had strong conceptual and theoretical leadership from leaders in our field, and we have had a rich tradition of direct application by a wide array of professionals. In short, assessment practices are seen as part of the fabric and form of services to young children with special needs and their families.
This history provides a strong foundation for continued research, development, and integration of new approaches and applications in assessment with young children and their families. In my view, this future work will refine and improve the core assessment resources available in early intervention and early childhood special education and will be further integrated into the fabric of what we do in service for young children and their families. If assessment is “collecting and analyzing information to make decisions,” I think the coming years will provide better tools for collection and analysis and will thus support better decisions in ways that directly contribute to better outcomes for many young children.
AUTHOR’S NOTE
Preparation of this article was supported by the University of Minnesota and by “Early Childhood Research Institute on Measuring Growth and Development” (Grant number H024560010), a cooperative agreement between the U.S. Department of Education, Office of Special Education Programs and the universities of Minnesota, Kansas, and Oregon. However, the opinions expressed in this paper are those of the author only, and no official endorsement should be inferred.
REFERENCES
Ager, C. L., & Shapiro, E. S. (1995). Template matching as a strategy for assessment of and intervention for preschool students with disabilities. Topics in Early Childhood Special Education, 15, 187-218.
Barnett, D. W., Ehrhardt, K. E., Stollar, S. A., & Bauer, A. M. (1994). PASSKey: A model for naturalistic assessment and intervention design. Topics in Early Childhood Special Education, 14, 350-373.
Barnett, D. W., Lentz, F. E., Bauer, A. M., Macmann, G., Stollar, S., & Ehrhardt, K. E. (1997). Ecological foundations of early intervention: Planned activities and strategic sampling. The Journal of Special Education, 30, 471-490.
Bricker, D. (1993a). AEPS measurement for birth to three years (Vol. 1). Baltimore: Brookes.
Bricker, D. (1993b). A rose by any other name, or is it? Journal of Early Intervention, 17(2), 89-96.
Bricker, D. (1996). The goal: Prediction or prevention? Journal of Early Intervention, 20(4), 294-296.
Bricker, D., & Pretti-Frontczak, K. (1996). AEPS measurement for three to six years (Vol. 3). Baltimore: Brookes.
Bricker, D., Pretti-Frontczak, K., & McComas, N. (1998). An activity-based approach to early intervention (2nd ed.). Baltimore: Brookes.
Bricker, D. D., Bailey, E. J., & Slentz, K. (1990). Reliability, validity, and utility of the Evaluation and Programming System: For Infants and Young Children (EPS–I). Journal of Early Intervention, 14(2), 147-160.
Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Cambridge: Harvard University Press.
Carta, J. J. (1986, May). Using eco-behavioral data to evaluate educational programs for handicapped preschoolers. Paper presented at the Twelfth Annual Convention of the Association for Behavior Analysis, Milwaukee, WI.
Carta, J. J., Atwater, J. B., Schwartz, I. S., & McConnell, S. R. (1993). Developmentally appropriate practices and early child special education: A reaction to Johnson and McChesney Johnson. Topics in Early Childhood Special Education, 13(3), 243-254.
Carta, J. J., Atwater, J. B., Schwartz, I. S., & Miller, P. A. (1990). Applications of ecobehavioral analysis to the study of transitions across early education settings. Education and Treatment of Children, 13(4), 298-315.
Carta, J. J., McConnell, S. R., McEvoy, M. A., Greenwood, C. R., Atwater, J. B., & Baggett, K. (1997). Developmental outcomes associated with in utero exposure to alcohol and other drugs. In M. Hack (Ed.), Drug-dependent mothers and their children (pp. 64-90). New York: Springer.
Cripe, J., Slentz, K., & Bricker, D. (1993). AEPS curriculum for birth to age three (Vol. 2). Baltimore: Brookes.
Deno, S. L. (1997). Whether thou goest … Perspectives on progress monitoring. In J. W. Lloyd, E. J. Kameenui, & D. Chard (Eds.), Issues in educating students with disabilities (pp. 77-99). Mahwah, NJ: Erlbaum.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional Children.
Early Childhood Research Institute on Measuring Growth and Development. (1998a). Research and development of individual growth and development indicators for children between birth and age eight (Vol. 4). Minneapolis: University of Minnesota.
Early Childhood Research Institute on Measuring Growth and Development. (1998b). Theoretical foundations of the Early Childhood Research Institute on measuring growth and development: An early childhood problem-solving model (Vol. 6). Minneapolis: University of Minnesota.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488-500.
Good, R. H., & Kaminski, R. A. (1996). Assessment for instructional decisions: Toward a proactive/prevention model of decision-making for early literacy skills. School Psychology Quarterly, 11(4), 1-11.
Greenwood, C. R., Carta, J. J., Kamps, D., & Arreaga-Mayer, C. (1990). Ecobehavioral analysis of classroom instruction. In S. R. Schroeder (Ed.), Ecobehavioral analysis and developmental disabilities (pp. 33-62). New York: Springer-Verlag.
Greenwood, C. R., Carta, J. J., Kamps, D., Terry, B., & Delquadri, J. (1994). Development and validation of standard classroom observation systems for school practitioners: Ecobehavioral Assessment Systems Software (EBASS). Exceptional Children, 61(2), 197-210.
Hart, B., & Risley, T. (1996). Meaningful differences in the everyday experiences of young American children. Baltimore: Brookes.
Hoier, T. S., McConnell, S., & Pallay, A. G. (1987). Observational assessment for planning and evaluating educational transitions: An initial analysis of template matching. Behavioral Assessment, 9(1), 5-19.
Johnson-Martin, N., Jens, K. G., Attermeier, S. M., & Hacker, B. J. (1991). Carolina curriculum for infants and toddlers with special needs (2nd ed.). Baltimore: Brookes.
McLean, M., Bailey, D. B., & Wolery, M. (1996). Assessing infants and preschoolers with special needs (2nd ed.). Columbus, OH: Merrill.
Meisels, S. J., Liaw, F.-R., Dorfman, A., & Nelson, R. N. (1995). The Work Sampling System: Reliability and validity of a performance assessment for young children. Early Childhood Research Quarterly, 10, 277-296.
Notari, A. R., & Bricker, D. D. (1990). The utility of a curriculum-based assessment instrument in the development of individualized education plans for infants and young children. Journal of Early Intervention, 14(2), 117-132.
Odom, S. L., Peterson, C., McConnell, S., & Ostrosky, M. (1990). Ecobehavioral analysis of early education/specialized classroom settings and peer social interaction [Special issue: Organizing caregiving environments for young children with handicaps]. Education and Treatment of Children, 13(4), 316-330.
Priest, J. S., McConnell, S. R., Walker, D., Carta, J. J., Kaminski, R. A., McEvoy, M. A., Good, R. H., Greenwood, C. R., & Shinn, M. R. (1998). General growth outcomes for children birth to age eight: Where do you want young children to go today and tomorrow? Technical Reports of the Early Childhood Research Institute on Measuring Growth and Development, University of Minnesota.
Rush, K. L. (1999). Caregiver-child interactions and early literacy development of preschool children from low-income environments. Topics in Early Childhood Special Education, 19, 3-14.
Salvia, J., & Ysseldyke, J. E. (1996). Assessment in special and remedial education (6th ed.). Boston: Houghton Mifflin.
Schroeder, S. R. (Ed.). (1990). Ecobehavioral analysis and developmental disabilities. New York: Springer-Verlag.
Schwartz, I. S., & Olswang, L. B. (1996). Evaluating child behavior change in natural settings: Exploring alternative strategies for data collection. Topics in Early Childhood Special Education, 16(1), 82-101.
Shin, J., Deno, S. L., Espin, C., & McConnell, S. R. (1999). Technical requirements for the assessment of child progress. Unpublished manuscript, University of Minnesota.
Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. New York: Guilford.
Shinn, M. R., Habedank, L., & Baker, S. (1993). Reintegration as part of a problem-solving delivery service. Exceptionality, 4(4), 245-251.
Shonkoff, J. P., & Meisels, S. J. (1990). Early childhood intervention: Evolution of a concept. In J. P. Shonkoff & S. J. Meisels (Eds.), Handbook of early childhood intervention (pp. 3-32). Cambridge: Cambridge University Press.
Strain, P. S., McConnell, S. R., Carta, J. J., Fowler, S. A., Neisworth, J. T., & Wolery, M. (1992). Behaviorism in early intervention. Topics in Early Childhood Special Education, 12(1), 121-141.
Wolery, M. (1994). Assessing children with special needs. In M. Wolery & J. S. Wilbers (Eds.), Including children with special needs in early childhood programs (pp. 71-96). Washington, DC: National Association for the Education of Young Children.
Address: Scott R. McConnell, 215 Pattee Hall, University of Minnesota, Minneapolis, MN 55455; e-mail: smcconne@tc.umn.edu