A review of psychosocial interventions for children with chronic health conditions
Laurie J. Bauman
A child’s serious illness or disability can place psychologic and social burdens on both child and family.[1-10] Epidemiologic data show that children with chronic health conditions have higher rates of mental health problems than children without such conditions.[11-14] Pless and Wadsworth have documented that higher rates of psychologic morbidity persist into adulthood.[15] Although children themselves bear the major psychosocial burden of their chronic health condition, studies have documented increased psychologic risk among their parents.[16-20] The number of children and families who are vulnerable to psychologic problems secondary to child chronic illness is large; estimates of the prevalence of chronic health conditions in children range between 10% and 30%.[3,4,9,11,21-23]
There is evidence that the psychologic and social needs of these children and families are not adequately addressed through conventional systems of care.[24-28] In particular, it has been documented that access to psychologic and support services is limited.[25,27-29] For example, Cadman et al[11] reported that only one-quarter of the children with a chronic physical illness and a significant mental health problem received mental health services. Weiland et al[30] reported that although children with chronic illnesses are recognized by primary care providers to have more mental health problems, they do not receive more services. Since these reports, an increasing proportion of children with chronic health conditions and disabilities are being cared for in managed care arrangements. It is possible that comprehensive services will be even harder to obtain and more difficult to fund.
The pressing psychologic, developmental, and social needs of these children have spurred the development of psychosocial interventions to address mental health problems and maximize the functioning of children and families as productive members of society. Evaluations of the effectiveness of such interventions have begun to emerge, although the theoretical, methodologic, and logistic difficulties associated with this research are substantial. The results of these evaluations, when reported, have been published in a wide range of publications, making it difficult to summarize and evaluate the conclusions and consider their implications for research and practice. To our knowledge, there has been no prior summary of this research. As competing demands for the health care dollar become more common, it is important that interventions be carefully evaluated for efficacy, and that successful interventions be identified and disseminated. To address this need, we systematically reviewed the psychologic and medical literature from 1979 to 1993 with the following goals: (1) to identify psychosocial intervention programs whose efficacy had been objectively evaluated; (2) to describe these programs and the theoretical models on which they were based; (3) to describe the adequacy of design and evaluation methods; (4) to summarize their findings; and (5) to recommend future directions in the development of the interventions themselves and the methods used to evaluate their effectiveness. These recommendations, although derived from a review of a narrow domain of research, may be applicable to the broader field of biobehavioral research.
A systematic search of the medical and psychologic literature was performed using two computerized databases, Index Medicus and Psychological Abstracts. We revised the search parameters several times by adding or deleting key words to assure a complete review. In addition, we examined the citations of articles identified through computer searches and contacted experts to identify other work we may have missed.
To be eligible, articles had to meet the following criteria:
1. The article had been published in a peer-reviewed journal during the 15-year period between 1979 and 1993.
2. The intervention targeted children with a chronic health condition or their family members. Although we define chronic conditions noncategorically,[31,32] Index Medicus and Psychological Abstracts rarely index papers using generic terms such as chronic condition or disability. Therefore, we included in addition a wide range of specific conditions as keywords. Thus, we may have missed interventions for children with conditions not included in the search criteria. (A list of the keywords is available from the first author on request.)
3. The study evaluated a planned psychosocial intervention. We excluded articles that examined naturally occurring family resources (eg, maternal support systems) because these were not planned programs. We also excluded medical interventions such as innovative service arrangements (eg, the effects of multidisciplinary clinical teams), and medical or physical therapy, medications, or treatment regimens. These interventions were designed to improve health, not psychosocial, outcomes. Any potential psychosocial effects were considered a secondary benefit. We chose to limit this review to programs designed to achieve psychosocial benefits rather than hold primarily medical interventions to a standard of outcome that they never intended.
4. The evaluation examined psychologic or social outcomes. Papers could include medical or functional outcomes in addition, but these results are not reported here.
5. The study met two minimal methodologic criteria: a minimum of 15 participants in the experimental group, and a comparison group. A random control group, a matched comparison group, or a convenience comparison group was acceptable, but a comparison to published norms was not.
6. The article was published in English.
Two of the authors (E.C.P. and I.B.P.) examined each computer search to identify potentially eligible articles using time, population, intervention, outcome, and language criteria. They based their initial judgment on titles and abstracts; subsequently, all articles were reviewed and eligibility was determined by consensus of all authors. Each eligible article was then reviewed independently by two authors. A review consisted of completing three rating forms for each paper: intervention, theory, and methods. All rating discrepancies were identified, discussed among all the authors, and reconciled by consensus. Although a few papers noted that more information was available from the authors about programs, we did not request additional information and limited our review to the material included in peer-reviewed articles.
Each paper was coded for the type(s) of intervention conducted, the target group, the intensity of intervention, how integrated the program was with the child’s medical care, and the level of training of the intervener. The form also included whether a manual was followed to assure consistency of intervention across interveners, clients, and time (fidelity), and whether consistency was monitored systematically (eg, through observation, video or audio tapes, or quality assurance forms). Articles rarely discussed cultural appropriateness or sensitivity; therefore, we were unable to collect these data.
Each report was rated on the extent to which a theoretical model was used to develop the intervention or to make various methodologic decisions. For example, we coded whether formal behavioral theory was cited as a rationale for the choice of program content or type, and how closely the program was tied to the theory. Also rated was whether a program theory was presented. Program theory is an implicit or stated rationale that explains how a program will accomplish its effects.[33,34] Finally, the role of theory in the selection of outcomes, the measures used, the timing of measurement, and power calculations was also assessed.
The methodology of each study was described using the following criteria: whether the comparison group was randomized, matched, or convenience; sample size; the sample’s representativeness; sociodemographic characteristics of the sample; type(s) of chronic illnesses included; whether losses to follow-up were accounted for; whether this was a replication of another program; the outcomes considered; whether findings were statistically significant in the expected direction; and whether the magnitude of any significant change was clinically important.
The computer searches generated 266 articles, of which 16 met our eligibility criteria.[35-50] In one instance, 2 articles reported results of the same intervention[47,48] and are counted as one program. Thus, this review covers 15 separate programs. Two articles about related programs[38,39] were considered evaluations of different interventions because the programs could be implemented independently and could succeed or fail independently. The vast majority of articles were ineligible because they did not describe an intervention program at all, or did not report any evaluation data on program effectiveness. Few studies were rejected because they failed to meet methodologic criteria. We had considered applying meta-analysis to the studies identified for review, but rejected this option because we found so few studies, and these were very heterogeneous in the nature of the programs evaluated, the research designs used, and the populations targeted.
Description of Intervention Programs
There are two noteworthy observations about the interventions themselves (Table 1). First, programs were rarely described in sufficient detail. We were often uncertain about what was actually done, what the target population of the intervention was, how well the clients were reached and by what means, what the substantive focus was (eg, counseling, education, or skills training), the duration and frequency of sessions, who the intervener was (discipline, experience, and training), or whether a program manual was used to assure consistency of delivery of the intervention. More often than not, the data provided in Table 1 were derived from skimpy information and accordingly, some of the interventions may have been misinterpreted.
[TABULAR DATA 1 NOT REPRODUCIBLE IN ASCII]
Second, in one form or another, education was chosen as part or all of the intervention modality in 11 programs. However, there was considerable variation around this theme depending on whether the child alone, parent alone, or both, were the target groups. Similarly, there was variation in the means used to deliver the educational messages: direct contact, written materials, or in one case a computer-assisted game. Several novel, noneducational programs were included, eg, one that focused entirely on social skills training, another that evaluated social workers, and a third that involved peer counseling training.
The interventions varied in their intensity. Six programs averaged only 5 to 6 hours of client contact, but the others invested considerably more. The duration of programs also varied, and ranged from 3 weeks to 15 months (4 did not specify).
Use of Program and Formal Theory
All studies were rated concerning the degree to which theory, as defined by DeGroot, was used to guide development and/or implementation of the intervention. One-third (5) clearly specified a theory. The role of theory either was not clear or was only implicit in the 10 others. Specific theories or frameworks that were used included self-efficacy, social support,[49] coping and stress management, emotional support and decision-making,[38,39] and helper theory.
We also assessed whether studies reported using program theory to provide a rationale for the hypotheses, to predict short- or long-term effects of interventions, or to justify decisions concerning the nature and timing of interventions, choice of measures, or power estimates. The majority (11) cited empirical evidence rather than theory to support a link between the program and the outcomes assessed. Although most investigators (11) provided a plausible argument for the expected effects of interventions, few (3) specified the time when program effects might be expected to emerge. Only six made any reference to theory to justify the type of intervention that was conducted. In contrast, the majority (13) used results of prior research as a rationale for the type of intervention conducted. Other key decisions, such as the timing of the intervention, nature and timing of measurement, and statistical power, appeared rarely to be based on theory. Only a handful of investigators (one to three in each category) used theory in this manner.
Methodology and Program Effectiveness
Methodologic characteristics of the studies are summarized in Table 2.
[TABULAR DATA 2 NOT REPRODUCIBLE IN ASCII]
Samples tended to be heterogeneous, which can make detection of program effects difficult. Examples of heterogeneity included age of child, social class, age at disease onset, duration of illness, severity, care requirements, and mobility. Seven programs focused on children with asthma, three on children with cancer, two on children with epilepsy, and three included children with various diagnostic conditions. Most often, participants were poorly described. Moreover, it was difficult to evaluate how special features or characteristics of the study population might have influenced the capacity of the program to achieve its outcomes (eg, was the population referred for particular problems, predominantly minority or low-income, mostly upper-income with strong resources, likely to have particularly severe disease?). Further, interventions generally were conducted on institutionally-based samples of convenience. Because there were few details about these samples, we were unable to assess the generalizability of results.
Ten studies used experimental designs with random assignment of subjects to experimental or control groups and pre-post measurement. One study, which evaluated a school-based intervention, matched schools and randomly assigned one of each pair to the intervention or control group. The other four studies used convenience samples for comparison.
Few investigators provided a rationale for sample size, such as power calculations. Experimental groups ranged in size from 20 to 200 and total sample sizes ranged from less than 100 to over 300. Based on our analysis, only about half of the studies could detect small to medium effect sizes.
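To illustrate the kind of power calculation the reviewed studies rarely reported, the sketch below approximates the per-group sample size needed to detect a standardized effect in a two-group comparison. This is our illustration, not a method from any reviewed study; the function name and defaults are ours, and the normal approximation slightly understates the exact t-test requirement.

```python
# Illustrative only: approximate per-group n for a two-group comparison
# of means, using the normal approximation to the two-sided t-test.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per group to detect a standardized effect (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 children per group, and a
# "small" effect (d = 0.2) roughly 393 -- larger than many of the
# experimental groups (20 to 200) in the studies reviewed here.
```

Running such a calculation before recruitment makes explicit whether a planned sample can plausibly detect the effects the program is expected to produce.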
Twelve studies included some standardized measures as part of their outcome assessment. However, few provided an explicit rationale for the selection of these measures, and none acknowledged that some scales used had not been validated on children with health problems. This omission is particularly important when psychiatric or behavioral symptom checklists are used, because medical conditions can produce somatic symptoms that are then counted as mental health symptoms.[53,54] Several studies developed their own measures, but few of these were standardized before their use.
Most studies tested several outcomes, but few adjusted for the inflation of Type I error that multiple tests produce. Similarly, few presented a priori criteria for establishing program success (eg, whether one significant effect among 10 tests would constitute a successful program). Also, many of the outcome measures had multiple subscales, and often each was examined for program effects without postulating in advance which subscales the program was intended to affect.
As shown in Table 2, 11 studies demonstrated a positive effect on at least one psychosocial outcome. One of these appeared to show some benefits on a standardized mental health measure, but the authors interpreted the magnitude of this effect to be too small to warrant a claim of success. In one study, a significant short-term effect did not persist long-term. In another, the only positive effects were found in a subgroup of the study population. Four studies demonstrated mixed results, in that some outcomes showed positive effects although others of equal importance did not. The size of the program effects was impossible to summarize. We were unable to calculate effect sizes at all for one-third of the studies. In the 10 studies that presented adequate data for effect size calculation, most used multiple measures, and measures with multiple subscales. These effect sizes were extremely difficult to interpret and summarize.
It should be noted that many of the studies examined other outcomes besides psychosocial ones, and some found significant effects (eg, on illness-related knowledge or school performance). Thus, programs may appear in Table 2 to be less successful than they actually were because for the purpose of this review only psychosocial outcomes were evaluated.
Outcomes Affected by the Programs
Several types of psychosocial outcomes were represented in these studies: psychiatric or behavioral symptoms; self-esteem, self-worth and social competence; locus of control; and family functioning. The type of outcome that was significantly affected most often was psychiatric or behavioral symptoms. Eight studies used 15 standardized psychiatric or behavioral symptom scales as their measure of outcome and in six studies, the intervention had a significant positive effect. One tested school adjustment and reported a significant improvement.
Nine of the studies measured effects of their intervention on self-esteem, self-efficacy or social competence, using 12 measures to assess this. However, only 4 reported significant improvement. Locus of control, a distinct but related construct, was examined by three studies. Of these, 1 had a positive effect, and 2 had no effect.
Seven studies included measures of other psychologic or social dimensions, including stress, protectiveness, social support, or satisfaction, and several examined illness-related anxiety or fear. Only 3 found program effects in any of these domains. Of 3 studies that examined family functioning, 1 reported a significant improvement. Finally, 2 of the articles we reviewed could be considered replications, in that both evaluated the same program (Superstuff) in different settings and with different populations.
We identified 15 examples of psychosocial interventions published in peer-reviewed journals that were adequately evaluated for their effects on psychologic or social outcomes among children with chronic health conditions or their family members. Based on this review, we draw two sharply contrasting conclusions.
First, there was good news. Of the 15 studies that met our criteria, most worked. At the level of statistical significance, and when examined against conventional scientific criteria, there was reasonably convincing evidence of efficacy in 10 of the 15 studies included in the final review. Positive effects were found, for example, for a number of programs designed to promote knowledge and self-management in asthma;[36,42,43] one examining reintegration of children with cancer in the school setting; and support and coordination of care among parents of children with a range of conditions.[47,48] These findings are particularly impressive when one considers the logistic difficulties involved in implementing interventions and evaluating them properly. These programs present interesting intervention models that may be applicable in other settings. Consequently, one priority for future research would involve replication of these programs in broader populations, different sites, and extended over time to assess longer term influences.
On the other hand, there was bad news as well. Although a large number of reports were identified in the initial search, only a small fraction were evaluation studies of programs. Few programs reported in the literature had been evaluated at all, and even fewer used acceptable research procedures. Many evaluation studies had serious methodologic flaws that limit the interpretation of results. Clearly, the most urgent issue arising from this review is the necessity of methodologically sound evaluations of interventions.
Based on our assessment of the literature, including studies identified in the initial search, there are several ways that the quality of intervention research could be improved. First, failure to demonstrate program effectiveness may result when sample sizes are too small to identify any but the most spectacular effects. A power analysis always should be conducted to assure that the study will have the statistical power to detect meaningful program effects. Second, a comparison group is always needed to ensure that changes observed after program implementation can be attributed to the intervention. Third, measurement tools should be sufficiently sensitive to the phenomenon under study, and have adequate validity and reliability. Use of inadequate measures may limit detection of program effects. Fourth, programs may appear to fail simply because they were too new and not yet ready for outcome evaluation. For this reason, it is helpful to conduct a process evaluation first to ascertain that clients can be identified, recruited, and maintained in the project; that necessary staff can be hired, trained, and kept involved; that the intervention can be delivered consistently client to client; and that clients are receptive and satisfied. Fifth, if several outcome measures are used, investigators should state a priori which ones must improve in order to label the program a success. Although it is reasonable to hypothesize that a program might have many kinds of benefits, conducting multiple statistical tests inflates the chances of finding significant results. Programs rarely succeed in achieving all desired outcomes. Therefore, the investigator should provide such guidelines so that programs with sufficient effect are replicated and adopted.
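One simple guard against the multiple-testing problem described above is a Bonferroni correction, which holds the familywise false-positive rate at the nominal alpha by dividing it across the outcome tests. The sketch below is our illustration under that standard procedure; it is not drawn from any of the reviewed studies.

```python
# Illustrative only: Bonferroni correction for a program evaluated
# against several psychosocial outcomes at once.
def bonferroni_reject(p_values, alpha=0.05):
    """Return, for each outcome, whether it remains statistically
    significant after dividing alpha by the number of tests."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Four outcome tests: only p-values at or below 0.05/4 = 0.0125 survive.
print(bonferroni_reject([0.004, 0.030, 0.010, 0.200]))
# -> [True, False, True, False]
```

The correction is deliberately conservative; its larger value here is that it forces investigators to enumerate, in advance, exactly which tests count toward a claim of program success.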
We acknowledge limitations with our own methods. We did not weight the results to take into account the strength of the designs or measures used. Generally, evidence from large-scale randomized trials is more convincing than that arising from matched controls and, even more so, from controls that are not matched.[55,56] In addition, some measures are better than others, and some of those included are now considered outdated. Further, limiting the review to articles published in English may limit generalizability, although several studies included in this review were conducted outside the US.
In addition to these two contrasting conclusions, we offer four additional observations. First, many reports included insufficient details about the interventions being evaluated. At minimum, readers should be told the proportion of the target population that accepted the intervention and completed it, exactly what was provided, and the intensity of the intervention. The paucity of program information may reflect (in part) pressures from journal editors to keep manuscripts short. In this case, we recommend strongly that authors indicate in the publication that the details of the intervention are available as a separate communication or provide them in an appendix. We further suggest that program manuals include a detailed summary of program theory, the strategies and forms used for monitoring program implementation, and any procedural details necessary to replicate the program in another site.
Second, researchers need to pay more attention to the clinical significance of their findings. Intervention effects should be described in terms of their ability to change children’s psychiatric or behavioral symptom scores to below clinically significant levels. Effects should also be considered for their ability to change psychologic adjustment in other relevant areas of functioning, such as school, peer relations, and family. In our review, we noted that effect sizes were rarely provided, and some studies did not provide the information necessary to calculate these effects.
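Effect sizes of the kind discussed above can be computed only when reports publish basic summary statistics. As an illustration (ours, with hypothetical numbers, not data from any reviewed study), Cohen's d requires no more than each group's mean, standard deviation, and sample size:

```python
# Illustrative only: Cohen's d (standardized mean difference) from the
# group summary statistics that published reports would need to include.
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical symptom-scale scores: control mean 10 (SD 4), program
# group mean 8 (SD 4), 30 children per group -> d = 0.5, a medium effect.
print(cohens_d(10, 4, 30, 8, 4, 30))
```

Routinely reporting means, standard deviations, and group sizes would let readers make this calculation even when authors omit effect sizes themselves.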
A third observation is that few program evaluations were guided by theory, and it seems reasonable to assume that most programs would have been improved if they had had explicit theoretical grounding. Program theory is a concept that appears insufficiently understood by many health care researchers. Intentional interventions generally are logical attempts to accomplish planned change, but more often than not, the rationale is implicit rather than explicit. Program theory is simply the rationale or framework that justifies the program as a possible solution to the stated problem. Theory helps specify program aims and connects its operations to its effects. Such theory can guide both the overall nature and specific characteristics of the program. It may be derived from empirical and/or theoretical sources, and often incorporates a theoretical model about the presumed cause of the problem to be addressed and possible pathways to prevent or intervene in its development.
Accordingly, we recommend that clearly stated program theory should be part of every evaluation. At a minimum, authors should specify what it is about their program that would be expected to affect the outcome they are measuring, and how they expect this effect to occur. A specific rationale for an intervention program will help: (1) define possible alternative designs of effective programs; (2) suggest who or what should be the focal point of the intervention; (3) direct decisions about the nature and scope of the outcome objectives, as well as their relative weights in case of conflicting results; and (4) make for compatibility between proposed procedures and projected outcomes.
A fourth observation is that researchers rarely analyzed whether their intervention effects were uniform or whether particular subgroups benefited more than others. Interventions may be highly effective for one subgroup but not at all effective for another. Benefits of interventions may depend on a match between the intervention mode and the kind of family or patient served. Rather than asking simply whether the program works, evaluation studies should examine whether programs differ in their effects depending on child or family characteristics. This approach is challenging to implement, however, because statistical power is rarely sufficient to support subgroup analyses unless the research is specifically designed to permit them.
A fifth observation is that many studies we reviewed included interdisciplinary teams. Well-conceived and implemented evaluation studies are difficult to do. They may benefit from the diverse clinical and methodologic contributions of experts from different disciplines. Varied professional backgrounds provide different theoretical perspectives, methodological skills, and knowledge and result in a richer and more comprehensive product. Building interdisciplinary teams that include many kinds of skills may help avoid both program and research errors.
The scarcity of funding for methodologically sophisticated evaluation studies is of some concern. Large sample sizes are necessary for adequate statistical power and often require multiple sites; careful process evaluation is prudent before outcome evaluation to assure program viability; multidisciplinary teams add quality but increase budgets; and randomized controlled trials are complex and time consuming. Further, although the need for replication of successful programs is clear, enthusiasm of both funding agencies and researchers for such efforts is often low.
In sum, the evidence is clear that there are some interventions that can help children and families cope with the psychologic and social consequences of chronic health conditions. In the past, research on children with these disorders tended to focus on the identification of risk factors for maladjustment. Recently, investigators have begun to be concerned with devising programs aimed at minimizing or reversing adjustment problems. These efforts should be encouraged, but at the same time it is essential to stress that future work of this kind must include acceptable evaluations. The creative energy apparent in the current program literature should be followed by meticulous evaluation work. With health care dollars at a premium, strong and convincing evidence will be needed to justify the costs of psychosocial interventions for children with chronic illnesses and disabilities.
We are grateful to the William T. Grant Foundation and to the Maternal and Child Health Bureau for financial support; to our colleagues in the Research Consortium on Chronic Illness in Childhood for their contributions to the manuscript (Steven Gortmaker, PhD; Robert J Haggerty, MD; Paul Newacheck, DrPH; James Perrin, MD; Ruth E.K. Stein, MD; and Deborah Klein Walker, EdD); to reader Lauren Westbrook, PhD; and to Elyse Park for her assistance in preparing the manuscript.
REFERENCES
1. Haggerty R. Challenges to maternal and child health: research in the 1980’s. In: Klerman L, ed. Research Priorities in Maternal and Child Health. Report of a Conference Sponsored by Brandeis University and the Office of Maternal and Child Health. Health Services Administration, Public Health Services, US Department of Health and Human Services; 1981:321-353
2. Klerman LV. Research Priorities in Maternal and Child Health. Report of a Conference; Brandeis University; 1981; Waltham, MA
3. Newacheck PW, McManus MA, Fox HB. Prevalence and impact of chronic illness among adolescents. Am J Dis Child. 1991;145:1367-1373
4. Newacheck PW, Taylor WR. Childhood chronic illness: prevalence, severity, and impact. Am J Public Health. 1992;82:364-371
5. Pless IB, Nolan T. Revision, replication and neglect: research on maladjustment in chronic illness. J Child Psychol Psychiatry. 1991;32:347-365
6. President’s Commission on Mental Health. Report on the Task Panel on Community Support Systems, II. Washington, DC: US Government Printing Office; 1978
7. Select Panel on the Promotion of Child Health. Better Health for Our Children: A National Strategy, I-IV. Washington, DC: US Government Printing Office; 1981
8. Stein REK, ed. Caring for Children With Chronic Illness: Issues and Strategies. New York, NY: Springer Publishing Company; 1988
9. Pless IB, Pinkerton P. Chronic Childhood Disorders: Promoting Patterns of Adjustment. Chicago, IL: Year Book Medical Publishers; 1975
10. Steinhauser P, Mushin D, Rae-Grant Q. Psychological aspects of chronic illness. Pediatr Clin North Am. 1974;21:825
11. Cadman D, Boyle M, Offord D, et al. Chronic illness, disability, and mental and social well-being: findings of the Ontario Child Health Study. Pediatrics. 1987;79:805-813
12. Gortmaker SL, Walker DK, Weitzman M, Sobol AM. Chronic conditions, socioeconomic risks, and behavior problems in children and adolescents. Pediatrics. 1990;85:267-276
13. Lavigne JW, Faier-Routman J. Correlates of psychological adjustment to pediatric physical disorders: a meta-analytic review and comparison with existing models. J Dev Behav Pediatr. 1993;14:117-123
14. Wallander JL, Varni JW, Babani L, et al. Children with chronic physical disorders. J Pediatr Psychol. 1988;13:197-212
15. Pless IB, Wadsworth ME. The unresolved question: long-term psychological sequelae of chronic illness in childhood. In: Stein REK, ed. Caring for Children With Chronic Illness: Issues and Strategies. New York, NY: Springer Publishing Company; 1988
16. Affleck G, Tennen H, Rowe J. Infants in Crisis: How Parents Cope With Newborn Intensive Care and Its Aftermath. New York, NY: Springer-Verlag; 1991
17. Breslau N, Staruch K, Mortimer M. Psychological distress in mothers of disabled children. Am J Dis Child. 1982;136:682-686
18. Kronenberger W, Thompson R. Medical stress, appraised stress, and the psychological adjustment of mothers of children with myelomeningocele. J Dev Behav Pediatr. 1992;13:405-411
19. Timko C, Stovel K, Moos R. Functioning among mothers and fathers of children with juvenile rheumatic disease: a longitudinal study. J Pediatr Psychol. 1992;17:705-724
20. Wallander JL, Varni JW, Babani L, Dehann C, Wilcox KT. The social environment and the adaptation of mothers of physically handicapped children. J Pediatr Psychol. 1989;14:371-387
21. Cadman D, Boyle M, Offord D, et al. Chronic illness and functional limitation in Ontario children: findings of the Ontario Child Health Study. Can Med Assoc J. 1986;135:761-767
22. Gortmaker S, Sappenfield W. Chronic childhood disorders: prevalence and impact. Pediatr Clin North Am. 1984;31:3-18
23. Starfield B, Pless IB. Physical health. In: Brim OG, Kagan J, eds. Constancy and Change in Human Development. Cambridge, MA: Harvard University Press; 1980
24. Ireys HT. Health care for chronically disabled children and their families. In: The Select Panel on the Promotion of Child Health, Better Health for Our Children: A National Strategy. Washington, DC: US Government Printing Office; 1981:321-353
25. Kanthor H, Pless IB, Satterwhite B, Myers F. Areas of responsibility in the health care of multiply handicapped children. Pediatrics. 1974;54:779-785
26. Okamoto G, Shurtleff D. Perceived first contact care for disabled children. Pediatrics. 1981;67:530-535
27. Palfrey J, Levy JC, Gilbert KL. Use of primary care facilities by patients attending specialty clinics. Pediatrics. 1980;65:567-572
28. Stein REK. A home care program for children with chronic illness. Children’s Health Care. 1983;12:90-92
29. Coupey S, Cohen M. Special considerations for the health care of adolescents with chronic illness. Pediatr Clin North Am. 1984;31:211-220
30. Weiland S, Pless IB, Roghmann K. Chronic illness and mental health problems in pediatric practice: results from a survey of primary care providers. Pediatrics. 1992;89:445-449
31. Perrin EC, Newacheck P, Pless IB, et al. Issues involved in the definition and classification of chronic health conditions. Pediatrics. 1993;91:787-793
32. Stein REK, Bauman LJ, Westbrook LE, Coupey SM, Ireys HT. A framework for identifying children who have chronic conditions: the case for a new definition. J Pediatr. 1993;122:342-347
33. Bickman L. Using Program Theory in Evaluation: New Directions for Program Evaluation. Newbury Park, CA: Sage; 1987
34. Chen H. Theory-Driven Evaluations. Newbury Park, CA: Sage; 1990
35. Clark NM, Feldman CH, Evans D, Wasilewski Y, Levison MJ. Changes in children’s school performance as a result of education for family management of asthma. J Sch Health. 1984;54:143-145
36. Evans D, Clark NM, Feldman CH, et al. A school health education program for children with asthma aged 8 to 11 years. Health Educ Q. 1987;14:267-279
37. Katz ER, Rubinstein CL, Hubert NC, Blew A. School and social reintegration of children with cancer. J Psychosoc Oncol. 1988;6:123-140
38. Lewis M, Salas I, de la Sota A, Chiofalo N, Leake B. Randomized trial of a program to enhance the competencies of children with epilepsy. Epilepsia. 1990;31:101-109
39. Lewis MA, Hatton CL, Salas I, Leake B, Chiofalo N. Impact of the children’s epilepsy program on parents. Epilepsia. 1991;32:365-374
40. Michielutte R, Patterson RB, Herndon A. Evaluation of a home visitation program for families of children with cancer. Am J Pediatr Hematol Oncol. 1981;3:239-245
41. Nolan T, Zvagulis I, Pless IB. Controlled trial of social work in childhood chronic illness. Lancet. 1987;2:411-415
42. Parcel GS, Nader PR, Tiernan K. A health education program for children with asthma. J Dev Behav Pediatr. 1980;1:128-132
43. Perrin JM, MacLean WE, Gortmaker SL, Asher KA. Improving the psychological status of children with asthma: a randomized controlled trial. J Dev Behav Pediatr. 1992;13:241-247
44. Rakos RF, Grodek MV, Mack KK. The impact of a self-administered behavioral intervention program on pediatric asthma. J Psychosom Res. 1985;29:101-108
45. Rubin D, Leventhal JM, Sadok RT, et al. Educational intervention by computer in childhood asthma: a randomized clinical trial testing the use of a new teaching intervention in childhood asthma. Pediatrics. 1986;77:1-10
46. Silver EJ, Coupey SM, Bauman LJ, Doctors SR, Boeck M. Effects of a peer counseling training intervention on psychological functioning of adolescents. J Adolesc Res. 1992;7:110-128
47. Stein REK, Jessop DJ. Does pediatric home care make a difference for children with chronic illness? Findings from the pediatric ambulatory care treatment study. Pediatrics. 1984;73:845-853
48. Stein REK, Jessop DJ. Long term effects of a pediatric home care program. Pediatrics. 1991;88:490-496
49. Varni JW, Katz ER, Colegrove R, Dolgin M. The impact of social skills training on the adjustment of children with newly diagnosed cancer. J Pediatr Psychol. 1993;18:751-767
50.
Weiss JH, Hermalin JA. The effectiveness of a self-teaching asthma self-management training program for school age children and their families. Prev Health. 1987;3:57-88[51.] DeGroot AD. Methodology: Foundations of Inference and Research in the Social Sciences. The Hague, Netherlands: Mouton; 1969[52.] Cohen, J. A power primer. Psychol Bull. 1992;112:155-159[53.] Perrin EC, Stein REK, Drotar D. Cautions on using the Child Behavior Checklist: observations based on research about children with a chronic illness. J Pediatr Psychol. 1991;16:411-421[54.] Drotar D, Perrin EC, Stein REK. Methodological issues in using the Child Behavior Checklist and its related instruments in clinical child psychology research. J Clin Child Psychol. 1995;24:184-192[55.] Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Skokie, IL: Rand McNally; 1966[56.] Rossi PH, Freeman HE. Evaluation: A Systematic Approach. 4th ed. Newbury Park: Sage; 1989
Laurie J. Bauman, PhD*; Dennis Drotar, PhD‡; John M. Leventhal, MD§; Ellen C. Perrin, MD‖; and I. Barry Pless, MD¶
From the *Department of Pediatrics, Albert Einstein College of Medicine, Bronx, New York; ‡Department of Pediatrics, Rainbow Babies and Children's Hospital, Cleveland, Ohio; §Department of Pediatrics, Yale University School of Medicine, New Haven, Connecticut; ‖Department of Pediatrics, University of Massachusetts Medical Center, Worcester, Massachusetts; and the ¶Departments of Pediatrics, Epidemiology and Biostatistics, McGill University, Montreal, Quebec, Canada. Received for publication Jan 2, 1996; accepted Jan 2, 1997. Reprint requests to (L.J.B.) Albert Einstein College of Medicine, 1300 Morris Park Ave, NR 7 South 21, Bronx, NY 10461.
COPYRIGHT 1997 American Academy of Pediatrics