A new measurement of contemporary life stress: development, validation, and reliability of the CRISYS – Crisis in Family Systems

Madeleine U. Shalowitz

Consumers, employers, and the government are demanding greater accountability from the healthcare industry in terms of quality, cost-effectiveness, and service. The application of clinical algorithms and the measurement of health outcomes are two methods used to assess the effectiveness of healthcare systems. Optimal outcome evaluation models include risk adjustment for sick populations. Similarly, for general populations who are vulnerable (at high risk for adverse physical and psychosocial outcomes), models should include an adjustment for patients’ lives outside the treatment setting because their circumstances may affect these outcomes directly, obscure treatment effects, and interfere with treatment. To address this issue, health outcomes research often includes demographic characteristics that capture relatively stable aspects of patients’ lives, such as socioeconomic status, educational level, and ethnicity. We propose that including measures of the changing aspects of patients’ lives, called life stressors, will increase the predictive power of health outcome models. We have developed a new instrument, the Crisis in Family Systems (CRISYS), to quantify contemporary sources of life stress. In recognition of both the health risks associated with low-income status and the paucity of instruments that are appropriate for these populations, the CRISYS includes items particularly relevant to, but not limited to, people with low incomes.

A large body of literature supports a linkage between life events and health outcomes (e.g., Holmes and Rahe 1967; Paykel, Myers, and Dienelt 1969; Rabkin and Streuning 1976; Sarason, Johnson, and Siegel 1978; Rutter 1979; Kanner et al. 1981; Orr, James, and Charney 1989; Du Bois, Felner, Brand, et al. 1992). Although a number of life events indexes exist, traditional measures of life events are dated, rigid, and culturally and socioeconomically biased (e.g., Holmes and Rahe 1967; Brown and Birley 1968; Paykel 1974; Dohrenwend et al. 1978; Sarason, Johnson, and Siegel 1978; Kanner et al. 1981; Makosky 1982; Patterson and McCubbin 1983; Orr, James, and Charney 1989). These measures fail to capture a cross-section of contemporary life experiences, particularly in the measures’ representation of family issues, work issues, and exposure to violence and substance abuse. Further, their limited flexibility and range cannot represent an individual interpretation of life experiences in a statistically powerful way. We address these concerns with the CRISYS, an instrument that elicits qualitative detail in a quantitative format for ease of administration and analysis.

METHODS AND RESULTS

We report three studies involved in developing the CRISYS and establishing its psychometric properties. The respondents in all three studies were the adult primary caregivers of children residing in low-income urban areas. These children received pediatric care at an academic medical center. All instruments were administered by interview, either face-to-face or by telephone, to minimize concerns about literacy and missing data. Selected items, which included probes for qualitative detail to understand how respondents interpreted the items, aided in further scale development. Interviewers sought specific feedback from respondents on the wording and salience of existing items and probed for additional items. The Institutional Review Board approved all studies and informed consent documents.

STUDY 1: INSTRUMENT DEVELOPMENT

The purpose of Study 1 was to develop a life events measure with breadth, flexibility, and ease of administration. Instrument development proceeded in three phases. First, an extensive review of the literature on life events guided both item selection and instrument format. Second, two group discussions with eight community case managers refined and added to the item list. Finally, we conducted a pilot study with 32 members of the target community who were clients of the case management agency.

Types of Stressors. Despite the abundant literature that attempts to distinguish among day-to-day irritations, called “daily hassles,” major life stressors (episodic major crises), and chronic stressors (endemic parts of daily life) (Kanner et al. 1981; Makosky 1982), experience suggests that a cross-section of life includes all of these perturbations. In addition, the boundaries among types of stressors are not clear and may be arbitrary. Although financial concerns might be a “daily hassle” for one person, money problems might represent a major life stress for another. Each instrument, however, selectively includes or excludes such items, making an a priori decision on the magnitude or chronicity of the stressor. Because we wanted to capture the broadest possible representation of life experiences, we did not artificially categorize any event for purposes of inclusion or exclusion. Rather, we included events based on their salience to the target population.

Weighting. In an effort to capture the relative effect of events, some researchers have devised numerical weighting systems for their scale items (Holmes and Rahe 1967; Dohrenwend et al. 1978). Others have demonstrated that weighted and unweighted items perform similarly when statistical methods are applied (Ross and Mirowsky 1979; Skinner and Lei 1980; Tausig 1982; Zimmerman 1983). Further, a uniform weighting scheme applied to all respondents potentially limits predictive validity (Sarason, Johnson, and Siegel 1978). Accordingly, we chose to forgo any type of a priori weighting.

Valence and Distress. Similarly, some inventories designate events a priori as favorable or unfavorable and calculate subscales accordingly. One cannot presume, however, that the birth of a baby is a “positive” experience for a single teen mother with two or three children (Sarason, Johnson, and Siegel 1978), or that “incarceration of a family member” is “negative.” Birnbaum and Sotoodeh (1991) suggested that the distress caused by a given event is perceived by the respondent relative to the effects of other co-occurring stressors. We chose to allow respondents to rate the endorsed events along two dimensions: degree of distress (not at all difficult, a little, or a lot) and positive/negative/neutral experience of the event. In the wording of the instrument we used the term “difficulty” rather than “distress” for greater clarity for the target population.

Format. Even instrument format presents methodological concerns. Interview and checklist (“yes-no”) formats produce different results (Parry, Shapiro, and Davies 1989; Gorman 1993). Checklists are easy to administer and score, lending themselves to quantitative analysis, but traditional checklists have not been sensitive to the nuances of meaning of the events to the subject. Interviews provide rich qualitative detail but are time-consuming, expensive, and difficult to analyze. We followed the suggestions of others (Miller and Salter 1989; Raphael, Cloitre, and Dohrenwend 1991) and linked quantitative and qualitative methods. Our respondents indicated whether or not they experienced each event by responding “yes” or “no.” We used two methods to incorporate qualitative detail in a quantitative format. First, we used typical interview probes, such as “Was it difficult to go through this?” but used a Likert response format rather than an open-ended response. Second, on selected items the interviewer probed for more detail. For example, when a respondent endorsed “Was your child a victim of crime?” the interviewer followed up with the probe, “What happened?”

Time Frame. Recall of life events diminishes over time (Jenkins, Hurst, and Rose 1979; Paykel 1983; Funch and Marshall 1984). Experience led us to believe that asking subjects in our samples to recollect events that occurred more than six months prior to the interview would result in uniformly high life event scores. As a result, we elected to use a six-month recall period and to establish variability and test-retest reliability.

Breadth of Scale. The life events instrument must include events that represent the communities to be assessed and that allow for individual variation. Scales require periodic reassessment to reflect changing times (Dohrenwend et al. 1978; Funch and Marshall 1984; Hernandez 1994). To ensure that our instrument reflected a cross-section of current life experiences, we created our list of events by blending traditional items with the feedback provided through discussion groups and pilot testing within the target communities.

Item Selection. We developed an initial list of life events by using as a foundation existing measures and our knowledge of inner-city families encountered in clinical settings. We led two group discussions with community-based case managers who worked with low-income families of children who were chronically ill or disabled. Participants confirmed the appropriateness of the original items and added to the list based on their own experiences, both personal and through their contact with families. Additions included concrete concerns about the difficulty in providing food, clothing, and shelter; witnessing violence or drug activity in the neighborhood; experiencing difficulties with social service agencies and healthcare professionals; and problems with rodents and insects. The final list totaled 50 items, including many found in traditional measures (see Table 1; note that asterisked items were added following Study 2). We phrased each item to capture a single incidence rather than prevalence or recurrence.

Pilot Study. Thirty-two primary caregivers of children with chronic illness or disability completed the first version of the CRISYS. The mean number of life events was 11.3 (s.d. 6.7, range 2-27). The frequency distribution indicated considerable variance using the six-month time frame. Clarification of unexpected ratings underscored the value of allowing respondents to ascribe personal meaning to the events. Some respondents felt that “hearing violence” was a “positive” experience because it made them aware of safety issues in their neighborhoods. Some felt the word “experience” in the rating question (“Was this experience positive, negative, or neutral?”) was too vague or subjectively interpreted. The target population felt that the instrument represented their life experiences well; no additional items were suggested. The format was comfortable for interviewers and respondents.

Summary. As a result of Study 1, the CRISYS instrument consisted of 50 life events and two dimensions (distress and valence) in a format that linked quantitative and qualitative methods. In response to the vagueness of the word “experience” noted earlier, we revised the CRISYS, asking the respondents how the event “turned out.” An article published by Turner and Avison (1992) demonstrated that unresolved events contribute significantly to psychological distress while the contribution of resolved events is less convincing. As a result, we offered a third alternative to rating the outcome as positive or negative: “unresolved,” indicating an uncertain outcome at the time of the interview.

STUDY 2: VALIDATION

The purpose of Study 2 was to test the psychometric properties of the CRISYS with a large sample. First, we sought to establish the frequency with which the target population endorsed individual items. Second, we planned to investigate whether the two dimensions and an intuitive aggregation of items (content domains) enriched the yield of the instrument. Finally, we planned to assess the convergent construct validity of the CRISYS. Theory and prior research predict that a measure of the stress caused by life events will correlate substantially and positively with a measure of depression (Turner and Avison 1992; Tausig 1982; Sarason, Johnson, and Siegel 1978). Stress-buffering theory suggests that social support may lessen the relationship between depression and life events (Cohen et al. 1984; Cohen and Wills 1985; Lin, Dean, and Ensel 1986).

METHODS

Sample. A clinic at a large inner-city hospital served as the site for Study 2 for two reasons: (1) the demographics of the clinic’s population closely matched those of the respondents in Study 1; and (2) the clinic served a general pediatric population, rather than a group of children with chronic illness whose caregivers may have been under extra stress.

The CRISYS was administered to 311 adult caregivers. Most of the respondents were women (95.2 percent), African American (97.7 percent), and single (74.6 percent). The mother of the child was the most frequent respondent (89.4 percent). Most of the sample (92.3 percent) consisted of caregivers 18 years of age or older (mean age = 27). Seventy-eight percent of the sample had at least a high school education, and most (63.3 percent) were unemployed. Fifty-three percent of the respondents reported annual household incomes of $10,000 or less, and 60 percent received Aid to Families with Dependent Children (AFDC).

Measures. We selected symptoms of depression, measured by the Center for Epidemiologic Studies Depression Scale (CES-D; Radloff 1977), as an outcome measure in the validation of the CRISYS for comparison to other similarly validated instruments (Tausig 1982; Turner and Avison 1992; Orr, James, and Charney 1989). We used a measure of perceived social support, the Personal Resources Questionnaire (PRQ85-Part 2; Weinert 1987), to see if we could demonstrate a moderating effect on depressive symptoms, as other investigators have shown (Cohen et al. 1984; Cohen and Wills 1985; Lin, Dean, and Ensel 1986).

The CES-D has demonstrated validity as a measure of symptoms of depression (Radloff 1977; Roberts and Vernon 1983; Parikh et al. 1988) and is internally consistent (alphas of .85 or higher; Radloff 1977). The PRQ85 consists of two parts. The PRQ85-Part 1 identifies concrete sources of support for different areas of need. In this study, we use only the PRQ85-Part 2, which measures the respondent’s perceived level of social support and has demonstrated good validity (Brandt and Weinert 1981; Weinert 1984; Weinert and Tilden 1990) and internal consistency (alpha of .89; Brandt and Weinert 1981). The CRISYS does not duplicate any of the items on the CES-D or the PRQ85-Part 2, a problem that has plagued other researchers (Dohrenwend et al. 1984; Dohrenwend and Shrout 1985).

Procedures. Research staff recruited caregivers for participation in the interview if (1) they were in the clinic for a scheduled (not for acute illness) visit, and (2) they lived in one of the predetermined zip codes (zip codes represented by participants in Study 1). We limited the potential for illness-related stress to modify responses to the PRQ85-Part 2 or CES-D by excluding caregivers who might be under added stress because they had a sick child requiring medical attention. Research staff conducted the interviews either in the waiting area or in an examination room prior to the physician visit. After the interview, interviewers sought the respondents’ feedback on item wording, content, and instrument format.

RESULTS

Item Frequencies. In Table 1, we show the CRISYS items, the percentages of respondents who reported that each event had occurred within the prior six months, and the percentage who reported that the event had a positive outcome. Respondents showed variability in the kinds of events they reported. The event with the highest occurrence (70 percent) was hearing violence outside of the home, and the least frequent events were miscarriage and abortion (one and two percent, respectively). As most respondents were bringing infants to the clinic for well-baby visits, the infrequency of these latter events for this sample was expected.

Dimensions. The average number of events over the six-month period was 8.8 (s.d. 4.8, range 0-25). In the analysis of outcome (positive, negative, or unresolved), we grouped events with a negative outcome together with events not yet resolved, thus making the assumption that events with an uncertain outcome were negative at the time of the interview. The mean number of events whose outcome was rated positive was 3.5 (s.d. 2.7, range 0-15) and the mean number of negative or unresolved outcomes was 5.2 (s.d. 4.2, range 0-24). On average, respondents rated 43 percent of the outcomes positive, 41 percent negative, and 15 percent unresolved. Positive and negative or unresolved counts did not significantly correlate with one another.

We calculated the difficulty dimension by dividing the sum of the Likert ratings by the number of events reported. This calculation resulted in a “mean” difficulty score, taking into account the number of life events one reports (mean 1.2, s.d. 0.5, range 0-2). This mean difficulty score correlated significantly with total count (r = .36, p < .001), with events rated negative/unresolved (r = .50, p < .001), and with events rated positive (r = -.14, p < .05), indicating a relationship among these dimensions without redundancy.
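As a minimal illustration of this scoring rule (not software from the study; the variable names and the 0/1/2 coding of the three Likert responses are assumptions consistent with the reported 0-2 range), the mean difficulty score is simply the average rating over the events a respondent endorsed:

```python
def mean_difficulty(difficulty_ratings):
    """Mean difficulty across endorsed events.

    difficulty_ratings: Likert codes for each endorsed event, assumed to be
    coded 0 = not at all, 1 = a little, 2 = a lot.
    Returns 0.0 when no events were endorsed.
    """
    if not difficulty_ratings:
        return 0.0
    return sum(difficulty_ratings) / len(difficulty_ratings)

# Example: four endorsed events rated 2, 1, 0, and 2 -> mean difficulty of 1.25
print(mean_difficulty([2, 1, 0, 2]))
```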

Structure. We constructed the CRISYS to represent a broad range of events relevant to contemporary urban life in order to represent stressful experiences adequately. Since the intention was to capture breadth by sampling across a large number of event domains, we did not expect that traditional scaling techniques, such as factor analysis, would yield a distinctive structure (Tausig 1982). Furthermore, many items within the same conceptual domain tended to preclude one another in a six-month time span (e.g., an abortion and a birth, income increasing “a lot” and decreasing “a lot,” relationship breakup and marriage), thus rendering factor-analytic techniques and measures of internal consistency inappropriate. Instead, we grouped items a priori into “content domains,” as other researchers have done (Dohrenwend et al. 1978; Tausig 1982), and tested the utility of these domains.

The following ten content domains were created for the CRISYS, using 48 of the 50 items: financial issues (nine items); legal issues (three items); career (four items); relationships (five items); medical issues pertaining to the respondent (six items); medical issues pertaining to others (four items); safety in the community (five items); safety at home (two items); home issues (six items); and difficulty with authority (four items) (see Table 1 for items within each domain). Each domain score reflected the sum of the number of events reported within that domain. The items pertaining to the respondent’s use of drugs or alcohol “to get through a day” and whether the respondent’s children had “gotten into trouble” were left as individual items and were not included in any domain. These items did not fit conceptually into any of the defined ten domains, nor did they form an eleventh domain.
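A sketch of this domain scoring, with hypothetical item codes standing in for the actual Table 1 items (the CRISYS is administered by interview; this is only one way such scoring could be automated):

```python
from typing import Dict, Iterable, Set

def domain_scores(endorsed: Iterable[str],
                  domains: Dict[str, Set[str]]) -> Dict[str, int]:
    """Count endorsed events within each a priori content domain.

    endorsed: identifiers of the events a respondent reported.
    domains:  mapping from domain name to the item identifiers it contains
              (the actual assignments appear in Table 1; two items are
              deliberately excluded from every domain).
    """
    endorsed_set = set(endorsed)
    return {name: len(endorsed_set & items) for name, items in domains.items()}

# Illustrative only (hypothetical item codes):
domains = {
    "financial": {"income_dropped", "utilities_shut_off", "not_enough_food"},
    "safety_home": {"violence_in_home", "break_in"},
}
print(domain_scores({"income_dropped", "break_in", "violence_in_home"}, domains))
# -> {'financial': 1, 'safety_home': 2}
```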

Intercorrelations among the ten domains ranged from .01 (financial with medical issues pertaining to others) to .42 (p < .001; financial with safety in the home). The majority of the intercorrelations were low: 17 of the 45 intercorrelations fell between -.10 and .10 and an additional 13 fell between .10 and .20. Only four correlations were above .30 (p < .001) (financial with safety in the home: r = .42; financial with home issues: r = .36; financial with safety in the community: r = .36; and safety in the home with home issues: r = .35). The low degree of intercorrelation indicates little redundancy among the ten content domains; all ten domains represent unique sources of variance.
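A sketch of how such a domain intercorrelation matrix can be produced, here with pandas and synthetic counts (the paper does not specify its software, and these values are not the study data):

```python
import pandas as pd

# Each row is a respondent; each column is a content-domain count
# (synthetic values for illustration only).
domain_counts = pd.DataFrame({
    "financial":     [3, 1, 0, 4, 2],
    "safety_home":   [1, 0, 0, 2, 1],
    "medical_other": [0, 1, 2, 0, 1],
})

# Pairwise Pearson correlations among the domain scores; the off-diagonal
# entries correspond to the intercorrelations reported in the text.
print(domain_counts.corr())
```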

Validity. We established the face and content validity of the CRISYS by selecting items carefully, by asking respondents in a pilot sample to identify additional items for inclusion, and by substantively reviewing existing measures. In this phase, we assess the construct validity of the CRISYS by correlating the instrument with an established measure of depressive symptoms and by evaluating the stress-buffering effect of perceived social support.

The CRISYS and Depression

Thirty-four percent of respondents scored 16 or more on the CES-D (the cut-off for significant depressive symptomatology) (Radloff 1977), a level comparable to other studies of mothers with low income: 38 percent (Burns, Doremus, and Potter 1990), 32 percent (Kemper and Babonis 1992), and 49 percent (Hall 1990). The correlation between the total count of events reported in the CRISYS and the CES-D was .47 (p < .001), indicating that 22 percent of the variance in CES-D scores was accounted for by the total count of events. The correlation between the CES-D and the number of these events rated as positive was .22 (p < .001), and between the CES-D and the number of events rated negative or unresolved it was .40 (p < .001). The correlation between mean difficulty score and the CES-D was .29 (p < .001).

An analysis that regressed CES-D scores on both the counts of positively rated and negatively rated/unresolved events yielded an adjusted R² of .23 (p < .001). This percentage of variance explained in CES-D scores (23 percent) was not substantially greater than the variance explained by the total count of events alone (22 percent), although both predictors contributed significantly to the equation (p < .05). Including the mean difficulty score as a third predictor did not increase the variance explained (adjusted R² = .23, p < .001), although again all three predictors contributed significantly to the equation.
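A sketch of this kind of model comparison using ordinary least squares in statsmodels, with synthetic stand-in data (the paper does not describe its software, and the generated numbers are not the study data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 311  # sample size reported for Study 2

# Synthetic per-respondent values: counts of positively rated events,
# counts of negatively rated/unresolved events, mean difficulty, CES-D.
positive = rng.poisson(3.5, n)
negative_unresolved = rng.poisson(5.2, n)
mean_difficulty = rng.uniform(0, 2, n)
cesd = 5 + 0.5 * positive + 1.5 * negative_unresolved + rng.normal(0, 6, n)

# Model 1: CES-D on the two valence counts.
X2 = sm.add_constant(np.column_stack([positive, negative_unresolved]))
# Model 2: add mean difficulty as a third predictor.
X3 = sm.add_constant(np.column_stack([positive, negative_unresolved,
                                      mean_difficulty]))

fit2 = sm.OLS(cesd, X2).fit()
fit3 = sm.OLS(cesd, X3).fit()
print(fit2.rsquared_adj, fit3.rsquared_adj)  # compare adjusted R-squared
```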

Of the ten content domains, only the domains of career and difficulty with authority did not correlate significantly with the CES-D in bivariate analyses. The domains most strongly associated with depression were the financial domain (r = .39, p < .001) and (lack of) safety in the home (r = .35, p < .001). The single item pertaining to use of drugs or alcohol to get through a day was modestly correlated with the CES-D (r = .18, p < .01) and the single item about one’s children getting into trouble was not related to the CES-D. A multiple regression predicting CES-D scores using all ten content domains plus the two individual items yielded an adjusted R² of .28 (p < .001; see Table 2).

CRISYS, Depression, and Social Support

Stress-buffering theory predicts that the relationship between stress and symptoms of depression will differ under various levels of social support. No clinical cutoffs have been established for the adequacy of social support as measured by the PRQ85-Part 2, so we divided respondents into three groups (lowest quartile, middle two quartiles, and highest quartile) based on their scores. We regressed the CES-D on total number of events for different levels of perceived social support. Contrary to the stress-buffering hypothesis, at all three levels of social support the relationship between life events and symptoms of depression is significant.

Table 2: Regression Predicting Depression from Individual Domains and Two Individual Items

Predictor                               Beta
Financial                               .24***
Legal                                   .08
Career                                  -.01
Relationships                           .15**
Safety in the home                      .16**
Safety in the community                 -.02
Medical issues pertaining to self       .17***
Medical issues pertaining to others     .01
Home issues                             .12*
Authority                               .04
You use drugs or alcohol(****)          .12*
Child gets into trouble(****)           -.10*

R² = .31, F = 11.15, p < .001

* p < .05; ** p < .01; *** p < .001.

**** These are single items not included in domains.

To see if the effects of social support might be more complex, we regressed CES-D scores on positive event counts, negative/unresolved event counts, and mean difficulty within the three social support subgroups. The results indicate that both positive and negative events predict depressive symptomatology when perceived social support is low or middle (see Table 3). When perceived social support is high, however, only negative events are associated with increased symptoms of depression. Mean difficulty is significantly related to symptoms of depression only for those with low perceived social support. These results are consistent with stress-buffering theory, providing more evidence for the construct validity of the CRISYS. Further, these results support the unique contributions of the dimensions of the CRISYS.
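One way to carry out this kind of stratified (stress-buffering) analysis is to fit the same regression separately within each social-support stratum; the sketch below assumes numpy arrays and statsmodels, with hypothetical variable names:

```python
import numpy as np
import statsmodels.api as sm

def buffering_fits(events_pos, events_neg, difficulty, cesd, support):
    """Regress CES-D on CRISYS dimensions within social-support strata.

    All arguments are 1-D numpy arrays of equal length. Strata follow the
    grouping used in the text: lowest quartile, middle two quartiles, and
    highest quartile of PRQ85-Part 2 scores. Illustrative sketch only.
    """
    q1, q3 = np.percentile(support, [25, 75])
    strata = {
        "low": support <= q1,
        "middle": (support > q1) & (support < q3),
        "high": support >= q3,
    }
    fits = {}
    for name, mask in strata.items():
        X = sm.add_constant(np.column_stack(
            [events_pos[mask], events_neg[mask], difficulty[mask]]))
        fits[name] = sm.OLS(cesd[mask], X).fit()
    return fits  # inspect each fit's coefficients and p-values per stratum
```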

CRISYS Modification. We modified slightly the wording of 8 of the original 50 items based on feedback from participants and interviewers in Study 2. These modifications made the items more precise or inclusive. We added 13 experimental items based on suggestions made by participants. (Experimental items are denoted by asterisks in Table 1.) Four of these items pertained to experiences of prejudice in the last six months, including prejudice based on age, ethnicity, gender, or financial status. Adding these 13 items resulted in the creation of a new content domain, the prejudice domain (four items), and expanded four of the original ten content domains. One new item was not included in any content domain (trouble reading or understanding something that was important to you).

We also created two dimensions out of what had been a single dimension: the positive, negative, or unresolved rating of an event experienced in the previous six months. We separated the valence from the resolution of the event because many respondents had chosen combination answers to the single dimension, indicating especially that an event was positive although unresolved. In the new version of the scale, each event itself rather than its outcome is rated as positive, negative, or neutral. Regardless of the valence, the respondent indicates whether the event is resolved or ongoing. In addition to these changes, we have expanded the range of the Likert ratings of the difficulty of the event from three responses (not at all, a little, a lot) to seven (0 = not at all, 6 = most difficult thing I’ve ever lived through) to see whether increased variability enhances the unique value of the mean difficulty score.
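The revised response structure for a single endorsed event can be pictured as follows (a hypothetical data representation, not part of the instrument itself, which is administered by interview):

```python
from dataclasses import dataclass
from enum import Enum

class Valence(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"

@dataclass
class EventResponse:
    """One endorsed CRISYS event under the revised format: the event itself
    is rated for valence, resolution is recorded separately, and difficulty
    is rated on the expanded 0-6 scale."""
    item: str          # item identifier, e.g. "heard_violence" (hypothetical)
    valence: Valence   # rating of the event itself, not its outcome
    resolved: bool     # False = ongoing at the time of the interview
    difficulty: int    # 0 = not at all difficult ... 6 = most difficult ever

    def __post_init__(self):
        if not 0 <= self.difficulty <= 6:
            raise ValueError("difficulty must fall on the 0-6 scale")
```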

Summary. The results of Study 2 established the item frequency in our low-income sample that consisted predominantly of women who were the primary caregivers of young children. The CRISYS instrument demonstrated good face, content, and construct validity. The difficulty and valence dimensions showed some promise to enrich the usefulness of the instrument. We modified these dimensions in an effort to enhance their properties. The content domains improved the predictive power of the instrument for depressive symptoms over total event count alone. Feedback from participants and interviewers further refined item wording and offered additional items for inclusion. The revised version of the CRISYS has 63 items, 3 dimensions (valence, difficulty, and chronicity), and 11 content domains (financial, legal, career, relationships, safety in the home, safety in the community, medical issues pertaining to respondent, medical issues pertaining to others, home issues, difficulty with authority, and prejudice).

STUDY 3: RELIABILITY

The purpose of Study 3 was to establish the test-retest reliability of the CRISYS. We chose a sample of women with low income whose children were significantly malnourished (failure to thrive), a diagnosis often associated with multiple social risk factors. We reasoned that this sample would provide a more rigorous test of the reliability of the CRISYS, and that it would support the utility of the CRISYS with a particularly vulnerable population. We sought to establish test-retest reliability over a two-week period for the total count and for the three dimensions: valence, chronicity, and distress. Further, we calculated a “hit rate” for each respondent at the two time points that yielded a measure of whether respondents recalled the same events and not merely the same number of events.

METHODS

Sample. In a pediatric specialty clinic, we recruited a small convenience sample of 17 caregivers who had a working telephone to assess response to the newest version of the instrument. Power analysis indicated that this sample size was sufficient to establish test-retest reliability.(1) All but one of the respondents were women, all were African American, and they ranged in age from 21 to 58 (mean = 35). Of the 17 caregivers, 13 had completed at least high school equivalency and half reported an annual household income of less than $10,000.

Procedures. Respondents were recruited and interviewed in the clinic waiting area or in examination rooms. Trained interviewers administered the CRISYS. Approximately two weeks later, respondents completed the CRISYS again by telephone.

RESULTS

Descriptive Statistics. We report figures throughout this section on both the 50 original items of the CRISYS and the newer 63-item version. For these descriptive statistics, the numbers describing the 63-item version are in parentheses.

The sample reported an average of 8.7 (10.7) total events at the first administration and 7.9 (9.2) at the second administration, 1.9 (2.1) and 2.0 (2.2) positively rated events, 5.8 (7.1) and 4.6 (5.5) negatively rated events, and 1.1 (1.5) and 1.1 (1.3) neutral events. At the first test administration, respondents considered an average of 5.5 (6.4) events unresolved or ongoing and considered 4.5 (5.3) events unresolved at the second administration.

In the current version respondents rated the distress of each event on a seven-point scale (0 through 6). We calculated the mean distress score for both the 50-item and 63-item versions, so the potential range on the mean distress score was zero through six. At the first administration, the mean distress score was 4.3 (4.1), and it was 4.3 (3.8) at the second administration.

Test-Retest Reliability. As noted earlier, statistics measuring internal consistency were not appropriate tests for the reliability of the CRISYS. Accordingly, we assessed the test-retest reliability of various aspects of the CRISYS. Because of the time-sensitive nature of the instrument, we chose a short (two-week) recall period; that is, even if recall were perfect, we would not expect reliability over longer periods of time as events changed. We tested the reliability of the total count of events; the positive, negative, and neutral ratings; the number of ongoing/resolved events; and the mean distress scores, first using only those 50 items in the original formulation of the CRISYS and second using the 63 items of the modified version (see Table 4 for the test-retest correlation coefficients).

Test-retest reliability for the CRISYS is high overall. For both versions, the count of total events is quite reliable over a two-week period. We calculated a “hit rate” for each respondent (the number of events identified at the first and recalled at the second administration divided by the total number of events named either at the first or at the second administration). Of the 17 respondents, 15 had hit rates of 84 percent or higher; the other two had hit rates of 68 percent and 73 percent. These findings indicate that respondents mostly recalled the same events at each administration.
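The hit rate described above is simply the overlap between the two administrations divided by the union of events named at either one; a minimal sketch (with hypothetical event identifiers):

```python
def hit_rate(first_admin: set, second_admin: set) -> float:
    """Events endorsed at both administrations divided by events endorsed
    at either administration (0.0 if no events were named at all)."""
    union = first_admin | second_admin
    if not union:
        return 0.0
    return len(first_admin & second_admin) / len(union)

# Example: 8 events at time 1, 7 at time 2, 7 named both times -> 7/8 = 0.875
time1 = {"e1", "e2", "e3", "e4", "e5", "e6", "e7", "e8"}
time2 = {"e1", "e2", "e3", "e4", "e5", "e6", "e7"}
print(hit_rate(time1, time2))
```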

Table 4: Study 3 – Test-Retest Correlations for a Two-Week Time Interval

CRISYS Dimension                   50-Item Version    63-Item Version
Total                              .86***             .88***
Number of events rated positive    .69**              .53*
Number of events rated negative    .93***             .94***
Number of events rated ongoing     .87***             .89***
Mean overall distress              .32                -.08

* p < .05; ** p < .01; *** p < .001.

The number of events rated negative was highly consistent, while the numbers of positive and neutral ratings were somewhat less reliable. The count of events reported as unresolved or ongoing was reliable over the two-week period. Since the same events were recalled at both time points, these findings suggest that perceptions of “positive” and “neutral” are somewhat more fluid than “negative” ratings. The overall mean distress score was the only dimension that did not demonstrate high test-retest reliability. This may be due in part to the expanded range (0 through 6) along which respondents could rate the distress of events; during administration, interviewers noted that respondents had some trouble making such fine distinctions.

Summary. The CRISYS demonstrated high test-retest reliability, with the exception of the distress dimension. A sample of women whose children were failing to thrive (a diagnosis often associated with multiple social risk factors) was able to recall events over a two-week period with a high degree of consistency. Furthermore, the use of the telephone for the second administration of the CRISYS suggests that face-to-face interviews are not necessary.

DISCUSSION

The CRISYS (Crisis in Family Systems) instrument is a multidimensional measure of contemporary life events developed in several stages of administration and follow-up. The format is easy to use, is well accepted by respondents, and yields complete, reliable data. While initially tested with low-income urban populations, the CRISYS has the breadth, flexibility, and multidimensionality to support its use with a broader population base.

The current version of the CRISYS includes 63 items in a checklist format. It takes anywhere from 10 to 30 minutes to administer, depending on the number of events endorsed. The instrument phrases items as discrete events so that the respondent can focus on whether or not an event took place during the time period. The variability in response rates to each item supports the discrete nature of the events and the ability of the items to distinguish among individual experiences within a community. Further, the six-month reporting period generates an appropriate range of total counts of events without producing a ceiling effect.

The CRISYS yields a number of summary scores that can be useful for profiling respondents in a variety of clinical and research settings. First, the numbers of identified events are summed for a total count of life events. Eleven content domains cluster the items conceptually. In addition, one can calculate the mean degree of distress experienced; the number of events rated positive, negative, and neutral; and the number of events rated resolved or ongoing. These ratings always reflect the respondents’ points of view, and not the viewpoint of the investigator or a referent group.

The CRISYS total count of events predicts scores on a depression measure better than other researchers have found using traditional life events measures. Previous studies have found correlation coefficients ranging up to .28 (Turner and Avison 1992; Tausig 1982; Sarason, Johnson, and Siegel 1978). In contrast, the CRISYS total count correlated .47 with the CES-D, accounting for approximately 22 percent of the variance in CES-D scores. Although a direct comparison is not possible, the relationship of the CRISYS to symptoms of depression compares favorably to the relationship between daily hassles and psychological symptoms (correlation coefficients ranging from .41 through .60), and the CRISYS is much shorter to administer (Kanner et al. 1981).

The scores on the dimensions of “positive/negative/neutral” and “resolved/ongoing” show statistically significant relationships to the occurrence of symptoms of depression. The value of these scores becomes apparent in the subsamples stratified by the level of perceived social support. At low levels of perceived social support, positive and negative events, as well as mean distress, predicted symptoms of depression. At middle levels of perceived support, only positive and negative events predicted depressive symptomatology, and at high levels of support, only negative events were predictive.

The CRISYS has substantial test-retest reliability. Since a key purpose of the instrument is to describe the experiences of vulnerable populations, we purposely chose to establish test-retest reliability with a sample of the caregivers of young children at high risk for adverse physical and psychosocial outcomes. Clinically, the mothers showed a range of symptomatology (including cognitive limitations) and had a child with failure to thrive. The instrument demonstrated good reliability despite the high-risk sample and the use of a telephone interview for the second administration. Although this sample was small, it provided reasonable evidence to support the use of the CRISYS by telephone interview.

This initial validation effort focused on a particular population group that is of interest to those professionals caring for or studying families with young children. Comparisons by gender and across generations, cultures, and socioeconomic strata must follow. The experience of a group of respondents over several assessment periods would reflect changes in their lives over time. An ambitious, but worthwhile activity would establish the criterion validity of the CRISYS. This effort would entail independent corroboration of reported events (and corroboration that an event did not, in fact, happen). The construct validity of the CRISYS with outcome measures other than symptoms of depression should be assessed.

The CRISYS quantifies the occurrence and experience of life events that can then be applied to the analysis of health outcomes. The instrument measures changing aspects of respondents’ lives that typical demographic variables fail even to identify. Tapping otherwise unmeasured variables allows researchers to tease out true intervention effects, an effort that improves the predictive power of health outcome models.

Although studies must continue to evaluate the properties of the CRISYS, the instrument demonstrates usefulness as an indicator of life stressors in a research or clinical setting. Health services researchers and policy analysts may find the CRISYS useful when evaluating the success of a clinical model or a healthcare system, or the effectiveness of an insurance plan or government program. When evaluating the success (or failure) of an intervention, the CRISYS can help identify for whom the intervention does and does not work. Further, the CRISYS may suggest an additional approach, perhaps beyond traditional medical care, that would help a population group to benefit from the intervention under study. Beyond research, the CRISYS may serve as an effective screen for family needs because it identifies stressful events, the effects of the events on the respondent, and whether or not those events continue to cause concern.

ACKNOWLEDGMENTS

The authors wish to thank all of the families, case managers, and interviewers who participated in this project; Donna Hope Wegener, M.A. for her assistance in the early phases of the project; and Arthur F. Kohrman, M.D. for his continued guidance and support.

This research was supported by University of Chicago Home Health Care Grant #6-95360 and the Children’s Research Foundation Grant #6-93585. It was conducted while Dr. Shalowitz and Dr. Berry served on the faculty of the University of Chicago, Pritzker School of Medicine.

NOTE

1. Using an expected value for the correlation coefficients of .80 and setting alpha at .05 enabled the null hypothesis to be rejected with power greater than .99 with 17 respondents (Cohen 1988).

REFERENCES

Birnbaum, M. H., and Y. Sotoodeh. 1991. “Measurement of Stress: Scaling the Magnitude of Life Changes.” Psychological Science 2 (4): 236-43.

Brandt, P., and C. Weinert. 1981. “The PRQ: A Social Support Measure.” Nursing Research 30 (5): 277-80.

Brown, G. W., and J. L. Birley. 1968. “Crisis and Life Changes and the Onset of Schizophrenia.” Journal of Health and Social Behavior 9 (3): 203-14.

Burns, E. I., P. C. Doremus, and M. B. Potter. 1990. “Value of Health, Incidence of Depression, and Level of Self-Esteem in Low-Income Mothers of Preschool Children.” Issues in Comprehensive Pediatric Nursing 13: 141-53.

Cohen, J. 1988. Statistical Power for Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Cohen, L. H., J. McGowan, S. Fooskas, and S. Rose. 1984. “Positive Life Events and Social Support and the Relationship Between Life Stress and Psychological Disorder.” American Journal of Community Psychology 12 (5): 567-87.

Cohen, S., and T. A. Wills. 1985. “Stress, Social Support, and the Buffering Hypothesis.” Psychological Bulletin 98 (2): 310-57.

Dohrenwend, B. S., L. Krasnoff, A. R. Askenasy, and B. P. Dohrenwend. 1978. “Exemplification of a Method for Scaling Life Events: The PERI Life Events Scale.” Journal of Health and Social Behavior 19 (June): 205-29.

Dohrenwend, B. S., B. P. Dohrenwend, M. Dodson, and P. E. Shrout. 1984. “Symptoms, Hassles, Social Supports and Life Events: Problems of Confounded Measures.” Journal of Abnormal Psychology 93 (2): 222-30.

Dohrenwend, B. P., and P. E. Shrout. 1985. “‘Hassles’ in the Conceptualization and Measurement of Life Stress Variables.” American Psychologist 40: 780-85.

Du Bois, D. L., R. D. Felner, S. Brand, A. M. Adan, and E. G. Evans. 1992. “A Prospective Study of Life Stress, Social Support and Adaptation in Early Adolescence.” Child Development 63 (3): 542-57.

Funch, D. P., and J. R. Marshall. 1984. “Measuring Life Stress: Factors Affecting Fall-Off in the Reporting of Life Events.” Journal of Health and Social Behavior 25 (December): 453-64.

Gorman, D. M. 1993. “A Review of Studies Comparing Checklist and Interview Methods of Data Collection in Life Event Research.” Behavioral Medicine 19 (2): 66-73.

Hall, L. A. 1990. “Prevalence and Correlates of Depressive Symptoms in Mothers of Young Children.” Public Health Nursing 7 (2): 71-79.

Hernandez, D. J. 1994. “Children’s Changing Access to Resources: A Historical Perspective.” In Social Policy Report, pp. 1-23. Ann Arbor, MI: Society for Research in Child Development.

Holmes, T. H., and R. H. Rahe. 1967. “The Social Readjustment Rating Scale.” Journal of Psychosomatic Research 11 (2): 213-18.

Jenkins, C. D., M. W. Hurst, and R. M. Rose. 1979. “Life Changes: Do People Really Remember?” Archives of General Psychiatry 36 (4): 379-84.

Kanner, A. D., J. C. Coyne, C. Schaefer, and R. S. Lazarus. 1981. “Comparison of Two Modes of Stress Measurement: Daily Hassles and Uplifts Versus Major Life Events.” Journal of Behavioral Medicine 4 (1): 1-39.

Kemper, K. J., and T. R. Babonis. 1992. “Screening for Maternal Depression in Pediatric Clinics.” American Journal of Diseases in Childhood 146 (7): 876-78.

Lin, N., A. Dean, and W. Ensel. 1986. Social Support, Life Events and Depression: Center for Epidemiological Studies Depression Scale. Orlando, FL: Academic Press.

Makosky, V. P. 1982. “Sources of Stress.” In Lives in Stress, edited by D. Belle, pp. 35-53. Beverly Hills, CA: Sage Publications.

Miller, P., and D. P. Salter. 1989. “Is There a Shortcut? An Investigation into the Life Event Interview.” In Stressful Life Events, edited by T. W. Miller, pp. 149-64. Madison, CT: International Universities Press.

Orr, S. T., S. A. James, and E. Charney. 1989. “A Social Environment Inventory for the Pediatric Office.” Journal of Developmental and Behavioral Pediatrics 10 (6): 287-91.

Parikh, R. M., D. T. Eden, T. R. Price, and R. G. Robinson. 1988. “The Sensitivity and Specificity of the Center for Epidemiological Studies Depression Scale in Screening for Post-stroke Depression.” International Journal of Psychiatry in Medicine 18 (2): 169-81.

Parry, G., D. A. Shapiro, and L. Davies. 1989. “Reliability of Life-Event Ratings: An Independent Replication.” In Stressful Life Events, edited by T. W. Miller, pp. 123-26. Madison, CT: International Universities Press.

Patterson, J. P., and H. I. McCubbin. 1983. “The Impact of Family Life Events and Changes on the Health of a Chronically Ill Child.” Family Relations 32: 255-64.

Paykel, E. S. 1983. “Methodological Aspects of Life Events Research.” Journal of Psychosomatic Research 27 (5): 341-52.

—–. 1974. “Life Stress and Psychiatric Disorder: Applications of the Clinical Approach.” In Stressful Life Events: Their Nature and Effects, edited by B. S. Dohrenwend and B. P. Dohrenwend, pp. 135-49. New York: John Wiley.

Paykel, E. S., J. K. Myers, and M. N. Dienelt. 1969. “Life Events and Depression.” Archives of General Psychiatry 21 (6): 753-60.

Rabkin, J. G., and E. L. Streuning. 1976. “Life Events, Stress and Illness.” Science 194 (4269): 1013-20.

Radloff, L. S. 1977. “The CES-D Scale: A Self-Report Depression Scale for Research in the General Population.” Applied Psychological Measurement 1 (3): 385-401.

Raphael, K. G., M. Cloitre, and B. P. Dohrenwend. 1991. “Problems of Recall and Misclassification with Checklist Methods of Measuring Stressful Life Events.” Health Psychology 10 (1): 62-74.

Roberts, R. E., and S. W. Vernon. 1983. “The Center for Epidemiological Studies Depression Scale: Its Use in a Community Sample.” American Journal of Psychiatry 140 (1): 41-46.

Ross, C., and J. Mirowsky. 1979. “A Comparison of Life Event Weighting Schemes: Change, Undesirability and Effect-Proportional Indices.” Journal of Health and Social Behavior 20 (2): 166-77.

Rutter, M. 1979. “Protective Factors in Children’s Responses to Stress and Disadvantage.” In Primary Prevention of Psychopathology, Vol. 3: Social Competence in Children, edited by M. W. Kent and J. E. Rolf, pp. 49-74. Hanover, NH: University Press of New England.

Sarason, I. G., J. H. Johnson, and J. M. Siegel. 1978. “Assessing the Impact of Life Changes: Development of the Life Experiences Survey.” Journal of Consulting and Clinical Psychology 46 (5): 932-46.

Skinner, H. A., and H. Lei. 1980. “The Multi-Dimensional Assessment of Stressful Life Events.” Journal of Nervous and Mental Disease 168 (9): 535-41.

Tausig, M. 1982. “Measuring Life Events.” Journal of Health and Social Behavior 23 (March): 52-64.

Turner, R.J., and W. R. Avison. 1992. “Innovations in the Measurement of Life Stress: Crisis Theory and the Significance of Event Resolution.” Journal of Health and Social Behavior 33 (1): 36-50.

Weinert, C. 1984. “Evaluation of the PRQ: A Social Support Measure.” In Social Support and Families of Vulnerable Infants, edited by K. Barnard, P. Brandt, and B. Raff, pp. 59-97. Birth Defects, Original Article Series, 20 (5). White Plains, NY: March of Dimes.

—–. 1987. “A Social Support Measure: PRQ85.” Nursing Research 36: 273-77.

Weinert, C., and V. P. Tilden. 1990. “Measures of Social Support: Assessment of Validity.” Nursing Research 39 (4): 212-16.

Zimmerman, M. 1983. “Weighted vs. Unweighted Life Event Scores: Is There a Difference?” Journal of Human Stress 9 (December): 30-35.
