Review of the literature on survey instruments used to collect data on hospital patients’ perceptions of care

Nicholas G. Castle

Patient evaluations of hospital care can be useful to payers, regulatory bodies, accrediting agencies, hospitals, and consumers. All of these parties can use this information to gauge the quality of hospital care from the patients’ perspective (Marino, Marino, and Hayes 2000). Hospitals can use this information to identify specific areas for improvement, support strategic decision making (Sower et al. 2001), manage the expectations of patients (Hickey et al. 1996), and benchmark performance (Dull, Lansky, and Davis 1994). Ultimately, the reporting of patient evaluations can influence the delivery of care (Howard et al. 2001).

Many of the benefits of measuring and reporting patient evaluations of hospital care depend on using standardized performance information. Making adequate comparisons across hospitals clearly requires each facility to measure and report the same information. As described elsewhere in this issue (Goldstein et al. 2005), systematic efforts are underway by the Centers for Medicare and Medicaid Services (CMS) to make standardized performance information on hospitals publicly available. As part of the background for this effort, we reviewed the existing literature on survey instruments used to collect data on patients’ perceptions of hospital care. We describe and compare the format, content, and administration issues associated with these previously used survey instruments.

METHODS

Literature Search

We searched the PubMed, MEDLINE Pro, MEDSCAPE, MEDLINEplus, MDX Health, CINAHL (Cumulative Index to Nursing and Allied Health Literature), ERIC, and JSTOR databases. These searches were conducted with combinations of key words. We limited the searches to articles in English and those with abstracts. Searches returning more than 250 articles were further filtered by using terms such as “questionnaire” and “hospital.” We undertook 51 searches with each of the eight databases, for a total of 408 searches.

After the searches were conducted, the abstracts of the returned articles were examined to determine their applicability for review. Relevant studies were defined liberally as those that included any discussion of perceptions of hospital care. Articles that included a survey instrument were included in the analyses. When more than one article was identified as using the same survey instrument, all of the articles were included in the analyses; we did not restrict this review to one article per survey instrument. This approach provided more information on the instruments, such as response rates and psychometric properties.

Analyses

We identified articles that included a patient survey of hospital care for further examination. We also consulted several survey development texts (Krowinski and Steiber 1996; Cohen-Mansfield, Ejaz, and Werner 2000) to construct our approach for characterizing the hospital survey instruments.

These texts describe how to develop the content of a survey instrument, the implementation issues involved in fielding a usable survey, and how to assess an instrument’s performance. To characterize the hospital survey instruments, we followed these same general steps. First, we provide some basic information, including the name of the instrument. Second, the contents of the instruments are presented, including the number of domains used. Third, implementation characteristics associated with conducting the surveys are presented, including the sample size per facility. Fourth, performance characteristics of the instruments are presented, including the response rates and psychometric properties.

Descriptive Information

We first identified the study author(s) and the name of the survey instrument developed (if any). Some instruments were modified from preexisting instruments, or were amalgams of preexisting instruments. Details on the origins/modifications of the survey instrument are given. The setting includes the number and type of hospitals in which the study was conducted. We also identified the type of respondent from whom the instrument was designed to collect data: patients, family, or staff. The number of respondents in the study is also provided.

Instrument Content

Second, the contents of the survey instruments are further described. We note the number of items in the instrument, excluding demographic and other background questions. Patient survey instruments often classify “like” questions together; for example, capabilities of staff, staff politeness, and the caring nature of staff might be sorted into a staff “bucket” or category. These similar questions are generally referred to as “domains.” We present the number of domains included in each instrument.

In addition, we present the types of domains included in each survey instrument. We also present the type of rating scale used in the instruments (Krowinski and Steiber 1996), and categorize the response scale in terms of whether it is open-ended or close-ended, the number of close-ended response options (dichotomous or multiple categories), and the nature of the response scale. The nature of the response scale was classified as evaluation (e.g., poor, fair, good, very good, excellent), frequency (e.g., none of the time to all of the time), satisfaction (e.g., very satisfied to very dissatisfied), visual analog, or Chernoff face formats. A visual analog format (also called graphic scaling) is a pictorial scale that usually has some implied interval value (e.g., a scale from 0 to 10). Chernoff faces are pictorial representations with smiles and frowns.
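To make this coding scheme concrete, the minimal sketch below (our own illustration, not drawn from any instrument in the review; the field names are hypothetical) shows how one instrument’s response format could be recorded:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResponseScale:
    """One instrument's response format, coded per the scheme described above."""
    open_ended: bool          # open-ended vs. close-ended
    n_options: Optional[int]  # number of close-ended options (None if open-ended)
    nature: str               # "evaluation", "frequency", "satisfaction",
                              # "visual analog", or "Chernoff face"


# Example: a five-point evaluation scale (poor, fair, good, very good, excellent)
five_point_evaluation = ResponseScale(open_ended=False, n_options=5, nature="evaluation")
print(five_point_evaluation)
```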

Implementation Characteristics

Third, we present characteristics of how the survey instrument was used, that is, implementation characteristics. We note whether any information is provided as to when the instrument was given (or mailed) to respondents (e.g., 2 days after discharge). Survey initiatives can also differ in the target sample size of respondents per facility (or unit). We record these target sample sizes. We also report whether the survey was administered by in-person interview, telephone, mail, or drop-box.

In some cases, specific sample inclusions are given, for example, including only persons 18 years and older. These sample inclusions are also noted. In addition, in some cases sample restrictions are made, for example, excluding patients receiving hospice services. We record whether any such restrictions are made.

Performance Characteristics

Fourth, we document the performance characteristics of the survey instruments. These include the response rates and whether information about reliability (internal consistency, test-retest, and interrater) and construct validity is reported.

We provide information on the time needed to conduct interviews and further psychometric properties of the instruments. In the interest of space, we do not report the actual levels of reliability and validity achieved for each instrument, instrument domain, or individual question. Rather, we report whether the reliability or validity of the instrument was evaluated (yes or no). Nevertheless, we do note any unusual results (e.g., poor performance), what analyses were used (e.g., factor analysis), and whether any other instrument assessment was undertaken.
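To illustrate what lies behind one of these properties, the sketch below (ours, for illustration only; the ratings are hypothetical) computes Cronbach’s alpha, the usual measure of internal consistency, from a respondents-by-items matrix of scores:

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1 - item_variances / total_variance)


# Hypothetical 5-point "evaluation" ratings from 6 respondents on 4 items
ratings = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```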

RESULTS

The key words and results for the first nine key word searches are summarized in the on-line Appendix Table A. The first column of figures in this table shows the number of articles identified from the PubMed literature database. For example, 1,289 articles were identified in PubMed using the search term “survey and data collection protocols.” Results in subsequent columns show the number of additional articles identified using the other literature databases. For example, using this same search term (“survey and data collection protocols”), eight additional articles were identified using MEDLINE Pro. This literature search identified 246 articles, and all 246 abstracts were reviewed. From these, 84 full-length articles were subsequently examined, with 59 presenting sufficient information to be included in this review.

Descriptive Information

The descriptive characteristics of the survey instruments are shown in Table 1. The study settings are diverse, ranging from single hospitals to a system comprising 135 medical centers. The studies are also geographically diverse, coming from many regions of the U.S., Europe, and the Middle East. Likewise, the number of respondents included in these studies varied widely, from 70 to approximately 25,000. Most studies used patients as respondents, although a few assessed family members or caregivers. Twenty-six studies used mail surveys, 13 used telephone surveys, 4 used drop-boxes, and 12 used in-person interviews.

Instrument Content

Summary characteristics of the content, implementation, and performance of the survey instruments are shown in Table 2. The information is also provided by each of the major modes of survey administration (mail, telephone, drop-box, and in-person interviews). The number of items included in the instruments varied from 8 to 121. The average values show that more questions were generally asked in mail surveys (average = 45 questions) and fewer in drop-box surveys (average = 16 questions). Likewise, the number of domains varied, ranging from instruments with a single domain to as many as 14. However, the average number of domains by mode of administration seemed quite consistent at about six.

We also identified various response formats; the most common was an evaluation-type response format. The names of the domains and response formats are shown in the on-line Appendix Table B. Looking across studies, we found that the five most common domains were nursing, physicians, food, services, and care (not shown in the table).

Implementation Characteristics

The lag between discharge and mailing of the survey instrument varied from 1 week to 6 months, and in many studies (19 percent) mail surveys were sent more than 4 weeks postdischarge. Telephone surveys had a shorter lag time; among the studies for which data were available, most were conducted between 2 and 4 weeks postdischarge. The majority of studies using drop-box surveys or in-person interviews were conducted on-site prior to patient discharge. Few studies reported a target sample size for the survey instrument. Among studies that did, target sample sizes varied from 10 per department to 1,400 per hospital. The target sample size averaged 510 per hospital for mail surveys and 10 per hospital for drop-box surveys. Sample inclusions and exclusions are shown in the on-line Appendix Table C.

Performance Characteristics

Response rates varied widely, from 17 percent in one study to 92 percent in another. The average response rate was 47 percent for mail surveys, 70 percent for telephone interviews, 63 percent for drop-box surveys, and 75 percent for in-person interviews. The majority of studies provided little information on instrument reliability or validity. For example, 54 percent of studies using mail surveys provided measures of internal consistency, but only 15 percent provided measures of construct validity.

More detailed information on the performance characteristics of the survey instruments, including completion time, reliability, and validity, is provided in the on-line Appendix Table D. However, few studies provided information on the time needed to complete the instrument. For the six studies that did, the time needed to complete the instrument varied from 10 to 60 minutes.

DISCUSSION

Prior reviews of the literature on patient perceptions of hospital care have cited the existence of relatively few survey instruments (e.g., Rubin 1990). In this review we examined 59 studies providing information on 54 different survey instruments. This provides some evidence that the use of patient survey instruments addressing hospital care has increased in recent years.

In examining these survey instruments we provide details on descriptive information, instrument content, implementation characteristics, and performance characteristics. Using these same general categories, we offer a critique of the existing instruments, along with suggestions for future research.

Descriptive Information

The survey instruments varied greatly with respect to both the number of institutional settings in which they had been used and the number of patients to whom they had been administered (see Table 1). On the one hand, many survey instruments have been administered in only a few institutional settings and to a limited number of patients; on the other hand, we identified instruments that have been administered at hundreds of hospitals with thousands of patients. The SERVQUAL, Press Ganey Associates instrument, and Picker questionnaires are notable examples of survey instruments falling in the latter category.

Instrument Content

A variety of different domains of patient perceptions are represented (see Table 2 and on-line Appendix Table B). In some cases this occurs because survey instruments were developed for very specific purposes (e.g., for use in the ER). The more general instruments measuring patient perceptions of hospital care did share common domains: nursing, physicians, food, services, and care. However, the instruments differ in the level of detail of questions and number of items within these domains. This divergence in emphasis may be a consequence of the fact that many instruments were developed using expert opinion rather than patient input. Expert opinion is often confounded with clinical measures of care quality (Oermann and Templin 2000) and does not necessarily correspond with patient evaluations of care quality. Indeed, of the 54 different survey instruments we examined, 13 (24 percent) were developed using expert opinion, six (11 percent) used patient input, seven (13 percent) used both expert opinion and patient input, and for 28 survey instruments (52 percent) we could not determine how they were developed.

In future questionnaire development initiatives, consulting studies that have examined patients’ evaluations of care may be useful. The Institute of Medicine’s (IOM 1999) nine domains of care were developed from patient input and can provide useful guidelines for survey-item development. These nine domains are: respect for patients’ values; attention to patients’ preferences and expressed needs; coordination and integration of care; information, communication, and education; physical comfort; emotional support; involvement of family and friends; transition and continuity; and access to care. The CAHPS Hospital Survey domains (nurse communication, nursing services, doctor communication, physical environment, pain control, communication about medicines, and discharge information) were derived from the IOM domains (Goldstein et al. 2005). These domains derived from patient input may be influenced by cultural factors and may not apply to settings outside of the U.S. For example, in a recent adaptation of the CAHPS hospital survey for use in Dutch hospitals, some items were modified (e.g., race/ethnicity questions) and others were added (Arah et al. 2005).

It was not surprising that we identified survey instruments developed for very specific purposes (e.g., for use in the ER [Burstin et al. 1999], nuclear medicine [Harding et al. 1994], psychiatric care [Eisen et al. 2002], oncology [Bredart et al. 1999], and critical care [Conover et al. 1999]). General instruments may not be specific enough to identify areas for quality improvement in all hospital departments. Longer instruments can be advantageous because they provide more detailed information to departments, but there are limits on how many questions can be included in a survey instrument before response rates are adversely affected. An alternative to lengthening instruments is to use a brief core set of questions, followed by a series of specific questions more relevant to individual departments. States and accreditation bodies could use the core instrument to assess perceptions of care in the aggregate, and the more specific items could be used by the facility for quality improvement. However, this approach requires more sophisticated targeting to ensure that each patient receives the correct department-specific instrument, as sketched below.
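The following minimal sketch (our own, with hypothetical department and item names) illustrates the targeting step: every patient receives the core items plus the module matching their discharge department.

```python
# Core items administered to every patient, regardless of department
CORE_ITEMS = ["nursing", "physicians", "food", "services", "care"]

# Hypothetical department-specific supplements for quality improvement
DEPARTMENT_MODULES = {
    "emergency": ["wait_time_triage", "pain_assessed_promptly"],
    "oncology": ["treatment_explained", "symptom_management"],
    "psychiatry": ["felt_safe_on_unit", "discharge_plan_discussed"],
}


def build_instrument(discharge_department: str) -> list[str]:
    """Return the core questions followed by the department-specific supplement."""
    return CORE_ITEMS + DEPARTMENT_MODULES.get(discharge_department, [])


print(build_instrument("oncology"))
```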

Implementation Characteristics

Instruments measuring patient perceptions of hospital care were administered by telephone, mail, and interview; or were collected by drop-box (see Table 2 and on-line Appendix C). However, the majority of survey instruments were administered by mail. No web-based patient surveys were identified.

No agreement on when the instruments should be administered was evident. Many instruments were mailed months after patient discharge. This may have something to do with the limits of the hospital administrative databases used to construct the mailing lists. Still, a potential bias in collecting information this way is recall bias. That is, over time patients’ abilities to reliably remember their hospital care may decline (Krowinski and Steiber 1996). For example, Ley et al. (1976) found ratings of care to be less positive at 8 weeks than at 2 weeks. However, we cannot simply generalize that a shorter lag time is more beneficial. If patients’ perceptions become more or less negative as time passes, this does not necessarily mean that they are based on less reliable recollections. Recollections may be just as accurate, but the features of care patients regard as important may change over time. It may also be that additional time postdischarge gives patients additional data points to consider (e.g., regarding coordination of care and/or success of treatment) by the time they are asked to evaluate their care. In these cases, it would be reasonable for patients’ evaluations to be affected by this new information, and thus changes in evaluations associated with the passage of time may not reflect memory reliability at all.

Several studies found telephone interviews to be advantageous in terms of more rapid contact with patients and higher response rates (e.g., Woodside and Shinn 1988; Hargraves et al. 2001). However, a potential source of bias in surveys is social desirability, which leads to more positive assessments of care (Hays and Ware 1986). Social desirability may be more of a problem with telephone administration because it involves more direct contact, and it may be more difficult for the respondent to feel anonymous. In addition, telephone interviews may cost more than mail surveys.

The length of the survey instruments varied widely. As discussed above, short, very general instruments may be less useful than longer, detailed instruments. But longer instruments carry more response burden and may lower response rates. Indeed, examining the instruments in this review, we find a correlation of -.65 between response rate and number of questions.
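For illustration, the sketch below (with hypothetical data, not the actual instruments reviewed) shows how such a correlation between instrument length and response rate is computed:

```python
import numpy as np

# Hypothetical (number of items, response rate %) pairs for six instruments
n_items = np.array([12, 20, 35, 45, 60, 121])
response_rate = np.array([78, 70, 55, 50, 42, 25])

# Pearson r: covariance of the two variables divided by the product of
# their standard deviations; np.corrcoef returns the 2x2 correlation matrix
r = np.corrcoef(n_items, response_rate)[0, 1]
print(f"r = {r:.2f}")  # strongly negative for these illustrative data
```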

Performance Characteristics

One of the limitations of surveys of patient perceptions of hospital care can be low response rates (Barkley and Furse 1996). Surveys with low response rates have been reported to yield different results than those with high response rates (Barkley and Furse 1996). Our review of the literature identified both relatively high and relatively low response rates (see Table 2 and on-line Appendix Table D). Nonrespondents may have less favorable perceptions of care than respondents (Barkley and Furse 1996; Mazor et al. 2002; Elliott et al. 2005). However, very little information is often provided on how the response rates were calculated.

A related issue is the representativeness of the patients selected to receive a survey instrument. In some cases the sampling criteria used in the studies reviewed appear to have been biased (e.g., by including only patients hospitalized for 3 days or more). In other cases, the sampling criteria may be appropriate, but the precision of estimates and the power to detect differences were limited by small sample sizes. Few of the studies reviewed provided information on whether the sample was large enough to report reasonably accurate point estimates or to detect meaningful differences between units of interest at a given point in time. As Ehnfors and Smedby (1993) report, such problems in sampling can greatly influence survey results.
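To make the sample-size point concrete, a standard calculation (ours, purely illustrative) gives the number of completed surveys needed for a proportion estimate of a given precision:

```python
import math


def sample_size_for_proportion(margin: float, p: float = 0.5, z: float = 1.96) -> int:
    """Completed surveys needed so a proportion estimate has the given margin
    of error at roughly 95% confidence: n = z^2 * p * (1 - p) / margin^2.
    p = 0.5 is the conservative (worst-case) assumption."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)


# e.g., estimating the share of patients rating care "excellent" to within
# +/- 5 percentage points requires roughly 385 completed surveys
print(sample_size_for_proportion(0.05))  # 385
```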

We identified few articles providing extensive psychometric properties (see Table 2 and on-line Appendix Table D). In many studies, even basic psychometric properties were not reported. This is important because poor survey instruments “… act as a form of censorship imposed on patients. They give misleading results, limit the opportunity of patients to express their concerns about different aspects of care, and can encourage professionals to believe that patients are satisfied when they are highly discontented” (Whitfield and Baker 1992, p. 152).

CONCLUSION

The plethora of survey instruments measuring patient perceptions of hospital care is heartening, but the advantages of a standardized core instrument cannot be realized when many different instruments are used. For example, benchmarking and report cards facilitating consumer choice may be impeded. Our review clearly shows that there is a variety of approaches regarding the instrument domains, how they are measured, and when perceptions of care are elicited. We conclude that a standardized instrument would be beneficial. Moreover, our results show that it may also be beneficial to standardize the sampling, administration protocol, and mode of administration of survey instruments.

ACKNOWLEDGMENTS

This work was supported by grant number 5 U18 HS00924 from the Agency for Healthcare Research and Quality.

REFERENCES

Abramowitz, S., A. A. Cote, and E. Berry. 1987. “Analyzing Patient Satisfaction: A Multianalytic Approach.” Quality Review Bulletin 13: 122-30.

Applebaum, R. A., J. K. Straker, and S. M. Geron. 2000. Assessing Satisfaction in Health and Long-Term Care: Practical Approaches to Hearing the Voices of Consumers. New York: Springer Publishing Company.

Arah, O. A., G. H. Asbroek, D. M. Delnoij, J. S. de Koning, P. Stam, A. Poll, B. Vriens, P. Schmidt, and N. S. Klazinga. 2005. “Psychometric Properties of the Dutch Version of the Hospital-Level Consumer Assessment of Health Plans Study (CAHPS) Instrument.” Health Services Research.

Arnetz, J. E., and B. B. Arnetz. 1996. “The Development and Application of a Patient Satisfaction Measurement System for Hospital-Wide Quality Improvement.” International Journal for Quality in Health Care 8: 555-66.

Barkley, W. M., and D. H. Furse. 1996. “Changing Priorities for Improvement: The Impact of Low Response Rates in Patient Satisfaction.” Journal on Quality Improvement 22: 427-33.

Bell, R., M. J. Krivich, and M. S. Boyd. 1997. “Charting Patient Satisfaction.” Marketing Health Services 17: 22-9.

Bredart, A., D. Razavi, C. Robertson, S. Brignone, D. Fonzo, J. Y. Petit, and J. M. de Haes. 2002. “Timing of Patient Satisfaction Assessment: Effect on Questionnaire Acceptability, Completeness of Data, Reliability and Variability of Scores.” Patient Education and Counseling 46: 131-6.

Bredart, A., D. Razavi, C. Robertson, F. Didier, E. Scaffidi, and J. M. de Haes. 1999. “A Comprehensive Assessment of Satisfaction with Care: Preliminary Psychometric Analysis in an Oncology Institute in Italy.” Annals of Oncology 10: 839-46.

Bruster, S., B. Jarman, N. Bosanquet, D. Weston, R. Erens, and T. L. Delbanco. 1994. “National Survey of Hospital Patients.” British Medical Journal 309: 1542-6.

Burroughs, T. E., A. R. Davies, J. C. Cira, and W. C. Dunagan. 1999. “Understanding Patient Willingness to Recommend and Return: A Strategy for Prioritizing Improvement Opportunities.” Joint Commission Journal on Quality Improvement 25: 271-87.

Burstin, H. R., A. Conn, G. Setnik, D. W. Rucker, P. D. Cleary, A. C. O’Neil, E. J. Orav, C. M. Sox, T. A. Brennan, and the Harvard Emergency Department Quality Study Investigators. 1999. “Benchmarking and Quality Improvement: The Harvard Emergency Department Quality Study.” American Journal of Medicine 107: 437-49.

Camilleri, D., and M. O’Callaghan. 1998. “Comparing Public and Private Hospital Care Service Quality.” International Journal of Health Care Quality Assurance 11: 127-33.

Candlish, P., P. Watts, S. Redman, P. Whyte, and J. Lowe. 1998. “Elderly Patients with Heart Failure: A Study of Satisfaction with Care and Quality of Life.” International Journal for Quality in Health Care 10: 141-6.

Carman, J. M. 2000. “Patient Perceptions of Service Quality: Combining the Dimensions.” Journal of Management in Medicine 14: 339-56.

Charles, C., M. Gauld, L. Chambers, B. O’Brien, R. B. Haynes, and R. Labelle. 1994. “How Was Your Hospital Stay? Patients’ Reports about Their Care in Canadian Hospitals.” Canadian Medical Association Journal 150: 1813-22.

Chou, S., and D. Boldy. 1999. “Patient Perceived Quality of Care in Hospital in the Context of Clinical Pathways: Development of Approach.” Journal of Quality in Clinical Practice 19: 89-93.

Cleary, P. D., S. Edgman-Levitan, M. Roberts, T. W. Moloney, W. McMullen, J. D. Walker, and T. L. Delbance. 1991. “Patients Evaluate Their Hospital Care: A National Survey.” Health Affairs 10 (4): 254-67.

Cleary, P. D., L. Keroy, G. Karapanos, and W. McMullen. 1989. “Patient Assessments of Hospital Care.” Quality Review Bulletin 15: 172-9.

Cohen, G., J. Forbes, and M. Garraway. 1996. “Can Different Patient Satisfaction Survey Methods Yield Consistent Results? Comparison of Three Surveys.” British Medical Journal 313: 841-4.

Cohen-Mansfield, J., F. Ejaz, and P. Werner. 2000. Satisfaction Surveys in Long-Term Care. New York: Springer Publishing Company, Inc.

Conover, C. J., M. L. Mah, P. J. Rankin, and F. A. Sloan. 1999. “The Impact of Tenn Care on Patient Satisfaction with Care.” American Journal of Managed Care 5: 765-75.

Coulter, A., and P. D. Cleary. 2001. “Patients’ Experiences with Hospital Care in Five Countries.” Health Affairs 20: 244-52.

Covinsky, K. E., G. E. Rosenthal, M. Chren, A. C. Justice, R. H. Fortinsky, R. M. Palmer, and C. S. Landefeld. 1998. “The Relation between Health Status Changes and Patient Satisfaction in Older Hospitalized Medical Patients.” Journal of General Internal Medicine 13: 223-9.

Coyle, J., and B. Williams. 2001. “Valuing People as Individuals: Development of an Instrument through a Survey of Person-Centeredness in Secondary Care.” Journal of Advanced Nursing 36: 450-9.

Deeks, P. A., and K. Byatt. 2000. “Are Patients Who Self-Administer Their Medicines in Hospital More Satisfied with Their Care?” Journal of Advanced Nursing 31: 395-400.

Dozier, A. M., H. Kitzman, G. L. Ingersoll, S. Holmberg, and A. W. Schultz. 2001. “Development of an Instrument to Measure Patient Perception of the Quality of Nursing Care.” Research in Nursing and Health 24: 506-17.

Draper, M., P. Cohen, and H. Buchan. 2001. “Seeking Consumer Views: What Use Are Results of Hospital Patient Satisfaction Surveys?” International Journal for Quality in Health Care 13: 463-8.

Duff, L., D. Lamping, and L. Ahmed. 2001. “Evaluating Satisfaction with Maternity Care in Women from Minority Ethnic Communities: Development and Validation of a Sylheti Questionnaire.” International Journal for Quality in Health Care 13: 215-30.

Dull, V. T., D. Lansky, and N. Davis. 1994. “Evaluating a Patient Satisfaction Survey for Maximum Benefit.” Joint Commission Journal on Quality Improvement 20: 444-53.

Ehnfors, M., and B. Smedby. 1993. “Patient Satisfaction Surveys Subsequent to Hospital Care: Problems of Sampling, Non-Response and Other Losses.” Quality Assurance in Health Care 5: 19-32.

Eisen, S. V., M. Wilcox, T. Idiculla, A. Speredelozzi, and B. Dickey. 2002. “Assessing Consumer Perceptions of Inpatient Psychiatric Treatment: The Perceptions of Care Survey.” Joint Commission Journal on Quality Improvement 28: 510-26.

Elliott, M. N., C. Edwards, J. Angeles, K. Hambarsoomians, and R. D. Hays. 2005. “Patterns of Unit and Item Non-Response in the CAHPS[R] Hospital Survey.” Health Services Research DOI: 10.1111/j.1475-6773.2005.00476.x. Available at www.blackwell-synergy.com.

Gasquet, I., B. Falissard, and P. Ravaud. 2001. “Impact of Reminders and Method of Questionnaire Distribution on Patient Response to Mail-Back Satisfaction Survey.” Journal of Clinical Epidemiology 54: 1174-80.

Goldstein, M. S., S. D. Elliott, and A. A. Guccione. 2000. “The Development of an Instrument to Measure Satisfaction with Physical Therapy.” Physical Therapy 80: 853-63.

Goldstein, L., M. B. Farquhar, C. Crofton, S. Garfinkel, and C. Darby. 2005. “Why Another Patient Survey of Hospital Care?” Health Services Research DOI: 10.1111/j.1475-6773.2005.00477.x. Available at www.blackwell-synergy.com.

Goupy, F., O. Ruhlmann, O. Paris, and B. Thelot. 1991. “Results of a Comparative Study of In-Patient Satisfaction in Eight Hospitals in the Paris Region.” Quality Assurance in Health Care 3: 309-15.

Grimmer, K., and J. Moss. 2001. “The Development, Validity and Application of a New Instrument to Assess the Quality of Discharge Planning Activities from the Community Perspective.” International Journal for Quality in Health Care 13: 109-16.

Gustafson, D. H., N. K. Arora, E. C. Nelson, and E. W. Boberg. 2001. “Increasing Understanding of Patient Needs during and after Hospitalization.” Joint Commission Journal on Quality Improvement 27: 81-92.

Guzman, P. M., E. M. Sliepcevich, E. P. Lacey, E. M. Vitello, M. R. Matten, P. L. Woehlke, and W. R. Wright. 1988. “Tapping Patient Satisfaction: A Strategy for Quality Assessment.” Patient Education and Counseling 12: 225-33.

Hall, M. F. 1995. “Patient Satisfaction or Acquiescence? Comparing Mail and Telephone Survey Results.” Journal of Health Care Marketing 15: 54-61.

Harding, L. K., J. Griffith, V. M. Harding, N. J. Tulley, A. Notghi, and W. H. Thomson. 1994. “Closing the Audit Loop: A Patient Satisfaction Survey.” Nuclear Medicine Communications 15: 275-8.

Hargraves, J. L., I. B. Wilson, A. Zaslavsky, C. James, J. D. Walker, G. Rogers, and P. D. Cleary. 2001. “Adjusting for Patient Characteristics When Analyzing Reports from Patients about Hospital Care.” Medical Care 39: 635-41.

Hays, R. D., C. Larson, E. C. Nelson, and P. B. Batalden. 1991. “Hospital Quality Trends: A Short-Form Patient-Based Measure.” Medical Care 29 (7): 661-8.

Hays, R. D., E. C. Nelson, C. O. Larson, and P. B. Batalden. 1994. “Short-Form Measures of Physician and Employee Judgments about Hospital Quality.” Journal on Quality Improvement 20 (2): 66-77.

Hays, R., and J. E. Ware. 1986. “Social Desirability and Patient Satisfaction Ratings.” Medical Care 24: 519-25.

Hickey, M. L., S. F. Kleefield, S. D. Pearson, S. M. Hassan, M. Harding, P. Haughie, T. H. Lee, and T. A. Brennan. 1996. “Payer-Hospital Collaboration to Improve Patient Satisfaction with Hospital Discharge.” Joint Commission Journal on Quality Improvement 22: 336-44.

Hiidenhovi, H., P. Laippala, and K. Nojonen. 2001. “Development of a Patient-Oriented Instrument to Measure Service Quality in Outpatient Departments.” Journal of Advanced Nursing 34: 696-705.

Hiidenhovi, H., K. Nojonen, and P. Laippala. 2002. “Measurement of Outpatients’ Views of Service Quality in a Finnish University Hospital.” Journal of Advanced Nursing 38: 59-67.

Hoff, R. A., R. A. Rosenheck, M. Meterko, and N. J. Wilson. 1999. “Mental Illness as a Predictor of Satisfaction with Inpatient Care at Veterans Affairs Hospitals.” Psychiatric Services 50: 680-5.

Horne, R., M. Hankins, and R. Jenkins. 2001. “The Satisfaction with Information about Medicines Scale SIMS: A New Measurement Tool for Audit and Research.” Quality in Health Care 10: 135-40.

Hoskins, E. J., F. A. A. Noor, and S. A. F. Ghasib. 1994. “Implementing TQM in a Military Hospital in Saudi Arabia.” Joint Commission Journal on Quality Improvement 20: 454-64.

Howard, P. B., J. J. Clark, M. K. Rayens, V. Hines-Martin, P. Weaver, and R. Littrell. 2001. “Consumer Satisfaction with Services in a Regional Psychiatric Hospital: A Collaborative Research Project in Kentucky.” Archives of Psychiatric Nursing 15: 10-23.

Jamison, R. N., M. J. Ross, P. Hoopman, F. Griffin, J. Levy, M. Daly, and J. L. Schaffer. 1997. “Assessment of Postoperative Pain Management: Patient Satisfaction and Perceived Helpfulness.” Clinical Journal of Pain 13: 229-36.

Jenkinson, C., A. Coulter, and S. Bruster. 2002. “The Picker Patient Experience Questionnaire: Development and Validation Using Data from In-Patient Surveys in Five Countries.” International Journal of Quality and Health Care 14: 353-8.

Jenkinson, C., A. Coulter, S. Bruster, N. Richards, and T. Chandola. 2002. “Patients’ Experiences and Satisfaction with Health Care: Results of a Questionnaire Study of Specific Aspects of Care.” Quality and Safety of Health Care 11: 335-9.

John, J. 1992. “Getting Patients to Answer: What Affects Response Rates?” Journal of Health Care Marketing 12: 46-51.

John, J. 1992. “Patient Satisfaction: The Impact of Past Experience.” Journal of Health Care Marketing 12: 56-64.

Johnson, T. R. 2000. “Family Matters: A Quality Initiative through the Patient’s Eyes.” Journal of Nursing Care Quality 14: 64-71.

Ketefian, S., R. Redman, M. G. Nash, and E. L. Bogue. 1997. “Inpatient and Ambulatory Patient Satisfaction with Nursing Care.” Quality Management in Health Care 5: 66-75.

Krowinski, W. J., and S. R. Steiber. 1996. Measuring and Managing Patient Satisfaction. American Hospital Publishing Inc.

Lanford, A., R. Clausen, J. Mulligan, C. Hollenback, S. Nelson, and V. Smith. 2001. “Measuring and Improving Patients’ and Families’ Perceptions of Care in a System of Pediatric Hospitals.” Joint Commission Journal on Quality Improvement 27: 415-29.

Larrabe, J. H., and L. V. Bolden. 2001. “Defining Patient-Perceived Quality of Nursing Care.” Journal of Nursing Care Quality 16: 34-60.

Larsson, B. W. 1999. “Patients’ Views on Quality of Care: Age Effects and Identification of Patient Profiles.” Journal of Clinical Nursing 8: 693-700.

Larsson, G., B. W. Larsson, and I. M. E. Munck. 1998. “Refinement of the Questionnaire ‘Quality of Care from the Patient’s Perspective’ Using Structural Equation Modelling.” Scandinavian Journal of Caring Science 12: 111-8.

Lehmann, L. S., F. L. Brancati, M. C. Chen, D. Roter, and A. S. Dobs. 1997. “The Effect of Bedside Case Presentations on Patients’ Perceptions of Their Medical Care.” New England Journal of Medicine 336: 1150-5.

Ley, P., P. W. Bradshaw, J. A. Kincey, and S. T. Atherton. 1976. “Increasing Patients’ Satisfaction with Communications.” British Journal of Social and Clinical Psychology 15: 403-13.

Lohr, K. N., M. S. Donaldson, and A. J. Walker. 1991. “Medicare: A Strategy for Quality Assurance, III: Beneficiary and Physician Focus Groups.” Quality Review Bulletin 17: 242-53.

Longo, D. R., G. Land, W. Schramm, J. Fraas, B. Hoskins, and V. Howell. 1997. “Consumer Reports in Health Care: Do They Make a Difference in Patient Care?” Journal of the American Medical Association 278: 1579-84.

Marino, B. L., E. K. Marino, and J. S. Hayes. 2000. “Parents’ Report of Children’s Hospital Care: What It Means for Your Practice.” Pediatric Nursing 26: 195-8.

Mazor, K. M., B. E. Clauser, T. Field, R. A. Yood, and J. H. Gurwitz. 2002. “A Demonstration of the Impact of Response Bias on the Results of Patient Satisfaction Surveys.” Health Services Research 37: 1403-17.

McDaniel, C., and J. G. Nash. 1990. “Compendium of Instruments Measuring Patient Satisfaction with Nursing Care.” Quality Review Bulletin 16: 182-8.

McNeill, J. A., G. D. Sherwood, P. L. Stark, and B. Nieto. 2001. “Pain Management Outcomes for Hospitalized Hispanic Patients.” Pain Management in Nursing 2: 25-36.

Merakou, K., P. Dalla-Vorgia, T. Garania-Papadatos, and J. Kourea-Kremastinou. 2001. “Satisfying Patient’s Rights.” Nursing Ethics 8: 499-509.

Meterko, M., E. Nelson, and H. Rubin. 1990. “Patient Judgments of Hospital Quality. Report of a Pilot Study.” Medical Care 28: 1-56.

Mishra, D. P., J. Singh, and V. Wood. 1991. “An Empirical Investigation of Two Competing Models of Patient Satisfaction.” Journal of Ambulatory Care Marketing 4: 17-36.

Mokhtar, S. A., W. Guirguis, M. Al-Turkey, and A. Khalaf. 1991. “Patient Satisfaction with Hospital Services: Development and Testing of a Measuring Instrument.” Journal of the Egyptian Public Health Association 66: 693-720.

Oermann, M. H., and T. Templin. 2000. “Important Attributes of Quality of Health Care: Consumer Perspectives.” Journal of Nursing Scholarship 32: 167-72.

Oz, M. C., J. Zikria, C. Mutrie, J. P. Slater, C. Scott, S. Lehman, M. W. Connolly, D. T. Asher, W. Ting, and P. G. Namerow. 2001. “Patient Evaluation of the Hotel Function of Hospitals.” The Heart Surgery Forum 4: 166-71.

Roberts, J. G., and P. Tugwell. 1987. “Comparison of Questionnaires Determining Patient Satisfaction with Medical Care.” Health Services Research 22: 637-54.

Rogers, G., and D. P. Smith. 1999. “Reporting Comparative Results from Hospital Patient Surveys.” International Journal for Quality in Health Care 11: 251-9.

Rosenheck, R., N. Wilson, and M. Meterko. 1997. “Influence of Patient and Hospital Factors on Consumer Satisfaction with Inpatient Mental Health Treatment.” Psychiatric Services 12: 1553-61.

Rosenthal, G. E., and D. L. Harper. 1994. “Cleveland Health Quality Choice: A Model for Collaborative Community-Based Outcomes Assessment.” Joint Commission Journal on Quality Improvement 20: 425-42.

Rosenthal, G. E., P. J. Hammar, L. E. Way, S. A. Shipley, D. Doner, B. Wojtala, J. Miller, and D. L. Harper. 1998. “Using Hospital Performance Data in Quality Improvement: The Cleveland Health Quality Choice Experience.” Journal on Quality Improvement 24: 347-60.

Rubin, H. R. 1990. “Patient Evaluations of Hospital Care: A Review of the Literature.” Medical Care 28: S3-9.

Shannon, S. E., P. H. Mitchell, and K. C. Cain. 2002. “Patients, Nurses, and Physicians Have Differing Views of Quality of Critical Care.” Journal of Nursing Scholarship 34: 173-9.

Simon, S. E., and A. Patrick. 1997. “Understanding and Assessing Consumer Satisfaction in Rehabilitation.” Journal of Rehabilitation Outcomes 1: 1-14.

Simon, S. R., T. H. Lee, L. Goldman, A. L. McDonough, and S. D. Pearson. 1998. “Communication Problems for Patients Hospitalized with Chest Pain.” Journal of General Internal Medicine 13: 836-8.

Sower, V., J. Duffy, W. Kilbourne, G. Kohers, and P. Jones. 2001. “The Dimensions of Service Quality for Hospitals: Development and Use of the KQCAH Scale.” Health Care Management Review 2: 47-59.

Stamps, P. L., and E. H. Lapriore. 1987. “Measuring Patient Satisfaction in a Community Hospital.” Hospital Topics 65: 22-6.

Stratmann, W. C., T. R. Zastowny, L. R. Bayer, E. H. Adams, G. S. Black, and P. A. Fry. 1994. “Patient Satisfaction Surveys and Multicollinearity.” Quality Management in Health Care 2: 1-12.

Sun, B., J. Adams, and H. Burstin. 2001. “Validating a Model of Patient Satisfaction with Emergency Care.” Annals of Emergency Medicine 38: 527-32.

Thi, P. L., S. Briancon, F. Empereur, and F. Guillemin. 2002. “Factors Determining Inpatient Satisfaction with Care.” Social Science and Medicine 54: 493-504.

Thomas, L. H., and S. Bond. 1996. “Measuring Patient Satisfaction with Nursing.” Journal of Advanced Nursing 23: 747-56.

Ware, J. E., and D. M. Berwick. 1990. “Conclusions and Recommendations.” Medical Care 28: S39-44.

Ware, J. E., and R. D. Hays. 1988. “Methods for Measuring Patient Satisfaction with Specific Medical Encounters.” Medical Care 26: 393-402.

Weaver, M. J., C. L. Ow, D. J. Walker, and E. F. Degenhardt. 1993. “A Questionnaire for Patients’ Evaluations of Their Physicians’ Humanistic Behaviors.” Journal of General Internal Medicine 8: 135-9.

Welton, R., and R. Parker. 1999. “Study of the Relationships of Physical and Mental Health to Patient Satisfaction.” Journal for Healthcare Quality 21: 39-46.

Whitfield, M., and R. Baker. 1992. “Measuring Patient Satisfaction for Audit in General Practice.” Quality in Health Care 3: 151-3.

Wilson, I. B., L. Ding, R. D. Hays, M. F. Shapiro, S. A. Bozzette, and P. D. Cleary. 2002. “HIV Patients’ Experiences with Inpatient and Outpatient Care: Results of a National Survey.” Medical Care 40: 1149-60.

Woodbury, D., D. Tracy, and E. McKnight. 1998. “Does Considering Severity of Illness Improve Interpretation of Patient Satisfaction Data?” Journal for Healthcare Quality 20: 33-40.

Woodside, A., and R. Shinn. 1988. “Customer Awareness and Preferences toward Competing Hospital Services.” Journal of Health Care Marketing 8: 39-47.

Zifko-Baliga, G. M., and R. F. Krampf. 1997. “Managing Perceptions of Hospital Quality.” Marketing Health Services 17: 28-35.

SUPPLEMENTARY MATERIAL

The following supplementary material for this article is available online:

APPENDIX A. Results of Literature Search (1980-2003).

APPENDIX B. Content Characteristics of Instruments Collecting Patient Perceptions of Hospital Care.

APPENDIX C. Implementation Characteristics of Instruments Collecting Patient Perceptions of Hospital Care.

APPENDIX D. Performance Characteristics of Instruments Collecting Patient Perceptions of Hospital Care.

Address correspondence to Nick Castle, Ph.D., A649 Crabtree Hall, Graduate School of Public Health, 130 DeSoto Street, Pittsburgh, PA 15261. Julie Brown, M.S. and Kimberly A. Hepner, Ph.D., are with RAND, Santa Monica, CA 90407-2138. Ron D. Hays, Ph.D., is with the UCLA Department of Medicine, Los Angeles, CA 90095-1736.

Table 1: Descriptive Characteristics of Instruments Collecting Patient Perceptions of Hospital Care

Author(s) | Name of Instrument | Origins or Modification of Instrument | Setting | Respondent | Number of Respondents in Study
Stamps and Lapriore (1987) | None | | Small community hospital (72 beds) | Patient | 130
Abramowitz, Cote, and Berry (1987) | None | | Teaching hospital with 900 beds | Patient | 841
Barkley and Furse (1996) | NCG patient viewpoint survey | | 76 medium to large, nonprofit, community and teaching hospitals | Patient | 19,556
Bredart et al. (1999) | Comprehensive assessment of satisfaction with care | | Oncology institute in Italy | Patient | 290
Bruster et al. (1994) | Patient’s charter | | 36 hospitals in England | Patients | 5,150
Burroughs et al. (1999) | None | | One health system | Patient | 7,083
Burstin et al. (1999) | None | | Five urban teaching hospital emergency departments | Patient | 3,719
Camilleri and O’Callaghan (1998) | None | Questionnaire design was based on SERVQUAL | Private and public hospitals in Malta | Patient | N/G
Candlish et al. (1998) | Patient needs questionnaire | | Two hospitals in Australia | Patient | 148
Carman (2000) | None | Dimensions reliable in previous studies | One hospital | Patient | 298
Charles et al. (1994) | None | Adapted from Cleary et al. (1991) | 57 public acute care hospitals (Canada) | Patient | 4,599
Cleary et al. (1989) | None | | Brigham and Women’s Hospital | Patient | 598
Conover et al. (1999) | None | Medical Outcomes Study questions | 13 hospitals in Tennessee and 10 hospitals in North Carolina | Patient | 1,691
Covinsky et al. (1998) | None | Adapted from Ware and Hays (1988) | University Hospitals of Cleveland | Patient | 445
Coyle and Williams (2001) | None | | Major teaching hospital in Scotland | Patient | 97
Deeks and Byatt (2000) | None | | Teaching hospital (U.K.) | Patient | 152
Dozier et al. (2001) | Patient perception of hospital experience with nursing | | Ten hospitals | Patient | 1,148
Duff, Lamping, and Ahmed (2001) | Bangladeshi women’s experience of maternity services | | Four hospitals in London (U.K.) | Patient | 136
Eisen et al. (2002) | Perceptions of care (PoC) survey | | 14 inpatient behavioral health and substance abuse programs | Patient | 6,972
Gasquet, Falissard, and Ravaud (2001) | None | | Public teaching, short-stay hospital for adults (Paris, France) | Patient | 482
Goupy et al. (1991) | None | | Eight hospitals in France | Patient | 7,066
Grimmer and Moss (2001) | Prescriptions, ready to re-enter community, education placement, assurance of safety, realistic expectations, empowerment, directed to appropriate services (PREPARED) | | One large tertiary public hospital in Adelaide (Australia) | Patient, caregiver | 500 (patient), 431 (caregiver)
Gustafson et al. (2001) | None | | Three community hospitals | Patient | 91
Guzman et al. (1988) | The patient satisfaction questionnaire | | One 150-bed, not-for-profit, community and teaching hospital | Patient (or representative) | 2,156
Harding et al. (1994) | None | | One hospital | Patient | 200
Hargraves et al. (2001) | None | Adapted from Cleary et al. (1991) | 22 regional hospitals and 51 in a health system in one state | Patients | 12,726 (regional), 12,680 (state)
Hays et al. (1994) | Short-form physician judgment system questionnaire | | 44 hospitals owned by the Hospital Corporation of America | Physician | 3,435
Hays et al. (1994) | Short-form employee system questionnaire | | 44 hospitals owned by the Hospital Corporation of America | Employees | 17,315
Hiidenhovi, Nojonen, and Laippala (2002) | None | | One hospital in Finland | Patients | 7,679
Hoff et al. (1999) | None | Picker Institute questions | VA medical centers | Patients | 38,789
Horne et al. (2001) | Satisfaction with information about medicines scale | | Hospitals in London and Brighton (U.K.) | Patient | N/G
Howard et al. (2001) | Kentucky consumer satisfaction instrument | | Public psychiatric hospital | Patient | 189
Jamison et al. (1997) | Patient discharge questionnaire | | University-based tertiary hospital | Patient | 119
John (1992) | None | | Three hospitals | Patient | 353
Ketefian et al. (1997) | None | | One medical center | Patient | 619
Lanford et al. (2001) | Picker Institute pediatric inpatient survey | | 20 hospitals | Family | 4,872 (year 1), 14,518 (year 2)
Larsson (1999) | Quality of care from the patient’s perspective | | Three Swedish county hospitals | Patient | 1,056
Larsson, Larsson, and Munck (1998) | Quality of care from the patient’s perspective | | Swedish hospital | Patient | 611
Marino, Marino, and Hayes (2000) | None | | One hospital | Family | 3,676
McNeill et al. (2001) | American pain society patient outcome questionnaire | | One 400-bed regional hospital | Patient | 104
Meterko, Nelson, and Rubin (1990) | Patient judgments of hospital quality (PJHQ) questionnaire | | Ten hospitals | Patient | 1,367
Mokhtar et al. (1991) | None | | One general hospital in Kuwait | Patient | 493
Oz et al. (2001) | None | | 11 hospitals within 60 miles of NYC | Patient | 261
Rogers and Smith (1999) | Picker-commonwealth survey of patient-centered care | | 50 hospitals in Massachusetts | Patient | 12,680
Rosenheck, Wilson, and Meterko (1997) | None | Derived from Picker Institute | 135 Veterans Administration medical centers | Patient | 4,968
Shannon, Mitchell, and Cain (2002) | Medicus viewpoint | | 25 critical care units in 14 hospitals | Patients, nurses, and physicians | 489 (patients), 518 (nurses), 515 (physicians)
Simon et al. (1998) | Picker-commonwealth survey of patient-centered care | Physician-patient communication questions | Brigham and Women’s Hospital | Patient | 637
Sower et al. (2001) | Key quality characteristics assessment for hospitals scale | | 3 hospitals | Patient | 663
Stamps and Lapriore (1987) | None | | Small community hospital (72 beds) | Patient | 130
Thi et al. (2002) | Patient judgments of hospital quality questionnaire | | One hospital in France | Patient | 533
Weaver et al. (1993) | Physicians’ humanistic behaviors questionnaire | | One hospital | Patient | 119
Welton and Parker (1999) | None | | One hospital | Patient | 1,008
Wilson et al. (2002) | None | Adapted questions from Picker survey | N/G | Patient | 1,074
Woodbury, Tracy, and McKnight (1998) | Inpatient perceptions of quality questionnaire | Abridged version of long form used | 23 hospitals | Patient | 3,720
Woodside and Shinn (1988) | None | | One hospital | Patient | 70
Zifko-Baliga and Krampf (1997) | None | | Large Midwestern hospital | Patient | 529

N/G, not given.

Table 2: Summary Statistics for Implementation, Content, and Performance Characteristics of Instruments Used to Collect Patient Perceptions of Hospital Care

Survey Characteristic | Mail Surveys (N = 26 studies)* | Telephone Surveys (N = 13 studies)* | Drop Box (N = 4 studies)* | In-Person Interviews (N = 12 studies)*
Content characteristics
Average number of items (range) | 45 (15-72) | 23 (8-39) | 16 (12-30) | 33 (10-121)
Average number of domains (range) | 8 (1-14) | 5 (2-10) | 6 (4-10) | 7 (3-14)
Implementation characteristics
When survey is administered: percent of studies (N) | 12% (3) less than 2 weeks postdischarge; 12% (3) 2-4 weeks postdischarge; 19% (5) more than 4 weeks postdischarge | 0% (0) less than 2 weeks postdischarge; 31% (4) 2-4 weeks postdischarge; 15% (2) more than 4 weeks postdischarge | On-site | On-site
Target sample size (range) | 510 (100-1,400) | 115 (80-150) | 10 (NA)† | 160 (NA)
Performance characteristics
Average response rate (range) | 47% (15-77) | 70% (24-91) | 63% (27-95) | 75% (53-84)
Psychometrics reported: percent of studies (N)
  Internal consistency | 54% (14) | 15% (2) | 75% (3) | 58% (7)
  Test-retest | 19% (5) | 8% (1) | 25% (1) | 33% (4)
  Interrater | NA | 0% (0) | 0% (0) | 8% (1)
  Concurrent | 19% (5) | 8% (1) | 50% (2) | 17% (2)
  Construct | 15% (4) | 8% (1) | 25% (1) | 17% (2)

NA, not applicable.
* Eighty-four articles were reviewed, and 59 were included in this review; we were unable to determine the mode of administration in three articles, and a further five articles used more than one mode of administration. Therefore, the number of studies cited in this table does not total 59.
† This information was only given in one study.
