Evaluating quantitative research reports
Cynthia L. Russell
Editor’s Note: This is the third of a series of columns contributed by the ANNA Research Committee to assist nephrology nurses in understanding research approaches and methodologies and evaluating research.
Quantitative research is evaluated for several reasons. You may be trying to decide if the findings are worthy of incorporation into your practice or you may be attempting to determine the current state of the research in a particular area. In the evaluation process, you will objectively review the strengths and the weaknesses of a report. Ultimately, you will determine whether the strengths of the work are greater than its weaknesses, whether the results can be incorporated into practice, and where the findings direct the next study.
Evaluating a quantitative research report may initially seem like a daunting task, even if you regularly read research. However, by using a systematic approach, you can become more comfortable and proficient in evaluating quantitative research. Numerous articles have been published on evaluating research reports (Beck, 1990; Pieper, 1993; Ryan-Wenger, 1992; Soeken, 1985; Summers, 1991). In addition, multimedia CD-ROMs have become available to assist with critiquing research for clinical practice use (Alderman, 1998; Beyea, 1998). The purpose of this article is to provide the knowledge and tools needed to successfully evaluate quantitative research reports.
Components of a Quantitative Research Report
Though journal criteria vary, most quantitative research reports contain five sections: research problem, review of the literature, methods (design, sampling plan, instrumentation, procedure, human subjects protection), data analysis and results, and discussion.
The introduction of the report should thoroughly describe the background of the research problem so that the need for the study is apparent. The author must build a case from the existing literature that the problem is of sufficient merit to justify further research. For example, if the study’s purpose is to measure the effect of information and support on hope and uncertainty in individuals awaiting deceased donor renal transplantation, the introduction would describe the number of people waiting for renal transplant, the potential impact that interventions could make in this area, and the problems with extant research.
The statement of the problem should flow directly from the introduction and should conclude this section. The statement of the problem broadly identifies what needs to be studied, including both who and what will be studied. A problem statement example follows: Because of the long waiting time, there is a need for nursing interventions to assist individuals waiting for deceased donor renal transplantation. After you read the problem statement, you should have a clear idea of what the study involved. You should determine whether the problem makes a significant contribution to the science and whether it is relevant to your practice.
Review of the Literature
This section provides the foundation for helping the reader understand the current state of the evidence in the selected area of study. The review of the literature should present pertinent findings from selected research reports in an organized and clear fashion. Frequently, reviews are organized by headings that correspond to key study concepts. The review must also evaluate the quality of the pertinent literature, noting strengths and weaknesses. The review should move from the broad to the specific, with the last section clearly delineating the need for the study. For example, a review of literature on interventions for those awaiting deceased donor renal transplantation would begin with a section summarizing literature on the experience of waiting for a transplant and then move to a section on nursing interventions used to assist those waiting. The review would then conclude with a statement of the gaps in the existing literature and how the current study will address those gaps.
The review of the literature may contain a section on the theoretical or conceptual framework. If presented, you should assess whether the theoretical or conceptual framework is clearly described including concepts and relationships. The problem statement should flow directly from the theoretical or conceptual framework.
Methods
The methods section describes the steps used by the researcher to carry out the study. This section includes the design, sampling plan, instrumentation, procedure, and the protection of human subjects.
Design. The design delineates the plan or blueprint of the study. Nonexperimental designs, which include descriptive and correlational designs, examine phenomena as they naturally occur, so no manipulation is involved. A descriptive design allows the researcher to describe the characteristics of the sample, while a correlational design assists the researcher in examining relationships between variables (Polit & Beck, 2004). In contrast, experimental designs involve three key components: (a) manipulation of the independent variable, (b) use of a control group, and (c) randomization into groups (Polit & Beck, 2004). Experimental designs are the most powerful because they allow the researcher to control for extraneous variables that could otherwise make it impossible to tell whether the measured effect was due to manipulation of the independent variable or to interference from those extraneous variables. Quasi-experimental designs lack one of the three key experimental components.
The reader can anticipate the data analysis plan once the design is known. For example, if an experimental study is planned, the reader can anticipate the use of inferential statistics such as t-tests. If a descriptive study is planned, then descriptive statistics (e.g., means, modes, and medians) are anticipated.
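The descriptive statistics mentioned above can be illustrated with a short calculation. This is a minimal sketch using Python’s standard statistics module; the scores are hypothetical, not data from any study discussed here:

```python
import statistics

# Hypothetical hope scores for seven participants (illustrative only)
scores = [21, 25, 25, 30, 34, 38, 41]

print("mean:", round(statistics.mean(scores), 2))    # central tendency
print("median:", statistics.median(scores))          # middle value
print("mode:", statistics.mode(scores))              # most frequent value
print("stdev:", round(statistics.stdev(scores), 2))  # spread around the mean
```

A descriptive report would present exactly these kinds of summaries for each sample characteristic.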
All studies have research questions, which guide and direct the study. A well-developed research question includes the population and the variables to be studied. At the completion of the study, the research questions should be answered. Correlational, quasi-experimental, and experimental studies also have hypotheses. A hypothesis is a statement of the relationship between variables predicted by the researcher.
Sampling plan. This section must clearly describe who was asked to participate in the study and how they were identified, the characteristics of the target population (the population to which the findings are generalized), the sampling procedure, and the size of the sample. An example of a well-developed sampling plan follows:
The sample included the first 50 participants agreeing to participate who were on the deceased donor renal transplantation waiting list at a university-affiliated hospital in the Midwest (Russell & Brown, 2002, p. 202).
Instrumentation. The instruments used to gather data for the study must be clearly and thoroughly described. The researcher should delineate which concepts each instrument will measure. Instrumentation may involve interviews, questionnaires, scales, observation, and/or biophysiological measures. Reliability and validity data should be reported for each instrument. Reliability is the instrument’s ability to consistently measure the concept (Brink & Wood, 2001). Validity is the instrument’s ability to measure what it is supposed to measure (Brink & Wood, 2001). The following is an example of a well-developed description of an instrument:
Depression was measured using the Beck Depression Inventory (BDI) (Beck et al., 1961). This 21-item, self-administered, self-report scale addresses mood, pessimism, sense of failure, lack of satisfaction, guilty feelings, sense of punishment, self-hate, self-accusation, self-punitive wishes, crying spells, irritability, social withdrawal, indecisiveness, body image, work inhibition, sleep disturbance, fatigability, loss of appetite, weight loss, somatic preoccupation, and loss of libido. The BDI has high internal consistency, with coefficients ranging from .73 to .92 and a mean of .86 (Beck, Steer, & Garbin, 1988). The BDI has a split-half reliability coefficient of .93 (Beck et al., 1988) (Russell & Brown, 2002).
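Internal consistency coefficients like those reported for the BDI are commonly computed as Cronbach’s alpha. The following sketch shows the calculation; the item responses are hypothetical and are not BDI data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from a list of k item-score columns.

    items: list of k lists, each holding one item's scores across n respondents.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    item_vars = sum(statistics.pvariance(col) for col in items)
    totals = [sum(row) for row in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / statistics.pvariance(totals))

# Hypothetical responses: 3 items answered by 5 respondents
item_scores = [
    [2, 3, 3, 1, 2],
    [2, 3, 2, 1, 2],
    [3, 3, 2, 1, 1],
]
print(round(cronbach_alpha(item_scores), 2))
```

Values closer to 1.0 indicate that the items measure the concept more consistently, which is why coefficients in the .73 to .92 range are considered evidence of good reliability.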
Procedure. The procedure should be the “recipe” for the research process with sufficient details provided so that you can easily follow the process. The procedure should be written very clearly and flow logically. All steps of the procedure should be described fully. An example follows.
Participants were randomly assigned to either the control group or the treatment group. Those placed in the control group received no intervention phone calls or mailings, which was the current standard of care. Those randomized into the treatment group received support, which included phone calls and mailings, once every month for six months. Because the current average waiting time at the institution was 8 months for blood group A, and longer for other blood groups, a six-month intervention was selected. During the phone calls, patients were asked if they had any questions or concerns that they would like to ask about waiting on the transplant list. The researchers documented key words and phrases stated by the subjects in response to the questions. The mailings were sponsored by Signature Pharmaceuticals. This program involved sending an initial welcoming letter and subsequent newsletters which provided information on pertinent transplantation issues such as medications, diet, exercise, organ allocation, waiting times, and current media topics. A web site, which could be accessed for information, was also provided by Signature. Both the control and treatment groups completed the Herth Hope Index (HHI) and the Mishel’s Uncertainty in Illness Scale for Adults (MUIS-A) at the beginning of the study and six months later (Russell & Brown, 2002, p. 203).
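The random assignment step described in the procedure can be sketched in a few lines. The participant IDs, the even split, and the fixed seed below are illustrative assumptions, not details from the study:

```python
import random

def randomize(ids, seed=None):
    """Randomly split participant IDs into two equal groups (control, treatment)."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = ids[:]               # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical IDs for a sample of 50 participants
participants = list(range(1, 51))
control, treatment = randomize(participants, seed=42)
print(len(control), len(treatment))  # 25 25
```

Randomization like this is what gives an experimental design its power: each participant has an equal chance of landing in either group, so extraneous characteristics tend to balance out across groups.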
Protection of human subjects. The researcher should state how protection of human subjects was assured. The study should have been reviewed and approved by an Institutional Review Board, and that should be stated in the report. Most often the approval is from the institution where the researcher is employed. However, if the study is conducted in multiple settings, each setting should provide approval, and this should be noted by the investigator.
Data Analysis and Results
This section is often the most intimidating for beginning reviewers. However, several steps can make the process manageable. Confirm that the researcher has presented results that clearly answer the proposed research question(s). Researchers frequently organize this section by research question to facilitate readability. Since the design can assist you in determining the appropriate statistics to anticipate, review the design again. If a descriptive design is used, you should find descriptive statistics such as the mean, mode, and median (measures of central tendency, or how the data group together) and the variance, standard deviation, and range (measures of variability, or how the data spread out). If the design is correlational, then you should anticipate a correlation coefficient, such as the Pearson r or Spearman rho. A correlation coefficient identifies the strength and the direction of the relationship between two variables (Holcomb, 2002). An example of descriptive statistics follows:
The sample consisted of 35 males (70%) and 15 females (30%). The mean age was 48.5 years (SD = 12.6, range 20-70). Participants had an average of 12.5 years of education (SD = 2.52, range 5-20). Eighty-four percent were Caucasian and 70% were currently married. The average months since diagnosis of end-stage renal disease was 52.3 (SD = 74.18, range 1-300). The average number of days waiting for transplant was 450.4 (SD = 1084, range 1-4752) (Russell & Brown, 2002, p. 203).
If the design is experimental or quasi-experimental, you should anticipate the use of inferential statistics. Inferential statistics answer questions about relationships between variables and differences between groups (Holcomb, 2002). An outstanding quick reference guide for assessing an author’s appropriate use of statistics based on the research question and level of measurement is available (Ryan-Wenger, 1992). Many studies set the level of statistical significance at p ≤ .05, accepting no more than a 5% chance of making a Type I error. Several texts provide further discussion of Type I and Type II errors (Polit & Beck, 2004). An example of a results section on inferential statistics follows:
The first research question addressed was: “What effect does the nursing intervention of providing information and support have on the levels of hope and uncertainty in individuals awaiting renal transplantation between the treatment and control groups?” No statistically significant effect of the nursing intervention was found on hope and uncertainty in this sample using Hotelling’s T² statistic (F = 0.5322; p = 0.81) (Russell & Brown, 2002, p. 203).
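The study above used Hotelling’s T², a multivariate test, but the logic of inferential testing can be illustrated with the simpler pooled two-sample t-test mentioned earlier. The group scores below are hypothetical:

```python
import statistics
from math import sqrt

def two_sample_t(a, b):
    """Pooled two-sample t statistic (assumes roughly equal group variances)."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical post-test hope scores for two groups of six participants each
treatment = [38, 41, 35, 44, 40, 39]
control = [33, 36, 31, 38, 34, 32]

t = two_sample_t(treatment, control)
# Compare |t| with the two-tailed critical value 2.228 for df = 10 at alpha = .05:
# a larger |t| means the group difference is statistically significant.
print(round(t, 2))
```

Here |t| exceeds the critical value, so these hypothetical groups would differ significantly; in the actual study the test statistic was too small for significance (p = 0.81).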
Tables and graphs are frequently used to summarize research results. The tables and graphs should be clearly labeled and should complement the article text.
Discussion
The discussion section should clearly flow from the data and place the study’s findings in context with what is already known. If a theoretical or conceptual framework is presented, the findings should be discussed in the context of the framework. The author may offer interpretations of the findings, but these should be clearly identified as such. Based on the logical flow of this section, you must determine whether the author’s conclusions are justified. The author should present the limitations of the study. Implications for practice and future research must be delineated.
As a novice reviewer, you may find it difficult to trust your evaluation of a research report and feel uncertain about your interpretations. These are common concerns that can be remedied by reading and discussing research reports on research listservs, through journal clubs, or with other nephrology nurses. Practice using the criteria for research report evaluation, and you, too, can become proficient in critiquing a research report!
Criteria for Research Report Evaluation
Research Problem
1. Is the problem clearly stated?
2. Is the problem significant?
Review of the Literature
1. Is the literature summarized?
2. Is the literature critically evaluated?
3. Are gaps and inconsistencies in the literature identified?
4. Is the literature current and the review complete?
5. If presented, is the theoretical or conceptual framework clearly described, including concepts and relationships?
6. Does the problem clearly link to and flow from the theoretical or conceptual framework?
Design
1. Is the design clearly stated?
Sampling Plan
1. Is the sample clearly identified?
2. Is it clear how the sample will be obtained?
3. Is the relationship between the sample and the target population clearly delineated?
4. Is the rationale for the sample size provided?
Instrumentation
1. Is it clear which instruments will measure which concepts?
2. Is the rationale for instrument selection acceptable?
3. Is the reliability for each instrument described and adequate?
4. Is the validity for each instrument described and adequate?
Procedure
1. Are sufficient details provided in the procedure?
2. Is the procedure written clearly?
3. Does the procedure flow logically?
4. Are all steps of the procedure clearly stated?
Protection of Human Subjects
1. Has the researcher provided sufficient protection of human subjects?
Data Analysis and Results
1. Is the data analysis section well organized?
2. Is the statistical method used for analysis appropriate for the research question(s) and/or hypothesis and the level of measurement?
3. Are tables and graphs clearly labeled?
4. Do the tables and graphs complement the text?
Discussion
1. Does the discussion clearly flow from the data?
2. Does the discussion place the study’s findings in context with what is already known?
3. If a theoretical or conceptual framework is presented, is the nature of the findings discussed in the context of the framework?
4. If the author presents interpretations of the findings, are these clearly distinguished as such?
5. Are justifications offered for the author’s conclusions?
6. Are study limitations provided?
7. Are implications for practice and future research delineated?
References
Alderman, S. (1998). Critiquing research for use in clinical nursing practice: A CD-ROM review. Nurse Educator, 23(2), 8.
Beck, A.T., Steer, R.A., & Garbin, M.G. (1988). Psychometric properties of the Beck Depression Inventory: Twenty-five years of evaluation. Clinical Psychology Review, 8 (1), 77-100.
Beck, A.T., Ward, C.H., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4, 561-571.
Beck, C.T. (1990). The research critique: General criteria for evaluating a research report. JOGNN, 19(1), 18-22.
Beyea, S.C. (1998). Critiquing research for use in clinical nursing practice (CD-ROM). Computers in Nursing, 16(1), 16-17.
Brink, P.J., & Wood, M.J. (2001). Basic steps in planning nursing research (5th ed.). Boston: Jones and Bartlett Publishers.
Holcomb, Z.C. (2002). Interpreting basic statistics (3rd ed.). Los Angeles: Pyrczak Publishing.
Pieper, B. (1993). Basics of critiquing a research article. Journal of ET Nursing, 20, 245-250.
Polit, D., & Beck, C.T. (2004). Nursing research: Principles and methods (7th ed.). Philadelphia: Lippincott, Williams, & Wilkins.
Russell, C.L., & Brown, K. (2002). The effects of information and support on individuals awaiting transplant. Progress in Transplantation, 12(3), 201-207.
Ryan-Wenger, N.M. (1992). Guidelines for critique of a research report. Heart & Lung, 21(4), 394-401.
Soeken, K.L. (1985). Critiquing research: Steps for complete evaluation on an article. AORN Journal, 41(5), 882-893.
Summers, S. (1991). Defining components of the research process needed to conduct and critique studies. Journal of Post Anesthesia Nursing, 6(1), 50-55.
This column is compiled by the ANNA Research Committee to assist nephrology nurses in understanding research approaches and methodologies and evaluating research. For additional information, contact Patricia A. Cowan, PhD, RN; ANNA Research Committee through the ANNA National Office; East Holly Avenue/Box 56; Pitman, NJ 08071-0056; firstname.lastname@example.org.
Cynthia L. Russell, PhD, RN, M-SCNS, is Assistant Professor, University of Missouri- Columbia, Sinclair School of Nursing, Columbia, MO; and a member of ANNA’s Central Missouri Chapter.
COPYRIGHT 2005 Jannetti Publications, Inc.