All research studies have flaws, but are they fatal?

Feise, Ronald J

Chiropractic Philosophy & Clinical Technique

I recently reviewed and assessed a published research study for a colleague who is a well-respected clinical instructor and a very successful practitioner. The paper in question was authored by several prominent medical researchers and published in a reputable medical journal. After carefully reviewing the paper, I spoke with my colleague about its value and credibility. My first comments underscored that the paper had serious flaws and that I doubted the authors’ conclusions. My colleague’s response was one that I have heard far too often: “All studies have flaws. This journal is highly respected, and these authors are famous.” These comments revealed a common reluctance to believe that prominent researchers and prestigious journals sometimes produce poor research with conclusions that should not be trusted. It is all too easy to look at a research paper and assume that because the authors are well known and/or the journal is well respected, the study was designed and implemented in a rigorous manner and the conclusions of the authors are valid. Such an assumption is even easier to make if the paper supports our personal theories.

This encounter with my colleague brings to the forefront four important questions:

1. Can you trust peer-reviewed journals to publish scientifically rigorous studies? It is incorrect to assume that professional journals publish only “scientifically sound” studies that are suitably designed and implemented. In fact, researchers investigating the quality of published studies have found the mean quality of articles to be only about 35 percent.1,2 Although the peer-review process is valuable and improves the quality of research papers, it does not ensure their soundness. Researchers have found that between one-third and two-thirds of the manuscripts rejected by a “prestigious” journal were subsequently published in another medical journal.3,4

A common misconception is that the “prestige” of a journal ensures the quality of the studies it publishes. Assendelft investigated the quality of published reviews of spinal manipulation and reported that the British Medical Journal (BMJ) published reviews with quality scores ranging from 19 percent (very poor quality) to 72 percent (good quality).5 In another study, Koes examined the quality of published randomized clinical trials of spinal manipulation and reported that Spine published trials with quality scores ranging from 22 percent (very poor quality) to 56 percent (fair quality).6 Thus, these two “prestigious” medical journals displayed an important variation in quality that could affect the validity of the researchers’ conclusions.

The chiropractic profession’s own peer-reviewed and medically indexed journal did rather well, compared to the best medical journals. Assendelft found that the quality of reviews of spinal manipulation published in BMJ, JAMA, and Spine averaged 36 percent, whereas those in JMPT averaged 45 percent.5 Koes’s analysis of the quality of published randomized clinical trials of spinal manipulation found that trials published in BMJ, Lancet, and Spine averaged 44 percent, while those in JMPT averaged 51 percent.6

2. Can you trust prominent researchers to design, perform, and publish scientifically sound studies consistently? Assendelft assessed the quality of reviews of spinal manipulation published by several respected authors. His findings revealed an important variation in quality scores: Shekelle had two studies, one scoring 46 (considered poor quality) and the other 76 (considered good quality); Deyo also had two studies, one scoring 18 (considered very poor quality) and the other 30 (considered poor quality).5 Thus, even prominent researchers can produce works of inconsistent quality that bring into question the validity of their conclusions.

3. Are all studies flawed? All research is flawed in some way, but we must have “compassion for the cooks” and learn to tell the difference between small variations of flavor and indigestible food.7 Most studies contain some type of methodological error that may or may not have an impact on the study’s conclusions. Some shortfalls are meaningless and can be disregarded, while others are catastrophic and invalidate findings. Every reader’s aim is to separate the wheat from the chaff and determine whether to apply a study’s findings or reject them.

Not every reader has the skill to critically appraise every study. But health care practitioners need to understand the basics of critical appraisal in order to make informed decisions. Critical appraisal requires more than simply locating inadequacies; it also requires estimating their likely influence. Some flaws can create a bias or study defect and thereby compromise a study’s scientific validity. Other flaws can influence the estimates of the clinical outcomes under investigation by “boosting” the effect size. Defects or biases can creep into a study in several ways, but a well-designed and well-conducted study minimizes these. When a study contains serious problems in its design and methodology, the results will almost surely be invalid.
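To see how a flaw can “boost” an effect estimate, consider the following sketch. It uses invented numbers (the group sizes, responder counts, and dropout pattern are assumptions made for illustration, not data from any study cited here) to contrast an intention-to-treat estimate with a per-protocol estimate after non-responders selectively withdraw from the intervention group.

```python
# Invented numbers for illustration only: 100 patients per arm,
# 40 true responders in the intervention arm, 30 in the control arm.
intervention_n, intervention_responders = 100, 40
control_n, control_responders = 100, 30

# Suppose 25 non-responders quietly withdraw from the intervention arm
# and are excluded from the analysis (a per-protocol analysis).
dropouts = 25

itt_rate = intervention_responders / intervention_n                        # 0.40
per_protocol_rate = intervention_responders / (intervention_n - dropouts)  # ~0.53
control_rate = control_responders / control_n                              # 0.30

print(f"Intention-to-treat effect: {itt_rate - control_rate:+.2f}")           # +0.10
print(f"Per-protocol effect:       {per_protocol_rate - control_rate:+.2f}")  # +0.23
```

The treatment itself has not changed; only the handling of withdrawals has, yet the apparent benefit more than doubles. This is the kind of flaw whose likely influence a critical reader must estimate.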

Sometimes studies fail to report certain procedures or results, and there is a good chance that whatever was not reported was not done rigorously.8 In an investigation of unreported procedures, Chalmers communicated with the authors of 59 peer-reviewed studies and discovered that in 58 percent of these studies, consequential procedures had in fact been executed, but not described.9 Other researchers have found that little consequential information can be gathered by contacting the authors.10,11 Regardless of the wisdom or practicality of contacting authors, the burden of designing, implementing, and reporting falls upon the researchers, not the readers. If a research protocol is not reported in an article, you should assume that it was not fulfilled.

Complicating these issues is the fact that the overall quality of a study is not always accurately measured by an index quality score or a list of unweighted items scored “yes,” “no,” or “maybe.” Although a clinical trial may receive a high quality score and implement most methodological items successfully, the conclusion may be invalid if a single methodological error is serious. For example, if a study has high patient withdrawal from the intervention group (much greater than 20 percent) but not from the control group (less than 5 percent), the findings may be invalid, regardless of the overall quality score.
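The arithmetic behind this point can be sketched as follows. The checklist items, the scoring of “yes”/“maybe”/“no” as 1/0.5/0 points, and the withdrawal thresholds are hypothetical choices made for illustration; they are not drawn from any published quality instrument.

```python
# Hypothetical unweighted quality checklist; items and scoring are
# illustrative assumptions, not a published instrument.
ITEM_POINTS = {"yes": 1.0, "maybe": 0.5, "no": 0.0}

def index_score(ratings):
    """Unweighted index: mean item score, expressed as a percentage."""
    return 100.0 * sum(ITEM_POINTS[r] for r in ratings.values()) / len(ratings)

def fatal_withdrawal_flaw(intervention_dropout, control_dropout):
    """Flag the differential withdrawal described in the text:
    well over 20% in the intervention group but under 5% in controls."""
    return intervention_dropout > 0.20 and control_dropout < 0.05

ratings = {
    "randomization described": "yes",
    "allocation concealed": "yes",
    "blinded outcome assessment": "yes",
    "co-interventions avoided": "maybe",
    "intention-to-treat analysis": "no",
    "adequate sample size": "yes",
    "outcomes clearly defined": "yes",
    "follow-up reported": "yes",
}

score = index_score(ratings)               # 81% -- a "high" index score
fatal = fatal_withdrawal_flaw(0.35, 0.03)  # 35% vs. 3% withdrawal

print(f"Index quality score: {score:.0f}%")
print("Potentially invalid despite the high score" if fatal
      else "No fatal withdrawal flaw detected")
```

The index score and the fatal-flaw check are separate judgments: a high score does not excuse an invalidating defect.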

4. Does the ability to critically appraise the scientific literature have an impact on the practice of chiropractic? Because the aim of our profession is the improvement of human health, the practicing chiropractor has an obligation to patients to be able to assess the effectiveness of therapeutic and diagnostic interventions.12 Moreover, if a practitioner reads a study and accepts a conclusion that should be rejected, or, conversely, rejects a finding that should be accepted, patients may receive less than optimal care and may even be harmed.

Patients, professional colleagues, and health care decision makers assume that you are a health care expert who knows how to read the scientific literature. If this is not the case, your professional authority and your ability to provide the highest level of care for your patients are diminished. How can any practitioner claim to provide high-quality care if he or she cannot distinguish between a fatally flawed study and one with only minor defects? Because the critical appraisal of diagnostic and therapeutic procedures lies at the very heart of chiropractic clinical care, the science of chiropractic is the responsibility not only of the educator and researcher, but also of the practicing chiropractor.13

Acknowledgement

Dr. Feise gratefully acknowledges Robert Cooperstein, MA, DC, for his manuscript review and recommendations.

References

1. Sonis J, Joines J. The quality of clinical trials published in The Journal of Family Practice, 1974-1991. J Fam Pract 1994;39:225-35.

2. Rochon PA, Gurwitz JH, Cheung CM, Hayes JA, Chalmers TC. Evaluating the quality of articles published in journal supplements compared with the quality of those published in the parent journal. JAMA 1994;272:108-13.

3. Abby M, Massey MD, Galandiuk S, Polk HC Jr. Peer review is an effective screening process to evaluate medical manuscripts. JAMA 1994;272:105-7.

4. Ray J, Berkwits M, Davidoff F. The fate of manuscripts rejected by a general medical journal. Am J Med 2000;109:131-5.

5. Assendelft WJ, Koes BW, Knipschild PG, Bouter LM. The relationship between methodological quality and conclusions in reviews of spinal manipulation. JAMA 1995;274:1942-8.

6. Koes BW, Assendelft WJ, van der Heijden GJ, Bouter LM. Spinal manipulation for low back pain. An updated systematic review of randomized clinical trials. Spine 1996;21:2860-73.

7. Gehlbach SH. Interpreting the medical literature. New York: McGraw-Hill, 1993:108.

8. Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol 1991;44:1271-8.

9. Chalmers TC, Smith H Jr., Blackburn B, Silverman B, Schroeder B, Reitman D, Ambroz A. A method for assessing the quality of a randomized clinical trial. Controlled Clin Trials 1981;2:31-49.

10. Dickinson K, Bunn F, Wentz R, Edwards P, Roberts I. Size and quality of randomised controlled trials in head injury: review of published studies. BMJ. 2000;320:1308-11.

11. Liberati A, Himel HN, Chalmers TC. A quality assessment of randomized control trials of primary treatment of breast cancer. J Clin Oncol 1986;4:942-51.

12. Keating JC. Philosophy and science in chiropractic: essential, inseparable and misunderstood. European Journal of Chiropractic 2001;46:51-60.

13. Keating JC. Editorial: A buddy system for chiropractic research. Journal of the Canadian Chiropractic Association 1987;31:9-10.

By RONALD J. FEISE, DC, CEBC

Dr. Feise is president of the Institute of Evidence-Based Chiropractic (IEBC). He can be reached at ric@chiroevidence.com; IEBC, 6252 Rookery Road, Fort Collins, CO 80528; phone 970/266-0660; fax 970/266-0190; website www.chiroevidence.com.

This column is coordinated by Robert Cooperstein, DC, Palmer West College of Chiropractic. Dr. Cooperstein accepts manuscript submissions at Cooperstein_r@palmer.edu, or by fax at 409/944-6118.

Copyright American Chiropractic Association Jul 2002
