
How will RCTs Provide Evidence for this Patient?

Caroline Porr & Vinita Mahtani-Chugani


Abstract

Results from double-blind randomized controlled trials (RCTs) are deemed the evidentiary gold standard for medical treatment choice and intervention among protagonists of the evidence-based medicine (EBM) movement. EBM and the privileged status granted RCTs have provoked extensive criticism. In this paper the main areas of contention are briefly discussed: EBM’s over-reliance on epidemiology to the exclusion of alternate sources of evidence; the fallibility of RCT results; and the presumed pertinence of population-derived data for clinical decision making. In addition, several recommendations are put forth urging medical practitioners to scrutinize the results and design of RCTs and to ensure that clinical practice and medical research focus on patient-centered issues.




Clinical research worships at the shrine of the RCT. The ability to assign subjects randomly to either experimental or control status confers an aura of science that is unsurpassed.
--Robert L. Kane, 1997, p. 5


Results from double-blind randomized controlled trials (RCTs) are deemed the evidentiary gold standard for medical treatment choice and intervention. Archie Cochrane’s (1972) seminal book, Effectiveness and Efficiency: Random Reflections on Health Services, together with the Cochrane Library, the Cochrane Collaboration, and the Cochrane Criteria, in conjunction with McMaster University’s Evidence-Based Medicine Working Group, are responsible for positioning RCTs at the top of the research evidence hierarchy and creating the catalyst for the evidence-based medicine (EBM) movement. Proponents of EBM purport that traditional decision making based on intuition, clinical experience, and pathophysiologic reasoning alone is substandard, whereas judgments founded upon scientific research evidence generated from rigorous methods, namely RCTs, are superior to medicine-as-usual (Evidence-Based Medicine Working Group, 1992; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000; Upshur, 2003). Methodologies have been explicitly ranked from the most to the least reliable: RCTs rank highest, followed by quasi-experimental studies, then non-experimental descriptive studies, and lastly unsystematic clinical observations (Flemming & Fenton, 2002; Gupta, 2003). Critics maintain that EBM and its ‘RCTism’ are nothing more than ideology and a “doctrinaire creed rather than rational principles of science” (Miles, Grey, Polychronis, Price, & Melchiorri, 2003, p. 99).
EBM’s stance has provoked divergent opinions among clinicians. Many are ethically and epistemologically opposed to reducing medicine’s ultimate source of knowledge to RCTs and their concomitant systematic reviews (Miles, Grey, Polychronis, Price, & Melchiorri, 2004; Schelling, 2004). Why have EBM and the privileged status granted RCTs provoked such extensive criticism? In this paper we briefly present the main areas of contention: EBM’s over-reliance on epidemiology to the exclusion of alternate sources of evidence; the fallibility of RCT results; and the presumed pertinence of population-derived data for clinical decision making.


Areas of Contention Regarding RCTs


First and foremost, EBM’s seemingly moral high ground is, states Upshur (2002), “a foundation set in shifting sand” (p. 114). Fundamentally, EBM puts considerable faith in epidemiology, the study of the distribution of disease in human populations using investigational and statistical techniques. EBM’s reliance on epidemiological studies, as opposed to the physiologic model of clinical knowledge, assumes that statistical analysis of aggregate data is the most dependable pathway to scientific truth. Charlton (1997) contends that epidemiology is a technical discipline that simply offers clinical scientists the methods, a toolkit, to measure the size of an effect, and even then the measurement is provisional at best. Employment of RCTs may enable precise statistical prediction of the likely effect of an intervention, but RCTs can never promise absolute certainty concerning causal relationships. Researchers estimate, for example, that it would take 127 randomized trials, 63,500 patients, and 286 years to know unequivocally the drug treatment of choice for Alzheimer’s disease; for ischemic stroke, 31 trials, 186,000 patients, and 155 years (Saver & Kalafut, 2001). Furthermore, statistics, the mathematical theory of uncertainty and probability, has nothing to do with causation, nor do statistical summaries have any bearing on what counts as evidence (Gupta, 2003). Actual inference from empirical data and statistical results generated from large RCTs, as to what constitutes evidence to support research hypotheses, remains a matter of clinician reasoning and subjective interpretation (Miles et al., 2004).
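The distinction between statistical precision and causal certainty can be made concrete with the standard large-sample calculation for a two-arm trial (a hypothetical illustration with invented numbers, not drawn from any study cited here). The estimated risk difference between treatment and control arms, and its standard error, are

\[
\widehat{RD} = \hat{p}_T - \hat{p}_C, \qquad
SE(\widehat{RD}) = \sqrt{\frac{\hat{p}_T(1-\hat{p}_T)}{n_T} + \frac{\hat{p}_C(1-\hat{p}_C)}{n_C}}.
\]

With hypothetical event rates \(\hat{p}_T = 0.15\) and \(\hat{p}_C = 0.20\) in arms of \(n_T = n_C = 1000\) patients, \(\widehat{RD} = -0.05\) with \(SE \approx 0.017\), yielding a 95% confidence interval of roughly \((-0.08, -0.02)\). The interval is precise in the statistical sense, yet it quantifies only sampling uncertainty under the trial’s own assumptions; it is silent on whether the causal relationship holds for patients unlike those enrolled.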
Second, claims of EBM’s superior effectiveness do not withstand scrutiny at the methodological level when one considers the bias inherent within the RCT design and the sources of systematic bias. Biomedical researchers have endorsed RCTs with tremendous enthusiasm, insisting that RCTs do the best job of safeguarding against bias.
However, many critics who have compared randomized and non-randomized clinical trials of the same intervention have discovered that measures to control internal bias, including adequately concealed random allocation to treatment and control for methodological maneuvers or clinical differences, have been deficient or seriously flawed (Kunz & Oxman, 1998; McKee, Britton, Black, McPherson, Sanderson, & Bain, 1999). This may explain why some results suggest that treatment effects obtained from randomized and non-randomized studies differ only slightly, with neither method giving a consistently greater effect than the other. Even Archie Cochrane (1972) acknowledged that although the RCT is a beautiful design it is not without “snags” (p. 35). More than three decades later the snags remain apparent, as Hampton (1997) complains: “Unfortunately it cannot be assumed that the results even of a large clinical trial are sufficiently reliable to form the ‘evidence-base’ for clinical practice” (p. 125). Systematic bias in particular consistently distorts the results of RCTs and other EBM-preferred methods. Systematic bias refers to socially derived biases such as source-of-funding bias, technical bias, publication bias, and researcher career-advancement bias.


Source-of-funding bias. Intervention studies that stand to profit corporate business are more likely to be funded than interventions with no ability to alter profit margins, favoring one pool of data for evidence over another. Commercial funders are more apt to showcase data that demonstrate the effectiveness of their sponsored medicinal product. Consequently, physicians view an empirical literature that is skewed towards research data in support of interventions with high commercial value (Gupta, 2003; Norman, 1999).


Technical bias. Researchers choose topics and methods that are most familiar to them. Ostensibly, the research agenda focuses on phenomena that are amenable to investigation by the more recognizable methods. The exclusive evidence hierarchy privileges certain types of methodologies, so that subject matter that does not ‘fit the mold’ is often neglected or presumed unworthy (Gupta, 2003). Current health technology assessment relies on systematic reviews of RCTs and meta-analyses of their findings. Time and time again VM (second author) has known experientially, from her own observations and the anecdotal accounts of her patients, that effectiveness and side effects differ between individuals; yet some of the ‘tried and true’ approaches are dismissed and her practitioner expertise negated because they have not yet been validated by RCTs. VM has witnessed, for example, the beneficial effects of her patients merely carrying their drugs as opposed to consuming and metabolizing their medications, yet to date VM has found no RCTs or EBM-preferred research strategies that have investigated this phenomenon.


Publication bias. Editors of medical journals elect to publish only statistically significant results, which distorts and narrows the data pool available to practitioners. Studies in which RCTs have failed to prove the effectiveness of certain drugs are not published. Rather than having access to all available literature regarding therapeutic options, practitioners are exposed to data that favor only certain types of interventions and are “not representative of the totality of all research” (Miles et al., 2004, p. 136). The propensity to publish information about new interventions that are shown to be effective inadvertently evokes a false sense of the superiority of the named intervention. Publication bias is most apparent when comparing the vast number of studies (short duration, high power) of new pharmaceuticals to treat psychiatric illness with the few studies (long duration, low power) of psychotherapy. Clinicians seeking evidence-based solutions are falsely led to believe that psychotherapy is ineffective, given the paucity of research (Gupta, 2003).


Career-advancement bias. Researchers may choose only career-advancing topics. Researchers seeking career advancement strive to build their publication portfolios. Experiments with the less popular, ‘gray zone’ phenomena are unlikely to achieve dramatic study results and ‘earn’ entry into reputable journals. Consequently, researchers elect not to take chances and avoid such phenomena altogether (Gupta, 2003).
Third, can the results from controlled trials realistically be applied to clinical practice? Analysis of aggregate data will show how many patients from a group will benefit from treatment and how many will suffer an adverse effect, but it will never predict the outcome for an individual (Hampton, 1997; Norman, 1999). Thus the utilization of population-derived research in patient-centered general practice is problematic. Medical practitioners must question clinical relevance: are their patients going to respond to the intervention in the same way as the trial participants did? One cannot assume that the relative benefit and harm of interventions, and the treatment effect, will not vary substantially when applied to individuals from different populations varying in age, sex, diagnosis, severity of disease, social class, and so forth. RCTs are not designed to integrate this heterogeneity; rather, RCTs are most powerful when heterogeneity of the trial population is negligible (Culpepper & Gilbert, 1999; Mant, 1999).
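The aggregate-versus-individual limitation is visible in the familiar number-needed-to-treat (NNT) arithmetic (the figures below are hypothetical and chosen purely for illustration):

\[
NNT = \frac{1}{ARR} = \frac{1}{p_C - p_T}.
\]

If the event rate is \(p_C = 0.20\) in the control group and \(p_T = 0.15\) in the treatment group, the absolute risk reduction is \(ARR = 0.05\) and \(NNT = 20\): on average, one additional patient benefits for every twenty treated. The trial can supply that average; it cannot identify which one of the twenty patients in the clinician’s consulting room will be the beneficiary.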
RCTs tend, consciously or otherwise, to exclude some types of patients to whom the results will subsequently be applied. It is not uncommon for RCTs to impose blanket exclusions on categories of patients such as the elderly, women, and ethnic minorities, without specifying the reasons (McKee et al., 1999). Nor can RCTs guarantee valid causal inference about the reference population when the study sample is an ever smaller subset of that population. What, then, do the measures from large trials mean if the trial has not sampled a clinically representative group? While they may determine whether the intervention can work, RCTs will not, Mant (1999) argues, determine who will benefit from the intervention. Clinicians need to ask themselves: Are these results generalizable? Is this individual patient going to be affected by the intervention in the same way as the trial participants?
Furthermore, Miles and colleagues (2003) assert that reasoning based on epidemiological studies and RCT results is at cross-purposes with the inductive reasoning characteristic of, and necessary for, the complexities and uncertainties of clinical medicine. More accurately, aligning with the statistical approach goes hand in hand with deductive, reductionist reasoning, which ‘de-personalizes’ and ‘de-contextualizes’ through the crude application of findings to decisions about unique human beings. Proponents of EBM have acknowledged that values and preferences are important considerations in estimating a patient’s unique benefits and risks and in enhancing compliance with the predetermined EBM-approved therapeutic regimen (Sackett et al., 2000). While this acknowledgment is essential, consultation with the patient occurs not before or during, but after, the decision-making process regarding choice of intervention.
Fundamental to ethical clinical practice is a knowledge base much broader than the biological plausibility and probabilistic reasoning of EBM, so that the practitioner is able to attend to his or her patient’s problems holistically. For example, VM’s decision to prescribe one antidepressant in favor of another for one of her patients depended on multiple sources of knowledge in addition to evidence from scientifically and statistically rigorous experimental research verifying the effect of each pharmacological agent. When VM learned that one of the agents produced in her patient the side effect of sexual impotence, she elected to consult with her patient rather than taking it upon herself to prescribe the other, less effective antidepressant to avoid this side effect. Surprisingly, VM discovered that her patient preferred to resume the more effective antidepressant and risk experiencing sexual impotence. Her patient explained that improving his mood, and ensuring that he had the emotional resources to be more attentive to his young children, was what he valued most highly at this point in his life. To be accountable, it was critical that VM was cognizant of the distinct bio-psycho-social functioning, values, and preferences of her patient and that she modified therapy accordingly.
If findings from the pre-eminent RCT method are an appropriate external source of evidence, then they should inform rather than direct the practitioner’s day-to-day clinical decision making. Anticipating potential psychological and social responses is an important requisite of judicious clinical decisions, requiring practitioner intuition and other forms of knowing accrued over years of practice experience, including an understanding of physiologic principles and contextually specific patient data such as values, preferences, cultural aspects, and other human variables. Clinical evidence exists in many forms, scientific and nonscientific, to accommodate the idiosyncrasies embedded within the human health-illness experience (Culpepper & Gilbert, 1999; Hampton, 1997; Mant, 1999; Miles et al., 2004; Upshur, 2001).


Conclusions


Like a toddler trying to hit everything in sight with a newly found
hammer, EBM has progressively permeated most health sciences.

--Milos Jenicek & Sylvie Stachenko, 2003, p. SR1


The purpose of this paper was not to denounce RCTs completely, for there have been many advancements in medicine through well-designed, rigorous studies, and epidemiological research has contributed significantly to the field of public health. However, healthy skepticism about one’s evidentiary source for decision making is key to the ethical care and treatment of patients in the clinical setting. The Hippocratic pre-scientific ideal reminds medical practitioners to be learned, wise, modest, and humane (Miles et al., 2004). Moreover, one might think that the controversy over EBM and ‘RCTism’ is the esoteric domain of physicians, without realizing that the EBM movement is the impetus for the generic evidence-based paradigm. EBM extends its influence across several disciplinary lines (e.g., evidence-based practice) and is pervading managerial and political arenas (e.g., evidence-based health care). Evidence-based problem solving is becoming the mainstay of decision making beyond the clinical practitioner-patient context, extending to health policy decisions with the potential to impact the health and well-being of populations (Biller-Andorno, Lie, & Meulen, 2002).


Recommendations


1. Be discerning. Check trial results, along with the design process and researcher biographical information. Proceed with caution if you suspect potential biases threatening internal or external validity.
2. Conduct an extensive literature review. Compare RCT study results with existing findings from non-randomized studies. Are there notable differences in effect size?
3. Lobby for greater public involvement. Public scrutiny will ensure trials are funded to address questions pertinent to patients and reduce perverse incentives. Public access to research reports through electronic publication and peer review of protocols will reduce publication bias.
4. Incorporate qualitative research outcomes as part of the evidence base. Widen the methodological research base by including narrative-based studies. If appropriate, employ RCTs for evidence of efficacy only. Conduct observational studies to determine suitability and baseline risk to the individual patient context. Implement qualitative research methods to discern the patients’ experiences of the intervention, the frequency of adverse events, and the effect of patients’ preferences on outcomes.
5. Consider the whole person. Address psychological, social, and behavioral problems in addition to the purely biomedical domain when making a medical decision affecting your patient.


References


Biller-Andorno, N., Lie, R.K., & Meulen, T.R. (2002). Evidence-based medicine as an instrument for rational health policy. Health Care Analysis, 10, 261-275.
Charlton, B. G. (1997). Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice, 3, 87-98.
Cochrane, A. L. (1972). Effectiveness and efficiency: Random reflections on health services. London, UK: The Nuffield Provincial Hospitals Trust.
Culpepper, L., & Gilbert, T. T. (1999). Evidence and ethics. The Lancet, 353, 829-831.
Evidence-Based Medicine Working Group. (1992). Evidence-based medicine. A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268, 2420-2425.
Flemming, K., & Fenton, M. (2002). Making sense of research evidence to inform decision making. In C. Thompson & D. Dowding (Eds.), Clinical decision making and judgement in nursing (pp. 109-129). Toronto, ON: Harcourt Publishers Limited.
Gupta, M. (2003). A critical appraisal of evidence-based medicine: Some ethical considerations. Journal of Evaluation in Clinical Practice, 9(2), 111-121.
Hampton, J. R. (1997). Evidence-based medicine, practice variations and clinical freedom. Journal of Evaluation in Clinical Practice, 3, 123-131.
Jenicek, M., & Stachenko, S. (2003). Evidence-based public health, community medicine, preventive care. Medical Science Monitor, 9(2), SR1-7.
Kane, R. L. (1997). Approaching the outcomes question. In R. L. Kane (Ed.), Understanding health care outcomes research (pp. 1-15). Gaithersburg, MD: Aspen Publishers, Inc.
Kunz, R., & Oxman, A. D. (1998). The unpredictability paradox: Review of empirical comparisons of randomized and non-randomized clinical trials. British Medical Journal, 317, 1185-1190.
McKee, M., Britton, A., Black, N., McPherson, K., Sanderson, C., & Bain, C. (1999). Methods in health services research: Interpreting the evidence: Choosing between randomized and non-randomized studies. British Medical Journal, 319, 312-315.
Miles, A., Grey, J. E., Polychronis, A., Price, N., & Melchiorri, C. (2003). Current thinking in the evidence-based health care debate. Journal of Evaluation in Clinical Practice, 9, 95-109.
Miles, A., Grey, J.E., Polychronis, A., Price, N., & Melchiorri, C. (2004). Developments in evidence-based health care debate - 2004. Journal of Evaluation in Clinical Practice, 10(2), 129-142.
Norman, G. (1999). Examining the assumptions of evidence-based medicine. Journal of Evaluation in Clinical Practice, 5, 139-147.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Edinburgh, UK: Harcourt Publishers Limited.
Saver, J. L., & Kalafut, M. (2001). Combination therapies and the theoretical limits of evidence-based medicine. Neuroepidemiology, 20(2), 57-64.
Schelling, F. A. (2004). Clinical trials: Deliberations on their essence and value. Journal of Evaluation in Clinical Practice, 10(2), 291-296.
Upshur, R. (2001). In J. M. Morse, J. M. Swanson, & A. J. Kuzel (Eds.), The nature of qualitative evidence (pp. 5-26). Thousand Oaks, CA: Sage Publications, Inc.
Upshur, R. (2002). If not evidence, then what? Or does medicine really need a base? Journal of Evaluation in Clinical Practice, 8(2), 113-119.
Upshur, R. (2003). Are all evidence-based practices alike? Problems in the ranking of evidence. Canadian Medical Association Journal, 169(7), 672-673.

 

Copyright Priory Lodge Education 2008

First published January 2008

