Evidence-based medicine (EBM) is the use of the scientific method to organize and apply current data to improve healthcare decisions. The best available science is combined with the healthcare professional's clinical experience and the patient's values to arrive at the best medical decision for the patient. There are five main steps for applying EBM to clinical practice: formulating the clinical question, searching for relevant scientific evidence, appraising that evidence, applying it to the patient using clinical judgment and the patient's values, and evaluating the outcome.
EBM starts with a clinical question: an issue the healthcare provider addresses with the patient. After the clinical question is formulated, relevant scientific evidence relating to it is sought. Scientific evidence includes study outcomes and expert opinions, but not all data carry the same strength. Recommendations from an expert are not as robust as the results of a well-conducted study, which in turn are not as strong as the combined results of a set of well-conducted studies. Thus, in evidence-based medicine, levels of evidence are graded according to their relative strength, and stronger evidence is given more weight when making clinical decisions.
The evidence is commonly stratified into six different levels:
Level IA: evidence obtained from a meta-analysis of multiple well-conducted and well-designed randomized trials. Randomized trials provide some of the strongest clinical evidence, and when several are repeated and their results combined in a meta-analysis, the overall results are considered even stronger.
Level IB: evidence obtained from a single well-conducted and well-designed randomized controlled trial. When well-designed and well-conducted, the randomized controlled trial is the gold standard of clinical medicine.
Level IIA: evidence from at least one well-designed and well-executed non-randomized controlled study. Without randomization, more bias may be introduced into the study.
Level IIB: evidence from at least one well-designed case-control or cohort study. Not all clinical questions can be effectively or ethically studied with a randomized controlled trial.
Level III: evidence from at least one non-experimental study. Level III evidence typically includes case series as well as poorly designed case-control or cohort studies.
Level IV: expert opinions from respected authorities on the subject based on their clinical experience.
All clinical studies and scientific evidence can be classified into one of the above categories. The clinician must then use professional clinical experience to extrapolate the scientific evidence to the specific patient. Most clinical studies have specific inclusion and exclusion criteria and a specific study population. More often than not, the patient being treated will differ in one or more substantial ways from the study population. The medical provider must use clinical judgment to determine whether the differences between the patient and the study population matter and how they affect the applicability of the study's results to that patient.
For example, a specific patient may be a 70-year-old female with a history of hyperlipidemia and a new diagnosis of hypertension who is considering treatment options for hypertension. The clinician may find a good randomized controlled trial of medications to control hypertension, but the study's inclusion criteria were limited to 18- to 65-year-olds. Should the clinician ignore the results because the patient does not meet the study demographics? Should the clinician ignore the age difference between the patient and the study population? This is where clinical judgment bridges the gap between the relevant scientific evidence and the specific patient being treated.
Finally, clinicians using evidence-based medicine must put all of this information in the context of the patient's values and preferences, which may conflict with some of the possible options. Even a treatment supported by strong evidence may be incompatible with the patient's preferences, in which case the clinician may not recommend it; the treatment may also simply not apply to the specific patient.
As an example, a patient may have a particular form of cancer. Level IA evidence may suggest that chemotherapy doubles life expectancy from 8 to 16 months, but the chemotherapy has significant side effects. The patient may find those side effects unacceptable and elect not to pursue chemotherapy based on their own preferences and values.
Once the clinical question is formulated, the relevant scientific information evaluated, and clinical judgment used to apply that evidence to the specific patient and their values, the outcome must be evaluated. The final step is a re-evaluation of the patient and the clinical outcome after the intervention. Did the intervention help? Were the outcomes as expected? What new information was obtained? How can this information be applied to future situations and patients?
Evidence-based medicine starts with the clinical question and returns to it at the end to assess how well the process worked. Without continuous re-evaluation, the medical provider cannot be sure whether their impact is positive or negative. Evidence-based medicine is a perpetual cycle of improvement rather than a one-time linear process.
The function of evidence-based medicine is to bring together three different entities: the patient's preferences, the healthcare professional's clinical judgment, and the best available, relevant scientific information, in order to provide improved medical care.
Publication Bias
Evidence-based medicine is based on published results, giving more weight to class I and II evidence. Many studies have shown that positive results are more likely to be published than negative results, leading to a publication bias toward positive-result studies that can skew the available evidence. Additionally, studies funded by companies are more likely to be published in order to promote the use of the studied medication or device, which can also skew the available evidence.
Randomized Controlled Trial Bias
Evidence-based medicine places the highest weight on randomized controlled trials. Although such trials may provide strong evidence, a randomized controlled trial is not always possible or feasible. If a disease process has a very low prevalence, it may be prohibitively difficult or impossible to recruit sufficient participants for a study.
For example, progeria is a rare disease with an incidence of around one in four to eight million live births and an average lifespan of 14 years. With a global population of around 7.6 billion and an annual birth rate of 18.5 births per 1,000 people, there would be only a few hundred individuals with progeria alive worldwide. With so few patients, it is impractical to conduct a randomized controlled trial that would produce meaningful results.
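The estimate above can be reproduced with a quick back-of-the-envelope calculation using only the figures quoted in the text (a rough order-of-magnitude sketch, not epidemiological data):

```python
# Rough estimate of the worldwide progeria population,
# using only the figures quoted in the text above.

GLOBAL_POPULATION = 7.6e9          # people
BIRTH_RATE = 18.5 / 1000           # births per person per year
AVG_LIFESPAN_YEARS = 14            # average lifespan with progeria

births_per_year = GLOBAL_POPULATION * BIRTH_RATE  # ~140.6 million births/year

# Incidence of roughly 1 in 8 million to 1 in 4 million live births
low_estimate = births_per_year / 8e6 * AVG_LIFESPAN_YEARS
high_estimate = births_per_year / 4e6 * AVG_LIFESPAN_YEARS

print(f"Estimated living patients: {low_estimate:.0f} to {high_estimate:.0f}")
# → roughly 250 to 500 individuals worldwide
```

Even at the upper end of this range, the eligible population is far too small and too geographically dispersed to power a conventional randomized controlled trial.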
As a second example, consider the ethical implications of randomized controlled trials. Smith and Pell (2003) argue that we take for granted that parachutes prevent injury and save lives after a person jumps out of an airplane, yet this common-sense observation has never been studied and proven with a randomized controlled trial. The article argues that people should accept certain common-sense conclusions and that randomized controlled trials are not always necessary. After all, could researchers easily find evidence-based medicine purists willing to sign up for a randomized, cross-over, placebo-controlled trial testing whether parachutes decrease injury or death after jumping out of an airplane?
Finally, there are many more clinical questions than there are randomized controlled trials. The number of questions suitable for a well-designed and well-conducted randomized controlled trial far exceeds the resources available to conduct the trials. Resources are limited, and spending them on every possible clinical question is neither practical nor advisable; they are better devoted to higher-impact clinical questions.
A well-designed and well-conducted randomized controlled trial takes time to design, carry out, and report. The medical landscape can change significantly between when a trial is designed and initiated and when its results are published. More than once, a study set out to examine a chemotherapy regimen for a specific cancer, only for that regimen to be antiquated and supplanted by the time the trial results were published.
Although patient values are explicit in the model of evidence-based medicine, many healthcare practitioners omit or minimize them. It is not uncommon for a healthcare provider to recognize the medical issue, review, evaluate, and assimilate the relevant scientific information, and implement an intervention without much consideration of the patient's values. It is easy for providers to get swept up in implementing the "best evidence" or "best practices" before understanding how these fit with, or contradict, the patient's values.
Evidence-based medicine provides a framework for applying the relevant, scientific evidence to the patient's condition based on the patient's values using the clinician's clinical judgment to tailor the treatment for the patient. The goal of evidence-based medicine is to improve medical outcomes based on the highest quality evidence available. After the intervention is implemented, the outcome should be re-evaluated in the context of the clinical question to see what effect occurred.
Evidence-based medicine can also be applied to a population to generate recommendations for that population based on current medical evidence. Population recommendations are typically graded according to the underlying science behind the guideline. Various grading schemes exist, ranking recommendations from strong evidence supporting the guideline to poor or no evidence, with varying levels of support in between.