Evidence-based practice (EBP) is a framework for making healthcare decisions by combining three things: the best available research, the clinician’s own expertise, and the patient’s individual values and preferences. It replaced an older model where treatment decisions relied heavily on a single provider’s training and personal experience. Today, EBP is the expected standard across medicine, nursing, physical therapy, and virtually every other health profession.
The Three Pillars of EBP
David Sackett, the physician widely credited with formalizing this approach, defined the three core components: the best research evidence available, the clinician’s skills and judgment, and the patient’s expectations and wishes. All three carry weight. Research alone doesn’t dictate a decision, and neither does a provider’s gut feeling. The model works when all three inputs overlap.
In practice, this means a doctor treating knee pain doesn’t just prescribe whatever a study says works best on average. They also factor in their experience with similar patients and what the person sitting in front of them actually wants, whether that’s avoiding surgery, staying active for a specific sport, or minimizing medication. A treatment that scores well in trials but conflicts with a patient’s circumstances or preferences isn’t truly evidence-based under this framework.
How Research Gets Ranked
Not all evidence is created equal. EBP uses a hierarchy that ranks study designs by their risk of producing biased results, from lowest to highest. At the top sit systematic reviews of randomized controlled trials (RCTs), which pool data from multiple well-designed experiments. Individual RCTs come next. These are considered the gold standard for testing treatments because they randomly assign people to different groups, which helps cancel out confounding factors that could skew results.
Below RCTs, the hierarchy moves through cohort studies (which follow groups over time but don’t randomly assign treatments), case-control studies (which look backward from an outcome to identify possible causes), and case series (detailed reports on a handful of patients). At the bottom is expert opinion without supporting data. That doesn’t mean expert opinion is worthless. It means it’s the most vulnerable to personal bias and should be treated as a starting point, not an endpoint.
When professional organizations develop clinical guidelines, they typically use a structured grading system to rate how confident they are in the evidence behind each recommendation. The most widely used system, known as GRADE, evaluates five factors that can weaken confidence in a finding: risk of bias in the study design, inconsistency across studies, whether the evidence directly applies to the question at hand, how precise the results are, and whether studies with negative results may have gone unpublished. Strong evidence on all five fronts means a recommendation can be made with high certainty. Weakness in several areas means the recommendation comes with caveats.
The Five Steps in Practice
Clinicians who follow the EBP process use a five-step method, sometimes called the “5 As”:
- Ask: Frame the clinical problem as a specific, answerable question. Instead of “what helps back pain,” the question might be “does spinal manipulation reduce chronic lower back pain more than exercise therapy in adults over 50?”
- Acquire: Search for the best available research that addresses the question, prioritizing higher levels of evidence.
- Appraise: Critically evaluate what the research actually shows, including its limitations.
- Apply: Integrate the findings with the clinician’s experience and the patient’s preferences, abilities, and resources.
- Assess: After implementing the decision, evaluate how well it worked for the patient and reflect on the process itself to improve future decisions.
That last step is often overlooked, but it’s what turns EBP from a one-time exercise into a feedback loop. A clinician who tracks their outcomes over time builds a personal evidence base that sharpens the second pillar: clinical expertise.
Where EBP Came From
The intellectual roots of EBP trace back to the early 1970s. In 1972, British epidemiologist Archie Cochrane published “Effectiveness and Efficiency: Random Reflections on Health Services,” a book that argued healthcare systems were spending enormous resources on treatments that had never been rigorously tested. He pushed the medical community to prioritize randomized controlled trials as the basis for clinical decisions. The Cochrane Collaboration, a global network that produces systematic reviews of health research, was later named in his honor.
The term “evidence-based medicine” emerged in the early 1990s from a group at McMaster University in Canada that included Sackett; the coinage itself is usually credited to his colleague Gordon Guyatt. It originally applied to physician practice. Over time, other health professions adopted the concept under the broader label “evidence-based practice,” reflecting a more multidisciplinary scope that includes nursing, physical and occupational therapy, physician assistants, and others. Before this shift, much of patient care decision-making rested on individual physician assessment and choice, with wide variation in how similar conditions were treated.
Why It’s Harder Than It Sounds
The concept is straightforward. The execution is not. Research consistently identifies several barriers that prevent clinicians from fully adopting EBP in their daily work.
The most serious obstacle appears to be insufficient knowledge. Many clinicians never received formal training in how to find, read, or critically evaluate research literature. Even those who did may find their skills outdated as methodology evolves. Protocols at their workplace may not reflect current evidence, and the gap between what’s taught in school and what happens in practice can be wide.
Time is the other persistent problem. It shows up in two ways: clinicians lack time during a busy shift to search for and read research, and they lack time afterward to discuss findings with colleagues or translate them into changed practice. High patient loads, staffing shortages, and workplace stress all compress the space available for anything beyond immediate patient needs.
Other documented barriers include lack of access to research databases (particularly in smaller or rural facilities), workplace cultures where questioning established routines is discouraged, and decision-making structures that don’t give frontline clinicians authority to change practice even when the evidence supports it. These aren’t character flaws. They’re systemic issues that require organizational support to address, from providing database access and protected learning time to updating institutional protocols on a regular cycle.
The Role of Patient Preferences
One of the most misunderstood aspects of EBP is the patient’s role. It’s not a system where a clinician looks up the “correct” answer in a database and delivers it. The third pillar, patient values and preferences, is meant to carry real weight through a process called shared decision-making.
In shared decision-making, the clinician presents what the evidence shows, explains the options along with their trade-offs, and then works with the patient to arrive at a plan that fits their life. A treatment with a slightly better statistical outcome might not be the right choice for someone whose priorities, resources, or tolerance for side effects point in a different direction.
Healthcare education programs are increasingly recognizing that EBP and shared decision-making need to be taught together rather than as separate skills. When they’re taught in isolation, students can develop a kind of tunnel vision where “following the evidence” means applying research findings without adequately considering the person receiving care. Integrating both concepts from the start helps future clinicians understand that a truly evidence-based decision is one the patient helped shape.