Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences. That definition, adopted by the American Psychological Association, rests on three equal pillars, not just one. It’s a common misconception that “evidence-based” means rigidly following research findings. In reality, EBPP is a decision-making process that weighs scientific evidence alongside the therapist’s professional judgment and the individual needs of each client.
The Three Pillars of EBPP
Each pillar carries equal weight. Removing any one of them distorts the process.
Best available research evidence refers to findings from well-designed studies that test whether specific approaches actually work. Not all research carries the same weight. At the top of the evidence hierarchy sit systematic reviews of randomized controlled trials, which pool data from multiple high-quality experiments. Individual randomized controlled trials come next, followed by cohort studies, case-control studies, and case series. Expert opinion alone sits at the bottom. A psychologist practicing EBPP looks for the strongest evidence available for a given problem, while recognizing that for some conditions or populations, the highest-quality studies simply don’t exist yet.
Clinical expertise is the skill, judgment, and knowledge a psychologist develops through training and experience. This includes the ability to recognize patterns across clients, build a strong therapeutic relationship, adapt techniques to fit a specific situation, and know when a textbook approach isn’t working. Research can identify which treatments are effective on average, but clinical expertise helps a psychologist decide how to apply those findings to the person sitting across from them.
Patient characteristics, culture, and preferences anchor the process in the client’s actual life. Two people with the same diagnosis can have very different values, cultural backgrounds, past experiences with treatment, and personal goals. A treatment with strong research support may be a poor fit if the client finds it unacceptable, can’t access it, or has cultural or religious considerations that conflict with the approach. The guiding question, as one framework puts it, shifts from “What is the matter with you?” to “What matters to you?”
How EBPP Works in Practice
Psychologists don’t just pick a treatment off a list. EBPP follows a structured cycle with five steps: ask, acquire, appraise, apply, and assess.
The cycle starts with asking a focused clinical question. A psychologist working with a teenager experiencing panic attacks might ask: “For adolescents with panic disorder, which therapeutic approaches produce the best outcomes?” Next comes acquiring relevant research: searching for systematic reviews, clinical trials, and practice guidelines. The third step is appraising what they find, weighing the quality of the studies, the populations studied, and whether the findings are likely to apply to their specific client.
The fourth step is applying the evidence. This is where all three pillars converge. The psychologist uses the research findings alongside their own clinical judgment and what they know about the client’s preferences and circumstances to select and adapt an approach. The final step is assessment: tracking whether the treatment is actually working for this particular person and adjusting course if it isn’t.
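The five-step cycle above can be sketched as a simple control loop. This is a hypothetical illustration only: the function name `ebpp_cycle`, the placeholder step functions, and the refinement logic are all invented for this sketch, not a real clinical decision-support system.

```python
# Hypothetical sketch of the five-step EBPP cycle (ask, acquire,
# appraise, apply, assess) as a control loop. The step functions
# are invented placeholders for clinical activities.

def ebpp_cycle(question, acquire, appraise, apply_plan, assess, max_rounds=3):
    """Run ask -> acquire -> appraise -> apply -> assess, refining the
    question and repeating if assessment shows treatment isn't working."""
    for _ in range(max_rounds):
        evidence = acquire(question)       # acquire: find relevant research
        usable = appraise(evidence)        # appraise: judge quality and fit
        plan = apply_plan(usable)          # apply: integrate the three pillars
        if assess(plan):                   # assess: working for this client?
            return plan
        question = f"refined: {question}"  # adjust course and ask again
    return None

# Demo with trivial placeholder steps:
plan = ebpp_cycle(
    "For adolescents with panic disorder, what works best?",
    acquire=lambda q: ["systematic review", "RCT"],
    appraise=lambda evidence: evidence,    # accept everything (demo only)
    apply_plan=lambda usable: "CBT adapted to client preferences",
    assess=lambda p: True,                 # pretend outcomes improve
)
print(plan)  # -> CBT adapted to client preferences
```

The point of the loop structure is the feedback arrow: assessment isn't a final grade but a signal that routes the clinician back to an earlier step when the current plan isn't working.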
Tracking Progress With Measurement
One practical tool within EBPP is measurement-based care, which involves routinely collecting brief, standardized measures from clients to monitor how treatment is going. These measures are typically short (about 5 to 8 minutes to complete), designed to detect change over time, and produce results the therapist can act on immediately. The scores get plotted session by session, creating a visual picture of progress.
This approach does more than track symptoms. Measures can also capture things like the strength of the therapeutic relationship, motivation for treatment, and whether personal goals are being met. If a client’s scores plateau or worsen, the therapist gets an early warning signal. Clinical cutoffs help identify when a client has reached a point that would be considered “well.” For example, in youth mental health settings, therapists sometimes use brief questionnaires that separately measure symptoms, therapy alliance, and treatment motivation, giving a multidimensional view of how things are going rather than relying on a single impression.
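The session-by-session tracking described above can be sketched in a few lines. The cutoff value, window size, and status labels below are invented for illustration; the sketch assumes a symptom measure where higher scores mean more distress, and real instruments define their own cutoffs and change criteria.

```python
# Hypothetical sketch of measurement-based care score review.
# Assumes a symptom measure where HIGHER scores mean MORE distress.
# The cutoff of 10 and the 3-session window are illustrative only,
# not taken from any specific instrument.

def review_progress(scores, clinical_cutoff=10, window=3):
    """Return a simple status flag from session-by-session scores."""
    if len(scores) < window:
        return "insufficient data"
    recent = scores[-window:]
    if all(s < clinical_cutoff for s in recent):
        return "in the well range"             # consistently below cutoff
    if all(a <= b for a, b in zip(recent, recent[1:])):
        return "early warning: not improving"  # plateau or worsening
    return "monitoring"

# Example: a client whose scores drop below the cutoff over time.
sessions = [18, 16, 13, 9, 8, 7]
print(review_progress(sessions))  # -> in the well range
```

In practice a therapist would look at the plotted curve rather than a single flag, but the same logic applies: scores below the cutoff suggest recovery, while a flat or rising recent trend is the early warning signal mentioned above.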
EBPP vs. Evidence-Based Treatments
These two terms sound similar but mean different things, and confusing them is one of the most common misunderstandings in the field. An evidence-based treatment (sometimes called an empirically supported treatment) is a specific therapeutic approach that has been tested and shown to work in research studies, like cognitive behavioral therapy for depression or exposure therapy for phobias. It’s a product.
Evidence-based practice, by contrast, is a process. It’s the broader framework for making clinical decisions. A psychologist using EBPP will often choose an evidence-based treatment, but they’ll also factor in their clinical expertise and the client’s individual needs when deciding how to implement it, whether to modify it, or whether a different approach is more appropriate. As one commentary in the field put it, available research should guide the clinician on appropriate management while leaving room for interpretation, so practitioners can adapt how they work without disregarding the evidence.
Why It’s Not Always Simple
The logic of EBPP is straightforward. Implementation is harder. Several well-documented barriers get in the way. Funding is a persistent issue: training therapists in new approaches, purchasing assessment tools, and dedicating time to review research all cost money that many settings don’t have. Access to resources matters too. A solo practitioner in a rural area may not have the same access to research databases, consultation, or continuing education as someone in an academic medical center.
Clinician attitudes play a role as well. Some therapists view EBPP as overly rigid or worry that manualized treatments don’t leave enough room for the relational, human side of therapy. This concern often stems from conflating EBPP with strictly following a treatment manual, which, as outlined above, is only one piece of the picture. There’s also the flexibility problem: some evidence-based interventions were developed and tested under controlled conditions that don’t reflect the complexity of real clinical settings, where clients often have multiple diagnoses and unpredictable life circumstances.
Practical recommendations for closing this gap include ensuring funding continues beyond the end of research trials, training staff specifically in how to integrate research into everyday care, and co-producing treatment approaches with the people who will actually receive them. The goal isn’t perfection. It’s making clinical decisions as transparently and thoughtfully as possible, using every source of information available.